TL;DR – those debating appropriate regulatory benchmarks for API performance and availability under PSD2 are missing a strategic opportunity. Any bank that simply focusses on the minimum mandatory product will rule itself out of commercial agreements with those relying parties who have the wherewithal to consume commercial APIs at scale.
As March approaches, those financial institutions in the UK and Ireland impacted by PSD2 are focussed on readiness for full implementation. The Open Banking Implementation Entity (OBIE) has been consulting on Operational Guidelines, which give colour to the regulatory requirements found in the Directive and the Regulatory Technical Standards which support it. The areas covered are not unique to the UK, and whilst they are part of an OBIE-specific attestation process, the guidelines could prove useful to any ASPSP impacted by PSD2.
The EBA, at guidelines 2.2-4, is clear on the obligations for ASPSPs. These are supplemented by the RTS: "[ASPSPs must] ensure that the dedicated interface offers at all times the same level of availability and performance, including support, as the interfaces made available to the payment service user for directly accessing its payment account online…" and "…define transparent key performance indicators and service level targets, at least as stringent as those set for the interface used by their payment service users both in terms of availability and of data provided in accordance with Article 36" (RTS Arts. 32(1) and (2)).
This places the market in a quandary – it is extremely difficult to compare, even at a theoretical level, the performance of two interfaces where one (PSU) is designed for human interaction and the other (API) for machine. Some suggested during the EBA’s consultation period that a more appropriate comparison might be between the APIs which support the PSU interface and those delivered in response to PSD2. Those in the game of reverse engineering confirm that there is broad comparability between the functions these support – unfortunately this proved too much technical detail for the EBA.
To fill the gap, OB surveyed developers, reviewed the existing APIs already delivered by financial institutions, and settled on an average of 99% availability (c.22hrs downtime per quarter) and a response time of 1,000ms per 1MB of payload (this is a short summary; more detail is available in the guidelines themselves). A quick review of the API Performance page OB publishes will show that, with average availability of 96.34% across the brands in November, and only Bank of Scotland, Lloyds and the HSBC brands achieving >99% availability, there is a long way to go before this target is met. The task is made no easier by a significant amount of change to platforms as their functional scope expands over the next 6-8 months. This will also come in the face of increasing demand volumes, as those organisations which currently rely on screen scraping for access to data begin to transfer their integrations onto APIs. In short, ASPSPs face a perfect storm in achieving these goals.
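The arithmetic behind those targets is worth making explicit. A minimal sketch follows; note that treating the response-time budget as scaling linearly with payload size is an assumption for illustration, not something the guidelines spell out:

```python
# Rough SLA arithmetic for the Open Banking targets described above.
QUARTER_HOURS = 91.25 * 24  # ~2190 hours in an average quarter

def allowed_downtime_hours(availability_pct: float) -> float:
    """Permitted downtime per quarter at a given availability target."""
    return QUARTER_HOURS * (1 - availability_pct / 100)

def response_budget_ms(payload_mb: float, ms_per_mb: float = 1000.0) -> float:
    """Response-time budget, assuming the 1,000ms-per-1MB target scales linearly."""
    return payload_mb * ms_per_mb

print(round(allowed_downtime_hours(99.0), 1))   # 21.9 hours: the "c.22hrs" above
print(round(allowed_downtime_hours(96.34), 1))  # 80.2 hours at November's average
```

The gap between the 99% target and November's 96.34% average is therefore not a rounding error: it is roughly 58 additional hours of downtime per quarter.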
At para 2.3.1 of its guidelines, the OBIE expands on the EBA's reporting guidelines and provides a useful template for the purpose, but this introduces a conundrum. All of the data published to date has been the banks reporting on themselves: the technical solutions which generate the data sit inside their own domains. Quite apart from the obvious issue of self-reporting, there have already been clear instances where services were not functioning correctly and the bank in question simply did not know until informed by a TPP. One of the larger banks in the UK recently misconfigured a load balancer such that 50% of the traffic it received was misdirected and went unanswered, without the bank's knowledge. A clear case of downtime that almost certainly went unreported: if an API call goes unacknowledged in the woods, does anyone care?
Banks have a challenge here, in that risk and compliance departments typically baulk at any services they own being placed in the cloud, or indeed anywhere outside their physical infrastructure. Yet monitoring from outside their own estate is exactly what is required if support teams are to have a true understanding of how their platforms are functioning, and if the data generated for regulatory reporting is to be reliable.
[During week commencing 21st Jan, the Market Data Initiative will announce a free/open service to solve some of these issues. This platform monitors the performance and availability of API platforms using donated consents, with the aim of establishing a clear, independent view of how the market is performing, without prejudicial comment or reference to benchmarks. Watch this space for more on that.]
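The principle of outside-in monitoring can be sketched in a few lines of Python. The status URL below is hypothetical and the unauthenticated GET is illustrative only; real PSD2 monitoring, as with the platform described above, requires signed requests made under donated consents:

```python
import time
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 5.0) -> dict:
    """Issue one request from outside the bank's domain and record the outcome."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            up = 200 <= resp.status < 300
    except (urllib.error.URLError, OSError, ValueError):
        up = False  # timeouts and misdirected traffic count as downtime
    return {"up": up, "latency_ms": (time.monotonic() - start) * 1000}

def availability_pct(samples: list) -> float:
    """Observed availability over a set of probe samples, as a percentage."""
    return 100 * sum(s["up"] for s in samples) / len(samples)

# e.g. samples = [probe("https://api.examplebank.test/open-banking/status")
#                 for _ in range(96)]  # hypothetical endpoint, polled regularly
```

The point of the sketch is that an unanswered request is recorded as downtime by the observer, regardless of what the bank's internal dashboards say, which is precisely what the misconfigured load balancer example demonstrates the need for.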
For any TPP seeking investment, where its business model necessitates consuming open APIs at scale, one of the key questions it is likely to face is how reliable these services are, and what remedies are available in the event of non-performance. In the regulatory space, some of this information is available (see above), but it is hardly transparent or independently produced, and even with those caveats it does not currently make for happy reading. For remedy, TPPs are reliant on regulators and a quarterly reporting cycle for the discovery of issues. Even if the FCA decided to take action, the most significant step it could take would be to instruct an ASPSP to implement a fall-back interface, and given that the ASPSP would have a period of weeks to build this, any relying party's business would likely have suffered significant detriment before it could even start testing such a facility. The consequence of this framework is that, for the open APIs, performance, availability and the transparency of information will have to improve dramatically before any commercial services rely on them.
Source : https://www.linkedin.com/pulse/api-metrics-status-regulatory-requirement-strategic-john?trk=portfolio_article-card_title
At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.
With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.
He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
Gerd then summarized the session as follows:
The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future.
My take: Gerd hits on a key point. Leaders must think differently; there is very little in a leader's collective experience that can guide them through the type of change ahead.
When looking at AI, consider trying IA first (intelligent assistance / augmentation).
My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology should be a supplement, not a replacement.
Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.
My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading, if we are to effectively create new sources of value.
We won’t just need better algorithms; we will also need stronger “humarithms”, i.e. values, ethics, standards, principles and social contracts.
My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes, and kudos to him for being that voice.
“The best way to predict the future is to create it” (Alan Kay).
My take: Our mental model puts the future years away, and that is just not the case anymore. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens.
Source : https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf
Naval Ravikant recently shared this thought:
“The dirty secrets of blockchains: they don’t scale (yet), aren’t really decentralized, distribute wealth poorly, lack killer apps, and run on a controlled Internet.”
In this post, I want to dive into his fourth observation, that blockchains “lack killer apps”, and understand just how far away we are from real applications (not tokens, not stores of value, etc.) being built on top of blockchains.
Thanks to Dappradar, I was able to analyze the top decentralized applications (DApps) built on top of Ethereum, the largest decentralized application platform. My research is focused on live public DApps which are deployed and usable today; it does not include any future or potential applications not yet deployed.
Looking at a broad overview of the 312 DApps created, the main categories are:
I. Decentralized Exchanges
II. Games (Largely collectible type games, excluding casino/games of chance)
III. Casino Applications
IV. Other (we’ll revisit this category later)
On closer examination, it becomes clear that only a few individual DApps make up the majority of transactions within their respective categories:
Diving into the “Other” category, the largest individual DApps in this category are primarily pyramid schemes: PoWH 3D, PoWM, PoWL, LockedIn, etc. (*Please exercise caution, all of these projects are actual pyramid schemes.)
These top DApps are all still very small relative to traditional consumer web and mobile applications.
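That concentration pattern is easy to see with a toy calculation. The transaction counts below are invented purely to mirror the shape of the data; they are not Dappradar figures:

```python
from collections import defaultdict

# Hypothetical daily transaction counts, invented to illustrate how a single
# DApp can dominate its category. Not real Dappradar data.
txs = [
    ("exchange", "IDEX", 120_000),
    ("exchange", "ForkDelta", 30_000),
    ("exchange", "OtherDEX", 5_000),
    ("game", "CryptoKitties", 40_000),
    ("game", "OtherGame", 4_000),
]

by_category = defaultdict(list)
for category, dapp, count in txs:
    by_category[category].append((dapp, count))

for category, apps in sorted(by_category.items()):
    total = sum(count for _, count in apps)
    leader, leader_count = max(apps, key=lambda app: app[1])
    print(f"{category}: {leader} holds {100 * leader_count / total:.0f}% "
          f"of {total} transactions")
```

With numbers shaped like these, the category leader holds three-quarters or more of all transactions, which is the pattern the real data shows.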
Further trends emerge on closer inspection of the transactions of DApps tracked here:
Where we are and what it means for protocols and the ecosystem:
After looking through the data, my personal takeaways are:
What kind of DApps do you think we as a community should be building? Would love to hear your takeaways and thoughts about the state of DApps, feel free to comment below or tweet @mccannatron.
Also, if there are any DApps or UI/UX tools I should be paying attention to, let me know — I would love to check them out.
With all of the noise surrounding bitcoin and its underlying technology, blockchain, it’s often difficult to separate real blockchain articles from those just looking for clicks.
Here’s a list, in no particular order, of articles and whitepapers written by the people actively involved in developing this new technology. The resources below range from the 1980s all the way to today.