Digital transformation is sweeping through businesses, giving rise to new business models, imposing new and different constraints, and demanding more focused organizational attention and resources. It is also upending the C-suite, bringing in new corporate titles and functions such as the Chief Security Officer, Chief Digital Officer and Chief Data Officer. These new roles seemingly pose an existential threat to existing roles – for example, the CIO.
As companies invent new business models through digital transformation and bring new organizations into being, they do more than cover new ground. They also carve new roles out of existing organizations (the CIO organization, for instance). Other digital threats potentially affect the CIO role as well.
At Everest Group, we investigated the question of “Will the role of the CIO go away?” We came back strongly with “no.” In fact, here’s what is happening to the role of the CIO: the charter is changing, and the role is changing with it – but strengthening.
The focus of the CIO charter is changing to match the new corporate charter for competitive repositioning. The prior focus was on the plumbing (infrastructure, ensuring applications are maintained and in compliance, etc.). Although those functions remain, the new charter focuses on building out and operating the new digital platforms and operating models that are reshaping the competitive landscape.
The reason the CIO role is changing with the new corporate charter is that, in most organizations, the CIO is the only function that has the capabilities necessary for digital transformation.
Digital transformation inevitably forces new operating models that have no respect for traditional functional organizations. Digital platforms and digital operating models collapse marketing and operations, for instance, spanning those functions and groups to achieve a superb end-to-end customer experience.
The new models force much tighter integration and often a realignment of organizations. The CIO organization has the breadth of vision and depth of resources to drive the transformation and support the new operating model that inevitably emerges from it.
Meeting the goals of the new charter will not come without CIOs changing their organizations and, in many cases, changing personally. Seizing the opportunities in the new charter, as well as shaping it, requires substantial change in (a) modernizing IT, (b) the orientation and mind-set of the IT organization, and (c) the organizational structure.
To support digital transformation agendas, CIOs face a set of journeys in which they need to dramatically modernize their traditional functions. They first must think about their relationship with the business. Meeting the needs of the business in a much more intimate, proactive and deeper way requires more investment and organizations with deeper industry domain knowledge and relationships. They need to move talent from remote centers back onshore, close to the business, so that they can better understand its needs and act on them quickly.
Second, the IT operating model needs to change from its historical structures so that it can deliver a seamless operating environment. The waterfall structures that still permeate IT need to give way to a DevOps model with persistent teams that sit close to the business. IT also needs to accelerate the company’s journey to automation and cloud.
One thing companies quickly find about operating models is that they can’t get to a well-functioning DevOps team without migrating to a cloud-based infrastructure. And they can’t get to a cloud-based infrastructure without transforming their network and network operations model.
To meet the new charter, the CIO organization also needs to change in the following aspects:
The modernizations I mentioned above then call into question the historical organizational structure of IT with functions such as network, infrastructure, security, apps development, apps maintenance, etc. In the new digital charter, these functions inevitably start to collapse into pods or functions aligned by business services.
As I’ve described above, substantial technology and organizational change is required within the CIO’s organization to live up to the new mandate. I can’t overemphasize how substantial that change is, nor how great the need. In upcoming blog posts, I’ll further discuss the CIO’s role in reorienting the charter from plumbing to transformation and supporting the new digital operating models.
Source: https://www.forbes.com/sites/peterbendorsamuel/2019/01/30/how-the-cio-role-must-change-due-to-digital-transformation/#24f9952f68be
TL;DR – those discussing what should be appropriate regulatory benchmarks for API performance and availability under PSD2 are missing a strategic opportunity. Any bank that simply focusses on minimum, mandatory product will rule itself out of commercial agreements with those relying parties who have the wherewithal to consume commercial APIs at scale.
As March approaches, those financial institutions in the UK and Ireland impacted by PSD2 are focussed on readiness for full implementation. The Open Banking Implementation Entity (OBIE) has been consulting on Operational Guidelines which give colour to the regulatory requirements found in the Directive and the Regulatory Technical Standards which support it. The areas covered are not unique to the UK, and whilst they are part of an OBIE-specific attestation process, the guidelines could prove useful to any ASPSP impacted by PSD2.
The EBA, at guidelines 2.2–2.4, is clear on the obligations for ASPSPs. These are supplemented by the RTS – “[ASPSPs must] ensure that the dedicated interface offers at all times the same level of availability and performance, including support, as the interfaces made available to the payment service user for directly accessing its payment account online…” and “…define transparent key performance indicators and service level targets, at least as stringent as those set for the interface used by their payment service users both in terms of availability and of data provided in accordance with Article 36” (RTS Arts. 32(1) and (2)).
This places the market in a quandary – it is extremely difficult to compare, even at a theoretical level, the performance of two interfaces where one (PSU) is designed for human interaction and the other (API) for machine. Some suggested during the EBA’s consultation period that a more appropriate comparison might be between the APIs which support the PSU interface and those delivered in response to PSD2. Those in the game of reverse engineering confirm that there is broad comparability between the functions these support – unfortunately this proved too much technical detail for the EBA.
To fill the gap, OB surveyed developers, reviewed the existing APIs already delivered by financial institutions, and settled on an average of 99% availability (c.22hrs downtime per quarter) and a response time of 1,000ms per 1MB of payload (this is a short summary; more detail is available in the guidelines). A quick review of the API Performance page OB publishes will show that, with average availability of 96.34% across the brands in November, and only Bank of Scotland, Lloyds and the HSBC brands achieving >99% availability, there is a long way to go before this target is met – made no easier by a significant amount of change to platforms as their functional scope expands over the next 6-8 months. This will also be in the face of increasing demand volumes, as those organisations which currently rely on screen scraping for access to data begin to transfer their integrations onto APIs. In short, ASPSPs face a perfect storm in achieving these goals.
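The availability figures above map directly onto permitted downtime. A quick sanity check in plain Python (the 91-day quarter length is my assumption; OB’s own calculation may differ slightly):

```python
def downtime_per_quarter(availability_pct, quarter_hours=91 * 24):
    """Hours of permitted downtime per quarter for a given availability %."""
    return quarter_hours * (1 - availability_pct / 100)

# 99% availability over a ~91-day quarter leaves roughly 22 hours of downtime,
# matching the c.22hrs figure; November's 96.34% average implies far more.
print(round(downtime_per_quarter(99.0), 1))    # ~21.8 hours
print(round(downtime_per_quarter(96.34), 1))   # ~79.9 hours
```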
At para 2.3.1 of their guidelines, the OBIE expands on the EBA’s reporting guidelines, and provides a useful template for this purpose, but this introduces a conundrum. All of the data published to date has been the banks reporting on themselves – i.e. the technical solutions to generate this data sit inside their domains, so quite apart from the obvious issue of self-reporting, there have already been clear instances where services haven’t been functioning correctly, and the bank in question simply hasn’t known this to be the case until so informed by a TPP. One of the larger banks in the UK recently misconfigured a load balancer to the effect that 50% of the traffic it received was misdirected and received no response, but without its knowledge. A clear case of downtime that almost certainly went unreported – if an API call goes unacknowledged in the woods, does anyone care?
Banks have a challenge, in that risk and compliance departments typically baulk at any services they own being placed in the cloud, or indeed anywhere outside their physical infrastructure. Yet monitoring from outside their own estate is exactly what is required for their support teams to have a true understanding of how their platforms are functioning, and to generate reliable data for their regulatory reporting requirements.
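The outside-in monitoring argued for above can be sketched very simply. A minimal, hypothetical probe (plain Python stdlib; the endpoint URL is illustrative, and real PSD2 monitoring would additionally need authenticated, consent-based access and eIDAS certificate handling):

```python
import time
import urllib.request

def probe(url, timeout=10):
    """One outside-in health check: returns (ok, latency_ms)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        # Timeouts and unacknowledged calls count as downtime,
        # whether or not the bank's own tooling notices them.
        ok = False
    return ok, (time.monotonic() - start) * 1000.0

def availability(results):
    """Percentage of successful probes over a reporting window."""
    return 100.0 * sum(ok for ok, _ in results) / len(results)

# An independent monitor would call probe() on a schedule and aggregate, e.g.:
# results = [probe("https://api.example-bank.test/accounts") for _ in range(96)]
# print(availability(results))
```

The point of the sketch is that the measurement sits outside the bank’s domain, so a misdirected load balancer shows up as failed probes rather than going unreported.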
[During week commencing 21st Jan, the Market Data Initiative will announce a free/open service to solve some of these issues. This platform monitors the performance and availability of API platforms using donated consents, with the aim of establishing a clear, independent view of how the market is performing, without prejudicial comment or reference to benchmarks. Watch this space for more on that.]
For any TPP seeking investment, where their business model necessitates consuming open APIs at scale, one of the key questions they’re likely to face is how reliable these services are, and what remedies are available in the event of non-performance. In the regulatory space, some of this information is available (see above) but is hardly transparent or independently produced, and even with those caveats does not currently make for happy reading. For remedy, TPPs are reliant on regulators and a quarterly reporting cycle for the discovery of issues. Even in the event that the FCA decided to take action, the most significant step they could take would be to instruct an ASPSP to implement a fall-back interface, and given that the ASPSP would have a period of weeks to build this, it is likely that any relying party’s business would have suffered significant detriment before it could even start testing such a facility. The consequence of this framework is that, for the open APIs, performance, availability and the transparency of information will have to improve dramatically before any commercial services rely on them.
Source: https://www.linkedin.com/pulse/api-metrics-status-regulatory-requirement-strategic-john?trk=portfolio_article-card_title
In 2017, we had a death in the portfolio. Once all the employees left, the only remaining assets were some patents, servers, domains, and a lot of code. We recently went through the process of selling the patents and code. Here is what we learned:
The value of IP is a small fraction of what the company was once valued at; it’s maybe 1 to 5 cents on the dollar. Any acquirer of the IP is unlikely to do an all-cash deal, so don’t be surprised if the final consideration is a blend of cash, stock, royalty, earn out, or some other creative structure that reduces the acquirer’s upfront risk.
Selling a patent is going to take a year or more with legal taking 6 to 9 months alone (we recommend specialized counsel that has M&A experience and experience in bankruptcy/winding down entities).
It’s also going to take some cash along the way as you foot the bill for legal, preparing the code, and other unforeseen expenses that have to be paid well ahead of the close. With those expectations in mind, you need to seriously consider whether it is worth the work to sell the IP, what you will really recover, and what the probability of success really is.
If you’ve decided it’s worth it to try to recover something for the IP, reach out to absolutely everyone you know. That includes old customers, prospects, former customers, anyone who has ever solicited you for acquisition, your cousin, your aunt, etc.
The point is: don’t eliminate anyone as a potential acquirer, since you don’t know what’s on someone’s product roadmap, and be shameless about reaching out to your entire network. The acquirer of the IP in our dead company was a prospect who never actually became a customer. We also had interest from very random firms that weren’t remotely adjacent to our space.
In order to transfer code to an acquirer, you’re going to need the CTO or whoever built a majority of the code to assist. No acquirer is going to take the code as-is unless you want them to massively discount the price to hedge their risk.
They’re going to want it cleaned up and packaged specifically to their needs. In our case, it took a founding developer 3 months of hard work to get the code packaged just right for our acquirer, and of course, we paid him handsomely for successful delivery.
The code was once part of a company, and that company has liabilities, creditors, equity owners, former employees, and various other obligations. All of those parties are probably pretty upset with you that things didn’t work out. Before you embark on a path to sell the IP, consult with an attorney that can tell you who has a right to any proceeds collected, what the waterfall of recipients looks like, who can potentially block a deal, who you need to get approval from, whether patents are in good standing, etc.
You’ll need to pay the attorney up front for his work and as you progress through the deal, so it takes money to make money from selling IP.
Put the code on GitHub. Have potential acquirers sign a very tight and punitive NDA before allowing them to see the code. It may also be advisable to only give acquirers access to portions of the code. GitHub is the best $7 a month you’ll ever spend when it comes to selling IP.
Make sure you have access to all the assets. This includes all code, training modules, patents, domains, actual servers and hardware, trademarks, logos, etc. An acquirer is going to want absolutely everything even if there are some things he can’t necessarily use.
The acquirer has to be someone that is negotiating fairly and in good faith with you. We got very lucky that our acquirer had an upstanding and reputable CEO. If you don’t trust the acquirer or if they’re being shifty, move on. In our case, had the acquirer been a bad guy, there were many times when he could have screwed us such as changing the terms of the deal before the close, among other things.
Given the limited recourse you often have in situations like this, ‘bad boy’ acquirers do it all the time. We got lucky finding an acquirer who was honest, forthright and kept his word. You’ll need to do the same.
Selling patents is incredibly challenging. In our case, the recovery was very small relative to capital invested, the process took nearly a year, and a lot of people were involved to make it happen. We also spent tens of thousands of dollars on legal fees, data scientist consulting, patent reinstatement and recovery, shipping of servers, etc.
A lot of that expenditure was incurred along the way, so we had to put more money at risk for the possibility of maybe recovering cash in the sale of the IP. Learning how to sell a patent wasn’t easy, but it got done. Hopefully, we never have to do it again, and neither do you.
At a recent KPMG Robotic Innovations event, Futurist and friend Gerd Leonhard delivered a keynote titled “The Digital Transformation of Business and Society: Challenges and Opportunities by 2020”. I highly recommend viewing the Video of his presentation. As Gerd describes, he is a Futurist focused on foresight and observations — not predicting the future. We are at a point in history where every company needs a Gerd Leonhard. For many of the reasons presented in the video, future thinking is rapidly growing in importance. As Gerd so rightly points out, we are still vastly under-estimating the sheer velocity of change.
With regard to future thinking, Gerd used my future scenario slide to describe both the exponential and combinatorial nature of future scenarios — not only do we need to think exponentially, but we also need to think in a combinatorial manner. Gerd mentioned Tesla as a company that really knows how to do this.
He then described our current pivot point of exponential change: a point in history where humanity will change more in the next twenty years than in the previous 300. With that as a backdrop, he encouraged the audience to look five years into the future and spend 3 to 5% of their time focused on foresight. He quoted Peter Drucker (“In times of change the greatest danger is to act with yesterday’s logic”) and stated that leaders must shift from a focus on what is, to a focus on what could be. Gerd added that “wait and see” means “wait and die” (love that by the way). He urged leaders to focus on 2020 and build a plan to participate in that future, emphasizing the question is no longer what-if, but what-when. We are entering an era where the impossible is doable, and the headline for that era is: exponential, convergent, combinatorial, and inter-dependent — words that should be a key part of the leadership lexicon going forward. Here are some snapshots from his presentation:
Source: B. Joseph Pine II and James Gilmore: The Experience Economy
Gerd then summarized the session as follows:
The future is exponential, combinatorial, and interdependent: the sooner we can adjust our thinking (lateral) the better we will be at designing our future.
My take: Gerd hits on a key point. Leaders must think differently. There is very little in a leader’s collective experience that can guide them through the type of change ahead – it requires us all to think differently.
When looking at AI, consider trying IA first (intelligent assistance / augmentation).
My take: These considerations allow us to create the future in a way that avoids unintended consequences. Technology as a supplement, not a replacement.
Efficiency and cost reduction based on automation, AI/IA and Robotization are good stories but not the final destination: we need to go beyond the 7-ations and inevitable abundance to create new value that cannot be easily automated.
My take: Future thinking is critical for us to be effective here. We have to have a sense as to where all of this is heading if we are to effectively create new sources of value.
We won’t just need better algorithms – we also need stronger “humarithms”, i.e. values, ethics, standards, principles and social contracts.
My take: Gerd is an evangelist for creating our future in a way that avoids hellish outcomes — and kudos to him for being that voice
“The best way to predict the future is to create it” (Alan Kay).
My take: our context when we think about the future puts it years away, and that is just no longer the case. What we think will take ten years is likely to happen in two. We can’t create the future if we don’t focus on it through an exponential lens.
Source: https://medium.com/@frankdiana/digital-transformation-of-business-and-society-5d9286e39dbf