The Fourth Revolution Will Not Be Televised (But There Will Be a Panel Discussion About It)

Wednesday, 22/04/2026 | 06:27 GMT by Exante
  • AI boom mirrors past tech hype; real challenge is adoption, not innovation.

Last week I attended GenAI Zürich, billed as Europe's summit on applied generative AI, held over two days in early April. I participated in a roundtable on AI in banking, sat through a range of presentations of wildly varying quality, and walked the exhibition floor trying to remember where I had seen all of this before.

It came to me on day two. The crypto conferences of the early 2020s. The same barely-contained frenzy. The same vendors who had clearly learned the terminology last Tuesday presenting themselves as seasoned practitioners. The same undercurrent of fear that if you don't move right now, in the next fifteen minutes, you will have permanently missed the boat. The Fear of Missing Out has found a new home, and it has exhibitor badges.

The 95% Problem (That Nobody Wants to Solve)

A curious ritual repeated itself across multiple presentations. Someone would open with a sobering statistic, a figure variously attributed to MIT, McKinsey, or "recent research," suggesting that somewhere north of 90% of AI pilot projects in enterprises fail to reach production. The audience would nod gravely. A moment of genuine reflection appeared possible.

A handful of presenters did engage honestly with failure, which was genuinely refreshing and, frankly, far more useful than anything else on the agenda. But they were the exception. The majority pivoted immediately to three glowing case studies of projects that had worked brilliantly, with no further reference to the 95%, the reasons for it, or what might be done about it. It was the conference equivalent of opening a road safety seminar with the annual accident statistics and then spending the remaining forty minutes talking about Formula One.

If the failure rate is genuinely that high, it is arguably the most interesting topic in the room. Why are pilots failing? Is it technology? Organisational resistance? Unclear success criteria? Governance gaps? These are solvable problems, if you are willing to look at them directly. Instead, the industry appears to have collectively agreed that the statistic exists to be cited and then tactfully ignored, like an awkward relative at a wedding.

Buried underneath the failure rate is a more fundamental problem that almost nobody on the conference circuit appears willing to name. The dominant model for AI adoption is substitution: take an existing process, replace the human steps with agents, declare victory. What very few organisations are doing is stopping first to ask whether the process itself still makes sense.

This matters because most business processes were not designed around what was optimal. They were designed around what was humanly possible. The number of steps, the handoffs, the approval layers, the batch runs that happen overnight because nobody could be expected to work around the clock, all of these reflect the constraints of human capacity, attention, and availability. We built our processes to fit our people. Now we are building our agents to fit our processes. It is the wrong way round.

The more interesting question, and the one I heard asked precisely once across two days, is what you would design if you started from scratch knowing you had no meaningful limit on the number of agents you could deploy, no working hours to observe, and no cognitive load to manage. The answer looks almost nothing like what most organisations currently run. The opportunity is not to automate the existing workflow. It is to make the existing workflow unnecessary.

Solutions in Search of a Problem

The exhibition floor offered its own education. Several startups were demonstrating products that solved, with considerable ingenuity and evident technical talent, problems that I cannot honestly say anyone has. One company had built an AI-powered system for a workflow so niche I had to ask twice what industry it was aimed at. Another had gamely applied large language models to a process that worked perfectly well before they arrived, and now worked slightly differently, at greater expense, with an additional dependency on a third-party API.

This is not unique to AI. Every technology wave produces its share of solutions looking for problems. In the early internet days, there were companies building browser plugins for tasks that didn't need a browser. In the mobile era, there were apps for things that didn't need an app. The pattern is as reliable as rain.

The better presentations focused on unglamorous specifics. The AWS session stood out for its work on standardised specifications for software and system definitions: a practical attempt to create a common language between human intent and machine implementation that doesn't require the human to also be a developer. Our own Denis Voskvitsov presented on agent security and sandboxing, a topic that matters enormously and gets discussed far less than agent capabilities: the rather important question of what you do when your AI agent can take actions in the world and you want some assurance that it won't take the wrong ones.
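To put that question in concrete terms, here is a minimal sketch of one common gating pattern, written in Python: an allowlist of read-only tools the agent may invoke freely, with anything that changes state routed through a human approval step. The tool names, the policy sets, and the run_gated wrapper are illustrative assumptions made for this article, not a description of the system presented in Zürich.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict

# Illustrative policy: read-only tools run without intervention; anything
# that changes state outside the sandbox needs explicit human approval.
READ_ONLY = {"search_documents", "get_account_summary"}
NEEDS_APPROVAL = {"send_email", "execute_trade"}

def approve(call: ToolCall) -> bool:
    """Stand-in for a human-in-the-loop approval step."""
    answer = input(f"Allow {call.name} with {call.args}? [y/N] ")
    return answer.strip().lower() == "y"

def run_gated(call: ToolCall, tools: dict[str, Callable[..., str]]) -> str:
    """Run a tool call only if the sandbox policy permits it."""
    if call.name not in tools:
        return f"Blocked: unknown tool '{call.name}'."
    if call.name in READ_ONLY:
        return tools[call.name](**call.args)
    if call.name in NEEDS_APPROVAL and approve(call):
        return tools[call.name](**call.args)
    return f"Blocked: '{call.name}' was not approved by the policy."

if __name__ == "__main__":
    tools = {
        "search_documents": lambda query: f"3 documents match '{query}'",
        "send_email": lambda to, body: f"Email queued for {to}",
    }
    print(run_gated(ToolCall("search_documents", {"query": "EU AI Act"}), tools))
    print(run_gated(ToolCall("send_email", {"to": "ops@example.com", "body": "hello"}), tools))

The detail worth noticing is that the safety property lives in the wrapper, not in the model: however capable or confused the agent is, the only actions that reach the outside world are the ones the policy allows.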

The Banking Roundtable: Regulation, Reluctance, and Subject Access Requests

The AI in Banking roundtable surfaced themes I suspect are common to most regulated industries, dressed up in slightly different clothes. The central question of adoption, specifically how you persuade staff who are perfectly capable at their jobs to change how they do those jobs, turns out to be less a technology problem than a change management one. People don't resist AI because they are ignorant of it. They resist it because they are not convinced it will make their working lives better, and in many cases they have seen enough technology implementations to have earned that scepticism.

Regulation came up with the predictable mixture of genuine concern and performative anxiety. The EU AI Act is real, and for financial institutions that use AI in credit decisions, customer interactions, or risk classification, its requirements are not trivial. GDPR is also real, and data protection authorities have started asking pointed questions about what happens when a customer submits a subject access request asking for information about an AI model that was used to make a decision about them. This is not a hypothetical. It is happening. The answer "we used an AI" is not, it turns out, a complete or satisfying response from a regulatory standpoint.

On the topic of AI in defence: there was, inevitably, a small political undercurrent about the ethics of AI being used in military applications. My own view is straightforward. If you do not want your technology used in defence, do not sign contracts with government departments that have the words "defence" or "war" in their name. This is not a complicated principle, though I appreciate it requires reading the contract.

The Fourth Revolution

I have been doing this long enough to have lived through four of what I would call genuine technology revolutions. The PC in the 1980s. The internet in the late 1990s and early 2000s. Mobile in the 2010s. And now this.

Each one democratised something. The PC put computing in the hands of individuals rather than institutions. The internet put information in the hands of anyone with a connection. Mobile put both in your pocket. This revolution is democratising capability itself. The ability to build things, to turn an idea into a working product, is no longer gated by whether you can write code, manage a development team, or afford one.

The timeframe is compressing in ways that are genuinely difficult to internalise. A project that would have required weeks of engineering time two years ago can be prototyped in a day. We are not fully at the point where an idea becomes a product before lunch, but we are close enough that the economics of software development are being rewritten in real time. The scarce resource is no longer the ability to build. It is the quality of the idea, and the clarity with which you can articulate it.

I feel for the junior developers who haven't yet grasped this transformation. Not because their skills are worthless, they aren't, but because the entry-level path of learning by writing boilerplate has just become considerably narrower. What is becoming more valuable is the ability to define a problem precisely, to reason about whether a solution actually solves it, and to know when the output in front of you is wrong. These are not coding skills. They are thinking skills.

This feels like the biggest of the four revolutions, and I say that having watched the internet turn entire industries inside out. At EXANTE, we are not in the habit of running pilots that are designed to succeed on paper and fail in production. The questions we brought to Zürich, about adoption, governance, agent security, and regulatory exposure, are the same ones we are working through at home. The conference didn't answer them. But it was reassuring, in a slightly grim way, to confirm that everyone else is wrestling with exactly the same ones.
