Monthly Research & Market Commentary

There is Nothing Artificial About Machine Intelligence

The pursuit of Artificial Intelligence (AI) is as old as the IT industry itself. Ever since the first programmable computers were conceived, both pioneering technologists and the public at large have foreseen the inevitable competition between human and electronic capabilities.

Machine Intelligence

But reaching this stage has taken much longer than many predicted. While sophisticated, hand-crafted expert systems have long been mission-critical in a wide range of industries, developing low-cost, general-purpose machine intelligence (MI) has mostly proved elusive. Progress in fundamental AI/MI fields such as robotics, neural networks and natural language processing has generally lagged behind other areas of digital innovation, with most of the relevant research remaining academic in nature.

However, this situation is now rapidly changing. Over the next few years, we expect major progress in a wide range of AI/MI disciplines, as machines become ever-more capable of speaking, hearing, seeing, recognizing, tracking, controlling, analyzing, advising, learning, explaining, deciding and many other human capabilities. Although the terms AI and MI can still be used interchangeably, over time MI should prevail. Machine intelligence works quite differently from the human version, but it is entirely real, and in this sense not at all artificial.

There are three main reasons why we believe MI is progressing so quickly today:

  1. Technologists have learned that it is often much easier to build specific MI capabilities (such as language translation or facial recognition) by using large, specialized data sets than it is to develop general-purpose learning capabilities. Each MI capability has its own applications, business model and focused start-up companies. There are hundreds of such firms today.
  2. Major strides are also being made toward the traditional goal of general-purpose intelligence through so-called deep learning. For example, DeepMind, a British firm acquired for over $400 million by Google in 2014, has greatly impressed the AI community with the ability of its systems to learn Atari video games such as Pong from scratch, and then quickly teach themselves to play at super-human levels. This month, the company’s AlphaGo system will take on Lee Sedol, a world champion in the Chinese game Go, in a highly anticipated $1 million match.
  3. Perhaps most importantly, the two developments above matter so much today because machine intelligence is merging with cloud economics – global scale, continuous improvement, real-time Big Data acquisition, and effectively zero marginal cost. This means that each new MI capability can potentially serve billions of online users. Consider the way that language translation capabilities are now being bundled into Skype.

The merger of MI and the cloud is fundamental to the innovations of the future, but it is still not sufficiently understood. Before the cloud, most AI work was isolated and relatively high cost. But when married to the cloud, MI services such as recognizing faces or translating languages are no different from using Shazam to identify a song or Googling a search term. What makes all of these wondrous applications possible is not just powerful computers and clever software, but the availability of large, specialized and continually refreshed data sets, most of which simply did not exist in the pre-cloud era. 

As AI enthusiasts have long observed, once a smart application becomes ubiquitous, people no longer see it as AI – it becomes just another technology service. But taken together, ever-richer data sets, improved MI know-how, powerful cloud economics and lucrative MI business models are creating tremendous new market opportunities and a new Silicon Valley gold rush, even today when tech markets are down.

New words describe new realities

Over the last year, clients may have noticed that we have been using the term ‘the Matrix’ to describe today’s emerging technology landscape, with a deliberate nod to the iconic 1999-2003 film trilogy. We decided to do this because new words can help us appreciate new realities, as shown in the figure below.

New words emerge to describe new realities

Consider that the term internet was originally a shortened alternative to internetworking – which described the ability to link private incompatible computer networks via gateways. In the late 1980s, Tim Berners-Lee took this concept a major step further by making it easy to link not just systems, but individual pages and documents, via a web of hypertext. More recently, the use of ‘the cloud’ caught on as a way to capture the fact that networked computers were no longer just connecting systems and pages; they were also an on-demand platform that could transform computing into a utility service. While the internet, the web and the cloud are now often used interchangeably, in each case new terminology emerged to reflect a major new phase of digital innovation. 

We believe that new words will soon be needed once again. While the cloud suggests something ‘out there’, we are all now part of a vast and increasingly intelligent Matrix of systems, software and data. But whether our use of the term ‘the Matrix’ catches on or not, we think it captures the driving technology dynamic of our times: the extraordinary merger of machine intelligence and cloud economics, and (for better or worse) all that this will entail.

