Monthly Research
& Market Commentary

So Much is Becoming Possible with ‘Matrix’ Technologies – How Will Your Organization Take Advantage?

Adaptive Execution / 23 Aug 2017 / By Glen Robinson

In 1999, Thomas A. Anderson, a computer programmer working for the MetaCortex software company, took the red pill and was extracted from the Matrix into the real world.  As well as learning that the Matrix was a computer-generated reality nearly indistinguishable from the real world, he also learnt that he could control and manipulate it to his advantage.

I am, of course, referring to the 1999 film, The Matrix, starring Keanu Reeves.  It was the inspiration for the creation of a whole research area within LEF called the Matrix.  The idea that technology could become capable of recreating reality, operated entirely by machines, may have seemed like science fiction in 1999, but 18 years on, I’d suggest this isn’t such a crazy concept.

In June 2016, Elon Musk made the claim that there is a possibility we are already living in a Matrix-style alternate reality.  His reasoning: given the speed of technological advancement we’re seeing today, if you project it out over 10,000 years, then in the future we could well have the technology capable of building a digital reality that, if we were inserted into it, would be indistinguishable from the real thing.

Still sounding far-fetched?

The ability to recreate reality

This year, I’ve been tracking the incredible journey of Improbable and their charismatic founder, Herman Narula.  The purpose of their business is to recreate reality.  In regard to technology eras, the LEF talks about the fact that we have moved on from the era of the cloud to the era of the Matrix.  It’s the technology of today that is starting to make real what only recently felt like science fiction.  With enough processing power, storage, network speed, reliability and bandwidth, plus realistic virtual reality, you can see how an alternate digital reality could soon be built.

Alas, recreating reality is no mean feat.  You first need to create an environment which contains countless entities – people, trees, tables, animals, anything that inhabits that environment.  All these entities have a state, and all manner of controls impact that state: the laws of physics, gravity, economics, politics.  Recreating reality therefore means building a world where you can track every single entity and its state, understand how it responds to external forces and controls, and, when it is interacted with, change its state so that the change is immediately observable to every other entity in that world, all in real time.  Massively multiplayer games of today rely on you inhabiting an environment where your sphere of influence is limited: if you chop down a tree, the new state of that tree is only observable to you and a very small number of people in your ‘world’.  Improbable have evolved an architecture that allows them to take this partitioned gaming experience and stitch the partitions together into one massive world where everything can interact.
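To make the entity/state idea concrete, here is a minimal Python sketch of a single authoritative world – my own toy model, with invented names, bearing no relation to Improbable’s actual architecture:

```python
# Toy model of a single shared world: one authoritative copy of every
# entity's state, so any change is immediately visible to all observers.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    state: dict = field(default_factory=dict)

class World:
    def __init__(self):
        self.entities = {}

    def add(self, entity):
        self.entities[entity.name] = entity

    def apply(self, name, **changes):
        # Mutate one entity's state; because there is only one world,
        # every other entity observes the change at once.
        self.entities[name].state.update(changes)

world = World()
world.add(Entity("tree-42", {"standing": True}))
world.apply("tree-42", standing=False)   # chop the tree down
print(world.entities["tree-42"].state)   # {'standing': False}
```

The hard part, of course, is doing this for millions of entities in real time across many machines, which is precisely the partition-stitching problem described above.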

Whilst this will make many gamers very happy, the application of this in the real world is where it really gets interesting.  Already, Improbable are applying this technology to recreate whole cities, with multiple layers of data represented, so that you can start running simulations of events for which you have no past data.  This is super powerful.  It’s relatively easy to predict the impact of something that you have done before in a similar environment, such as building a new roundabout or changing the flow of traffic.  But what about when we finally introduce autonomous vehicles?  We have almost no data points on this: not only do we have no idea how it will affect congestion on our streets, but how will it affect the economics of a city with a mature ecosystem built around the fact that people drive cars and spend money on things like fuel, drinks and food whilst doing so?  These are all questions that businesses will need to find ways to answer as emerging technologies evolve, rapidly disrupting existing businesses and also creating opportunities for new value chains.

This concept of recreating reality is very exciting to me as it’s going to give us new ways to think about the world we live in and better plan and predict the outcomes of our actions.  It’s already possible to do this on the technology of today but what does the future hold?

Machine Intelligence is getting more intelligent 

In The Matrix, machine intelligence plays a huge role.  Hopefully, it will never evolve into the types of robots that Neo had to fight in the movie, but thanks to technology enablers like cloud computing, the speed at which we are able to process enormous amounts of data keeps nudging machine intelligence one step further – a virtuous circle of more compute enabling smarter machines.  Did you know that in May 2017, Google announced something called AutoML?

“In our approach (which we call ‘AutoML’), a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task.  That feedback is then used to inform the controller how to improve its proposals for the next round.  We repeat this process thousands of times – generating new architectures, testing them, and giving that feedback to the controller to learn from.  Eventually, the controller learns to assign high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly.”

In essence, the machines are now doing a better job of building neural networks for deep learning tasks, as they are able to build, test and experiment with many different network architectures.  Much like AlphaGo – the Go-playing computer programme developed by Google’s DeepMind – which was not only able to beat the world champion at his own game, but did so using a move that was new to Go players, AutoML is able to go way beyond the capabilities of human thinking in building architectures that make sense: it can experiment with architectures that may seem crazy, but which yield unexpectedly positive results.
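To give a flavour of the propose–evaluate–feedback loop described in the quote, here is a heavily simplified toy in Python.  The real AutoML uses a controller neural network trained with reinforcement learning; this sketch substitutes a random hill-climbing search over made-up layer widths, with an invented scoring function standing in for training and validating a child model:

```python
# Toy stand-in for the AutoML loop: propose a "child" architecture,
# evaluate it, and use the feedback to guide the next proposals.
import random

random.seed(1)  # for reproducibility of this toy run

def evaluate(arch):
    # Stand-in for "train the child model and measure validation
    # accuracy" -- an invented score that peaks when every width is 64.
    return -sum((w - 64) ** 2 for w in arch)

best_arch = (8, 8)
best_score = evaluate(best_arch)

for _ in range(200):
    # Propose: perturb the best architecture found so far.
    child = tuple(max(1, w + random.choice([-16, -8, 8, 16])) for w in best_arch)
    score = evaluate(child)
    if score > best_score:          # the feedback step
        best_arch, best_score = child, score

print(best_arch)  # climbs towards (64, 64)
```

The point of the sketch is only the shape of the loop: generate, test, feed back, repeat thousands of times.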

The significant advances in processing power over the years have enabled many technologies to really flourish.  From CPUs (central processing units), which have pretty much powered the technology of my lifetime, to GPUs (graphics processing units), which, as well as serving their original purpose of offloading graphics rendering from the CPU onto their own optimized chipsets, are now the go-to processors for machine learning tasks as well as for mining things like cryptocurrencies.  More recently, we’ve seen the creation of the TPU (Tensor Processing Unit), built by Google – a specialist chipset designed to optimize the use of the TensorFlow deep learning library.

The quantum effect

But quantum computing is just around the corner, and it seems like every week we’re seeing progress in this area.  On 21 July 2017, the Russian Quantum Center built a 51-qubit computer.  The processing power of these types of chips is exponentially better than anything we’ve seen before.  You won’t find them in a laptop or games console anytime soon, as, much like Google’s TPUs, they only suit specific types of work.  But, oh boy, for those tasks, they are quick.  To help you appreciate how quick: my laptop runs on a 64-bit CPU, which can represent 2^64 distinct states (that’s 18,446,744,073,709,551,616 of them).  Whilst that is a massive number, the CPU can only process one state at a time, so to work through all 2^64 states, even at two billion states a second (roughly the clock rate of a modern CPU), it would still take nearly 300 years.

Due to a mind-blowing property called superposition, a 64-qubit quantum computer could, in effect, hold and work on all 2^64 states at the same time, taking those centuries of processing on a CPU down to nearly an instant.  The Russian Quantum Center will not be far off this with their 51-qubit computer, which can represent a ‘paltry’ 2,251,799,813,685,248 states.  It’s important to note that each additional qubit doubles the number of states, so the exponential growth in processing capability we will see with quantum computing over time will have a huge effect.
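A quick back-of-the-envelope check of these state counts, assuming a hypothetical classical machine that examines two billion states per second:

```python
# State counts for 64-bit classical and 51-qubit quantum machines,
# and how long brute-force enumeration would take classically.
states_64 = 2 ** 64
print(states_64)                          # 18446744073709551616

seconds = states_64 / 2_000_000_000       # assumed: 2 billion states/second
years = seconds / (60 * 60 * 24 * 365)
print(round(years))                       # 292 -- nearly three centuries

print(2 ** 51)                            # 2251799813685248 states for 51 qubits
print(2 ** 52 // 2 ** 51)                 # 2 -- one extra qubit doubles the state space
```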

But it’s no good having all that processing power if you have nothing for it to work on.  You need data, lots of it, and really fast access to it.  We’ve seen storage advance over the years but, in comparison with what’s to follow, as ever, it looks like we’ve barely made progress at all.  From hard disk drives to solid state disks and flash, the cost of storing data has come down, while the amount we can store and the speed at which we can transfer it have gone up.  But just like quantum computers, storage will have its day.

Atomic storage

On 8 March 2017, IBM managed to store a single bit of data on a single atom.  This was a holmium atom: by manipulating its electrons, they were able to freeze them in place, the position of which denoted either binary 1 or 0.  They were then able to write data to the atom, changing the position of the electrons and so flipping the bit between 1 and 0.  You can imagine a big, long string of atoms in a line representing 1s and 0s, much like the data on a storage device today – with only one big difference.  Storing a single bit of data on a hard disk drive today consumes around 100,000 atoms.  By using the IBM method, we increase our storage density 100,000-fold.
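To put that figure in perspective, a quick worked example (the one-terabyte drive is my own illustrative assumption):

```python
# The same atoms a conventional drive spends on 1 TB would, at one
# atom per bit, hold 100,000 TB.
ATOMS_PER_BIT_HDD = 100_000   # rough figure for a conventional hard disk
ATOMS_PER_BIT_ATOMIC = 1      # the IBM single-atom demonstration

tb_conventional = 1
bits = tb_conventional * 8e12            # bits in 1 TB (decimal terabyte)
atoms_used = bits * ATOMS_PER_BIT_HDD    # atoms the drive spends on them
tb_atomic = atoms_used / ATOMS_PER_BIT_ATOMIC / 8e12
print(tb_atomic)                         # 100000.0 -- i.e. 100 petabytes
```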

Also developing fast is DNA storage.  Much as our bodies use DNA to store all the information needed to build and sustain our existence, we are now using the properties of DNA to see how we can store arbitrary data in this format, as well as read and write data to DNA.  Just earlier this year, Microsoft Research announced their progress in this area, as well as their intent to make it commercially available out of their data centres.  The days of having an API that gives you access to a DNA storage backend aren’t that far off.

So you have an awesome amount of processing capability, and you can now store all your data at the atomic level, but you still need to shift it around.  In my opinion, the network has always been the thing that has held up innovation.  For the most part, we still rely on joining the points together with a cable and pushing electrical signals from one end to the other.  These days, we’ve moved on a small amount and now push light over fibre optic cables.  But still, just like the hard drives and the CPUs, that is a small incremental improvement rather than a quantum leap in capability.

But fear not, the Chinese are here to help.  In August 2016, China launched Micius, a quantum science satellite that supports quantum communication.  Much like quantum computing, quantum communication uses some of the weird and wonderful properties of the quantum world to advance our technology – this time both superposition and quantum entanglement, entanglement being the relationship between particles whereby the quantum state of one particle cannot be described independently of the others, over any distance.  In this case, the application is encryption: thanks to these wonderful properties of the quantum world, they are able to create unhackable encryption keys (or so they say).
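The bookkeeping behind quantum key distribution can at least be sketched classically.  Below is a toy simulation loosely modelled on the BB84 protocol (my own choice of illustration – the article doesn’t say which protocol Micius uses): sender and receiver pick random measurement bases and keep only the bits where their bases happened to match.  The security, of course, comes from the physics, which a classical simulation cannot capture:

```python
# Toy BB84-style key sifting: matching bases yield shared key bits.
import random

random.seed(7)
n = 32
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
bob_bases   = [random.choice("+x") for _ in range(n)]

# If Bob measures in the same basis, he reads Alice's bit exactly;
# otherwise his result is random.
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: publicly compare bases and keep only the matching positions.
alice_key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
bob_key   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

print(alice_key == bob_key)   # True -- the sifted keys agree
print(len(alice_key), "shared key bits from", n, "transmitted qubits")
```

In the real protocol, any eavesdropper measuring in flight disturbs the quantum states and shows up as errors when samples of the key are compared, which is what makes the keys tamper-evident.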

It’s all very early days, but hopefully you are getting a feeling that we are more advanced than you may have previously considered.  Is Elon Musk wrong when he says we could be living in an alternate reality?  If we look at the technology advances that are happening today, how could you question the fact that one day we will be able to completely recreate reality?  We could already be living in The Matrix and we would never know it. 

Businesses need to be designed to constantly evolve

Let’s quickly zoom back to today and your business.  How is your business shaping up to handle the changes that are coming in our lifetimes?  Forget quantum computing and DNA storage – they are weak signals of what’s to follow.  Today we have stronger signals, such as the disruptive forces being applied by the more mature components of the Matrix: cloud computing, machine intelligence, blockchain, AR/VR.  As ever, it still feels like the network is lagging behind, but 5G is just around the corner.  The opportunities and threats presented by these, and future, technologies are the key focus of the research done under the banner of the Matrix within LEF.  Right now, we are nearing completion of a study of the impact on the IT organization of recently emerged technologies such as cloud, IoT and MI.  This research report will start to pave the way for IT organizations to prepare themselves for constant evolution.  Whatever the disruptive force – technology, changing market conditions, politics, the environment, regulation and so on – the ability to constantly evolve is the new operating model.  So what are you doing to prepare, and do you understand the options in front of you to start your own evolution?

We will be publishing our thinking in October this year and presenting it at our 21 November Executive Forum in London.  Please contact us if you would like more information.  

