Could computers built like brains destroy the competition?

Image source, Getty Images

Image description, The power demand of data centers is increasing rapidly

  • Author, Zoe Corbyn
  • Role, Technology Reporter
  • Reporting from San Francisco

The power consumption of modern computers is increasing at an alarming rate.

According to a recent report by the International Energy Agency (IEA), the combined electricity consumption of data centers, artificial intelligence (AI) and cryptocurrencies could double by 2026 from 2022 levels.

It is estimated that the energy consumption of these three sectors could be approximately equivalent to Japan's annual energy needs by 2026.

Companies like Nvidia – whose computer chips form the basis for most of today’s AI applications – are working to develop more energy-efficient hardware.

But could an alternative be to build computers with a fundamentally different architecture that is more energy efficient?

Some companies are convinced it can, and are taking their cue from the structure and function of an organ that needs only a fraction of the power of a conventional computer yet can perform more operations faster: the brain.

In neuromorphic computing, electronic devices mimic neurons and synapses and are interconnected in a way that resembles the brain's electrical network.

This is nothing new – researchers have been working on this technology since the 1980s.

But the energy demands of the AI revolution are increasing the pressure to bring this young technology into the real world.

Current systems and platforms serve primarily as research tools, but proponents say they could bring huge improvements in energy efficiency.

Those with commercial ambitions include hardware giants such as Intel and IBM.

A handful of smaller companies are also making inroads. “The opportunity awaits the company that can seize it,” says Dan Hutcheson, an analyst at TechInsights. “[And] the chances are so good that it could be an Nvidia killer.”

Image source, SpiNNcloud Systems

Image description, SpiNNcloud says its neuromorphic computer will be more energy efficient for AI

In May, SpiNNcloud Systems, a spin-off from the Technical University of Dresden, announced that it would begin selling neuromorphic supercomputers for the first time and was accepting pre-orders.

“We achieved the commercialization of neuromorphic supercomputers before other companies,” says Hector Gonzalez, co-CEO of the company.

It is a significant development, says Tony Kenyon, professor of nanoelectronic and nanophotonic materials at University College London, who works in this field.

“While there is no killer application yet, there are many areas where neuromorphic computing will deliver significant improvements in energy efficiency and performance, and I am confident that as this technology advances, we will see widespread adoption,” he says.

Neuromorphic computing encompasses a range of approaches, from designs that are loosely brain-inspired to near-complete simulations of the human brain (although we are still a long way from the latter).

However, there are some basic design features that distinguish it from traditional computers.

First, unlike traditional computers, neuromorphic computers do not have separate memory and processing units. Instead, these tasks are performed together on a chip in one place.

By eliminating the need to transfer data between the two, energy consumption is reduced and processing time is accelerated, notes Prof. Kenyon.
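One way to picture computing in memory is the analog “crossbar” often discussed in this field: weights are stored as conductances at the crossing points of a grid of wires, and applying input voltages produces output currents that sum into a matrix-vector product on the spot, with no per-multiply memory fetch. The following is a toy sketch of that idea, with illustrative values only, not the design of any particular chip (and note that, as described below, most commercial efforts are digital rather than analog):

```python
# Conceptual sketch of in-memory computing with a resistive crossbar.
# Weights live at the row/column junctions as conductances; by Ohm's and
# Kirchhoff's laws, applying input voltages to the rows yields column
# currents equal to a matrix-vector product, computed where the weights
# are stored. Illustrative toy values only.

weights = [          # conductances at the crossbar junctions (rows x cols)
    [0.2, 0.5],
    [0.1, 0.3],
    [0.4, 0.25],
]
inputs = [1.0, 0.5, 2.0]   # voltages applied to the rows

# Each column's output current is the sum of contributions from every row:
outputs = [
    sum(inputs[r] * weights[r][c] for r in range(len(inputs)))
    for c in range(len(weights[0]))
]
```

In a conventional machine, each of those multiplications would involve fetching a weight from a separate memory; here the summation happens in place.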

Neuromorphic computers also commonly take an event-driven approach to computing.

In contrast to conventional computing, where all parts of the system are always on and available to communicate with any other part at any time, activation in neuromorphic computing can be sparser.

The imitation neurons and synapses activate only when they have something to communicate, just as many neurons and synapses in our brains become active only when there is a reason to do so.

Working only when there is something to process also saves electricity.
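That event-driven behaviour can be sketched with a leaky integrate-and-fire neuron, a common abstraction in this field. This is a toy model, not the design of any particular chip: the neuron does work only when an input spike (an “event”) arrives, and emits a spike of its own only when its internal potential crosses a threshold.

```python
# Minimal event-driven leaky integrate-and-fire (LIF) neuron sketch.
# Between input events the neuron is idle; its leak is accounted for
# analytically when the next event arrives, so silent periods cost nothing.

import math

class LIFNeuron:
    def __init__(self, threshold=1.0, tau=20.0):
        self.threshold = threshold   # firing threshold
        self.tau = tau               # leak time constant (ms)
        self.potential = 0.0         # membrane potential
        self.last_event = 0.0        # time of the last input event (ms)

    def receive(self, t, weight):
        """Process one input spike at time t; return True if the neuron fires."""
        # Apply the leak accumulated since the last event.
        self.potential *= math.exp(-(t - self.last_event) / self.tau)
        self.last_event = t
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after firing
            return True
        return False

# A sparse train of input spikes; computation happens only at these events.
neuron = LIFNeuron()
spikes = [(1.0, 0.4), (2.0, 0.4), (3.0, 0.4), (60.0, 0.4)]
out = [t for t, w in spikes if neuron.receive(t, w)]
# Three closely spaced inputs push the neuron over threshold; the late,
# isolated input at t=60 arrives after the potential has leaked away.
```

The point of the sketch is the energy story: nothing is computed during the long silence between t=3 and t=60, mirroring how neuromorphic hardware draws power only when events occur.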

And while modern computers are digital – representing data with ones and zeros – neuromorphic computing can be analog.

Analog computing, which predates digital, works with continuous signals and can be useful where data from the outside world needs to be analyzed.

However, for the sake of simplicity, most commercially oriented neuromorphic efforts are digital.


The intended commercial applications can be divided into two main categories.

A focus of SpiNNcloud is to provide a more energy-efficient and higher-performance platform for AI applications – including image and video analysis, speech recognition, and the large language models that power chatbots like ChatGPT.

Another area of application is “edge computing”, where data is processed in real time on connected devices that operate with limited power, rather than in the cloud. Autonomous vehicles, robots, mobile phones and wearable technologies could all benefit.

However, technical challenges remain. The biggest obstacle to the further development of neuromorphic computing has long been the development of the software required to operate the chips.

Having the hardware is one thing, but it also has to be programmed to work, and that may mean developing, from the ground up, a style of programming completely different from the one used on traditional computers.

“The potential of these devices is huge… the problem is how to make them work,” Hutcheson summarizes, predicting that it will take at least a decade, if not two, before the benefits of neuromorphic computing are truly felt.

Cost is also an issue. Whether using silicon, as is the case with commercially oriented approaches, or other materials, developing radically new chips is expensive, notes Professor Kenyon.

Image description, Intel is making “rapid progress” with its neuromorphic computer, says Mike Davies (right)

The current prototype of a neuromorphic chip from Intel is called Loihi 2.

In April, the company announced that it had brought together 1,152 of them to create Hala Point, a large-scale neuromorphic research system with more than 1.15 billion artificial neurons and 128 billion artificial synapses.

With a neuronal capacity roughly equivalent to that of an owl's brain, Intel says it is the largest system in the world to date.

At the moment it is still a research project by Intel.

“[But Hala Point] shows that there is real viability here for applications that use AI,” says Mike Davies, head of the Neuromorphic Computing Lab at Intel.

Hala Point is about the size of a microwave oven, “commercially relevant” and is making “rapid progress” on the software side, he says.

IBM calls its latest brain-inspired chip prototype NorthPole.

The chip, unveiled last year, is an evolution of the previous TrueNorth prototype. Tests have shown it is more energy efficient, takes up less space and is faster than any chip currently on the market, says Dharmendra Modha, the company's chief scientist for brain-inspired computing. He adds that his group is currently working to demonstrate that chips can be interconnected to form a larger system.

“The path to market is a story to come,” he says. One of the great innovations of NorthPole, notes Dr. Modha, is that it was developed hand in hand with its software, so the full capabilities of the architecture can be exploited from the start.

Other smaller neuromorphic companies include BrainChip, SynSense and Innatera.

Image description, IBM says its NorthPole chip is more energy efficient and faster than other chips

SpiNNcloud's supercomputer commercializes neuromorphic computing developed by researchers at TU Dresden and the University of Manchester under the auspices of the EU's Human Brain Project.

These efforts have produced two neuromorphic supercomputers for research purposes: the SpiNNaker1 machine at the University of Manchester, which comprises over a billion neurons and has been in operation since 2018, and a second-generation SpiNNaker2 machine at TU Dresden, currently being configured, which can emulate at least five billion neurons. SpiNNcloud's commercially available systems go even further, reaching at least 10 billion neurons, says Mr Gonzalez.

In the future, different types of computing platform will work together, says Professor Kenyon: conventional, neuromorphic and quantum computers, the latter being another new type of computing on the horizon.