Neuromorphic Software and Hardware: The Future

21 November 2017 · Posted by Ina Danova · Insights

Grey matters

Without a doubt, the human brain is a fascinating organ. It contains approximately 86 billion neurons, cells that connect and pass signals to thousands of other neurons. All of these neurons can work simultaneously, as a team, to solve a wide variety of tasks, yet we still know very little about how this amazing collaboration really works on the inside. Questions such as why a pain signal from a limb takes longer to register than direct stimulation of the brain are responsible for many scientists’ sleepless nights.

History

The first mathematical model of the neural system was created in the 1940s, and the first simple models of artificial neural networks followed in the 1950s, although these were far from perfect. The electronic circuitry of artificial neurons is highly deterministic, which means the same set of inputs will always produce the same set of outputs. Brain cells, being living things, instead exhibit what is known as stochastic behaviour: the detailed pattern of activity varies unpredictably from one repetition to the next. The main breakthrough came only recently, when IBM captured the very essence of what makes brain cells tick in artificial neurons. By using a phase-change material (a chalcogenide), they were able to replicate the neuronal dynamics observed in brain cells. In a nutshell, the phase-change material behaves almost exactly like the membrane of a brain cell. Unlike a standard capacitor-based unit, after discharging the artificial neuron settles into a slightly different state than before the first impulse arrived. This introduces a degree of randomness into the functioning of the artificial neuron. But why do scientists even bother replicating something so complex in the first place?
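
To make the contrast concrete, here is a minimal Python sketch of a stochastic integrate-and-fire neuron in the spirit of the behaviour described above. The threshold, leak and noise parameters are illustrative assumptions, not IBM's device values.

```python
import random

# Minimal sketch of a stochastic integrate-and-fire neuron, loosely
# modelling the behaviour described above. All parameters (threshold,
# leak, reset noise) are illustrative assumptions, not device data.

class StochasticNeuron:
    def __init__(self, threshold=1.0, leak=0.95, reset_noise=0.05):
        self.threshold = threshold      # potential needed to fire
        self.leak = leak                # fraction of potential kept each step
        self.reset_noise = reset_noise  # randomness of the post-spike state
        self.potential = 0.0

    def step(self, input_current):
        """Integrate one input; return True if the neuron fires."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            # Unlike an ideal capacitor, the neuron does not return to
            # exactly the same state after discharge.
            self.potential = random.gauss(0.0, self.reset_noise)
            return True
        return False

neuron = StochasticNeuron()
spikes = [neuron.step(0.3) for _ in range(20)]
print(spikes)  # the exact spike pattern varies from run to run
```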

Data collection

We see new IoT products entering the market every day. Companies are trying to make things faster, smarter, safer and more convenient for us, made possible by the vast amounts of data they hold and their ability to make sense of it all. Everyone and everything around us produces data, from simple web clicks to IoT sensors and the high-resolution chips in telescopes. Moore’s law observes that the number of transistors on a chip doubles roughly every two years. Although this trend is impressive, the amount of data does not grow linearly but nearly exponentially. Creating and collecting data is cheap; analysing it is difficult and costly. Our modern telescopes can collect almost an exabyte of data per day. As a comparison, most of today’s laptops ship with a 500-gigabyte hard drive or SSD, which means we would need around 2,000,000 of them just to store the daily data generated by telescopes. What about the rest of it?
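
The storage figure above is easy to check with a quick back-of-the-envelope calculation, assuming decimal gigabytes and an even exabyte per day:

```python
# Back-of-the-envelope check of the storage figure above.
exabyte = 10**18            # bytes collected per day (order of magnitude)
laptop_drive = 500 * 10**9  # a typical 500-gigabyte drive, in bytes

drives_per_day = exabyte / laptop_drive
print(f"{drives_per_day:,.0f} drives per day")  # 2,000,000
```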

The imminent future

We may soon be collecting more data than our computers are able to process, and there are a few simple reasons for that. As the number of transistors within the same space doubles, the heat they produce goes up, and removing that heat from the components is an ongoing challenge. If you run a data centre today, a large portion of the electricity bill goes to a powerful cooling system. Energy efficiency is another limitation: only part of a chip can run at any given time, leaving the rest of it powered down, the so-called dark silicon. Most computational systems are based on the von Neumann architecture, which uses centralised resources. All processing units access the same memory, so its limited bandwidth creates a bottleneck. These considerations lead back to mother nature’s most impressive computational structure: the brain.
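
The memory bottleneck can be made concrete with a roofline-style sketch: no matter how fast the processor, throughput is capped by how quickly data can be fetched from the shared memory. The compute and bandwidth figures below are hypothetical, chosen only to show the effect.

```python
# Illustrative roofline-style estimate of the von Neumann bottleneck.
# The numbers are hypothetical, chosen only to show the effect.

peak_flops = 10e12        # 10 TFLOP/s of raw compute
memory_bandwidth = 100e9  # 100 GB/s to the shared memory

def attainable_flops(flops_per_byte):
    """Performance is capped by whichever resource runs out first."""
    return min(peak_flops, memory_bandwidth * flops_per_byte)

for intensity in (0.5, 2, 8, 32, 128):
    tflops = attainable_flops(intensity) / 1e12
    print(f"{intensity:>5} FLOP/byte -> {tflops:.2f} TFLOP/s")
```

Until the arithmetic per byte fetched is very high, the processor sits idle waiting on memory, which is exactly the limitation brain-inspired architectures try to avoid by keeping memory and computation together.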

Neuromorphic hardware

Despite the abundance of AI, the brain and brain-like computers remain unmatched in their ability to make sense of data quickly and, sometimes, rather mysteriously. They are highly fault-tolerant and need only 10 to 20 watts to operate. All neurons can communicate and work with each other, directly or indirectly, and with data stored in memory distributed all across the brain, they achieve massive parallelism. Yet that isn’t all; our brains provide fast and frugal answers because they recognise patterns in noisy and complex data sets.
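
The kind of fault tolerance that massive parallelism buys can be illustrated with a toy example: a large population of noisy, unreliable units still produces a stable answer when a sizeable fraction of them drops out. This is a deliberately simplified sketch, not a model of any particular neuromorphic chip.

```python
import random

# A population of noisy units estimating a signal. Even with many
# units silent (failed), the collective answer barely changes.

def population_estimate(signal, n_units=1000, failure_rate=0.2, noise=0.5):
    """Average the readings of many noisy units, some of which have failed."""
    readings = []
    for _ in range(n_units):
        if random.random() < failure_rate:
            continue  # this unit is silent; the rest compensate
        readings.append(signal + random.gauss(0.0, noise))
    return sum(readings) / len(readings)

true_signal = 1.0
print(population_estimate(true_signal, failure_rate=0.0))  # ~1.0
print(population_estimate(true_signal, failure_rate=0.3))  # still ~1.0
```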

Need for new software

The hardware simulation of the brain is only one part of the problem. Although our current software and algorithms can operate on multi-processor machines with tens of cores, they are far from being able to run in parallel on brain-inspired machines with hundreds of thousands or millions of cores.
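
One reason conventional code maps poorly onto such machines is that it assumes a global clock and shared memory, whereas brain-inspired hardware favours event-driven computation in which work happens only where a spike arrives. The tiny network below is a made-up illustration of that style, not a real neuromorphic programming API.

```python
from collections import defaultdict, deque

# Minimal sketch of event-driven computation: instead of updating every
# neuron on every clock tick, work is only done where a spike arrives.
# The network, weights and threshold are made-up illustrations.

connections = {            # neuron id -> downstream neuron ids
    0: [1, 2],
    1: [2, 3],
    2: [3],
    3: [],
}
weights = defaultdict(lambda: 0.6)   # same weight everywhere, for brevity
potential = defaultdict(float)
THRESHOLD = 1.0

def run(initial_spikes):
    """Propagate spikes through the network, event by event."""
    events = deque(initial_spikes)
    fired = []
    while events:
        neuron = events.popleft()
        fired.append(neuron)
        for target in connections[neuron]:
            potential[target] += weights[(neuron, target)]
            if potential[target] >= THRESHOLD:
                potential[target] = 0.0
                events.append(target)   # only touched neurons create work
    return fired

print(run([0, 1]))  # order in which neurons fire
```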

Implementation

All in all, what can neuromorphic software and hardware really do for us? They could speed up cognitive computing, analyse IoT big data and even accelerate DNA sequencing. Thanks to them, we can find promising patterns in noisy data and are better equipped to predict earthquakes and meteorological events, simulate brain cell activity, the spread of viruses or the effect of drug treatments. We may get better at diagnosing diseases like Alzheimer’s or devise personalised treatments to address them. The applications are nearly limitless, spanning cyber security, medicine, automotive, banking and finance, augmented reality and mobile technologies.

We take for granted the automation features on our smartphones, such as the DND (do not disturb) mode during meetings or at night. Few know that this technology was first implemented by Nokia on their S60 platform and its 235-megahertz processor nearly two decades ago. Even back then, alarm suggestions and auto profiles were triggered based on patterns in user behaviour. The sky is the limit to what future mobile phones or any other smart devices could accomplish thanks to the holy trinity of next-generation neuromorphic chips, new software and cloud computing.

Copywriter: Ina Danova

Tags: Artificial Intelligence, Digital Transformation, Internet of Things