



Project SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) is a collaborative cognitive computing effort sponsored by the Defense Advanced Research Projects Agency (DARPA) to develop the architecture for a brain-inspired neurosynaptic computer core.

The project, which began in 2008, is a collaboration between IBM Research, HRL Laboratories, and Hewlett-Packard.

Researchers from a number of universities are also involved in the project.

The acronym SyNAPSE echoes the Ancient Greek word σύναψις (sýnapsis), meaning "conjunction," which refers to the neural junctions across which information passes in the brain.

The project's purpose is to reverse-engineer the functional intelligence of rats, cats, or potentially humans to produce a flexible, ultra-low-power system for use in robots.

The initial DARPA announcement called for a machine that could "scale to biological levels" and break through the "algorithmic-computational paradigm" (DARPA 2008, 4).

In other words, DARPA sought an electronic computer that could analyze real-world complexity, respond to external inputs, and do so in near-real time.

SyNAPSE is a reaction to the need for computer systems that can adapt to changing circumstances and understand the environment while being energy efficient.

Scientists at SyNAPSE are working on neuromorphic electronic systems that are analogous to biological nervous systems and capable of processing data from complex settings.

It is envisaged that such systems will gain a considerable degree of autonomy in the future.

The SyNAPSE project takes an interdisciplinary approach, drawing on concepts from areas as diverse as computational neuroscience, artificial neural networks, materials science, and cognitive science.

Basic science and engineering will need to be advanced by SyNAPSE in areas including: 

  •  simulation—the digital replication of systems in order to verify functioning before physical neuromorphic systems are installed.

In 2008, IBM Research and HRL Laboratories received the first SyNAPSE grant.

IBM and HRL subcontracted various aspects of the grant requirements to a variety of vendors and contractors.

The project was split into four parts, each of which began following a nine-month feasibility assessment.

The first simulator, C2, was released in 2009 and ran on a BlueGene/P supercomputer, performing cortical simulations with 10⁹ neurons and 10¹³ synapses, numbers comparable to those found in a cat's brain.

The software drew criticism after the leader of the Blue Brain Project asserted that the simulation did not achieve the complexity claimed.

Each neurosynaptic core measures 2 millimeters by 3 millimeters and is built from design elements drawn from human brain biology.

The relationship between the cores and actual brains is more symbolic than literal.

Computation stands in for neurons, memory stands in for synapses, and communication stands in for axons and dendrites.

This mapping enables the team to realize a biological system's functions in a hardware implementation.

HRL Labs announced in 2012 that it had created the world's first working memristor array layered atop a traditional CMOS circuit.

The term "memristor," which combines the words "memory" and "transistor," was invented in the 1970s.

Memory and logic functions are integrated in a memristor.
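The behavior that gives the memristor its name can be captured in a toy model: unlike an ordinary resistor, its resistance depends on the history of current that has flowed through it. The sketch below is a simplified, illustrative version of a linear-drift memristor model; all constants are made up for demonstration and do not describe any real device.

```python
import math

def simulate_memristor(steps=1000, dt=1e-4, k=5e4):
    """Toy memristor: resistance depends on charge history.

    The internal state x in [0, 1] drifts in proportion to the
    current through the device, so the device "remembers" how much
    charge has passed through it (illustrative constants only).
    """
    r_on, r_off = 100.0, 16e3          # bounding resistances (ohms)
    x = 0.5                            # normalized internal state
    trace = []
    for n in range(steps):
        v = math.sin(2 * math.pi * 50 * n * dt)   # 50 Hz, 1 V drive
        r = r_on * x + r_off * (1 - x)            # state-dependent resistance
        i = v / r
        x = min(max(x + k * i * dt, 0.0), 1.0)    # drift, clamped to [0, 1]
        trace.append(r)
    return trace

resistances = simulate_memristor()
# The resistance wanders with the integrated current rather than
# tracking the instantaneous voltage alone.
```

Because resistance lags the drive in this way, plotting current against voltage for such a device traces the pinched hysteresis loop that is the memristor's signature.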

In 2012, project organizers reported the successful large-scale simulation of 530 billion neurons and 100 trillion synapses on the Blue Gene/Q Sequoia machine at Lawrence Livermore National Laboratory in California, then the world's second-fastest supercomputer.

The TrueNorth processor, a 5.4-billion-transistor chip with 4096 neurosynaptic cores coupled through an intrachip network that includes 1 million programmable spiking neurons and 256 million adjustable synapses, was presented by IBM in 2014.

Finally, in 2016, an end-to-end ecosystem (including scalable systems, software, and applications) that could fully exploit the TrueNorth chip was unveiled.

At the time, there were reports on the deployment of applications such as interactive handwritten character recognition and data-parallel text extraction and recognition.

TrueNorth's cognitive computing chips have since been put to the test in simulations such as driving a virtual-reality robot and playing the popular videogame Pong.
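The "spiking neurons" such chips implement follow an integrate-and-fire cycle: the membrane potential leaks, accumulates input, and fires a spike when it crosses a threshold. The following is a minimal illustrative model with invented parameters, not TrueNorth's actual neuron equation:

```python
def lif_spike_train(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron (illustrative).

    Each step, the membrane potential leaks toward zero, integrates
    the input, and emits a spike (1) on crossing the threshold,
    after which it resets.
    """
    v, spikes = 0.0, []
    for i_t in input_current:
        v = leak * v + i_t            # leaky integration of input
        if v >= v_thresh:             # threshold crossing
            spikes.append(1)
            v = v_reset               # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant drive of 0.3 charges the membrane to threshold every
# fourth step, producing a regular spike train.
print(lif_spike_train([0.3] * 12))
# → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1]
```

A neurosynaptic core wires many such neurons together, with each spike routed over the chip's communication network to the synapses of downstream neurons.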

DARPA has been interested in the construction of brain-inspired computer systems since the 1980s.

Dharmendra Modha, director of IBM Almaden's Cognitive Computing Initiative, and Narayan Srinivasa, head of HRL's Center for Neural and Emergent Systems, lead Project SyNAPSE.

~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram

You may also want to read more about Artificial Intelligence here.

See also: 

Cognitive Computing; Computational Neuroscience.

References And Further Reading

Defense Advanced Research Projects Agency (DARPA). 2008. “Systems of Neuromorphic Adaptive Plastic Scalable Electronics.” DARPA-BAA 08-28. Arlington, VA: DARPA, Defense Sciences Office.

Hsu, Jeremy. 2014. “IBM’s New Brain.” IEEE Spectrum 51, no. 10 (October): 17–19.

Merolla, Paul A., et al. 2014. “A Million Spiking-Neuron Integrated Circuit with a Scalable Communication Network and Interface.” Science 345, no. 6197 (August): 668–73.

Monroe, Don. 2014. “Neuromorphic Computing Gets Ready for the (Really) Big Time.” Communications of the ACM 57, no. 6 (June): 13–15.

Artificial Intelligence - What Is Cognitive Computing?


Self-learning hardware and software systems that use machine learning, natural language processing, pattern recognition, human-computer interaction, and data mining technologies to mimic the human brain are referred to as cognitive computing.

The term "cognitive computing" refers to the use of advances in cognitive science to create new and complex artificial intelligence systems.

Cognitive systems aren't designed to take the place of human thinking, reasoning, problem-solving, or decision-making; rather, they're meant to supplement or aid people.

Cognitive computing frequently refers to a collection of strategies that advance the aims of affective computing, which seeks to narrow the gap between computer technology and human emotions.

Real-time adaptive learning approaches, interactive cloud services, interactive memories, and contextual understanding are some of these methodologies.

To conduct quantitative assessments of organized statistical data and aid in decision-making, cognitive analytical tools are used.

Other scientific and economic systems often include these tools.

Complex event processing systems utilize complex algorithms to assess real-time data regarding events for patterns and trends, offer choices, and make judgments.

These kinds of systems are widely used in algorithmic stock trading and credit card fraud detection.
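A fraud-detection pattern rule of this kind can be sketched as a sliding-window check over an event stream. This is a toy illustration with invented thresholds; real complex event processing engines compile far richer pattern languages over many event types.

```python
from collections import defaultdict, deque

def flag_bursts(events, window=60.0, max_in_window=3):
    """Toy complex-event-processing rule: flag any transaction that
    is the Nth (here 3rd) event from the same card within a sliding
    time window -- a simplified stand-in for fraud-detection rules.
    """
    recent = defaultdict(deque)      # card id -> timestamps in window
    flagged = []
    for t, card in events:           # events arrive in time order
        q = recent[card]
        while q and t - q[0] > window:
            q.popleft()              # drop timestamps outside the window
        q.append(t)
        if len(q) >= max_in_window:
            flagged.append((t, card))
    return flagged

# Three charges on card "A" inside 60 seconds trigger a flag;
# the single charge on card "B" does not.
events = [(0, "A"), (10, "B"), (20, "A"), (45, "A"), (200, "A")]
print(flag_bursts(events))   # → [(45, 'A')]
```

Production systems run rules like this continuously over live transaction streams, which is what lets them offer choices or block a charge within milliseconds of the triggering event.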

Image recognition systems can now perform face recognition and complex scene recognition.

Machine learning algorithms build models from data sets and improve as new information is added.

Neural networks, Bayesian classifiers, and support vector machines may all be used in machine learning.

Natural language processing entails the use of software to extract meaning from enormous amounts of data generated by human conversation.

Watson from IBM and Siri from Apple are two examples.

Natural language comprehension is perhaps cognitive computing's Holy Grail or "killer app," and many people associate natural language processing with cognitive computing.

Heuristic programming and expert systems are two of the oldest branches of so-called cognitive computing.

Since the 1980s, there have been four reasonably "full" cognitive computing architectures: Cyc, Soar, Society of Mind, and Neurocognitive Networks.

Speech recognition, sentiment analysis, face identification, risk assessment, fraud detection, and behavioral suggestions are some of the applications of cognitive computing technology.

Used together, these applications are referred to as "cognitive analytics" systems.

These systems are in development or in use across aerospace and defense, agriculture, travel and transportation, banking, health care and the life sciences, entertainment and media, natural resource development, utilities, real estate, retail, manufacturing and sales, marketing, customer service, hospitality, and leisure.

Netflix's movie rental suggestion algorithm is an early example of predictive cognitive computing.

Computer vision algorithms are being used by General Electric to detect tired or distracted drivers.

Customers of Domino's Pizza can place orders online by speaking with a virtual assistant named Dom.

Elements of Google Now, a predictive search feature that debuted in Google applications in 2012, assist users in predicting road conditions and anticipated arrival times, locating hotels and restaurants, and remembering anniversaries and parking spots.

In IBM marketing materials, the term "cognitive" computing appears frequently.

Cognitive computing, according to the company, is a subset of "augmented intelligence," a term IBM prefers over artificial intelligence.

The Watson machine from IBM is frequently referred to as a "cognitive computer" because it deviates from the traditional von Neumann architecture and instead draws inspiration from neural networks.

Neuroscientists are researching the inner workings of the human brain, searching for connections between neuronal assemblies and mental phenomena, and generating new theories of mind.

Hebbian theory is an example of a neuroscientific theory that underpins machine learning implementations in cognitive computing.

The Hebbian theory is a proposed explanation for neural adaptation during the learning process.

Donald Hebb initially proposed the hypothesis in his 1949 book The Organization of Behavior.

Learning, according to Hebb, is a process in which the causal induction of recurrent or persistent neuronal firing or activity causes neural traces to become stable.

"Any two cells or systems of cells that are repeatedly active at the same time will tend to become 'associated,' so that activity in one facilitates activity in the other," Hebb wrote (Hebb 1949, 70).

"Cells that fire together, wire together," is how the idea is frequently summarized.

According to this hypothesis, the connection of neuronal cells and tissues generates neurologically defined "engrams" that explain how memories are preserved in the brain as biophysical or biochemical changes.

Engrams' actual location, as well as the procedures by which they are formed, are currently unknown.
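Hebb's rule translates directly into the weight updates used in artificial neural networks: a connection weight grows in proportion to the co-activity of its input and the cell's output. The sketch below uses invented values throughout; it illustrates the rule itself, not any particular system's implementation.

```python
def hebbian_update(w, x, y, eta=0.1):
    """One Hebbian step: strengthen weight w_i when input x_i and the
    postsynaptic output y are active together ("fire together,
    wire together")."""
    return [w_i + eta * x_i * y for w_i, x_i in zip(w, x)]

def forward(w, x, threshold=1.0):
    """Binary postsynaptic response: the cell fires (1) if the
    weighted input sum reaches the threshold, else stays silent (0)."""
    return 1 if sum(w_i * x_i for w_i, x_i in zip(w, x)) >= threshold else 0

# Two inputs repeatedly active together with the output have their
# weights strengthened; the silent input's weight is untouched.
w = [0.6, 0.6, 0.6]
for _ in range(5):
    x = [1, 1, 0]                 # inputs 0 and 1 co-active, input 2 silent
    y = forward(w, x)             # weighted sum 1.2 >= 1.0, so the cell fires
    w = hebbian_update(w, x, y)

# w[0] and w[1] each grow by 5 * 0.1 = 0.5; w[2] stays at 0.6.
```

Repeated co-activation thus leaves a lasting change in the weights, which is the computational analogue of the stabilized neural traces Hebb described.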

IBM machines are said to learn by aggregating information into a computational convolution or neural network architecture made up of weights stored in a parallel memory system.

Intel introduced Loihi, a cognitive chip that replicates the functions of neurons and synapses, in 2017.

Loihi is touted to be 1,000 times more energy efficient than existing neurosynaptic devices, with 128 clusters of 1,024 simulated neurons per chip, for a total of 131,072 simulated neurons.

Instead of relying on simulated neural networks and parallel processing, Loihi uses purpose-built neural pathways imprinted in silicon, with the overarching goal of developing artificial cognition.

These neuromorphic processors are likely to play a significant role in future portable and wire-free electronics, as well as automobiles.

Roger Schank, a cognitive scientist and artificial intelligence pioneer, is a vocal opponent of cognitive computing technology.

"Watson isn't thinking," he writes. "You can only reason if you have objectives, plans, and strategies to achieve them, as well as an understanding of other people's ideas and a knowledge of prior events to draw on. Having a point of view is also beneficial. How does Watson feel about ISIS, for example? Is this a stupid question? ISIS is a topic on which actual thinking creatures have an opinion" (Schank 2017).

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Computational Neuroscience; General and Narrow AI; Human Brain Project; SyNAPSE.

Further Reading

Hebb, Donald O. 1949. The Organization of Behavior. New York: Wiley.

Kelly, John, and Steve Hamm. 2013. Smart Machines: IBM’s Watson and the Era of Cognitive Computing. New York: Columbia University Press.

Modha, Dharmendra S., Rajagopal Ananthanarayanan, Steven K. Esser, Anthony Ndirango, Anthony J. Sherbondy, and Raghavendra Singh. 2011. “Cognitive Computing.” Communications of the ACM 54, no. 8 (August): 62–71.

Schank, Roger. 2017. “Cognitive Computing Is Not Cognitive at All.” FinTech Futures, May 25.

Vernon, David, Giorgio Metta, and Giulio Sandini. 2007. “A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents.” IEEE Transactions on Evolutionary Computation 11, no. 2: 151–80.

Artificial Intelligence - What Is The Blue Brain Project (BBP)?


The brain, with its 100 billion neurons, is one of the most complicated physical systems known.

It is an organ that takes constant effort to comprehend and interpret.

Similarly, digital reconstruction models of the brain and its activity need huge and long-term processing resources.

The Blue Brain Project, a Swiss brain research program supported by the École Polytechnique Fédérale de Lausanne (EPFL), was founded in 2005 by Henry Markram, who remains its director.

The purpose of the Blue Brain Project is to simulate numerous mammalian brains in order to "ultimately, explore the stages involved in the formation of biological intelligence" (Markram 2006, 153).

These simulations were originally powered by IBM's BlueGene/L, the world's most powerful supercomputer system from November 2004 to November 2007.

In 2009, the BlueGene/L was superseded by the BlueGene/P.

BlueGene/P was superseded by BlueGene/Q in 2014 due to a need for even greater processing capability.

In 2018, the BBP selected Hewlett-Packard to build a supercomputer (named Blue Brain 5) devoted solely to neuroscience simulation.

The use of supercomputer-based simulations has pushed neuroscience research away from the physical lab and into the virtual realm.

The Blue Brain Project's digital brain reconstructions enable studies to be carried out in an "in silico" environment ("in silico" being a Latinate pseudo-phrase referring to the modeling of biological systems on computing equipment), using a regulated research flow and methodology.

The possibility for supercomputers to turn the analog brain into a digital replica suggests a paradigm change in brain research.

One fundamental assumption is that the digital or synthetic duplicate will act similarly to a real or analog brain.

Michael Hines, John W. Moore, and Ted Carnevale created the software that runs on Blue Gene hardware: a simulation environment called NEURON that models individual neurons.

The Blue Brain Project may be regarded as a typical example of what was dubbed Big Science following World War II (1939–1945) because of the expanding budgets, pricey equipment, and numerous interdisciplinary scientists participating.


Furthermore, the scientific approach to the brain via simulation and digital imaging processes creates issues such as data management.

Blue Brain joined the Human Brain Project (HBP) consortium as an initial member and submitted a proposal to the European Commission's Future & Emerging Technologies (FET) Flagship Program.

The European Union approved the Blue Brain Project's proposal in 2013, and the Blue Brain Project is now a partner in a larger effort to investigate and undertake brain simulation.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

General and Narrow AI; Human Brain Project; SyNAPSE.

Further Reading

Djurfeldt, Mikael, Mikael Lundqvist, Christopher Johansson, Martin Rehn, Örjan Ekeberg, and Anders Lansner. 2008. “Brain-Scale Simulation of the Neocortex on the IBM Blue Gene/L Supercomputer.” IBM Journal of Research and Development 52, no. 1–2: 31–41.

Markram, Henry. 2006. “The Blue Brain Project.” Nature Reviews Neuroscience 7, no. 2: 153–60.

Markram, Henry, et al. 2015. “Reconstruction and Simulation of Neocortical Microcircuitry.” Cell 163, no. 2: 456–92.
