Quantum Computing - What Is Quantum Chromodynamics (QCD)?







Quantum Chromodynamics (QCD) is the physics theory that describes interactions mediated by the strong force, one of the four fundamental forces of nature. 


It was developed as an analogue of Quantum Electrodynamics (QED), which describes interactions due to the electromagnetic force, carried by photons. 



Quantum chromodynamics is the theory of the strong interaction between quarks, the basic particles that make up composite hadrons such as the proton, neutron, and pion, with the interaction mediated by gluons. 

QCD is a quantum field theory, specifically a non-abelian gauge theory with the symmetry group SU(3). 




The color attribute is the QCD equivalent of electric charge. 




Gluons are the theory's force carriers, exactly as photons are in quantum electrodynamics for the electromagnetic force. 

The theory is an essential part of the Standard Model of particle physics. 

Over the years, a considerable amount of experimental data supporting QCD has accumulated. 



How does the QCD scale work? 


In quantum chromodynamics, the quantity Λ (Lambda) is known as the QCD scale. 

When the energy-momentum involved in a process permits only the up, down, and strange quarks to be produced, but not the heavier quarks, the value of Λ is quoted for three "active" quark flavors. 

This corresponds to energies below about 1.275 GeV, the mass of the charm quark. 
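For reference, the QCD scale Λ appears in the one-loop expression for the running strong coupling; a standard textbook form (added here for illustration; it is not part of the original article) is:

```latex
\alpha_s(Q^2) = \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(Q^2/\Lambda^2\right)}
```

where n_f is the number of active quark flavors, so that below the charm threshold one takes n_f = 3.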



Who was the first to discover quantum chromodynamics? 



Harald Fritzsch, one of the founders of quantum chromodynamics, has recalled some of the background to the theory's development 40 years ago. 



What is the Quantum Electrodynamics (QED) Theory? 


Quantum electrodynamics (QED) is the quantum field theory of charged particles' interactions with electromagnetic fields. 

It mathematically defines not just light's interactions with matter, but also the interactions of charged particles with one another. 

Albert Einstein's theory of special relativity is integrated into each of QED's equations, making it a relativistic theory. 

Because atoms and molecules are mainly electromagnetic in nature, all of atomic physics may be thought of as a test bed for the theory. 

Experiments on the behavior of the subatomic particles known as muons have provided some of the most exacting tests of QED. 

The magnetic moment of this type of particle has been found to agree with theory to nine significant digits. 

Such precision makes QED one of the most successful physical theories ever established. 



Recent Developments In The Investigation Of QCD


A new collection of papers edited by Diogo Boito, Instituto de Fisica de Sao Carlos, Universidade de Sao Paulo, Brazil, and Irinel Caprini, Horia Hulubei National Institute for Physics and Nuclear Engineering, Bucharest, Romania, and published in The European Physical Journal Special Topics brings together recent developments in the investigation of QCD. 


In a special introduction to the collection, the editors explain that the divergence of perturbation expansions in the mathematical description of a system can have important physical consequences, because the strong force described by QCD (carried by gluons between quarks, the fundamental building blocks of matter) has a much stronger coupling than the electromagnetic force. 


The editors point out that, owing to developments in so-called higher-order loop computations, this has become more significant in recent high-precision QCD calculations. 


"The fact that perturbative expansions in QCD are divergent greatly influences the renormalization scheme and scale dependency of the truncated expansions," write Boito and Caprini, "which provides a major source of uncertainty in the theoretical predictions of the standard model."

"One of the primary problems for precision QCD to meet the needs of future accelerator facilities is to understand and tame this behavior.


In the special issue, a cadre of specialists in the subject discuss these and other themes pertaining to QCD, such as the mathematical theory of resurgence and the presence of infrared (IR) and ultraviolet (UV) renormalons. 

These issues are approached from a range of perspectives, including a more basic viewpoint or phenomenological approach, and in the context of related quantum field theories.



~ Jai Krishna Ponnappan


You may also want to read more about Quantum Computing here.



Further Reading


Diogo Boito et al, Renormalons and hyperasymptotics in QCD, 

The European Physical Journal Special Topics (2021).

DOI: 10.1140/epjs/s11734-021-00276-w


Quantum Computing Error Correction - Improving Encoding Redundancy Exponentially Drops Net Error Rate.




Researchers at QuTech, a joint venture between TU Delft and TNO, have achieved a quantum error correction milestone. 

They've combined high-fidelity operations on encoded quantum data with a scalable data stabilization approach. 

The results are published in the December edition of Nature Physics. 


Physical quantum bits, also known as qubits, are prone to errors. Quantum decoherence, crosstalk, and imperfect calibration are among the causes of these errors. 



Fortunately, quantum error correction theory suggests that it is possible to compute while simultaneously safeguarding quantum data from such defects. 


"An error corrected quantum computer will be distinguished from today's noisy intermediate-scale quantum (NISQ) processors by two characteristics," explains QuTech's Prof Leonardo DiCarlo. 


  • "To begin, it will handle quantum data stored in logical rather than physical qubits (each logical qubit consisting of many physical qubits). 

  • Second, quantum parity checks will be interspersed with computing stages to discover and fix defects in physical qubits, protecting the encoded information as it is processed." 
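The parity-check idea can be illustrated with a toy classical analogue. This sketch is illustrative only: the experiment itself used a distance-2 surface code on superconducting qubits, not the classical repetition code shown here.

```python
# Toy illustration of error detection via parity checks:
# a 3-bit repetition code detects (and here corrects) a single bit flip.

def encode(bit):
    """Encode one logical bit into three physical bits."""
    return [bit, bit, bit]

def parity_checks(bits):
    """Two parity checks, loosely analogous to stabilizer measurements:
    each compares a pair of neighboring bits without reading the data itself."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Use the check outcomes (the 'syndrome') to locate and fix a single flip."""
    syndrome = parity_checks(bits)
    flip_at = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome)
    if flip_at is not None:
        bits[flip_at] ^= 1
    return bits

word = encode(1)
word[2] ^= 1                           # a single physical-bit error
assert parity_checks(word) == (0, 1)   # the checks flag the error...
assert correct(word) == [1, 1, 1]      # ...and the syndrome pinpoints it
```

The key point mirrored from the quantum case is that the checks reveal where an error occurred without ever measuring the encoded value directly.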


According to theory, if the incidence of physical errors is below a threshold and the circuits for logical operations and stabilization are fault tolerant, the logical error rate can be exponentially suppressed. 

The essential principle is that when redundancy is increased and more qubits are used to encode data, the net error decreases. 
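This redundancy argument is often summarized by a heuristic scaling law from the quantum error correction literature; the constants below are illustrative assumptions, not values from the paper.

```python
# Heuristic scaling of the logical error rate for a distance-d code:
#     p_logical ~ A * (p / p_th) ** ((d + 1) // 2)
# A rule of thumb from the QEC literature; A and p_th are illustrative.

def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Estimated logical error rate for physical error rate p, code distance d."""
    return A * (p / p_th) ** ((d + 1) // 2)

p = 0.001  # physical error rate, a factor 10 below the assumed threshold
rates = [logical_error_rate(p, d) for d in (3, 5, 7)]
# Below threshold, every increase in distance multiplies the suppression:
# each step here shrinks the logical rate by another factor of p/p_th = 0.1.
assert rates[0] > rates[1] > rates[2]
```

The exponential payoff is exactly why the QuTech team's stated roadmap moves from 7 to 17 to 49 physical qubits: larger codes buy higher distance, and higher distance compounds the suppression.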


Researchers from TU Delft and TNO have recently achieved a crucial milestone toward this aim, producing a logical qubit made up of seven physical qubits (superconducting transmons). 


"We demonstrate that the encoded data may be used to perform all calculation operations. 

A important milestone in quantum error correction is the combination of high-fidelity logical operations with a scalable approach for repetitive stabilization " Prof. Barbara Terhal, also of QuTech, agrees. 


Jorge Marques, the first author and Ph.D. candidate, goes on to say, 


"Researchers have encoded and stabilized till now. We've now shown that we can also calculate. 

This is what a fault-tolerant computer must finally do: handle data while also protecting it from faults. 

We do three sorts of logical-qubit operations: initializing it in any state, changing it using gates, and measuring it. We demonstrate that all operations may be performed directly on encoded data. 

We find that the fault-tolerant variants perform better than the non-fault-tolerant variants in each category."



Fault-tolerant processes are essential for preventing physical-qubit faults from becoming logical-qubit errors. 

DiCarlo underlines the work's interdisciplinary nature: it is a collaboration between experimental physics, Barbara Terhal's theoretical physics group, and colleagues at TNO and external partners on electronics. 


IARPA and Intel Corporation are the primary funders of the project.


"Our ultimate aim is to demonstrate that as we improve encoding redundancy, the net error rate drops exponentially," DiCarlo says. 

"Our present concentration is on 17 physical qubits, and we'll move on to 49 in the near future. 

Our quantum computer's architecture was built from the ground up to allow for this scalability."


~ Jai Krishna Ponnappan


You may also want to read more about Quantum Computing here.



Further Reading:


J. F. Marques et al, Logical-qubit operations in an error-detecting surface code, Nature Physics (2021). DOI: 10.1038/s41567-021-01423-9


Abstract:

"Future fault-tolerant quantum computers will require storing and processing quantum data in logical qubits. 
Here we realize a suite of logical operations on a distance-2 surface code qubit built from seven physical qubits and stabilized using repeated error-detection cycles. 
Logical operations include initialization into arbitrary states, measurement in the cardinal bases of the Bloch sphere and a universal set of single-qubit gates. 
For each type of operation, we observe higher performance for fault-tolerant variants over non-fault-tolerant variants, and quantify the difference. 
In particular, we demonstrate process tomography of logical gates, using the notion of a logical Pauli transfer matrix. 
This integration of high-fidelity logical operations with a scalable scheme for repeated stabilization is a milestone on the road to quantum error correction with higher-distance superconducting surface codes."



Artificial Intelligence - AI And Effective Doctor-Patient Dialogue.


The majority of physicians use language that is too difficult for their patients to grasp, according to a computer study of hundreds of thousands of encrypted email conversations between doctors and patients. 


The research also revealed some of the tactics used by some clinicians to overcome communication obstacles. 

Experts on health literacy, as well as key health-care organizations, have recommended that physicians explain things in simple terms to avoid confusing patients with the least health literacy. 

However, the majority of physicians, according to the report, did not do so. 

Only around 40% of patients with inadequate health literacy had clinicians who spoke to them in basic terms. 


As physicians and patients rely more on encrypted messaging, a practice that has expanded rapidly during the COVID-19 pandemic, effective electronic communication is becoming more crucial. 


The research discovered that physicians who scored highest on evaluations of how well their patients understood their treatment tended to adjust their electronic communications to their patients' degree of health literacy, regardless of where they were on the spectrum. 

"We uncovered a mix of attitudes and abilities that is crucial to physician-patient communication," said Dean Schillinger, MD, UCSF professor of medicine and primary care physician, and co-first author of the article, which was published in Science Advances on Dec. 17, 2021. 

"We were able to demonstrate that this kind of 'precise communication' is critical to all patients' comprehension." 


The researchers used computer algorithms and machine learning to assess the language complexity of the physicians' statements as well as their patients' health literacy. 

The study sets a new bar for the scale of research on doctor-patient communication by using data from over 250,000 secure messages exchanged between diabetes patients and their doctors through Kaiser Permanente's secure email portal. 


Typically, research on doctor-patient communication is done with much smaller data sets and often does not use objective metrics. 


The algorithms determined whether patients were treated by physicians who spoke the same language as them. 


The researchers next looked at the general trends of individual clinicians to determine whether they tended to customize their communications to their patients' various degrees of health literacy. 

"Our computer algorithms extracted dozens of linguistic features beyond the literal meaning of words, looking at how words were arranged, their psychological and linguistic characteristics, what part of speech they were, how frequently they were used, and their emotional saliency," said Nicholas Duran, Ph.D., a cognitive scientist and associate professor at Arizona State University's School of Social and Behavioral Sciences and the paper's co-first author. 


Patients' ratings of how well they understood their physicians paralleled their feelings about their doctor's verbal and written interactions. 


Nonetheless, the evaluations were closely linked to the doctor's written communication approach. 

"Unlike a clinic encounter, where a doctor can use visual cues or verbal feedback from each patient to verify understanding," said Andrew Karter, Ph.D., senior research scientist at Kaiser Permanente Northern California Division of Research, "in an email exchange, a doctor can never be sure that their patient understood the written message." "Our results imply that physicians should modify their email communications to match the complexity of the language used by their patients."



Language As A Barrier To Minorities Receiving Treatment.


Many individuals, particularly members from minority groups, find going to the doctor to be daunting. 

People from minority groups are especially uncomfortable in hospital settings for a variety of reasons, including fear and previous negative experiences. 

When individuals do go to their physicians, there are a number of hurdles that might prevent patients and doctors from communicating effectively. 

One important factor is language. 

Patients' and physicians' differences in class, culture, education, and even personality may all affect how well a visit goes and how well patients and doctors understand each other. 

Patients, on the other hand, have a right to be understood. 

It should not be necessary to communicate in English in order to get quality medical treatment. 

According to the United States Census Bureau, 60 million people in the United States speak a language other than English at home. 

A doctor's appointment is a two-way dialogue. 


Doctors have an obligation to know their patients and communicate with them in a manner that they can comprehend. 


Patients have the right to ask questions and participate in the dialogue until they grasp what the doctor is saying. 

Many patient bills of rights expressly say that patients have the right to receive information in the manner that best suits their needs. 

You have the right to obtain information about your health in words you can comprehend, as well as the planned course of therapy and recovery prospects, according to Indiana University Health. 

You have the right to information about your illness that is suited to your age, language, and comprehension level. 

You have the right to request language interpretation and translation services from the hospital. 


Patients with visual, speech, hearing, or cognitive disabilities have the right to obtain information from their healthcare professional in a format that is appropriate for them. 


Despite these guarantees, individuals from various minority groups often have trouble obtaining healthcare, get subpar treatment, and have worse health outcomes than their peers. 

Patients with limited English proficiency, for example, were less likely to have documented informed consent and more likely to have medication problems and longer hospital stays. 

Much of my current work at the Regenstrief Institute and Indiana University School of Medicine focuses on understanding healthcare communication, whether it's between patients and doctors or between healthcare teams. 

I'm especially interested in learning how technology may be used to enhance communication and, as a result, people's health. 


Patient portals, which enable people to access their medical information and send encrypted communications to their doctors, are one technology that might help patients and doctors communicate better. 


For some patients, having access to their own medical records and the ability to compose questions in their own time, rather than having to rush through an in-person visit when they may not be able to process what a doctor is saying, can help them better understand and manage their health and get their health needs met. 

Despite this, we discovered that 52 percent of hospitals in the United States do not provide a patient portal in a language other than English, restricting the number of patients who may benefit from patient portals. 


Furthermore, accessing and navigating patient portals, regardless of language, requires a high level of literacy. 


This is a two-way conversation. You have the right to be heard. 

And to keep asking questions until you fully comprehend what your doctor says. 

Communication difficulties, whether intentional or not, may contribute to and worsen healthcare inequities. 



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



Further Reading


Dean Schillinger, Precision communication: Physicians' linguistic adaptation to patients' health literacy, Science Advances (2021). DOI: 10.1126/sciadv.abj2836www.science.org/doi/10.1126/sciadv.abj2836


Artificial Intelligence - What Is The Blue Brain Project (BBP)?

 



The brain, with its 100 billion neurons, is one of the most complicated physical systems known.

It is an organ that takes constant effort to comprehend and interpret.

Similarly, digital reconstruction models of the brain and its activity need huge and long-term processing resources.

The Blue Brain Project, a Swiss brain research program supported by the École Polytechnique Fédérale de Lausanne (EPFL), was founded in 2005. Henry Markram is the Blue Brain Project's founder and director.



The purpose of the Blue Brain Project is to simulate numerous mammalian brains in order to "ultimately, explore the stages involved in the formation of biological intelligence" (Markram 2006, 153).


These simulations were originally powered by IBM's BlueGene/L, the world's most powerful supercomputer system from November 2004 to November 2007.




In 2009, the BlueGene/L was superseded by the BlueGene/P.

BlueGene/P was superseded by BlueGene/Q in 2014 due to a need for even greater processing capability.

The BBP picked Hewlett-Packard to build a supercomputer (named Blue Brain 5) devoted only to neuroscience simulation in 2018.

The use of supercomputer-based simulations has pushed neuroscience research away from the physical lab and into the virtual realm.

The Blue Brain Project's development of digital brain reconstructions enables studies to be carried out in an "in silico" environment, a Latin pseudo-phrase referring to the modeling of biological systems on computing equipment, using a regulated research flow and methodology.

The possibility for supercomputers to turn the analog brain into a digital replica suggests a paradigm change in brain research.

One fundamental assumption is that the digital or synthetic duplicate will act similarly to a real or analog brain.

The software that runs on the Blue Gene hardware is NEURON, a simulation environment that models individual neurons; it was created by Michael Hines, John W. Moore, and Ted Carnevale.
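The flavor of such neuron modeling can be conveyed with a toy leaky integrate-and-fire model in plain Python. This is a deliberately simple sketch, far cruder than the detailed compartmental models NEURON actually simulates, and all parameter values below are illustrative.

```python
# A minimal "in silico" neuron: leaky integrate-and-fire, stepped with
# forward-Euler integration. Illustrative only; not NEURON itself.

def simulate_lif(i_input, t_ms=100.0, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Return spike times (ms) for a constant input current i_input (nA)."""
    v = v_rest
    spikes = []
    for step in range(int(t_ms / dt)):
        # Membrane potential decays toward rest, driven by the input current.
        v += (-(v - v_rest) + r_m * i_input) * (dt / tau)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset            # then reset the membrane potential
    return spikes

assert simulate_lif(0.5) == []     # subthreshold drive: no spikes
assert len(simulate_lif(3.0)) > 0  # strong drive: repetitive firing
```

Scaling this single-equation caricature up to morphologically detailed, synaptically coupled cells is precisely why the project needs supercomputers.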


The Blue Brain Project may be regarded as a typical example of what was dubbed Big Science after World War II (1939–1945), given its expanding budgets, expensive equipment, and the many interdisciplinary scientists involved.


 


Furthermore, the scientific approach to the brain via simulation and digital imaging processes creates issues such as data management.

Blue Brain joined the Human Brain Project (HBP) consortium as an initial member and submitted a proposal to the European Commission's Future & Emerging Technologies (FET) Flagship Program.

The European Union approved the Blue Brain Project's proposal in 2013, and the Blue Brain Project is now a partner in a larger effort to investigate and undertake brain simulation.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

General and Narrow AI; Human Brain Project; SyNAPSE.


Further Reading

Djurfeldt, Mikael, Mikael Lundqvist, Christopher Johansson, Martin Rehn, Örjan Ekeberg, Anders Lansner. 2008. “Brain-Scale Simulation of the Neocortex on the IBM Blue Gene/L Supercomputer.” IBM Journal of Research and Development 52, no. 1–2: 31–41.

Markram, Henry. 2006. “The Blue Brain Project.” Nature Reviews Neuroscience 7, no. 2: 153–60.

Markram, Henry, et al. 2015. “Reconstruction and Simulation of Neocortical Microcircuitry.” Cell 163, no. 2: 456–92.



Artificial Intelligence - Who Is Nick Bostrom?

 




Nick Bostrom (1973–) is an Oxford University philosopher with a multidisciplinary academic background in physics and computational neuroscience.

He is a cofounder of the World Transhumanist Association and a founding director of the Future of Humanity Institute.

Anthropic Bias (2002), Human Enhancement (2009), Superintelligence: Paths, Dangers, Strategies (2014), and Global Catastrophic Risks (2008) are among the works he has authored or edited.

Bostrom was born in the Swedish city of Helsingborg in 1973.

Despite his dislike of formal education, he enjoyed studying.

Science, literature, art, and anthropology were among his favorite interests.

Bostrom earned bachelor's degrees in philosophy, mathematics, logic, and artificial intelligence from the University of Gothenburg, as well as master's degrees in philosophy and physics from Stockholm University and computational neuroscience from King's College London.

He earned his PhD in philosophy from the London School of Economics.

Bostrom is a regular consultant or contributor to the European Commission, the United States President's Council on Bioethics, the CIA, and Cambridge University's Centre for the Study of Existential Risk.

Bostrom is well-known for his contributions to a variety of subjects, and he has proposed or written extensively on a number of well-known philosophical arguments and conjectures, including the simulation hypothesis, existential risk, the future of machine intelligence, and transhumanism.

Bostrom's interest in the future of technology and his findings on the mathematics of anthropic bias come together in the so-called "Simulation Argument," which consists of three propositions.

The first proposition is that almost all civilizations that reach human levels of knowledge perish before achieving technological maturity.

The second proposition is that technologically mature civilizations have little or no interest in running "ancestor simulations" of sentient beings.

The "simulation hypothesis" proposes that mankind is now living in a simulation.

He claims that at least one of the three propositions must be true.

If the first hypothesis is false, some proportion of civilizations at the current level of human society will ultimately acquire technological maturity.

If the second premise is incorrect, certain civilizations may be interested in continuing to perform ancestor simulations.

These civilizations' researchers may be performing massive numbers of these simulations.

There would be many times as many simulated humans living in simulated worlds as there would be genuine people living in real universes in that situation.

As a result, mankind is most likely to exist in one of the simulated worlds.
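The counting step at the heart of the argument can be made concrete with a small calculation; the numbers below are purely hypothetical.

```python
# The quantitative core of the Simulation Argument: if ancestor simulations
# are run at all, simulated observers can vastly outnumber real ones.

def fraction_simulated(real_civilizations, sims_per_civilization,
                       observers_per_world):
    """Fraction of all observers who live inside a simulation."""
    real = real_civilizations * observers_per_world
    simulated = real_civilizations * sims_per_civilization * observers_per_world
    return simulated / (real + simulated)

# If each technologically mature civilization ran 1,000 ancestor simulations,
# a randomly chosen observer would be overwhelmingly likely to be simulated.
assert fraction_simulated(10, 1000, 10**9) > 0.999
```

Under the self-sampling reasoning Bostrom uses, one's credence in being simulated should track this fraction, which is why rejecting the first two propositions pushes the third toward near-certainty.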

Thus, if the first two propositions are false, the third must be true.

It's even feasible, according to Bostrom, for a civilization inside a simulation to conduct its own simulations.

In the form of an endless regress, simulations may be living within simulated universes, inside their own simulated worlds.

It's also feasible that all civilizations would vanish, maybe as a result of the discovery of a new technology, posing an existential threat beyond human control.

Bostrom's argument implies that humanity is not blind to the truth of the external world, an argument that can be traced back to Plato's conviction in the existence of universals (the "Forms") and the capacity of human senses to see only specific examples of universals.

His thesis also implies that computers' ability to imitate things will continue to improve in terms of power and sophistication.

Computer games and literature, according to Bostrom, are modern instances of natural human fascination with synthetic reality.

The Simulation Argument is often confused with the narrower claim that mankind lives in a simulation, which is only the third proposition.

Humans, according to Bostrom, have a less than 50% probability of living in some kind of artificial matrix.

He also argues that if mankind lived in one, society would be unlikely to notice "glitches" revealing the simulation's existence, since its creators would have total control over its operation.

The simulation's creators could, on the other hand, choose to inform people that they are living in a simulation.

Existential hazards are those that pose a serious threat to humanity's existence.

According to Bostrom, humans pose a bigger existential threat than natural dangers (e.g., asteroids, earthquakes, and epidemic disease).

He argues that artificial hazards like synthetic biology, molecular nanotechnology, and artificial intelligence are considerably more threatening.

Bostrom divides dangers into three categories: local, global, and existential.

Local dangers might include the theft of a valuable item of art or an automobile accident.

A military dictator's downfall or the explosion of a supervolcano are both potential global threats.

The extent and intensity of existential hazards vary.

They are cross-generational and long-lasting.

Because of the number of lives that might be spared, he believes that reducing existential risk is the most important thing human beings can do; fighting existential risk is also one of humanity's most neglected undertakings.

He also distinguishes between several types of existential peril.

These include human extinction (the extinction of a species before it reaches technological maturity); permanent stagnation (the plateauing of human technological achievement); flawed realization (humanity's failure to use advanced technology for an ultimately worthwhile purpose); and subsequent ruination (a society reaches technological maturity, but something then goes wrong).

While mankind has not yet harnessed human ingenuity to create a technology that releases existentially destructive power, Bostrom believes it is possible that it may in the future.

Human civilization has yet to invent a technology with implications so horrific that mankind would have to collectively forgo it.

The objective would be to go on a technical path that is safe, includes global collaboration, and is long-term.

To argue for the prospect of machine superintelligence, Bostrom employs the metaphor of altered brain complexity in the development of humans from apes, which took just a few hundred thousand generations.

Artificial systems that use machine learning (that is, algorithms that learn) are no longer constrained to a single area.

He also points out that computers process information at a far faster pace than human neurons.

Humans will eventually rely on super intelligent robots in the same manner that chimps presently rely on humans for their ultimate survival, according to Bostrom, even in the wild.

By establishing a powerful optimizing process with a poorly stated purpose, super intelligent computers have the potential to cause devastation, or possibly an extinction-level catastrophe.

A superintelligence might even anticipate human countermeasures while subverting humanity to its programmed purpose.

Bostrom recognizes that there are certain algorithmic techniques used by humans that computer scientists do not yet understand.

As they engage in machine learning, he believes it is critical for artificial intelligences to understand human values.

On this point, Bostrom is drawing inspiration from artificial intelligence theorist Eliezer Yudkowsky's concept of "coherent extrapolated volition"—also known as "friendly AI"—which is akin to what is currently accessible in human good will, civil society, and institutions.

A superintelligence should seek to provide pleasure and joy to all of humanity, and it may even make difficult choices that benefit the whole community rather than the individual.

In 2015, Bostrom, along with Stephen Hawking, Elon Musk, Max Tegmark, and many other top AI researchers, published "An Open Letter on Artificial Intelligence" on the Future of Life Institute website, calling for artificial intelligence research that maximizes the benefits to humanity while minimizing "potential pitfalls." 

Transhumanism is a philosophy or belief in the technological extension and augmentation of the human species' physical, sensory, and cognitive capacities.

In 1998, Bostrom and colleague philosopher David Pearce founded the World Transhumanist Association, now known as Humanity+, to address some of the societal hurdles to the adoption and use of new transhumanist technologies by people of all socioeconomic strata.

Bostrom has said that he is not interested in defending technology, but rather in using modern technologies to address real-world problems and improve people's lives.

Bostrom is particularly interested in the ethical implications of human enhancement and the long-term consequences of major technological changes in human nature.

He claims that transhumanist ideas may be found throughout history and throughout cultures, as shown by ancient quests such as the Gilgamesh Epic and historical hunts for the Fountain of Youth and the Elixir of Immortality.

The transhumanist idea, then, may be regarded as fairly ancient, with modern expressions in disciplines like artificial intelligence and gene editing.

As an activist, Bostrom takes a cautious stance toward the emergence of powerful transhumanist tools.

He expects that politicians may act with foresight and command the sequencing of technical breakthroughs in order to decrease the danger of future applications and human extinction.

He believes that everyone should have the chance to become transhuman or posthuman (have capacities beyond human nature and intelligence).

For Bostrom, success would require a worldwide commitment to global security and continued technological progress, as well as widespread access to the benefits of technologies (cryonics, mind uploading, anti-aging drugs, life extension regimens), which hold the most promise for transhumanist change in our lifetime.

Bostrom, however cautious, rejects conventional humility, pointing out that humans have a long history of dealing with potentially catastrophic dangers.

In such things, he is a strong supporter of "individual choice," as well as "morphological freedom," or the ability to transform or reengineer one's body to fulfill specific wishes and requirements.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.




See also: 

Superintelligence; Technological Singularity.


Further Reading

Bostrom, Nick. 2003. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53, no. 211: 243–55.

Bostrom, Nick. 2005. “A History of Transhumanist Thought.” Journal of Evolution and Technology 14, no. 1: 1–25.

Bostrom, Nick, ed. 2008. Global Catastrophic Risks. Oxford, UK: Oxford University Press.

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.

Savulescu, Julian, and Nick Bostrom, eds. 2009. Human Enhancement. Oxford, UK: Oxford University Press.

Artificial Intelligence - Who Is Rodney Brooks?

 


Rodney Brooks (1954–) is a business and policy adviser, as well as a computer science researcher.

He is a recognized expert in the fields of computer vision, artificial intelligence, robotics, and artificial life.

Brooks is well-known for his work in artificial intelligence and behavior-based robotics.

The iRobot Roomba autonomous robotic vacuum cleaners, made by the company he cofounded, are among the most widely used home robots in America.

Brooks is well-known for his advocacy of a bottom-up approach to computer science and robotics, an insight he arrived at during an extended visit to his wife's relatives in Thailand.

Brooks claims that situatedness, embodiment, and perception are just as crucial as cognition in describing the dynamic actions of intelligent beings.

This method is currently known as behavior-based artificial intelligence or action-based robotics.

Brooks' approach to intelligence, which avoids explicitly planned reasoning, contrasts with the symbolic reasoning and representation methods that dominated artificial intelligence research during the field's first few decades.

Much of the early advances in robotics and artificial intelligence, according to Brooks, was based on the formal framework and logical operators of Alan Turing and John von Neumann's universal computer architecture.

He argued that these artificial systems had strayed far from the biological systems they were supposed to reflect.

Low-speed, massively parallel processing and adaptive interaction with their surroundings were essential for living creatures.

These were not, in his opinion, elements of traditional computer design, but rather components of what he called "subsumption architecture," a term Brooks coined in the mid-1980s.

According to Brooks, behavior-based robots are placed in real-world contexts and learn effective behaviors from them.

They need to be embodied in order to be able to interact with the environment and get instant feedback from their sensory inputs.

Specific conditions, signal changes, and real-time physical interactions are usually the source of intelligence.

Intelligence may be difficult to define functionally, since it emerges from a variety of direct and indirect interactions among a robot's components and between the robot and its environment.

As a professor at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory, Brooks developed numerous notable mobile robots based on the subsumption architecture.

Allen, the first of these behavior-based robots, was outfitted with sonar range and motion sensors.

Three tiers of control were present on the robot.

The first, most rudimentary layer gave it the capacity to avoid static and dynamic obstacles.

The second implemented a random-walk algorithm that let the robot change course occasionally.

The third behavioral layer monitored distant locations that might serve as goals while the other two control levels were inactive.
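Allen's layered control can be illustrated with a minimal, hypothetical sketch of a subsumption-style arbiter. Nothing here is Brooks' actual code; the layer functions, thresholds, and command strings are invented for illustration, and real subsumption architectures wire layers together as asynchronous finite-state machines rather than a simple priority loop.

```python
import random

# Hypothetical sketch of Allen's three behavior layers. Each layer
# proposes a motor command or None; the arbiter picks the highest-
# priority proposal. Obstacle avoidance always wins, and an active
# goal-seeking layer suppresses wandering, echoing how Allen's third
# layer operated while the other levels were quiet.

def avoid_obstacles(sonar_cm):
    """Layer 1: steer away from anything the sonar says is close."""
    return "turn_away" if sonar_cm < 30 else None

def wander(step):
    """Layer 2: occasionally pick a new random heading."""
    if step % 10 == 0:
        return random.choice(["turn_left", "turn_right"])
    return None

def seek_goal(goal_visible):
    """Layer 3: head toward a distant goal when one is sensed."""
    return "head_to_goal" if goal_visible else None

def arbitrate(sonar_cm, step, goal_visible):
    for proposal in (avoid_obstacles(sonar_cm),
                     seek_goal(goal_visible),
                     wander(step)):
        if proposal is not None:
            return proposal
    return "cruise"  # default: keep moving forward
```

For example, `arbitrate(10, 3, True)` yields `"turn_away"`, because the avoidance layer overrides goal seeking whenever the sonar reports a nearby obstacle.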

Another robot, Herbert, used a distributed array of 8-bit microprocessors and 30 infrared proximity sensors to avoid obstacles, navigate low walls, and collect empty drink cans scattered across several offices.

Genghis was a six-legged robot that could walk across rugged terrain and had four onboard microprocessors, 22 sensors, and 12 servo motors.

Genghis was able to stand, balance, and maintain itself, as well as climb stairs and follow humans.

Brooks began imagining scenarios in which behavior-based robots might assist in exploring the surfaces of other planets, with the support of Anita Flynn, a research scientist in the MIT Mobile Robotics Group.

The two roboticists argued in their 1989 essay "Fast, Cheap, and Out of Control," published by the British Interplanetary Society, that space organizations like the Jet Propulsion Laboratory should reconsider plans for expensive, large, and slow-moving mission rovers and instead consider using larger sets of small mission rovers to save money and avoid risk.

Brooks and Flynn came to the conclusion that their autonomous robot technology could be constructed and tested swiftly by space agencies, and that it could serve dependably on other planets even when it was out of human control.

When the Sojourner rover arrived on Mars in 1997, it had some behavior-based autonomous robot capabilities, despite its size and unique design.

Brooks and a new Humanoid Robotics Group at the MIT Artificial Intelligence Laboratory started working on Cog, a humanoid robot, in the 1990s.

The name "Cog" had two meanings: it referred both to the teeth on gears and to the word "cognitive." Cog had a number of objectives, many of them aimed at encouraging social communication between the robot and a human.

As built, Cog had a human-like visage and considerable motor mobility in its head, trunk, arms, and legs.

Cog was equipped with sensors that allowed it to see, hear, touch, and speak.

Cynthia Breazeal, the group researcher who designed Cog's mechanics and control system, used the lessons learned from human interaction with the robot to create Kismet, a new robot in the lab.

Kismet is an affective robot that is capable of recognizing, interpreting, and replicating human emotions.

The meeting of Cog and Kismet was a watershed moment in the history of artificial emotional intelligence.

Rodney Brooks, cofounder and chief technology officer of iRobot Corporation, has sought commercial and military applications of his robotics research in recent decades.

PackBot, a robot commonly used to detect and defuse improvised explosive devices in Iraq and Afghanistan, was developed with a grant from the Defense Advanced Research Projects Agency (DARPA) in 1998.

PackBot was used at the site of the World Trade Center after the terrorist attacks of September 11, 2001, and later to examine damage to the Fukushima Daiichi nuclear power facility in Japan following the 2011 earthquake and tsunami.

Brooks and others at iRobot created a toy robot that was sold by Hasbro in 2000.

The end product, My Real Baby, is a realistic doll that can cry, fuss, sleep, laugh, and show hunger.

The Roomba cleaning robot was created by the iRobot Corporation.

Roomba is a disc-shaped vacuum cleaner featuring roller wheels, brushes, filters, and a squeegee vacuum that was released in 2002.

The Roomba, like other Brooks behavior-based robots, uses sensors to detect obstacles and avoid dangers such as falling down stairs.

For self-charging and room mapping, newer versions use infrared beams and photocell sensors.

By 2019, iRobot had sold more than 25 million robots worldwide.

Brooks is also Rethink Robotics' cofounder and chief technology officer.

Rethink Robotics, founded in 2008 as Heartland Robotics, creates low-cost industrial robots.

Baxter, Rethink's first robot, can do basic repetitive activities including loading, unloading, assembling, and sorting.

Baxter is topped by a computer screen that displays an animated human face.

Baxter has sensors and cameras built in that allow it to detect and avoid collisions when people are nearby, a critical safety feature.

Baxter may be used in ordinary industrial settings without the need for a safety cage.

Unskilled personnel may rapidly train the robot simply by moving its arms through the desired motions.

Baxter remembers these gestures and adjusts them to other jobs.

The controls on its arms may be used to make fine motions.
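The train-by-demonstration workflow described above can be sketched as a simple record-and-replay loop. The class and method names here are hypothetical, not Rethink's actual API; a real collaborative robot would record timestamped joint trajectories and interpolate smoothly between them.

```python
# Hypothetical record-and-replay sketch of kinesthetic teaching: an
# operator physically guides the arm, joint poses are stored as
# waypoints, and the robot later replays them for the same task.

class TeachableArm:
    def __init__(self):
        self.waypoints = []

    def record(self, joint_angles):
        # Called repeatedly while the operator moves the arm.
        self.waypoints.append(tuple(joint_angles))

    def replay(self, move_to):
        # Revisit each remembered pose via the supplied motion command.
        for pose in self.waypoints:
            move_to(pose)

# Demonstration: three guided poses, then a replay that logs each move.
arm = TeachableArm()
for pose in [(0.0, 0.5, 1.0), (0.2, 0.4, 0.9), (0.4, 0.3, 0.8)]:
    arm.record(pose)

visited = []
arm.replay(visited.append)  # visited now mirrors the demonstration
```

Adapting a recorded gesture to a new job, as Baxter does, would add a transformation step between recording and replay, which is omitted here for brevity.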

Sawyer is a smaller counterpart to Rethink's Baxter collaborative robot, marketed for performing dangerous or tedious industrial jobs in confined spaces.

Brooks has often said that scientists are still unable to solve the difficult problems of consciousness.

He claims that artificial intelligence and artificial life researchers have overlooked some essential aspect of living systems, one that keeps the gap between the nonliving and living worlds wide.

This remains true even though all the living parts of our world are made of nonliving atoms.

Brooks speculates that some of the AI and ALife researchers' parameters are incorrect, or that current models are too simple.

It's also possible that researchers are still lacking in raw computing power.

However, Brooks thinks there may be something about biological life and subjective experience, some component or property, that is currently undetectable or hidden from scientific view.

Brooks attended Flinders University in Adelaide, South Australia, to study pure mathematics.

At Stanford University, he earned his PhD under the supervision of John McCarthy, an American computer scientist and cognitive scientist.

His dissertation, Model-Based Computer Vision, was later expanded and published as a book (1984).

From 1997 until 2007, he was Director of the MIT Artificial Intelligence Laboratory, which was renamed the Computer Science & Artificial Intelligence Laboratory (CSAIL) in 2003.

Brooks has received various distinctions and prizes for his contributions to artificial intelligence and robotics.

He is a member of both the American Academy of Arts and Sciences and the Association for Computing Machinery.

Brooks has won the IEEE Robotics and Automation Award as well as the Joseph F. Engelberger Robotics Award for Leadership.

He is now the vice chairman of the Toyota Research Institute's advisory board.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Embodiment, AI and; Tilden, Mark.



Further Reading

Brooks, Rodney A. 1984. Model-Based Computer Vision. Ann Arbor, MI: UMI Research Press.

Brooks, Rodney A. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems 6, no. 1–2 (June): 3–15.

Brooks, Rodney A. 1991. “Intelligence without Reason.” AI Memo No. 1293. Cambridge, MA: MIT Artificial Intelligence Laboratory.

Brooks, Rodney A. 1999. Cambrian Intelligence: The Early History of the New AI. Cambridge, MA: MIT Press.

Brooks, Rodney A. 2002. Flesh and Machines: How Robots Will Change Us. New York: Pantheon.

Brooks, Rodney A., and Anita M. Flynn. 1989. “Fast, Cheap, and Out of Control.” Journal of the British Interplanetary Society 42 (December): 478–85.

Artificial Intelligence - Who Is Erik Brynjolfsson?

 



The Massachusetts Institute of Technology's Initiative on the Digital Economy is directed by Erik Brynjolfsson (1962–).

He is also a Schussel Family Professor at the MIT Sloan School and a Research Associate at the National Bureau of Economic Research (NBER).

Brynjolfsson's research and writing focus on the relationship between information technology and productivity, labor, and innovation.

Brynjolfsson's work has long been at the center of debates about how technology affects economic relationships.

His early research focused on the link between information technology and productivity, particularly the "productivity paradox." Brynjolfsson reported "large negative associations between economywide productivity and information worker productivity" (Brynjolfsson 1993, 67).

He proposed that the paradox may be explained by effect mismeasurement, a lag between initial cost and final benefits, private benefits accumulating at the expense of the collective benefit, or blatant mismanagement.

However, multiple empirical studies by Brynjolfsson and associates demonstrate that investing in information technology has increased productivity significantly—at least since 1991.

Information technology, notably electronic communication networks, enhances multitasking, according to Brynjolfsson.

Multitasking, in turn, boosts productivity, knowledge network growth, and worker performance.

More than a simple causal connection, the relationship between IT and productivity constitutes a "virtuous cycle": as performance improves, users are motivated to embrace knowledge networks that boost productivity and operational performance.

In the era of artificial intelligence, the productivity paradox has resurfaced as a topic of discussion.

The digital economy faces a new set of difficulties as the battle between human and artificial labor heats up.

Brynjolfsson discusses the phenomenon of frictionless commerce, a trait brought about by online activities such as rapid price comparison by smart shopbots.

Retailers like Amazon have redesigned their supply chains and distribution tactics to reflect how online marketplaces function in the age of AI.

This restructuring of internet commerce has changed the way we think about efficiency.

Price and quality comparisons may be made by covert human consumers in the brick-and-mortar economy.

This procedure may be time-consuming and expensive.

Consumers (and web-scraping bots) may now effortlessly navigate from one website to another, thereby lowering the cost of obtaining various types of internet information to zero.
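The near-zero search cost described here amounts, at bottom, to taking a minimum over quotes gathered from many sellers. The retailers and prices below are invented for illustration; a real shopbot would first fetch and parse live product pages before comparing.

```python
# Toy shopbot: given price quotes (already scraped) from several
# fictional retailers, return the cheapest offer.

def cheapest(quotes):
    """Return the (seller, price) pair with the lowest price."""
    return min(quotes.items(), key=lambda kv: kv[1])

quotes = {"store_a": 19.99, "store_b": 17.49, "store_c": 18.25}
seller, price = cheapest(quotes)  # -> ("store_b", 17.49)
```

The point of the example is economic rather than algorithmic: once the quotes are machine-readable, comparison is essentially free, which is what drives the cost of information gathering toward zero.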

Brynjolfsson and coauthor Andrew McAfee discuss the impact of technology on employment, the economy, and productivity growth in their best-selling book Race Against the Machine (2011).

They're particularly interested in the process of creative destruction, which economist Joseph Schumpeter popularized in his book Capitalism, Socialism, and Democracy (1942).

While technology is a beneficial asset for the economy as a whole, Brynjolfsson and McAfee illustrate that it does not always benefit everyone in society.

In reality, the advantages of technical advancements may be uneven, benefiting small groups of innovators and investors who control digital marketplaces.

The key conclusion reached by Brynjolfsson and McAfee is that humans should collaborate with machines rather than compete with them.

When people learn skills to participate in the new age of smart machines, innovation and human capital improve.

Brynjolfsson and McAfee expanded on this topic in The Second Machine Age (2014), evaluating the significance of data in the digital economy and the growing prominence of artificial intelligence.

Data-driven intelligent devices, according to the authors, are a key component of online business.

Artificial intelligence brings us a world of new possibilities in terms of services and features.

They suggest that these changes have an impact on productivity indices as well as our understanding of what it means to participate in capitalist business.

Brynjolfsson and McAfee both have a lot to say on the disruptive effects of a widening gap between internet billionaires and regular people.

The authors are particularly concerned about the effects of artificial intelligence and smart robots on employment.

Brynjolfsson and McAfee reaffirm in Second Machine Age that there should be no race against technology, but rather purposeful cohabitation with it in order to develop a better global economy and society.

Brynjolfsson and McAfee argue in Machine, Platform, Crowd (2017) that the human mind will have to learn to cohabit with clever computers in the future.

The big difficulty is figuring out how society will utilize technology and how to nurture the beneficial features of data-driven innovation and artificial intelligence while weeding out the undesirable aspects.

Brynjolfsson and McAfee envision a future in which labor is not only suppressed by efficient machines and the disruptive effects of platforms, but also in which new matchmaking businesses govern intricate economic structures and large enthusiastic online crowds, and vast amounts of human knowledge and expertise are used to strengthen supply chains and economic processes.

Machines, platforms, and the crowd, according to Brynjolfsson and McAfee, may be employed in a variety of ways, either to concentrate power or to disperse decision-making and wealth.

They come to the conclusion that individuals do not have to be passively reliant on previous technological trends; instead, they may modify technology to make it more productive and socially good.

Brynjolfsson's current research interests include productivity, inequality, labor, and welfare, and he continues to work on artificial intelligence and the digital economy.

He graduated from Harvard University with degrees in Applied Mathematics and Decision Sciences.

In 1991, he received his doctorate in Managerial Economics from the MIT Sloan School.

"Information Technology and the Reorganization of Work: Theory and Evidence," was the title of his dissertation.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Ford, Martin; Workplace Automation.



Further Reading

Aral, Sinan, Erik Brynjolfsson, and Marshall Van Alstyne. 2012. “Information, Technology, and Information Worker Productivity.” Information Systems Research 23, no. 3, pt. 2 (September): 849–67.

Brynjolfsson, Erik. 1993. “The Productivity Paradox of Information Technology.” Communications of the ACM 36, no. 12 (December): 67–77.

Brynjolfsson, Erik, Yu Hu, and Duncan Simester. 2011. “Goodbye Pareto Principle, Hello Long Tail: The Effect of Search Costs on the Concentration of Product Sales.” Management Science 57, no. 8 (August): 1373–86.

Brynjolfsson, Erik, and Andrew McAfee. 2012. Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Lexington, MA: Digital Frontier Press.

Brynjolfsson, Erik, and Andrew McAfee. 2016. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton.

Brynjolfsson, Erik, and Adam Saunders. 2013. Wired for Innovation: How Information Technology Is Reshaping the Economy. Cambridge, MA: MIT Press.

McAfee, Andrew, and Erik Brynjolfsson. 2017. Machine, Platform, Crowd: Harnessing Our Digital Future. New York: W. W. Norton.


What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...