
Artificial Intelligence - Who Is Rodney Brooks?

 


Rodney Brooks (1954–) is a business and policy adviser, as well as a computer science researcher.

He is a recognized expert in the fields of computer vision, artificial intelligence, robotics, and artificial life.

Brooks is well-known for his work in artificial intelligence and behavior-based robotics.

His iRobot Roomba autonomous robotic vacuum cleaners are among the most widely used home robots in America.

Brooks is well-known for championing a bottom-up approach to computer science and robotics, an idea he arrived at during a long stay with his wife's relatives in Thailand.

Brooks claims that situatedness, embodiment, and perception are just as crucial as cognition in describing the dynamic actions of intelligent beings.

This method is currently known as behavior-based artificial intelligence or action-based robotics.

Brooks' approach to intelligence, which avoids explicitly planned reasoning, contrasts with the symbolic reasoning and representation method that dominated artificial intelligence research during the field's first few decades.

Many of the early advances in robotics and artificial intelligence, according to Brooks, were based on the formal framework and logical operators of Alan Turing and John von Neumann's universal computer architecture.

He argued that these artificial systems had strayed far from the biological systems they were supposed to reflect.

Low-speed, massively parallel processing and adaptive interaction with their surroundings were essential for living creatures.

These were not, in his opinion, elements of traditional computer design, but rather components of what Brooks, in the mid-1980s, termed the "subsumption architecture."

According to Brooks, behavior-based robots are placed in real-world contexts and learn effective behaviors from them.

They need to be embodied in order to be able to interact with the environment and get instant feedback from their sensory inputs.

Specific conditions, signal changes, and real-time physical interactions are usually the source of intelligence.

Intelligence may be difficult to define functionally since it comes through a variety of direct and indirect interactions between different robot components and the environment.

As a professor at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory, Brooks developed numerous notable mobile robots based on the subsumption architecture.

Allen, the first of these behavior-based robots, was outfitted with sonar range and motion sensors.

The robot had three tiers of control.

The first, most basic layer gave it the capacity to avoid both static and moving obstacles.

The second implemented a random-walk algorithm that let the robot change course from time to time.

The third behavioral layer monitored distant locations that might serve as goals, suppressing the two lower control layers when pursuing them.
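
As a rough illustration of how such layering works, the following Python sketch mimics Allen's three layers. The sensor fields, thresholds, and behavior names are invented for illustration; this is not Brooks's actual implementation.

```python
import random

def avoid_layer(sonar_readings):
    """Layer 1: steer away from any obstacle closer than a threshold."""
    if min(sonar_readings.values()) < 0.5:  # meters; invented threshold
        return "turn_away_from_obstacle"
    return None  # no output; defer to the other layers

def wander_layer(step_count):
    """Layer 2: occasionally pick a new random heading."""
    if step_count % 20 == 0:  # change course every 20 control steps
        return "set_heading_%d" % random.randint(0, 359)
    return None

def goal_layer(goal_visible):
    """Layer 3: head toward a distant goal when one is detected."""
    return "head_to_goal" if goal_visible else None

def control_step(sonar_readings, step_count, goal_visible):
    # Obstacle avoidance always takes precedence; the goal layer
    # suppresses wandering whenever a goal is in view.
    return (avoid_layer(sonar_readings)
            or goal_layer(goal_visible)
            or wander_layer(step_count)
            or "cruise")

print(control_step({"front": 2.0, "left": 1.5}, step_count=20, goal_visible=False))
```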

Another robot, Herbert, used a distributed array of 8-bit microprocessors and 30 infrared proximity sensors to avoid obstacles, navigate low walls, and collect empty soda cans scattered around office workspaces.

Genghis was a six-legged robot that could walk across rugged terrain and had four onboard microprocessors, 22 sensors, and 12 servo motors.

Genghis was able to stand, balance, and maintain itself, as well as climb stairs and follow humans.

Brooks began imagining scenarios in which behavior-based robots might assist in the exploration of the surfaces of other planets, with the support of Anita Flynn, an MIT Mobile Robotics Group research scientist.

The two roboticists argued in their 1989 essay "Fast, Cheap, and Out of Control," published in the Journal of the British Interplanetary Society, that space organizations like the Jet Propulsion Laboratory should reconsider plans for expensive, large, slow-moving mission rovers and instead deploy larger numbers of small rovers to save money and reduce risk.

Brooks and Flynn came to the conclusion that their autonomous robot technology could be constructed and tested swiftly by space agencies, and that it could serve dependably on other planets even when it was out of human control.

When the Sojourner rover arrived on Mars in 1997, it had some behavior-based autonomous robot capabilities, despite its size and unique design.

Brooks and a new Humanoid Robotics Group at the MIT Artificial Intelligence Laboratory started working on Cog, a humanoid robot, in the 1990s.

The term "cog" had two meanings: it referred to the teeth on gears as well as the word "cognitive." Cog had a number of objectives, many of which were aimed at encouraging social communication between the robot and a human.

As built, Cog had a human-like face and extensive motor mobility in its head, trunk, arms, and legs.

Cog was equipped with sensors that allowed it to see, hear, touch, and speak.

Cynthia Breazeal, the researcher in the group who designed Cog's mechanics and control system, used lessons learned from human interaction with the robot to create Kismet, a new robot in the lab.

Kismet is an affective robot that is capable of recognizing, interpreting, and replicating human emotions.

The meeting of Cog and Kismet was a watershed moment in the history of artificial emotional intelligence.

Rodney Brooks, cofounder and chief technology officer of iRobot Corporation, has sought commercial and military applications of his robotics research in recent decades.

PackBot, a robot commonly used to detect and defuse improvised explosive devices in Iraq and Afghanistan, was developed with a grant from the Defense Advanced Research Projects Agency (DARPA) in 1998.

PackBot was used at the site of the World Trade Center after the terrorist attacks of September 11, 2001, and later to examine damage at Japan's Fukushima Daiichi nuclear power facility following the 2011 earthquake and tsunami.

Brooks and others at iRobot created a toy robot that was sold by Hasbro in 2000.

My Real Baby, the end product, is a realistic doll that can cry, fuss, sleep, laugh, and show hunger.

The Roomba cleaning robot was created by the iRobot Corporation.

Released in 2002, the Roomba is a disc-shaped vacuum cleaner featuring roller wheels, brushes, filters, and a squeegee vacuum.

The Roomba, like other Brooks behavior-based robots, uses sensors to detect obstacles and avoid dangers such as falling down stairs.

For self-charging and room mapping, newer versions use infrared beams and photocell sensors.

By 2019, iRobot had sold over 25 million robots throughout the globe.

Brooks is also Rethink Robotics' cofounder and chief technology officer.

Rethink, founded in 2008 as Heartland Robotics, creates low-cost industrial robots.

Baxter, Rethink's first robot, can do basic repetitive activities including loading, unloading, assembling, and sorting.

Baxter has a computer screen on which an animated human face is displayed.

Baxter has integrated sensors and cameras that allow it to detect and avoid collisions when people are nearby, a critical safety feature.

Baxter may be used in ordinary industrial settings without a safety cage.

Unskilled workers can quickly train the robot simply by guiding its arms through the desired motions.

Baxter remembers these movements and can adapt them to other jobs.

The controls on its arms may be used to make fine motions.

Sawyer is a smaller version of Rethink's Baxter collaborative robot, marketed for performing hazardous or tedious industrial jobs in confined spaces.

Brooks has often said that scientists are still unable to solve the difficult problems of consciousness.

He claims that artificial intelligence and artificial life researchers have overlooked an essential aspect of living systems, one that keeps the gap between the nonliving and living worlds wide.

This remains true even though all of the living things in our world are made of nonliving atoms.

Brooks speculates that some of the AI and ALife researchers' parameters are incorrect, or that current models are too simple.

It's also possible that researchers are still lacking in raw computing power.

However, Brooks thinks there may be something about biological life and subjective experience—a component or a property—that is currently undetectable or hidden from scientific view.

Brooks attended Flinders University in Adelaide, South Australia, to study pure mathematics.

At Stanford University, he earned his PhD under the supervision of John McCarthy, an American computer scientist and cognitive scientist.

His doctoral dissertation, which he extended and published in 1984, was titled Model-Based Computer Vision.

From 1997 until 2007, he was the director of the MIT Artificial Intelligence Laboratory, which was renamed the Computer Science & Artificial Intelligence Laboratory (CSAIL) in 2003.

Brooks has received various distinctions and prizes for his contributions to artificial intelligence and robotics.

He is a member of both the American Academy of Arts and Sciences and the Association for Computing Machinery.

Brooks has won the IEEE Robotics and Automation Award as well as the Joseph F. Engelberger Robotics Award for Leadership.

He is now the vice chairman of the Toyota Research Institute's advisory board.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Embodiment, AI and; Tilden, Mark.



Further Reading

Brooks, Rodney A. 1984. Model-Based Computer Vision. Ann Arbor, MI: UMI Research Press.

Brooks, Rodney A. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems 6, no. 1–2 (June): 3–15.

Brooks, Rodney A. 1991. “Intelligence without Reason.” AI Memo No. 1293. Cambridge, MA: MIT Artificial Intelligence Laboratory.

Brooks, Rodney A. 1999. Cambrian Intelligence: The Early History of the New AI. Cambridge, MA: MIT Press.

Brooks, Rodney A. 2002. Flesh and Machines: How Robots Will Change Us. New York: Pantheon.

Brooks, Rodney A., and Anita M. Flynn. 1989. “Fast, Cheap, and Out of Control.” Journal of the British Interplanetary Society 42 (December): 478–85.

SPACE EXPLORATION AS AN INSPIRATION FOR EDUCATION.




The idea that spaceflight should be given more support because it is particularly educationally inspirational is a frequent topic in space advocacy, both in popular science and peer-reviewed scientific and space policy literature. 



Improved interest in STEM fields (as shown by STEM enrollments) or increased scientific knowledge among the general population may be evidence of such motivation. 


For the time being, I'll concede that achieving both of these objectives would be beneficial. 

I am particularly uninterested in debating the value of increasing general public scientific literacy; however, I grant that it is debatable whether society (at least, American society) currently needs more individuals with STEM degrees, since enrollments in engineering and computer science, in particular, have shown strong growth over the last 30 years. 


I will not argue that either of these goals is impossible to achieve. What I will argue is that there is no conclusive evidence that spaceflight is an effective source of either kind of inspiration. 


  1. Space Exploration and STEM Education.
  2. Scientific and Space Enthusiasm.


We have no related duty to fund spaceflight since it is not an effective means of fulfilling the commitment (if there is one) to inspire interest in science.



~ Jai Krishna Ponnappan 


You may also want to read more about Space Exploration, Space Missions and Systems here.





Artificial Intelligence - Who Is Anne Foerst?

 


 

Anne Foerst (1966–) is a Lutheran minister, theologian, author, and computer science professor at St. Bonaventure University in Allegany, New York.



In 1996, Foerst earned a doctorate in theology from the Ruhr-University of Bochum in Germany.

She has worked as a research associate at Harvard Divinity School, a project director at MIT, and a research scientist at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory.

She supervised the God and Computers Project at MIT, which encouraged people to discuss existential questions raised by scientific research.



Foerst has written several scientific and popular pieces on the need for improved conversation between religion and science, as well as shifting concepts of personhood in the light of robotics research.



God in the Machine, published in 2004, details her work as a theological counselor to the MIT Cog and Kismet robotics teams.

Foerst's study has been influenced by her work as a hospital counselor, her years at MIT collecting ethnographic data, and the writings of German-American Lutheran philosopher and theologian Paul Tillich.



As a medical counselor, she started to rethink what it meant to be a "normal" human being.


Foerst was inspired to investigate the circumstances under which individuals are believed to be persons after seeing variations in physical and mental capabilities in patients.

In her work, Foerst distinguishes between the terms "human" and "person," with human referring to members of our biological species and person referring to one who has been granted a form of revocable social inclusion.



Foerst uses the Holocaust as an illustration of how personhood must be conferred but may also be revoked.


As a result, personhood is always vulnerable.

This schematic of personhood—something people bestow on one another—allows Foerst to explore the inclusion of robots as persons.


Tillich's ideas on sin, alienation, and relationality are extended to the connections between humans and robots, as well as robots and other robots, in her work on robots as potential people.


  • People become alienated, according to Tillich, when they ignore opposing polarities in their life, such as the need for safety and novelty or freedom.
  • People reject reality, which is fundamentally ambiguous, when they refuse to recognize and interact with these opposing forces, cutting out or neglecting one side in order to concentrate entirely on the other.
  • People are alienated from their lives, from the people around them, and (for Tillich) from God if they do not accept the complicated conflicts of existence.


AI research thus contains the same opposition of danger and opportunity: the threat of reducing all things to objects or data that can be measured and studied, and the possibility of enhancing people's capacity to form relationships and confer identity.



Foerst has attempted to establish a dialogue between theology and other structured fields of inquiry, following Tillich's paradigm.


Although she has been warmly welcomed in labs and classrooms, Foerst's work has also met skepticism and pushback from some who worry that she is bringing counterfactual notions into the realm of science.

These concerns are crucial data for Foerst, who argues for a mutualistic approach in which AI researchers and theologians accept strongly held preconceptions about the universe and the human condition in order to have fruitful discussions.

Many valuable discoveries come from these dialogues, according to Foerst's study, as long as the parties have the humility to admit that neither side has a perfect grasp of the universe or human existence.



Foerst's work on AI is marked by humility, as she claims that researchers are startled by the vast complexity of the human person while seeking to duplicate human cognition, function, and form in the figure of the robot.


The way people are socially rooted, socially conditioned, and socially accountable adds to the complexity of any particular person.

Because human beings' embedded complexity is intrinsically physical, Foerst emphasizes the significance of an embodied approach to AI.

Foerst explored this embodied technique while at MIT, where having a physical body capable of interaction is essential for robotic research and development.


When addressing the evolution of artificial intelligence (AI), Foerst emphasizes in her work a clear distinction between robots and computers.


Robots have bodies, and those bodies are an important aspect of their learning and interaction abilities.

Although supercomputers can accomplish amazing analytic jobs and participate in certain forms of communication, they lack the ability to learn through experience and interact with others.

Foerst is dismissive of research that assumes intelligent computers may be created by re-creating the human brain.

Rather, she contends that bodies are an important part of intellect.


Foerst proposes raising robots in a way similar to human child-rearing, giving robots opportunities to interact with and learn from the environment.


This process is costly and time-consuming, just as it is for human children, and Foerst reports that funding for creative and time-intensive AI research has vanished, replaced by results-driven and military-focused research that justifies itself through immediate applications, especially since the terrorist attacks of September 11, 2001.

Foerst's work incorporates a broad variety of sources, including religious texts, popular films and television programs, science fiction, and examples from the disciplines of philosophy and computer science.



Loneliness, according to Foerst, is a fundamental motivator for humans' desire of artificial life.


Both fictional imaginings of the construction of a mechanical companion species and actual robotics and AI research are driven by feelings of alienation, which Foerst ties to the theological position of a lost contact with God.


Academic opponents of Foerst believe that she has replicated a paradigm originally proposed by the German theologian and scholar Rudolf Otto in his book The Idea of the Holy (1917).


The experience of the divine, according to Otto, may be found in a moment of attraction and dread, which he calls the numinous.

Critics contend that Foerst used this concept when she claimed that humans sense attraction and dread in the figure of the robot.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Embodiment, AI and; Nonhuman Rights and Personhood; Pathetic Fallacy; Robot Ethics; Spiritual Robots.


Further Reading:


Foerst, Anne. 2005. God in the Machine: What Robots Teach Us About Humanity and God. New York: Plume.

Geraci, Robert M. 2007. “Robots and the Sacred in Science Fiction: Theological Implications of Artificial Intelligence.” Zygon 42, no. 4 (December): 961–80.

Gerhart, Mary, and Allan Melvin Russell. 2004. “Cog Is to Us as We Are to God: A Response to Anne Foerst.” Zygon 33, no. 2: 263–69.

Groks Science Radio Show and Podcast with guest Anne Foerst. Audio available online at http://ia800303.us.archive.org/3/items/groks146/Groks122204_vbr.mp3. Transcript available at https://grokscience.wordpress.com/transcripts/anne-foerst/.

Reich, Helmut K. 2004. “Cog and God: A Response to Anne Foerst.” Zygon 33, no. 2: 255–62.



Artificial Intelligence - General and Narrow Categories Of AI.






There are two types of artificial intelligence: general (also called strong or full) and narrow (also called weak or specialized).

In the actual world, general AI, such as that seen in science fiction, does not yet exist.

Machines with general intelligence would be capable of performing any intellectual task that humans can.

Such a system would also appear to think in abstract terms, make connections, and express innovative ideas in the same manner that people do.



Such a computer would be capable of thinking, planning, and recalling information from the past.

While the aim of general AI has yet to be achieved, there are more and more instances of narrow AI.

These are machines that perform at human (or even superhuman) levels on certain tasks.

Computers that have learnt to play complicated games have abilities, techniques, and behaviors that are comparable to, if not superior to, those of the most skilled human players.

AI systems have also been developed that translate between languages in real time, interpret and respond to natural language (both spoken and written), and recognize images (identifying and sorting photos based on their content).

However, the ability to generalize knowledge or skills is still largely a human accomplishment.

Nonetheless, there is a lot of work being done in the field of general AI right now.

It will be difficult to determine when a computer develops human-level intelligence.

Several serious and humorous tests have been proposed to determine whether a computer has reached the level of general AI.

The Turing Test is arguably the most renowned of these examinations.

In the test, a human judge converses by text with both a machine and a human, without seeing either.

The judge must figure out which conversation partner is the machine and which is the human.

The machine passes the test if it can fool the human evaluator a prescribed percentage of the time.

The Coffee Test is a more fantastical test in which a machine enters a typical household and brews coffee.



It has to find the coffee machine, locate the coffee, add water, brew the coffee, and pour it into a cup.

Another is the Flat Pack Furniture Test, which involves a machine receiving, unpacking, and assembling a piece of furniture based only on the instructions supplied.

Some scientists, as well as many science fiction writers and fans, believe that once intelligent machines reach a tipping point, they will be able to improve exponentially.

AI-based beings that far exceed human capabilities might be one conceivable result.

The point at which AI takes control of its own self-improvement is known as the Singularity, yielding artificial superintelligence (ASI).

If ASI is achieved, it will have unforeseeable consequences for human society.

Some pundits worry that ASI would jeopardize humanity's safety and dignity.

Whether the Singularity will ever happen, and how dangerous it might be, remains a matter of dispute.

Narrow AI applications are becoming more popular across the globe.

Machine learning (ML) is at the heart of most new applications, and most AI examples in the news are connected to this subset of technology.

Traditional or conventional algorithms are not the same as machine learning programs.

In programs that cannot learn, a computer programmer actively adds code to account for every action of an algorithm.

All of the decisions made along the process are governed by the programmer's guidelines.

This necessitates the programmer imagining and coding for every possible circumstance that an algorithm may face.

This kind of program code is bulky and often inadequate, especially when it must be updated frequently to account for new or unanticipated scenarios.

The utility of hard-coded algorithms approaches its limit in cases where the criteria for optimum judgments are unclear or impossible for a human programmer to foresee.

Machine learning is the process of training a computer to detect and identify patterns via examples rather than predefined rules.



This is achieved, according to Google engineer Jason Mayes, by reviewing extremely large quantities of training data or participating in some other kind of programmed learning step.

New patterns may be extracted by processing the training data.

The system may then classify previously unseen data based on the patterns it has already found.

Machine learning allows an algorithm to recognize patterns or rules underlying decision-making processes on its own.

Machine learning also allows a system's output to improve over time as it gains more experience (Mayes 2017).

A human programmer continues to play a vital role in this learning process, influencing results by making choices like developing the exact learning algorithm, selecting the training data, and choosing other design elements and settings.

Machine learning is powerful once it's up and running because it can adapt and enhance its ability to categorize new data without the need for direct human interaction.

In other words, the output quality increases as the user gains experience.
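
To make the contrast with hand-coded rules concrete, the sketch below trains a tiny classifier from labeled examples instead of explicit rules. It assumes the scikit-learn library, which the text does not mention, and the toy data and labels are invented purely for illustration.

```python
# A minimal sketch of learning from examples rather than hard-coded rules.
# Assumes scikit-learn; the toy data below is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours_of_use, error_count]; label 1 = "needs service".
X_train = [[10, 0], [200, 1], [500, 7], [800, 12], [50, 0], [650, 9]]
y_train = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2)
model.fit(X_train, y_train)  # the algorithm extracts the decision rule itself

# Classify previously unseen data using the learned pattern.
print(model.predict([[700, 10], [30, 0]]))  # expected: [1 0]
```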

Artificial intelligence is a broad term that refers to the science of making computers intelligent.

According to scientists, AI is a computer system that can collect data and use it to make decisions or solve problems.

Another popular scientific definition of AI is "a software program paired with hardware that can receive (or sense) inputs from the world around it, evaluate and analyze those inputs, and create outputs and suggestions without the assistance of a person." When programmers claim an AI system can learn, they're referring to the program's ability to change its own processes in order to provide more accurate outputs or predictions.

AI-based systems are now being developed and used in practically every industry, from agriculture to space exploration, and in applications ranging from law enforcement to online banking.

The methods and techniques used in computer science are always evolving, extending, and improving.

Other terminology linked to machine learning, such as reinforcement learning and neural networks, are important components of cutting-edge artificial intelligence systems.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 

Embodiment, AI and; Superintelligence; Turing, Alan; Turing Test.


Further Reading:


Kelnar, David. 2016. “The Fourth Industrial Revolution: A Primer on Artificial Intelligence (AI).” Medium, December 2, 2016. https://medium.com/mmc-writes/the-fourth-industrial-revolution-a-primer-on-artificial-intelligence-ai-ff5e7fffcae1.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Mayes, Jason. 2017. Machine Learning 101. https://docs.google.com/presentation/d/1kSuQyW5DTnkVaZEjGYCkfOxvzCqGEFzWBy4e9Uedd9k/htmlpresent.

Müller, Vincent C., and Nick Bostrom. 2016. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” In Fundamental Issues of Artificial Intelligence, edited by Vincent C. Müller, 553–71. New York: Springer.

Russell, Stuart, and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.

Samuel, Arthur L. 1988. “Some Studies in Machine Learning Using the Game of Checkers I.” In Computer Games I, 335–65. New York: Springer.



Artificial Intelligence - What Is Computer-Assisted Diagnosis?

 



Computer-assisted diagnosis (CAD) is a branch of medical informatics that deals with the use of computer and communications technologies in medicine.

Beginning in the 1950s, physicians and scientists used computers and software to gather and organize expanding collections of medical data and to offer important decision and treatment assistance in encounters with patients.

The use of computers in medicine has resulted in significant improvements in the medical diagnostic decision-making process.

Tables of differential diagnoses inspired the first diagnostic computing devices.

Differential diagnosis entails the creation of a set of sorting criteria that may be used to determine likely explanations of symptoms during a patient's examination.

An excellent example is the Group Symbol Associator (GSA), a slide rule-like device designed around 1950 by F. A. Nash of the South West London Mass X-Ray Service, which enabled the physician to line up a patient's symptoms with 337 symptom-disease complexes to obtain a diagnosis (Nash 1960, 1442–46).

At the Rockefeller Institute for Medical Research's Medical Electronics Center, Cornell University physician Martin Lipkin and physiologist James Hardy developed a manual McBee punched card system for the detection of hematological illnesses.

Beginning in 1952, researchers linked patient data to findings previously known about each of twenty-one textbook hematological diseases (Lipkin and Hardy 1957, 551–52).

The findings impressed the Medical Electronics Center's director, television pioneer Vladimir Zworykin, who used Lipkin and Hardy's method to create a comparable digital computer system.

By compiling and sorting findings and creating a weighted diagnostic index, Zworykin's system automated what had previously been done manually.

Zworykin used vacuum tube BIZMAC computer coders at RCA's Electronic Data Processing Division to convert the punched card system to the digital computer.

On December 10, 1957, in Camden, New Jersey, the finalized Zworykin programmed hematological differential diagnosis system was first exhibited on the BIZMAC computer (Engle 1992, 209–11).

As a result, the world's first fully digital electronic diagnostic aid was developed.
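
The "mechanical correlation" these early devices performed can be approximated in a few lines of code. The following sketch is a hypothetical rendering of the idea: each disease is represented by its set of textbook findings, and candidate diagnoses are ranked by how many of the patient's findings they match. The diseases and findings shown are invented for illustration.

```python
# A sketch of the mechanical-correlation idea behind the GSA and the
# Lipkin-Hardy punched cards: rank diseases by overlap with patient findings.
# Disease profiles below are invented for illustration.
DISEASE_FINDINGS = {
    "iron-deficiency anemia": {"fatigue", "pallor", "low ferritin"},
    "polycythemia vera": {"headache", "pruritus", "high hematocrit"},
    "hemolytic anemia": {"fatigue", "jaundice", "dark urine"},
}

def rank_diagnoses(patient_findings):
    """Return diseases sorted by number of matching findings, best first."""
    scores = {
        disease: len(findings & patient_findings)
        for disease, findings in DISEASE_FINDINGS.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_diagnoses({"fatigue", "jaundice"}))
```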

In the 1960s, a new generation of doctors collaborated with computer scientists to link the concept of reasoning under ambiguity to the concept of personal probability, where orderly medical judgments might be indexed along the lines of gambling behavior.

Probability is used to quantify uncertainty in order to determine the likelihood that a single patient has one or more illnesses.

The use of personal probability in conjunction with digital computer technologies yielded unexpected outcomes.

Medical decision analysis is an excellent example of this, since it entails using utility and probability theory to compute alternative patient diagnoses, prognoses, and treatment management options.

Stephen Pauker and Jerome Kassirer, both of Tufts University's medical informatics department, are often acknowledged as among the first to explicitly apply computer-aided decision analysis to clinical medicine (Pauker and Kassirer 1987, 250–58).

Decision analysis entails identifying all available options and their possible consequences and building a decision model, generally a decision tree so complex and changeable that only a computer can track changes in all of the variables in real time.

Nodes in such a tree describe options, probabilities, and outcomes.

The tree is used to show the strategies accessible to the physician and to quantify the chance of each result occurring if a certain approach is followed (sometimes on a moment-by-moment basis).

Each outcome's relative value is also expressed mathematically, as a utility, on a clearly defined scale.

Decision analysis assigns an estimate of the cost of getting each piece of clinical or laboratory-derived information, as well as the possible value that may be gained from it.

The costs and benefits may be measured in qualitative terms, such as the quality of life or amount of pain derived from the acquisition and use of medical information, but they are usually measured in quantitative or statistical terms, such as when calculating surgical success rates or cost-benefit ratios for new medical technologies.
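
The fold-back computation at the heart of such a decision tree is easy to express in code. In the minimal sketch below, chance nodes average the utilities of their outcomes weighted by probability, and decision nodes choose the branch with the highest expected utility; all probabilities and utilities are invented.

```python
# A minimal sketch of "folding back" a decision tree: chance nodes take
# probability-weighted averages, decision nodes take the best branch.
# Probabilities and utilities below are invented for illustration.

def expected_utility(node):
    kind = node["kind"]
    if kind == "outcome":
        return node["utility"]
    if kind == "chance":
        return sum(p * expected_utility(child)
                   for p, child in node["branches"])
    if kind == "decision":
        return max(expected_utility(child) for _, child in node["branches"])
    raise ValueError(f"unknown node kind: {kind}")

treat = {"kind": "chance", "branches": [
    (0.7, {"kind": "outcome", "utility": 0.9}),   # treatment succeeds
    (0.3, {"kind": "outcome", "utility": 0.3}),   # treatment fails
]}
watch = {"kind": "chance", "branches": [
    (0.5, {"kind": "outcome", "utility": 0.8}),   # resolves on its own
    (0.5, {"kind": "outcome", "utility": 0.4}),   # worsens
]}
root = {"kind": "decision", "branches": [("treat", treat), ("watch", watch)]}

print(expected_utility(root))  # 0.72 (treat) beats 0.60 (watch)
```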

Critics claimed that cost-benefit calculations made rationing of scarce health-care resources more appealing, but decision analysis withstood the criticism (Berg 1997, 54).

In the 1960s and 1970s, artificial intelligence expert systems began to supplant strictly logical, sequential algorithmic processes for reaching medical conclusions.

Miller and Masarie criticized the so-called oracles of medical computing's past, claiming that they produced factory diagnoses (Miller and Masarie, Jr. 1990, 1–2).

Computer scientists collaborated with clinicians to integrate assessment procedures into medical applications, repurposing them as criticizing systems of last resort rather than diagnostic systems (Miller 1984, 17–23).

The ATTENDING expert system for anesthetic administration, created at Yale University School of Medicine, may have been the first to use a criticizing method.

Routines for risk assessment are at the heart of the ATTENDING system, and they assist residents and doctors in weighing factors such as patient health, surgical procedure, and available anesthetics when making clinical decisions.

Unlike diagnostic tools that suggest a procedure based on previously entered data, ATTENDING reacts to user recommendations in a stepwise manner (Miller 1983, 362–69).

Because it requires the continuous attention of a human operator, the critiquing technique absolves the computer of ultimate responsibility for diagnosis.

This is a critical characteristic in an era where strict responsibility applies to medical technology failures, including complicated software.
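
The critiquing approach can be sketched as follows: the physician proposes a plan, and the program comments on it rather than prescribing one. This is a hypothetical, much-simplified illustration; the rules shown are invented and do not reflect ATTENDING's actual rule base.

```python
# A much-simplified sketch of the critiquing approach: the physician
# proposes a plan, and the program critiques it rather than choosing one.
# The rules below are invented and do not reflect ATTENDING's rule base.

def critique_plan(patient, plan):
    warnings = []
    if patient.get("asthma") and plan.get("technique") == "general_anesthesia":
        warnings.append("Asthma raises airway-reactivity risk under general "
                        "anesthesia; consider a regional technique.")
    if patient.get("age", 0) > 75 and plan.get("agent") == "agent_X":
        warnings.append("agent_X (a hypothetical drug) is poorly tolerated "
                        "in elderly patients; consider an alternative.")
    return warnings or ["No criticisms of the proposed plan."]

patient = {"age": 80, "asthma": True}
plan = {"technique": "general_anesthesia", "agent": "agent_X"}
for note in critique_plan(patient, plan):
    print(note)
```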

Computer-assisted diagnosis migrated to home computers and the internet in the 1990s and early 2000s.

Medical HouseCall and Dr. Schueler's Home Medical Advisor are two instances of so-called "doc-in-a-box" software.

Medical HouseCall is a generalized, consumer-oriented version of the University of Utah's Iliad decision-support system.

The information foundation for Medical HouseCall took an estimated 150,000 person hours to develop.

The first software package, which was published in May 1994, had information on over 1,100 ailments as well as 3,000 prescription and nonprescription medications.

It also included cost and treatment alternatives information.

The encyclopedia included in the program spanned 5,000 printed pages.

Medical HouseCall also has a module for maintaining medical records for family members.

Medical HouseCall's first version required users to choose one of nineteen symptom categories by clicking on graphical symbols depicting bodily parts, then answer a series of yes-or-no questions.

After that, the program generates a prioritized list of potential diagnoses.

These diagnoses are derived using Bayesian estimation (Bouhaddou and Warner, Jr. 1995, 1181–85).
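
Bayesian estimation of this kind can be sketched compactly: given disease priors and the probability of each symptom under each disease, Bayes' rule ranks the candidate diagnoses. The numbers below are invented for illustration, and symptoms are (naively) assumed independent given the disease.

```python
# A sketch of Bayesian diagnosis ranking: P(disease | symptoms) is
# proportional to P(disease) * product of P(symptom | disease).
# All probabilities below are invented; symptoms are treated as
# conditionally independent given the disease (the "naive" assumption).

PRIORS = {"flu": 0.05, "common cold": 0.20, "allergy": 0.10}
LIKELIHOODS = {
    "flu":         {"fever": 0.9, "cough": 0.8, "sneezing": 0.2},
    "common cold": {"fever": 0.2, "cough": 0.6, "sneezing": 0.7},
    "allergy":     {"fever": 0.01, "cough": 0.2, "sneezing": 0.9},
}

def posterior(symptoms):
    scores = {}
    for disease, prior in PRIORS.items():
        p = prior
        for s in symptoms:
            p *= LIKELIHOODS[disease].get(s, 0.05)  # small default likelihood
        scores[disease] = p
    total = sum(scores.values())
    return {d: p / total for d, p in scores.items()}  # normalize to sum to 1

print(posterior({"fever", "cough"}))  # flu should dominate this ranking
```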


Dr. Schueler's Home Medical Advisor was a competitive software program in the 1990s.

Home Medical Advisor is a consumer-oriented CD-ROM set that contains a wide library of health and medical information, as well as a diagnostic-assistance application that offers probable diagnoses and appropriate courses of action.

In 1997, its medical encyclopedia defined more than 15,000 terms.

Home Medical Advisor also includes a picture library and full-motion video presentations.


The program's artificial intelligence module may be accessed via two alternative interfaces.

  1. The first involves using mouse clicks to tick boxes.
  2. The second interface requires the user to provide written responses to particular inquiries in natural language.


The program's differential diagnoses are connected to more detailed information about those illnesses (Cahlin 1994, 53–56).

Online symptom checkers have since become commonplace.

Deep learning in big data analytics has the potential to minimize diagnostic and treatment mistakes, lower costs, and improve workflow efficiency in the future.

CheXpert, an automated chest x-ray diagnostic system, was unveiled in 2019 by Stanford University's Machine Learning Group and Intermountain Healthcare.

In under 10 seconds, the radiology AI program can identify pneumonia.

In the same year, Massachusetts General Hospital reported the development of a convolutional neural network based on a huge collection of chest radiographs to detect persons at high risk of death from any cause, including heart disease and cancer.

Pattern recognition using deep neural networks has improved the identification of wrist fractures, metastatic breast cancer, and cataracts in children.

Although the accuracy of deep learning findings varies by field of health and kind of damage or sickness, the number of applications is growing to the point where smartphone apps with integrated AI are already in limited usage.

Deep learning approaches are projected to be used in the future to help with in-vitro fertilization embryo selection, mental health diagnosis, cancer categorization, and weaning patients off of ventilator support.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Automated Multiphasic Health Testing; Clinical Decision Support Systems; Expert Systems; INTERNIST-I and QMR.


Further Reading


Berg, Marc. 1997. Rationalizing Medical Work: Decision Support Techniques and Medical Practices. Cambridge, MA: MIT Press.

Bouhaddou, Omar, and Homer R. Warner, Jr. 1995. “An Interactive Patient Information and Education System (Medical HouseCall) Based on a Physician Expert System (Iliad).” Medinfo 8, pt. 2: 1181–85.

Cahlin, Michael. 1994. “Doc on a Disc: Diagnosing Home Medical Software.” PC Novice, July 1994: 53–56.

Engle, Ralph L., Jr. 1992. “Attempts to Use Computers as Diagnostic Aids in Medical Decision Making: A Thirty-Year Experience.” Perspectives in Biology and Medicine 35, no. 2 (Winter): 207–19.

Lipkin, Martin, and James D. Hardy. 1957. “Differential Diagnosis of Hematologic Diseases Aided by Mechanical Correlation of Data.” Science 125 (March 22): 551–52.

Miller, Perry L. 1983. “Critiquing Anesthetic Management: The ‘ATTENDING’ Computer System.” Anesthesiology 58, no. 4 (April): 362–69.

Miller, Perry L. 1984. “Critiquing: A Different Approach to Expert Computer Advice in Medicine.” In Proceedings of the Annual Symposium on Computer Applications in Medical Care, vol. 8, edited by Gerald S. Cohen, 17–23. Piscataway, NJ: IEEE Computer Society.

Miller, Randolph A., and Fred E. Masarie, Jr. 1990. “The Demise of the Greek Oracle Model for Medical Diagnosis Systems.” Methods of Information in Medicine 29, no. 1 (January): 1–2.

Nash, F. A. 1960. “Diagnostic Reasoning and the Logoscope.” Lancet 276, no. 7166 (December 31): 1442–46.

Pauker, Stephen G., and Jerome P. Kassirer. 1987. “Decision Analysis.” New England Journal of Medicine 316, no. 5 (January): 250–58.

Topol, Eric J. 2019. “High-Performance Medicine: The Convergence of Human and Artificial Intelligence.” Nature Medicine 25, no. 1 (January): 44–56.



Quantum Computing Hype Cycle



    Context: Quantum computing has been classified as an emerging technology since 2005.





    Because quantum computing has been on the up-slope of the Gartner Hype Cycle for more than ten years, it is arguably the most costly and hardest-to-comprehend emerging technology. 


    Quantum computing has been classified as an emerging technology since 2005, and it is still classified as such.

    The idea that theoretical computing techniques cannot be isolated from the physics that governs computing devices is at the heart of quantum computing.





    Quantum physics, in particular, introduces a new paradigm for computer science that fundamentally changes our understanding of information processing and what we previously believed to be the upper limits of computing.



    If quantum mechanics governs nature, we should be able to mimic it using quantum computers (QCs). 





     

    Quantum Computing On The Hype Cycle.


    Since Gartner first placed quantum computing on its hype cycle, pundits have predicted that it will take over and permanently change the world. 

    Although it's safe to argue that quantum computers might mark the end for traditional cryptography, the truth will most likely be less dramatic. 

    This has obvious ramifications for technologies like blockchain, which are expected to power future financial systems. 

    While the Bitcoin system, for example, is expected to keep traditional mining computers busy until 2140, a quantum computer could potentially mine every token almost instantly using brute-force decoding. 



    More powerful digital ledger technologies based on quantum cryptography might level the playing field. 




    All of this assumes that quantum computing will become widely accessible and inexpensive. As things are, this seems to be feasible. 

    Serious computer companies such as IBM, Honeywell, Google, and Microsoft, as well as younger specialty startups, are all working on putting quantum computing in the cloud right now and welcoming participation from the entire computing community. 

    To assist novice users, introduction packs and development kits are provided. 

    These are significant steps forward that will very probably accelerate progress as users develop more diversified and demanding workloads and find out how to handle them with quantum technology. 

    Also significant is the predicted democratizing effect of universal cloud access, which should bring people from a wider range of backgrounds into contact with quantum computing to understand, use, and influence its continued development. 




    Despite the fact that it has arrived, quantum computing is still in its infancy. 


    • Commercial cloud services might enable inexpensive access in the future, much as scientific and banking institutions today rent cloud AI applications for complicated tasks, billed by the compute cycles used. 
    • To diagnose genetic problems in newborn infants, hospitals, for example, are using genome sequencing applications housed on AI accelerators in hyperscale data centers. The procedure is inexpensive, and the findings are available in minutes, allowing physicians to intervene quickly and possibly save lives. 
    • Quantum computing as a service has the potential to improve healthcare and a variety of other sectors, including materials science. 
    • Simulating a coffee molecule, for example, is very challenging with a traditional computer, requiring more than 100 years of processing time. The work can be completed in seconds by a quantum computer. 
    • Climate analysis, transit planning, biology, financial services, encryption, and codebreaking are some of the other areas that might benefit. 
    • Quantum computing, for all of its potential, isn't going to replace traditional computing or turn the world on its head. 
    • Quantum bits (qubits) can represent exponentially more information than traditional binary bits because qubits can exist in a superposition of the states 0 and 1, whereas a binary bit can be in only one state at a time. 
    • Quantum computers, on the other hand, are suitable only for certain kinds of algorithms, since a qubit's state when measured is determined by chance; other problems are best handled by traditional computers. A minimal simulation of this point follows the list. 
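
    The point about measurement and chance can be made concrete with a few lines of linear algebra. The sketch below simulates a single qubit with NumPy (a library not mentioned in the text): a Hadamard gate puts the qubit into an equal superposition of 0 and 1, and measurement outcomes are then sampled according to the squared amplitudes.

```python
# A sketch of a single qubit in superposition, simulated with NumPy.
import numpy as np

ket0 = np.array([1.0, 0.0])                    # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                  # equal superposition of |0> and |1>
probs = np.abs(state) ** 2        # measurement probabilities: [0.5, 0.5]

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print(probs, samples.mean())      # ~0.5: each outcome is determined by chance
```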





    Quantum computing will take more than a decade to reach the Plateau of Productivity.




    Because of the massive efficiency it delivers at scale, quantum computing has caught the attention of technological leaders. 

    However, it will take years to mature for most applications, even if it makes early progress in highly specialized sectors like materials science and cryptography in the near term. 


    Quantum approaches, meanwhile, are gaining traction with specific AI tools, as seen in recent advancements in natural language processing that could potentially break open the "black box" of today's neural networks. 




    • The lambeq toolkit, usually written simply as lambeq, is a conventional Python library available on GitHub. 
    • It coincides with the arrival at Cambridge Quantum of well-known AI and NLP researchers, and it provides an opportunity for hands-on QNLP experience. 
    • The lambeq software is designed to turn sentences into quantum circuits, providing a fresh perspective on text mining, language translation, and bioinformatics corpora; a usage sketch follows this list. It is named after the late semantics scholar Joachim Lambek. 
    • According to Bob Coecke, principal scientist at Cambridge Quantum, NLP may give explainability not feasible in today's "bag of words" neural techniques done on conventional computers. 
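
    To give a flavor of hands-on QNLP, the sketch below follows the usage pattern shown in lambeq's published tutorials: parse a sentence into a string diagram, then convert the diagram into a parameterized quantum circuit with an ansatz. Class names such as BobcatParser and IQPAnsatz come from lambeq's documentation, but exact signatures vary across releases, so treat this as an assumption-laden outline rather than tested code.

```python
# A sketch of lambeq's documented workflow: sentence -> diagram -> circuit.
# Based on lambeq's tutorials; APIs may differ across lambeq releases.
from lambeq import AtomicType, BobcatParser, IQPAnsatz

parser = BobcatParser()                      # pretrained statistical parser
diagram = parser.sentence2diagram("John walks in the park")

# Map each grammatical type to a qubit count and build a quantum circuit.
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1}, n_layers=1)
circuit = ansatz(diagram)
circuit.draw()                               # visualize the resulting circuit
```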





    These patterns, as shown in diagrams, resemble parsed sentences on elementary school blackboards. 

    Coecke said that current NLP approaches "don't have the capacity to assemble things together to discover a meaning." 


    "What we want to do is introduce compositionality in the traditional sense, which means using the same compositional framework. We want to reintroduce logic." 

    Honeywell announced earlier this year that it would merge its own quantum computing operations with Cambridge Quantum to form an independent company to pursue cybersecurity, drug discovery, optimization, material science, and other applications, including AI, as part of its efforts to expand quantum infrastructure. 

    Honeywell said the new operation would cost between $270 million and $300 million to build. 


    Cambridge Quantum said that it will stay autonomous while collaborating with a variety of quantum computing companies, including IBM. 

    In an e-mail conversation, Cambridge Quantum founder and CEO Ilyas Khan said that the lambeq work is part of a larger AI project that is the company's longest-term initiative. 

    "In terms of timetables, we may be pleasantly surprised, but we feel that NLP is at the core of AI in general, and thus something that will truly come to the fore as quantum computers scale," he added. 

    In Cambridge Quantum's opinion, the most advanced application areas are cybersecurity and quantum chemistry. 





    What type of quantum hardware timetable do we expect in the future? 




    • There is well-informed agreement not only on the hardware roadmap but also on the software roadmap (Honeywell and IBM are among the credible corporate players in this regard). 
    • Quantum computing is not a general-purpose technology; we cannot utilize quantum computing to solve all of our existing business challenges.
    • According to Gartner's Hype Cycle for Computing Infrastructure for 2021, quantum computing would take more than ten years to reach the Plateau of Productivity. 
    • That is where the analytics firm expects IT users to get the most out of a given technology. 
    • Quantum computing's current position on Gartner's Peak of Inflated Expectations — a categorization for emerging technologies that are deemed overhyped — is the same as it was in 2020.


    ~ Jai Krishna Ponnappan

    You may also want to read more about Quantum Computing here.


