Artificial Intelligence - What Is Computer-Assisted Diagnosis?

 



Computer-assisted diagnosis (CAD) is a branch of medical informatics, the field concerned with the use of computer and communications technologies in medicine.

Beginning in the 1950s, physicians and scientists used computers and software to gather and organize expanding collections of medical data and to offer decision and treatment support during encounters with patients.

The use of computers in medicine has resulted in significant improvements in the medical diagnostic decision-making process.

Tables of differential diagnoses inspired the first diagnostic computing devices.

Differential diagnosis entails creating a set of sorting criteria that can be used to identify the likely explanations of a patient's symptoms during examination.

An excellent example is the Group Symbol Associator (GSA), a slide rule-like device designed around 1950 by F. A. Nash of the South West London Mass X-Ray Service, which enabled the physician to line up a patient's symptoms with 337 symptom-disease complexes to obtain a diagnosis (Nash 1960, 1442–46).

At the Rockefeller Institute for Medical Research's Medical Electronics Center, Cornell University physician Martin Lipkin and physiologist James Hardy developed a manual McBee punched card system for the detection of hematological illnesses.

Beginning in 1952, researchers linked patient data to findings previously known about each of twenty-one textbook hematological diseases (Lipkin and Hardy 1957, 551–52).

The findings impressed the Medical Electronics Center's director, television pioneer Vladimir Zworykin, who used Lipkin and Hardy's method to create a comparable digital computer system.

By compiling and sorting findings and creating a weighted diagnostic index, Zworykin's system automated what had previously been done manually.
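The logic of such a weighted diagnostic index is easy to illustrate. The sketch below is a minimal reconstruction of the idea, using hypothetical diseases, findings, and weights rather than Zworykin's actual tables: each candidate disease is scored by the weights of the patient findings associated with it, and the results are ranked.

```python
# Minimal sketch of a weighted diagnostic index (hypothetical data,
# not Zworykin's actual disease tables or weights).

# Each disease maps findings to a weight reflecting how strongly
# the finding suggests that disease.
DISEASE_FINDINGS = {
    "iron-deficiency anemia": {"fatigue": 2, "pallor": 3, "low ferritin": 5},
    "polycythemia vera":      {"headache": 2, "pruritus": 3, "high hematocrit": 5},
    "acute leukemia":         {"fatigue": 1, "fever": 2, "abnormal blasts": 6},
}

def rank_diagnoses(patient_findings):
    """Sum the weights of matched findings and rank diseases by score."""
    scores = {
        disease: sum(w for f, w in findings.items() if f in patient_findings)
        for disease, findings in DISEASE_FINDINGS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diagnoses({"fatigue", "pallor", "low ferritin"}))
# [('iron-deficiency anemia', 10), ('acute leukemia', 1), ('polycythemia vera', 0)]
```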

Zworykin enlisted coders of the vacuum-tube BIZMAC computer at RCA's Electronic Data Processing Division to convert the punched card system to the digital computer.

On December 10, 1957, in Camden, New Jersey, the finalized Zworykin programmed hematological differential diagnosis system was first exhibited on the BIZMAC computer (Engle 1992, 209–11).

The result was the world's first fully digital electronic computer diagnostic aid.

In the 1960s, a new generation of physicians collaborated with computer scientists to link the concept of reasoning under uncertainty to the concept of personal probability, under which orderly medical judgments could be modeled along the lines of gambling behavior.

Probability is used to quantify uncertainty in order to determine the likelihood that a single patient has one or more illnesses.

The use of personal probability in conjunction with digital computer technologies yielded unexpected outcomes.

Medical decision analysis is an excellent example of this, since it entails using probability and utility theory to evaluate alternative diagnoses, prognoses, and treatment management options for a patient.

Stephen Pauker and Jerome Kassirer, both of Tufts University's medical informatics department, are often acknowledged as among the first to explicitly apply computer-aided decision analysis to clinical medicine (Pauker and Kassirer 1987, 250–58).

Decision analysis entails identifying all available options and their possible consequences and constructing a decision model, generally in the form of a decision tree, often so complex and fast-changing that only a computer can track changes in all of the variables in real time.

Nodes in such a tree describe options, probabilities, and outcomes.

The tree is used to show the strategies accessible to the physician and to quantify the chance of each result occurring if a certain approach is followed (sometimes on a moment-by-moment basis).

Each outcome's relative value is also expressed mathematically, as a utility, on a clearly defined scale.
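The fold-back arithmetic behind such a tree is straightforward to sketch. The following example uses invented probabilities and utilities rather than figures from any clinical study: each chance node is weighted by its branch probabilities, and the decision node selects the strategy with the highest expected utility.

```python
# Minimal decision-tree "fold-back" sketch. Probabilities and utilities
# are invented for illustration, not drawn from any clinical study.

# A chance node is a list of (probability, subtree) branches;
# a leaf is a numeric utility on a 0-100 scale.
SURGERY = [(0.90, 95),   # operative success
           (0.08, 40),   # major complication
           (0.02, 0)]    # operative death

MEDICAL = [(0.60, 80),   # symptoms controlled
           (0.40, 55)]   # symptoms persist

def expected_utility(node):
    """Recursively compute the expected utility of a chance node or leaf."""
    if isinstance(node, (int, float)):          # leaf: outcome utility
        return node
    return sum(p * expected_utility(sub) for p, sub in node)

strategies = {"surgery": SURGERY, "medical management": MEDICAL}
for name, tree in strategies.items():
    print(f"{name}: EU = {expected_utility(tree):.1f}")
best = max(strategies, key=lambda s: expected_utility(strategies[s]))
print("preferred strategy:", best)
```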

Decision analysis also assigns an estimate of the cost of obtaining each piece of clinical or laboratory-derived information, as well as the value that may be gained from it.

The costs and benefits may be measured in qualitative terms, such as the quality of life or amount of pain derived from the acquisition and use of medical information, but they are usually measured in quantitative or statistical terms, such as when calculating surgical success rates or cost-benefit ratios for new medical technologies.

Critics claimed that cost-benefit calculations made rationing of scarce health-care resources more appealing, but decision analysis nevertheless weathered the onslaught (Berg 1997, 54).

In the 1960s and 1970s, artificial intelligence expert systems began to supplant more strictly logical, sequential algorithmic approaches to medical decision-making.

Miller and Masarie, Jr. (1990, 1–2) criticized the so-called Greek oracles of medical computing's past, claiming that they churned out factory-style diagnoses.

Computer scientists collaborated with clinicians to integrate assessment procedures into medical applications, repurposing them as critiquing systems of last resort rather than as diagnostic systems (Miller 1984, 17–23).

The ATTENDING expert system for anesthetic management, created at Yale University School of Medicine, may have been the first to use a critiquing approach.

Routines for risk assessment are at the heart of the ATTENDING system, and they assist residents and doctors in weighing factors such as patient health, surgical procedure, and available anesthetics when making clinical decisions.

Unlike diagnostic tools that suggest a procedure based on previously entered data, ATTENDING reacts to user recommendations in a stepwise manner (Miller 1983, 362–69).
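The contrast with a diagnostic oracle is easy to see in code. The sketch below is a toy critiquing system with two hypothetical rules, far simpler than ATTENDING's actual risk-assessment routines: it accepts a plan proposed by the clinician and returns critiques rather than issuing its own recommendation.

```python
# Minimal critiquing-system sketch with hypothetical rules; ATTENDING's
# actual anesthetic risk-assessment logic is far more elaborate.

def critique_plan(patient, plan):
    """Return critiques of a proposed plan instead of proposing one."""
    critiques = []
    if patient.get("asthma") and plan.get("airway") == "endotracheal tube":
        critiques.append("Intubation may provoke bronchospasm in asthmatics; "
                         "consider a mask technique if the procedure allows.")
    if patient.get("age", 0) > 75 and plan.get("agent") == "high-dose halothane":
        critiques.append("A high-dose volatile agent is risky in elderly "
                         "patients; consider reducing the dose.")
    return critiques or ["No critiques; the proposed plan appears reasonable."]

patient = {"age": 79, "asthma": True}
plan = {"airway": "endotracheal tube", "agent": "high-dose halothane"}
for c in critique_plan(patient, plan):
    print("-", c)
```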

Because it requires the active attention of a human operator, the critiquing technique absolves the computer of ultimate responsibility for diagnosis.

This is a critical characteristic in an era in which strict liability applies to failures of medical technology, including complex software.

Computer-assisted diagnosis migrated to home computers and the internet in the 1990s and early 2000s.

Medical HouseCall and Dr. Schueler's Home Medical Advisor are two examples of so-called "doc-in-a-box" software.

Medical HouseCall was a generalized, consumer-oriented version of the University of Utah's Iliad decision-support system.

The knowledge base underlying Medical HouseCall took an estimated 150,000 person-hours to develop.

The first software package, released in May 1994, contained information on over 1,100 ailments as well as 3,000 prescription and nonprescription medications.

It also included information on costs and treatment alternatives.

The encyclopedia included in the program spanned 5,000 printed pages.

Medical HouseCall also included a module for maintaining medical records for family members.

Medical HouseCall's first version required users to choose one of nineteen symptom categories by clicking on graphical symbols depicting body parts, and then to answer a series of yes-or-no questions.

The program then generated a prioritized list of potential diagnoses, derived by Bayesian estimation (Bouhaddou and Warner, Jr. 1995, 1181–85).
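A minimal sketch of that Bayesian step appears below, with invented priors and symptom probabilities rather than Iliad's actual knowledge base. Under a naive conditional-independence assumption, each yes-or-no answer updates the posterior for every candidate disease, and the normalized results form the prioritized list.

```python
# Naive Bayes update over yes/no symptom answers. Priors and conditional
# probabilities are invented for illustration, not Iliad's knowledge base.

PRIORS = {"common cold": 0.60, "influenza": 0.30, "strep throat": 0.10}

# P(symptom present | disease)
P_SYMPTOM = {
    "fever":       {"common cold": 0.10, "influenza": 0.85, "strep throat": 0.70},
    "sore throat": {"common cold": 0.40, "influenza": 0.40, "strep throat": 0.95},
    "cough":       {"common cold": 0.80, "influenza": 0.80, "strep throat": 0.15},
}

def posterior(answers):
    """answers: dict of symptom -> True/False from the yes-or-no questions."""
    post = dict(PRIORS)
    for symptom, present in answers.items():
        for disease in post:
            p = P_SYMPTOM[symptom][disease]
            post[disease] *= p if present else (1.0 - p)
    total = sum(post.values())
    return {d: v / total for d, v in post.items()}

ranked = sorted(posterior({"fever": True, "sore throat": True, "cough": False}).items(),
                key=lambda kv: kv[1], reverse=True)
for disease, prob in ranked:
    print(f"{disease}: {prob:.2f}")
```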


Dr. Schueler's Home Medical Advisor was a competing software package in the 1990s.

Home Medical Advisor was a consumer-oriented CD-ROM set that contained a wide library of health and medical information, along with a diagnostic-assistance application that offered probable diagnoses and appropriate courses of action.

In 1997, its medical encyclopedia defined more than 15,000 terms.

Home Medical Advisor also included a picture library and full-motion video presentations.


The program's artificial intelligence module could be accessed through two alternative interfaces:

  1. The first involved ticking checkboxes with mouse clicks.
  2. The second required the user to type natural-language responses to specific questions.


The program's differential diagnoses were linked to more detailed information about the corresponding illnesses (Cahlin 1994, 53–56).

Online symptom checkers have since become commonplace.

Looking ahead, deep learning applied to big data analytics has the potential to reduce diagnostic and treatment errors, lower costs, and improve workflow efficiency.

CheXpert, an automated chest x-ray diagnostic system, was unveiled in 2019 by Stanford University's Machine Learning Group and Intermountain Healthcare.

In under 10 seconds, the radiology AI program can identify pneumonia.
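While CheXpert's own model is far deeper and trained on a very large set of labeled radiographs, the basic shape of such a classifier can be sketched in a few lines of PyTorch. The toy network below is an illustrative assumption, not CheXpert's architecture: it maps a single-channel x-ray image to a pneumonia probability.

```python
# Minimal convolutional-classifier sketch in PyTorch; illustrative only,
# not CheXpert's actual architecture or training setup.
import torch
import torch.nn as nn

class TinyChestXrayNet(nn.Module):
    """Toy CNN mapping a 1-channel x-ray image to a pneumonia probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

model = TinyChestXrayNet()
fake_batch = torch.randn(4, 1, 224, 224)   # four random "radiographs"
print(model(fake_batch).squeeze(1))        # per-image pneumonia probabilities
```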

In the same year, Massachusetts General Hospital reported the development of a convolutional neural network, trained on a huge collection of chest radiographs, to identify people at high risk of death from any cause, including heart disease and cancer.

Pattern recognition using deep neural networks has improved the identification of wrist fractures, metastatic breast cancer, and cataracts in children.

Although the accuracy of deep learning results varies by field of medicine and by type of injury or illness, the number of applications is growing to the point that smartphone apps with integrated AI are already in limited use.

Deep learning approaches are projected to assist in the future with embryo selection for in-vitro fertilization, mental health diagnosis, cancer classification, and weaning patients off ventilator support.



~ Jai Krishna Ponnappan



See also: 


Automated Multiphasic Health Testing; Clinical Decision Support Systems; Expert Systems; INTERNIST-I and QMR.


Further Reading


Berg, Marc. 1997. Rationalizing Medical Work: Decision Support Techniques and Medical Practices. Cambridge, MA: MIT Press.

Bouhaddou, Omar, and Homer R. Warner, Jr. 1995. “An Interactive Patient Information and Education System (Medical HouseCall) Based on a Physician Expert System (Iliad).” Medinfo 8, pt. 2: 1181–85.

Cahlin, Michael. 1994. “Doc on a Disc: Diagnosing Home Medical Software.” PC Novice, July 1994: 53–56.

Engle, Ralph L., Jr. 1992. “Attempts to Use Computers as Diagnostic Aids in Medical Decision Making: A Thirty-Year Experience.” Perspectives in Biology and Medicine 35, no. 2 (Winter): 207–19.

Lipkin, Martin, and James D. Hardy. 1957. “Differential Diagnosis of Hematologic Diseases Aided by Mechanical Correlation of Data.” Science 125 (March 22): 551–52.

Miller, Perry L. 1983. “Critiquing Anesthetic Management: The ‘ATTENDING’ Computer System.” Anesthesiology 58, no. 4 (April): 362–69.

Miller, Perry L. 1984. “Critiquing: A Different Approach to Expert Computer Advice in Medicine.” In Proceedings of the Annual Symposium on Computer Applications in Medical Care, vol. 8, edited by Gerald S. Cohen, 17–23. Piscataway, NJ: IEEE Computer Society.

Miller, Randolph A., and Fred E. Masarie, Jr. 1990. “The Demise of the Greek Oracle Model for Medical Diagnosis Systems.” Methods of Information in Medicine 29, no. 1 (January): 1–2.

Nash, F. A. 1960. “Diagnostic Reasoning and the Logoscope.” Lancet 276, no. 7166 (December 31): 1442–46.

Pauker, Stephen G., and Jerome P. Kassirer. 1987. “Decision Analysis.” New England Journal of Medicine 316, no. 5 (January): 250–58.

Topol, Eric J. 2019. “High-Performance Medicine: The Convergence of Human and Artificial Intelligence.” Nature Medicine 25, no. 1 (January): 44–56.


