
Artificial Intelligence - Knowledge Engineering In Expert Systems.

  


Knowledge engineering (KE) is an artificial intelligence subject that aims to incorporate expert knowledge into a formal automated programming system in such a manner that the latter can produce the same or comparable results in problem solving as human experts when working with the same data set.

Knowledge engineering, more precisely, is a discipline that develops methodologies for constructing large knowledge-based systems (KBS), also known as expert systems, using appropriate methods, models, tools, and languages.

For knowledge elicitation, modern knowledge engineering uses the knowledge acquisition and documentation structuring (KADS) approach; hence, the development of knowledge-based systems is considered a modeling effort (i.e., knowledge engineering builds up computer models).

It's challenging to codify the knowledge acquisition process since human specialists' knowledge is a combination of skills, experience, and formal knowledge.

As a result, rather than directly transferring knowledge from human experts to the programming system, the experts' knowledge is modeled.

Simultaneously, direct simulation of the entire cognitive process of experts is extremely difficult.

Designed computer models are expected to achieve results comparable to those of experts when solving problems in the domain, rather than to match the experts' cognitive capabilities.

As a result, knowledge engineering focuses on modeling and problem-solving methods (PSM) that are independent of various representation formalisms (production rules, frames, etc.).

The problem solving method is a key component of knowledge engineering, and it refers to the knowledge-level specification of a reasoning pattern that can be used to complete a knowledge-intensive task.

Each problem-solving technique is a pattern that offers template structures for addressing a specific issue.

The terms "diagnostic," "classification," and "configuration" are often used to categorize problem-solving strategies based on their topology.

PSM "Cover-and-Differentiate" for diagnostic tasks and PSM "Propose-and-Reverse" for parametric design tasks are two examples.

Any problem-solving method is predicated on the assumption that its logical adequacy can be matched by the computational tractability of the system implementation based on it.

The PSM heuristic classification—an inference pattern that defines the behavior of knowledge-based systems in terms of objectives and knowledge required to attain these goals—is often used in early instances of expert systems.

Inference actions and knowledge roles, as well as their relationships, are covered by this problem-solving strategy.

The relationships specify how domain knowledge is used in each inference action.

Observables, abstract observables, solution abstractions, and solutions are the knowledge roles, while the inference actions are abstract, heuristic match, and refine.

The PSM heuristic classification requires hierarchically organized models of observables as well as of solutions for "abstract" and "refine," making it suited to the acquisition of static domain knowledge.
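
To make the three inference actions concrete, the following minimal Python sketch walks a toy case through abstract, heuristic match, and refine. Everything in it (the observable names, thresholds, and rules) is invented for illustration; it is not taken from any actual expert system.

```python
# Minimal sketch of heuristic classification: abstract -> heuristic match -> refine.
# All knowledge below is a hypothetical toy example.

# Knowledge role: abstraction rules (observable -> abstract observable).
ABSTRACTION_RULES = {
    ("temperature", lambda v: v >= 38.5): "fever",
    ("wbc_count", lambda v: v > 11000): "elevated_white_cells",
}

# Knowledge role: heuristic associations (abstract observables -> solution abstraction).
HEURISTIC_MATCHES = {
    frozenset({"fever", "elevated_white_cells"}): "bacterial_infection",
    frozenset({"fever"}): "viral_infection",
}

# Knowledge role: refinement hierarchy (solution abstraction -> specific solutions).
REFINEMENTS = {
    "bacterial_infection": ["pneumonia", "urinary_tract_infection"],
    "viral_infection": ["influenza", "common_cold"],
}


def abstract(observables):
    """Map raw observables to qualitative abstract observables."""
    abstracts = set()
    for (name, test), label in ABSTRACTION_RULES.items():
        if name in observables and test(observables[name]):
            abstracts.add(label)
    return abstracts


def heuristic_match(abstract_observables):
    """Associate abstract observables with a solution abstraction."""
    for pattern, solution_class in HEURISTIC_MATCHES.items():
        if pattern <= abstract_observables:
            return solution_class
    return None


def refine(solution_class):
    """Refine a solution abstraction into candidate specific solutions."""
    return REFINEMENTS.get(solution_class, [])


if __name__ == "__main__":
    case = {"temperature": 39.2, "wbc_count": 13500}
    abstracts = abstract(case)
    solution_class = heuristic_match(abstracts)
    print(abstracts, solution_class, refine(solution_class))
```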

In the late 1980s, knowledge engineering modeling methodologies shifted toward role limiting methods (RLM) and generic tasks (GT).

The idea of the "knowledge role" is utilized in role-limiting methods to specify how specific domain knowledge is employed in the problem-solving process.

RLM creates a wrapper over PSM by explaining it in broad terms with the purpose of reusing it.

However, since this technique covers only a single instance of a PSM, it is ill-suited to problems that require the combination of several methods.

Configurable role limiting methods (CRLM) are an extension of the role limiting methods concept, offering a predetermined collection of RLMs as well as a fixed scheme of knowledge categories.

Each member method may be applied to a distinct subset of a task, but introducing a new method is challenging since it necessitates changes to the established knowledge categories.

The generic task method includes a predefined scheme of knowledge kinds and an inference mechanism, as well as a general description of input and output.

The generic task is based on the "strong interaction problem hypothesis," which claims that the structure and representation of domain knowledge may be completely determined by its use.

Each generic task makes use of knowledge and employs control mechanisms tailored to that knowledge.

Because the control techniques are more domain-specific, the knowledge acquisition employed in GT is more precise in its descriptions of problem-solving steps.

As a result, the design of specialized knowledge-based systems may be thought of as the instantiation of the specified knowledge categories using domain-specific terms.

The downside of GT is that its predefined problem-solving strategy may not match the optimal problem-solving strategy required to complete the task.

The task structure (TS) approach seeks to address GT's shortcomings by distinguishing between the task and the method employed to complete it.

As a result, every task structure postulates how the task might be solved using a collection of generic tasks, as well as what knowledge has to be acquired or produced for these tasks.

Because of the requirement for several models, modeling frameworks were created to meet various parts of knowledge engineering methodologies.

The organizational model, task model, agent model, communication model, expertise model, and design model are the models of CommonKADS, the most common knowledge engineering framework (which builds on KADS).

The organizational model describes the structure of the organization as well as the tasks that each organizational unit performs.

The task model describes tasks in a hierarchical order.

Each agent's skills in task execution are specified by the agent model.

The communication model specifies how agents interact with one another.

The most significant is the expertise model, which employs several layers: it represents domain-specific knowledge in the domain layer and the inferences of the reasoning process in the inference layer.

The expertise model also supports a task layer, which is concerned with task decomposition.

The system architecture and computational mechanisms used to make the inference are described in the design model.

In CommonKADS, there is a clear distinction between domain-specific knowledge and generic problem-solving techniques, allowing various problems to be addressed by constructing a new instance of the domain layer and utilizing the PSM on a different domain.
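
The separation described above can be illustrated with a small, purely hypothetical Python sketch: one generic problem-solving method is reused unchanged on two different domain-layer instances. The domains, features, and scoring rule are invented for illustration and are not part of CommonKADS itself.

```python
# Hypothetical sketch: a generic PSM reused across domain-layer instances.

def classify(domain_layer, observations):
    """A tiny, reusable classification PSM: score every solution by how
    many of its associated features appear among the observations."""
    scores = {
        solution: len(features & observations)
        for solution, features in domain_layer.items()
    }
    return max(scores, key=scores.get)

# Domain layer instance 1: toy medical domain (invented data).
medical_domain = {
    "flu": {"fever", "cough"},
    "allergy": {"sneezing", "itchy_eyes"},
}

# Domain layer instance 2: toy fault-diagnosis domain (invented data).
printer_domain = {
    "paper_jam": {"grinding_noise", "paper_stuck"},
    "low_toner": {"faded_print"},
}

print(classify(medical_domain, {"fever", "cough"}))   # -> flu
print(classify(printer_domain, {"faded_print"}))      # -> low_toner
```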

Several libraries of problem-solving algorithms are now available for use in development.

They are distinguished by their key characteristics: whether the library was created for a specific task or has a broader scope; whether the library is formal, informal, or implemented; whether the library uses fine-grained or coarse-grained PSMs; and, lastly, the library's size.

Recently, some research has been carried out with the goal of unifying existing libraries by offering adapters that convert task-neutral PSM to task-specific PSM.

The MIKE (model-based and incremental knowledge engineering) method, which proposes integrating semiformal and formal specification and prototyping into the framework, grew out of the creation of CommonKADS.

As a result, MIKE divides the entire process of developing knowledge-based systems into a number of sub-activities, each of which focuses on a different aspect of system development.

The Protégé method makes use of PSMs and ontologies, with an ontology defined as an explicit specification of a shared conceptualization that holds in a particular context.

Although the ontologies used in Protégé might be of any form, the ones chiefly utilized are domain ontologies, which describe the shared conceptualization of a domain, and method ontologies, which specify the concepts and relations used by problem-solving methods.

In addition to problem-solving techniques, the development of knowledge-based systems necessitates the creation of particular languages capable of defining the information needed by the system as well as the reasoning process that will use that knowledge.

The purpose of such languages is to give a clear and formal foundation for expressing knowledge models.

Furthermore, some of these formal languages may be executable, allowing simulation of knowledge model behavior on specified input data.

In the early years, knowledge was directly encoded in rule-based implementation languages.

This resulted in a slew of issues, including the inability to represent some forms of knowledge, the difficulty of ensuring consistent representation of various types of knowledge, and a lack of detail.

Modern approaches to language development aim to target and formalize the conceptual models of knowledge-based systems, allowing users to precisely define the goals and process for obtaining models, as well as the functionality of interface actions and accurate semantics of the various domain knowledge elements.

The majority of these epistemological languages include primitives like constants, functions, and predicates, as well as certain mathematical operations.

Object-oriented or frame-based languages, for example, define a wide range of modeling primitives such as objects and classes.

KARL, (ML)2, and DESIRE are the most common examples of specific languages.

KARL is a language that employs a Horn logic variation.

It was created as part of the MIKE project and combines two forms of logic to target the KADS expertise model: L-KARL and P-KARL.

L-KARL is a variant of frame logic that may be used in the inference and domain layers.

It is, in essence, a combination of first-order logic and semantic data modeling primitives.

P-KARL is a specification language for the task layer; in some versions it is a dynamic logic.

For KADS expertise models, (ML)2 is a formalization language.

The language mixes first-order extended logic for domain layer definition, first-order meta logic for inference layer specification, and quantified dynamic logic for task layer specification.

The concept of compositional architecture is used in DESIRE (the design and specification of interacting reasoning components).

It specifies the dynamic reasoning process using temporal logics.

Transactions describe the interaction between components in knowledge-based systems, and the control flow between any two objects is specified as a set of control rules.

A metadata description is attached to each item.

In a declarative approach, the meta level specifies the dynamic features of the object level.

The need to design large knowledge-based systems prompted the development of knowledge engineering, which entails creating a computer model with the same problem-solving capabilities as human experts.

Knowledge engineering views knowledge-based systems as operational systems that should display some desirable behavior, and provides modeling methodologies, tools, and languages to construct such systems.




Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Clinical Decision Support Systems; Expert Systems; INTERNIST-I and QMR; MOLGEN; MYCIN.



Further Reading:


Schreiber, Guus. 2008. “Knowledge Engineering.” In Foundations of Artificial Intelligence, vol. 3, edited by Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter, 929–46. Amsterdam: Elsevier.

Studer, Rudi, V. Richard Benjamins, and Dieter Fensel. 1998. “Knowledge Engineering: Principles and Methods.” Data & Knowledge Engineering 25, no. 1–2 (March): 161–97.

Studer, Rudi, Dieter Fensel, Stefan Decker, and V. Richard Benjamins. 1999. “Knowledge Engineering: Survey and Future Directions.” In XPS 99: German Conference on Knowledge-Based Systems, edited by Frank Puppe, 1–23. Berlin: Springer.



Artificial Intelligence - The INTERNIST-I And QMR Expert Systems.

 



INTERNIST-I and QMR (Quick Medical Reference) are two similar expert systems created in the 1970s at the University of Pittsburgh School of Medicine.

The INTERNIST-I system was created by Jack D. Myers, who worked with Randolph A. Miller, head of the university's Intelligent Systems Program, artificial intelligence pioneer Harry Pople, and infectious disease specialist Victor Yu to encode his internal medicine knowledge.

The expert system's microcomputer version is known as QMR.

It was created in the 1980s by Fred E. Masarie, Jr., Randolph A. Miller, and Jack D. Myers at the University of Pittsburgh School of Medicine's Section of Medical Informatics.

The two expert systems shared algorithms and are often referred to collectively as INTERNIST-I/QMR.

QMR may be used as a decision support tool, but it can also be used to evaluate physician opinions and recommend laboratory testing.

QMR may also be used as a teaching tool since it includes case scenarios.

INTERNIST-I was created in a medical school course presented at the University of Pittsburgh by Myers, Miller, Pople, and Yu.

The course, The Logic of Problem-Solving in Clinical Diagnosis, required fourth-year students to integrate laboratory and sign-and-symptom data obtained from published and unpublished clinicopathological reports and patient histories.

The technology was also used as a "quizmaster" to test the enrolled students. Instead of using statistical artificial intelligence approaches, the team developed a ranking algorithm, a partitioning algorithm, exclusion functions, and other heuristic rules.

The algorithm generated a prioritized list of likely diagnoses based on the submitted physician findings, as well as responses to follow-up questions.

INTERNIST-I may potentially suggest further lab testing.

By 1982, the project's directors believed that fifteen person-years had been invested in the system.

The system eventually included taxonomic information on 1,000 disorders and three-quarters of all known internal medicine diagnoses, making it very knowledge-intensive.

At the pinnacle of the Greek oracle approach to medical artificial intelligence, the University of Pittsburgh School of Medicine produced INTERNIST-I.

In the system's first generation, the user was treated mostly as a passive spectator.

The system's creators hoped that it might take the role of doctors in locations where they were rare, such as manned space missions, rural communities, and nuclear submarines.

The technology, on the other hand, was time-consuming and difficult to use for paramedics and medical personnel.

To address this challenge, Donald McCracken and Robert Akscyn of neighboring Carnegie Mellon University re-implemented INTERNIST-I in ZOG, an early knowledge management hypertext system.

QMR enhanced INTERNIST-I's user-friendliness while promoting more active investigation of the case study knowledge set.

QMR also used weighted scales and a ranking algorithm to analyze a patient's signs and symptoms and relate them to diagnoses.

By researching the literature on each topic, system designers were able to assess the evocative strength and frequency (or sensitivity) of case findings.

The foundation of QMR is a heuristic algorithm that assesses evocative strength and frequency and assigns a numerical value to them.
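
As a rough, hypothetical illustration of this kind of scoring, the Python sketch below ranks diseases by rewarding findings they evoke and penalizing findings they usually produce but that are absent. The disease profiles, numeric weights, and penalty rule are invented for illustration and do not reproduce the actual INTERNIST-I/QMR knowledge base or algorithm.

```python
# Simplified, hypothetical sketch of evoking-strength / frequency scoring.
# All numbers and disease profiles below are invented.

DISEASE_PROFILES = {
    "pneumonia": {
        "fever":      {"evoking": 3, "frequency": 4},
        "cough":      {"evoking": 2, "frequency": 5},
        "chest_pain": {"evoking": 2, "frequency": 3},
    },
    "influenza": {
        "fever":   {"evoking": 2, "frequency": 5},
        "myalgia": {"evoking": 3, "frequency": 4},
        "cough":   {"evoking": 1, "frequency": 4},
    },
}


def rank_diagnoses(findings_present):
    """Score each disease: reward findings it evokes, penalize findings
    it usually produces but that are absent in this patient."""
    ranked = []
    for disease, profile in DISEASE_PROFILES.items():
        score = 0
        for finding, weights in profile.items():
            if finding in findings_present:
                score += weights["evoking"]
            else:
                score -= weights["frequency"] // 2  # expected but absent
        ranked.append((score, disease))
    return sorted(ranked, reverse=True)


print(rank_diagnoses({"fever", "cough", "chest_pain"}))
```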

In the solution of diagnostic problems, QMR adds rules that enable the system to convey time-sensitive reasoning.

The capacity to create homologies between several related groups of symptoms was one feature of QMR that was not available in INTERNIST-I.

QMR included not just diagnoses that were probable, but also illnesses with comparable histories, signs and symptoms, and early laboratory findings.

The system's accuracy was tested on a regular basis by comparing QMR's output with case files published in The New England Journal of Medicine.

QMR, which was commercially offered to doctors by First DataBank in the 1980s and 1990s, required roughly ten hours of basic training.

Typical runs of the software on individual patient situations were done after hours in private clinics.

QMR's architects recast the expert system as a hyperlinked electronic textbook rather than a clinical decision-maker.

The National Library of Medicine, the NIH Division of Research Resources, and the CAMDAT Foundation all provided funding for INTERNIST-I/QMR.

DXplain, Meditel, and Iliad were three comparable medical artificial intelligence decision aids of the era.

G. Octo Barnett and Stephen Pauker of the Massachusetts General Hospital/Harvard Medical School Laboratory of Computer Science created DXplain with funding help from the American Medical Association.

DXplain's knowledge base was derived from the American Medical Association's (AMA) book Current Medical Information and Terminology (CMIT), which described the causes, symptoms, and test results for over 3,000 disorders.

The diagnostic algorithm at the core of DXplain, like that of INTERNIST-I, used a scoring or ranking procedure as well as modified Bayesian conditional probability computations.
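
To show the Bayesian flavour of such scoring, here is a minimal, hypothetical naive-Bayes-style sketch in Python that updates disease probabilities from conditional probabilities of findings. The priors, likelihoods, and independence assumption are invented for illustration and are not DXplain's actual computation.

```python
# Hypothetical naive-Bayes-style scoring; all probabilities are invented.

PRIORS = {"disease_a": 0.01, "disease_b": 0.05}

# P(finding | disease); findings treated as conditionally independent.
LIKELIHOODS = {
    "disease_a": {"fever": 0.9, "rash": 0.7},
    "disease_b": {"fever": 0.6, "rash": 0.1},
}


def posterior(findings):
    """Return normalized posterior probabilities given the observed findings."""
    unnormalized = {}
    for disease, prior in PRIORS.items():
        p = prior
        for finding in findings:
            p *= LIKELIHOODS[disease].get(finding, 0.01)
        unnormalized[disease] = p
    total = sum(unnormalized.values())
    return {d: p / total for d, p in unnormalized.items()}


print(posterior({"fever", "rash"}))
```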

In the 1990s, DXplain became accessible on diskette for PC users.

Meditel was developed from an earlier computerized decision aid, the Meditel Pediatric System, by Albert Einstein Medical Center educator Herbert Waxman and physician William Worley of the University of Pennsylvania Department of Medicine in the mid-1970s.

Using Bayesian statistics and heuristic decision principles, Meditel aided in suggesting probable diagnosis.

Meditel was marketed as a doc-in-a-box software package for IBM personal computers in the 1980s by Elsevier Science Publishing Company.

Dr. Homer Warner and his partners at the Knowledge Engineering Center of the Department of Medical Informatics at the University of Utah nurtured Iliad, a third medical AI competitor.

The federal government awarded Applied Medical Informatics a two-million-dollar grant in the early 1990s to integrate Iliad's diagnostic software directly with computerized databases of patient data.

Iliad's core target was doctors and medical students, but in 1994, the company produced Medical HouseCall, a consumer version of Iliad.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Clinical Decision Support Systems; Computer-Assisted Diagnosis.




Further Reading:


Bankowitz, Richard A. 1994. The Effectiveness of QMR in Medical Decision Support: Executive Summary and Final Report. Springfield, VA: U.S. Department of Commerce, National Technical Information Service.

Freiherr, Gregory. 1979. The Seeds of Artificial Intelligence: SUMEX-AIM. NIH Publication 80-2071. Washington, DC: National Institutes of Health, Division of Research Resources.

Lemaire, Jane B., Jeffrey P. Schaefer, Lee Ann Martin, Peter Faris, Martha D. Ainslie, and Russell D. Hull. 1999. “Effectiveness of the Quick Medical Reference as a Diagnostic Tool.” Canadian Medical Association Journal 161, no. 6 (September 21): 725–28.

Miller, Randolph A., and Fred E. Masarie, Jr. 1990. “The Demise of the Greek Oracle Model for Medical Diagnosis Systems.” Methods of Information in Medicine 29, no. 1: 1–2.

Miller, Randolph A., Fred E. Masarie, Jr., and Jack D. Myers. 1986. “Quick Medical Reference (QMR) for Diagnostic Assistance.” MD Computing 3, no. 5: 34–48.

Miller, Randolph A., Harry E. Pople, Jr., and Jack D. Myers. 1982. “INTERNIST-1: An Experimental Computer-Based Diagnostic Consultant for General Internal Medicine.” New England Journal of Medicine 307, no. 8: 468–76.

Myers, Jack D. 1990. “The Background of INTERNIST-I and QMR.” In A History of Medical Informatics, edited by Bruce I. Blum and Karen Duncan, 427–33. New York: ACM Press.

Myers, Jack D., Harry E. Pople, Jr., and Jack D. Myers. 1982. “INTERNIST: Can Artificial Intelligence Help?” In Clinical Decisions and Laboratory Use, edited by Donald P. Connelly, Ellis S. Benson, M. Desmond Burke, and Douglas Fenderson, 251–69. Minneapolis: University of Minnesota Press.

Pople, Harry E., Jr. 1976. “Presentation of the INTERNIST System.” In Proceedings of the AIM Workshop. New Brunswick, NJ: Rutgers University.






Artificial Intelligence - What Is Automated Multiphasic Health Testing?

 




Automated Multiphasic Health Testing (AMHT) is an early medical computer system for semiautomatically screening large numbers of people, ill or healthy, in a short period of time.

Lester Breslow, a public health official, pioneered the AMHT idea in 1948, combining automated medical questionnaires with mass screening procedures for groups of individuals being examined for specific illnesses such as diabetes, tuberculosis, or heart disease.

Multiphasic health testing involves integrating a number of tests into a single package to screen a group of individuals for different diseases, illnesses, or injuries.

AMHT might be related to regular physical exams or health programs.

Humans are subjected to examinations similar to those used in state inspections of autos.

In other words, AMHT approaches preventative medical care in a factory-like manner.

In the 1950s, Automated Multiphasic Health Testing (AMHT) became popular, allowing health care networks to swiftly screen new candidates.

In 1951, the Kaiser Foundation Health Plan began offering a Multiphasic Health Checkup to its members.

Morris F. Collen, an electrical engineer and physician, was the program's director from 1961 until 1979.

The "Kaiser Checkup," which used an IBM 1440 computer to crunch data from patient interviews, lab testing, and clinical findings, looked for undetected illnesses and made treatment suggestions.

Patients hand-sorted 200 prepunched cards with printed questions requiring "yes" or "no" replies at the questionnaire station (one of twenty such stations).

The computer shuffled the cards and used a probability ratio test devised by Jerzy Neyman, a well-known statistician.
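A probability (likelihood) ratio test of this kind compares how likely the patient's questionnaire answers are under a "diseased" hypothesis versus a "healthy" one, flagging the patient for follow-up when the ratio exceeds a threshold. The Python sketch below is a minimal illustration under that interpretation; the questionnaire items, probabilities, and threshold are invented assumptions, not values from the Kaiser system.

```python
# Minimal sketch of a likelihood-ratio screening rule; all numbers are invented.

import math

# (P(answer = "yes" | diseased), P(answer = "yes" | healthy))
# for a few hypothetical questionnaire items.
ITEM_PROBS = {
    "frequent_thirst": (0.80, 0.10),
    "blurred_vision":  (0.40, 0.05),
    "family_history":  (0.50, 0.20),
}

THRESHOLD = 3.0  # flag the patient for follow-up above this ratio


def likelihood_ratio(answers):
    """answers maps item -> True/False ('yes'/'no' card responses)."""
    log_ratio = 0.0
    for item, (p_yes_sick, p_yes_healthy) in ITEM_PROBS.items():
        yes = answers.get(item, False)
        p_sick = p_yes_sick if yes else 1 - p_yes_sick
        p_healthy = p_yes_healthy if yes else 1 - p_yes_healthy
        log_ratio += math.log(p_sick / p_healthy)
    return math.exp(log_ratio)


answers = {"frequent_thirst": True, "blurred_vision": False, "family_history": True}
ratio = likelihood_ratio(answers)
print(ratio, "flag for follow-up" if ratio > THRESHOLD else "no flag")
```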

Electrocardiographic, spirographic, and ballistocardiographic medical data were also captured by Kaiser's computer system.

A Kaiser Checkup took around two and a half hours to complete.

BUPA in the United Kingdom and a nationwide program created by the Swedish government are two examples of similar AMHT initiatives that have been introduced in other countries.

The popularity of computerized health testing has fallen in recent decades.

There are issues concerning privacy as well as financial considerations.

Working with AMHT, doctors and computer scientists learned that the body typically masks symptoms.

A sick person may pass through diagnostic devices successfully one day and then die the next.

Electronic medical recordkeeping, on the other hand, has succeeded where AMHT has failed.

Without physical handling or duplication, records may be sent, modified, and returned.

Multiple health providers may utilize patient charts at the same time.

Uniform data input ensures readability and consistency in structure.

Summary reports may now be generated automatically by electronic medical record software from the information gathered in individual patient records.

These "big data" reports make it possible to monitor changes in medical practice as well as evaluate results over time.

Summary reports also enable cross-patient analysis, a detailed algorithmic examination of prognoses by patient groups, and the identification of risk factors prior to the need for therapy.

The application of deep learning algorithms to medical data has sparked a surge of interest in so-called cognitive computing for health care.

IBM's Watson system and Google DeepMind Health, two current leaders, promise changes in eye illness and cancer detection and treatment.

Also unveiled by IBM is the Medical Sieve system, which analyzes both radiological images and textual documents.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Clinical Decision Support Systems; Computer-Assisted Diagnosis; INTERNIST-I and QMR.


Further Reading


Ayers, W. R., H. M. Hochberg, and C. A. Caceres. 1969. “Automated Multiphasic Health Testing.” Public Health Reports 84, no. 7 (July): 582–84.

Bleich, Howard L. 1994. “The Kaiser Permanente Health Plan, Dr. Morris F. Collen, and Automated Multiphasic Testing.” MD Computing 11, no. 3 (May–June): 136–39.

Collen, Morris F. 1965. “Multiphasic Screening as a Diagnostic Method in Preventive Medicine.” Methods of Information in Medicine 4, no. 2 (June): 71–74.

Collen, Morris F. 1988. “History of the Kaiser Permanente Medical Care Program.” Interviewed by Sally Smith Hughes. Berkeley: Regional Oral History Office, Bancroft Library, University of California.

Mesko, Bertalan. 2017. “The Role of Artificial Intelligence in Precision Medicine.” Expert Review of Precision Medicine and Drug Development 2, no. 5 (September): 239–41.

Roberts, N., L. Gitman, L. J. Warshaw, R. A. Bruce, J. Stamler, and C. A. Caceres. 1969. “Conference on Automated Multiphasic Health Screening: Panel Discussion, Morning Session.” Bulletin of the New York Academy of Medicine 45, no. 12 (December): 1326–37.


