
Artificial Intelligence - What Is The MYCIN Expert System?




MYCIN is an interactive expert system for infectious disease diagnosis and treatment, developed by computer scientists Edward Feigenbaum (1936–) and Bruce Buchanan at Stanford University in the 1970s.

MYCIN was Feigenbaum's second expert system (after DENDRAL), but it was the first to be commercially accessible as a standalone software package.

By the 1980s, TeKnowledge, the software company cofounded by Feigenbaum and several partners, was offering EMYCIN, the most successful expert system shell of its day.

MYCIN was developed by Feigenbaum's Heuristic Programming Project (HPP) in collaboration with Stanford Medical School's Infectious Diseases Group (IDG).

The expert clinical physician was IDG's Stanley Cohen.

In the early 1970s, Feigenbaum and Buchanan had read reports of antibiotics being prescribed incorrectly because of misdiagnoses.

MYCIN was created to assist a human expert in making the best judgment possible.

MYCIN started out as a consultation tool.

After the physician entered the results of a patient's blood tests, bacterial cultures, and other data, MYCIN supplied a diagnosis that included the appropriate antibiotics and dosage.



MYCIN also served as an explanation system.

The physician-user could ask MYCIN, in plain English, to explain any particular inference.

Finally, MYCIN had a knowledge-acquisition software that was used to keep the system's knowledge base up to date.

Feigenbaum and his collaborators introduced two additional features to MYCIN after gaining experience with DENDRAL.

First, MYCIN's inference engine included a rule interpreter.

This enabled "goal-directed backward chaining" to be used to achieve diagnostic findings (Cendrowska and Bramer 1984, 229).

At each phase of the procedure, MYCIN set itself the goal of establishing a useful clinical parameter that matched the patient data submitted so far.

The inference engine looked for a set of rules that applied to the parameter in question.

Evaluating the premise of one of the rules in this parameter set typically required further information.

The system's next subgoal was to get that data.

To satisfy that subgoal, MYCIN might try additional rules or ask the physician for more information.

This process repeated until MYCIN had enough data on the relevant parameters to make a diagnosis.
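The goal-and-subgoal loop described above can be sketched in miniature. This is a hypothetical illustration: the parameter names, rule contents, and tiny rule base below are invented, not drawn from MYCIN's actual knowledge base.

```python
# Hypothetical miniature of MYCIN-style goal-directed backward chaining.
# Rules pair a set of premises with a single conclusion.
RULES = [
    ({"gram_stain=negative", "morphology=rod"}, "class=enterobacteriaceae"),
    ({"class=enterobacteriaceae", "site=blood"}, "organism=e.coli"),
]

def backward_chain(goal, known, ask):
    """Try to establish `goal`, recursively spawning subgoals for
    unknown premises; fall back to asking the user (the physician)."""
    if goal in known:
        return True
    for premises, conclusion in RULES:
        if conclusion == goal and all(
            backward_chain(p, known, ask) for p in premises
        ):
            known.add(goal)
            return True
    # No rule concludes the goal: request the data, as MYCIN would.
    if ask(goal):
        known.add(goal)
        return True
    return False

facts = {"gram_stain=negative", "morphology=rod"}
result = backward_chain("organism=e.coli", facts,
                        ask=lambda q: q == "site=blood")
print(result)  # True: the subgoal chain succeeded
```

Because backward chaining starts from the conclusion to be established and works back toward the data, a system built this way only asks the physician for findings that are actually relevant to the current line of reasoning.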

The certainty factor was MYCIN's second unique feature.

These factors should not be seen "as conditional probabilities, [though] they are loosely grounded on probability theory," according to William van Melle, then a doctoral student working on MYCIN for his thesis project (van Melle 1978, 314).

MYCIN assigned each fired production rule a value between –1 and +1, depending on how strongly the system rated the conclusion's correctness.

MYCIN's diagnosis also included these certainty factors, allowing the physician-user to make their own final decision.
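When several rules bore on the same conclusion, their certainty factors had to be merged. The sketch below assumes the combining function usually described for MYCIN in the literature; it is an illustration, not a transcription of MYCIN's code.

```python
def combine_cf(a, b):
    """Combine two certainty factors in [-1, 1], MYCIN-style:
    confirming evidence accumulates toward +1, disconfirming
    evidence toward -1, and mixed evidence partially cancels."""
    if a >= 0 and b >= 0:
        return a + b * (1 - a)
    if a <= 0 and b <= 0:
        return a + b * (1 + a)
    return (a + b) / (1 - min(abs(a), abs(b)))

# Two rules each supporting the same conclusion, with CF 0.6 and 0.4:
print(round(combine_cf(0.6, 0.4), 2))   # 0.76
# Conflicting evidence partially cancels:
print(round(combine_cf(0.6, -0.4), 2))  # 0.33
```

Note that combining two moderately confident rules yields a result more confident than either alone, which is the behavior the designers wanted from accumulating clinical evidence.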

The software package, known as EMYCIN, was released in 1976 and comprised an inference engine, user interface, and short-term memory.

It contained no domain knowledge of its own.

("E" stood for "Empty" at first, then "Essential.") Customers of EMYCIN were required to link their own knowledge base to the system.

Faced with high demand for EMYCIN packages and high interest in MOLGEN (Feigenbaum's third expert system), HPP decided to form IntelliCorp and TeKnowledge, the first two expert system firms.

TeKnowledge was eventually founded by a group of roughly twenty individuals, including all of the previous HPP students who had developed expert systems.

EMYCIN was, and long remained, the company's most popular product.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Expert Systems; Knowledge Engineering


References & Further Reading:


Cendrowska, J., and M. A. Bramer. 1984. “A Rational Reconstruction of the MYCIN Consultation System.” International Journal of Man-Machine Studies 20 (March): 229–317.

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Feigenbaum, Edward. 2000. “Oral History.” Charles Babbage Institute, October 13, 2000.

van Melle, William. 1978. “MYCIN: A Knowledge-based Consultation Program for Infectious Disease Diagnosis.” International Journal of Man-Machine Studies 10 (May): 313–22.







Artificial Intelligence - What Is The MOLGEN Expert System?

 



MOLGEN is an expert system, developed between 1975 and 1980, that helped molecular biologists and geneticists plan experiments.

It was the third expert system (after DENDRAL and MYCIN) to emerge from Edward Feigenbaum's Heuristic Programming Project (HPP) at Stanford University.

MOLGEN, like MYCIN before it, attracted hundreds of users outside of Stanford.

MOLGEN was originally made accessible to artificial intelligence researchers, molecular biologists, and geneticists via time-sharing on the GENET network in the 1980s.

Feigenbaum and colleagues later founded IntelliGenetics (whose AI division eventually became IntelliCorp) to offer a stand-alone software version of MOLGEN.

Scientific advancements in chromosomes and genes sparked an information boom in the early 1970s.

In 1971, Stanford University scientist Paul Berg performed the first gene splicing studies.

Stanford geneticist Stanley Cohen and University of California at San Francisco biochemist Herbert Boyer succeeded in inserting recombinant DNA into an organism two years later; the host organism (a bacterium) then spontaneously replicated the foreign rDNA structure in its progeny.

Because of these developments, Stanford molecular researcher Joshua Lederberg told Feigenbaum that the time was right to construct an expert system in Lederberg's own field of molecular biology.

(Lederberg and Feigenbaum had previously collaborated on DENDRAL, the first expert system.) The two agreed that MOLGEN could do for recombinant DNA research and genetic engineering what DENDRAL had done for mass spectrometry.

Both expert systems were created with developing scientific topics in mind.

This enabled MOLGEN (and DENDRAL) to absorb the most up-to-date scientific information and contribute to the advancement of their respective fields.

Mark Stefik and Peter Friedland developed programs for MOLGEN as their thesis projects at HPP, with Feigenbaum as principal investigator.

MOLGEN was designed to follow a “skeletal plan” (Friedland and Iwasaki 1985, 161).

MOLGEN prepared a new experiment in the manner of a human expert, beginning with a design approach that had previously proven effective for a comparable issue.

MOLGEN then made hierarchical, step-by-step changes to the plan.

Thanks to the combination of skeletal plans and MOLGEN's large knowledge base in molecular biology, the program was able to choose the most promising new experiments.
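The refinement idea can be sketched as follows. The plan steps, laboratory techniques, and rankings below are invented for illustration; MOLGEN's real knowledge base and planning machinery were far richer.

```python
# Hypothetical sketch of skeletal-plan refinement in the spirit of MOLGEN.
# An abstract plan for a cloning-style experiment:
SKELETAL_PLAN = ["obtain_gene", "insert_into_vector",
                 "transform_host", "screen"]

# Domain knowledge maps each abstract step to concrete lab techniques,
# listed in order of how promising they are for the problem at hand.
REFINEMENTS = {
    "obtain_gene": ["restriction_digest", "chemical_synthesis"],
    "insert_into_vector": ["ligate_into_plasmid"],
    "transform_host": ["transform_e_coli"],
    "screen": ["antibiotic_selection", "colony_hybridization"],
}

def refine(plan, knowledge):
    """Replace each abstract step with its top-ranked concrete technique,
    turning the skeletal plan into an executable experiment design."""
    return [knowledge[step][0] for step in plan]

concrete_plan = refine(SKELETAL_PLAN, REFINEMENTS)
print(concrete_plan)
```

The key design point is the separation of a reusable abstract plan from the domain knowledge used to instantiate it, so the same skeleton can serve many similar experiments.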

MOLGEN contained 300 lab procedures and strategies, as well as current data on forty genes, phages, plasmids, and nucleic acid structures, by 1980.

Friedland and Stefik equipped MOLGEN with a set of programs based on the molecular biology knowledge of Stanford University's Douglas Brutlag, Larry Kedes, John Sninsky, and Rosalind Grymes.

Among them were SEQ (for nucleic acid sequence analysis), GA1 (for generating enzyme maps of DNA structures), and SAFE (for selecting enzymes most suitable for gene excision).

Beginning in February 1980, MOLGEN was made available to the molecular biology community outside of Stanford.

Under an account named GENET, the system was linked to SUMEX AIM (Stanford University Medical Experimental Computer for Artificial Intelligence in Medicine).

GENET quickly attracted hundreds of users across the United States.

Academic scholars, experts from commercial giants like Monsanto, and researchers from modest start-ups like Genentech were among the frequent visitors.

The National Institutes of Health (NIH), which was SUMEX AIM's primary supporter, finally concluded that business customers could not have unfettered access to cutting-edge technology produced with public funds.

Instead, the National Institutes of Health encouraged Feigenbaum, Brutlag, Kedes, and Friedland to form IntelliGenetics, a company that catered to commercial biotech customers.

IntelliGenetics created BIONET with the support of a $5.6 million NIH grant over five years to sell or rent MOLGEN and other GENET applications.

For a $400 yearly charge, 900 labs throughout the globe had access to BIONET by the end of the 1980s.

Companies that did not wish to put their data on BIONET could purchase a software package from IntelliGenetics instead.

MOLGEN's software did not sell well as a stand-alone product, and in the mid-1980s IntelliGenetics withdrew the genetics material and retained only the underlying Knowledge Engineering Environment (KEE).

IntelliGenetics' AI division, which marketed the new KEE shell, changed its name to IntelliCorp.

Two more public offerings followed, but growth finally slowed.

According to Feigenbaum, the commercial success of MOLGEN's shell was hampered by its implementation in LISP; although LISP was favored by pioneering computer scientists working on mainframes, it inspired far less interest in the corporate minicomputer sector.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.






See also: 


DENDRAL; Expert Systems; Knowledge Engineering.


References & Further Reading:


Feigenbaum, Edward. 2000. Oral History. Minneapolis, MN: Charles Babbage Institute.

Friedland, Peter E., and Yumi Iwasaki. 1985. “The Concept and Implementation of Skeletal Plans.” Journal of Automated Reasoning 1: 161–208.

Friedland, Peter E., and Laurence H. Kedes. 1985. “Discovering the Secrets of DNA.” Communications of the ACM 28 (November): 1164–85.

Lenoir, Timothy. 1998. “Shaping Biomedicine as an Information Science.” In Proceedings of the 1998 Conference on the History and Heritage of Science Information Systems, edited by Mary Ellen Bowden, Trudi Bellardo Hahn, and Robert V. Williams, 27–46. Pittsburgh, PA: Conference on the History and Heritage of Science Information Systems.

Watt, Peggy. 1984. “Biologists Map Genes On-Line.” InfoWorld 6, no. 19 (May 7): 43–45.







Artificial Intelligence - Knowledge Engineering In Expert Systems.

  


Knowledge engineering (KE) is a subfield of artificial intelligence that aims to incorporate expert knowledge into a formal automated programming system, in such a way that the system can produce the same or comparable problem-solving results as human experts working with the same data set.

Knowledge engineering, more precisely, is a discipline that develops methodologies for constructing large knowledge-based systems (KBS), also known as expert systems, using appropriate methods, models, tools, and languages.

For knowledge elicitation, modern knowledge engineering uses the knowledge acquisition and documentation structuring (KADS) approach; hence, the development of knowledge-based systems is considered a modeling effort (i.e., knowledge engineering builds up computer models).

It's challenging to codify the knowledge acquisition process since human specialists' knowledge is a combination of skills, experience, and formal knowledge.

As a result, rather than directly transferring knowledge from human experts to the programming system, the experts' knowledge is modeled.

Simultaneously, direct simulation of the entire cognitive process of experts is extremely difficult.

The resulting computer models are expected to achieve results comparable to those of experts solving problems in the domain, rather than to match the experts' cognitive processes.

As a result, knowledge engineering focuses on modeling and problem-solving methods (PSM) that are independent of various representation formalisms (production rules, frames, etc.).

The problem solving method is a key component of knowledge engineering, and it refers to the knowledge-level specification of a reasoning pattern that can be used to complete a knowledge-intensive task.

Each problem-solving technique is a pattern that offers template structures for addressing a specific issue.

The terms "diagnostic," "classification," and "configuration" are often used to categorize problem-solving strategies based on their topology.

PSM "Cover-and-Differentiate" for diagnostic tasks and PSM "Propose-and-Reverse" for parametric design tasks are two examples.

Any problem-solving approach is predicated on the notion that the suggested method's logical adequacy corresponds to the computational tractability of the system implementation based on it.

The PSM heuristic classification, an inference pattern that defines the behavior of knowledge-based systems in terms of goals and the knowledge needed to attain them, was often used in early expert systems.

Inference actions and knowledge roles, as well as their relationships, are covered by this problem-solving strategy.

The relationships specify how domain knowledge is used in each inference action.

Observables, abstract observables, solution abstractions, and solutions are the knowledge roles, while the inference actions are abstract, heuristic match, and refine.

The PSM heuristic classification requires hierarchically organized models of observables and solutions for "abstract" and "refine," making it suited for static domain knowledge acquisition.
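The three inference actions can be sketched as a small pipeline. The observables, thresholds, and solution classes below are invented for illustration and are not drawn from any real system.

```python
# Minimal sketch of heuristic classification:
# abstract the observables, heuristically match them to an abstract
# solution class, then refine that class into a specific solution.

def abstract(observables):
    """Data abstraction: derive qualitative features from raw values."""
    features = set(observables)
    if observables.get("wbc", 0) > 11000:
        features.add("elevated_wbc")
    return features

HEURISTIC_MATCH = {
    # abstract observable -> abstract solution class
    "elevated_wbc": "infection",
}

REFINE = {
    # solution class -> a simple test selecting a more specific solution
    "infection": lambda feats: "bacterial_infection"
                 if "fever" in feats else "viral_infection",
}

def classify(observables):
    features = abstract(observables)
    for feature, solution_class in HEURISTIC_MATCH.items():
        if feature in features:
            return REFINE[solution_class](features)
    return "unknown"

print(classify({"wbc": 15000, "fever": True}))  # bacterial_infection
```

Each dictionary plays one of the knowledge roles named above, and the control flow (abstract, then match, then refine) is exactly the fixed inference pattern that makes the method reusable across domains.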

In the late 1980s, knowledge engineering modeling methodologies shifted toward role limiting methods (RLM) and generic tasks (GT).

The idea of the "knowledge role" is utilized in role-limiting methods to specify how specific domain knowledge is employed in the problem-solving process.

RLM creates a wrapper over PSM by explaining it in broad terms with the purpose of reusing it.

However, since this technique covers only a single instance of a PSM, it is ineffective for problems that require several methods.

Configurable role limiting methods (CRLM) are an extension of the role limiting methods concept, offering a predetermined collection of RLMs as well as a fixed scheme of knowledge categories.

Each member method may be applied to a distinct subset of a task, but introducing a new method is difficult because it necessitates changes to the established knowledge categories.

The generic task method includes a predefined scheme of knowledge kinds and an inference mechanism, as well as a general description of input and output.

The generic task is based on the "strong interaction problem hypothesis," which claims that domain knowledge's structure and representation may be totally defined by its application.

Each generic task makes use of knowledge and employs control mechanisms tailored to that knowledge.

Because the control techniques are more domain-specific, the actual knowledge acquisition employed in GT is more precise in terms of problem-solving step descriptions.

As a result, the design of specialized knowledge-based systems may be thought of as the instantiation of specified knowledge categories using domain-specific words.

The downside of GT is that it may not be possible to integrate a specified problem-solving approach with the optimum problem-solving strategy required to complete the assignment.

The task structure (TS) approach seeks to address GT's shortcomings by distinguishing between the job and the technique employed to complete it.

As a result, every task-structure based on that method postulates how the issue might be solved using a collection of generic tasks, as well as what knowledge has to be acquired or produced for these tasks.

Because of the requirement for several models, modeling frameworks were created to meet various parts of knowledge engineering methodologies.

The most common knowledge engineering framework, CommonKADS (which builds on KADS), comprises six models: the organizational model, task model, agent model, communication model, expertise model, and design model.

The organizational model explains the structure as well as the tasks that each unit performs.

The task model describes tasks in a hierarchical order.

Each agent's skills in task execution are specified by the agent model.

The communication model specifies how agents interact with one another.

The most significant is the expertise model, which comprises several layers: it represents domain-specific knowledge (the domain layer) and the inferences of the reasoning process (the inference layer).

The expertise model also supports a task layer, which is concerned with task decomposition.

The system architecture and computational mechanisms used to make the inference are described in the design model.

In CommonKADS, there is a clear distinction between domain-specific knowledge and generic problem-solving techniques, allowing various problems to be addressed by constructing a new instance of the domain layer and utilizing the PSM on a different domain.
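That separation can be illustrated with a toy sketch: one generic, domain-independent problem-solving method applied to two different domain layers. The rules and domains below are invented for illustration.

```python
# Sketch of CommonKADS-style separation of concerns: a generic PSM
# (here, trivial rule-based classification) reused across domain layers.

def classify_psm(rules, observations):
    """Generic problem-solving method: return the conclusions of every
    rule whose premises are all present in the observations.
    The PSM knows nothing about any particular domain."""
    return [conclusion for premises, conclusion in rules
            if premises <= observations]

# Two independent domain layers, each a list of (premises, conclusion):
medical_domain = [({"fever", "cough"}, "flu")]
geology_domain = [({"quartz", "feldspar"}, "granite")]

print(classify_psm(medical_domain, {"fever", "cough", "headache"}))
print(classify_psm(geology_domain, {"quartz", "feldspar"}))
```

Swapping in a new domain layer requires no change to the PSM itself, which is the reuse property the CommonKADS architecture is designed to deliver.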

Several libraries of problem-solving algorithms are now available for use in development.

They are distinguished by several key characteristics: whether the library was created for a specific purpose or has broader scope; whether it is formal, informal, or implemented; whether it uses fine-grained or coarse-grained PSMs; and, finally, its size.

Recently, some research has been carried out with the goal of unifying existing libraries by offering adapters that convert task-neutral PSM to task-specific PSM.

The MIKE (model-based and incremental knowledge engineering) method, which proposes integrating semiformal and formal specification and prototyping into the framework, grew out of the creation of CommonKADS.

As a result, MIKE divides the entire process of developing knowledge-based systems into a number of sub-activities, each of which focuses on a different aspect of system development.

The Protégé method makes use of PSMs and ontologies, with an ontology defined as an explicit specification of a shared conceptualization that holds in a particular context.

Although the ontologies used in Protégé might be of any form, the ones utilized are domain ontologies, which describe the common conceptualization of a domain, and method ontologies, which specify the ideas and relations used by problem solving techniques.

In addition to problem-solving techniques, the development of knowledge-based systems necessitates the creation of particular languages capable of defining the information needed by the system as well as the reasoning process that will use that knowledge.

The purpose of such languages is to give a clear and formal foundation for expressing knowledge models.

Furthermore, some of these formal languages may be executable, allowing simulation of knowledge model behavior on specified input data.

In the early years, knowledge was encoded directly in rule-based implementation languages.

This caused a slew of issues, including the inability to represent certain forms of knowledge, the difficulty of ensuring consistent representation of different types of knowledge, and a lack of detail.

Modern approaches to language development aim to target and formalize the conceptual models of knowledge-based systems, allowing users to precisely define the goals and process for obtaining models, as well as the functionality of interface actions and accurate semantics of the various domain knowledge elements.

The majority of these epistemological languages include primitives like constants, functions, and predicates, as well as certain mathematical operations.

Object-oriented or frame-based languages, for example, define a wide range of modeling primitives such as objects and classes.

KARL, (ML)2, and DESIRE are the most common examples of specific languages.

KARL is a language that employs a Horn logic variation.

It was created as part of the MIKE project and combines two forms of logic to target the KADS expertise model: L-KARL and P-KARL.

L-KARL is a variant of frame logic that may be used in the inference and domain layers.

In fact, it combines first-order logic with semantic data modeling primitives.

P-KARL is a task layer specification language that is also a dynamic logic in some versions.

For KADS expertise models, (ML)2 is a formalization language.

The language mixes first-order extended logic for domain layer definition, first-order meta logic for inference layer specification, and quantified dynamic logic for task layer specification.

The concept of compositional architecture is used in DESIRE (the design and specification of interconnected reasoning components).

It specifies the dynamic reasoning process using temporal logics.

Transactions describe the interaction between components in knowledge-based systems, and control flow between any two objects is specified as a set of control rules.

A metadata description is attached to each item.

In a declarative approach, the meta level specifies the dynamic features of the object level.

The need to design large knowledge-based systems prompted the development of knowledge engineering, which entails creating a computer model with the same problem-solving capabilities as human experts.

Knowledge engineering views knowledge-based systems as operational systems that should display some desirable behavior, and provides modeling methodologies, tools, and languages to construct such systems.




Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Clinical Decision Support Systems; Expert Systems; INTERNIST-I and QMR; MOLGEN; MYCIN.



Further Reading:


Schreiber, Guus. 2008. “Knowledge Engineering.” In Foundations of Artificial Intelligence, vol. 3, edited by Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter, 929–46. Amsterdam: Elsevier.

Studer, Rudi, V. Richard Benjamins, and Dieter Fensel. 1998. “Knowledge Engineering: Principles and Methods.” Data & Knowledge Engineering 25, no. 1–2 (March): 161–97.

Studer, Rudi, Dieter Fensel, Stefan Decker, and V. Richard Benjamins. 1999. “Knowledge Engineering: Survey and Future Directions.” In XPS 99: German Conference on Knowledge-Based Systems, edited by Frank Puppe, 1–23. Berlin: Springer.


