AI Glossary - What Is Artificial Intelligence in Medicine Or AIM?

 



AIM is an abbreviation for Artificial Intelligence in Medicine.

It is included in the field of Medical Informatics.


Artificial Intelligence in Medicine (the journal) publishes original papers on the theory and application of artificial intelligence (AI) in medicine, medically oriented human biology, and health care from a range of multidisciplinary perspectives.

Medical informatics is the study and implementation of methods to improve the management of patient data, clinical knowledge, demographic data, and other information related to patient care and community health.

It is a relatively young field, having arisen in the decades after the invention of the digital computer in the 1940s.


What Is Artificial Intelligence's Importance in Healthcare and Medicine?


  • Artificial intelligence can help physicians choose the best cancer therapies from a variety of possibilities. 
  • AI helps physicians identify and choose the right drugs for the right patients by capturing data from various databases relating to the condition. 
  • AI also supports decision-making processes for existing drugs and expanded treatments for other conditions, as well as expediting clinical trials by finding the right patients from a variety of data sources.



What role does artificial intelligence play in medicine and healthcare?


Medical imaging analysis is aided by AI.

It helps physicians evaluate medical images and scans. 

This allows radiologists and cardiologists to find crucial information for prioritizing urgent patients, avoiding possible mistakes in reading electronic health records (EHRs), and establishing more exact diagnoses.


What Are The Advantages of AI in Healthcare?


Artificial intelligence (AI) has emerged as the most potent agent of change in the healthcare industry over the past decade. 

Learn how healthcare professionals can benefit from artificial intelligence.

There are many opportunities for healthcare institutions to use AI to offer more effective, efficient, and precise interventions to their patients, ranging from diagnosis and risk assessment to treatment selection.


AI is positioned to generate innovations and benefits throughout the care continuum as the amount of healthcare data grows. 

This is based on the capacity of AI technologies and machine learning (ML) algorithms to provide proactive, intelligent, and often hidden insights that guide diagnostic and treatment decisions.


When used in the areas of improved treatment, chronic illness management, early risk detection, and workflow automation and optimization, AI may be immensely valuable to both patients and clinicians. 


Below are some advantages of adopting AI in healthcare, to help providers better understand how to use it in their ecosystem.


Management of Population Health Using AI.


Healthcare organizations may use artificial intelligence to gather and analyze patient health data in order to proactively detect and mitigate risk, close gaps in preventive care, and better understand how clinical, genetic, behavioral, and environmental variables influence the population. 

Combining diagnostic data, exam results, and unstructured narrative data provides a complete perspective of a patient's health, as well as actionable insights that help to avoid illness and promote wellness. 


AI-powered systems may help compile, evaluate, and compare a slew of such data points to population-level trends in order to uncover early illness risks.


As these data points accumulate into a picture of the population, predictive analytics can be derived. 

These findings may subsequently be used for population risk stratification based on genetic and phenotypic variables, as well as behavioral and social determinants. 

Healthcare companies may use these insights to deliver more tailored, data-driven treatment while also optimizing resource allocation and use, resulting in improved patient outcomes.
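
To make the risk stratification described above concrete, here is a minimal, hedged sketch that fits a logistic regression model to synthetic patient features and buckets patients into low, medium, and high risk tiers. The feature names, weights, thresholds, and data are invented for illustration only; a real population health model would draw on far richer clinical, genetic, behavioral, and social data.

```python
# Illustrative sketch only: synthetic data and invented feature names,
# not a production population-health model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: age, BMI, smoker flag, prior admissions.
X = np.column_stack([
    rng.normal(55, 15, 500),   # age
    rng.normal(27, 5, 500),    # BMI
    rng.integers(0, 2, 500),   # smoker (0/1)
    rng.poisson(0.5, 500),     # prior admissions
])
# Synthetic outcome loosely tied to the features, purely for demonstration.
logits = (0.04 * (X[:, 0] - 55) + 0.08 * (X[:, 1] - 27)
          + 0.9 * X[:, 2] + 0.5 * X[:, 3] - 1.0)
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# Stratify the population into tiers from predicted probabilities.
risk = model.predict_proba(X)[:, 1]
tiers = np.digitize(risk, bins=[0.2, 0.5])   # 0 = low, 1 = medium, 2 = high
print({tier: int((tiers == tier).sum()) for tier in (0, 1, 2)})
```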



Making Clinical Decisions Using AI.


Artificial intelligence may help minimize the time and money required to assess and diagnose patients in some healthcare procedures. 

As a result, medical workers can act faster and save more lives. 

Traditional procedures cannot detect danger as quickly or accurately as machine learning (ML) algorithms can. 

These algorithms, when used effectively, may automate inefficient, manual operations, speeding up diagnosis and lowering diagnostic mistakes, which are still the leading cause of medical malpractice lawsuits.


Furthermore, AI-enabled technologies may assemble and sift through enormous amounts of clinical data to give doctors a more holistic view of the health status of patient populations. 

These technologies provide the care team with real-time or near-real-time actionable information at the proper time and location to improve treatment outcomes dramatically. 

By automating the gathering and analysis of the terabytes of data streaming inside the hospital walls, the whole care team can work at the top of their licenses.



Artificial Intelligence-Assisted Surgery


Surgical robotics applications are one of the most inventive AI use cases in healthcare. 


As AI robotics has matured, surgical systems have been developed that can execute even the smallest motions with great precision. 

These devices can carry out difficult surgical procedures, lowering the typical procedure wait time as well as the risk of blood loss, complications, and other adverse effects.


Machine learning may also help to facilitate surgical procedures. 


It can provide surgeons and healthcare workers with real-time data and sophisticated insights into a patient's current condition. 

This AI-assisted information allows them to make quick, informed choices before, during, and after procedures to ensure the best possible outcomes.



Improved Access to Healthcare Using AI.


Studies indicate considerable differences in average life expectancy between industrialized and developing countries, largely as a consequence of limited or nonexistent access to healthcare. 


In terms of implementing and exploiting modern medical technology that can provide proper treatment to the public, developing countries lag behind their peers. 


In addition, a lack of skilled healthcare personnel (such as surgeons, radiologists, and ultrasound technicians) and appropriately equipped healthcare facilities has an influence on care delivery in these areas. 

To encourage a more efficient healthcare ecosystem, AI can offer a digital infrastructure that allows for speedier identification of symptoms and triage of patients to the appropriate level and modality of treatment.



In healthcare, AI may help alleviate the scarcity of doctors in rural, low-resource locations by taking over some diagnostic responsibilities. 


Using machine learning for imaging, for example, enables quick interpretation of diagnostic studies such as X-rays, CT scans, and MRIs. 
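
As a hedged illustration of how such imaging models are typically built, the sketch below defines a tiny convolutional network for a binary "finding / no finding" classification of single-channel scans. It assumes PyTorch is available; the architecture, image size, and labels are placeholders rather than a clinically validated model.

```python
# Illustrative sketch only: a toy CNN for 1-channel 64x64 images,
# not a validated diagnostic model.
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 2)   # two classes: finding / no finding

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyScanClassifier()
dummy_batch = torch.randn(4, 1, 64, 64)   # stand-in for preprocessed scans
print(model(dummy_batch).shape)           # torch.Size([4, 2])
```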

Furthermore, educational institutions are increasingly using these technologies to improve student, resident, and fellow training while reducing diagnostic mistakes and patient risk.



AI To Improve Operational Efficiency and Performance Of Healthcare Practices.


Modern healthcare operations are a complicated web of intricately linked systems and activities. 

This makes it challenging to optimize costs while also maximizing asset utilization and keeping patient wait times to a minimum.

Health systems are increasingly using artificial intelligence to filter through the large amounts of data inside their digital environment and generate insights that can help them improve operations, increase efficiency, and optimize performance. 



For example, AI and machine learning can: 


(1) improve throughput and the effective, efficient use of facilities by prioritizing services based on patient acuity and resource availability (a simple prioritization sketch follows this list), 

(2) improve revenue cycle performance by optimizing workflows such as prior authorization claims and denials, and 

(3) automate routine, repeatable tasks to better deploy human resources when and where they are most needed.
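
As a hedged sketch of the acuity-based prioritization in item (1), the snippet below ranks waiting patients with a simple score that combines acuity and elapsed wait time. The weights and data structures are invented for illustration and are not a clinical triage algorithm.

```python
# Illustrative sketch only: a toy acuity-plus-wait-time priority queue
# with invented weights; not a clinical triage algorithm.
import heapq
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    acuity: int         # 1 (most urgent) to 5 (least urgent)
    wait_minutes: int

def priority(p: Patient) -> float:
    # Lower value = served sooner: urgent acuity dominates, long waits escalate.
    return p.acuity - 0.01 * p.wait_minutes

waiting = [Patient("A", 3, 120), Patient("B", 1, 10), Patient("C", 4, 300)]
queue = [(priority(p), p.name) for p in waiting]
heapq.heapify(queue)

while queue:
    _, name = heapq.heappop(queue)
    print("next:", name)   # with these numbers: B, then C, then A
```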


When used effectively, AI and machine learning can provide administrators and clinical leaders with the knowledge they need to enhance the quality and timeliness of the hundreds of decisions they must make every day, allowing patients to move smoothly between different healthcare services.



The rapidly growing amount of patient data both within and outside of hospitals shows no signs of slowing down. 


Healthcare organizations are under pressure from ongoing financial challenges, operational inefficiencies, a global shortage of health workers, and rising costs. 

They need technology solutions that drive process improvement and better care delivery while meeting critical operational and clinical metrics.


The potential for AI in healthcare to enhance the quality and efficiency of healthcare delivery by analyzing and extracting intelligent insights from vast amounts of data is boundless and well-documented.



What role does AI play in medicine and informatics in the future?

According to Accenture Consulting, the artificial intelligence (AI) industry in healthcare is estimated to reach $6.6 billion by 2021. 

From AI-based software for managing medical data to practice management software to robots assisting in surgeries, this technology has led to numerous improvements.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


AI Glossary - What Is An Array?

 



An array is a collection of items that is indexed and ordered (i.e., a list with indices).

The index might be numerical (0, 1, 2, 3, ...) or symbolic ('Mary', 'Mike', 'Murray', etc.).

"Associative arrays" is a term used to describe the latter.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram

AI Glossary - What Is Artificial Intelligence Or AI?



Artificial Intelligence (AI) is a term that refers to the use of computers to make decisions.

Artificial intelligence, in general, is the area concerned with creating strategies that enable computers to function in a way that seems intelligent, similar to how a person might.

The goals range from the rudimentary, where a program seems "a little wiser" than one would anticipate, to the more ambitious, where the goal is to create a fully aware, intelligent, computer-based being.

As software and hardware improve, the lower end is gradually fading into the background of ordinary computing.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram

AI Glossary - What Is ARPAbet?



ARPAbet is an ASCII encoding of the English-language phoneme set.

The Advanced Research Projects Agency (ARPA) created ARPABET (sometimes written ARPAbet) as part of its Speech Understanding Research initiative in the 1970s. 


It uses unique ASCII letter sequences to represent phonemes and allophones in General American English. 

Two methods were developed: one that represented each segment with one character (alternating upper- and lower-case letters) and the other that represented each segment with one or two (case-insensitive). 

The latter was significantly more commonly used. 


ARPABET has been used in several speech synthesizers, including Computalker for the S-100 system, SAM for the Commodore 64, SAY for the Amiga, TextAssist for the PC, and Speakeasy from Intelligent Artefacts, which used the Votrax SC-01 speech synthesizer IC. 

The CMU Pronouncing Dictionary also uses it. 

In the TIMIT corpus, an updated version of ARPABET is employed.



The ARPABET: one of many possible short versions

Vowels               Consonants               Less Used Phones/Allophones
Symbol   Example     Symbol   Example         Symbol   Example
iy       beat        b        bad             dx       butter
ih       bit         p        pad             el       bottle
eh       bet         d        dad             em       bottom
ae       bat         t        toy             nx       (flapped) n
aa       cot         g        gag             en       button
ax       the         k        kick            eng      Washington
ah       butt        bcl      (b closure)     ux
uw       boot        pcl      (p closure)     el       bottle
uh       book        dcl      (d closure)     q        (glottal stop)
aw       about       tcl      (t closure)     ix       roses
er       bird        gcl      (g closure)     epi      (epenthetic closure)
axr      diner       kcl      (k closure)     sil      silence
ey       bait        dh       they            pau      silence
ay       bite        th       thief
oy       boy         v        very
ow       boat        f        fief
ao       bought      z        zoo
                     s        see
                     ch       church
                     m        mom
                     n        non
                     ng       sing
                     w        wet
                     y        yet
                     hh       hay
                     r        red
                     l        led
                     zh       measure
                     sh       shoe
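
As a small, hedged illustration of how these symbols are used, the sketch below spells a few words as ARPAbet phoneme sequences, hand-transcribed from the example words in the table; a real application would look pronunciations up in a resource such as the CMU Pronouncing Dictionary.

```python
# Illustrative hand-made transcriptions using symbols from the table above;
# a real system would consult a pronouncing dictionary instead.
arpabet = {
    "beat": ["b", "iy", "t"],
    "church": ["ch", "er", "ch"],
    "measure": ["m", "eh", "zh", "er"],
}

for word, phones in arpabet.items():
    print(f"{word}: {' '.join(phones)}")
```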



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram

AI Glossary - What Is ARIS?

 



ARIS is a commercially available artificial intelligence system that aids in the assignment of airport gates to inbound planes.

It assigns airport gates and provides an overall perspective of current operations to human decision makers using rule-based reasoning, constraint propagation, and spatial planning.

ARIS is part of the SmartAirport Operations Center solution within Ascent Technology's fully integrated, cloud-hosted From Touchdown to Takeoff® service, which allows you to deploy human and physical resources on the ground to maximum advantage, even in the face of inevitable delays to your operations.
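
ARIS itself is proprietary, but the hedged sketch below illustrates the general idea of constraint-based gate assignment: each inbound flight is given a gate that satisfies simple constraints (aircraft-size compatibility and no overlap with flights already at that gate). The flights, gates, and rules are invented for illustration and bear no relation to Ascent Technology's actual algorithms.

```python
# Illustrative sketch only: a toy constraint check for gate assignment,
# with invented flights, gates, and rules; not Ascent Technology's ARIS.
flights = [
    {"id": "FL100", "arrives": 10, "departs": 11, "size": "wide"},
    {"id": "FL200", "arrives": 10, "departs": 12, "size": "narrow"},
    {"id": "FL300", "arrives": 11, "departs": 13, "size": "narrow"},
]
gates = [
    {"id": "G1", "fits": {"wide", "narrow"}},
    {"id": "G2", "fits": {"narrow"}},
]

def conflicts(f, g, schedule):
    """A gate is ruled out if the aircraft doesn't fit or the times overlap."""
    if f["size"] not in g["fits"]:
        return True
    return any(o["arrives"] < f["departs"] and f["arrives"] < o["departs"]
               for o in schedule.get(g["id"], []))

schedule = {}
for f in flights:                      # greedy assignment with constraint checks
    gate = next((g for g in gates if not conflicts(f, g, schedule)), None)
    if gate:
        schedule.setdefault(gate["id"], []).append(f)
    print(f["id"], "->", gate["id"] if gate else "no gate available")
```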


The SmartAirport Information Manager tools let you codify and modify your business information, store it in a secure yet flexible repository, and share it throughout your company to facilitate collaborative decision-making. 


They also allow you to develop, amend, and manage flight schedules that drive resource allocation choices, as well as connect the SmartAirport Operations Center to other systems. 

ARIS/SmartBase airport database (AODB), ARIS/SmartBus communication middleware, ARIS/Reports data analyzer, ARIS/SL schedule loader, ARIS/SB schedule builder, and ARIS/BIS billing-information system are some of the company's most popular products.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


AI Glossary - What Is ARF?

 


Richard Fikes created ARF in the late 1960s as a general problem solver.

It used a combination of constraint-satisfaction and heuristic searches.

Fikes also created REF, a problem-statement language for ARF. 


REF-ARF is a system for solving problems stated as procedures. 


Fikes's paper describes an attempt to create a heuristic problem-solving program that accepts problems expressed in a nondeterministic programming language and finds solutions using constraint-satisfaction and heuristic search techniques. 

The use of nondeterministic programming languages for stating problems is discussed, as is REF, the language that the problem solver ARF accepts.

Various extensions to REF are also considered. 

The program's basic framework is described in detail, and several options for extending it are discussed. 

The use of the input language and the behavior of the program are presented and investigated through sixteen example problems.
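
REF and ARF are historical systems with no public implementation, but the hedged sketch below mimics the flavor of the approach: a "nondeterministic" program is expressed as a sequence of choice points, and a solver explores the choices depth-first, pruning branches that violate a constraint. Everything here is an invented illustration, not a reconstruction of Fikes's code.

```python
# Illustrative sketch only: backtracking over choice points with a
# constraint check, loosely in the spirit of REF-ARF; invented example.
def solve(choices, constraint, partial=()):
    """Explore nondeterministic choices depth-first, pruning on the constraint."""
    if len(partial) == len(choices):
        return partial
    for value in choices[len(partial)]:
        candidate = partial + (value,)
        if constraint(candidate):                 # prune inconsistent branches
            result = solve(choices, constraint, candidate)
            if result is not None:
                return result
    return None

# Toy problem: pick x, y, z from small ranges so that x < y < z and x + y + z == 10.
choices = [range(1, 8)] * 3
ok = lambda vals: all(a < b for a, b in zip(vals, vals[1:])) and (
    len(vals) < 3 or sum(vals) == 10)
print(solve(choices, ok))    # e.g. (1, 2, 7)
```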



A related paper discusses Ref2 and POPS, two heuristic problem-solving systems. 

Both systems take problems expressed as nondeterministic programs and solve them using heuristic approaches to find successful program executions. 

Ref2 is built on Richard Fikes's REF-ARF system and includes REF-ARF's problem-solving techniques as well as new methods based on a different representation of the problem context. 

Ref2 can also handle problems involving integer programming. POPS is an updated and expanded version of Ref2 that incorporates goal-directed procedures based on GPS ideas.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram



AI Glossary - What Is Arcing?

 



Arcing methods (Adaptive Resampling and Combining) are a broad category of approaches for improving the performance of machine learning and statistical techniques.

ADABOOST and bagging are two prominent examples.

In general, these strategies repeatedly apply a learning technique, such as a decision tree, to a training set, then reweight or resample the data and refit the learner.

This results in a set of learning rules.

New observations are passed through all members of the collection, and the predictions or classifications are aggregated by averaging or a majority rule prediction to generate a combined result.

These strategies may produce results that are significantly more accurate than a single classifier, though they are less interpretable.

They can build minimum (Bayes) risk classifiers, according to research.
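
A minimal, hedged sketch of the resample-and-combine idea (bagging) is shown below, assuming scikit-learn and NumPy are available: each round refits a decision tree to a bootstrap resample of the training data, and new observations are classified by majority vote across the collection.

```python
# Minimal bagging sketch: bootstrap-resample, refit a tree, combine by
# majority vote. Illustrative only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

trees = []
for _ in range(25):                                  # 25 resampling rounds
    idx = rng.integers(0, len(X), size=len(X))       # bootstrap sample (with replacement)
    trees.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))

# Combine: each tree votes, the majority class wins.
votes = np.stack([t.predict(X) for t in trees])      # shape (25, n_samples)
combined = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("training accuracy of the combined rule:", (combined == y).mean())
```

In practice, ready-made implementations such as scikit-learn's BaggingClassifier and AdaBoostClassifier would normally be used instead of hand-rolled loops like this.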


See Also: 


ADABOOST, Bootstrap AGGregation


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram

AI - SyNAPSE

 


 

Project SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) is a collaborative cognitive computing effort sponsored by the Defense Advanced Research Projects Agency to develop the architecture for a brain-inspired neurosynaptic computer core.

The project, which began in 2008, is a collaboration between IBM Research, HRL Laboratories, and Hewlett-Packard.

Researchers from a number of universities are also involved in the project.


The acronym SyNAPSE comes from the Ancient Greek word σύναψις, which means "conjunction," and refers to the neural connections that let information travel through the brain.



The project's purpose is to reverse-engineer the functional intelligence of rats, cats, or potentially humans to produce a flexible, ultra-low-power system for use in robots.

The initial DARPA announcement called for a machine that could "scale to biological levels" and break through the "algorithmic-computational paradigm" (DARPA 2008, 4).

In other words, they needed an electronic computer that could analyze real-world complexity, respond to external inputs, and do so in near-real time.

SyNAPSE is a reaction to the need for computer systems that can adapt to changing circumstances and understand the environment while being energy efficient.

Scientists at SyNAPSE are working on neuromorphic electronics systems that are analogous to biological nervous systems and capable of processing data from complex settings.




It is envisaged that such systems will gain a considerable degree of autonomy in the future.

The SyNAPSE project takes an interdisciplinary approach, drawing on concepts from areas as diverse as computational neuroscience, artificial neural networks, materials science, and cognitive science.


SyNAPSE requires advances in basic science and engineering in the following areas: 


  •  simulation—for the digital replication of systems in order to verify functioning prior to the installation of material neuromorphological systems.





In 2008, IBM Research and HRL Laboratories received the first SyNAPSE grant.

Various aspects of the grant requirements were subcontracted to a variety of vendors and contractors by IBM and HRL.

The project was split into four parts, each of which began following a nine-month feasibility assessment.

The first simulator, C2, was released in 2009 and ran on a BlueGene/P supercomputer, performing cortical simulations with 10^9 neurons and 10^13 synapses, comparable to those found in a cat's brain.

Following a revelation by the Blue Brain Project leader that the simulation did not meet the complexity claimed, the software was panned.

Each neurosynaptic core is 2 millimeters by 3 millimeters in size and is made up of materials derived from human brain biology.

The relationship between the cores and actual brains is more symbolic than literal.

Computation stands in for real neurons, memory for synapses, and communication for axons and dendrites.

This allows the team to describe a hardware implementation of a biological system.





HRL Labs stated in 2012 that it had created the world's first working memristor array layered atop a traditional CMOS circuit.

The term "memristor," which combines the words "memory" and "transistor," was invented in the 1970s.

Memory and logic functions are integrated in a memristor.

In 2012, project organizers reported the successful large-scale simulation of 530 billion neurons and 100 trillion synapses on the Blue Gene/Q Sequoia machine at Lawrence Livermore National Laboratory in California, which is the world's second fastest supercomputer.





The TrueNorth processor, a 5.4-billion-transistor chip with 4096 neurosynaptic cores coupled through an intrachip network that includes 1 million programmable spiking neurons and 256 million adjustable synapses, was presented by IBM in 2014.

Finally, in 2016, an end-to-end ecosystem (including scalable systems, software, and apps) that could fully use the TrueNorth CPU was unveiled.

At the time, there were reports on the deployment of applications such as interactive handwritten character recognition and data-parallel text extraction and recognition.

TrueNorth's cognitive computing chips have now been put to the test in simulations like a virtual-reality robot driving and playing the popular videogame Pong.
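
The "programmable spiking neurons" in chips like TrueNorth are commonly described with integrate-and-fire dynamics. Below is a hedged sketch of a leaky integrate-and-fire neuron, a textbook abstraction of that behavior; the parameters are arbitrary and the code is illustrative, not IBM's actual neuron model.

```python
# Illustrative leaky integrate-and-fire neuron: the membrane potential leaks
# toward rest, integrates input current, and emits a spike at threshold.
# Parameters are arbitrary; this is not IBM's TrueNorth neuron model.
import numpy as np

dt, steps = 1.0, 200            # time step (ms) and number of steps
leak, threshold, reset = 0.95, 1.0, 0.0
v = 0.0                         # membrane potential
current = 0.06 + 0.02 * np.random.rand(steps)   # noisy input current

spikes = []
for t in range(steps):
    v = leak * v + current[t] * dt     # leak toward rest, integrate input
    if v >= threshold:                 # threshold crossed: spike and reset
        spikes.append(t)
        v = reset

print(f"{len(spikes)} spikes in {steps} ms, e.g. at steps {spikes[:5]}")
```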

DARPA has been interested in the construction of brain-inspired computer systems since the 1980s.

Dharmendra Modha, director of IBM Almaden's Cognitive Computing Initiative, and Narayan Srinivasa, head of HRL's Center for Neural and Emergent Systems, lead Project SyNAPSE.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Cognitive Computing; Computational Neuroscience.


References And Further Reading


Defense Advanced Research Projects Agency (DARPA). 2008. “Systems of Neuromorphic Adaptive Plastic Scalable Electronics.” DARPA-BAA 08-28. Arlington, VA: DARPA, Defense Sciences Office.

Hsu, Jeremy. 2014. “IBM’s New Brain.” IEEE Spectrum 51, no. 10 (October): 17–19.

Merolla, Paul A., et al. 2014. “A Million Spiking-Neuron Integrated Circuit with a Scalable Communication Network and Interface.” Science 345, no. 6197 (August): 668–73.

Monroe, Don. 2014. “Neuromorphic Computing Gets Ready for the (Really) Big Time.” Communications of the ACM 57, no. 6 (June): 13–15.




What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...