Artificial Intelligence - What Is Cognitive Computing?


 


Self-learning hardware and software systems that use machine learning, natural language processing, pattern recognition, human-computer interaction, and data mining technologies to mimic the human brain are referred to as cognitive computing.


The term "cognitive computing" refers to the use of advances in cognitive science to create new and complex artificial intelligence systems.


Cognitive systems aren't designed to take the place of human thinking, reasoning, problem-solving, or decision-making; rather, they're meant to supplement or aid people.

A collection of strategies to promote the aims of affective computing, which entails narrowing the gap between computer technology and human emotions, is frequently referred to as cognitive computing.

Real-time adaptive learning approaches, interactive cloud services, interactive memories, and contextual understanding are some of these methodologies.

Cognitive analytical tools are used to conduct quantitative assessments of structured statistical data and to aid in decision-making.

These tools are often embedded in other scientific and economic systems.

Complex event processing systems utilize complex algorithms to assess real-time data regarding events for patterns and trends, offer choices, and make judgments.

These kinds of systems are widely used in algorithmic stock trading and credit card fraud detection.
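
To make this concrete, here is a minimal, entirely illustrative sketch of a complex event processing check in Python: it watches a stream of card transactions and flags a pattern fraud systems commonly look for, namely several large charges inside a short sliding window. The threshold, window size, and limit are invented for illustration, not taken from any real system.

```python
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # illustrative window size
LIMIT = 3                        # flag 3+ large charges inside the window
LARGE = 500.00                   # illustrative "large charge" threshold

recent = deque()  # (timestamp, amount) of recent large charges

def process(timestamp: datetime, amount: float) -> bool:
    """Return True if this transaction completes a suspicious pattern."""
    if amount < LARGE:
        return False
    recent.append((timestamp, amount))
    # Drop events that have fallen out of the sliding window.
    while recent and timestamp - recent[0][0] > WINDOW:
        recent.popleft()
    return len(recent) >= LIMIT

# Example stream: the third large charge within ten minutes is flagged.
t0 = datetime(2020, 1, 1, 12, 0)
for i, amount in enumerate([900, 750, 820]):
    print(process(t0 + timedelta(minutes=3 * i), amount))  # False, False, True
```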

Face recognition and complex image recognition are now possible with image recognition systems.

Machine learning algorithms build models from data sets and improve as new information is added.

Neural networks, Bayesian classifiers, and support vector machines may all be used in machine learning.
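
As a sketch of that model-building loop, the scikit-learn snippet below fits a Bayesian classifier and a support vector machine to the same data set and scores them on held-out examples; "improving as new information is added" amounts to refitting on the enlarged data set. The data here is synthetic and purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in for a real data set.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (GaussianNB(), SVC()):
    model.fit(X_train, y_train)   # build a model from the data set
    print(type(model).__name__, model.score(X_test, y_test))

# "Improving as new information is added" means refitting on more data.
```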

Natural language processing entails the use of software to extract meaning from enormous amounts of data generated by human conversation.

Watson from IBM and Siri from Apple are two examples.

Natural language comprehension is perhaps cognitive computing's Holy Grail or "killer app," and many people associate natural language processing with cognitive computing.

Heuristic programming and expert systems are two of the oldest branches of so-called cognitive computing.

Since the 1980s, there have been four reasonably "full" cognitive computing architectures: Cyc, Soar, Society of Mind, and Neurocognitive Networks.

Speech recognition, sentiment analysis, face identification, risk assessment, fraud detection, and behavioral suggestions are some of the applications of cognitive computing technology.

When used together, these applications are referred to as "cognitive analytics" systems.

In the aerospace and defense industries, agriculture, travel and transportation, banking, health care and the life sciences, entertainment and media, natural resource development, utilities, real estate, retail, manufacturing and sales, marketing, customer service, hospitality, and leisure, these systems are in development or are being used.

Netflix's movie rental suggestion algorithm is an early example of predictive cognitive computing.

Computer vision algorithms are being used by General Electric to detect tired or distracted drivers.

Customers of Domino's Pizza can place orders online by speaking with a virtual assistant named Dom.

Elements of Google Now, a predictive search feature that debuted in Google applications in 2012, assist users in predicting road conditions and anticipated arrival times, locating hotels and restaurants, and remembering anniversaries and parking spots.


In IBM marketing materials, the term "cognitive" computing appears frequently.

Cognitive computing, according to the company, is a subset of "augmented intelligence," which is preferred over artificial intelligence.


The Watson machine from IBM is frequently referred to as a "cognitive computer" since it deviates from the traditional von Neumann design and instead draws influence from neural networks.

Neuroscientists are researching the inner workings of the human brain, searching for connections between neuronal assemblies and mental aspects, and generating new ideas about the mind.

Hebbian theory is an example of a neuroscientific theory that underpins cognitive computer machine learning implementations.

The Hebbian theory is a proposed explanation for neural adaptation during the learning process.

Donald Hebb initially proposed the hypothesis in his 1949 book The Organization of Behavior.

Learning, according to Hebb, is a process in which the causal induction of recurrent or persistent neuronal firing or activity causes neural traces to become stable.

"Any two cells or systems of cells that are consistently active at the same time will tend to become 'associated,' such that activity in one favors activity in the other," Hebb wrote (Hebb 1949, 70).

"Cells that fire together, wire together," is how the idea is frequently summarized.

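In machine learning terms, the Hebbian rule is often written algebraically as Δw = η · x · y: the weight between two units grows in proportion to their simultaneous activity. Below is a minimal sketch of that rule; the learning rate and activity vectors are illustrative assumptions.

```python
import numpy as np

eta = 0.1             # learning rate (illustrative)
w = np.zeros((3, 3))  # weights between 3 presynaptic and 3 postsynaptic units

def hebbian_update(w, x, y, eta=eta):
    """Strengthen w[i, j] whenever units i and j are active together."""
    return w + eta * np.outer(x, y)

x = np.array([1.0, 0.0, 1.0])  # presynaptic activity
y = np.array([1.0, 1.0, 0.0])  # postsynaptic activity
for _ in range(5):             # repeated co-activation stabilizes the trace
    w = hebbian_update(w, x, y)
print(w)  # weights grow only where both sides were active together
```
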
According to this hypothesis, the connection of neuronal cells and tissues generates neurologically defined "engrams" that explain how memories are preserved in the brain as biophysical or biochemical changes.

Engrams' actual location, as well as the procedures by which they are formed, are currently unknown.

IBM machines are stated to learn by aggregating information into a computational convolution or neural network architecture made up of weights stored in a parallel memory system.

Intel introduced Loihi, a cognitive chip that replicates the functions of neurons and synapses, in 2017.

Loihi is touted to be 1,000 times more energy efficient than existing neurosynaptic devices, with 128 clusters of 1,024 simulated neurons per chip, for a total of 131,072 simulated neurons.

Instead of relying on simulated neural networks and parallel processing with the overarching goal of developing artificial cognition, Loihi uses purpose-built neural pathways imprinted in silicon.
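
Loihi's actual circuits are proprietary silicon, but the behavior of a single spiking unit of the kind such chips implement can be sketched in software with a leaky integrate-and-fire model. All constants below are illustrative assumptions, not Loihi's parameters.

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: accumulate input, leak, spike at threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i      # membrane potential integrates and decays
        if v >= threshold:
            spikes.append(1)  # emit a spike...
            v = 0.0           # ...and reset the potential
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.4, 0.4, 0.4, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 1]
```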

These neuromorphic processors are likely to play a significant role in future portable and wire-free electronics, as well as automobiles.

Roger Schank, a cognitive scientist and artificial intelligence pioneer, is a vocal opponent of cognitive computing technology.

"Watson isn't thinking," he writes. "You can only reason if you have objectives, plans, and strategies to achieve them, as well as an understanding of other people's ideas and a knowledge of prior events to draw on. Having a point of view is also beneficial. How does Watson feel about ISIS, for example? Is that a stupid question? ISIS is a topic on which actual thinking creatures have an opinion" (Schank 2017).



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Computational Neuroscience; General and Narrow AI; Human Brain Project; SyNAPSE.


Further Reading

Hebb, Donald O. 1949. The Organization of Behavior. New York: Wiley.

Kelly, John, and Steve Hamm. 2013. Smart Machines: IBM’s Watson and the Era of Cognitive Computing. New York: Columbia University Press.

Modha, Dharmendra S., Rajagopal Ananthanarayanan, Steven K. Esser, Anthony Ndirango, Anthony J. Sherbondy, and Raghavendra Singh. 2011. “Cognitive Computing.” Communications of the ACM 54, no. 8 (August): 62–71.

Schank, Roger. 2017. “Cognitive Computing Is Not Cognitive at All.” FinTech Futures, May 25. https://www.bankingtech.com/2017/05/cognitive-computing-is-not-cognitive-at-all

Vernon, David, Giorgio Metta, and Giulio Sandini. 2007. “A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents.” IEEE Transactions on Evolutionary Computation 11, no. 2: 151–80.







Artificial Intelligence - What Are Cognitive Architectures?

 


A cognitive architecture is a customized computer model of the human mind that aims to imitate all elements of human cognition completely.


Cognitive architectures are coherent theories that explain how a set of fixed mental structures and mechanisms may do intelligent work in a range of diverse surroundings.

A cognitive architecture has two main components: a theory of how the human mind works and a computing representation of the theory.

The cognitive theory that underpins a cognitive architecture will attempt to bring together the findings of a wide range of experiments and hypotheses into a single, comprehensive framework capable of describing a wide range of human behavior utilizing a set of evidence-based processes.

The framework established in the theory of cognition is then used to build the computational representation.

Cognitive architectures like ACT-R (Adaptive Control of Thought Rational), Soar, and CLARION (Connectionist Learning with Adaptive Rule Induction On-line) can predict, explain, and model complex human behavior like driving a car, solving a math problem, or recalling when you last saw the hippie in the park by combining modeling behavior and modeling the structure of a cognitive system.


There are four approaches to achieving human-level intelligence within a cognitive architecture, according to computer scientists Stuart Russell and Peter Norvig: 


(1) constructing systems that think like people, 

(2) constructing systems that think logically, 

(3) constructing systems that behave rationally, and 

(4) constructing systems that act like humans.


The behavior of a system that thinks like a person is produced using recognized human processes.

This is the most common technique in cognitive modeling, as shown by structures such as John Anderson's ACT-R, Allen Newell and Herb Simon's General Problem Solver, and the first applications of the general cognitive architecture known as Soar.

For example, the ACT-R model combines theories of physical movement, visual attention, and cognition.

The model differentiates between declarative and procedural knowledge.

Production rules, which are statements expressed as condition action pairs, are used to express procedural knowledge.

A statement written in the form IF (condition) THEN (action) is an example.

Declarative knowledge, by contrast, is grounded in facts.

It refers to data that is considered static, such as characteristics, events, or objects.
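
The split between the two kinds of knowledge can be sketched in a few lines of Python: declarative chunks are static facts, while productions are condition-action pairs matched against them in a recognize-act cycle. This is a toy illustration, not the actual ACT-R implementation.

```python
# Declarative knowledge: static facts (attributes, events, objects).
facts = {"goal": "add", "a": 3, "b": 4}

# Procedural knowledge: condition-action pairs (production rules).
productions = [
    (lambda f: f.get("goal") == "add" and "sum" not in f,
     lambda f: f.update(sum=f["a"] + f["b"])),
    (lambda f: "sum" in f and f.get("goal") == "add",
     lambda f: f.update(goal="done")),
]

# Match-and-fire cycle: fire the first rule whose IF-part matches.
while facts.get("goal") != "done":
    for condition, action in productions:
        if condition(facts):
            action(facts)  # the THEN-part modifies working memory
            break

print(facts)  # {'goal': 'done', 'a': 3, 'b': 4, 'sum': 7}
```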

Because it models human processes, this sort of architecture will produce behavior that includes both correct and incorrect responses.

A system that thinks rationally, by contrast, employs logic, computational reasoning, and laws of thought to produce consistent and correct behaviors and outputs.

A rational system will employ intrinsic ideas and knowledge to attain objectives via a broader logic, moving from premises to consequences, which is more adaptable to circumstances in which complete information is not available.

Building systems that act rationally is also known as the rational agent approach.

Finally, the Turing Test approach may be thought of as creating a system that acts like a person.

To attain humanlike behavior, this technique requires the development of a system capable of natural language processing, knowledge representation, automated reasoning, and machine learning in its most stringent form.

This approach does not require every system to meet all of these criteria in full; instead, it focuses on the standards that are most important to the job at hand.

Apart from those four methods, cognitive architectures are divided into three categories based on how they process information: symbolic (or cognitivist), emergent (or connectionist), and hybrid.

Symbolic systems are controlled at a high level from the top down and analyze data using a series of IF-THEN statements known as production rules.

EPIC (Executive-Process/Interactive Control) and Soar are two examples of symbolic information processing cognitive systems.

Emergent systems, like neural networks, are constructed by a bottom-up flow of information propagating from input nodes into the remainder of the system.

Emergent systems like Leabra and BECCA (Brain-Emulating Cognition and Control Architecture) employ a self-organizing, distributed network of nodes that may function in parallel.
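
A bare-bones sketch of that bottom-up propagation appears below, with random weights and illustrative layer sizes standing in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))  # input nodes -> distributed hidden layer
W2 = rng.normal(size=(8, 2))  # hidden layer -> output layer

def forward(x):
    """Propagate activity from input nodes into the rest of the system."""
    h = np.tanh(x @ W1)       # distributed hidden representation
    return np.tanh(h @ W2)    # output emerges from the whole network

print(forward(np.array([1.0, 0.0, 0.5, -0.5])))
```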

ACT-R and CAPS (Collaborative, Activation-based, Production System) are examples of hybrid architectures that integrate elements from both forms of information processing.

A hybrid cognitive architecture aimed at visual perception and understanding, for example, may use symbolic processing for labels and text but an emergent method for visual feature and object recognition.

As particular subtasks become better understood, this kind of mixed-methods approach to developing cognitive architectures is becoming increasingly widespread.

This may lead to some confusion when categorizing designs, but it also leads to architectural improvement since the best solutions for each subtask can be included.

In both academic and industry settings, a number of cognitive architectures have been created.

Because of its robust and fast software implementation, Soar is one of the most well-known cognitive architectures to have moved out into industrial applications.

Digital Equipment Corporation (DEC) utilized a proprietary Soar program named R1-Soar to help with the complicated ordering process for the VAX computer system in 1985.

Previously, each component of the system (from software to cable connections) would have to be planned out according to a complicated set of rules and eventualities.

The R1-Soar system used the Soar cognitive architecture to automate this operation, saving an estimated $25 million each year.

Soar Technology, Inc. maintains Soar's industrial implementation, and the company continues to work on military and government projects.

John Anderson's ACT-R is one of the most well-known and researched cognitive architectures in academia.

ACT-R builds on and expands on previous architectures such as HAM (Human Associative Memory).

Much of the research on ACT-R has focused on expanding memory modeling to include a wider range of memory types and cognitive processes.

ACT-R, on the other hand, has been used in natural language processing, brain area activation prediction, and the construction of smart tutors that can model student behavior and tailor the learning curriculum to their unique requirements.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Intelligent Tutoring Systems; Interaction for Cognitive Agents; Newell, Allen.


Further Reading


Anderson, John R. 2007. How Can the Human Mind Occur in the Physical Universe? Oxford, UK: Oxford University Press.

Kotseruba, Iuliia, and John K. Tsotsos. 2020. “40 Years of Cognitive Architectures: Core Cognitive Abilities and Practical Applications.” Artificial Intelligence Review 53, no. 1 (January): 17–94.

Ritter, Frank E., Farnaz Tehranchi, and Jacob D. Oury. 2018. “ACT-R: A Cognitive Architecture for Modeling Cognition.” Wiley Interdisciplinary Reviews: Cognitive Science 10, no. 4: 1–19.





Artificial Intelligence - What Are Clinical Decision Support Systems?

 


In patient-physician contacts, decision-making is a critical activity, with judgements often based on partial and insufficient patient information.

In principle, physician decision-making, which is undeniably complicated and dynamic, is hypothesis-driven.

Diagnostic intervention is based on a hypothetico-deductive process of testing hypotheses against clinical evidence to arrive at conclusions.

Evidence-based medicine is a method of medical practice that incorporates individual clinical skill and experience with the best available external evidence from scientific literature to enhance decision-making.

Evidence-based medicine must be based on the highest quality, most trustworthy, and systematic data available.

The important questions remain, given that both evidence-based medicine and clinical research are necessary but neither is perfect: How can doctors get the most up-to-date scientific evidence? What constitutes the best evidence? How can doctors be helped to decide whether external clinical evidence from systematic research should affect their practice? Applied correctly, a hierarchy of evidence may help determine which sorts of evidence are most likely to produce reliable answers to clinical problems.

Despite the lack of a broadly agreed hierarchy of evidence, Alba DiCenso et al. (2009) established the 6S Hierarchy of Evidence-Based Resources as a framework for classifying and selecting resources that assess and synthesize research results.

The 6S pyramid was created to help doctors and other health-care professionals make choices based on the best available research data.

It shows a hierarchy of evidence in which higher levels give more accurate and efficient forms of information.

Individual studies are at the bottom of the pyramid.

Although they serve as the foundation for research, a single study has limited practical relevance for practicing doctors.

Clinicians have been taught for years that randomized controlled trials are the gold standard for making therapeutic decisions.

Researchers may use randomized controlled trials to see whether a treatment or intervention is helpful in a particular patient population, and a strong randomized controlled trial can overturn years of conventional wisdom.

Physicians, on the other hand, care more about whether it will work for their patient in a specific situation.

A randomized controlled study cannot provide this information.

A research synthesis may be thought of as a study of studies, since it reflects a greater degree of evidence than individual studies.

It makes conclusions about a practice's efficacy by carefully examining evidence from various experimental investigations.

Systematic reviews and meta-analyses, which are often seen as the pillars of evidence-based medicine, have their own set of issues and rely on rigorous evaluation of the features of the available data.

The problem is that most doctors are unfamiliar with the statistical procedures used in a meta-analysis and are uncomfortable with the fundamental scientific ideas needed to evaluate data.

Clinical practice recommendations are intended to bridge the gap between research and existing practice, reducing unnecessary variation in practice.

In recent years, the number of clinical practice recommendations has exploded.

The development process is largely responsible for the guidelines' credibility.

The most serious problem is the lack of scientific evidence that these clinical practice guidelines are based on.

They don't all have the same level of quality and trustworthiness in their evidence.

The search for evidence-based resources should start at the top of the 6S pyramid, at the systems layer, which includes computerized clinical decision support systems.

Computerized clinical decision support systems (also known as intelligent medical platforms) are health information technology-based software that builds on the foundation of an electronic health record to provide clinicians with intelligently filtered and organized general and patient-specific information to improve health and clinical care.

Laboratory measurements, for example, are often color-coded to show whether they lie inside or outside of a reference range.
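
That simplest layer of decision support is easy to picture in code. In the sketch below, the reference ranges are illustrative placeholders, not clinical guidance.

```python
# Illustrative reference ranges only -- not clinical guidance.
REFERENCE_RANGES = {
    "potassium_mmol_L": (3.5, 5.1),
    "glucose_mg_dL": (70, 99),
}

def flag_results(results):
    """Color-code each lab value as inside or outside its reference range."""
    flags = {}
    for test, value in results.items():
        low, high = REFERENCE_RANGES[test]
        flags[test] = "green" if low <= value <= high else "red"
    return flags

print(flag_results({"potassium_mmol_L": 5.8, "glucose_mg_dL": 85}))
# {'potassium_mmol_L': 'red', 'glucose_mg_dL': 'green'}
```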

The computerized clinical decision support systems now available are not simple models that produce just an output.

Multiple phases are involved in the interpretation and use of a computerized clinical decision support system, including displaying the algorithm output in a specified fashion, the clinician's interpretation, and finally the medical decision.

Despite the fact that computerized clinical decision support systems have been proved to minimize medical mistakes and enhance patient outcomes, user acceptability has prevented them from reaching their full potential.

Aside from the interface problems, doctors are wary of computerized clinical decision support systems because they may limit professional autonomy or be invoked in a medical-legal dispute.

Although computerized clinical decision support systems still need human participation, some critical sectors of medicine, such as oncology, cardiology, and neurology, are adopting artificial intelligence-based diagnostic tools.

Machine learning methods and natural language processing systems are the two main groups of these instruments.

Patients' data is used to construct a structured database for genetic, imaging, and electrophysiological records, which is then analyzed for a diagnosis using machine learning methods.

To assist the machine learning process, natural language processing systems construct a structured database utilizing clinical notes and medical periodicals.

Furthermore, machine learning algorithms in medical applications seek to cluster patients' features in order to predict the likelihood of illness outcomes and offer a prognosis to the clinician.

Several machine learning and natural language processing technologies have been coupled to produce powerful computerized clinical decision support systems that can process and offer diagnoses as well as or better than doctors.

When it came to detecting lymph node metastases, a Google-developed AI approach called convolutional neural networking surpassed pathologists.

Compared to pathologists, who had a sensitivity of 73 percent, the convolutional neural network was sensitive 97 percent of the time.

Furthermore, when the same convolutional neural network was used to classify skin cancers, it performed at a level comparable to dermatologists (Krittanawong 2018).

Depression is also diagnosed and classified using such approaches.

By merging artificial intelligence's capability with human views, empathy, and experience, physicians' potential will be increased.

The advantages of advanced computerized clinical decision support systems, on the other hand, are not limited to diagnoses and classification.

By reducing processing time and thus improving patient care, computerized clinical decision support systems can be used to improve communication between physicians and patients.

To avoid drug-drug interactions, computerized clinical decision support systems can prioritize medication prescription for patients based on their medical history.

More importantly, by extracting past medical history and using patient symptoms to determine whether the patient should be referred to urgent care, a specialist, or a primary care doctor, computerized clinical decision support systems equipped with artificial intelligence can aid triage diagnosis and reduce triage processing times.
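
A toy illustration of such symptom-based routing appears below; the symptom lists and rules are invented placeholders, not a validated triage protocol.

```python
URGENT = {"chest pain", "shortness of breath", "sudden weakness"}
SPECIALIST = {"recurring migraine", "arrhythmia history"}

def triage(symptoms, history):
    """Route a patient based on reported symptoms and past medical history."""
    reported = set(symptoms) | set(history)
    if reported & URGENT:
        return "urgent care"
    if reported & SPECIALIST:
        return "specialist referral"
    return "primary care"

print(triage(["cough"], []))                           # primary care
print(triage(["chest pain"], ["arrhythmia history"]))  # urgent care
```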

Because they are the primary causes of mortality in North America, developing artificial intelligence around these acute and highly specialized medical problems is critical.

Artificial intelligence has also been used in other ways with computerized clinical decision support systems.

The studies of Long et al. (2017), who used ocular imaging data to identify congenital cataract illness, and Gulshan et al. (2016), who used retinal fundus pictures to detect referable diabetic retinopathy, are two recent instances.

Both studies show how artificial intelligence is growing exponentially in the medical industry and how it may be applied in a variety of ways.

Although computerized clinical decision support systems hold great promise for facilitating evidence-based medicine, much work has to be done to reach their full potential in health care.

The growing familiarity of new generations of doctors with sophisticated digital technology may encourage the usage and integration of computerized clinical decision support systems.

Over the next decade, the market for such systems is expected to expand dramatically.

The pressing need to lower the prevalence of drug mistakes and worldwide health-care expenditures is driving this expansion.

Computerized clinical decision support systems are the gold standard for assisting and supporting physicians in their decision-making.

In order to benefit doctors, patients, health-care organizations, and society, the future should include more advanced analytics, automation, and a more tailored interaction with the electronic health record. 



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Automated Multiphasic Health Testing; Expert Systems; Explainable AI; INTERNIST-I and QMR.


Further Reading

Arnaert, Antonia, and Norma Ponzoni. 2016. “Promoting Clinical Reasoning Among Nursing Students: Why Aren’t Clinical Decision Support Systems a Popular Option?” Canadian Journal of Nursing Research 48, no. 2: 33–34.

Arnaert, Antonia, Norma Ponzoni, John A. Liebert, and Zoumanan Debe. 2017. “Transformative Technology: What Accounts for the Limited Use of Clinical Decision Support Systems in Nursing Practice?” In Health Professionals’ Education in the Age of Clinical Information Systems, Mobile Computing, and Social Media, edited by Aviv Shachak, Elizabeth M. Borycki, and Shmuel P. Reis, 131–45. Cambridge, MA: Academic Press.

DiCenso, Alba, Liz Bayley, and R. Brian Haynes. 2009. “Accessing Preappraised Evidence: Fine-tuning the 5S Model into a 6S Model.” ACP Journal Club 151, no. 6 (September): JC3-2–JC3-3.

Gulshan, Varun, et al. 2016. “Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.” JAMA 316, no. 22 (December): 2402–10.

Krittanawong, Chayakrit. 2018. “The Rise of Artificial Intelligence and the Uncertain Future for Physicians.” European Journal of Internal Medicine 48 (February): e13–e14.

Long, Erping, et al. 2017. “An Artificial Intelligence Platform for the Multihospital Collaborative Management of Congenital Cataracts.” Nature Biomedical Engineering 1, no. 2: n.p.

Miller, D. Douglas, and Eric W. Brown. 2018. “Artificial Intelligence in Medical Practice: The Question to the Answer?” American Journal of Medicine 131, no. 2: 129–33.


Artificial Intelligence - Climate Change Crisis And AI.

 




Artificial intelligence is a double-edged sword when it comes to climate change and the environment.


Artificial intelligence is being used by scientists to detect, adapt, and react to ecological concerns.

Civilization is becoming exposed to new environmental hazards and vulnerabilities as a result of the same technologies.

Much has been written on the importance of information technology in green economy solutions.

Data from natural and urban ecosystems is collected and analyzed using intelligent sensing systems and environmental information systems.

Machine learning is being applied in the development of sustainable infrastructure, citizen detection of environmental perturbations and deterioration, contamination detection and remediation, and the redefining of consumption habits and resource recycling.



Planet hacking is a term used to describe such operations.


Precision farming is one example of planet hacking.

Artificial intelligence is used in precision farming to diagnose plant illnesses and pests, as well as detect soil nutrition issues.

Agricultural yields are increased while water, fertilizer, and chemical pesticides are used more efficiently thanks to sensor technology directed by AI.

Controlled farming approaches offer more environmentally friendly land management and (perhaps) biodiversity conservation.

Another example is IBM Research's collaboration with the Chinese government to minimize pollution in the nation via the Green Horizons program.

Green Horizons is a ten-year effort that began in July 2014 with the goal of improving air quality, promoting renewable energy integration, and promoting industrial energy efficiency.

To provide air quality reports and track pollution back to its source, IBM is using cognitive computing, decision support technologies, and sophisticated sensors.

Green Horizons has grown to include global initiatives such as collaborations with Delhi, India, to link traffic congestion patterns with air pollution; Johannesburg, South Africa, to fulfill air quality objectives; and British wind farms, to estimate turbine performance and electricity output.

According to the National Renewable Energy Laboratory, AI-enabled automobiles and trucks are predicted to save a significant amount of gasoline, perhaps in the region of 15 percent less use.


Smart cars eliminate inefficient combustion caused by stop-and-go and speed-up and slow-down driving behavior, resulting in increased fuel efficiency (Brown et al. 2014).


Intelligent driver input is merely the first step toward a more environmentally friendly automobile.

According to the Society of Automotive Engineers and the National Renewable Energy Laboratory, linked automobiles equipped with vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication might save up to 30 percent on gasoline (Gonder et al. 2012).

Smart trucks and robotic taxis will be grouped together to conserve fuel and minimize carbon emissions.

Environmental robots (ecobots) are projected to make significant advancements in risk monitoring, management, and mitigation.

At nuclear power plants, service robots are in use.

Two iRobot PackBots were sent to Japan's Fukushima nuclear power plant to measure radioactivity.

Treebot is a dexterous tree-climbing robot that is meant to monitor arboreal environments that are too difficult for people to access.

The Guardian, a robot created by the same person who invented the Roomba, is being developed to hunt down and remove invasive lionfish that endanger coral reefs.

A similar service is being provided by the COTSbot, which employs visual recognition technology to wipe away crown-of-thorn starfish.

Artificial intelligence is assisting in the discovery of a wide range of human civilization's effects on the natural environment.

Cornell University's highly multidisciplinary Institute for Computational Sustainability brings together professional scientists and citizens to apply new computing techniques to large-scale environmental, social, and economic issues.

Birders are partnering with the Cornell Lab of Ornithology to submit millions of observations of bird species throughout North America, to provide just one example.

An app named eBird is used to record the observations.

To monitor migratory patterns and anticipate bird population levels across time and space, computational sustainability approaches are applied.

Wildbook, iNaturalist, Cicada Hunt, and iBats are some of the other crowdsourced nature observation apps.

Several applications are linked to open-access databases and big data initiatives, such as the Global Biodiversity Information Facility, which will include 1.4 billion searchable entries by 2020.


By modeling future climate change, artificial intelligence is also being utilized to assist human populations understand and begin dealing with environmental issues.

A multidisciplinary team from the Montreal Institute for Learning Algorithms, Microsoft Research, and ConscientAI Labs is using street view imagery of extreme weather events and generative adversarial networks—in which two neural networks are pitted against one another—to create realistic images depicting the effects of bushfires and sea level rise on actual neighborhoods.
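
In a generative adversarial network, a generator learns to produce samples while a discriminator learns to tell them from real data, and each update pits one against the other. Below is a minimal PyTorch sketch of one such training step on toy vectors; the sizes and data are illustrative, far smaller than the image-scale models these researchers use.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))  # generator
D = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(32, 4)   # stand-in for a batch of real training images
noise = torch.randn(32, 8)

# Discriminator step: learn to separate real samples from generated ones.
fake = G(noise).detach()
d_loss = loss_fn(D(real), torch.ones(32, 1)) + loss_fn(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: learn to make samples the discriminator calls real.
g_loss = loss_fn(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```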

Human behavior and lifestyle changes may be influenced by emotional reactions to photos.

Virtual reality simulations of contaminated ocean ecosystems are being developed by Stanford's Virtual Human Interaction Lab in order to increase human empathy and modify behavior in coastal communities.


Information technology and artificial intelligence, on the other hand, play a role in the climate catastrophe.


The pollution created by the production of electronic equipment and software is one of the most pressing concerns.

These are often seen as clean industries, but they frequently use harsh chemicals and hazardous materials.

With twenty-three active Superfund sites, California's Silicon Valley is one of the most contaminated areas in the country.

Many of these hazardous waste dumps were developed by computer component makers.

Trichloroethylene, a solvent used in semiconductor cleaning, is one of the most common soil pollutants.

Information technology uses a lot of energy and contributes a lot of greenhouse gas emissions.

Solar power and battery storage are increasingly being used to run cloud computing data centers.


In recent years, a number of cloud computing facilities have been developed around the Arctic Circle to take advantage of the inherent cooling capabilities of the cold air and ocean.


The so-called Node Pole, situated in Sweden's northernmost county, is a favored location for such building.

In 2020, a data center project in Reykjavik, Iceland, will run entirely on renewable geothermal and hydroelectric energy.

Recycling is also a huge concern, since life cycle engineering is just now starting to address the challenges of producing environmentally friendly computers.

Toxic electronic trash is difficult to dispose of in the United States, thus a considerable portion of all e-waste is sent to Asia and Africa.

Every year, some 50 million tons of e-waste are produced throughout the globe (United Nations 2019).

Jack Ma of the international e-commerce company Alibaba claimed at the World Economic Forum annual gathering in Davos, Switzerland, that artificial intelligence and big data were making the world unstable and endangering human life.

Artificial intelligence research's carbon impact is just now being quantified with any accuracy.

While Microsoft and Pricewaterhouse Coopers reported that artificial intelligence could reduce carbon dioxide emissions by 2.4 gigatonnes by 2030 (the combined emissions of Japan, Canada, and Australia), researchers at the University of Massachusetts, Amherst discovered that training a model for natural language processing can emit the equivalent of 626,000 pounds of greenhouse gases.

This is over five times the carbon emissions produced by a typical automobile throughout the course of its lifespan, including original production.

Artificial intelligence has a massive influence on energy usage and carbon emissions right now, especially when models are tweaked via a technique called neural architecture search (Strubell et al. 2019).

It's unclear if next-generation technologies like quantum artificial intelligence, chipset designs, and unique machine intelligence processors (such as neuromorphic circuits) would lessen AI's environmental effect.


Artificial intelligence is also being used to extract additional oil and gas from underground, and to do so more efficiently.


Oilfield services are becoming more automated, and businesses like Google and Microsoft are opening offices and divisions to cater to them.

Since the 1990s, Total S.A., a French multinational oil firm, has used artificial intelligence to enhance production and understand subsurface data.

Total partnered with Google Cloud Advanced Solutions Lab experts in 2018 to apply modern machine learning techniques to technical data analysis problems in the exploration and production of fossil fuels.

Every geoscience engineer at the oil company will have access to an AI intelligent assistant, according to Google.

With artificial intelligence, Google is also assisting Anadarko Petroleum (bought by Occidental Petroleum in 2019) in analyzing seismic data to discover oil deposits, enhance production, and improve efficiency.


Working in the emerging subject of evolutionary robotics, computer scientists Joel Lehman and Risto Miikkulainen claim that in the case of a future extinction catastrophe, superintelligent robots and artificial life may swiftly breed and push out humans.


In other words, robots may enter the continuing war between plants and animals.

To investigate evolvability in artificial and biological populations, Lehman and Miikkulainen created computer models to replicate extinction events.

The study is mostly theoretical, but it may assist engineers comprehend how extinction events could impact their work; how the rules of variation apply to evolutionary algorithms, artificial neural networks, and virtual organisms; and how coevolution and evolvability function in ecosystems.
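
A toy version of such an experiment can be sketched with a simple genetic algorithm: evolve a population toward an arbitrary fitness target, wipe out most of it in a simulated extinction event, and watch it recover. Every detail below (fitness function, rates, population sizes) is an illustrative assumption, not the authors' model.

```python
import random

random.seed(0)
TARGET = [1] * 10                        # arbitrary fitness target

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(50)]

for gen in range(40):
    if gen == 20:                        # simulated mass-extinction event
        pop = random.sample(pop, 5)      # only a few lineages survive
    pop.sort(key=fitness, reverse=True)
    parents = pop[: max(2, len(pop) // 2)]  # the fitter half reproduces
    pop = [mutate(random.choice(parents)) for _ in range(50)]
    if gen % 5 == 0 or gen == 20:
        print(gen, max(fitness(g) for g in pop))
```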

As a result of such conjecture, Emerj Artificial Intelligence Research's Daniel Faggella notably questioned if the "environment matter[s] after the Singularity" (Faggella 2019).

Ian McDonald's River of Gods (2004) is a notable science fiction novel about climate change and artificial intelligence.

The book's events take place in 2047 in the Indian subcontinent.

A.I. Artificial Intelligence (2001), directed by Steven Spielberg, is set in a twenty-second-century world plagued by global warming and rising sea levels.

Humanoid robots are seen as important to the economy since they do not deplete limited resources.

Transcendence, a 2014 science fiction film starring Johnny Depp as an artificial intelligence researcher, portrays the cataclysmic danger of sentient computers as well as its unclear environmental effects.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Chatbots and Loebner Prize; Gender and AI; Mobile Recommendation Assistants; Natural Language Processing and Speech Understanding.


Further Reading


Bort, Julie. 2017. “The 43 Most Powerful Female Engineers of 2017.” Business Insider. https://www.businessinsider.com/most-powerful-female-engineers-of-2017-2017-2.

Chan, Sharon Pian. 2011. “Tech-Savvy Dreamer Runs Microsoft’s Social-Media Lab.” Seattle Times. https://www.seattletimes.com/business/tech-savvy-dreamer-runs-microsofts-social-media-lab.

Cheng, Lili. 2018. “Why You Shouldn’t Be Afraid of Artificial Intelligence.” Time. http://time.com/5087385/why-you-shouldnt-be-afraid-of-artificial-intelligence.

Cheng, Lili, Shelly Farnham, and Linda Stone. 2002. “Lessons Learned: Building and Deploying Shared Virtual Environments.” In The Social Life of Avatars: Computer Supported Cooperative Work, edited by Ralph Schroeder, 90–111. London: Springer.

Davis, Jeffrey. 2018. “In Chatbots She Trusts: An Interview with Microsoft AI Leader Lili Cheng.” Workflow. https://workflow.servicenow.com/customer-experience/lili-chang-ai-chatbot-interview.



Artificial Intelligence - What Is The Loebner Prize For Chatbots? Who Is Lili Cheng?



A chatbot is a computer software that communicates with people using artificial intelligence. Text or voice input may be used in the talks.

In certain circumstances, chatbots are also intended to take automatic activities in response to human input, such as running an application or sending an email.


Most chatbots try to mimic human conversational behavior, but no chatbot has succeeded in doing so flawlessly so far.




Chatbots may assist with a number of requirements in a variety of circumstances.

Perhaps the most evident is the capacity to save people time and money by employing a computer program, rather than a person, to gather or disseminate information.

For example, a corporation may develop a customer service chatbot that replies to client inquiries with information that the chatbot believes to be relevant based on user queries using artificial intelligence.

The chatbot removes the requirement for a human operator to conduct this sort of customer service in this fashion.

Chatbots may also be useful in other situations since they give a more convenient means of interacting with a computer or software application.

A digital assistant chatbot, such as Apple's Siri or Google Assistant, for example, enables people to utilize voice input to get information (such as the address of a requested place) or conduct activities (such as sending a text message) on smartphones.

In cases when alternative input methods are cumbersome or unavailable, the ability to communicate with phones by speech, rather than needing to type information on the devices' displays, is helpful.


Consistency is a third benefit of chatbots.


Because most chatbots react to inquiries using preprogrammed algorithms and data sets, they will often respond with the same replies to the same questions.

Human operators cannot always be relied to act in the same manner; one person's response to a query may differ from another's, or the same person's replies may change from day to day.

Chatbots may aid with consistency in experience and information for the users with whom they communicate in this way.

However, chatbots that employ neural networks or other self-learning techniques to answer inquiries may "evolve" over time, with the consequence that a query posed to a chatbot one day may receive a different response than the same query posed the next day.

So far, though, only a handful of chatbots have been built to learn on their own.

Some, such as Microsoft Tay, have proved to be ineffective.

Chatbots may be created using a number of ways and can be built in practically any programming language.

However, to fuel their conversational skills and automated decision-making, most chatbots depend on a basic set of traits.

Natural language processing, or the capacity to transform human words into data that software can use to make judgments, is one example.

Writing code that can process natural language is a difficult endeavor that involves knowledge of computer science, linguistics, and significant programming.

It requires the capacity to comprehend text or speech from individuals who use a variety of vocabulary, sentence structures, and accents, and who may talk sarcastically or deceptively at times.

In the past, the challenge of creating good natural language processing engines made chatbots difficult and time-consuming to produce, because programmers had to design the natural language processing software from scratch before building the chatbot itself.

Natural language processing programming frameworks and cloud-based services are now widely available, considerably lowering this barrier.

Modern programmers may either employ a cloud-based service like Amazon Comprehend or Azure Language Understanding to add the capability necessary to read human language, or they can simply import a natural language processing library into their apps.
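
For instance, a few lines with an open-source library such as spaCy (one possible choice, used here purely for illustration; it assumes the small English model has been installed with "python -m spacy download en_core_web_sm") already provide tokenization, lemmas, and named entities:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # pretrained English pipeline
doc = nlp("Book me a table in Seattle for two people tomorrow.")

print([token.lemma_ for token in doc])               # normalized word forms
print([(ent.text, ent.label_) for ent in doc.ents])  # named entities
```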

Most chatbots also need a database of information to answer to queries.

They analyze their own data sets to choose which information to provide or which action to take in response to the inquiry after using natural language processing to comprehend the meaning of input.

Most chatbots do this by matching phrases in queries to predefined tags in their internal databases, which is a very simple process.

More advanced chatbots, on the other hand, may be programmed to continuously adjust or increase their internal databases by evaluating how users have reacted to previous behavior.

For example, a chatbot may ask a user whether the answer it provided in response to a specific query was helpful, and if the user replies no, the chatbot would adjust its internal data to avoid repeating the response the next time a user asks a similar question.
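
A toy version of that matching-and-feedback loop, with invented tags and replies, might look like this:

```python
# Internal database: tags mapped to canned replies, with a usefulness score.
kb = {
    "hours": {"reply": "We are open 9am-5pm, Monday to Friday.", "score": 0},
    "returns": {"reply": "Items can be returned within 30 days.", "score": 0},
}

def respond(query):
    """Match words in the query against predefined tags, best-rated first."""
    for tag, entry in sorted(kb.items(), key=lambda kv: -kv[1]["score"]):
        if tag in query.lower():
            return tag, entry["reply"]
    return None, "Sorry, I don't know how to help with that."

def feedback(tag, helpful):
    """Adjust internal data based on whether the user found the reply useful."""
    if tag is not None:
        kb[tag]["score"] += 1 if helpful else -1

tag, reply = respond("What are your opening hours?")
print(reply)
feedback(tag, helpful=True)
```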



Although chatbots may be useful in a variety of settings, they are not without flaws and the potential for abuse.


One obvious flaw is that no chatbot has yet been proven to be capable of perfectly simulating human behavior, and chatbots can only perform tasks that they have been programmed to do.

They don't have the same aptitude as humans to "think outside the box" or solve issues imaginatively.

In many cases, people engaging with a chatbot may be looking for answers to queries that the chatbot was not designed to answer.


Chatbots raise certain ethical issues for similar reasons.


Chatbot critics have claimed that it is immoral for a computer program to replicate human behavior without revealing to individuals with whom it communicates that it is not a real person.

Some have also stated that chatbots may contribute to an epidemic of loneliness by replacing real human conversations with chatbot conversations that are less intellectually and socially gratifying for human users.

Chatbots, on the other hand, such as Replika, were designed with the express purpose of providing lonely people with an entity to communicate to when real people are unavailable.

Another issue with chatbots is that, like other software programs, they might be utilized in ways that their authors did not anticipate.

Misuse could occur as a result of software security flaws that allow malicious parties to gain control of a chatbot; for example, an attacker seeking to harm a company's reputation might try to compromise its customer-support chatbot in order to provide false or unhelpful support services.

In other circumstances, simple design flaws or oversights may result in chatbots acting unpredictably.

Microsoft learned this lesson when it debuted the Tay chatbot in 2016.

The Tay chatbot was meant to teach itself new replies based on past discussions.

When users engaged Tay in racist conversations, Tay began making public racist or inflammatory remarks of its own, prompting Microsoft to shut down the app.

The word "chatbot" came into use in the 1990s as a shortened form of chatterbot, a term coined in 1994 by computer scientist Michael Mauldin to describe a chatbot named Julia that he built in the early 1990s.


Chatbot-like computer programs, on the other hand, have been around for a long time.


The first was ELIZA, a computer program created by Joseph Weizenbaum at MIT's Artificial Intelligence Lab between 1964 and 1966.

Although the software was confined to just a few themes, ELIZA employed early natural language processing methods to participate in text-based discussions with human users.

Stanford psychiatrist Kenneth Colby produced a comparable chatbot called PARRY in 1972.

It wasn't until the 1990s, when natural language processing techniques had advanced, that chatbot development gained traction and programmers got closer to their goal of building chatbots that could participate in discussion on any subject.

A.L.I.C.E., a chatbot that debuted in 1995, and Jabberwacky, a chatbot created in the early 1980s and made accessible to users on the web in 1997, were both built with this goal in mind.

The second significant wave of chatbot invention occurred in the early 2010s, when increased smartphone usage fueled demand for digital assistant chatbots that could engage with people through voice interactions, beginning with Apple's Siri in 2011.


The Loebner Prize competition has served to measure the efficacy of chatbots in replicating human behavior throughout most of the history of chatbot development.


The Loebner Prize, which was established in 1990, is given to computer systems (including, but not limited to, chatbots) that judges believe demonstrate the most human-like behavior.

A.L.I.C.E., which won the award three times in the early 2000s, and Jabberwacky, which won twice, in 2005 and 2006, are two notable chatbots that have competed for the Loebner Prize.


Lili Cheng




Lili Cheng is the Microsoft AI and Research division's Corporate Vice President and Distinguished Engineer.


She is in charge of the company's artificial intelligence platform's developer tools and services, which include cognitive services, intelligent software assistants and chatbots, as well as data analytics and deep learning tools.

Cheng has emphasized that AI solutions must gain the confidence of a larger segment of the community and secure users' privacy.

Her group focuses on artificial intelligence bots and software applications that engage in human-like dialogues and interactions.


The ubiquity of social software—technology that lets people connect more effectively with one another—and the interoperability of software assistants, or AIs that chat to one another or pass tasks to one another, are two further ambitions.


Real-time language translation is one example of such an application.

Cheng is also a proponent of technical education and training for individuals, especially women, in order to prepare them for future careers (Davis 2018).

Cheng emphasizes the need of humanizing AI.

Rather than adapting human interactions to computer interactions, technology must adapt to people's working cycles.

According to Cheng, language recognition and conversational AI are not sufficient technical advances on their own.

Human emotional needs must be addressed by AI.

One goal of AI research, she says, is to understand "the rational and surprising ways individuals behave." Cheng graduated from Cornell University with a bachelor's degree in architecture.

She started her work as an architect/urban designer at Nihon Sekkei International in Tokyo.

She also worked in Los Angeles for the architectural firm Skidmore Owings & Merrill.

Cheng opted to pursue a profession in information technology while residing in California.

She thought of architectural design as a well-established industry with well-defined norms and needs.

Cheng returned to school and graduated from New York University with a master's degree in Interactive Telecommunications, Computer Programming, and Design.

Her first position in this field was at Apple Computer in Cupertino, California, where she worked as a user experience researcher and designer for QuickTime VR and QuickTime Conferencing in the Advanced Technology Group-Human Interface Group.

In 1995, she joined Microsoft's Virtual Worlds Group, where she worked on the Virtual Worlds Platform and Microsoft V-Chat.

Kodu Game Lab, an environment targeted at teaching youngsters programming, was one of Cheng's efforts.

In 2001, she founded the Social Computing group with the goal of developing social networking prototypes.

She then worked at Microsoft Research-FUSE Labs as the General Manager of Windows User Experience for Windows Vista, eventually ascending to the post of Distinguished Engineer and General Manager.

Cheng has spoken at Harvard and New York Universities and is considered one of the country's top female engineers.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Cheng, Lili; ELIZA; Natural Language Processing and Speech Understanding; PARRY; Turing Test.


Further Reading


Abu Shawar, Bayan, and Eric Atwell. 2007. “Chatbots: Are They Really Useful?” LDV Forum 22, no.1: 29–49.

Abu Shawar, Bayan, and Eric Atwell. 2015. “ALICE Chatbot: Trials and Outputs.” Computación y Sistemas 19, no. 4: 625–32.

Deshpande, Aditya, Alisha Shahane, Darshana Gadre, Mrunmayi Deshpande, and Prachi M. Joshi. 2017. “A Survey of Various Chatbot Implementation Techniques.” International Journal of Computer Engineering and Applications 11 (May): 1–7.

Shah, Huma, and Kevin Warwick. 2009. “Emotion in the Turing Test: A Downward Trend for Machines in Recent Loebner Prizes.” In Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence, 325–49. Hershey, PA: IGI Global.

Zemčík, Tomáš. 2019. “A Brief History of Chatbots.” In Transactions on Computer Science and Engineering, 14–18. Lancaster: DEStech.


