Artificial Intelligence - What Is Bayesian Inference?

 





Bayesian inference is a method of calculating the probability that a proposition is true, based on a prior estimate of its probability plus any new and relevant evidence.

In the twentieth century, Bayes' Theorem, from which Bayesian statistics are derived, was a prominent mathematical technique employed in expert systems.

In the twenty-first century, Bayes' Theorem has been applied to problems such as robot locomotion, weather forecasting, jurimetrics (the application of quantitative methods to the law), phylogenetics (the evolutionary relationships among organisms), and pattern recognition.

It is also used in email spam filters and can be used to solve the famous Monty Hall problem.

The mathematical theorem was derived by the Reverend Thomas Bayes (1702–1761) of England and published posthumously in the Philosophical Transactions of the Royal Society of London in 1763 as "An Essay Towards Solving a Problem in the Doctrine of Chances." It is also known as Bayes' theorem of inverse probability.

A classic article titled "Reasoning Foundations of Medical Diagnosis," written by George Washington University electrical engineer Robert Ledley and Rochester School of Medicine radiologist Lee Lusted and published by Science in 1959, was the first notable discussion of Bayes' Theorem as applied to the field of medical artificial intelligence.

As Lusted later recalled, medical knowledge in the mid-twentieth century was usually presented as the symptoms associated with a disease, rather than the diseases associated with a symptom.

Using Bayesian reasoning, they proposed expressing medical knowledge as the probability of a disease given the patient's symptoms.

Bayesian statistics are conditional, allowing one to determine the likelihood that a specific disease is present based on a specific symptom, but only with prior knowledge of how frequently the disease and symptom are correlated, as well as how frequently the symptom is present in the absence of the disease.

It is quite similar to what Alan Turing called the factor in favor of the hypothesis supplied by the evidence.

The symptom-disease complex, which involves several symptoms in a patient, may also be resolved using Bayes' Theorem.

In computer-aided diagnosis, Bayesian statistics combines the probability of each disease appearing in a population with the probability of each symptom appearing given each disease, in order to determine the probability of all possible diseases given each patient's symptom-disease complex.
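
As a minimal sketch of the underlying calculation for a single symptom (the prevalence and symptom rates below are invented for illustration, not drawn from any study):

```python
# Hypothetical single-symptom diagnosis via Bayes' theorem.
# All numbers are invented for demonstration purposes.

def posterior(prior, p_symptom_given_disease, p_symptom_given_healthy):
    """Return P(disease | symptom)."""
    p_symptom = (p_symptom_given_disease * prior
                 + p_symptom_given_healthy * (1 - prior))
    return p_symptom_given_disease * prior / p_symptom

# A disease affecting 1% of the population, with a symptom seen in 90%
# of patients who have it and in 10% of people who do not:
print(posterior(0.01, 0.90, 0.10))  # ~0.083
```

Note how the low prior keeps the posterior small: even a fairly specific symptom leaves the disease unlikely, which is why base rates matter so much in computer-aided diagnosis.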

All induction, according to Bayes' Theorem, is statistical.

In 1960, the theory was used to generate the posterior probability of certain illnesses for the first time.

In that year, University of Utah cardiologist Homer Warner, Jr. used Bayesian statistics to diagnose well-defined congenital heart defects at Salt Lake City's Latter-Day Saints Hospital, thanks to his access to a Burroughs 205 digital computer.

Warner and his team used the theorem to calculate the probability that an undiagnosed patient with identifiable symptoms, signs, or laboratory data would fall into previously recognized disease categories.

As additional information became available, the computer software could be run repeatedly, generating or ranking diagnoses through serial observation.

The Burroughs computer outperformed any professional cardiologist in applying Bayesian conditional-probability algorithms to a symptom-disease matrix of thirty-five cardiac diseases, according to Warner.

John Overall, Clyde Williams, and Lawrence Fitzgerald for thyroid problems; Charles Nugent for Cushing's illness; Gwilym Lodwick for primary bone tumors; Martin Lipkin for hematological diseases; and Tim de Dombal for acute abdominal discomfort were among the early supporters of Bayesian estimation.

Over the past half-century, the Bayesian model has been extended and modified many times to accommodate sequential diagnosis, to compensate for assumptions of conditional independence, and to weight other factors.

Poor prediction of rare diseases, insufficient discrimination between diseases with similar symptom complexes, inability to quantify qualitative evidence, troubling conditional dependence between evidence and hypotheses, and the enormous amount of manual labor required to maintain the requisite joint probability distribution tables are all criticisms leveled at Bayesian computer-aided diagnosis.

Bayesian diagnostic aids have also been criticized for performing poorly outside the populations for which they were designed.

When rule-based decision support algorithms became more prominent in the mid-1970s, the application of Bayesian statistics in differential diagnosis reached a low point.

In the 1980s, Bayesian approaches resurfaced and are now extensively employed in the area of machine learning.

From the concept of Bayesian inference, artificial intelligence researchers have developed robust techniques for supervised learning, hidden Markov models, and mixed approaches for unsupervised learning.

Bayesian inference has been controversially applied in real-world artificial intelligence algorithms that aim to calculate the conditional probability of a crime being committed, to screen welfare recipients for drug use, and to identify prospective mass shooters and terrorists.

The method has come under fire once again, especially where screening involves infrequent or extreme events, for which the AI system can act arbitrarily and flag too many people as being at risk of engaging in the unwanted behavior.

In the United Kingdom, Bayesian inference has also been introduced into the courtroom.

The defense team in Regina v. Adams (1996) offered jurors the Bayesian approach to help them form an unbiased mechanism for combining the introduced evidence, which included a DNA profile and varying match probability calculations, and for constructing a personal threshold for convicting the accused "beyond a reasonable doubt."

Before Ledley, Lusted, and Warner revived Bayes' Theorem in the 1950s, it had been "rediscovered" multiple times.

Pierre-Simon Laplace, the Marquis de Condorcet, and George Boole were among the historical figures who saw merit in the Bayesian approach to probability.

The Monty Hall dilemma, named after the host of the famous game show Let's Make a Deal, asks whether a contestant should stay with the door they have chosen or switch to the other unopened door after Monty Hall (who knows where the prize is) opens one of the remaining doors to reveal a goat.

Contrary to popular intuition, conditional probability shows that switching doors doubles the contestant's odds of winning, from one in three to two in three.
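
This result is easy to check empirically. The following is a small Monte Carlo sketch (the door labels and trial count are arbitrary choices for illustration):

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the contestant's win rate when staying or switching."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)   # door hiding the prize
        choice = random.randrange(3)  # contestant's initial pick
        # Monty opens a goat door that is neither the pick nor the prize,
        # so switching wins exactly when the initial pick was wrong.
        wins += (choice != prize) if switch else (choice == prize)
    return wins / trials

print(monty_hall(switch=False))  # ~0.333
print(monty_hall(switch=True))   # ~0.667
```

The simulation exploits the key conditional fact: the initial pick is wrong two-thirds of the time, and in every such case switching wins.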


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Computational Neuroscience; Computer-Assisted Diagnosis.


Further Reading

Ashley, Kevin D., and Stefanie Brüninghaus. 2006. “Computer Models for Legal Prediction.” Jurimetrics 46, no. 3 (Spring): 309–52.

Barnett, G. Octo. 1968. “Computers in Patient Care.” New England Journal of Medicine
279 (December): 1321–27.

Bayes, Thomas. 1763. “An Essay Towards Solving a Problem in the Doctrine of Chances.” 
Philosophical Transactions 53 (December): 370–418.

Donnelly, Peter. 2005. “Appealing Statistics.” Significance 2, no. 1 (February): 46–48.
Fox, John, D. Barber, and K. D. Bardhan. 1980. “Alternatives to Bayes: A Quantitative 
Comparison with Rule-Based Diagnosis.” Methods of Information in Medicine 19, 
no. 4 (October): 210–15.

Ledley, Robert S., and Lee B. Lusted. 1959. “Reasoning Foundations of Medical Diagnosis.” Science 130, no. 3366 (July): 9–21.

Lusted, Lee B. 1991. “A Clearing ‘Haze’: A View from My Window.” Medical Decision 
Making 11, no. 2 (April–June): 76–87.

Warner, Homer R., Jr., A. F. Toronto, and L. G. Veasey. 1964. “Experience with Bayes’ 
Theorem for Computer Diagnosis of Congenital Heart Disease.” Annals of the 
New York Academy of Sciences 115: 558–67.


Artificial Intelligence - Autonomy And Complacency In AI Systems.

 




The concepts of machine autonomy and human autonomy and complacency are intertwined.

Artificial intelligences are undoubtedly getting more independent as they are trained to learn from their own experiences and data intake.

As machines gain more skills, humans tend to become increasingly dependent on them to make judgments and to react correctly to unexpected events.

This dependence on AI systems' decision-making processes might lead to a loss of human agency and to complacency.

Such complacency may mean that major faults in the AI system or its decision-making processes go unnoticed and uncorrected.

Autonomous machines are ones that can function in unsupervised settings, adapt to new situations and experiences, learn from previous errors, and decide the best potential outcomes in each case without the need for fresh programming input.

To put it another way, these robots learn from their experiences and are capable of going beyond their original programming in certain respects.

The rationale is that programmers cannot foresee every circumstance an AI-enabled machine might encounter in the course of its activities, so it must be able to adapt.

This view is not universally accepted, since some argue that these systems' adaptability is inherent in their programming: their programs are simply designed to be adaptable.

The disagreement over whether any agent, including humans, can express free will and act autonomously exacerbates these debates.

With the advancement of technology, the autonomy of AI programs is not the only element of autonomy that is being explored.

Worries have also been raised concerning the influence on human autonomy, as well as concerns about machine complacency.

As AI systems grow increasingly tuned to anticipate people's wishes and preferences, the people who benefit may find their own choices becoming irrelevant, since they no longer have to make decisions.

The interaction of human employees and automated systems has gotten a lot of attention.

Studies show that humans are prone to overlook flaws in these processes, particularly as they become routinized, which encourages a positive expectation of success rather than vigilance for failure.

This expectation of success leads the operators or supervisors of automated processes to place their confidence in inaccurate readouts or machine judgments, which may lead to mistakes and accidents.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems.



Further Reading


André, Quentin, Ziv Carmon, Klaus Wertenbroch, Alia Crum, Frank Douglas, William 
Goldstein, Joel Huber, Leaf Van Boven, Bernd Weber, and Haiyang Yang. 2018. 

“Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data.” Customer Needs and Solutions 5, no. 1–2: 28–37.

Bahner, J. Elin, Anke-Dorothea Hüper, and Dietrich Manzey. 2008. “Misuse of Automated Decision Aids: Complacency, Automation Bias, and the Impact of Training 
Experience.” International Journal of Human-Computer Studies 66, no. 9: 
688–99.

Lawless, W. F., Ranjeev Mittu, Donald Sofge, and Stephen Russell, eds. 2017. Autonomy 
and Intelligence: A Threat or Savior? Cham, Switzerland: Springer.

Parasuraman, Raja, and Dietrich H. Manzey. 2010. “Complacency and Bias in Human 
Use of Automation: An Attentional Integration.” Human Factors 52, no. 3: 
381–410.





Artificial Intelligence - Ethics Of Autonomous Weapons Systems.

 



Autonomous weapons systems (AWS) are armaments that are designed to make judgments without the constant input of their programmers.

Navigation, target selection, and when to attack opposing fighters are just a few of the decisions such systems must make.

Because of the imminence of this technology, numerous ethical questions and arguments have arisen regarding whether it should be developed and how it should be utilized.

The technology's seeming inevitability prompted Human Rights Watch in 2013 to help launch the Campaign to Stop Killer Robots, which pushes for universal bans on the weapons' usage.

This movement remains active today.

Other academics and military strategists point to AWS' strategic and resource advantages as reasons for continuing to develop and use them.

A discussion of whether it is desirable or feasible to construct an international agreement on their development and/or usage is central to this argument.

Those who advocate for further technological advancement in these areas focus on the advantages that a military power can gain from using AWS.

These technologies promise reduced collateral damage, fewer battle casualties, minimized needless risk, more efficient military operations, less psychological harm to troops, and a way to compensate for armies' declining human numbers.

In other words, they concentrate on the advantages of the weapon to the military that will use it.

The essential assumption in these discussions is that the military's aims are morally worthwhile in and of themselves.

AWS may result in fewer civilian deaths, since the systems can make judgments faster than humans; however, this is not guaranteed, as the decision-making procedures of AWS may instead produce higher civilian fatalities.

However, if they can avoid civilian fatalities and property damage more effectively than conventional fighting, they are more efficient and hence preferable.

In times of conflict, they might also improve efficiency by minimizing resource waste.

Transportation of people and the resources required to keep them alive is a time-consuming and challenging part of battle.

AWS provides a solution to complex logistical issues.

Drones and other autonomous systems don't need rain gear, food, drink, or medical attention, making them less cumbersome and perhaps more successful in completing their objectives.

In these and other ways, AWS are seen as eliminating waste and offering the best possible outcome in a combat situation.

The employment of AWS in military operations is inextricably linked to Just War Theory.

Just War Theory examines whether it is morally acceptable or essential for a military force to engage in war, as well as what activities are ethically justifiable during wartime.

An autonomous system may be used in a military strike only if the attack is justifiable in the first place.

According to this viewpoint, the manner in which one is killed is less important than the justification for one's death.

Those who believe AWS is unethical concentrate on the hazards that such technology entails.

These include scenarios in which enemy combatants capture the weaponry and turn it against the military power that deployed it, as well as scenarios involving increased (and uncontrollable) collateral damage, reduced retaliation capability (against enemy combatant aggressors), and loss of human dignity.

One key concern is whether being killed by a computer, with no person as the final decision-maker, is consistent with human dignity.

There appears to be something demeaning about being killed by an AWS with minimal human involvement.

Another key worry is risk to the user of the technology: if the AWS is taken down (either because of a malfunction or an enemy assault), it may be seized and used against its owner.

Those who oppose the use of AWS are likewise concerned about the concept of just war.

The targeting of civilians by military agents is expressly prohibited under Just War Theory; the only lawful military targets are other military bases or personnel.

However, the introduction of autonomous weapons may imply that a state, particularly one without access to AWS, may be unable to react to military attacks launched by AWS.

In a scenario where one side has access to AWS but the other does not, the side without the weapons will inevitably be without a legal military target, forcing them to either target nonmilitary (civilian) targets or not react at all.

Neither alternative is feasible in terms of ethics or practicality.

Because automated weaponry is widely assumed to be on the horizon, another ethical consideration is how to regulate its use.

Because of the United States' extensive use of remotely controlled drones in the Middle East, this debate has gotten a lot of attention.

Some advocate a worldwide ban on the technology; although this is often seen as unrealistic and hence impractical, these advocates frequently point to the UN protocol banning blinding lasers, which has been ratified by 108 countries.

Others want to create an international convention that controls the proper use of these technologies, with consequences and punishments for nations that break these standards, rather than a full prohibition.

There is currently no such agreement, and each state must decide how to govern the usage of these technologies on its own.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Battlefield AI and Robotics; Campaign to Stop Killer Robots; Lethal Autonomous Weapons Systems; Robot Ethics.



Further Reading

Arkin, Ronald C. 2010. “The Case for Ethical Autonomy in Unmanned Systems.” Journal 
of Military Ethics 9, no. 4: 332–41.

Bhuta, Nehal, Susanne Beck, Robin Geiss, Hin-Yan Liu, and Claus Kress, eds. 2016. 
Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge, UK: Cambridge 
University Press.

Killmister, Suzy. 2008. “Remote Weaponry: The Ethical Implications.” Journal of 
Applied Philosophy 25, no. 2: 121–33.

Leveringhaus, Alex. 2015. “Just Say ‘No!’ to Lethal Autonomous Robotic Weapons.” 
Journal of Information, Communication, and Ethics in Society 13, no. 3–4: 
299–313.

Sparrow, Robert. 2016. “Robots and Respect: Assessing the Case Against Autonomous 
Weapon Systems.” Ethics & International Affairs 30, no. 1: 93–116.





Artificial Intelligence - What Is Automatic Film Editing?

  



Automatic film editing is a method of assembling full motion movies in which an algorithm, taught to obey fundamental cinematography standards, cuts and sequences footage.

Automated editing is part of a larger endeavor, known as intelligent cinematography, to include artificial intelligence into filmmaking.

In the mid-1960s, the legendary director Alfred Hitchcock predicted that an IBM computer would one day be capable of converting a written script into a finished film.

Many of the concepts of modern filmmaking were created by Alfred Hitchcock.

His argument that, if feasible, the size of a person or item in frame should be proportionate to their importance in the plot at that precise moment in time is one well-known rule of thumb.

"Exit left, enter right," which helps the audience follow lateral motions of actors on the screen, and the 180 and 30-degree principles for preserving spatial connections between subjects and the camera, are two more film editing precepts that arose through extensive experience by filmmakers.

Over time, these principles evolved into heuristics that regulate shot selection, editing, and rhythm and tempo.

Joseph Mascelli's Five C's of Cinematography (1965), for example, has become a large knowledge base for making judgments regarding camera angles, continuity, editing, closeups, and composition.

These human-curated guidelines and human-annotated movie stock material and snippets gave birth to the first artificial intelligence film editing systems.

IDIC, created by Warren Sack and Marc Davis at the MIT Media Lab in the early 1990s, is an example of a system from that era.

IDIC is based on Herbert Simon, J. C. Shaw, and Allen Newell's General Problem Solver, an early artificial intelligence program that was intended to solve any general problem using the same fundamental method.

IDIC was used to create fictitious Star Trek television trailers based on a human-specified narrative plan focusing on a certain plot element.

Several film editing systems depend on idioms, or standard techniques for editing and framing recorded action in certain contexts.

The idioms themselves will differ depending on the film's style, the setting, and the action to be shown.

In this manner, experienced editors' expertise may be accessed using case-based reasoning, with prior editing recipes being used to tackle comparable present and future challenges.

Editing for combat sequences, like editing for ordinary character conversations, follows standard idiomatic patterns.

This is the method used by Li-wei He, Michael F. Cohen, and David H. Salesin in their Virtual Cinematographer, which applies expert idiom knowledge to the editing of fully computer-generated video for interactive virtual environments.

He's group created the Declarative Camera Control Language (DCCL), which formalizes the control of camera positions in the editing of CGI animated films to match cinematographic conventions.

Researchers have lately begun experimenting with deep learning algorithms and training data extracted from existing collections of well-known films with good cinematographic quality to develop recommended best cuts of new films.

Many of the latest apps may be used with mobile, drone, or portable devices.

Thanks to easy automated video editing, short and engaging films assembled from footage shot by amateurs on smartphones are projected to become a preferred medium of interaction on future social media.

Photography currently fills that role.

In machinima films generated with 3D virtual game engines and virtual actors, automatic film editing is also used as an editing technique.




~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Workplace Automation.


Further Reading

Galvane, Quentin, Rémi Ronfard, and Marc Christie. 2015. “Comparing Film-Editing.” In Eurographics Workshop on Intelligent Cinematography and Editing, edited by William H. Bares, Marc Christie, and Rémi Ronfard, 5–12. Aire-la-Ville, Switzerland: Eurographics Association.

He, Li-wei, Michael F. Cohen, and David H. Salesin. 1996. “The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing.” In 

Proceedings of SIGGRAPH ’96, 217–24. New York: Association for Computing Machinery.

Ronfard, Rémi. 2012. “A Review of Film Editing Techniques for Digital Games.” In Workshop on Intelligent Cinematography and Editing. https://hal.inria.fr/hal-00694444/.

Artificial Intelligence - What Is Automated Multiphasic Health Testing?

 




Automated Multiphasic Health Testing (AMHT) is an early medical computer system for screening large numbers of ill or healthy people in a short period of time semiautomatically.

Lester Breslow, a public health official, pioneered the AMHT concept in 1948, combining standard automated medical questionnaires with mass screening procedures for groups of individuals being examined for specific illnesses such as diabetes, tuberculosis, or heart disease.

Multiphasic health testing involves integrating a number of tests into a single package to screen a group of individuals for different diseases, illnesses, or injuries.

AMHT might be related to regular physical exams or health programs.

Humans are subjected to examinations similar to those used in state inspections of autos.

In other words, AMHT approaches preventative medical care in a factory-like manner.

In the 1950s, Automated Multiphasic Health Testing (AMHT) became popular, allowing health care networks to swiftly screen new candidates.

In 1951, the Kaiser Foundation Health Plan began offering a Multiphasic Health Checkup to its members.

Morris F. Collen, an electrical engineer and physician, was the program's director from 1961 until 1979.

The "Kaiser Checkup," which used an IBM 1440 computer to crunch data from patient interviews, lab testing, and clinical findings, looked for undetected illnesses and made treatment suggestions.

Patients hand-sorted 200 prepunched cards with printed questions requiring "yes" or "no" replies at the questionnaire station (one of twenty such stations).

The computer shuffled the cards and used a probability ratio test devised by Jerzy Neyman, a well-known statistician.

Electrocardiographic, spirographic, and ballistocardiographic medical data were also captured by Kaiser's computer system.

A Kaiser Checkup took around two and a half hours to complete.

BUPA in the United Kingdom and a nationwide program created by the Swedish government are two examples of similar AMHT initiatives that have been introduced in other countries.

The popularity of computerized health testing has fallen in recent decades.

There are issues concerning privacy as well as financial considerations.

Working with AMHT, doctors and computer scientists learned that the body typically masks symptoms.

A sick person may pass through diagnostic devices successfully one day and then die the next.

Electronic medical recordkeeping, on the other hand, has succeeded where AMHT has failed.

Without physical handling or duplication, records may be sent, modified, and returned.

Multiple health providers may utilize patient charts at the same time.

Uniform data input ensures readability and consistency in structure.

Summary reports may now be generated automatically from the information gathered in individual electronic medical records using electronic medical records software.

These "big data" reports make it possible to monitor changes in medical practice as well as evaluate results over time.

Summary reports also enable cross-patient analysis, a detailed algorithmic examination of prognoses by patient groups, and the identification of risk factors prior to the need for therapy.

The application of deep learning algorithms to medical data has sparked a surge of interest in so-called cognitive computing for health care.

IBM's Watson system and Google DeepMind Health, two current leaders, promise changes in eye illness and cancer detection and treatment.

Also unveiled by IBM is the Medical Sieve system, which analyzes both radiological images and textual documents.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Clinical Decision Support Systems; Computer-Assisted Diagnosis; INTERNIST-I and QMR.


Further Reading


Ayers, W. R., H. M. Hochberg, and C. A. Caceres. 1969. “Automated Multiphasic Health Testing.” Public Health Reports 84, no. 7 (July): 582–84.

Bleich, Howard L. 1994. “The Kaiser Permanente Health Plan, Dr. Morris F. Collen, and Automated Multiphasic Testing.” MD Computing 11, no. 3 (May–June): 136–39.

Collen, Morris F. 1965. “Multiphasic Screening as a Diagnostic Method in Preventive Medicine.” Methods of Information in Medicine 4, no. 2 (June): 71–74.

Collen, Morris F. 1988. “History of the Kaiser Permanente Medical Care Program.” Interviewed by Sally Smith Hughes. Berkeley: Regional Oral History Office, Bancroft Library, University of California.

Mesko, Bertalan. 2017. “The Role of Artificial Intelligence in Precision Medicine.” Expert Review of Precision Medicine and Drug Development 2, no. 5 (September): 239–41.

Roberts, N., L. Gitman, L. J. Warshaw, R. A. Bruce, J. Stamler, and C. A. Caceres. 1969. “Conference on Automated Multiphasic Health Screening: Panel Discussion, Morning Session.” Bulletin of the New York Academy of Medicine 45, no. 12 (December): 1326–37.



Artificial Intelligence - What Is Automated Machine Learning?

 


 

Machine learning algorithms are created with the goal of detecting and describing complex patterns in massive datasets.

By taking the guesswork out of constructing these analytical tools, automated machine learning (AutoML) aims to make them available to everyone interested in big data research.

"Computational analysis pipelines" is the name given to these instruments.

While there is still a lot of work to be done in automated machine learning, early achievements show that it will be an important tool in the arsenal of computer and data scientists.

It will be critical to customize these software packages to beginner users, enabling them to undertake difficult machine learning activities in a user-friendly way while still allowing for the integration of domain-specific knowledge and model interpretation and action.

These latter objectives have received less attention, but they will need to be addressed in future study before AutoML is able to tackle complicated real-world situations.

Automated machine learning is a relatively young field of research that has risen in popularity in the past ten years as a consequence of the widespread availability of strong open-source machine learning frameworks and high-performance computers.

AutoML software packages are currently available in both open-source and commercial versions.

Many of these packages support the exploration of machine learning pipelines, which can include feature transformation algorithms such as discretization (which converts continuous equations, functions, models, and variables into discrete counterparts for digital computers), feature engineering algorithms such as principal components analysis (which discards "less important" dimensions of the data while keeping a subset of "more important" variables), and so on.
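
As a minimal sketch of what such a pipeline can look like in code, here is a scikit-learn version; the dataset, component choices, and settings are illustrative assumptions, not a prescription:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer

# A pipeline chaining feature transformation (discretization),
# feature engineering (PCA), and a classifier.
pipe = Pipeline([
    ("discretize", KBinsDiscretizer(n_bins=5, encode="ordinal")),
    ("reduce", PCA(n_components=10)),
    ("classify", RandomForestClassifier(n_estimators=100, random_state=0)),
])

X, y = load_breast_cancer(return_X_y=True)
print(cross_val_score(pipe, X, y, cv=5).mean())  # cross-validated accuracy
```

AutoML systems search over many such pipelines, varying both the components and their settings.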

Bayesian optimization, ensemble techniques, and genetic programming are examples of stochastic search strategies utilized in AutoML.

Stochastic search techniques may be used to solve deterministic issues that have random noise or deterministic problems that have randomness injected into them.

New methods for extracting "signal from noise" in datasets, as well as finding insights and making predictions, are currently being developed and tested.

One of the difficulties with machine learning is that each algorithm examines data in a unique manner.

That is, each algorithm recognizes and classifies various patterns.

Linear support vector machines, for example, are excellent at detecting linear patterns, whereas k-nearest neighbor methods are effective at detecting nonlinear patterns.

The problem is that scientists don't know which algorithm(s) to employ when they start their job since they don't know what patterns they're looking for in the data.

The majority of users select an algorithm that they are acquainted with or that seems to operate well across a variety of datasets.

Some people may choose an algorithm because the models it generates are simple to compare.

There are a variety of reasons why various algorithms are used for data analysis.

Nonetheless, the approach selected may not be optimal for a particular data set.

This task is especially tough for a new user who may not be aware of the strengths and disadvantages of each algorithm.

A grid search is one way to address this issue.

Multiple machine learning algorithms and parameter settings are applied to a dataset in a systematic manner, with the results compared to determine which approach is the best.

This is a frequent strategy that may provide positive outcomes.

The grid search's drawback is that it may be computationally demanding when a large number of methods, each with several parameter values, need to be examined.
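
As a hedged sketch of what a grid search looks like in code (scikit-learn's GridSearchCV with a random forest; the dataset and grid values are arbitrary illustrations, not recommendations):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# 3 x 3 x 3 = 27 candidate configurations; with 5-fold cross-validation
# that is already 135 model fits for one small grid.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={
        "max_features": [2, 5, 10],
        "n_estimators": [10, 100, 500],
        "min_samples_leaf": [1, 5, 10],
    },
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Even this toy grid multiplies quickly: every added value or algorithm multiplies the number of fits.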

Random forests, for example, are classification algorithms composed of numerous decision trees, with a number of commonly used parameters that must be fine-tuned for best results on a specific dataset.

In the accepted machine learning approach, parameters are configuration variables that are adjusted to fit the data.

A typical parameter is the maximum number of features that may be used in the decision trees that are constructed and evaluated.

Automated machine learning may aid in the management of the complicated, computationally costly combinatorial explosion that occurs during the execution of specialized investigations.

A single parameter might have 10 distinct configurations, for example.

Another parameter might be the number of decision trees to be included in the forest, which could be 10 in total.

The minimum number of samples permitted in the "leaves" of the decision trees might take another ten possible values.

Based on the examination of just three parameters, this example gives 1000 distinct alternative parameter configurations.

A data scientist looking at ten different machine learning methods, each with 1000 different parameter values, would have to undertake 10,000 different studies.
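
A few lines of code make this combinatorial explosion concrete; the value ranges below are the hypothetical ones from the example above:

```python
from itertools import product

max_features     = range(1, 11)        # 10 candidate values
n_estimators     = range(10, 110, 10)  # 10 candidate values
min_samples_leaf = range(1, 11)        # 10 candidate values

configs = list(product(max_features, n_estimators, min_samples_leaf))
print(len(configs))       # 1000 configurations for one algorithm
print(10 * len(configs))  # 10,000 analyses across ten algorithms
```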

Hyperparameters, which are characteristics of the analyses that are established ahead of time and hence not learnt from the data, are added on top of these studies.

They are often established by the data scientist using a variety of rules of thumb or values derived from previous challenges.

Comparisons of numerous alternative cross-validation procedures or the influence of sample size on findings are examples of hyperparameter setups.

Hundreds of hyperparameter combinations may need to be assessed in a typical case.

The data scientist would have to execute a total of one million analyses using a mix of machine learning algorithms, parameter settings, and hyperparameter settings.

Given the computer resources available to the user, so many distinct studies might be prohibitive depending on the sample size of the data to be examined, the number of features, and the kinds of machine learning algorithms used.

Using a stochastic search to approximate the optimum mix of machine learning algorithms, parameter settings, and hyperparameter settings is an alternate technique.

Until a computational limit is reached, a random number generator is employed to sample from all potential possibilities.

Before making a final decision, the user manually explores various parameter and hyperparameter settings around the optimal technique.

This has the virtue of being computationally controllable, but it has the disadvantage of being stochastic, since chance may not explore the best combinations.
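
A minimal sketch of this random-sampling strategy, assuming scikit-learn, a random forest classifier, and an arbitrary budget of twenty-five draws standing in for the computational limit:

```python
import random

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = random.Random(0)

best_score, best_params = 0.0, None
for _ in range(25):  # computational budget: 25 random draws
    params = {
        "max_features": rng.randint(1, 10),
        "n_estimators": rng.choice([10, 50, 100, 500]),
        "min_samples_leaf": rng.randint(1, 10),
    }
    model = RandomForestClassifier(random_state=0, **params)
    score = cross_val_score(model, X, y, cv=5).mean()  # cross-validation
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
```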

To address this, a stochastic search algorithm with a heuristic element—a practical technique, guide, or rule—may be created that can adaptively explore algorithms and settings while improving over time.

Because they automate the search for optimum machine learning algorithms and parameters, approaches that combine stochastic searches with heuristics are referred to as automated machine learning.

A stochastic search could begin by creating a variety of machine learning algorithm, parameter setting, and hyperparameter setting combinations at random and then evaluate each one using cross-validation, a method for evaluating the effectiveness of a machine learning model.

The best of these is chosen, modified at random, and assessed once again.

This procedure is continued until a computational limit or a performance goal has been met.

This stochastic search is guided by the heuristic algorithm.
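
One simple version of such a heuristic is hill climbing: mutate the best configuration found so far, re-evaluate it with cross-validation, and keep the mutation only if it improves the score. The sketch below makes the same illustrative assumptions as the earlier examples (scikit-learn, a random forest, arbitrary value ranges):

```python
import random

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = random.Random(0)

def evaluate(params):
    """Score a configuration with 5-fold cross-validation."""
    model = RandomForestClassifier(random_state=0, **params)
    return cross_val_score(model, X, y, cv=5).mean()

# Start from a random configuration, then repeatedly mutate it.
best = {"max_features": rng.randint(1, 10),
        "min_samples_leaf": rng.randint(1, 10)}
best_score = evaluate(best)

for _ in range(20):  # computational limit: 20 mutations
    candidate = dict(best)
    key = rng.choice(list(candidate))
    candidate[key] = min(10, max(1, candidate[key] + rng.choice([-2, -1, 1, 2])))
    score = evaluate(candidate)
    if score > best_score:  # greedy heuristic: keep only improvements
        best, best_score = candidate, score

print(best, best_score)
```

Production AutoML systems replace this greedy rule with more sophisticated heuristics, such as Bayesian optimization or genetic programming, but the evaluate-mutate-select loop is the same.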

Optimal search strategy development is a hot topic in academia right now.

There are various benefits to using AutoML.

To begin with, it has the potential to be more computationally efficient than the exhaustive grid search method.

Second, it makes machine learning more accessible by removing some of the guesswork involved in choosing the best machine learning algorithm and its many parameters for a particular dataset.

This allows even the most inexperienced user to benefit from machine learning.

Third, if generalizability measures are incorporated into the heuristic being used, it may provide more repeatable outcomes.

Fourth, incorporating complexity metrics into the heuristic might result in more understandable outcomes.

Fifth, if expert knowledge is incorporated into the heuristic, it may produce more actionable findings.

AutoML techniques do, however, present certain difficulties.

The first is the risk of overfitting, which occurs when numerous distinct methods are evaluated, resulting in an analysis that matches existing data too closely but does not fit or forecast unknown or fresh data.

The more analytical techniques are applied to a dataset, the more likely the analysis is to learn the data's noise, resulting in a model that cannot generalize to new data.

With any AutoML technique, this must be thoroughly handled.

Second, AutoML is computationally demanding in and of itself.

Third, AutoML approaches may create very complicated pipelines including several machine learning algorithms.

This may make interpretation considerably more challenging than just selecting a single analytic method.

Fourth, this is a very new field.

Despite some promising early instances, ideal AutoML solutions may not have yet been devised.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Deep Learning.

Further Reading

Feurer, Matthias, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. 2015. “Efficient and Robust Automated Machine Learning.” In Advances in Neural Information Processing Systems, 28. Montreal, Canada: Neural Information Processing Systems. http://papers.nips.cc/paper/5872-efficient-and-robust-automated-machine-learning.

Hutter, Frank, Lars Kotthoff, and Joaquin Vanschoren, eds. 2019. Automated Machine Learning: Methods, Systems, Challenges. New York: Springer.



Artificial Intelligence - What Has Been Isaac Asimov's Influence On AI?



(c. 1920–1992) Isaac Asimov was a professor of biochemistry at Boston University and a well-known science fiction novelist.

Asimov was a prolific writer in a variety of genres, and his corpus of science fiction has had a major impact on not just the genre, but also on ethical concerns surrounding science and technology.

Asimov was born in Russia.

He celebrated his birthday on January 2, 1920, despite not knowing his official birth date.

In 1923, his family moved to New York City.

At the age of sixteen, Asimov applied to Columbia College, the undergraduate school of Columbia University, but was refused admission owing to anti-Semitic restrictions on the number of Jewish students.

He finally enrolled in Seth Low Junior College, an affiliated undergraduate institution.

Asimov switched to Columbia College when Seth Low closed its doors, but obtained a Bachelor of Science rather than a Bachelor of Arts, which he regarded as "a gesture of second-class citizenship" (Asimov 1994, n.p.).

Asimov grew interested in science fiction about this time and started writing letters to science fiction periodicals, ultimately attempting to write his own short tales.

His debut short story, "Marooned off Vesta," was published in Amazing Stories in 1938.

His early works placed him in the company of science fiction pioneers like Robert Heinlein.

After graduation, Asimov attempted, but failed, to enroll in medical school.

Instead, at the age of nineteen, he enrolled in graduate school for chemistry.

World War II halted Asimov's graduate studies, and at Heinlein's recommendation, he completed his military duty by working at the Naval Air Experimental Station in Philadelphia.

He created short tales while stationed there, which constituted the foundation for Foundation (1951), one of his most well-known works and the first of a multi-volume series that he would eventually tie to numerous of his other pieces.

He earned his doctorate from Columbia University in 1948.

The pioneering Robots series by Isaac Asimov (1950s–1990s) has served as a foundation for ethical norms to alleviate human worries about technology gone awry.

The Three Laws of Robotics, for example, are often mentioned as guiding principles for artificial intelligence and robotics.

The Three Laws were initially mentioned in the short story "Runaround" (1942), which was eventually collected in I, Robot (1950):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A "zeroth rule" is devised in Robots and Empire (1985) in order for robots to prevent a scheme to destroy Earth: "A robot may not damage mankind, or enable humanity to come to danger via inactivity." The original Three Laws are superseded by this statute.

Characters in Asimov's Robot series of books and short stories are often tasked with solving a mystery in which a robot seems to have broken one of the Three Laws.

In "Runaround," for example, two field experts with U.S. Robots and Mechanical Men, Inc. discover they're in danger of being stuck on Mercury since their robot "Speedy" hasn't returned with selenium required to power a protective shield in an abandoned base to screen them from the sun.

Speedy has malfunctioned because he is stuck in a conflict between the Second and Third Laws: when the robot approaches the selenium, it is obliged to recede in order to defend itself from a corrosive quantity of carbon monoxide near the selenium.

The humans must figure out how to apply the Three Laws to free Speedy from a conflict-induced feedback loop.

More intricate arguments concerning the application of the Three Laws appear in later tales and books.

The Machines manage the world's economy in "The Evitable Conflict" (1950), and "robopsychologist" Susan Calvin notices that they have changed the First Law into a predecessor of Asimov's zeroth law: "the Machines work not for any one human being, but for all humanity" (Asimov 2004b, 222).

Calvin is concerned that the Machines are guiding mankind toward "the ultimate good of humanity" (Asimov 2004b, 222), even if humanity is unaware of what it is.

Furthermore, Asimov's Foundation series (1940s–1990s) coined the word "psychohistory," which may be read as foreshadowing the algorithms that underpin artificial intelligence today.

In Foundation, the main character Hari Seldon creates psychohistory as a method of making broad predictions about the future behavior of extremely large groups of people, such as the breakdown of civilization (here, the Galactic Empire) and the ensuing Dark Ages.

Seldon, however, claims that psychohistory may shorten the coming era of anarchy, since a method that can foretell the fall can also make pronouncements about the dark times that follow:

The Empire has been in existence for almost a thousand years.

The next dark ages will last thirty thousand years, not twelve.

A Second Empire will develop, but there will be a thousand generations of suffering humanity between it and our civilization...

If my party is permitted to operate immediately, it is conceivable to cut the period of anarchy to a single millennium.

(Asimov 2004a, 30–31)

Psychohistory produces "a mathematical prediction" (Asimov 2004a, 30), much as an artificial intelligence would produce a forecast.

In the Foundation series, Seldon establishes the Foundation, a hidden collection of people preserving humanity's collective knowledge and thus serving as the seed of a hypothetical second Galactic Empire.

In later parts of the Foundation series, the Foundation is threatened by the Mule, a mutant and consequently an aberration that psychohistory's predictive methods could not anticipate.

Although Seldon's thousand-year plan depends on macro conceptions—"the future isn't nebulous... Seldon has computed and plotted it" (Asimov 2004a, 100)—the friction between large-scale theories and individual actions is a crucial factor driving Foundation: individual acts may save or destroy the scheme.

Asimov's works were frequently prescient, prompting some to label his work "future history" or "speculative fiction." The ethical challenges he posed are often cited in legal, political, and policy arguments years after they were published.

For example, in 2007, the South Korean Ministry of Commerce, Industry, and Energy established a Robot Ethics Charter based on the Three Laws, predicting that by 2020, every Korean household would have a robot.

The British House of Lords' Artificial Intelligence Committee adopted a set of guidelines in 2017 that are similar to the Three Laws.

The Three Laws' utility has been questioned by others.

First, some opponents point out that robots are often employed for military purposes and that the Three Laws would restrict this usage, a restriction Asimov would likely have endorsed, given anti-war stories such as "The Gentle Vultures" (1957).

Second, some argue that today's robots and AI applications differ significantly from those depicted in the Robot series.

Asimov's imaginary robots are powered by a "positronic brain," which remains science fiction and beyond current computing capacity.

Third, the Three Laws are explicitly fiction, and Asimov's Robot series is built on their ambiguities and misinterpretations, both to raise ethical questions and for dramatic effect.

Critics claim that the Three Laws cannot serve as a true moral framework for controlling AI or robotics since they may be misunderstood just like any other legislation.

Finally, some argue that these ethical principles should be applied to all people.

Asimov died in 1992 from complications of AIDS, contracted from a tainted blood transfusion received during heart bypass surgery in 1983.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Beneficial AI, Asilomar Meeting on; Pathetic Fallacy; Robot Ethics.


Further Reading

Asimov, Isaac. 1994. I, Asimov: A Memoir. New York: Doubleday.

Asimov, Isaac. 2002. It’s Been a Good Life. Amherst: Prometheus Books.

Asimov, Isaac. 2004a. The Foundation Novels. New York: Bantam Dell.

Asimov, Isaac. 2004b. I, Robot. New York: Bantam Dell.





Artificial Intelligence - Animal Consciousness, Social Cognition, Soul, And AI.



Researchers have gained a growing understanding for animal and other nonhuman intelligences in recent decades.

Cases have been made that ravens, bower birds, gorillas, elephants, cats, crows, dogs, dolphins, chimps, grey parrots, jackdaws, magpies, beluga whales, octopi, and a variety of other creatures possess consciousness or sentience, sophisticated kinds of cognition, and grounds for personal rights.

The Cambridge Declaration on Consciousness and the separate Nonhuman Rights Project mirror the contemporary struggle against racism, classism, sexism, and ethnocentrism by adding one more prejudice to the list: "speciesism," coined in 1970 by psychologist Richard Ryder and popularized by philosopher Peter Singer.

Animal awareness, in fact, may open the way for the investigation and appreciation of other sorts of postulated intelligences, including artificial intelligences (traditionally regarded, as animals once were, as "mindless machines") and alien intelligences.

The knowability of subjective experience and objective properties of other forms of consciousness is one of the most significant topics professionals in many professions are grappling with today.

"What is it like to be a bat?" reportedly enquired philosopher Thomas Nagel, particularly because bats can use echolocation but humans cannot.

Most selfishly, a greater knowledge of animal consciousness might lead, by comparison, to a better comprehension of the human mind.

Looking to animals may also give fresh insights into the principles behind the emergence of consciousness in humans, which may aid scientists in equipping robots with comparable characteristics, appreciating their moral standing, or sympathizing with their actions.

Animals have been utilized as a tool to achieve human goals rather than as ends in and of themselves throughout history.

Cows produce milk that is consumed by humans.

Sheep produce wool, which is used to manufacture clothes.

Horses used to be used for transportation and power in agriculture, but today they are used for amusement and gambling.

The "discovery" of animal awareness may imply that humans are no longer at the center of their own mental world.

The twentieth-"Cognitive century's Revolution," which ostensibly eliminated the soul as a scientific explanation for mental life, opened the door to studying and conducting experiments in animal perception, memory, cognition, and reasoning, as well as exploring the possibilities for incorporating sophisticated information processing convolutions and integrative capabilities into machines.

The possibility of a fundamental cognitive "software" that is shared by humans, animals, and artificial general intelligences is often addressed in emerging interdisciplinary sciences like neuroscience, evolutionary psychology, and computer science.

In his book Man and Dolphin (1961), independent researcher John Lilly was one of the first to propose that dolphins are not only intelligent, but also exhibit traits and communication abilities that are superior to humans in many aspects.

Many of his results have subsequently been validated by other researchers such as Lori Marino and Diana Reiss, and a broad consensus has been formed that dolphins' self-awareness falls somewhere between humans and chimpanzees.

Dolphins have been spotted fishing together with human fishers, while Pelorus Jack, the most renowned dolphin in history, reliably and freely accompanied ships for twenty-four years over the treacherous rocks and tidal surges of Cook Strait in New Zealand.

Some animals seem to pass the well-known self-recognition mirror test.

Dolphins and killer whales, chimps and bonobos, magpies, and elephants are among them.

The test is often performed by painting a tiny mark on an animal in a location that it cannot see without using a mirror.

Animals are said to recognize themselves if they touch the mark on their own bodies after seeing it reflected in the mirror.

Certain detractors claim that the mirror-mark test is unfair to some animal species because it favors vision over the other senses.

The study of animal consciousness, according to SETI experts, may help humans contend with the existential implications of self-aware alien intelligences.

Similarly, work with animals has sparked curiosity about the possibility of consciousness in artificial intelligences.

Consider one example: in The Scientist (1978), John Lilly discusses a hypothetical Solid State Intelligence (SSI) that would eventually evolve from the labor of human computer scientists and engineers.

This SSI would be built out of computer components, develop its own integrations and advancements, and eventually self-replicate to confront and destroy humans.

Some human beings would be protected by the SSI in domed "reservations" that it would maintain and govern.

The SSI would eventually develop the capacity to move the planet and traverse the cosmos in search of other intelligences similar to itself.

Claims of self-consciousness in artificial intelligence have been criticized on many grounds.

Machines, according to John Searle, lack intentionality, or the capacity to discover meaning in the computations they do.

Inanimate things are seldom considered to have free will, and so are not considered to be human.

Furthermore, they may be regarded as lacking something, such as a soul, the capacity to distinguish between good and evil, or emotion and creativity.

Animal consciousness findings, on the other hand, have introduced a new dimension to debates over animal and robot rights, since they support the claim that these animals can know whether they are having good or unpleasant experiences.

They also pave the way for powerful kinds of social cognition like attachment, communication, and empathy to be recognized.

A long list of chimps, gorillas, orangutans, and bonobos, including the well-known Washoe, Koko, Chantek, and Kanzi, have mastered an incredible number of gestures in American Sign Language or artificial lexigrams (keyboard symbols of objects or ideas), raising the possibility of true interspecies social exchange.

The Great Ape Project was formed in 1993 by an international group of primatologists with the stated goal of granting these creatures fundamental human rights to life, liberty, and protection from torture.

They argued that these creatures should be granted nonhuman personhood and brought to the forefront of the larger mammalian "community of equals."

Many well-known marine mammal biologists have become outspoken opponents of fishermen's indiscriminate slaughter of cetaceans and of their use in captivity shows.

In 2007, American lawyer Steven Wise created the Nonhuman Rights Project at the Center for the Expansion of Fundamental Rights.

The Nonhuman Rights Project aims to win legal personhood for animals that are today considered the property of legal persons.

These core personhood rights would include physical liberty (against incarceration) and bodily integrity (against laboratory experimentation).

According to the group, there is no common law norm that prevents animal personhood, and the law finally allowed human slaves to become legal people without precedent via the writ of habeas corpus.

Individuals may use habeas corpus writs to assert their right to liberty and to challenge unjust confinement.

The Nonhuman Rights Project has been fighting for animal rights in the courts since 2013.

The first case was brought in New York State to protect the rights of four confined chimps, and it included an affidavit from renowned primatologist Jane Goodall as evidence.

The North American Primate Sanctuary Alliance requested that the chimpanzees be freed and relocated to their reserve.

The applications and appeals filed by the organization were refused.

Steven Wise has taken heart from the fact that, in one judgment, the Supreme Court acknowledged that the question of personhood is determined by public policy and social norms rather than by biology.

The Cambridge Declaration on Consciousness was signed by a group of neuroscientists during the Francis Crick Memorial Conference in 2012.

David Edelman of the Neurosciences Institute in La Jolla, California, Christof Koch of the California Institute of Technology, and Philip Low of Stanford University were the three scientists most directly engaged in the document's creation.

All signatories agreed that scientific methodologies have shown evidence that mammal brain circuits seem to be linked to consciousness, affective moods, and emotional actions.

Birds seem to have developed awareness in a similar way to mammals, according to the researchers.

The researchers also cited REM sleep patterns in zebra finches and the equivalent effects of hallucinogenic drugs as evidence of conscious behavior in animals.

Despite lacking a neocortex for higher-order brain activities, invertebrate cephalopods seem to exhibit conscious awareness, according to the declaration's signers.

Such views have not gone unchallenged.

According to attorney Richard Cupp, humans should continue to bear legal responsibility for animal care.

He also contends that animal personhood may undermine the rights and autonomy of people with cognitive disabilities, leaving them exposed to diminished legal personhood.

Cupp also believes that animals are outside of the human moral community, and so outside of the social contract that established personhood rights in the first place.

Daniel Dennett, a philosopher and cognitive scientist, is a vocal skeptic of animal sentience, arguing that consciousness is a "fiction" that can only be generated through the use of human language.

Because animals cannot make up such stories, he argues, they cannot be conscious.

Because consciousness is a story we tell ourselves, and science is based on objective facts and universal descriptions rather than stories, scientific disciplines will never be able to grasp what it means to be a conscious animal.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Nonhuman Rights and Personhood; Sloman, Aaron.

Further Reading

Dawkins, Marian S. 2012. Why Animals Matter: Animal Consciousness, Animal Welfare, and Human Well-being. Oxford, UK: Oxford University Press.

Kaplan, Gisela. 2016. “Commentary on ‘So Little Brain, So Much Mind: Intelligence and Behavior in Nonhuman Animals’ by Felice Cimatti and Giorgio Vallortigara.” Italian Journal of Cognitive Science 4, no. 8: 237–52.

Solum, Lawrence B. 1992. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review 70, no. 4: 1231–87.

Wise, Steven M. 2010. “Legal Personhood and the Nonhuman Rights Project.” Animal Law Review 17, no. 1: 1–11.

Wise, Steven M. 2013. “Nonhuman Rights to Personhood.” Pace Environmental Law Review 30, no. 3: 1278–90.



What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...