
Artificial Intelligence - Who Was Alan Turing?

 


 

Alan Mathison Turing OBE FRS (1912–1954) was a British logician and mathematician.

He is known as the "Father of Artificial Intelligence" and "The Father of Computer Science." 

Turing earned a first-class honors degree in mathematics from King's College, Cambridge, in 1934.

Turing received his PhD from Princeton University after a fellowship at King's College, where he studied under American mathematician Alonzo Church.

Turing wrote numerous important publications during his studies, including "On Computable Numbers, with an Application to the Entscheidungsproblem," which proved that the so-called "decision problem" had no solution.

The decision problem asks whether there is a general method for determining, for any assertion within a mathematical system, whether that assertion can be proved.

This paper also introduced the hypothetical Turing machine (in essence an idealized computer), which could carry out any mathematical operation that can be expressed as an algorithm.
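The idea is easy to make concrete in modern code. The sketch below is a minimal Turing machine simulator with a made-up example program (not one of Turing's): the machine is nothing more than a transition table plus a read/write head moving over a tape.

```python
from collections import defaultdict

def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """Run a transition table: (state, symbol) -> (new_state, write, move)."""
    tape = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = program[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip(blank)

# A toy program that flips every bit, then halts at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flip, "1011"))  # -> 0100
```

Despite its simplicity, this table-plus-tape scheme is, in principle, enough to express any algorithm.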


Turing is best known for his codebreaking work at Bletchley Park's Government Code and Cypher School (GC&CS) during World War II (1939–1945).

Turing's work at GC&CS included heading Hut 8, which was tasked with cracking the German Enigma and other very difficult naval ciphers.

Turing's work is widely credited with shortening the war by years and saving millions of lives, though its impact is hard to measure with precision.

Turing wrote "The Applications of Probability to Cryptography" and "Paper on Statistics of Repetitions" during his tenure at GC&CS, both of which were held secret for seventy years by the Government Communications Headquarters (GCHQ) until being given to the UK National Archives in 2012.



Following WWII, Turing enrolled at the Victoria University of Manchester to study mathematical biology while continuing his work in mathematics, stored-program digital computers, and artificial intelligence.

Turing's 1950 paper "Computing Machinery and Intelligence" looked into artificial intelligence and introduced the concept of the Imitation Game (also known as the Turing Test), in which a human judge uses a set of written questions and responses to try to distinguish between a computer program and a human.

If the computer program imitates a person to the point that the human judge cannot discern the difference between the computer program's and the human's replies, the program has passed the test, indicating that it is capable of intelligent reasoning.


Turochamp, a chess program written by Turing and his colleague D.G. Champernowne, was meant to be executed by a computer, but no machine with adequate capacity existed to test the program.

Turing instead ran the algorithm by hand to test the program.

Turing was well recognized during his lifetime, even though much of his work remained secret until after his death.


Turing was appointed an Officer of the Order of the British Empire (OBE) in 1946 and elected a Fellow of the Royal Society (FRS) in 1951.

The Turing Award, named after him, is given annually by the Association for Computing Machinery for contributions to the area of computing.

The Turing Award, which comes with a $1 million reward, is commonly recognized as the Nobel Prize of Computing.


Turing was outspoken about his sexuality at a time when homosexuality was still illegal in the United Kingdom.

Turing was charged in 1952 under Section 11 of the Criminal Law Amendment Act 1885 with "gross indecency."

Turing was found guilty, granted probation, and sentenced to a year of "chemical castration," in which he was injected with synthetic estrogen.


Turing's conviction had an influence on his career as well.


His security clearance was withdrawn, and he was compelled to stop working for GCHQ as a cryptographer.

Following successful campaigning for an apology and pardon, the British government enacted the "Alan Turing law" in 2017, which retroactively pardoned thousands of men convicted under Section 11 and other historical laws.


In 1954, Turing died of cyanide poisoning.

Although officially ruled a suicide, Turing's death may have been caused by accidental inhalation of cyanide vapors.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Chatbots and Loebner Prize; General and Narrow AI; Moral Turing Test; Turing Test.


References And Further Reading

Hodges, Andrew. 2004. “Turing, Alan Mathison (1912–1954).” In Oxford Dictionary of National Biography. https://www.oxforddnb.com/view/10.1093/ref:odnb/9780198614128.001.0001/odnb-9780198614128-e-36578.

Lavington, Simon. 2012. Alan Turing and His Contemporaries: Building the World’s First Computers. Swindon, UK: BCS, The Chartered Institute for IT.

Sharkey, Noel. 2012. “Alan Turing: The Experiment that Shaped Artificial Intelligence.” BBC News, June 21, 2012. https://www.bbc.com/news/technology-18475646.



Artificial Intelligence - Who Is Mark Tilden?

 


Mark Tilden (1961–) is a Canadian freelance designer of biomorphic robots.

A number of his robots are sold as toys.

Others have appeared in television and cinema as props.

Tilden is well known for his opposition to the notion that strong artificial intelligence is required for complex robots.

Tilden is a forerunner in the field of BEAM robotics (biology, electronics, aesthetics, and mechanics).

To replicate biological neurons, BEAM robots use analog circuits and continuously varying signals rather than digital electronics and microprocessors.

Biomorphic robots are programmed to change their gaits in order to save energy.

When such robots encounter obstacles or changes in the underlying terrain, they are knocked out of their lowest-energy state, forcing them to adopt a new walking pattern.

The mechanics of the underlying machine rely heavily on self-adaptation.

After failing to develop a traditional electronic robot butler in the late 1980s, Tilden resorted to BEAM type robots.

The robot, though programmed with Isaac Asimov's Three Laws of Robotics, could barely vacuum floors.



After hearing MIT roboticist Rodney Brooks speak at the University of Waterloo on the advantages of simple sensorimotor, stimulus-response robotics over computationally intensive mobile robots, Tilden completely abandoned the project.

Tilden left Brooks' lecture wondering whether dependable robots might be built without computer processors or artificial intelligence.

Rather than having intelligence written into the robot's programming, Tilden hypothesized that intelligence might emerge from the robot's operating environment and from the emergent features of that world.

Tilden studied and developed a variety of unusual analog robots at the Los Alamos National Laboratory in New Mexico, employing fast prototyping and off-the-shelf and cannibalized components.



Los Alamos was looking for robots that could operate in unstructured, unpredictable, and possibly hazardous conditions.

Tilden built almost a hundred robot prototypes.

His SATBOT autonomous spacecraft prototype could align itself with the Earth's magnetic field on its own.

He built fifty insectoid robots capable of creeping through minefields and identifying explosive devices for the Marine Corps Base Quantico.

A robot known as the "aggressive ashtray" spat water at smokers.

A "solar spinner" cleaned windows.

The actions of an ant were reproduced by a biomorph made from five broken Sony Walkmans.

Tilden started building Living Machines powered by solar cells at Los Alamos.

These machines ran extremely slowly because of their energy source, but they were dependable and efficient over long periods, often more than a year.

Tilden's first robot designs were based on thermodynamic conduit engines, namely tiny and efficient solar engines that could fire single neurons.

His "nervous net" neurons controlled the rhythms and patterns of motion of robot bodies rather than the workings of their brains.

Tilden's idea was to maximize the number of possible patterns while using the fewest transistors feasible.

He learned that with just twelve transistors, he could create six different movement patterns.

Tilden could replicate hopping, leaping, running, sitting, crawling, and a variety of other behaviors by folding the six patterns into a figure eight in a symmetrical robot chassis.
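Tilden's nervous nets are often described as rings of simple delay elements around which an activation pulse circulates, with each stage driving a motor. The toy simulation below is only an illustrative model of that idea, not Tilden's actual circuit: a single pulse hopping around a six-stage ring produces a repeating motor-activation pattern.

```python
def ring_pattern(stages=6, steps=12, start=0):
    """Circulate one activation pulse around a ring of delay stages.

    Each returned list is one time step; a 1 marks the stage (motor) firing.
    """
    pattern = []
    active = start
    for _ in range(steps):
        frame = [0] * stages
        frame[active] = 1
        pattern.append(frame)
        active = (active + 1) % stages  # the pulse hops to the next stage
    return pattern

for frame in ring_pattern():
    print("".join("#" if v else "." for v in frame))
```

Changing where the pulse is injected, or running two pulses out of phase, yields different gait-like patterns from the same hardware, which is the intuition behind getting many behaviors from very few components.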

Since then, Tilden has been a proponent of a new set of robot principles for such survivalist wild automata.

Tilden's Laws of Robotics say that (1) a robot must safeguard its survival at all costs; (2) a robot must get and keep access to its own power source; and (3) a robot must always seek out better power sources.

Tilden thinks that wild robots will be used to rehabilitate ecosystems that have been harmed by humans.

Tilden had another breakthrough when he introduced very inexpensive robots as toys for the general public and robot aficionados.

He wanted his robots in the hands of as many people as possible, so that hackers, hobbyists, and members of maker communities could reprogram and modify them.

Tilden designed the toys in such a way that they could be dismantled and analyzed.

They could be hacked in simple ways.

Everything is color-coded and labeled, and all of the wires have gold-plated contacts that can be ripped apart.

Tilden is presently working with WowWee Toys in Hong Kong on consumer-oriented entertainment robots:

  • B.I.O. Bugs, Constructobots, G.I. Joe Hoverstrike, Robosapien, Roboraptor, Robopet, Roboreptile, Roboquad, Roboboa, Femisapien, and Joebot are all popular WowWee robot toys.
  • The Roboquad was designed for the Jet Propulsion Laboratory's (JPL) Mars exploration program.
  • Tilden is also the developer of the Roomscooper cleaning robot.


WowWee Toys sold almost three million of Tilden's robot designs by 2005.


Tilden made his first robotic doll when he was three years old.

At the age of six, he built a Meccano suit of armor for his cat.

At the University of Waterloo, he majored in Systems Engineering and Mathematics.


Tilden is presently working on OpenCog and OpenCog Prime alongside artificial intelligence pioneer Ben Goertzel.


OpenCog is a worldwide initiative supported by the Hong Kong government that aims to develop an open-source emergent artificial general intelligence framework as well as a common architecture for embodied robotic and virtual cognition.

Dozens of IT businesses across the globe are already using OpenCog components.

Tilden has worked on a variety of films and television series as a technical adviser or robot designer, including Lara Croft: Tomb Raider (2001), The 40-Year-Old Virgin (2005), Paul Blart Mall Cop (2009), and X-Men: The Last Stand (2006).

In The Big Bang Theory (2007–2019), his robots often appear on the bookshelves of Sheldon's apartment.








See also: 

Brooks, Rodney; Embodiment, AI and.


References And Further Reading

Frigo, Janette R., and Mark W. Tilden. 1995. “SATBOT I: Prototype of a Biomorphic Autonomous Spacecraft.” Mobile Robotics, 66–75.

Hapgood, Fred. 1994. “Chaotic Robots.” Wired, September 1, 1994. https://www.wired.com/1994/09/tilden/.

Hasslacher, Brosl, and Mark W. Tilden. 1995. “Living Machines.” Robotics and Autonomous Systems 15, no. 1–2: 143–69.

Marsh, Thomas. 2010. “The Evolution of a Roboticist: Mark Tilden.” Robot Magazine, December 7, 2010. http://www.botmag.com/the-evolution-of-a-roboticist-mark-tilden.

Menzel, Peter, and Faith D’Aluisio. 2000. “Biobots.” Discover Magazine, September 1, 2000. https://www.discovermagazine.com/technology/biobots.

Rietman, Edward A., Mark W. Tilden, and Manor Askenazi. 2003. “Analog Computation with Rings of Quasiperiodic Oscillators: The Microdynamics of Cognition in Living Machines.” Robotics and Autonomous Systems 45, no. 3–4: 249–63.

Samans, James. 2005. The Robosapiens Companion: Tips, Tricks, and Hacks. New York: Apress.



AI Glossary - What Is Artificial Intelligence Or AI?



Artificial Intelligence (AI) refers to the use of computers to perform tasks, such as decision-making, that would otherwise require human intelligence.

Artificial intelligence, in general, is the area concerned with creating strategies that enable computers to function in a way that seems intelligent, similar to how a person might.

The goals range from the rudimentary, where a program seems "a little wiser" than one would anticipate, to the more ambitious, where the goal is to create a fully aware, intelligent, computer-based being.

As software and hardware improves, the lower end is gradually fading into the background of ordinary computing.



AI Glossary - What Is ARPAbet?



ARPAbet encodes the English-language phoneme set in ASCII.

The Advanced Research Projects Agency (ARPA) created ARPABET (sometimes written ARPAbet) as part of their Speech Understanding Research initiative in the 1970s. 


It uses unique ASCII letter sequences to represent phonemes and allophones in General American English. 

Two notations were developed: one representing each segment with a single character (alternating upper- and lowercase letters), the other representing each segment with one or two case-insensitive characters.

The latter was significantly more commonly used. 


ARPABET has been used by Computalker for the S-100 system, SAM for the Commodore 64, SAY for the Amiga, TextAssist for the PC, and Speakeasy from Intelligent Artefacts, all of which utilized the Votrax SC-01 speech synthesizer IC.

The CMU Pronouncing Dictionary also uses it. 

In the TIMIT corpus, an updated version of ARPABET is employed.



The ARPABET: one of many possible short versions

Vowels               Consonants             Less Used Phones/Allophones
Symbol  Example      Symbol  Example        Symbol  Example
iy      beat         b       bad            dx      butter
ih      bit          p       pad            el      bottle
eh      bet          d       dad            em      bottom
ae      bat          t       toy            nx      (flapped) n
aa      cot          g       gag            en      button
ax      the          k       kick           eng     Washington
ah      butt         bcl     (b closure)    ux
uw      boot         pcl     (p closure)    el      bottle
uh      book         dcl     (d closure)    q       (glottal stop)
aw      about        tcl     (t closure)    ix      roses
er      bird         gcl     (g closure)    epi     (epinthetic closure)
axr     diner        kcl     (k closure)    sil     silence
ey      bait         dh      they           pau     silence
ay      bite         th      thief
oy      boy          v       very
ow      boat         f       fief
ao      bought       z       zoo
                     s       see
                     ch      church
                     m       mom
                     n       non
                     ng      sing
                     w       wet
                     y       yet
                     hh      hay
                     r       red
                     l       led
                     zh      measure
                     sh      shoe
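In software, ARPAbet transcriptions are handled as plain ASCII strings, as in the CMU Pronouncing Dictionary, which stores each word as space-separated (uppercase) ARPAbet symbols. The fragment below uses a hypothetical three-word mini-lexicon, not the real dictionary file, to show the conventional format:

```python
# A hypothetical mini-lexicon in the style of the CMU Pronouncing Dictionary:
# each word maps to space-separated, uppercase ARPAbet symbols.
LEXICON = {
    "beat":   "B IY T",
    "church": "CH ER CH",
    "shoe":   "SH UW",
}

def phonemes(word):
    """Return a word's ARPAbet transcription as a list of symbols."""
    return LEXICON[word.lower()].split()

print(phonemes("church"))  # -> ['CH', 'ER', 'CH']
```

Because the symbols are ordinary ASCII tokens, transcriptions can be split, compared, and indexed with standard string operations, which is precisely what made ARPAbet convenient for early speech systems.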




AI Glossary - What Is ARF?

 


Richard E. Fikes created ARF in the late 1960s as a general problem solver.

It used a combination of constraint-satisfaction and heuristic searches.
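ARF's own code is not reproduced here, but the general combination it used, backtracking search with constraint checking guided by a heuristic, can be sketched as follows (all names and the toy problem are illustrative):

```python
def solve(variables, domains, constraint, assignment=None):
    """Depth-first search with constraint checking at every step.

    variables: list of names; domains: name -> candidate values;
    constraint: function testing a (possibly partial) assignment.
    """
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = min((v for v in variables if v not in assignment),
              key=lambda v: len(domains[v]))  # heuristic: smallest domain first
    for value in domains[var]:
        trial = {**assignment, var: value}
        if constraint(trial):                 # prune inconsistent branches early
            result = solve(variables, domains, constraint, trial)
            if result:
                return result
    return None

# Toy problem: pick distinct values with x + y == z.
sol = solve(
    ["x", "y", "z"],
    {"x": [1, 2, 3], "y": [1, 2, 3], "z": [3, 4, 5]},
    lambda a: (len(set(a.values())) == len(a)             # all values distinct
               and (len(a) < 3 or a["x"] + a["y"] == a["z"])),
)
print(sol)  # -> {'x': 1, 'y': 2, 'z': 3}
```

Checking the constraint on partial assignments, rather than only on complete ones, is what lets this style of solver cut off fruitless branches early.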

Fikes also created REF, a problem-statement language for ARF. 


REF-ARF is a procedure-based approach to problem solving.


The paper by Fikes describes an attempt to create a heuristic problem-solving program that takes problems expressed in a nondeterministic programming language and finds solutions using constraint satisfaction and heuristic search.

The use of nondeterministic programming languages for stating problems is examined, as is REF, the language that the problem solver ARF accepts.

Various REF extensions are examined.

The program's basic framework is described in detail, along with several options for expanding it.

Sixteen example problems illustrate the use of the input language and the behavior of the program.



The paper discusses Ref2 and POPS, two heuristic problem-solving systems.

Both systems take problems expressed as nondeterministic programs and solve them using heuristic approaches to find successful program executions.

Ref2 is built on Richard Fikes' REF-ARF system and includes REF-ARF's problem-solving techniques as well as new methods based on a different representation of the problem context.

Ref2 can also handle integer programming problems. POPS is an updated and expanded version of Ref2 that incorporates goal-directed procedures based on GPS ideas.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram



AI Glossary - What Is Arcing?

 



Arcing methods are a broad category of Adaptive Resampling and Combining approaches for boosting machine learning and statistical techniques' performance.

AdaBoost and bagging are two prominent examples.

In general, these strategies repeatedly apply a learning technique, such as a decision tree, to a training set, then reweight or resample the data and refit the learner.

This results in a set of learning rules.

New observations are passed through all members of the collection, and the predictions or classifications are aggregated by averaging or a majority rule prediction to generate a combined result.
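The resample-refit-vote loop can be sketched with bagging, the simpler of the two techniques; the toy one-dimensional data and the decision-stump learner below are illustrative choices, not part of any particular library:

```python
import random

def fit_stump(data):
    """Fit a one-feature decision stump: predict `p` when x >= t, else 1 - p."""
    best = None
    for t in sorted({x for x, _ in data}):
        for p in (0, 1):
            err = sum((p if x >= t else 1 - p) != y for x, y in data)
            if best is None or err < best[0]:
                best = (err, t, p)
    _, t, p = best
    return lambda x: p if x >= t else 1 - p

def bagging(data, n_rounds=25, seed=0):
    """Adaptive resampling and combining: bootstrap, refit, majority-vote."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_rounds):
        sample = [rng.choice(data) for _ in data]   # resample with replacement
        stumps.append(fit_stump(sample))            # refit the weak learner
    def predict(x):
        votes = sum(s(x) for s in stumps)           # aggregate by majority rule
        return 1 if 2 * votes > len(stumps) else 0
    return predict

# Toy one-dimensional data: class 1 for x > 5.
data = [(x, int(x > 5)) for x in range(11)]
model = bagging(data)
print(model(2), model(9))
```

AdaBoost follows the same loop but reweights misclassified points between rounds instead of resampling uniformly, and combines the learners by a weighted rather than a simple majority vote.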

These strategies may produce results significantly more accurate than a single classifier, while being less interpretable.

Research suggests they can approach minimum (Bayes) risk classifiers.


See Also: 


ADABOOST, Bootstrap AGGregation



AI - Technological Singularity

 




The emergence of technologies that could fundamentally change humans' role in society, challenge human epistemic agency and ontological status, and trigger unprecedented and unforeseen developments in all aspects of life, whether biological, social, cultural, or technological, is referred to as the Technological Singularity.

The Technological Singularity is most often associated with artificial intelligence, particularly artificial general intelligence (AGI).

As a result, it is frequently depicted as an intelligence explosion driving advances in fields such as biotechnology, nanotechnology, and information technology, and spawning entirely new innovations.

The Technological Singularity is sometimes referred to simply as the Singularity; it should not be confused with a mathematical singularity, to which it bears only a passing resemblance.

This singularity, on the other hand, is a loosely defined term that may be interpreted in a variety of ways, each highlighting distinct elements of the technological advances.

The thoughts and writings of John von Neumann (1903–1957), Irving John Good (1916–2009), and Vernor Vinge (1944–) are commonly connected with the Technological Singularity notion, which dates back to the second half of the twentieth century.

Several universities, as well as governmental and corporate research institutes, have financed current Technological Singularity research in order to better understand the future of technology and society.

Though the topic of profound philosophical and technical debate, the Technological Singularity remains a hypothesis, a conjecture, a fairly open hypothetical idea.

While numerous scholars think the Technological Singularity is unavoidable, the date of its occurrence is continually pushed back.

Nonetheless, many studies agree that the issue is not whether the Technological Singularity will occur, but when and how it will occur.

Ray Kurzweil has proposed a more exact timeline, placing the emergence of the Technological Singularity around the middle of the twenty-first century.

Others have sought to give a date to this event, but there are no well-founded grounds in support of any such proposal.

Furthermore, without applicable measures or signs, mankind would have no way of knowing when the Technological Singularity has occurred.

The history of artificial intelligence's unmet promises exemplifies the dangers of attempting to predict the future of technology.

The themes of superintelligence, acceleration, and discontinuity are often used to describe the Technological Singularity.

The term "superintelligence" refers to a quantitative jump in artificial systems' cognitive abilities, putting them much beyond the capabilities of typical human cognition (as measured by standard IQ tests).

Superintelligence, on the other hand, may not be restricted to AI and computer technology.

Through genetic engineering, biological computing systems, or hybrid artificial–natural systems, it may manifest in human agents.

Superintelligence, according to some academics, has boundless intellectual capabilities.

Acceleration refers to the steepening of the curve along which key technological events arrive over time.

Stone tools, the pottery wheel, the steam engine, electricity, atomic power, computers, and the internet are all examples of technological advancement portrayed as a curve across time emphasizing the discovery of major innovations.

Moore's law, more precisely an observation that has come to be treated as a law, describes the growth in computing capacity.

It states that the number of transistors in a dense integrated circuit doubles roughly every two years.
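Stated as a formula, the observation says a transistor count N0 grows to roughly N0 · 2^(t/2) after t years. A quick illustrative projection shows what that compounding implies:

```python
def moore(n0, years, doubling_period=2.0):
    """Project transistor count: N(t) = n0 * 2 ** (t / doubling_period)."""
    return n0 * 2 ** (years / doubling_period)

# Starting from the 2,300 transistors of the Intel 4004 (1971),
# twenty years of doubling every two years predicts:
print(f"{moore(2300, 20):,.0f} transistors")  # -> 2,355,200 transistors
```

Ten doublings in twenty years means a thousandfold increase, which is the sense in which even ordinary exponential growth already outruns intuition; singularity arguments posit curves steeper still.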

In the event of the Technological Singularity, proponents expect key technical advances and new technological and scientific paradigms to arrive along a super-exponential curve.

One prediction regarding the Technological Singularity, for example, is that superintelligent systems would be able to self-improve (and self-replicate) in previously unimaginable ways at an unprecedented pace, pushing the technological development curve far beyond what has ever been witnessed.

The discontinuity of the Technological Singularity is referred to as an event horizon, an idea borrowed from the physics of black holes.

The analogy to this physical phenomenon, however, should be drawn with care; it does not confer the physical world's regularity and predictability on the Technological Singularity.

The limit of our knowledge about physical occurrences beyond a specific point in time is defined by an event horizon (also known as a prediction horizon).

It signifies that there is no way of knowing what will happen beyond the event horizon.

The discontinuity or event horizon in the context of technological singularity suggests that the technologies that precipitate technological singularity would cause disruptive changes in all areas of human life, developments about which experts cannot even conjecture.

The end of humanity and the end of human civilization are often associated with the Technological Singularity.

According to some research, social order will collapse, people will cease to be major actors, and epistemic agency and primacy will be lost.

Humans, it seems, will not be required by superintelligent systems.

These systems will be able to self-replicate, develop, and build their own living places, and humans will be seen as either barriers or unimportant, outdated things, similar to how humans now consider lesser species.

One such situation is represented by Nick Bostrom's Paperclip Maximizer.

AI is included as a possible danger to humanity's existence in the Global Catastrophic Risks Survey, with a reasonably high likelihood of human extinction, placing it on par with global pandemics, nuclear war, and global nanotech catastrophes.

However, the AI-related apocalyptic scenario is not a foregone conclusion of the Technological Singularity.

In other more utopian scenarios, technology singularity would usher in a new period of endless bliss by opening up new opportunities for humanity's infinite expansion.

Another element of technological singularity that requires serious consideration is how the arrival of superintelligence may imply the emergence of superethical capabilities in an all-knowing ethical agent.

Nobody knows, however, what superethical abilities might entail.

The fundamental problem, however, is that superintelligent entities' higher intellectual abilities do not ensure a high degree of ethical probity, or even any level of ethical probity.

As a result, a superintelligent machine with nearly (but not quite) infinite capacities and no ethics seems dangerous, to say the least.

A sizable number of scholars are skeptical about the development of the Technological Singularity, notably of superintelligence.

They rule out the possibility of developing artificial systems with superhuman cognitive abilities, either on philosophical or scientific grounds.

Some contend that while artificial intelligence is often at the heart of technological singularity claims, achieving human-level intelligence in artificial systems is impossible, and hence superintelligence, and thus the Technological Singularity, is a dream.

Such barriers, however, do not exclude the development of superhuman brains via the genetic modification of regular people, paving the door for transhumans, human-machine hybrids, and superhuman agents.

More scholars question the validity of the notion of the Technological Singularity, pointing out that such forecasts about future civilizations are based on speculation and guesswork.

Others argue that the promises of unrestrained technological advancement and limitless intellectual capacities made by the Technological Singularity legend are unfounded, since physical and informational processing resources are plainly limited in the cosmos, particularly on Earth.

Any promises of self-replicating, self-improving artificial agents capable of super-exponential technological advancement are false, since such systems will lack the creativity, will, and incentive to drive their own evolution.

Meanwhile, social opponents point out that superintelligence's boundless technological advancement would not alleviate issues like overpopulation, environmental degradation, poverty, and unparalleled inequality.

Indeed, the widespread unemployment projected as a consequence of AI-assisted mass automation of labor, barring significant segments of the population from contributing to society, would result in unparalleled social upheaval, delaying the development of new technologies.

As a result, rather than speeding up, political or societal pressures will stifle technological advancement.

While technological singularity cannot be ruled out on logical grounds, the technical hurdles that it faces, even if limited to those that can presently be determined, are considerable.

Nobody expects the technological singularity to happen with today's computers and other technology, but proponents of the concept consider these obstacles as "technical challenges to be overcome" rather than possible show-stoppers.

However, there is a large list of technological issues to be overcome, and Murray Shanahan's The Technological Singularity (2015) gives a fair overview of some of them.

There are also some significant nontechnical issues, such as the problem of superintelligent system training, the ontology of artificial or machine consciousness and self-aware artificial systems, the embodiment of artificial minds or vicarious embodiment processes, and the rights granted to superintelligent systems, as well as their role in society and any limitations placed on their actions, if this is even possible.

These issues are currently confined to the realms of technological and philosophical discussion.







See also: 


Bostrom, Nick; de Garis, Hugo; Diamandis, Peter; Digital Immortality; Goertzel, Ben; Kurzweil, Ray; Moravec, Hans; Post-Scarcity, AI and; Superintelligence.


References And Further Reading


Bostrom, Nick. 2014. Superintelligence: Path, Dangers, Strategies. Oxford, UK: Oxford University Press.

Chalmers, David. 2010. “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17: 7–65.

Eden, Amnon H. 2016. The Singularity Controversy. Sapience Project. Technical Report STR 2016-1. January 2016.

Eden, Amnon H., Eric Steinhart, David Pearce, and James H. Moor. 2012. “Singularity Hypotheses: An Overview.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart, 1–12. Heidelberg, Germany: Springer.

Good, I. J. 1966. “Speculations Concerning the First Ultraintelligent Machine.” Advances in Computers 6: 31–88.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Sandberg, Anders, and Nick Bostrom. 2008. Global Catastrophic Risks Survey. Technical Report #2008/1. Oxford University, Future of Humanity Institute.

Shanahan, Murray. 2015. The Technological Singularity. Cambridge, MA: The MIT Press.

Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Mathematical Society 64, no. 3, pt. 2 (May): 1–49.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.


AI - Symbolic Logic

 





In mathematical and philosophical reasoning, symbolic logic entails the use of symbols to express concepts, relations, and positions.

Symbolic logic varies from (Aristotelian) syllogistic logic in that it employs ideographs or a particular notation to "symbolize exactly the item discussed" (Newman 1956, 1852), and it may be modified according to precise rules.

Traditional logic investigated the truth and falsehood of assertions, as well as their relationships, using terminology derived from natural language.

Unlike nouns and verbs, symbols do not need interpretation.

Because symbol operations are mechanical, they may be delegated to computers.

Symbolic logic eliminates any ambiguity in logical analysis by codifying it entirely inside a defined notational framework.

Gottfried Wilhelm Leibniz (1646–1716) is widely regarded as the founding father of symbolic logic.

Leibniz proposed the use of ideographic symbols instead of natural language in the seventeenth century as part of his goal to revolutionize scientific thinking.

Leibniz hoped that by combining such concise universal symbols (characteristica universalis) with a set of scientific reasoning rules, he could create an alphabet of human thought that would promote the growth and dissemination of scientific knowledge, as well as a corpus containing all human knowledge.

Symbolic logic may be broken down into subtopics: Boolean logic, the logical underpinnings of mathematics, and decision problems.

George Boole, Alfred North Whitehead and Bertrand Russell, and Kurt Gödel made important contributions to each of these fields.

In the mid-nineteenth century, George Boole published The Mathematical Analysis of Logic (1847) and An Investigation of the Laws of Thought (1854).




Boole homed in on a calculus of deductive reasoning, which led him to three essential operations of a logical mathematical language now known as Boolean algebra: AND, OR, and NOT.

The use of symbols and operators greatly aided the creation of logical formulations.

Claude Shannon (1916–2001) employed electromechanical relay circuits and switches to reproduce Boolean algebra in the twentieth century, laying crucial foundations in the development of electronic digital computing and computer science in general.
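Boole's three operations survive as primitives in virtually every programming language, which makes the identities of his algebra mechanically checkable. For example, De Morgan's law can be verified by exhaustive truth table, exactly the kind of rule-governed symbol manipulation that symbolic logic makes possible:

```python
from itertools import product

# Check De Morgan's law, NOT (a AND b) == (NOT a) OR (NOT b),
# by enumerating every truth assignment of a and b.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
print("De Morgan's law holds for all truth assignments.")
```

Because the check is purely mechanical, it can be delegated to a machine, which is the point of the paragraph above: once logic is fully symbolic, a computer can carry it out.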

Alfred North Whitehead and Bertrand Russell established their seminal work in the subject of symbolic logic in the early twentieth century.

Their Principia Mathematica (1910, 1912, 1913) demonstrated how all of mathematics may be reduced to symbolic logic.

In the first volume of their work, Whitehead and Russell developed a logical system from a handful of logical concepts and a set of postulates derived from those ideas.

In the second volume of the Principia, Whitehead and Russell established all mathematical concepts, including number, zero, successor of, addition, and multiplication, using fundamental logical terminology and operational principles such as proposition, negation, and either-or.



In the third and final volume, Whitehead and Russell demonstrated that the nature and reality of all mathematics is built on logical concepts and relations.

The Principia showed how every mathematical postulate might be inferred from previously explained symbolic logical facts.

Only a few decades later, Kurt Gödel's On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931) critically examined the Principia's strong and deep claims, demonstrating that Whitehead and Russell's axiomatic system could not be both consistent and complete.

Even so, it required another important book in symbolic logic, Ernst Nagel and James Newman's Gödel's Proof (1958), to spread Gödel's message to a larger audience, including some artificial intelligence practitioners.

Each of these seminal works in symbolic logic had a different influence on the development of computing and programming, as well as our understanding of a computer's capabilities as a result.

Boolean logic has made its way into the design of logic circuits.

The Logic Theorist program by Simon and Newell provided logical arguments that matched those found in the Principia Mathematica, and was therefore seen as evidence that a computer could be programmed to do intelligent tasks via symbol manipulation.

Gödel's incompleteness theorem raises intriguing issues regarding how programmed machine intelligence, particularly strong AI, will be realized in the end.






See also: 

Symbol Manipulation.



References And Further Reading


Boole, George. 1854. Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities. London: Walton.

Lewis, Clarence Irving. 1932. Symbolic Logic. New York: The Century Co.

Nagel, Ernst, and James R. Newman. 1958. Gödel’s Proof. New York: New York University Press.

Newman, James R., ed. 1956. The World of Mathematics, vol. 3. New York: Simon and Schuster.

Whitehead, Alfred N., and Bertrand Russell. 1910–1913. Principia Mathematica. Cambridge, UK: Cambridge University Press.



What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...