Space, Exploratory Behavior And Genetics




So much for any psychological support for assertions regarding space as a universal or distinctive object of human curiosity. 



What about biology and anthropology? 



Isn't it true that numerous migrations, from the out-of-Africa exodus to the settlement of the American West, have altered our genetic heritage? 

Shouldn't this long history of migration after migration have produced humans with a proclivity for exploration and movement? 


Intriguingly, genes linked to migratory behavior have been identified. 

Various polymorphisms of the dopamine D4 receptor (DRD4) gene, in particular, have been linked to the novelty-seeking (NS) phenotype: a heritable tendency to respond strongly to novelty and to cues for reward or relief from punishment, leading to exploratory activity in search of rewards as well as avoidance of monotony and punishment (Roussos, Giakoumaki, and Bitsios 2009, 1655). 

Novelty seeking, so understood, encompasses the activities prompted by the many types of curiosity previously outlined. 


Individuals with the NS phenotype may engage in a variety of activities, including migratory activity and more "local" kinds of exploration, such as examining local resources. 



The link between DRD4 and the NS phenotype has yet to be clearly established. 


  • Some studies and meta-analyses have found a link between specific DRD4 polymorphisms and novelty seeking, including Laucht, Becker, and Schmidt (2006), Munafò et al. (2008), and Roussos, Giakoumaki, and Bitsios (2009). 
  • Other studies and meta-analyses, such as those by Schinka, Letsch, and Crawford (2002) and Kluger, Siegfried, and Ebstein (2002), have found no link. 



The impacts of the environment on the determination of the novelty-seeking phenotype are largely unknown, as is the case with many genotype-phenotype connections. 


Both sex (Laucht, Becker, and Schmidt 2006) and socioeconomic factors (Lahti et al. 2006) have been suggested as possible modifiers of novelty seeking. 


Similarly, additional genes are likely to influence novelty seeking in substantial but unknown ways. 


  • A priori, if there is a positive association between DRD4 and novelty seeking, as some of these findings suggest, then it would not be unreasonable to expect a positive correlation between the proportion of individuals with the relevant DRD4 polymorphisms in a population and that population's distance from East Africa. 
  • As Roussos, Giakoumaki, and Bitsios point out, traits associated with novelty seeking, such as "efficient problem solving," "under-reactivity to unconditioned aversive stimuli," and "low emotional reactivity in the face of preserved attentional processing of emotional stimuli," may have been advantageous during migration periods (Roussos, Giakoumaki, and Bitsios 2009, 1658). 

Other research has supported this claim. 


There is "a very high connection between the number of long alleles of the DRD4 gene in a population and its prehistorical macro-migration histories," according to Chen et al. (1999, 317). (It's worth noting that 7R is the most prevalent DRD4 long allele.) 


What is the source of this link? 


Chen et al. propose two explanations. 


  • One is what I call the "wanderlust" theory, which holds that DRD4-related traits encouraged people to migrate. 
  • The second is what I term the "selection" hypothesis, according to which DRD4-related traits were selected for after migration. 


The wanderlust theory, according to Chen et al., has "limited evidence": immigrants have nearly the same rate of long DRD4 alleles as their respective reference groups in their countries of origin. 


  • These findings suggest that migratory populations' greater rate of long alleles may have resulted from adaptation to the unique demands of migration. 
  • To put it another way, Chen et al.'s results suggest that the 7R variant of DRD4 (along with other long alleles) was selected for as a consequence of migration, for essentially the same reasons given by Roussos, Giakoumaki, and Bitsios: prior studies have associated long alleles (e.g., 7-repeats) of the DRD4 gene with novelty-seeking personality, hyperactivity, and risk-taking behaviors.


The inquisitive side of human nature seems to be the common thread running through all of these behaviors. 


  • It is reasonable to argue that exploratory behaviors are adaptive in migratory societies because they allow for more successful exploitation of resources in the particular environment migration entails, an environment that is typically harsh, constantly changing, and always presenting a plethora of novel stimuli and ongoing survival challenges (Chen et al. 1999, 320). 
  • Subsequent research has supported Chen et al.'s preference for the selection hypothesis over the wanderlust theory. 



There is a substantial amount of evidence indicating that traits associated with novelty-seeking DRD4 alleles have adaptive relevance for people living in migratory communities. 


This does not bode well for efforts to legitimize the exploration or settlement of space on the basis of supposedly intrinsic exploratory or migratory inclinations. 

Novelty-seeking behaviors are not the only candidate explanations for why NS alleles of DRD4 were adaptive post-migration. Ciani, Edelman, and Ebstein note that "the DRD4 polymorphism seems also associated with very different factors, such as nutrition, starvation resistance and the body mass index" and that "it is possible that these factors alone might have conferred an advantage of selected alleles, such as 7R, on nomadic individuals compared with sedentary ones" (2013, 595).


  • Even if people are genetically or psychologically predisposed to exploration or migration, this has minimal bearing on space exploration and migration in particular. 
  • We may all be curious and engage in exploratory activity, but we each do so in our own way. 
  • We are not all enthralled by the same things, and we do not all explore for the same reasons or in the same manner. 


Importantly, the desire to travel to or settle unknown regions of space is not a universal aspect of human psychology or biology. 


  • Though some of us may carry one of the DRD4 variants linked to ancient migrations, there is more evidence that these variants were selected for after migration than before it (because they were likely adaptive for migrants). 
  • And perhaps also maladaptive for individuals in societies that do not provide outlets for novelty seeking, which has been proposed as an explanation for ADHD, substance abuse, and compulsive gambling in modern sedentary societies; see the references in Roussos, Giakoumaki, and Bitsios (2009).
  • This isn't conclusive evidence that DRD4 or another gene (or group of genes) was not a driving force behind migration, but there's clearly a lack of compelling evidence that it was. 


As a result, we can't use the presence of particular DRD4 polymorphisms in certain people as proof that the urge to explore and colonize space is in our genes. 



While it is conceivable that future research may identify a significant genetic predictor of traits such as curiosity about space or a desire or impulse to explore space, there is currently no evidence that these traits have a distinct genetic foundation. 

As a result, any rationale for space travel that presupposes otherwise should be rejected at this time.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Space Exploration, Space Missions and Systems here.



References and Further Reading:



Chen, Chuansheng, et al. 1999. Population Migration and the Variation of Dopamine D4 Receptor (DRD4) Allele Frequencies Around the Globe. Evolution & Human Behavior 20: 309–324.

Ciani, Andrea, Shany Edelman, and Richard Ebstein. 2013. The Dopamine D4 Receptor (DRD4) Exon 3 VNTR Contributes to Adaptive Personality Differences in an Italian Small Island Population. European Journal of Personality 27: 593–604.

Laucht, Manfred, Katja Becker, and Martin Schmidt. 2006. Visual Exploratory Behavior in Infancy and Novelty Seeking in Adolescence: Two Developmentally Specific Phenotypes of DRD4? Journal of Child Psychology and Psychiatry 47: 1143–1151.

Roussos, Panos, Stella Giakoumaki, and Panos Bitsios. 2009. Cognitive and Emotional Processing in High Novelty Seeking Associated with the L-DRD4 Genotype. Neuropsychologia 47: 1654–1659.

Schinka, J. A., E. A. Letsch, and F. C. Crawford. 2002. DRD4 and Novelty Seeking: Results of Meta-Analyses. American Journal of Medical Genetics 114: 643–648.

Wang, Eric, et al. 2004. The Genetic Architecture of Selection at the Human Dopamine Receptor D4 (DRD4) Gene Locus. American Journal of Human Genetics 74: 931–944.







Artificial Intelligence - Who Was Herbert A. Simon?

 


Herbert A. Simon (1916–2001) was a multidisciplinary scholar who contributed significantly to artificial intelligence.


He is widely regarded as one of the twentieth century's most prominent social scientists.

His career at Carnegie Mellon University spanned five decades.

Early artificial intelligence research was driven by the idea of the computer as a symbol manipulator rather than a number cruncher.

Emil Post, who first wrote about this sort of computational model in 1943, is credited with inventing production systems: sets of rules for symbol strings that specify the conditions that must hold before a rule can be applied and the actions to be taken or conclusions to be drawn when it is.

Simon and his Carnegie Mellon colleague Allen Newell popularized these ideas about symbol manipulation and production systems, praising their potential as general-purpose mechanisms for reading, storing, and copying symbols and for comparing and contrasting symbols and patterns.


Simon, Newell, and Cliff Shaw's Logic Theorist software was the first to employ symbol manipulation to construct "intelligent" behavior.


The Logic Theorist could independently prove theorems presented in Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910).

Perhaps most notably, the Logic Theorist uncovered a shorter, more elegant proof of Theorem 2.85 of the Principia Mathematica; the paper reporting it, however, was rejected by the Journal of Symbolic Logic because it was coauthored by a machine.

Although it was theoretically possible to prove the Principia Mathematica's theorems in an exhaustively detailed and methodical manner, doing so was impractical in reality because of the time required.

Newell and Simon were fascinated by the rules of thumb humans use to solve difficult problems for which an exhaustive search for answers is impossible because of the massive amount of computation required.

They used the term "heuristics" to describe procedures that may solve problems but do not guarantee success.


A heuristic is a "rule of thumb" used to solve a problem that is too difficult or time consuming to address using an exhaustive search, a formula, or a step-by-step method.


In computer science, heuristic approaches are often contrasted with algorithmic methods, the key difference being what the method guarantees about its result.

On this contrast, a heuristic program will produce good results in most cases, but not always, whereas an algorithmic program is an explicit procedure that guarantees a solution.

This is not, however, a strict technical distinction.

In fact, a heuristic procedure that consistently yields the best result may no longer be deemed "heuristic"—alpha-beta pruning is an example of this.
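
To make the contrast concrete, here is a minimal sketch in Python, using a small, invented packing problem rather than chess: the exhaustive routine is algorithmic (it checks every subset and is guaranteed to find the optimum), while the greedy routine is heuristic (it is much faster but, on these particular made-up items, settles for a worse answer).

    from itertools import combinations

    # Invented example items as (value, weight) pairs, with a knapsack capacity of 10.
    ITEMS = [(9, 6), (6, 5), (6, 5)]
    CAPACITY = 10

    def exhaustive_best(items, capacity):
        """Algorithmic: examine every subset; guaranteed optimal, but exponential time."""
        best = 0
        for r in range(len(items) + 1):
            for subset in combinations(items, r):
                if sum(w for _, w in subset) <= capacity:
                    best = max(best, sum(v for v, _ in subset))
        return best

    def greedy_heuristic(items, capacity):
        """Heuristic: take items by value-per-weight; fast and usually good, never guaranteed."""
        total, remaining = 0, capacity
        for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
            if weight <= remaining:
                total += value
                remaining -= weight
        return total

    print(exhaustive_best(ITEMS, CAPACITY))   # 12: the true optimum (both smaller items)
    print(greedy_heuristic(ITEMS, CAPACITY))  # 9: the rule of thumb grabs the "best" item and gets stuck

Alpha-beta pruning sits exactly on the boundary described above: it is stated as a pruning rule of thumb, yet it provably returns the same move as an exhaustive minimax search.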

Simon's heuristics are still used by programmers trying to solve problems that demand a great deal of time and/or memory.

The game of chess is one such example: an exhaustive search of all possible board configurations for the correct move is beyond the capabilities of the human mind or of any computer.


Indeed, for artificial intelligence research, Herbert Simon and Allen Newell referred to computer chess as the Drosophila or fruit fly.


Heuristics may also be used for problems that have no single precise answer, such as medical diagnosis, where heuristics are applied to a collection of symptoms to determine the most probable diagnosis.

Production rules are derived from a class of cognitive science models that apply heuristic principles to productions (situations).

In practice, these rules reduce to "IF-THEN" statements that specify particular preconditions or antecedents, along with the conclusions or consequences that those preconditions or antecedents justify.

"IF there are two X's in a row, THEN put an O to block," is a frequent example offered for the application of production rules to the tic-tac-toe game.

These IF-THEN statements are incorporated into the inference mechanisms of expert systems so that a rule interpreter can apply production rules to a specific situation, represented in the context data structure or short-term working-memory buffer that holds the information supplied about that situation, and then draw conclusions or make recommendations.
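
A toy illustration of this machinery (a hypothetical Python sketch, not code from MYCIN, DENDRAL, or any real expert system) is given below: working memory holds a tic-tac-toe board, each production pairs an IF condition with a THEN action, and a simple interpreter fires the first rule whose condition matches.

    # A minimal production-rule interpreter, for illustration only.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
             (0, 4, 8), (2, 4, 6)]                 # diagonals

    def two_x_in_a_row(board):
        """IF there are two X's in a line with an empty square, return that square."""
        for a, b, c in LINES:
            cells = [board[a], board[b], board[c]]
            if cells.count("X") == 2 and cells.count(" ") == 1:
                return (a, b, c)[cells.index(" ")]
        return None

    def center_free(board):
        """IF the center square is free, return it."""
        return 4 if board[4] == " " else None

    def place_o(board, square):
        """THEN put an O on the matched square."""
        board[square] = "O"

    RULES = [(two_x_in_a_row, place_o),    # blocking rule, tried first
             (center_free, place_o)]       # fallback rule

    def run_interpreter(board):
        """Fire the first production whose condition matches the working memory."""
        for condition, action in RULES:
            match = condition(board)
            if match is not None:
                action(board, match)
                return

    board = ["X", " ", " ",
             " ", " ", " ",
             "X", " ", " "]                # X threatens the left column
    run_interpreter(board)                 # the blocking rule fires: O is placed at index 3
    print(board)

Real expert systems differ mainly in scale and in how conflicts among simultaneously matching rules are resolved, but the IF-THEN skeleton is the same.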


Production rules were crucial in the development of artificial intelligence as a discipline.


Joshua Lederberg, Edward Feigenbaum, and other Stanford University colleagues would later build on this fundamental insight to develop DENDRAL, an expert system for inferring molecular structure, in the 1960s.

DENDRAL's production rules were developed through discussions between the system's developers and mass spectrometry specialists.

Edward Shortliffe, Bruce Buchanan, and Edward Feigenbaum used production rules to create MYCIN in the 1970s.

MYCIN contained over 600 IF-THEN statements, each encoding domain-specific knowledge about the diagnosis and treatment of bacterial infections.

PUFF, EXPERT, PROSPECTOR, R1, and CLAVIER were among the several production rule systems that followed.


Simon, Newell, and Shaw demonstrated how heuristics can overcome the drawbacks of classical algorithms, which promise solutions but require exhaustive searches or heavy computation to find them.


An algorithm is a procedure for solving a problem in a finite, clearly defined sequence of steps.

Sequential operations, conditional operations, and iterative operations are the three kinds of fundamental instructions required to create computable algorithms.

Sequential operations perform tasks in a step-by-step manner.

The algorithm only moves on to the next job when each step is completed.

Conditional operations are made up of instructions that ask questions and then choose the next step dependent on the response.

One kind of conditional operation is the "IF-THEN" expression.

Iterative operations run "loops" of instructions.

These statements send the flow of control back to repeat a previous series of statements in order to solve the problem.

Algorithms are often compared to cookbook recipes, in which a certain order and execution of actions in the manufacture of a product—in this example, food—are dictated by a specific sequence of set instructions.
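
The following short sketch (the task of summing the even numbers in a list is just an invented example) shows all three kinds of instruction working together in one small algorithm.

    def sum_of_evens(numbers):
        """Add up the even numbers in a list."""
        total = 0                     # sequential operation: performed once, in order
        for n in numbers:             # iterative operation: a loop over the input
            if n % 2 == 0:            # conditional operation: IF the number is even...
                total = total + n     # ...THEN add it to the running total
        return total                  # sequential operation: executed after the loop finishes

    print(sum_of_evens([3, 4, 7, 10, 11]))   # prints 14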


Newell, Shaw, and Simon created list processing for the Logic Theorist software in 1956.


List processing is a programming technique for allocating dynamic storage.

It is mostly used in symbol-manipulation applications such as compiler development, visual or linguistic data processing, and artificial intelligence.

Allen Newell, J. Clifford Shaw, and Herbert A. Simon are credited with creating the first list-processing software, whose large, sophisticated, and flexible memory structures did not depend on consecutive blocks of machine memory.

List processing techniques are used in a number of higher-order languages.

IPL and LISP, two artificial intelligence languages, are the most well-known.
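
The idea behind list processing can be sketched in a few lines (here in Python for illustration, mimicking LISP-style cons cells rather than showing actual IPL or LISP code): each cell pairs a value with a reference to the rest of the list, so lists are built one cell at a time at run time instead of being laid out as fixed-size arrays.

    # Illustrative cons-cell lists in the spirit of IPL/LISP, written in Python.
    def cons(head, tail):
        """Allocate one new cell linking a value to the rest of the list."""
        return (head, tail)

    def car(cell):
        return cell[0]       # the value stored in the cell

    def cdr(cell):
        return cell[1]       # the rest of the list

    def make_list(*values):
        """Build a linked list ending in None (the empty list)."""
        result = None
        for v in reversed(values):
            result = cons(v, result)
        return result

    def length(cell):
        """Walk the chain of cells, counting them."""
        n = 0
        while cell is not None:
            n, cell = n + 1, cdr(cell)
        return n

    expression = make_list("P", "IMPLIES", "Q")   # a small symbol structure
    print(car(expression))                        # 'P'
    print(car(cdr(expression)))                   # 'IMPLIES'
    print(length(expression))                     # 3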


Simon and Newell's General Problem Solver (GPS), published in the early 1960s, thoroughly describes the essential properties of symbol manipulation as a general process underlying all types of intelligent problem-solving behavior.


GPS formed the foundation for decades of early AI research.

The General Problem Solver is a program that employs means-ends analysis and planning to arrive at a solution.

GPS was created with the goal of separating the problem-solving process from knowledge particular to the problem at hand, allowing it to be applied to a wide range of problems.
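
A highly simplified sketch of means-ends analysis appears below (illustrative Python with invented facts and operators, not Newell and Simon's actual program): the solver repeatedly picks a difference between the current state and the goal, finds an operator whose known effect reduces it, and subgoals on that operator's preconditions when they are not yet satisfied.

    # Toy means-ends analysis in the spirit of GPS; states are sets of facts.
    OPERATORS = {
        "pack-bag":    {"requires": set(),           "adds": {"bag-packed"}},
        "buy-ticket":  {"requires": set(),           "adds": {"have-ticket"}},
        "board-train": {"requires": {"have-ticket"}, "adds": {"on-train"}},
    }

    def means_ends(state, goal):
        plan = []
        pending = list(goal - state)                  # differences still to be reduced
        while pending:
            diff = pending.pop()
            if diff in state:
                continue                              # this difference has already been removed
            # find an operator whose known effect reduces this difference
            candidates = [(n, op) for n, op in OPERATORS.items() if diff in op["adds"]]
            if not candidates:
                raise RuntimeError(f"no operator reduces the difference {diff!r}")
            name, op = candidates[0]
            missing = op["requires"] - state
            if missing:
                pending.append(diff)                  # retry this difference later...
                pending.extend(missing)               # ...after its preconditions are achieved
            else:
                state = state | op["adds"]
                plan.append(name)
        return plan

    print(means_ends(set(), {"bag-packed", "on-train"}))
    # a valid plan such as ['pack-bag', 'buy-ticket', 'board-train'] (order of independent steps may vary)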

Simon was also an economist, a political scientist, and a cognitive psychologist.


In addition to his important contributions to organizational theory, decision-making, and problem-solving, Simon is known for the notions of bounded rationality, satisficing, and power-law distributions in complex systems.


Computer and data scientists are interested in all three themes.

Human reasoning is inherently constrained, according to bounded rationality.

Humans lack the time or knowledge required to make ideal judgments; problems are difficult, and the mind has cognitive limitations.

Satisficing describes a decision-making process that produces a solution that "satisfies" and "suffices," rather than the optimal answer.

Customers use satisficing in market conditions when they choose things that are "good enough," meaning sufficient or acceptable.
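
The difference between optimizing and satisficing can be sketched with a few invented apartment listings and an arbitrary aspiration level: the optimizer scores every option before choosing, while the satisficer stops at the first option that is good enough.

    # Hypothetical listings as (monthly_rent, commute_minutes) pairs.
    LISTINGS = [(1500, 55), (1200, 40), (950, 30), (1100, 20), (900, 45)]

    def good_enough(rent, commute):
        """An aspiration level: affordable rent and a tolerable commute."""
        return rent <= 1000 and commute <= 45

    def satisfice(options):
        """Examine options in order and take the first one that satisfies and suffices."""
        for rent, commute in options:
            if good_enough(rent, commute):
                return (rent, commute)
        return None

    def optimize(options):
        """Score every option and return the best under an (arbitrary) weighted cost."""
        return min(options, key=lambda o: o[0] + 20 * o[1])

    print(satisfice(LISTINGS))   # (950, 30): the search stops here
    print(optimize(LISTINGS))    # (1100, 20): the global best under this scoring rule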


In his work on complex organizations, Simon described how power-law distributions arise from preferential attachment mechanisms.


Power laws, also known as scaling laws, describe relationships in which a relative change in one variable produces a proportional relative change in another.

A square is a simple illustration; when the length of a side doubles, the square's area quadruples.

Power laws may be found in biological systems, fractal patterns, and wealth distributions, among other things.

Preferential attachment processes explain why the affluent grow wealthier in income and wealth distributions: new wealth is distributed according to individuals' current wealth, so those with more wealth receive proportionately more income, and hence greater overall wealth, than those with less.

When graphed, such distributions often create so-called long tails.

These long-tailed distributions are being employed to explain crowdsourcing, microfinance, and online marketing, among other things.
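
A short simulation makes the mechanism concrete (the population size, number of rounds, and random seed below are arbitrary): each new unit of wealth is awarded with probability proportional to current wealth, and the resulting distribution acquires the long tail described above.

    import random

    random.seed(0)
    people = [1.0] * 200                  # everyone starts with one unit of wealth

    for _ in range(5000):
        # award one new unit with probability proportional to current wealth
        winner = random.choices(range(len(people)), weights=people)[0]
        people[winner] += 1.0

    people.sort(reverse=True)
    top_decile_share = sum(people[:20]) / sum(people)
    print(f"share of total wealth held by the richest 10%: {top_decile_share:.0%}")
    # Under this "rich get richer" rule the top decile ends up with far more than 10%
    # of the total, and the sorted wealths trace out a long-tailed curve.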



Simon was born in Milwaukee, Wisconsin, to a Jewish electrical engineer with multiple patents who had emigrated from Germany in the early twentieth century.


His mother was a musical prodigy. Simon grew interested in the social sciences after reading books on psychology and economics written by an uncle.

He has said that two works inspired his early thinking on the subjects: Norman Angell's The Great Illusion (1909) and Henry George's Progress and Poverty (1879).



Simon obtained his doctorate in organizational decision-making from the University of Chicago in 1943.

Rudolf Carnap, Harold Lasswell, Charles Merriam, Nicolas Rashevsky, and Henry Schultz were among his teachers.

He started his career as a political science professor at the Illinois Institute of Technology, where he taught and conducted research.

In 1949, he moved to Carnegie Mellon University (then the Carnegie Institute of Technology), where he remained until his death in 2001.

He progressed through the ranks of the Department of Industrial Management to become its chair.

He wrote twenty-seven books and numerous published articles.

In 1959, he was elected a member of the American Academy of Arts and Sciences.

In 1975, Simon was awarded the coveted Turing Award, and in 1978, he was awarded the Nobel Prize in Economics.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; Expert Systems; General Problem Solver; Newell, Allen.


References & Further Reading:


Crowther-Heyck, Hunter. 2005. Herbert A. Simon: The Bounds of Reason in Modern America. Baltimore: Johns Hopkins Press.

Newell, Allen, and Herbert A. Simon. 1956. The Logic Theory Machine: A Complex Information Processing System. Santa Monica, CA: The RAND Corporation.

Newell, Allen, and Herbert A. Simon. 1976. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19, no. 3: 113–26.

Simon, Herbert A. 1996. Models of My Life. Cambridge, MA: MIT Press.



AI Terms Glossary - Active Learning

 



Active Learning is a strategy for improving the accuracy of machine learning algorithms by enabling them to choose which regions of the input space to query.

The algorithm may choose a new point x at any time, observe the outcome y, and add the new (x, y) pair to its training base.

Neural networks, prediction functions, and clustering functions have all benefited from it.
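
A minimal sketch of this query loop is shown below, using scikit-learn, a synthetic data set, and uncertainty sampling as the selection rule (one common choice; the glossary entry itself does not prescribe how points are chosen). The pool sizes and query budget are arbitrary.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic data: a small labeled set and a larger unlabeled pool.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    labeled = list(rng.choice(len(X), size=20, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]

    model = LogisticRegression(max_iter=1000)

    for _ in range(20):                               # an arbitrary query budget
        model.fit(X[labeled], y[labeled])
        # Uncertainty sampling: choose the pool point whose predicted class is least certain.
        probs = model.predict_proba(X[pool])
        chosen = pool[int(np.argmin(np.abs(probs[:, 1] - 0.5)))]
        # "Examine the outcome" (the stored label stands in for querying an oracle)
        # and add the new (x, y) pair to the training base.
        labeled.append(chosen)
        pool.remove(chosen)

    print(f"training set grew from 20 to {len(labeled)} actively chosen examples")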




~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.




AI Terms Glossary - Activation Functions

 


Much of the strength of neural networks comes from their use of non-linear activation functions, in contrast to the purely linear functions of traditional regression models.

In most neural networks, the inputs to a node are weighted and then summed.

After that, a non-linear activation function is applied to the total.

Although output nodes should have activation functions suited to the distribution of the output variables, these functions are often sigmoidal (monotone increasing), such as the logistic function, though Gaussian functions are also used.

In statistical generalized linear models, activation functions are closely connected to link functions and have been extensively researched in that context.
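
As a small numerical sketch (the weights, bias, and inputs below are made up), a node's output is simply the logistic function applied to the weighted sum of its inputs:

    import numpy as np

    def logistic(z):
        """A sigmoidal (monotone increasing) activation function."""
        return 1.0 / (1.0 + np.exp(-z))

    inputs = np.array([0.5, -1.2, 2.0])     # hypothetical inputs to one node
    weights = np.array([0.8, 0.1, -0.4])    # hypothetical connection weights
    bias = 0.3

    weighted_sum = np.dot(weights, inputs) + bias   # the inputs are weighted and summed...
    output = logistic(weighted_sum)                 # ...then passed through the non-linearity

    print(weighted_sum)   # -0.22
    print(output)         # about 0.445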


See Also: 

Softmax.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.




AI Terms Glossary - ACORN

 


ACORN was a hybrid rule-based and Bayesian system for advising emergency department physicians on the treatment of patients with chest pain.

It was created and put into use in the mid-1980s.






AI Terms Glossary - ACE

 


ACE (Alternating Conditional Expectations) is a regression-based approach for estimating additive models with smoothed transformations of the response and predictor attributes.

The transformations it discovers are valuable both for understanding the nature of the problem at hand and for prediction.


See Also: 


Additive models, Additivity And Variance Stabilization.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.




AI Terms Glossary - Accuracy

 


A machine learning system's accuracy is defined as the proportion of accurate predictions or classifications the model makes over a given data set.

It's usually calculated using a different sample from the one(s) used to build the model, called a test or "hold out" sample.

The error rate, on the other hand, is the percentage of inaccurate predictions on the same data.
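
A short sketch of the usual calculation (synthetic data, an arbitrary 30% hold-out, and scikit-learn as one convenient toolkit) is given below: fit on the training portion, then report the proportion of correct predictions on the hold-out sample.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Hold out 30% of the data; the model never sees it while being fit.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))   # proportion of correct predictions
    error_rate = 1.0 - accuracy                                # proportion of incorrect predictions

    print(f"hold-out accuracy:   {accuracy:.3f}")
    print(f"hold-out error rate: {error_rate:.3f}")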


See Also: 

Hold out sample, Machine Learning.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.





AI Terms Glossary - ABEL.

 



ABEL is a modeling language that supports assumption-based reasoning.

It is written in Macintosh Common Lisp and is currently accessible on the World Wide Web (WWW).

An assumption-based system (ABS) is a logic system that employs assumption-based reasoning.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.

AI Terms Glossary - Abduction.

 



Abduction is a kind of nonmonotonic logic first proposed in the 1870s by Charles Peirce.


It attempts to find patterns in a collection of data and to propose plausible hypotheses that could explain them.


See Also: 


Deduction, Induction



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.



AI Terms Glossary - ABSTRIPS.






The ABSTRIPS program, derived from the STRIPS program, was created to tackle robot placement and movement problems.

Unlike STRIPS, it works from the most significant to the least significant difference when comparing the current and desired states.




See Also: 


Means-Ends analysis.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.



AI Terms Glossary - Aalborg Architecture



The Aalborg architecture offers a way of calculating marginals in a belief net's join tree representation.

It is the architecture of choice for computing marginals of factored probability distributions, since it handles new evidence quickly and flexibly.

However, because it retains only the current results rather than all of the evidence, it does not allow evidence to be retracted.


See Also: 

belief net, join tree, Shafer-Shenoy Architecture.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.


