State Of An Emerging Quantum Computing Technology Ecosystem And Areas Of Business Applications.






    Quantum Computing Hardware.


    The ecosystem's hardware is a major barrier. The problem is both technical and structural in nature. 


    • The first issue is growing the number of qubits in a quantum computer while maintaining a high degree of qubit quality. 
    • Hardware has a high barrier to entry because it requires a rare combination of capital, expertise in experimental and theoretical quantum physics, and deep domain knowledge of the candidate implementation technologies. 

    Several quantum-computing hardware platforms are presently in the works. 



    The realization of completely error-corrected, fault-tolerant quantum computing will be the most significant milestone, since a quantum computer cannot give precise, mathematically accurate outputs without it. 



    • Experts disagree over whether quantum computers can deliver substantial commercial value before they are fully fault tolerant. 
    • Many argue, however, that a lack of fault tolerance does not render quantum-computing systems unworkable. 



    When will we be able to produce viable fault-tolerant quantum-computing systems? 


    Most hardware players are cautious about publishing their development road maps, although a handful have done so openly. 

    Five manufacturers have announced plans to offer fault-tolerant quantum-computing hardware by 2030. 

    If this timeframe holds true, the industry will most likely have established a distinct quantum advantage for many applications by then. 




    Quantum Computing Software.


    The number of software-focused startups is growing at a higher rate than any other part of the quantum-computing value chain. 


    • Software players today provide bespoke services and aim to offer turnkey products as the industry matures. 
    • As quantum-computing software develops, organizations will be able to upgrade their software tools and ultimately adopt fully quantum tools. 
    • In the meantime, quantum computing requires a new programming paradigm as well as a new software stack. 
    • The bigger industry players often distribute their software-development kits free of charge to foster developer communities around their products. 



    Quantum Computing Cloud-Based Services. 


    In the end, cloud-based quantum-computing services may become the most valuable part of the ecosystem, and the players that control them may capture enormous value. 


    • Most cloud-service providers now offer access to quantum computers on their platforms, allowing prospective users to experiment with the technology. 
    • Because personal or mobile quantum computing is not feasible this decade, early users will have to rely on the cloud to get a taste of the technology while the wider ecosystem matures. 



    Ecosystem of Quantum Computing.




    The foundations for a quantum-computing business have started to take shape. 

    According to our analysis, the value at stake for quantum-computing businesses is close to $80 billion (not to be confused with the value that quantum-computing use cases could generate). 



    Private And Public Funding For Quantum Computing




    Because quantum computing is still a relatively new field, the bulk of funding for fundamental research currently comes from governments. 

    Private financing, on the other hand, is fast expanding. 


    Investments in quantum-computing start-ups topped $1.7 billion in 2021 alone, more than double the amount raised in 2020. 

    • As quantum-computing commercialization gathers steam, I expect private funding to increase dramatically. 
    • If leaders prepare now, a blossoming quantum-computing ecosystem and developing commercial use cases promise to generate enormous value for many sectors. 



    Quantum computing's rapid advances are potent reminders that the technology is fast approaching commercial viability. 


    • For example, a Japanese research institute recently revealed a breakthrough in entangling qubits (the fundamental unit of quantum information, analogous to bits in conventional computers) that could enhance error correction in quantum systems and pave the way for large-scale quantum computers. 
    • In addition, an Australian company has created software shown in trials to boost the performance of any quantum-computing hardware. 
    • Investment money is flowing in, and quantum-computing start-ups are multiplying as progress accelerates. 
    • Major technology firms continue to develop quantum computing, and Alibaba, Amazon, IBM, Google, and Microsoft have already launched commercial quantum-computing cloud services. 


    Of course, all of this effort does not always equate to commercial success. 



    While quantum computing has the potential to help organizations tackle problems that are beyond the reach and speed of traditional high-performance computers, use cases remain largely experimental and hypothetical. 


    • Indeed, researchers are still debating the field's most fundamental questions (for more on these unresolved questions, see the sidebar "Quantum Computing Debates"). 
    • Nonetheless, the activity suggests that CIOs and other executives who have been keeping an eye on quantum-computing developments can no longer afford to remain spectators. 
    • Leaders should begin to plan their quantum-computing strategies, particularly in industries, such as pharmaceuticals, that could profit from commercial quantum computing early on. 
    • Change might arrive as early as 2030; some firms anticipate that practical quantum technologies will be available by then. 


    I conducted extensive research and interviewed experts from around the world about quantum hardware, software, and applications; the emerging quantum-computing ecosystem; possible business use cases; and the most important drivers of the quantum-computing market to help leaders get started planning. 


    ~ Jai Krishna Ponnappan





    You may also want to read more about Quantum Computing here.






    Quantum Computing's Future Outlook.

     



    Corporate executives from all sectors should plan for quantum computing's development. 


    I predict that quantum-computing use cases will have a hybrid operating model that is a mix of quantum and traditional high-performance computing until about 2030. 


    • Quantum-inspired algorithms, for example, may improve traditional high-performance computers. 
    • Intensive, continuous research by private companies and public institutions will be required to improve quantum hardware and enable larger and more complex use cases beyond 2030. 
    • The route to commercialization of the technology will be determined by six important factors: finance, accessibility, standards, industry consortia, talent, and digital infrastructure. 


    Organizations outside the quantum-computing industry should take five concrete steps to prepare for the technology's maturation: 


    • Keep up with industry advances and actively screen quantum-computing use cases, either with an in-house team of quantum-computing specialists or by partnering with industry organizations and joining a quantum-computing consortium. 
    • Understand the most significant risks, disruptions, and opportunities in their industries. 
    • Consider partnering with or investing in quantum-computing players (mainly software) to make knowledge and expertise more accessible. 
    • Consider hiring in-house quantum-computing experts. Even a small team of up to three specialists may be enough to help a company explore prospective use cases and screen strategic investments in quantum computing. 
    • Build a digital infrastructure that can meet the fundamental operational demands of quantum computing, store relevant data in digital databases, and configure conventional computing workflows to be quantum-ready when more powerful quantum hardware becomes available. 



    Leaders in every industry have a once-in-a-lifetime opportunity to stay on top of a generation-defining technology. 

    The reward might be strategic insights and increased company value.



    ~ Jai Krishna Ponnappan


    You may also want to read more about Quantum Computing here.





    Quantum Computing - Areas Of Application.

     




    Quantum simulation, quantum linear algebra for AI and machine learning, quantum optimization and search, and quantum factorization are the four most well-known application cases. 


    I discuss them in detail here, along with the issues leaders should consider when evaluating potential use cases. 

    I concentrate on prospective applications in a few industries that, according to research, could profit the most from the technology in the near term: pharmaceuticals, chemicals, automotive, and finance. 

    The total value at stake for these industries could, conservatively, be between $300 billion and $700 billion. 




    Chemicals


    The chemicals industry may benefit from quantum computing in R&D, production, and supply-chain optimization. 


    • Consider how quantum computing may be utilized to enhance catalyst designs in the manufacturing process. 
    • New and improved catalysts, for example, could allow existing production processes to save energy—a single catalyst can increase efficiency by up to 15%—and innovative catalysts could allow for the replacement of petrochemicals with more sustainable feedstocks or the breakdown of carbon for CO2 usage. 

    A realistic 5 to 10% efficiency boost in the chemicals sector, which spends $800 billion on production each year (half of which depends on catalysis), would result in a $20 billion to $40 billion gain in value. 
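
    The arithmetic behind that estimate is easy to reproduce. The short sketch below simply multiplies the figures quoted in this section (annual production spending, the share that depends on catalysis, and the assumed efficiency gain); it is illustrative only, not a forecast.

        # Rough value-at-stake estimate for catalysis improvements in chemicals.
        # Figures are the ones quoted in this section; illustrative only.
        production_spend = 800e9      # annual production spending, in USD
        catalysis_share = 0.5         # share of production that depends on catalysis
        for efficiency_gain in (0.05, 0.10):
            value = production_spend * catalysis_share * efficiency_gain
            print(f"{efficiency_gain:.0%} efficiency gain -> ${value / 1e9:.0f} billion")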





    Pharmaceuticals


    Quantum computing has the potential to improve the biopharmaceutical industry's research and development of molecular structures, as well as to provide value in manufacturing and further down the value chain. 


    • New medications, for example, cost an average of $2 billion and take more than 10 years to reach the market once they are discovered in R&D. 
    • Quantum computing could make R&D faster, more targeted, and more accurate by reducing the reliance on trial and error in target identification, drug design, and toxicity assessment. 
    • A shorter R&D timetable might help deliver medications to the correct patients sooner and more efficiently—in other words, it would enhance the quality of life of more people. 
    • Quantum computing might also improve production, logistics, and the supply chain. 


    While it's difficult to predict how much revenue or patient impact such advancements will have, in a $1.5 trillion industry with average EBIT margins of 16 percent (by our calculations), even a 1 to 5% revenue increase would result in $15 billion to $75 billion in additional revenue and $2 billion to $12 billion in EBIT. 
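
    The same back-of-the-envelope arithmetic applies here. The sketch below uses only the industry figures quoted above (total revenue and average EBIT margin) and is illustrative rather than a forecast.

        # Back-of-the-envelope estimate of quantum-driven gains in pharmaceuticals,
        # using the industry figures quoted above; illustrative only.
        industry_revenue = 1.5e12     # annual industry revenue, in USD
        ebit_margin = 0.16            # average EBIT margin
        for uplift in (0.01, 0.05):
            added_revenue = industry_revenue * uplift
            added_ebit = added_revenue * ebit_margin
            print(f"{uplift:.0%} revenue uplift -> ${added_revenue / 1e9:.0f} billion "
                  f"revenue, ${added_ebit / 1e9:.1f} billion EBIT")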




    Finance


    Quantum-computing applications in banking remain a ways off, and the benefits of any short-term applications are speculative. 


    • However, I believe that portfolio and risk management are the most promising applications of quantum computing in finance. 
    • Quantum-optimized loan portfolios that concentrate on collateral, for example, might allow lenders to improve their offerings by lowering interest rates and freeing up capital. 


    Although it is too early—and complicated—to evaluate the value potential of quantum computing–enhanced collateral management, the worldwide loan industry is estimated to be $6.9 trillion in 2021, implying that quantum optimization might have a substantial influence.



    Automobiles


    Quantum computing can help the automotive sector with R&D, product design, supply-chain management, manufacturing, mobility, and traffic management. 


    • By optimizing route planning in complicated multirobot processes (the path a robot travels to perform a job) such as welding, gluing, and painting, the technology might, for example, reduce manufacturing-related costs and cut cycle times. 


    Even a 2% to 5% increase in efficiency might provide $10 billion to $25 billion in annual value in an industry with $500 billion in annual manufacturing costs. 



    ~ Jai Krishna Ponnappan


    You may also want to read more about Quantum Computing here.





    Artificial Intelligence - What Is Computer-Assisted Diagnosis?

     



    Computer-assisted diagnosis (CAD) is a branch of medical informatics that deals with the use of computer and communications technologies in medicine.

    Beginning in the 1950s, physicians and scientists used computers and software to gather and organize expanding collections of medical data and to offer important decision and treatment assistance in contacts with patients.

    The use of computers in medicine has resulted in significant improvements in the medical diagnostic decision-making process.

    Tables of differential diagnoses inspired the first diagnostic computing devices.

    Differential diagnosis entails the creation of a set of sorting criteria that may be used to determine likely explanations of symptoms during a patient's examination.

    An excellent example is the Group Symbol Associator (GSA), a slide rule-like device designed around 1950 by F. A. Nash of the South West London Mass X-Ray Service that enabled the physician to line up a patient's symptoms with 337 symptom-disease complexes to obtain a diagnosis (Nash 1960, 1442–46).

    At the Rockefeller Institute for Medical Research's Medical Electronics Center, Cornell University physician Martin Lipkin and physiologist James Hardy developed a manual McBee punched card system for the detection of hematological illnesses.

    Beginning in 1952, researchers linked patient data to findings previously known about each of twenty-one textbook hematological diseases (Lipkin and Hardy 1957, 551–52).

    The findings impressed the Medical Electronics Center's director, television pioneer Vladimir Zworykin, who used Lipkin and Hardy's method to create a comparable digital computer system.

    By compiling and sorting findings and creating a weighted diagnostic index, Zworykin's system automated what had previously been done manually.

    Zworykin used vacuum tube BIZMAC computer coders at RCA's Electronic Data Processing Division to convert the punched card system to the digital computer.

    On December 10, 1957, in Camden, New Jersey, the finalized Zworykin programmed hematological differential diagnosis system was first exhibited on the BIZMAC computer (Engle 1992, 209–11).

    As a result, the world's first fully digital electronic computer diagnostic aid was developed.

    In the 1960s, a new generation of doctors collaborated with computer scientists to link the concept of reasoning under uncertainty to the concept of personal probability, whereby orderly medical judgments might be indexed along the lines of gambling behavior.

    Probability is used to quantify uncertainty in order to determine the likelihood that a single patient has one or more illnesses.

    The use of personal probability in conjunction with digital computer technologies yielded unexpected outcomes.

    Medical decision analysis is an excellent example of this, since it entails using utility and probability theory to compute alternative patient diagnoses, prognoses, and treatment management options.

    Stephen Pauker and Jerome Kassirer, both of Tufts University's medical informatics department, are often acknowledged as among the first to explicitly apply computer-aided decision analysis to clinical medicine (Pauker and Kassirer 1987, 250–58).

    Decision analysis entails identifying all available options and their possible consequences and creating a decision model, generally in the form of a decision tree so complicated and dynamic that only a computer can keep track of changes in all of the variables in real time.

    Nodes in such a tree describe options, probabilities, and outcomes.

    The tree is used to show the strategies accessible to the physician and to quantify the chance of each result occurring if a certain approach is followed (sometimes on a moment-by-moment basis).

    Each outcome's relative value is also expressed mathematically, as a utility, on a clearly defined scale.

    Decision analysis assigns an estimate of the cost of getting each piece of clinical or laboratory-derived information, as well as the possible value that may be gained from it.

    The costs and benefits may be measured in qualitative terms, such as the quality of life or amount of pain derived from the acquisition and use of medical information, but they are usually measured in quantitative or statistical terms, such as when calculating surgical success rates or cost-benefit ratios for new medical technologies.
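
    As a minimal sketch of the kind of calculation a decision-analysis program performs, the toy tree below folds back expected utilities over decision and chance nodes. The structure, probabilities, and utilities are hypothetical, chosen only to illustrate the mechanics, and are not drawn from any clinical source.

        # Toy decision-tree "fold-back": decision nodes take the best option,
        # chance nodes take the probability-weighted average of their branches.
        # All probabilities and utilities below are hypothetical.

        def expected_utility(node):
            kind = node["type"]
            if kind == "outcome":
                return node["utility"]
            if kind == "chance":
                return sum(p * expected_utility(child) for p, child in node["branches"])
            if kind == "decision":
                return max(expected_utility(child) for _, child in node["options"])
            raise ValueError(f"unknown node type: {kind}")

        tree = {
            "type": "decision",
            "options": [
                ("operate", {
                    "type": "chance",
                    "branches": [
                        (0.90, {"type": "outcome", "utility": 0.95}),  # surgery succeeds
                        (0.10, {"type": "outcome", "utility": 0.20}),  # complication
                    ],
                }),
                ("medical therapy", {
                    "type": "chance",
                    "branches": [
                        (0.70, {"type": "outcome", "utility": 0.80}),  # condition controlled
                        (0.30, {"type": "outcome", "utility": 0.40}),  # condition worsens
                    ],
                }),
            ],
        }

        print(f"best achievable expected utility: {expected_utility(tree):.3f}")

    In a real system, this fold-back is repeated every time a probability, utility, or cost estimate changes, which is why a computer is needed to keep the analysis current.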

    Critics claimed that cost-benefit calculations made rationing of scarce health-care resources more palatable, but decision analysis weathered the criticism (Berg 1997, 54).

    Artificial intelligence expert systems started to supplant more logical and sequential algorithmic processes for attaining medical agreement in the 1960s and 1970s.

    Randolph Miller and Fred Masarie, Jr. criticized the so-called oracles of medical computing's past, claiming that they created factory diagnoses (Miller and Masarie, Jr. 1990, 1–2).

    Computer scientists collaborated with clinicians to integrate assessment procedures into medical applications, repurposing them as criticizing systems of last resort rather than diagnostic systems (Miller 1984, 17–23).

    The ATTENDING expert system for anesthetic administration, created at Yale University School of Medicine, may have been the first to use a criticizing method.

    Routines for risk assessment are at the heart of the ATTENDING system, and they assist residents and doctors in weighing factors such as patient health, surgical procedure, and available anesthetics when making clinical decisions.

    Unlike diagnostic tools that suggest a procedure based on previously entered data, ATTENDING reacts to user recommendations in a stepwise manner (Miller 1983, 362–69).

    Because it requires the active attention of a human operator, the criticizing technique absolves the computer of ultimate responsibility for diagnosis.

    This is a critical characteristic in an era where strict responsibility applies to medical technology failures, including complicated software.

    Computer-assisted diagnosis migrated to home computers and the internet in the 1990s and early 2000s.

    Medical HouseCall and Dr. Schueler's Home Medical Advisor are two instances of so-called "doc-in-a-box" software.

    Medical HouseCall is a generalized, consumer-oriented version of the University of Utah's Iliad decision-support system.

    The information foundation for Medical HouseCall took an estimated 150,000 person hours to develop.

    The first software package, which was published in May 1994, had information on over 1,100 ailments as well as 3,000 prescription and nonprescription medications.

    It also included cost and treatment alternatives information.

    The encyclopedia included in the program spanned 5,000 printed pages.

    Medical HouseCall also has a module for maintaining medical records for family members.

    Medical HouseCall's first version required users to choose one of nineteen symptom categories by clicking on graphical symbols depicting bodily parts, then answer a series of yes-or-no questions.

    After that, the program generates a prioritized list of potential diagnoses.

    Bayesian estimation is used to obtain these diagnoses (Bouhaddou and Warner, Jr. 1995, 1181–85).
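
    To make the Bayesian step concrete, the toy calculation below ranks candidate diagnoses by posterior probability given a set of reported symptoms. The diseases, priors, and likelihoods are invented for illustration and do not reflect Iliad's or Medical HouseCall's actual knowledge base.

        # Toy Bayesian diagnosis: rank diseases by posterior probability given
        # observed symptoms, assuming conditional independence (naive Bayes).
        # All priors and likelihoods are invented for illustration.

        priors = {"common cold": 0.70, "influenza": 0.25, "strep throat": 0.05}

        # P(symptom present | disease)
        likelihoods = {
            "common cold":  {"fever": 0.10, "cough": 0.80, "sore throat": 0.50},
            "influenza":    {"fever": 0.90, "cough": 0.70, "sore throat": 0.40},
            "strep throat": {"fever": 0.60, "cough": 0.10, "sore throat": 0.95},
        }

        observed = ["fever", "sore throat"]

        # Unnormalized posterior: prior times the product of symptom likelihoods.
        scores = {}
        for disease, prior in priors.items():
            score = prior
            for symptom in observed:
                score *= likelihoods[disease][symptom]
            scores[disease] = score

        total = sum(scores.values())
        for disease, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{disease:13s} posterior ~ {score / total:.2f}")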


    Dr. Schueler's Home Medical Advisor was a competitive software program in the 1990s.

    Home Medical Advisor is a consumer-oriented CD-ROM set that contains a wide library of health and medical information, as well as a diagnostic-assistance application that offers probable diagnoses and appropriate courses of action.

    In 1997, its medical encyclopedia defined more than 15,000 words.

    There's also a picture library and full-motion video presentations in Home Medical Advisor.


    The program's artificial intelligence module may be accessed via two alternative interfaces.

    1. The first involves using mouse clicks to tick boxes.
    2. The second interface requires the user to provide written responses to particular inquiries in natural language.


    The program's differential diagnoses are connected to more detailed information about those illnesses (Cahlin 1994, 53–56).

    Online symptom checkers are becoming commonplace.

    Deep learning in big data analytics has the potential to minimize diagnostic and treatment mistakes, lower costs, and improve workflow efficiency in the future.

    CheXpert, an automated chest x-ray diagnostic system, was unveiled in 2019 by Stanford University's Machine Learning Group and Intermountain Healthcare.

    In under 10 seconds, the radiology AI program can identify pneumonia.

    In the same year, Massachusetts General Hospital reported the development of a convolutional neural network based on a huge collection of chest radiographs to detect persons at high risk of death from any cause, including heart disease and cancer.

    The identification of wrist fractures, metastatic breast cancer, and cataracts in children has improved thanks to pattern recognition using deep neural networks.

    Although the accuracy of deep learning findings varies by field of health and kind of damage or sickness, the number of applications is growing to the point where smartphone apps with integrated AI are already in limited usage.

    Deep learning approaches are projected to be used in the future to help with in-vitro fertilization embryo selection, mental health diagnosis, cancer categorization, and weaning patients off of ventilator support.



    ~ Jai Krishna Ponnappan

    You may also want to read more about Artificial Intelligence here.


    See also: 


    Automated Multiphasic Health Testing; Clinical Decision Support Systems; Expert Systems; INTERNIST-I and QMR.


    Further Reading


    Berg, Marc. 1997. Rationalizing Medical Work: Decision Support Techniques and Medical Practices. Cambridge, MA: MIT Press.

    Bouhaddou, Omar, and Homer R. Warner, Jr. 1995. “An Interactive Patient Information and Education System (Medical HouseCall) Based on a Physician Expert System (Iliad).” Medinfo 8, pt. 2: 1181–85.

    Cahlin, Michael. 1994. “Doc on a Disc: Diagnosing Home Medical Software.” PC Novice, July 1994: 53–56.

    Engle, Ralph L., Jr. 1992. “Attempts to Use Computers as Diagnostic Aids in Medical Decision Making: A Thirty-Year Experience.” Perspectives in Biology and Medicine 35, no. 2 (Winter): 207–19.

    Lipkin, Martin, and James D. Hardy. 1957. “Differential Diagnosis of Hematologic Diseases Aided by Mechanical Correlation of Data.” Science 125 (March 22): 551–52.

    Miller, Perry L. 1983. “Critiquing Anesthetic Management: The ‘ATTENDING’ Computer System.” Anesthesiology 58, no. 4 (April): 362–69.

    Miller, Perry L. 1984. “Critiquing: A Different Approach to Expert Computer Advice in Medicine.” In Proceedings of the Annual Symposium on Computer Applications in Medical Care, vol. 8, edited by Gerald S. Cohen, 17–23. Piscataway, NJ: IEEE Computer Society.

    Miller, Randolph A., and Fred E. Masarie, Jr. 1990. “The Demise of the Greek Oracle Model for Medical Diagnosis Systems.” Methods of Information in Medicine 29, no. 1 (January): 1–2.

    Nash, F. A. 1960. “Diagnostic Reasoning and the Logoscope.” Lancet 276, no. 7166 (December 31): 1442–46.

    Pauker, Stephen G., and Jerome P. Kassirer. 1987. “Decision Analysis.” New England Journal of Medicine 316, no. 5 (January): 250–58.

    Topol, Eric J. 2019. “High-Performance Medicine: The Convergence of Human and Artificial Intelligence.” Nature Medicine 25, no. 1 (January): 44–56.



    Artificial Intelligence - What Is Computational Neuroscience?

     



    Computational neuroscience (CNS) is a branch of neuroscience that applies the notion of computation to the study of the brain.

    Eric Schwartz coined the phrase "computational neuroscience" in 1985 to replace the words "neural modeling" and "brain theory," which were previously used to describe different forms of nervous system study.

    At the heart of CNS is the idea that nervous-system effects can be understood as instances of computation, since state transitions can be explained as relations between abstract properties.

    In other words, explanations of effects in neurological systems are descriptions of information transformed, stored, and represented, rather than causal descriptions of interactions among physically distinct elements.

    As a result, CNS aims to develop computational models to better understand how the nervous system works in terms of the information processing characteristics of the brain's parts.

    Constructing a model of how interacting neurons might build basic components of cognition is one example.

    A brain map, on the other hand, does not disclose the nervous system's computational processes, but it can be used as a constraint on theoretical models.

    Information sharing, for example, has costs in terms of the physical connections between communicating areas: regions that communicate frequently (requiring high bandwidth and low latency) tend to be clustered together.

    The description of neural systems as carrying out computations is central to computational neuroscience, and it contradicts the claim that computational constructs belong exclusively to the explanatory framework of psychology, that is, that human cognitive capacities can be characterized and confirmed independently of how they are implemented in the nervous system.

    For example, when it became clear in 1973 that cognitive processes could not be understood by analyzing the results of one-dimensional questions/scenarios, a popular approach in cognitive psychology at the time, Allen Newell argued that only synthesis with computer simulation could reveal the complex interactions of the proposed component's mechanism and whether the proposed component's mechanism was correct.

    David Marr (1945–1980) proposed the first computational neuroscience framework.

    This framework tries to give a conceptual starting point for thinking about levels in the context of computing by nervous structure.

    It reflects the three-level structure used in computer science: abstract problem analysis, algorithm, and physical implementation.

    The model, however, has drawbacks since it is made up of three poorly linked layers and uses a rigid top-down approach that ignores all neurobiological facts as instances at the implementation level.

    As a result, certain events are thought to be explicable on just one or two levels.

    As a result, the Marr levels framework does not correspond to the levels of nervous system structure (molecules, synapses, neurons, nuclei, circuits, networks layers, maps, and systems), nor does it explain nervous system emergent type features.

    Computational neuroscience takes a bottom-up approach, beginning with neurons and illustrating how computational functions and their implementations with neurons result in dynamic interactions between neurons.

    Models of connectivity and dynamics, decoding models, and representational models are the three kinds of models that try to get computational understanding from brain-activity data.

    Connectivity models use the correlation matrix, which displays pairwise functional connectivity between regions and establishes the features of related areas.
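
    A small, self-contained illustration of this kind of connectivity analysis, using synthetic signals rather than real recordings, is to compute the pairwise correlation matrix of simulated regional time series:

        # Functional-connectivity sketch: pairwise correlations between simulated
        # regional time series. Signals are synthetic and purely illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        timepoints = 200
        shared = rng.standard_normal(timepoints)       # common driving signal

        region_a = shared + 0.3 * rng.standard_normal(timepoints)
        region_b = shared + 0.3 * rng.standard_normal(timepoints)   # coupled to region A
        region_c = rng.standard_normal(timepoints)                   # independent region

        signals = np.vstack([region_a, region_b, region_c])
        connectivity = np.corrcoef(signals)    # 3 x 3 pairwise correlation matrix

        print(np.round(connectivity, 2))

    Regions driven by the shared signal show high pairwise correlation, while the independent region does not; that pattern is what a connectivity model summarizes.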

    Analyses of effective connectivity and large-scale brain dynamics go beyond the generic linear statistical models used in activation- and information-based brain mapping: because they are generative models of brain dynamics, they can generate data at the level of the measurements themselves.

    The goal of the decoding models is to figure out what information is stored in each brain area.

    When an area is designated as a "knowledge representing" one, its data becomes a functional entity that informs regions that receive these signals about the content.

    In the simplest scenario, decoding identifies which of two stimuli elicited a recorded response pattern.

    The representation's content might be the sensory stimulus's identity, a stimulus feature (such as orientation), or an abstract variable required for a cognitive operation or action.

    Decoding and multivariate pattern analysis were utilized to determine the components that must be included in the brain computational model.

    Decoding, on the other hand, does not by itself provide a model of brain computation; rather, it reveals some of the information an area carries without explaining how the brain computes it.

    Because they strive to characterize areas' reactions to arbitrary stimuli, representation models go beyond decoding.

    Encoding models, pattern component models, and representational similarity analysis are three forms of representational model analysis that have been presented.

    All three studies are based on multivariate descriptions of the experimental circumstances and test assumptions about representational space.

    In encoding models, the activity profile of each voxel across stimuli is predicted as a linear combination of the model's features.

    The distribution of the activity profiles that define the representational space is treated as a multivariate normal distribution in pattern component models.

    The representational space is defined by the representational dissimilarities of the activity patterns evoked by the stimuli in representational similarity analysis.
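
    A minimal version of an encoding model of this kind, using synthetic data, predicts each voxel's response across stimuli as a linear combination of stimulus features fitted by ordinary least squares:

        # Encoding-model sketch: predict each voxel's activity across stimuli as a
        # linear combination of stimulus features. The data here are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        n_stimuli, n_features, n_voxels = 50, 4, 10

        features = rng.standard_normal((n_stimuli, n_features))     # stimulus descriptions
        true_weights = rng.standard_normal((n_features, n_voxels))  # unknown in practice
        activity = features @ true_weights + 0.1 * rng.standard_normal((n_stimuli, n_voxels))

        # Fit one weight per feature and voxel by ordinary least squares.
        weights, *_ = np.linalg.lstsq(features, activity, rcond=None)
        predicted = features @ weights

        r = np.corrcoef(predicted.ravel(), activity.ravel())[0, 1]
        print(f"correlation between predicted and measured activity: {r:.3f}")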

    Brain-activity models alone, however, do not test the properties that indicate how the information processing underlying a cognitive function might operate.

    Task performance models are used to describe cognitive processes in terms of algorithms.

    These models are put to the test using experimental data and, in certain cases, data from brain activity.

    Neural network models and cognitive models are the two basic types of models.

    Models of neural networks are created using varying degrees of biological information, ranging from neurons to maps.

    Multiple steps of linear-nonlinear signal modification are supported by neural networks, which embody the parallel distributed processing paradigm.

    To enhance job performance, models often incorporate millions of parameters (connection weights).

    Simple models will not be able to describe complex cognitive processes, hence a high number of parameters is required.

    Deep convolutional neural network models have been used to predict brain representations of novel images in the ventral visual stream of primates.

    The representations in the first few layers of neural networks are comparable to those in the early visual cortex.

    Higher layers are similar to the inferior temporal cortical representation in that they both allow for the decoding of object location, size, and posture, as well as the object's categorization.

    Various studies have shown that deep convolutional neural networks' internal representations provide the best current models of visual image representations in the inferior temporal cortex in humans and animals.

    When a wide number of models were compared, those that were optimized for object categorization described the cortical representation the best.

    Cognitive models are artificial-intelligence applications in computational neuroscience that target information processing without including any neurobiological components (neurons, axons, etc.).

    Production systems, reinforcement learning, and Bayesian cognitive models are the three kinds of models.

    They use logic and predicates, and they work with symbols rather than signals.

    There are various advantages of employing artificial intelligence in computational neuroscience research.

    1. First, although a vast quantity of information about the brain has accumulated over time, a true understanding of how the brain functions is still lacking.
    2. Second, networks of neurons create emergent effects, but how these networks operate is still unknown.
    3. Third, although the brain has been crudely mapped, as has the understanding of what distinct brain areas (mostly sensory and motor functions) do, a precise map is still lacking.

    Furthermore, some of the information gathered via experiments or observations may be useless; the link between synaptic learning principles and computing is mostly unclear.

    Production-system models were the first models for explaining reasoning and problem solving.

    A "production" is a cognitive activity that occurs as a consequence of the "if-then" rule, in which "if" defines the set of circumstances under which the range of productions ("then" clause) may be carried out.

    When the prerequisites for numerous rules are satisfied, the model uses a conflict resolution algorithm to choose the best production.

    The production models provide a sequence of predictions that seem like a conscious stream of brain activity.

    In newer applications, the same approach is being used to predict regional mean fMRI (functional magnetic resonance imaging) activation time courses.
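
    A production system of this kind can be sketched in a few lines. The rules, working-memory contents, and conflict-resolution strategy below (prefer the most specific rule that has not yet fired) are purely illustrative and far simpler than any real cognitive architecture.

        # Toy production system: "if" conditions are matched against working memory,
        # conflict resolution picks the most specific unfired rule, and the chosen
        # "then" action updates working memory. Rules are purely illustrative.

        rules = [
            {"if": {"goal is add", "digits seen"}, "then": "retrieve sum from memory"},
            {"if": {"goal is add"},                "then": "look at the digits"},
            {"if": {"sum retrieved"},              "then": "say the answer"},
        ]

        working_memory = {"goal is add"}
        fired = set()

        for step in range(5):
            matching = [i for i, rule in enumerate(rules)
                        if i not in fired and rule["if"] <= working_memory]
            if not matching:
                break
            # Conflict resolution: fire the unfired rule with the most conditions met.
            chosen = max(matching, key=lambda i: len(rules[i]["if"]))
            fired.add(chosen)
            action = rules[chosen]["then"]
            print(f"step {step}: {action}")
            # Hand-coded consequences of each action, to keep the example tiny.
            if action == "look at the digits":
                working_memory.add("digits seen")
            elif action == "retrieve sum from memory":
                working_memory.add("sum retrieved")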

    Reinforcement learning models are used in a variety of areas to simulate how optimal decision-making is achieved.

    In neurobiological systems, the implementation is associated with the basal ganglia.

    The agent might learn a "value function" that links each state to the predicted total reward.

    The agent may pick the most promising action if it can forecast which state each action will lead to and understands the values of those states.

    The agent could additionally learn a "policy" that links each state to promising actions.

    Exploitation (which yields immediate reward) must be balanced against exploration (which benefits learning and brings long-term reward).
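
    These ideas can be compressed into a toy example. The sketch below is tabular Q-learning on a five-state corridor, not a model of the basal ganglia: the learned Q-table plays the role of a value function, a greedy policy is read out from it, and an epsilon-greedy rule balances exploration against exploitation. Everything about the environment is invented for illustration.

        # Toy reinforcement learning: tabular Q-learning on a 5-state corridor in
        # which stepping "right" at the last state yields a reward of 1.
        # The Q-table approximates a value function; epsilon-greedy action choice
        # balances exploration and exploitation. Purely illustrative.
        import random

        n_states, actions = 5, ("left", "right")
        alpha, gamma, epsilon = 0.5, 0.9, 0.2
        q = {(s, a): 0.0 for s in range(n_states) for a in actions}

        def step(state, action):
            """Environment dynamics: reward 1 for stepping right at the last state."""
            if action == "right":
                if state == n_states - 1:
                    return state, 1.0
                return state + 1, 0.0
            return max(state - 1, 0), 0.0

        random.seed(0)
        for episode in range(200):
            state = 0
            for _ in range(20):
                if random.random() < epsilon:                 # explore
                    action = random.choice(actions)
                else:                                         # exploit current values
                    action = max(actions, key=lambda a: q[(state, a)])
                next_state, reward = step(state, action)
                best_next = max(q[(next_state, a)] for a in actions)
                q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
                state = next_state

        # Greedy policy read out from the learned values.
        policy = {s: max(actions, key=lambda a: q[(s, a)]) for s in range(n_states)}
        print(policy)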

    The Bayesian models show what the brain should really calculate in order to perform at its best.

    These models enable inductive inference, which is beyond the capability of neural network models and requires prior knowledge.

    The models have been used to explain cognitive biases as the result of past beliefs, as well as to comprehend fundamental sensory and motor processes.

    The representation of the probability distribution of neurons, for example, has been investigated theoretically using Bayesian models and compared to actual evidence.

    These studies illustrate that connecting Bayesian inference to actual brain implementation is still difficult, since the brain "cuts corners" in trying to be efficient, so approximations may explain departures from statistical optimality.
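
    A standard textbook instance of such an ideal-observer computation, not tied to any particular study cited here, combines a Gaussian prior with a Gaussian sensory likelihood. The posterior mean is a precision-weighted average, which is one simple way prior beliefs can bias an estimate:

        # Ideal-observer sketch: combine a Gaussian prior over a stimulus value with
        # a Gaussian likelihood from one noisy measurement. The posterior mean is a
        # precision-weighted average, so a strong prior pulls (biases) the estimate.
        # All numbers are illustrative.

        prior_mean, prior_var = 0.0, 4.0       # prior belief about the stimulus
        measurement, noise_var = 3.0, 1.0      # noisy sensory observation

        prior_precision = 1.0 / prior_var
        likelihood_precision = 1.0 / noise_var

        posterior_precision = prior_precision + likelihood_precision
        posterior_mean = (prior_precision * prior_mean
                          + likelihood_precision * measurement) / posterior_precision

        print(f"posterior mean: {posterior_mean:.2f} (pulled toward the prior)")
        print(f"posterior variance: {1.0 / posterior_precision:.2f}")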

    The concept of a brain doing computations is central to computational neuroscience, so researchers are using modeling and analysis of information processing properties of nervous system elements to try to figure out how complex brain functions work.


    ~ Jai Krishna Ponnappan

    You may also want to read more about Artificial Intelligence here.


    See also: 

    Bayesian Inference; Cognitive Computing.


    Further Reading


    Kaplan, David M. 2011. “Explanation and Description in Computational Neuroscience.” Synthese 183, no. 3: 339–73.

    Kriegeskorte, Nikolaus, and Pamela K. Douglas. 2018. “Cognitive Computational Neuroscience.” Nature Neuroscience 21, no. 9: 1148–60.

    Schwartz, Eric L., ed. 1993. Computational Neuroscience. Cambridge, MA: Massachusetts Institute of Technology.

    Trappenberg, Thomas. 2009. Fundamentals of Computational Neuroscience. New York: Oxford University Press.



    Artificial Intelligence - What Is Computational Creativity?

     



    Computational creativity refers to creative behavior exhibited by computational systems. It is connected to computer-generated art, although it is not reducible to it.

    According to Margaret Boden, "CG-art" is an artwork that "results from some computer program being allowed to operate on its own, with zero input from the human artist" (Boden 2010, 141).

    This definition is both severe and limiting, since it is confined to the creation of "art works" as defined by human observers.

    Computational creativity, on the other hand, is a broader phrase that encompasses a broader range of actions, equipment, and outputs.

    "Computational creativity is an area of Artificial Intelligence (AI) study... where we construct and engage with computational systems that produce products and ideas," said Simon Colton and Geraint A. Wiggins.

    Those "artefacts and ideas" might be works of art, as well as other things, discoveries, and/or performances (Colton and Wiggins 2012, 21).

    Games, narrative, music composition and performance, and visual arts are examples of computational creativity applications and implementations.

    Games and other cognitive skill competitions are often used to evaluate and assess machine skills.

    In fact, the fundamental criterion of machine intelligence was established via a game, which Alan Turing dubbed the "imitation game" (1950).

    Since then, AI progress and accomplishment have been monitored and evaluated via games and other human-machine contests.

    Chess has had a special status and privileged position among all the games in which computers have been involved, to the point where critics such as Douglas Hofstadter (1979, 674) and Hubert Dreyfus (1992) confidently asserted that championship-level AI chess would forever remain out of reach and unattainable.

    After beating Garry Kasparov in 1997, IBM's Deep Blue modified the game's rules.

    But chess was just the start.

    In 2016, AlphaGo, a Go-playing algorithm built by Google DeepMind, defeated Lee Sedol, one of the most famous human players of this notoriously difficult board game, in four out of five games.

    Human observers, including Fan Hui (2016), have praised AlphaGo's nimble play as "beautiful," "intuitive," and "innovative." 

    Natural language generation (NLG) systems such as Automated Insights' Wordsmith and Narrative Science's Quill are used to create human-readable stories from machine-readable data.

    Unlike basic news aggregators or template NLG systems, these computers "write" (or "produce," as the case may be) unique tales that are almost indistinguishable from human-created material in many cases.

    Christer Clerwall, for example, performed a small-scale study in 2014 in which human test subjects were asked to assess news pieces written by Wordsmith and by a professional writer from the Los Angeles Times.

    The study's findings reveal that, although software-generated information is often seen as descriptive and dull, it is also regarded as more impartial and trustworthy (Clerwall 2014, 519).

    "Within 10 years, a digital computer would produce music regarded by critics as holding great artistic merit," Herbert Simon and Allen Newell predicted in their famous article "Heuristic Problem Solving" (1958). (Simon and Newell 1958, 7).

    This prediction has come true.

    Experiments in Musical Intelligence (EMI, or "Emmy") by David Cope is one of the most well-known works in the field of "algorithmic composition." 

    Emmy is a computer-based algorithmic composer capable of analyzing existing musical compositions, rearranging their fundamental components, and then creating new, unique scores that sound like and, in some circumstances, are indistinguishable from Mozart, Bach, and Chopin's iconic masterpieces (Cope 2001).

    There are robotic systems in music performance, such as Shimon, a marimba-playing jazz-bot from Georgia Tech University, that can not only improvise with human musicians in real time, but also "is designed to create meaningful and inspiring musical interactions with humans, leading to novel musical experiences and outcomes" (Hoffman and Weinberg 2011).

    Cope's method, which he refers to as "recombinacy," is not restricted to music.

    It may be used and applied to any creative technique in which new works are created by reorganizing or recombining a set of finite parts, such as the alphabet's twenty-six letters, the musical scale's twelve tones, the human eye's sixteen million colors, and so on.
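
    As a toy illustration of recombination, and emphatically not a reconstruction of Cope's actual EMI system, the sketch below records which notes follow which in a tiny corpus and then chains those observed transitions into a new sequence:

        # Toy "recombination": learn which notes follow which in a small corpus of
        # melodies, then chain those observed transitions into a new sequence.
        # A first-order Markov sketch for illustration, not Cope's EMI.
        import random
        from collections import defaultdict

        corpus = [
            ["C", "E", "G", "E", "C"],
            ["C", "D", "E", "F", "G"],
            ["G", "F", "E", "D", "C"],
        ]

        # Record, for each note, every note that follows it anywhere in the corpus.
        transitions = defaultdict(list)
        for melody in corpus:
            for current, nxt in zip(melody, melody[1:]):
                transitions[current].append(nxt)

        random.seed(3)
        note, new_melody = "C", ["C"]
        for _ in range(7):
            note = random.choice(transitions[note])   # pick any observed continuation
            new_melody.append(note)

        print(" ".join(new_melody))

    Every fragment of the output comes from the corpus, yet the overall sequence is new, which is the basic intuition behind recombination as a creative technique.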

    As a result, other creative undertakings, such as painting, have adopted similar computational-creativity methods.

    The Painting Fool is an automated painter created by Simon Colton that seeks to be "considered seriously as a creative artist in its own right" (Colton 2012, 16).

    So far, the program has generated thousands of "original" artworks, which have been shown in both online and physical art exhibitions.

    Obvious, a Paris-based collective comprising the artists Hugo Caselles-Dupré, Pierre Fautrel, and Gauthier Vernier, uses a generative adversarial network (GAN) to create portraits of a fictitious family (the Belamys) in the manner of the European masters.

    Christie's auctioned one of these pictures, "Portrait of Edmond Belamy," for $432,500 in October 2018.

    Designing ostensibly creative systems instantly runs into semantic and conceptual issues.

    Creativity is an enigmatic phenomenon that is difficult to pinpoint or quantify.

    Are these programs, algorithms, and systems really "creative," or are they merely a sort of "imitation," as some detractors have labeled them? This issue is similar to John Searle's (1984, 32–38) Chinese Room thought experiment, which aimed to highlight the distinction between genuine cognitive activity, such as creative expression, and simple simulation or imitation.

    Researchers in the field of computational creativity have introduced and operationalized a rather specific formulation to characterize their efforts: "The philosophy, science, and engineering of computational systems that, by taking on specific responsibilities, exhibit behaviors that unbiased observers would deem creative" (Colton and Wig gins 2012, 21).

    The key word in this description is "responsibility." 

    "The term responsibilities highlights the difference between the systems we build and creativity support tools studied in the HCI [human-computer interaction] community and embedded in tools like Adobe's Photoshop, to which most observers would probably not attribute creative intent or behavior," Colton and Wiggins explain (Colton and Wiggins 2012, 21).

    "The program is only a tool to improve human creativity" (Colton 2012, 3–4) using a software application like Photoshop; it is an instrument utilized by a human artist who is and remains responsible for the creative choices and output created by the instrument.

    Computational creativity research, on the other hand, "seeks to develop software that is creative in and of itself" (Colton 2012, 4).

    On the one hand, one might react as we have in the past, dismissing contemporary technological advancements as simply another instrument or tool of human action—or what technology philosophers such as Martin Heidegger (1977) and Andrew Feenberg (1991) refer to as "the instrumental theory of technology." 

    This is, in fact, the explanation supplied by David Cope in his own appraisal of his work's influence and relevance.

    Emmy and other algorithmic composition systems, according to Cope, do not compete with or threaten to replace human composition.

    They are just instruments used in and for musical creation.

    "Computers represent just instruments with which we stretch our ideas and bodies," writes Cope.

    Computers, programs, and the data utilized to generate their output were all developed by humanity.

    Our algorithms make music that is just as much ours as music made by our greatest human inspirations" (Cope 2001, 139).

    According to Cope, no matter how much algorithmic mediation is invented and used, the musical composition generated by these advanced digital tools is ultimately the responsibility of the human person.

    A similar argument may be made for other supposedly creative programs, such as the Go-playing algorithm AlphaGo or the painting software The Painting Fool.

    When AlphaGo wins a big tournament or The Painting Fool creates a spectacular piece of visual art that is presented in a gallery, the argument goes, there is still a human person (or persons) who is responsible for, and can answer for, what has been created.

    The attribution lines may get more intricate and drawn out, but there is always someone in a position of power behind the scenes, it might be claimed.

    In circumstances where efforts have been made to transfer responsibility to the computer, evidence of this already exists.

    Consider AlphaGo's game-winning move 37 versus Lee Sedol in game two.

    If someone wants to learn more about the move and its significance, AlphaGo is the one to ask.

    The algorithm, on the other hand, will remain silent.

    In actuality, it was up to the human programmers and spectators to answer on AlphaGo's behalf and explain the importance and effect of the move.

    As a result, as Colton (2012) and Colton et al. (2015) point out, if the mission of computational creativity is to succeed, the software will have to do more than create objects and behaviors that humans interpret as creative output.

    It must also take ownership of the task by accounting for what it accomplished and how it did it.

    "The software," Colton and Wiggins argue, "should be available for questioning about its motivations, processes, and products," eventually capable of not only generating titles for and explanations and narratives about the work but also responding to questions by engaging in critical dialogue with its audience (Colton and Wiggins 2012, 25). (Colton et al. 2015, 15).

    At the same time, these algorithmic incursions into what had previously been a protected and solely human realm have created possibilities.

    It's not only a question of whether computers, machine learning algorithms, or other applications can or cannot be held accountable for what they do or don't do; it's also a question of how we define, explain, and define creative responsibility in the first place.

    This suggests that there is a strong and a weak component to this endeavor, which Mohammad Majid al-Rifaie and Mark Bishop refer to as strong and weak forms of computational creativity, echoing Searle's original distinction between strong and weak AI (Majid al-Rifaie and Bishop 2015, 37).

    The types of application development and demonstrations presented by people and companies such as DeepMind, David Cope, and Simon Colton are examples of the "strong" sort.

    However, these efforts have a "weak AI" component in that they simulate, operationalize, and stress test various conceptualizations of artistic responsibility and creative expression, resulting in critical and potentially insightful reevaluations of how we have defined these concepts in our own thinking.

    Nothing has made Douglas Hofstadter reexamine his own thinking about thinking more than the endeavor to cope with and make sense of David Cope's Emmy program (Hofstadter 2001, 38).

    To put it another way, developing and experimenting with new algorithmic capabilities does not necessarily detract from human beings and what (hopefully) makes us unique, but it does provide new opportunities to be more precise and scientific about these distinguishing characteristics and their limits.


    ~ Jai Krishna Ponnappan

    You may also want to read more about Artificial Intelligence here.



    See also: 

    AARON; Automatic Film Editing; Deep Blue; Emily Howell; Generative Design; Generative Music and Algorithmic Composition.

    Further Reading

    Boden, Margaret. 2010. Creativity and Art: Three Roads to Surprise. Oxford, UK: Oxford University Press.

    Clerwall, Christer. 2014. “Enter the Robot Journalist: Users’ Perceptions of Automated Content.” Journalism Practice 8, no. 5: 519–31.

    Colton, Simon. 2012. “The Painting Fool: Stories from Building an Automated Painter.” In Computers and Creativity, edited by Jon McCormack and Mark d’Inverno, 3–38. Berlin: Springer Verlag.

    Colton, Simon, Alison Pease, Joseph Corneli, Michael Cook, Rose Hepworth, and Dan Ventura. 2015. “Stakeholder Groups in Computational Creativity Research and Practice.” In Computational Creativity Research: Towards Creative Machines, edited by Tarek R. Besold, Marco Schorlemmer, and Alan Smaill, 3–36. Amsterdam: Atlantis Press.

    Colton, Simon, and Geraint A. Wiggins. 2012. “Computational Creativity: The Final Frontier.” In Frontiers in Artificial Intelligence and Applications, vol. 242, edited by Luc De Raedt et al., 21–26. Amsterdam: IOS Press.

    Cope, David. 2001. Virtual Music: Computer Synthesis of Musical Style. Cambridge, MA: MIT Press.

    Dreyfus, Hubert L. 1992. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.

    Feenberg, Andrew. 1991. Critical Theory of Technology. Oxford, UK: Oxford University Press.

    Heidegger, Martin. 1977. The Question Concerning Technology, and Other Essays. Translated by William Lovitt. New York: Harper & Row.

    Hoffman, Guy, and Gil Weinberg. 2011. “Interactive Improvisation with a Robotic Marimba Player.” Autonomous Robots 31, no. 2–3: 133–53.

    Hofstadter, Douglas R. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books.

    Hofstadter, Douglas R. 2001. “Staring Emmy Straight in the Eye—And Doing My Best Not to Flinch.” In Virtual Music: Computer Synthesis of Musical Style, edited by David Cope, 33–82. Cambridge, MA: MIT Press.

    Hui, Fan. 2016. “AlphaGo Games—English.” DeepMind. https://web.archive.org/web/20160912143957/https://deepmind.com/research/alphago/alphago-games-english/.

    Majid al-Rifaie, Mohammad, and Mark Bishop. 2015. “Weak and Strong Computational Creativity.” In Computational Creativity Research: Towards Creative Machines, edited by Tarek R. Besold, Marco Schorlemmer, and Alan Smaill, 37–50. Amsterdam: Atlantis Press.

    Searle, John. 1984. Mind, Brains and Science. Cambridge, MA: Harvard University Press.




    What Is Artificial General Intelligence?

    Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...