Artificial Intelligence - What Is Artificial Intelligence, Alchemy, And Associationism?

 



Alchemy and Artificial Intelligence, a RAND Corporation paper prepared by Massachusetts Institute of Technology (MIT) philosopher Hubert Dreyfus and released as a mimeographed memo in 1965, critiqued artificial intelligence researchers' aims and essential assumptions.

The paper, which was written when Dreyfus was consulting for RAND, elicited a significant negative response from the AI community.

Dreyfus had been engaged by RAND, a nonprofit American global policy think tank, to analyze the possibilities for artificial intelligence research from a philosophical standpoint.

Researchers such as Herbert Simon and Marvin Minsky made bright forecasts for the future of AI, predicting in the late 1950s that machines capable of doing whatever humans could do would exist within decades.

The objective for most AI researchers was not merely to develop programs that processed data in such a way that the output appeared to be the result of intelligent activity.

Rather, they wanted to create software that could mimic human cognitive processes.

Experts in artificial intelligence felt that human cognitive processes might be used as a model for their algorithms, and that AI could also provide insight into human psychology.

The work of phenomenologists Maurice Merleau-Ponty, Martin Heidegger, and Jean-Paul Sartre impacted Dreyfus' thought.

Dreyfus contended in his report that the theory and aims of AI were founded on associationism, a theory of human psychology whose core claim is that thinking proceeds in a succession of simple, determinate steps.

Artificial intelligence researchers believed they could use computers to duplicate human cognitive processes because of their belief in associationism (which Dreyfus claimed was erroneous).

Dreyfus compared the characteristics of human thinking (as he saw them) to computer information processing and the inner workings of various AI systems.

The core of his thesis was that human and machine information processing are fundamentally different.

Computers can only be programmed to handle "unambiguous, totally organized information," rendering them incapable of managing "ill-structured material of everyday life," and hence of intelligence (Dreyfus 1965, 66).

Dreyfus contended that, contrary to AI research's primary premise, many characteristics of human intelligence cannot be captured by rules or associationist psychology.

Dreyfus outlined three areas where humans vary from computers in terms of information processing: fringe consciousness, insight, and ambiguity tolerance.

Chess players, for example, use fringe consciousness to decide which area of the board or which pieces to concentrate on while making a move.

The human player differs from a chess-playing program in that the human does not consciously or subconsciously examine the information or count out possible moves the way the computer does.

Only after the player has used fringe consciousness to choose which pieces to concentrate on do they consciously calculate the implications of prospective moves in a manner akin to computer processing.

Insight allows the (human) problem-solver to build a set of steps for tackling a complicated issue by grasping its fundamental structure.

This understanding is lacking in problem-solving software.

Rather, the problem-solving method must be specified in advance as part of the program.

The clearest example of ambiguity tolerance is natural language comprehension, where a word or phrase may have an ambiguous meaning yet is accurately understood by the listener.

When interpreting ambiguous syntax or semantics, there is an endless number of cues to consider, yet the human processor manages to select the relevant information from this limitless domain and arrive at the correct meaning.

A computer, on the other hand, cannot be programmed to search through all conceivable facts in order to resolve ambiguous syntax or semantics.

Either the number of facts is too large, or the criteria for interpretation are too complex.

AI experts chastised Dreyfus for oversimplifying the difficulties and misrepresenting computers' capabilities.

RAND commissioned MIT computer scientist Seymour Papert to respond to the paper, which he published in 1968 as The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies.

Papert also set up a chess match between Dreyfus and Mac Hack, which Dreyfus lost, much to the amusement of the artificial intelligence community.

Nonetheless, some of the criticisms in this report and his subsequent books appear to have foreshadowed intractable issues later acknowledged by AI researchers, such as the difficulty of achieving artificial general intelligence (AGI), of artificially simulating analog neurons, and the limitations of symbolic artificial intelligence as a model of human reasoning.

Dreyfus' work was declared useless by artificial intelligence specialists, who stated that he misinterpreted their research.

Their ire had been aroused by Dreyfus's critiques of AI, which often used aggressive terminology.

The New Yorker magazine's "Talk of the Town" section included extracts from the report.

Dreyfus subsequently refined and enlarged his case in What Computers Can't Do: The Limits of Artificial Intelligence, published in 1972.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: Mac Hack; Simon, Herbert A.; Minsky, Marvin.

Further Reading

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Dreyfus, Hubert L. 1965. Alchemy and Artificial Intelligence. P-3244. Santa Monica, CA: RAND Corporation.

Dreyfus, Hubert L. 1972. What Computers Can’t Do: The Limits of Artificial Intelligence. New York: Harper and Row.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.

Papert, Seymour. 1968. The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies. Project MAC, Memo No. 154. Cambridge, MA: Massachusetts Institute of Technology.


Artificial Intelligence - What Is The AARON Computer Program?


 



Harold Cohen built AARON, a computer program that allowed him to produce paintings.

The initial version was created "about 1972," according to Cohen.

Because AARON is not open source, its development came to a halt when Cohen died in 2016.

In 2014, AARON was still creating fresh images, and it could still be seen in operation in 2016.

AARON is not an abbreviation.

The name was chosen because it begins with the first letter of the alphabet, and Cohen anticipated that he would eventually build further programs, which he never did.

AARON went through various versions over the course of its four decades of development, each with its own set of capabilities.

Earlier versions could only generate black-and-white line drawings, while later versions could also paint in color.

Some AARON versions were set up to make abstract paintings, while others were set up to create scenes with objects and people.

AARON's main goal was to generate not just computer pictures, but also physical, large-scale images or paintings.

The lines made by AARON, a program written in C at the time, were traced directly on the wall in Cohen's show at the San Francisco Museum of Modern Art.

In later creative episodes of AARON, the software was paired with a machine equipped with a robotic arm that could apply paint to canvas.

For example, the version of AARON on display at Boston's Computer Museum in 1995, which was written in LISP at the time and ran on a Silicon Graphics computer, generated a file containing a set of instructions.

The file was then transmitted to a PC running a C++ program.

This computer was equipped with a robotic arm.

The C++ code processed the commands and controlled the arm's movement, as well as the dye mixing and application to the canvas.
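Cohen's own code is not reproduced in the sources cited here, but the division of labor described above, where one program generates a file of drawing instructions and a second program reads that file and drives the hardware, can be sketched roughly as follows. This is a minimal, hypothetical C++ illustration: the instruction names (MOVE, COLOR, DOWN, UP) and the stubbed hardware calls are invented for the example and are not AARON's actual format.

```cpp
// Hypothetical sketch of the two-stage pipeline: a generator program
// (standing in for the LISP-based AARON) writes an instruction file,
// and this controller program reads it and drives the painting machine.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Stubs for the painting hardware; in the real installation these calls
// would move the robotic arm, mix dyes, and apply paint to the canvas.
void moveArm(double x, double y)      { std::cout << "move to " << x << "," << y << "\n"; }
void mixDye(const std::string& color) { std::cout << "mix dye: " << color << "\n"; }
void brushDown()                      { std::cout << "brush down\n"; }
void brushUp()                        { std::cout << "brush up\n"; }

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: controller <instruction-file>\n"; return 1; }
    std::ifstream in(argv[1]);
    std::string line;
    while (std::getline(in, line)) {       // one instruction per line
        std::istringstream cmd(line);
        std::string op;
        cmd >> op;
        if (op == "MOVE") {                // MOVE x y
            double x, y;
            cmd >> x >> y;
            moveArm(x, y);
        } else if (op == "COLOR") {        // COLOR <name>
            std::string c;
            cmd >> c;
            mixDye(c);
        } else if (op == "DOWN") {
            brushDown();
        } else if (op == "UP") {
            brushUp();
        }
    }
    return 0;
}
```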

Cohen's drawing and painting devices were also significant advancements.

Industrial inkjet printers were employed in subsequent generations as well.

Because of the colors these new printers could produce, Cohen considered this configuration of AARON to be the most advanced; he regarded the inkjet as the most important innovation in the handling of color since the industrial revolution.

While Cohen primarily concentrated on physical pictures, Ray Kurzweil built a screensaver version of AARON around the year 2000.

By 2016, Cohen had developed his own version of AARON, which produced black-and-white pictures that the user could color using a big touch screen.

"Fingerpainting," he called it.

AARON, according to Cohen, is neither a "totally independent artist" nor completely creative.

He did feel, however, that AARON demonstrates one requirement of autonomy: emergence, which Cohen defines as "paintings that are really shocking and unique." Cohen never got too far into AARON's philosophical ramifications.

Based on the amount of time he devoted to it in practically all of the interviews conducted with him, it is easy to infer that AARON's work as a colorist was Cohen's greatest accomplishment.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Computational Creativity; Generative Design.


Further Reading

Cohen, Harold. 1995. “The Further Exploits of AARON, Painter.” Stanford Humanities Review 4, no. 2 (July): 141–58.

Cohen, Harold. 2004. “A Sorcerer’s Apprentice: Art in an Unknown Future.” Invited talk at Tate Modern, London. http://www.aaronshome.com/aaron/publications/tate-final.doc.

Cohen, Paul. 2016. “Harold Cohen and AARON.” AI Magazine 37, no. 4 (Winter): 63–66.

McCorduck, Pamela. 1990. Aaron’s Code: Meta-Art, Artificial Intelligence, and the Work of Harold Cohen. New York: W. H. Freeman.



Artificial Intelligence - What Is An AI Winter?

 



The term AI Winter was coined in 1984 at the annual conference of the American Association for Artificial Intelligence (now the Association for the Advancement of Artificial Intelligence, or AAAI).

Marvin Minsky and Roger Schank, two top academics, used the phrase to describe the imminent bust in artificial intelligence research and development at the time.

Daniel Crevier, a Canadian AI researcher, has detailed how fear of an impending AI Winter caused a domino effect that started with skepticism in the AI research community, spread to the media, and eventually resulted in negative funding responses.

As a consequence, real AI research and development came to a halt.

The initial skepticism may now be ascribed mostly to the excessively optimistic promises made at the time, with AI's real outcomes being significantly less than expected.

Other factors, such as a lack of computing capacity in the early days of AI research, also contributed to the belief that an AI Winter was approaching.

This was especially true in the case of neural network research, which required a large amount of processing power.

Economic factors, too, shifted attention toward more concrete investments, especially during overlapping periods of economic crisis.

AI Winters have occurred many times during the history of AI, with two of the most notable eras covering 1974 to 1980 and 1987 to 1993.

Although the dates of AI Winters are debatable and dependent on the source, times with overlapping patterns are associated with research abandonment and defunding.

The development of AI systems and technologies has nonetheless progressed through cycles of hype and collapse, similar to those of other breakthrough technologies such as nanotechnology.

Not only has there been an unprecedented amount of money for basic research, but there has also been exceptional progress in the development of machine learning during the present boom time.

The reasons for the investment surge vary depending on the many stakeholders involved in artificial intelligence research and development.

For example, industry has staked a lot of money on the idea that discoveries in AI would result in dividends by changing whole market sectors.

Governmental agencies, such as the military, invest in AI research to improve the efficiency of both defensive and offensive technology and to protect troops from imminent damage.

Because AI Winters are triggered by a perceived lack of trust in what AI can provide, the present buzz around AI and its promises has sparked fears of another AI Winter.

On the other hand, others argue that current technology developments in applied AI research have secured future progress in this field.

This argument contrasts sharply with the so-called "pipeline issue," which claims that a lack of basic AI research will result in a limited number of applied outcomes.

One of the major elements of prior AI Winters has been the pipeline issue.

However, if the counterargument is accurate, a feedback loop between applied breakthroughs and basic research will generate enough pressure to keep the pipeline moving forward.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Minsky, Marvin.

Further Reading

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Muehlhauser, Luke. 2016. “What Should We Learn from Past AI Forecasts?” https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts.


Artificial Intelligence - What Is The Advanced Soldier Sensor Information Systems and Technology (ASSIST)?

 



Soldiers are often required to carry out missions that may last many hours and are extremely stressful.

Once a mission is completed, soldiers are asked to write a report detailing the most significant events that occurred.

This report is designed to collect information about the environment and local/foreign people in order to better organize future operations.

Soldiers often offer this report primarily based on their memories, still photographs, and GPS data from portable equipment.

Because of the severe stress soldiers face, there are likely numerous cases in which crucial information is missing and thus unavailable for future mission planning.

Soldiers were equipped with sensors that could be worn directly on their uniforms as part of the ASSIST (Advanced Soldier Sensor Information Systems and Technology) program, which addressed this problem.

Sensors continually recorded what was going on around the troops during the operation.

When the troops returned from their mission, the sensor data was indexed and an electronic record of the events that occurred while the ASSIST system was recording was established.

With this record, soldiers could offer more accurate reports instead of depending solely on their memories.

Numerous functions were made possible by AI-based algorithms, including:

• "Capabilities for Image/Video Data Analysis"

• Object Detection/Image Classification—the capacity to detect and identify items (such as automobiles, persons, and license plates) using video, images, and/or other data sources.

• "Audio Data Analysis Capabilities"

• "Arabic Text Translation"—the ability to detect, recognize, and translate written Arabic text (e.g., in imagery data)

• "Change Detection"—the ability to detect changes in related data sources over time (e.g., identify differences in imagery of the same location at different times)

• Sound Recognition/Speech Recognition—the capacity to distinguish speech (e.g., keyword spotting and foreign language recognition) and identify sound events (e.g., explosions, gunfire, and cars) in audio data.

• Shooter Localization/Shooter Classification—the ability to recognize gunshots in the environment (e.g., via audio data processing), as well as the kind of weapon used and the shooter's position.

• "Capabilities for Soldier Activity Data Analysis"

• Soldier State Identification/Soldier Localization—the capacity to recognize a soldier's course of movement in a given area and characterize the soldier's activities (e.g., running, walking, and climbing stairs)

To be effective, AI systems like these (also known as autonomous or intelligent systems) must be thoroughly and statistically analyzed to verify that they will work correctly and as intended in a military setting.

The National Institute of Standards and Technology (NIST) was entrusted with assessing these AI systems using three criteria:

1. The precision with which objects, events, and activities are identified and labeled

2. The system's capacity to learn and improve its categorization performance.

3. The system's usefulness in improving operational efficiency

To create its performance measurements, NIST devised a two-part test technique.

Metrics 1 and 2 were assessed using component- and system-level technical performance evaluations, while metric 3 was assessed using system-level utility assessments.

The utility assessments were created to estimate the effect these technologies would have on warfighter performance in a range of missions and job tasks, while the technical performance evaluations were created to ensure the ongoing improvement of ASSIST system technical capabilities.
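The sources do not spell out the scoring formulas used in these evaluations. As a generic illustration of the kind of measure behind criterion 1, detection and labeling accuracy is commonly summarized with precision and recall:

\[
\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN},
\]

where TP, FP, and FN count the true positives, false positives, and false negatives for a given object, event, or activity class. This is a standard formulation, not necessarily the exact metric NIST applied.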

NIST endeavored to create assessment techniques that would give an appropriate degree of difficulty for system and soldier performance while defining the precise processes for each sort of evaluation.

At the component level, the ASSIST systems were broken down into components that implemented specific capabilities.

For example, to evaluate its Arabic translation capability, the system was broken down into an Arabic text identification component, an Arabic text extraction component (to localize individual text characters), and a text translation component.

Each component was evaluated on its own to see how it affected the overall system.

Each ASSIST system was assessed as a black box at the system level, with the overall performance of the system being evaluated independently of the individual component performance.

The total system received a single score, which indicated the system's ability to complete the overall job.

A test was also conducted at the system level to determine the system's usefulness in improving operational effectiveness for after-mission reporting.

Because all of the systems reviewed as part of this initiative were in the early phases of development, a formative assessment technique was suitable.

NIST was especially interested in determining the system's value for warfighters.

As a result, the evaluation team was concerned with the systems' influence on soldiers' processes and products.

User-centered metrics were used to represent this viewpoint.

NIST set out to find measures that might help answer questions such as: What information do infantry soldiers seek and/or require after completing a field mission? From both the soldiers' and the S2's (Staff 2—Intelligence Officer) perspectives, how successfully are those information needs met? What was ASSIST's contribution to mission reporting in terms of user-stated information requirements?

The examination was carried out at the Aberdeen Test Center Military Operations in Urban Terrain (MOUT) site in Aberdeen, Maryland.

The location was selected for a variety of reasons:

• Ground truth—Aberdeen was able to deliver ground truth to within two centimeters of chosen locations.

This provided a strong standard against which the system output could be compared, enabling the assessment team to get a good depiction of what really transpired in the environment.

• Realism—The MOUT site had around twenty structures built up to look like an Iraqi town.

• Testing infrastructure—The facility was outfitted with a number of cameras (both inside and outside buildings) to help the evaluation team better understand the environment during testing.

• Soldier availability—For the assessment, the location was able to offer a small squad of active-duty troops.

The MOUT site was enhanced with items, people, and background noises whose location and behavior were programmed to provide a more operationally meaningful test environment.

The goal was to provide an environment in which the various ASSIST systems could test their capabilities by detecting, identifying, and/or capturing various forms of data.

Foreign language speech detection and classification, Arabic text detection and recognition, detection of shots fired and vehicle sounds, classification of soldier states and tracking their locations (both inside and outside of buildings), and identifying objects of interest such as vehicles, buildings, people, and so on were all included in NIST's utility assessments.

Because the tests required the troops to respond according to their training and experience, the soldiers' actions were not scripted as they progressed through each exercise.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.




See also: Battlefield AI and Robotics; Cybernetics and AI.

Further Reading

Schlenoff, Craig, Brian Weiss, Micky Steves, Ann Virts, Michael Shneier, and Michael Linegang. 2006. “Overview of the First Advanced Technology Evaluations for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 125–32. Gaithersburg, MD: National Institute of Standards and Technology.

Steves, Michelle P. 2006. “Utility Assessments of Soldier-Worn Sensor Systems for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 165–71. Gaithersburg, MD: National Institute of Standards and Technology.

Washington, Randolph, Christopher Manteuffel, and Christopher White. 2006. “Using an Ontology to Support Evaluation of Soldier-Worn Sensor Systems for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 172–78. Gaithersburg, MD: National Institute of Standards and Technology.

Weiss, Brian A., Craig I. Schlenoff, Michael O. Shneier, and Ann Virts. 2006. “Technology Evaluations and Performance Metrics for Soldier-Worn Sensors for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 157–64. Gaithersburg, MD: National Institute of Standards and Technology.




Artificial Intelligence - How Are Accidents and Risk Assessment Done Using AI?

 



Reliability is the most significant requirement of many computer-based systems.

Physical damage, data loss, economic disruption, and human deaths may all result from mechanical and software failures.

Many essential systems are now controlled by robotics, automation, and artificial intelligence.

Nuclear power plants, financial markets, social security payments, traffic lights, and military radar stations are all under their watchful eye.

High-tech systems may be purposefully designed to be hazardous to people, as with Trojan horses, viruses, and spyware, or they may become dangerous through human programming or operating errors.

They may become dangerous in the future as a result of purposeful or unintended actions made by the machines themselves, or as a result of unanticipated environmental variables.

The first death of a person working with a robot occurred in 1979.

A one-ton parts-retrieval robot built by Litton Industries hit Ford Motor Company engineer Robert Williams in the head.

Two years later, Japanese engineer Kenji Urada was killed after failing to completely switch off a malfunctioning robot on the production floor at Kawasaki Heavy Industries.

Urada was shoved into a grinding machine by the robot's arm.

Accidents do not always result in deaths.

A 300-pound Knightscope K5 security robot on patrol at a Northern California shopping center, for example, knocked down a child and ran over his foot in 2016.

The child sustained only a few cuts and some swelling.

The Cold War's history is littered with stories of nuclear near-misses caused by faulty computer technology.

In 1979, a computer glitch at the North American Aerospace Defense Command (NORAD) misled the Strategic Air Command into believing that the Soviet Union had fired over 2,000 nuclear missiles towards the US.

An examination revealed that a training scenario had been uploaded to an active defense computer by mistake.

In 1983, a Soviet military early warning system identified a single US intercontinental ballistic missile launching a nuclear assault.

Stanislav Petrov, the missile defense system's operator, correctly discounted the signal as a false alarm.

The reason of this and subsequent false alarms was ultimately discovered to be sunlight hitting high altitude clouds.

Despite preventing global thermonuclear Armageddon, Petrov was eventually reprimanded for embarrassing his superiors by exposing the system's faults.

The so-called "2010 Flash Crash" was caused by stock market trading software.

In slightly over a half-hour on May 6, 2010, the S&P 500, Dow Jones, and NASDAQ stock indexes lost—and then mainly regained—a trillion dollars in value.

Navinder Singh Sarao, a U.K. trader, was arrested after a five-year investigation by the US Department of Justice for allegedly manipulating an automated system to issue and then cancel huge numbers of sell orders, allowing his firm to acquire equities at temporarily reduced prices.

In 2015, there were two more software-induced market flash crashes, and in 2017, there were flash crashes in the gold futures market and digital cryptocurrency sector.

Tay (short for "Thinking about you"), a Microsoft Corporation artificial intelligence social media chatterbot, went tragically wrong in 2016.

Tay was created by Microsoft engineers to imitate a nineteen-year-old American girl and to learn from Twitter discussions.

Instead, Tay was trained to use harsh and aggressive language by internet trolls, which it then repeated in tweets.

After barely sixteen hours, Microsoft deleted Tay's account.

More AI-related accidents involving the operation of motor vehicles may occur in the future.

In 2016, the first fatal collision involving a self-driving car happened when a Tesla Model S in autopilot mode collided with a semi-trailer crossing the highway.

The motorist may have been viewing a Harry Potter movie on a portable DVD player when the accident happened, according to witnesses.

Tesla's software does not yet allow for completely autonomous driving, hence a human operator is required.

Despite these dangers, one management consulting company claims that autonomous automobiles might avert up to 90% of road accidents.

Artificial intelligence security is rapidly growing as a topic of cybersecurity study.

Militaries all around the globe are working on prototypes of dangerous autonomous weapons systems.

Weapons such as drones, which now rely on a human operator to make deadly-force judgments about targets, might be replaced with automated systems that make life-and-death decisions on their own.

Robotic decision-makers on the battlefield may one day outperform humans in extracting patterns from the fog of war and reacting quickly and logically to novel or challenging circumstances.

High technology is becoming more and more important in modern civilization, yet it is also becoming more fragile and prone to failure.

An inquisitive squirrel caused the NASDAQ's primary computer to collapse in 1987, bringing one of the world's major stock exchanges to its knees.

In another example, the ozone hole above Antarctica was not discovered for years because exceptionally low levels reported in data-processed satellite images were assumed to be mistakes.

It's likely that the complexity of autonomous systems, as well as society's reliance on them under quickly changing circumstances, will make completely tested AI unachievable.

Artificial intelligence is powered by software that can adapt to and interact with its surroundings and users.

Changes in variables, individual acts, or events may have unanticipated and even disastrous consequences.

One of the dark secrets of sophisticated artificial intelligence is that it is based on mathematical approaches and deep learning algorithms that are so complicated that even its creators are baffled as to how it makes accurate conclusions.

Autonomous cars, for example, depend on instructions written exclusively by the computer itself as it observes people driving in real-world situations.

But how can a self-driving automobile learn to anticipate the unexpected?

Will attempts to adjust AI-generated code to decrease apparent faults, omissions, and impenetrability lessen the likelihood of unintended negative consequences, or will they merely magnify existing problems and produce new ones? Although it is unclear how to mitigate the risks of artificial intelligence, it is likely that society will rely on well-established and presumably trustworthy machine-learning systems to automatically provide rationales for their actions, as well as examine newly developed cognitive computing systems on our behalf.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



Also see: Algorithmic Error and Bias; Autonomy and Complacency; Beneficial AI, Asilomar Meeting on; Campaign to Stop Killer Robots; Driverless Vehicles and Liability; Explainable AI; Product Liability and AI; Trolley Problem.


Further Reading

De Visser, Ewart Jan. 2012. “The World Is Not Enough: Trust in Cognitive Agents.” Ph.D. diss., George Mason University.

Forester, Tom, and Perry Morrison. 1990. “Computer Unreliability and Social Vulnerability.” Futures 22, no. 5 (June): 462–74.

Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46, no. 1 (Spring): 50–80.

Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–45. New York: Oxford University Press.



Artificial Intelligence - How Is AI Being Applied To Air Traffic Control?

 


Air Traffic Control (ATC) is a ground-based air navigation service that directs airplanes on the ground and in regulated airspace.

Air traffic controllers also give advising services in uncontrolled airspace on occasion.

By coordinating the movement of commercial and private planes and guaranteeing a safe separation of traffic in the air and on the ground, controllers ensure the safe flow of air traffic.

They usually provide pilots with real-time traffic and weather notifications along with directional guidance.

The major goals of the ATC, according to the Federal Aviation Administration (FAA), are to manage and expedite air traffic flow, as well as to prevent aircraft crashes and provide real-time information and other navigational assistance for pilots.

ATC is a service that is both risk-averse and safety-critical.

Air traffic controllers use a variety of technology, including computer systems, radars, and transmitters, in addition to their eye observation.

The volume and density of air travel have been increasing around the world.

The operational boundaries of modern ATC systems are being pushed as worldwide air traffic density increases.

To keep up with the rising need for accommodating future expansion in air traffic, air navigation and air traffic management systems must become increasingly complex.

Artificial intelligence (AI) provides a number of applications for safer, more efficient, and better management of rising air traffic.

According to the International Civil Aviation Organization's (ICAO) Global Air Navigation Plan (GANP), AI-based air traffic management systems may help address the operational issues posed by the growing volume and variety of air traffic.

Simulation systems with AI that can monitor and advise the activities of trainee controllers are already used in the training of human air traffic controllers.

In terms of operations, the ability of machine learning-based AI systems to ingest massive amounts of data may be used to solve the complexity and challenges of traffic management.

Such technologies may be used to assess traffic data for flight planning and route selection during the planning stages.

By detecting a wide range of flight patterns, AI can also provide reliable traffic predictions.

AI-based ATC systems may be used for route prediction and decision-making in en route operations, particularly in difficult scenarios with little data.

AI can help with taxiing methods and runway layouts.

Additionally, AI-assisted voice recognition technologies may help pilots and controllers communicate more effectively.

With such a wide range of applications, AI technologies may help human air traffic controllers improve their overall performance by providing them with detailed information and quick decision-making procedures.

It is also worth noting that, rather than replacing human air traffic controllers, AI-based solutions have proven useful in ensuring the safe and efficient flow of air traffic.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Intelligent Transportation.


Further Reading

Federal Aviation Administration. 2013. Aeronautical Information Manual: Official Guide to Basic Flight Information and ATC Procedures. Washington, DC: FAA. https://www.faa.gov/air_traffic/publications/.

International Civil Aviation Organization. 2018. “Potential of Artificial Intelligence (AI) in Air Traffic Management (ATM).” In Thirteenth Air Navigation Conference, 1–3. Montreal, Canada. https://www.icao.int/Meetings/anconf13/Documents/WP/wp_232_en.pdf.

Nolan, Michael S. 1999. Fundamentals of Air Traffic Control. Pacific Grove, CA: Brooks/Cole.





Controlling Quantum Coherence



One of the first basic quantum calculations utilizing individual molecules was accomplished in 1998 by researchers including Mark Kubinec of UC Berkeley. 

They utilized radio wave pulses to flip the spins of two nuclei in a molecule, with each spin's "up" or "down" orientation storing information in the same way as a "0" or "1" state in a traditional data bit would. 

The combined orientation of the two nuclei—that is, the molecule's quantum state—could only be maintained for short durations in carefully calibrated settings in the early days of quantum computers. 

In other words, the system's coherence was soon destroyed. 
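In standard quantum-information notation (a textbook illustration, not data from the 1998 experiment), a single nuclear spin used as a qubit, and its loss of coherence, can be written as:

\[
|\psi\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle \equiv \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1,
\]
\[
\rho(t) =
\begin{pmatrix}
|\alpha|^2 & \alpha\beta^{*}\,e^{-t/T_2} \\
\alpha^{*}\beta\,e^{-t/T_2} & |\beta|^2
\end{pmatrix},
\]

where T2 is the coherence time of the spin. The off-diagonal terms carry the quantum interference; once the computation runs much longer than T2 they decay away and the qubit behaves like an ordinary classical bit, which is what "the system's coherence was soon destroyed" means quantitatively.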



Controlling quantum coherence is the last piece of the scalable quantum computer puzzle. 


Researchers are now working on novel methods to generate and maintain quantum coherence. 


As a result, ultra-sensitive measurement and information processing equipment will be able to operate in ambient or even severe circumstances. 

Joel Moore, a senior faculty scientist at Berkeley Lab and a professor at UC Berkeley, received funding from the Department of Energy in 2018 to establish and lead an Energy Frontier Research Center (EFRC) – the Center for Novel Pathways to Quantum Coherence in Materials (NPQC) – to further those efforts. 


"The EFRCs are a critical tool for DOE because they allow targeted inter-institutional partnerships to make fast progress on cutting-edge scientific issues that are beyond the reach of individual scientists," Moore said. 


Berkeley Lab, UC Berkeley, UC Santa Barbara, Argonne National Laboratory, and Columbia University scientists are leading the way in understanding and manipulating coherence in a range of solid-state systems via the NPQC. 

Their three-pronged strategy focuses on creating new quantum sensing platforms, building two-dimensional materials that host complex quantum states, and investigating methods to precisely regulate a material's electrical and magnetic characteristics via quantum processes. 



The materials science community has the key to solving these issues. 


Developing the capacity to control coherence in real-world settings requires a thorough knowledge of the materials that might be used to create alternative quantum bit (or "qubit"), sensing, or optical technologies. 

These basic findings form the foundation for further advances that will feed into additional DOE investments throughout the Office of Science.

As the initiative approaches its fourth year, numerous scientific discoveries are setting the foundation for quantum information science advancements. 



Many of NPQC's accomplishments so far have been centered on quantum platforms based on particular faults in a material's structure known as spin defects. 


With the appropriate crystal backdrop, a spin defect may approach complete quantum coherence while also improving resilience and functionality. 

These flaws may be exploited to create high-precision sensor systems. 

Each spin defect reacts to minute changes in the environment, and coherent groups of defects may reach remarkable precision and accuracy. 
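As a rough standard-quantum-limit estimate (a generic textbook scaling, not a figure reported by the NPQC), the smallest magnetic field resolvable by an ensemble of N independent spin defects improves with both the ensemble size and the coherence time:

\[
\delta B \;\sim\; \frac{1}{\gamma \sqrt{N\,T_2\,t}},
\]

where gamma is the gyromagnetic ratio of the defect spin, T2 its coherence time, and t the total measurement time; adding defects and extending coherence therefore both translate directly into better precision.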

However, it's difficult to grasp how coherence develops in a system with multiple spins that interact with one another. 



To address this difficulty, NPQC scientists are turning to diamond, a common material that has shown to be excellent for quantum sensing. 


Each carbon atom in a diamond's crystal structure is linked to four other carbon atoms in nature. 


When one carbon atom is swapped with another or deleted entirely as the diamond's crystal structure develops, the resultant defect may act as an atomic system with a well-defined spin—an inherent type of angular momentum carried by electrons or other subatomic particles. 

Certain imperfections in diamond, like these particles, may have an orientation, or polarization, that is either "spin-up" or "spin-down." 

Norman Yao, a Berkeley Lab faculty scientist and an associate professor of physics at UC Berkeley, and his colleagues developed a 3D system with spins distributed across the volume by designing several distinct spin defects into a diamond lattice. 



The researchers used that setup to create a method for probing the "motion" of spin polarization at very small length scales. 


The researchers discovered that spin travels about in the quantum mechanical system in a similar manner as dye moves in a liquid, using a combination of experimental methods. 

As recently reported in the journal Nature, learning from dyes has shown to be a viable route toward comprehending quantum coherence. 
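The dye analogy can be made concrete with an ordinary diffusion equation; this is a simplified textbook picture of the reported behavior, not the detailed model of the Nature paper:

\[
\frac{\partial P(\mathbf{r},t)}{\partial t} = D\,\nabla^{2} P(\mathbf{r},t),
\]

where P(r, t) is the local spin polarization and D is an effective diffusion constant set by the dipolar interactions between defects. An initially localized patch of polarization spreads with a width growing as the square root of Dt, just as a drop of dye spreads through water.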



The multi-defect system not only offers a strong classical framework for understanding quantum dynamics, but it also provides an experimental platform for investigating how coherence works. 


The NPQC platform offers "a particularly controlled example of the interplay between disorder, long-ranged dipolar interactions between spins, and quantum coherence," according to Moore, the NPQC director and a member of the team who has previously researched various types of quantum dynamics.

 


The coherence times of such spin defects are highly dependent on their immediate surroundings.


Creating and mapping the strain sensitivity in the structure around individual flaws in diamond and other materials has been the focus of several NPQC discoveries. 

This may show how to manufacture flaws in 3D and 2D materials with the longest feasible coherence times.


But how could changes in the defect's coherence be related to changes imposed by pressures on the material itself? 


To find out, NPQC scientists are working on a method for generating distorted regions in a host crystal and measuring strain. 

"If you think of atoms in a lattice as a box spring, you get various outcomes depending on how you press on them," said Martin Holt, a principle scientist at NPQC and group leader in electron and X-ray microscopy at Argonne National Laboratory. 


He and his colleagues provide a direct picture of the distorted regions in a host crystal using the Advanced Photon Source and the Center for Nanoscale Materials, both user facilities at Argonne National Laboratory. 


Until recently, the direction of a defect in a sample was largely random. 


The pictures show which orientations are the most sensitive, indicating that high-pressure quantum sensing is a viable option. 

"It's amazing how you can take something as precious as a diamond and turn it into something useful. 

It's fantastic to have something that's simple enough to grasp fundamental physics yet sophisticated enough to perform advanced physics "Holt said. 




Another aim of this study is to be able to transmit a quantum state, such as a defect in diamond, from one place to another utilizing electrons in a coherent manner. 


Special quantum wires that emerge in atomically thin layers of certain materials are studied by NPQC experts at Berkeley Lab and Argonne Lab. 

The group headed by Feng Wang, a Berkeley Lab faculty senior scientist and UC Berkeley professor, and leader of NPQC's work in atomically thin materials, found superconductivity in one of these systems, a triple layer of carbon sheets. 

"The fact that the same materials may provide both protected one-dimensional conduction and superconductivity offers up some new options for preserving and transmitting quantum coherence," Wang said of the research, which was published in Nature in 2019. 



Multi-defect systems are essential for more than just basic science. 


  • They have the potential to be transformational technologies as well. 
  • NPQC researchers are investigating how spin defects may be utilized to regulate the material's electrical and magnetic characteristics in new two-dimensional materials that are opening the way for ultra-fast electronics and ultra-stable sensors. 


Recent discoveries have thrown up some unexpected results. 


According to Peter Fischer, a senior scientist and division deputy at Berkeley Lab's Materials Sciences Division, "Fundamental knowledge of nanoscale magnetic materials and their applications in spintronics has already ushered in massive changes in magnetic storage and sensor technology. Quantum coherence in magnetic materials may be the next step toward low-power devices."


The magnetic characteristics of a material are solely determined by the alignment of spins in neighboring atoms. 


Antiferromagnets contain neighboring spins that point in opposing directions and essentially cancel each other out, unlike the perfectly aligned spins in a normal refrigerator magnet or the magnets employed in traditional data storage. 


  • Antiferromagnets, as a consequence, do not "act" magnetically and are highly resistant to external perturbations. 
  • Researchers have been looking for methods to utilize them in spin-based electronics, where information is carried by spin rather than charge, for a long time. 

Finding a method to alter spin orientation while maintaining coherence is crucial. 


In 2019, NPQC researchers led by James Analytis, a Berkeley Lab faculty scientist and associate professor of physics at UC Berkeley, and postdoc Eran Maniv discovered that applying a small, single pulse of electrical current to tiny antiferromagnet flakes caused the spins to rotate and "switch" their orientation. 


As a consequence, the characteristics of the material may be fine-tuned very fast and accurately. 


  • "More experimental observations and some theoretical modeling will be required to understand the mechanics underlying this," Maniv added. 
  • "New materials may be able to provide light on how it works. This is the start of a new area of study.
  • The researchers are now attempting to identify the precise process that causes the switching in materials produced and described at Berkeley Lab's Molecular Foundry.




Recent research published in Science Advances and Nature Physics suggests that fine-tuning flaws in a layered material may offer a dependable way to regulate the spin pattern in new device platforms. 



Moore, the NPQC leader, stated, "This is a wonderful illustration of how having numerous flaws allows us to stabilize a switchable magnetic structure." 


  • NPQC will expand on this year's accomplishments in its second year of existence. 
  • Exploring how numerous flaws interact in two-dimensional materials, as well as researching novel types of one-dimensional structures that may emerge, are among the objectives. 
  • These lower-dimensional structures may be used as sensors to detect the smallest-scale characteristics of other materials. 



Focusing on how electric currents may control spin-derived magnetic characteristics will also help to bridge the gap between basic research and applied technology. 


Rapid success on these projects requires a unique blend of methods and experience that can only be developed in a big collaborative setting. 

"You don't build capabilities in a vacuum," Holt said. 

"The NPQC creates a dynamic research environment that propels science forward while also harnessing the work of each lab or site." 

Meanwhile, the research center offers a one-of-a-kind education at the cutting edge of science, as well as chances to train the scientific staff that will drive the future quantum industry. 



The NPQC introduces a new set of questions and objectives to the study of quantum materials' fundamental physics. 


Moore said, "The behavior of electrons in solids is governed by quantum mechanics, and this behavior provides the foundation for most of the contemporary technology we take for granted. However, we are now at the start of the second quantum revolution, in which characteristics such as coherence take center stage, and knowing how to improve these features offers up a new set of material-related issues for us to solve."



~ Jai Krishna Ponnappan


You may also want to read more about Quantum Computing here.





Quantum Computing Application To Detect Alien Life



While quantum computing may take many years to become commonplace in everyday life, the technology has already been enlisted to aid in the hunt for life in outer space. 



Zapata Computing, a quantum software firm, is collaborating with the University of Hull in the United Kingdom on research to assess Zapata's Orquestra quantum workflow platform, which will be used to improve a quantum application intended to identify signs of life in outer space. 


The assessment is not a controlled demonstration of characteristics, according to Dr David Benoit, Senior Lecturer in Molecular Physics and Astrochemistry at the University of Hull, but rather a study using real-world data. 

"We're looking at how Orquestra works in realistic processes that utilize quantum computing to give typical real-life data," he said. "Rather than a demonstration of skills, we're looking for actual usable data in this endeavor."

Before the team releases an analysis of the study, the assessment will run for eight weeks. 

According to the parties, this will be the first of many partnerships between Zapata and the University of Hull for quantum astrophysics applications. 



The announcement came as many quantum computing heavyweights, including Google, IBM, Amazon, and Honeywell, were scheduled to attend a White House conference sponsored by the Biden administration to explore developing quantum computing applications.


In certain instances, academics have resorted to quantum computing to finish tasks that would take too long for traditional computers to complete, and Benoit said the University of Hull is in a similar position. 

"The tests envisioned are still something that a traditional computer can perform," he said, "but, the computing time needed to get the answer has a factorial scale, meaning that bigger applications are likely to take days, months, or years to complete" (along with a very large amount of memory). 

The quantum equivalent is capable of solving such issues in a sub-factorial way (possibly quartic scaling), but this does not necessarily imply that it is quicker for all systems; rather, it means that the computing effort is significantly decreased for big systems. 
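To make the comparison concrete (illustrative numbers, not figures from the Hull study), factorial cost overtakes quartic cost almost immediately:

\[
N = 10:\;\; N! \approx 3.6\times 10^{6},\;\; N^{4} = 10^{4};
\qquad
N = 20:\;\; N! \approx 2.4\times 10^{18},\;\; N^{4} = 1.6\times 10^{5},
\]

so a workload that grows factorially becomes infeasible at problem sizes where a quartic-cost workload remains routine, which is the sense in which the quantum approach is said to scale.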



We're looking for a scalable method to do precise computations in our application, and quantum computers can help us achieve that. 


What is the scope of the job at hand? 


In 2016, MIT researchers proposed a list of more than 14,000 chemicals that may reveal indications of life in the atmospheres of far-away exoplanets, according to a statement from Zapata. 

However, nothing is presently understood about how these molecules vibrate and spin in response to neighboring stars' infrared light. 

Using new computer models of molecule rotations and vibrations, the University of Hull is attempting to create a library of observable biological fingerprints. 


Though quantum computing models have challenges in fault tolerance and error correction, Benoit claims that researchers are unconcerned about the performance of so-called Noisy Intermediate-Scale Quantum (NISQ) devices. 


"We consider the fact that the findings will be noisy as a beneficial thing since our approach really utilizes the statistical character of the noise/errors to try to get an accurate answer," he added. 



"Clearly, the better the mistake correction or the quieter the equipment, the better the result." 


"However, utilizing Orquestra allows us to possibly switch platforms without having to re-implement significant portions of the code, which means we can easily compute with better hardware as it becomes available."

Orquestra will enable researchers to "produce important insights" from NISQ devices, according to Benoit, and researchers will be able to "create applications that utilize these NISQ devices today with the potential to exploit the more powerful quantum devices of the future."


As a consequence, scientists should be able to do "very precise estimates of the fundamental variable determining atom-atom interactions — electrical correlation," which may enhance their capacity to identify the building blocks of life in space. This is critical because even basic molecules like oxygen or nitrogen have complicated interactions that require very precise computations.


~ Jai Krishna Ponnappan


You may also want to read more about Quantum Computing here.




What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...