
Artificial Intelligence - Who Is Elon Musk?

 




Elon Musk (1971–) is a South African-born American engineer, entrepreneur, and inventor.

He is a dual citizen of South Africa, Canada, and the United States, and resides in California.

Despite his controversial personality, Musk is widely regarded as one of the most prominent inventors and engineers of the twenty-first century and an important influencer of and contributor to the development of artificial intelligence.

Musk's business instincts and remarkable technological talent were evident from an early age.

By the age of ten, he had taught himself how to program computers, and by the age of twelve, he had produced a video game and sold its source code to a computer magazine.

An avid reader since childhood, Musk has included allusions to some of his favorite novels in SpaceX's Falcon Heavy rocket launch and in Tesla's software.

Musk's official schooling was centered on economics and physics rather than engineering, interests that are mirrored in his subsequent work, such as his efforts in renewable energy and space exploration.

He began his education at Queen's University in Canada, but later transferred to the University of Pennsylvania, where he earned bachelor's degrees in Economics and Physics.

Musk stayed at Stanford University for only two days of a PhD program in energy physics before departing to start his first firm, Zip2, with his brother Kimbal Musk.


Musk has started or cofounded many firms, including three different billion-dollar enterprises: SpaceX, Tesla, and PayPal, all driven by his diverse interests and goals.


• Zip2: a web software business eventually purchased by Compaq.

• X.com: an online bank that, through a merger, became the online payments corporation PayPal.

• Tesla, Inc.: an electric car and solar panel maker (the latter via its subsidiary SolarCity).

• SpaceX: an aerospace manufacturer and space transportation services provider.

• Neuralink: a neurotechnology startup focused on brain-computer interfaces.

• The Boring Company: an infrastructure and tunnel construction corporation.

• OpenAI: a nonprofit AI research company focused on the promotion and development of friendly AI.

Musk is a supporter of environmentally friendly energy and consumption.


Concerns over the planet's future habitability prompted him to investigate the potential of establishing a self-sustaining human colony on Mars.

Other projects include the Hyperloop, a high-speed transportation system, and the Musk electric jet, a jet-powered supersonic electric aircraft.

Musk sat on President Donald Trump's Strategy and Policy Forum and Manufacturing Jobs Initiative for a short time, stepping down when the United States withdrew from the Paris Climate Agreement.

Musk launched the Musk Foundation in 2002, which funds and supports research and activism in the domains of renewable energy, human space exploration, pediatric research, and science and engineering education.

Though Musk is best known for his work with Tesla and SpaceX, as well as for his contentious social media pronouncements, his effect on AI is significant.

In 2015, Musk cofounded the nonprofit OpenAI with the objective of creating and supporting "friendly AI": AI that is created, deployed, and utilized in a manner that benefits mankind as a whole.

OpenAI's objective is to make AI open and accessible to the general public, reducing the risks of AI being controlled by a few privileged people.

OpenAI is especially concerned about the possibility of Artificial General Intelligence (AGI), which is broadly defined as AI capable of human-level (or greater) performance on any intellectual task, and ensuring that any such AGI is developed responsibly, transparently, and distributed evenly and openly.

OpenAI has had its own successes in taking AI to new levels while staying true to its goals of keeping AI friendly and open.

In June 2018, a team of bots built by OpenAI defeated a team of human players in the video game Dota 2, a feat possible only through teamwork and collaboration among the bots.

Bill Gates, a cofounder of Microsoft, praised the achievement on Twitter, calling it "a huge milestone in advancing artificial intelligence" (@BillGates, June 26, 2018).

Musk resigned from the OpenAI board in February 2018 to prevent conflicts of interest as Tesla advanced its own AI work for autonomous driving.

Musk became the CEO of Tesla in 2008 after cofounding the company in 2003 as an investor.

Musk was the chairman of Tesla's board of directors until 2018, when he stepped down as part of a deal with the US Securities and Exchange Commission over Musk's false claims about taking the company private.

Tesla produces electric automobiles with self-driving capabilities.

Two of its subsidiaries, Tesla Grohmann Automation and SolarCity, offer automotive manufacturing technology and solar energy services, respectively.

Musk predicted that Tesla would reach Level 5 autonomous driving capability, as defined under the National Highway Traffic Safety Administration's (NHTSA) levels of autonomous driving, in 2019.
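As a rough reference, the commonly cited levels of driving automation (NHTSA now points to the similar SAE J3016 scale) can be sketched as a lookup table. The abridged descriptions below are paraphrases for illustration, not official definitions:

```python
# Abridged paraphrase of the six commonly cited driving-automation levels
# (SAE J3016 numbering); illustrative wording only, not official text.
DRIVING_AUTOMATION_LEVELS = {
    0: "No automation: the human driver performs all driving tasks",
    1: "Driver assistance: steering or speed is assisted; the human drives",
    2: "Partial automation: steering and speed are automated; the human monitors",
    3: "Conditional automation: the system drives; a human must take over on request",
    4: "High automation: the system drives itself within a limited domain",
    5: "Full automation: the system drives itself anywhere, in all conditions",
}

def requires_human_monitoring(level: int) -> bool:
    """Below Level 3, a human must continuously monitor the driving task."""
    return level <= 2
```

Under this scheme, the Autopilot features described in this article sit at Level 2, where the driver must keep watching the road.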

Tesla's aggressive push into autonomous driving has influenced conventional car makers' attitudes toward electric cars and autonomous driving, and prompted a congressional assessment of how and when the technology should be regulated.

Musk is widely credited as a key influencer in moving the automotive industry toward autonomous driving, highlighting the benefits of autonomous vehicles (including reduced fatalities in vehicle crashes, increased worker productivity, increased transportation efficiency, and job creation) and demonstrating that the technology is achievable in the near term.

Tesla's autonomous driving software, Autopilot, has been created and enhanced under the guidance of Musk and Tesla's director of AI, Andrej Karpathy.

The computer vision analysis used by Tesla, which includes an array of cameras on each car and real-time image processing, enables the system to make real-time observations and predictions.

The cameras, as well as other exterior and internal sensors, capture a large quantity of data, which is evaluated and utilized to improve Autopilot programming.
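As an illustration of this kind of multi-camera pipeline, the sketch below runs a detector over each camera frame and fuses the results, keeping the closest distance estimate per object. Every name here (`Detection`, `fuse_camera_frames`, `toy_detector`) is invented for the example and bears no relation to Tesla's actual code:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    label: str
    distance_m: float   # estimated distance to the detected object

def fuse_camera_frames(frames, detector: Callable):
    """Run a detector over every camera frame and pool the detections,
    keeping the closest estimate per object label (a crude fusion rule)."""
    closest = {}
    for frame in frames:
        for det in detector(frame):
            if det.label not in closest or det.distance_m < closest[det.label].distance_m:
                closest[det.label] = det
    return closest

# A stub detector standing in for the neural network.
def toy_detector(frame):
    return [Detection(label, dist) for label, dist in frame]

frames = [
    [("car", 40.0), ("pedestrian", 12.0)],   # front camera
    [("car", 35.0)],                          # front-left camera
]
fused = fuse_camera_frames(frames, toy_detector)
```

A real system would fuse overlapping views geometrically rather than by label, but the structure, per-frame detection followed by cross-camera aggregation, is the same.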

Tesla is the only maker of autonomous cars opposed to the LIDAR laser sensor (an acronym for light detection and ranging).

Tesla uses cameras, radar, and ultrasonic sensors instead.

Academics and manufacturers disagree on whether LIDAR is required for fully autonomous driving, but its high cost has limited Tesla's rivals' ability to produce and sell vehicles at prices that put enough cars on the road to gather data.

Tesla is creating its own AI hardware in addition to its AI programming.

Musk stated in late 2017 that Tesla was building its own silicon for artificial intelligence calculations, allowing the company to construct its own AI processors rather than depending on third-party suppliers such as Nvidia.

Tesla's AI progress in autonomous driving has been marred by setbacks.

Tesla has consistently missed self-imposed deadlines, and serious accidents have been blamed on flaws in the vehicle's Autopilot mode, including a non-injury accident in 2018, in which the vehicle failed to detect a parked firetruck on a California freeway, and a fatal accident in 2018, in which the vehicle failed to detect a pedestrian outside a crosswalk.

Neuralink was established by Musk in 2016.

With the stated objective of helping humans to keep up with AI breakthroughs, Neuralink is focused on creating devices that can be implanted into the human brain to better facilitate communication between the brain and software.

Musk has characterized the devices as a more efficient interface with computer equipment: while people now operate machines with their fingertips and voice commands, directives would instead come straight from the brain.

Though Musk has made major contributions to AI, his pronouncements regarding its risks have been apocalyptic.

Musk has called AI "humanity's greatest existential danger" (McFarland 2014) and "the greatest risk we face as a civilization" (Morris 2017).

He cautions against the perils of power concentration, a lack of independent control, and a competitive rush to acceptance without appropriate analysis of the repercussions.

While Musk has used colorful terminology such as "summoning the demon" (McFarland 2014) and depictions of cyborg overlords, he has also warned of more immediate and realistic concerns such as job losses and AI-driven misinformation campaigns.

Though Musk's statements might come across as alarmist, many important and well-respected figures, including Microsoft cofounder Bill Gates, Swedish-American physicist Max Tegmark, and the late theoretical physicist Stephen Hawking, share his concern.

Furthermore, Musk does not call for the cessation of AI research.

Instead, Musk advocates for responsible AI development and regulation, including the formation of a congressional committee that would spend years studying AI, with the goal of better understanding the technology and its hazards before establishing suitable legal limits.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Bostrom, Nick; Superintelligence.


References & Further Reading:


Gates, Bill. (@BillGates). 2018. Twitter, June 26, 2018. https://twitter.com/BillGates/status/1011752221376036864.

Marr, Bernard. 2018. “The Amazing Ways Tesla Is Using Artificial Intelligence and Big Data.” Forbes, January 8, 2018. https://www.forbes.com/sites/bernardmarr/2018/01/08/the-amazing-ways-tesla-is-using-artificial-intelligence-and-big-data/.

McFarland, Matt. 2014. “Elon Musk: With Artificial Intelligence, We Are Summoning the Demon.” Washington Post, October 24, 2014. https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/.

Morris, David Z. 2017. “Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization.’” Fortune, July 15, 2017. https://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2/.

Piper, Kelsey. 2018. “Why Elon Musk Fears Artificial Intelligence.” Vox Media, November 2, 2018. https://www.vox.com/future-perfect/2018/11/2/18053418/elon-musk-artificial-intelligence-google-deepmind-openai.

Strauss, Neil. 2017. “Elon Musk: The Architect of Tomorrow.” Rolling Stone, November 15, 2017. https://www.rollingstone.com/culture/culture-features/elon-musk-the-architect-of-tomorrow-120850/.



Artificial Intelligence - Who Is Hans Moravec?

 




Hans Moravec (1948–) is well known in the computer science community as the long-time head of Carnegie Mellon University's Robotics Institute and an unashamed technological optimist.

For the last twenty-five years, he has studied and produced artificially intelligent robots at the CMU lab, where he is still an adjunct faculty member.

Moravec spent almost ten years as a research assistant at Stanford University's groundbreaking Artificial Intelligence Lab before coming to Carnegie Mellon.

Moravec is also noted for his paradox, which states that, contrary to popular belief, it is simple to program high-level thinking skills into robots—as with chess or Jeopardy!—but difficult to transmit sensorimotor agility.

Human sensory and motor abilities have developed over millions of years and seem to be easy, despite their complexity.

Higher-order cognitive abilities, on the other hand, are the result of more recent cultural development.

Geometry, stock market research, and petroleum engineering are examples of disciplines that are difficult for people to learn but easier for robots to learn.

"The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard," writes Steven Pinker of Moravec's scientific career.

Moravec built his first toy robot out of scrap metal when he was eleven years old, and his light-following electronic turtle and a robot operated by punched paper tape earned him two high school science fair honors.

He proposed a Ship of Theseus-like analogy for the viability of artificial brains while still in high school.

Consider replacing a person's human neurons one by one with precisely manufactured equivalents, he said.

At what point would human awareness vanish? Would anybody notice? Could it be established that the person was no longer human? Later in his career, Moravec suggested that human knowledge and training might be broken down in the same manner, into subtasks that machine intelligences could take over.

Moravec's master's thesis focused on the development of a computer language for artificial intelligence, while his PhD research focused on the development of a robot that could navigate obstacle courses utilizing spatial representation methods.

These robot vision systems identified the region of interest (ROI) in a scene.

Moravec's early computer vision robots were extremely sluggish by today's standards, taking around five hours to go from one half of the facility to the other.

To measure distance and develop an internal picture of physical obstacles in the room, a remote computer carefully analyzed continuous video-camera images recorded by the robot from various angles.

Moravec finally developed 3D occupancy grid technology, which allowed a robot to create an awareness of a cluttered area in a matter of seconds.
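A minimal version of the occupancy-grid idea can be sketched as follows: each range-sensor ray marks the cells it passes through as probably free and its endpoint as probably occupied. This toy update rule uses simple counters rather than the probabilistic log-odds bookkeeping of a production grid:

```python
import math

def update_occupancy_grid(grid, robot_xy, angle_rad, range_m, cell=0.5):
    """Mark cells along one range-sensor ray: cells the ray passes through
    accumulate negative (free-space) evidence, the endpoint positive
    (obstacle) evidence. grid maps (ix, iy) -> evidence counter."""
    x0, y0 = robot_xy
    steps = int(range_m / cell)
    for i in range(steps):
        d = i * cell
        ix = int((x0 + d * math.cos(angle_rad)) / cell)
        iy = int((y0 + d * math.sin(angle_rad)) / cell)
        grid[(ix, iy)] = grid.get((ix, iy), 0) - 1    # evidence of free space
    ex = int((x0 + range_m * math.cos(angle_rad)) / cell)
    ey = int((y0 + range_m * math.sin(angle_rad)) / cell)
    grid[(ex, ey)] = grid.get((ex, ey), 0) + 2        # evidence of an obstacle
    return grid

grid = {}
update_occupancy_grid(grid, (0.0, 0.0), 0.0, 2.0)  # obstacle 2 m straight ahead
```

Repeating this update for rays from many robot poses is what lets conflicting readings average out into a stable map of a cluttered room.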

Moravec's lab took on a new challenge by converting a Pontiac Trans Sport minivan into one of the world's first road-ready autonomous cars.

The self-driving minivan reached speeds of up to 60 miles per hour.

DANTE II, a robot capable of going inside the crater of an active volcano on Mount Spurr in Alaska, was also constructed by the CMU Robotics Institute.

While DANTE II's immediate aim was to sample harmful fumarole gases, a job too perilous for humans, it was also planned to demonstrate technologies for robotic expeditions to distant worlds.

The volcanic explorer robot used artificial intelligence to navigate the perilous, boulder-strewn terrain on its own.

Because such rovers produced so much visual and other sensory data that had to be analyzed and managed, Moravec believes that experience with mobile robots spurred the development of powerful artificial intelligence and computer vision methods.

For the National Aeronautics and Space Administration (NASA), Moravec's team built fractal branching ultra-dexterous robots ("Bush robots") in the 1990s.

These robots, which were proposed but never produced because the necessary manufacturing technologies did not exist, consisted of a branching hierarchy of dynamic articulated limbs, starting with a main trunk and splitting into ever-smaller branches.

As a result, the Bush robot would have "hands" at all scales, from macroscopic to tiny.

The tiniest fingers would be nanoscale in size, allowing them to grip very tiny objects.

Because of the intricacy of manipulating millions of fingers in real time, Moravec said, the robot would need autonomy, relying on artificial intelligence agents distributed throughout its limbs and branches.

He believed that the robots might be made entirely of carbon nanotube material, using the rapid prototyping technology known as 3D printing.

Moravec believes that artificial intelligence will have a significant influence on human civilization.

To stress the role of AI in this change, he coined the concept of the "landscape of human capability," which the physicist Max Tegmark has since converted into a graphic depiction.

Moravec's picture depicts a three-dimensional landscape in which higher elevations represent tasks that are more difficult for humans, while a rising sea represents the advancing capability of artificial intelligence.

The point where the swelling waters meet the shore marks the line where robots and humans struggle with the same tasks.

Art, science, and literature are still beyond the grasp of an AI, but the sea has already swallowed mathematics, chess, and the game of Go.

Language translation, autonomous driving, and financial investment are all on the horizon.
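The metaphor is easy to make concrete: assign each task an arbitrary "elevation" of human difficulty and treat advancing AI capability as a rising sea level. The numbers below are invented purely for illustration:

```python
# Invented "elevations" of human difficulty; the rising sea is AI capability.
LANDSCAPE = {
    "mathematics": 1, "chess": 2, "Go": 3,
    "translation": 4, "driving": 5, "investment": 5,
    "science": 8, "art": 9, "literature": 9,
}

def submerged(sea_level):
    """Tasks the rising sea of AI capability has already covered."""
    return sorted(task for task, height in LANDSCAPE.items() if height <= sea_level)
```

With a sea level of 3, only mathematics, chess, and Go are underwater; the tasks "on the horizon" sit just above the current waterline.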

More controversially, in two popular books, Mind Children (1988) and Robot: Mere Machine to Transcendent Mind (1999), Moravec engaged in futurist conjecture based on what he understood of developments in artificial intelligence research.

Human intellect, he predicted, would be surpassed by machine intelligence in 2040, and the human species would eventually go extinct.

Moravec arrived at this figure by estimating a functional equivalence between 50,000 million instructions per second (50,000 MIPS) of computer power and a gram of brain tissue.

He calculated that home computers in the early 2000s equaled only an insect's nervous system, but that if processing power doubled every eighteen months, 350 million years of human intellect development could be reduced to just 35 years of artificial intelligence advancement.

He estimated that a hundred million MIPS would be required to create human-like universal robots.
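The growth arithmetic behind these estimates is a straightforward doubling calculation. The sketch below assumes, purely for illustration, a 1,000-MIPS home computer around 2000 and Moravec's hundred-million-MIPS threshold; neither number is taken from his actual worksheets:

```python
import math

def years_until(target_mips, start_mips, doubling_period_years=1.5):
    """Years needed for compute to grow from start_mips to target_mips
    if capacity doubles every doubling_period_years (18 months here)."""
    return doubling_period_years * math.log2(target_mips / start_mips)

# Hypothetical starting point: a 1,000-MIPS home PC circa 2000, growing
# toward Moravec's 100-million-MIPS estimate for human-like robots.
years = years_until(100_000_000, 1_000)   # roughly 25 years
```

The exponential term is what compresses "350 million years of evolution" into decades: each doubling covers as much ground as all previous doublings combined.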

Moravec refers to these sophisticated robots of the year 2040 as our "mind children."

Humans, he claims, will devise techniques to delay biological civilization's final demise.

Moravec, for example, was among the first to anticipate what is now known as universal basic income, delivered in his vision by benign artificial superintelligences.

In a completely automated society, a basic income system would provide monthly cash payments to all individuals without any type of employment requirement.

Moravec is less concerned about technological unemployment than about the idea of a renegade automated corporation breaking its programming and refusing to pay taxes into the human cradle-to-grave social security system.

Nonetheless, he predicts that these "wild" intelligences will eventually control the universe.

Moravec has said that his books Mind Children and Robot may have had a direct impact on the last third of Stanley Kubrick's original screenplay for A.I. Artificial Intelligence (later filmed by Steven Spielberg).

In Dan Simmons's science fiction novels Ilium and Olympos, by contrast, "moravecs" are self-replicating devices named in his honor.

Throughout his life, Moravec defended the same physical functionalism he expressed in his high school musings.

He contends in his most transhumanist publications that the only way for humans to keep up with machine intelligences is to merge with them, replacing sluggish human brain tissue with artificial neural networks driven by super-fast algorithms.

In his publications, Moravec has blended the ideas of artificial intelligence with virtual reality simulation.


He has come up with four scenarios for the development of consciousness:

(1) a human brain in the physical world, 

(2) a programmed AI implanted in a physical robot, 

(3) a human brain immersed in a virtual reality simulation, and 

(4) an AI functioning inside the boundaries of virtual reality.

All four, he argues, are equally credible depictions of reality, and they are as "real" as we believe them to be.


Moravec is the founder and chief scientist of the Pittsburgh-based Seegrid Corporation, which makes autonomous robotic industrial trucks that navigate warehouses and factories without the fixed guidance infrastructure of traditional automated guided vehicle systems.

A human trainer physically pushes Seegrid's vehicles through a new facility once.

The trainer stops at the appropriate spots for the truck to be loaded and unloaded; the robot handles the rest of the job, determining the most efficient and safe pathways for future journeys.
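Once the free aisles have been mapped during the training walk, choosing an efficient route is a classic shortest-path problem. The sketch below uses breadth-first search over a toy grid of free cells; it illustrates the general idea, not Seegrid's actual routing algorithm:

```python
from collections import deque

def shortest_route(free_cells, start, goal):
    """Breadth-first search over the cells mapped as free, returning one
    shortest sequence of grid cells from start to goal (None if unreachable)."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free_cells and nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# A tiny warehouse map: aisle cells the training walk marked as free.
free = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)}
route = shortest_route(free, (0, 0), (2, 2))
```

Because the search only ever expands cells the trainer actually drove through, every planned route stays inside known-safe territory.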

Seegrid VGVs have logged over two million production miles and carried eight billion pounds of merchandise for customers such as DHL, Whirlpool, and Amazon.

Moravec was born in the Austrian town of Kautzen.

During World War II, his father was a Czech engineer who sold electrical products.

When the Russians invaded Czechoslovakia in 1944, the family moved to Austria.

In 1953, his family relocated to Canada.

Moravec earned a bachelor's degree in mathematics from Acadia University in Nova Scotia, a master's degree in computer science from the University of Western Ontario, and a doctorate from Stanford University, where he worked with John McCarthy and Tom Binford on his thesis.

The Office of Naval Research, the Defense Advanced Research Projects Agency, and NASA have all supported his research.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Superintelligence; Technological Singularity; Workplace Automation.



References & Further Reading:


Moravec, Hans. 1988. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA: Harvard University Press.

Moravec, Hans. 1999. Robot: Mere Machine to Transcendent Mind. Oxford, UK: Oxford University Press.

Moravec, Hans. 2003. “Robots, After All.” Communications of the ACM 46, no. 10 (October): 90–97.

Pinker, Steven. 2007. The Language Instinct: How the Mind Creates Language. New York: Harper.




Artificial Intelligence - Agriculture Using Intelligent Sensing.

  



From Neolithic tools that helped humans transition from hunter-gatherers to farmers to the British Agricultural Revolution, which harnessed the power of the Industrial Revolution to increase yields (Noll 2015), technological innovation has always driven food production.

Today, agriculture is highly technical, as scientific discoveries continue to be integrated into production systems.

Intelligent Sensing Agriculture is one of the newest additions to a long history of integrating cutting-edge technology into the production, processing, and distribution of food.

These technological gadgets are generally used to achieve the dual aims of boosting crop yields and lowering the environmental effects of agricultural systems.

Intelligent sensors are devices that, as part of their stated duty, may execute a variety of complicated operations.

These sensors should not be confused with "smart" sensors or instrument packages that can collect data from the physical environment (Cleaveland 2006).

Intelligent sensors are unique in that they not only detect but also react to varied circumstances in nuanced ways depending on the information they collect.

"In general, sensors are devices that measure a physical quantity and turn the result into a signal that can be read by an observer or instrument; however, intelligent sensors may analyze measured data" (Bialas 2010, 822).

Their capacity to govern their own processes in response to environmental stimuli is what makes them "intelligent." They extract fundamental features from various inputs (such as light, temperature, and humidity) and then develop intermediate responses to those inputs (Yamasaki 1996).

The capacity to do sophisticated learning, information processing, and adaptation all in one integrated package is required for this feature.
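As a rough sketch of that sense-interpret-adapt loop, consider a hypothetical soil-moisture sensor that not only reports readings but also adjusts its own trigger threshold. The class name, thresholds, and adaptation rule below are invented for illustration and are not drawn from any cited sensor package:

```python
# Minimal sketch of an "intelligent" irrigation sensor: it measures,
# interprets, and adapts its own behavior (all values are hypothetical).
class IntelligentMoistureSensor:
    def __init__(self, dry_threshold=30.0):
        self.dry_threshold = dry_threshold  # moisture percent that triggers irrigation
        self.readings = []

    def read(self, raw_moisture_percent):
        """Record a raw measurement from the physical environment."""
        self.readings.append(raw_moisture_percent)

    def decide(self):
        """Process the measured data and return an action, not just a raw value."""
        if not self.readings:
            return "no-data"
        avg = sum(self.readings) / len(self.readings)
        # Adapt: during a sustained dry spell, lower the trigger point so the
        # device does not fire the irrigation system constantly.
        if len(self.readings) >= 3 and max(self.readings) < self.dry_threshold:
            self.dry_threshold *= 0.9
        return "irrigate" if avg < self.dry_threshold else "idle"

sensor = IntelligentMoistureSensor()
for reading in (25.0, 22.0, 27.0):
    sensor.read(reading)
print(sensor.decide())  # prints "irrigate"
```

A plain sensor would stop at `read()`; the `decide()` step, which interprets the data and adapts the device's own behavior, is what the literature above labels "intelligent."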

These sensor packages are employed in a broad variety of applications, from aerospace to health care, and their scope is growing.

While all of these applications are novel, the use of intelligent sensors in agriculture could deliver an especially broad variety of social benefits.

There is a pressing need to boost the productivity of existing productive agricultural fields.

In 2017, the world's population approached 7.6 billion people, according to the United Nations (2017).

The majority of the world's arable land, on the other hand, is already being used for food.

Currently, over half of the land in the United States, and roughly 40 percent of the land in the United Kingdom, is used for agricultural production (Thompson 2010).

Due to a scarcity of undeveloped land, agricultural production must rise sharply within the next ten years, yet environmental effects must be minimized in order to boost overall sustainability and long-term productivity.

Intelligent sensors aid in maximizing the use of all available resources, lowering agricultural expenses, and limiting the use of hazardous inputs (Pajares 2011).

"When nutrients in the soil, humidity, solar radiation, weed density, and a wide range of other factors and data affecting production are known," Pajares says, "the situation improves, and the use of chemical products such as fertilizers, herbicides, and other pollutants can be significantly reduced" (Pajares 2011, 8930).

The majority of intelligent sensor applications in this context may be classified as "precision agriculture," described as "information-intensive crop management that uses technology to watch, react to, and quantify crucial factors." When combined with computer networks, this data enables fields to be managed remotely.

Combinations of several kinds of sensors (such as temperature and image-based devices) enable monitoring and control regardless of distance.

Intelligent sensors gather in-field data to aid agricultural production management in a variety of ways.

The following are some examples of specialized applications: Unmanned Aerial Vehicles (UAVs) with a suite of sensors detect fires (Pajares 2011); LIDAR sensors paired with GPS identify trees and estimate forest biomass; and capacitance probes measure soil moisture while reflectometers determine crop moisture content.

Other sensor types may identify weeds, evaluate soil pH, quantify carbon metabolism in peatlands, regulate irrigation systems, monitor temperatures, and even operate machinery like sprayers and tractors.

When equipped with sophisticated sensors, robotic devices might be utilized to undertake many of the tasks presently performed by farmers.

Modern farming is being revolutionized by intelligent sensors, and as technology progresses, chores will become more automated.

Agricultural technologies, on the other hand, have a long history of public criticism.

One criticism of the use of intelligent sensors in agriculture is that it might have negative societal consequences.

While these devices improve agricultural systems' efficiency and decrease environmental problems, they may have a detrimental influence on rural populations.

Technological advancements have revolutionized the way farmers manage their crops and livestock since the invention of the first plow.

Intelligent sensors may allow tractors, harvesters, and other equipment to operate without the need for human involvement, potentially altering the way food is produced.

This might lower the number of people required in the agricultural industry, and consequently the number of jobs available in rural regions, where agricultural production is mostly conducted.

Furthermore, this technology may be too costly for farmers, increasing the likelihood of small farms failing.

The so-called "technology treadmill" is often blamed for such failures.

This term describes a situation in which a small number of farmers adopt a new technology and profit because their production costs are lower than their competitors'.

Increased earnings are no longer possible when more producers embrace this technology and prices decline.

It becomes important to use this new technology in order to compete in a market where others are doing so.

Farmers who do not implement the technology are eventually forced out of business, while those who do thrive.
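The treadmill dynamic described above can be made concrete with a toy calculation; every number here is hypothetical:

```python
# Toy illustration of the "technology treadmill" (all numbers hypothetical).
# Early adopters profit because their costs are lower; as adoption spreads,
# the market price falls until non-adopters operate at a loss.
def market_price(adopters, total_farms, high=10.0, low=6.0):
    """Output price falls linearly as more farms adopt the cost-saving tech."""
    return high - (high - low) * adopters / total_farms

total = 100
cost_old, cost_new = 8.0, 5.0   # per-unit production cost without / with the tech
for adopters in (10, 50, 100):
    price = market_price(adopters, total)
    print(f"{adopters} adopters: adopter margin {price - cost_new:+.1f}, "
          f"holdout margin {price - cost_old:+.1f}")
```

With 10 adopters the new technology yields a comfortable margin for everyone; once all 100 farms have adopted, the price has fallen far enough that anyone still paying the old production cost is losing money.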

The use of intelligent sensors may help to keep this technological treadmill going.

Regardless, the sensors have a broad variety of social, economic, and ethical effects that will need to be examined as the technology advances.

 


Jai Krishna Ponnappan





See also: 


Workplace Automation.



Further Reading:



Bialas, Andrzej. 2010. “Intelligent Sensors Security.” Sensors 10, no. 1: 822–59.

Cleaveland, Peter. 2006. “What Is a Smart Sensor?” Control Engineering, January 1, 2006. https://www.controleng.com/articles/what-is-a-smart-sensor/.

Noll, Samantha. 2015. “Agricultural Science.” In A Companion to the History of American Science, edited by Mark Largent and Georgina Montgomery. New York: Wiley-Blackwell.

Pajares, Gonzalo. 2011. “Advances in Sensors Applied to Agriculture and Forestry.” Sensors 11, no. 9: 8930–32.

Thompson, Paul B. 2009. “Philosophy of Agricultural Technology.” In Philosophy of Technology and Engineering Sciences, edited by Anthonie Meijers, 1257–73. Handbook of the Philosophy of Science. Amsterdam: North-Holland.

Thompson, Paul B. 2010. The Agrarian Vision: Sustainability and Environmental Ethics. Lexington: University Press of Kentucky.

United Nations, Department of Economic and Social Affairs. 2017. World Population Prospects: The 2017 Revision. New York: United Nations.

Yamasaki, Hiro. 1996. “What Are the Intelligent Sensors.” In Handbook of Sensors and Actuators, vol. 3, edited by Hiro Yamasaki, 1–17. Amsterdam: Elsevier Science B.V.



Artificial Intelligence - What Is A Group Symbol Associator?



In the early 1950s, Firmin Nash, director of the South West London Mass X-Ray Service, devised the Group Symbol Associator, a slide rule-like device that enabled a clinician to correlate a patient's symptoms against 337 predefined symptom-disease complexes and establish a diagnosis.

It supported the cognitive processes of automated medical decision-making by using multi-key look-up from inverted files.

The Group Symbol Associator has been dubbed a "cardboard brain" by Derek Robinson, a professor at the Ontario College of Art & Design's Integrated Media Program.

The device resembles the inverted scriptural concordance of Hugo De Santo Caro, a Dominican monk who finished his index in 1247.

Marsden Blois, an artificial intelligence in medicine professor at the University of California, San Francisco, rebuilt the Nash device in software in the 1980s.

Blois' diagnostic aid RECONSIDER, which is based on the Group Symbol Associator, performed as well as or better than other expert systems, according to his own testing.

Nash dubbed the Group Symbol Associator the "Logoscope" because it employed propositional calculus to analyze different combinations of medical symptoms.

The Group Symbol Associator is one of the early efforts to apply digital computers to diagnostic issues, in this instance by adapting an analog instrument.

Along the margin of Nash's cardboard rule, disease groupings chosen from mainstream textbooks on differential diagnosis are noted.

Each patient symptom or property has its own cardboard symptom stick, with lines set opposite the positions of the illnesses that share that property.

There were a total of 82 sign and symptom sticks in the Group Symbol Associator.

Sticks that correspond to the state of the patient are chosen and entered into the rule.



Diseases matching a larger number of symptom lines are considered the likelier diagnoses.

Nash's slide rule is simply a matrix with illnesses as columns and properties as rows.

Wherever a property is expected in an illness, a mark (such as an "X") is inserted into the matrix.

Rows that describe symptoms that the patient does not have are removed.

The most probable or "best match" diagnosis is shown by columns with a mark in every cell.

When seen as a matrix, the Nash device reconstructs information in the same manner as peek-a-boo card retrieval systems did in the 1940s to manage knowledge stores.
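Viewed as a matrix, the elimination procedure amounts to a few lines of code. The diseases, symptoms, and marks below are invented stand-ins for Nash's 337 symptom-disease complexes and 82 sticks:

```python
# Sketch of the Group Symbol Associator's matrix logic (hypothetical data).
# Columns are diseases, rows are symptoms; True marks where the disease is
# expected to show that symptom.
matrix = {
    "measles":   {"fever": True,  "rash": True,  "cough": True},
    "influenza": {"fever": True,  "rash": False, "cough": True},
    "eczema":    {"fever": False, "rash": True,  "cough": False},
}

def diagnose(matrix, patient_symptoms):
    """Keep only the rows for symptoms the patient reports, then return the
    diseases (columns) that carry a mark in every remaining cell."""
    return [
        disease
        for disease, expected in matrix.items()
        if all(expected.get(symptom, False) for symptom in patient_symptoms)
    ]

print(diagnose(matrix, ["fever", "cough"]))  # ['measles', 'influenza']
```

Selecting the rows for the patient's reported symptoms and keeping only the fully marked columns reproduces the "best match" behavior of the sticks and rule.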

The Group Symbol Associator is similar to Leo J. Brannick's analog computer for medical diagnosis, Martin Lipkin and James Hardy's McBee punch card system for diagnosing hematological diseases, Keeve Brodman's Cornell Medical Index Health Questionnaire, Vladimir K. Zworykin's symptom spectra analog computer, and other "peek-a-boo" card systems and devices.

The challenge that these devices are trying to solve is locating or mapping illnesses that are suited for the patient's mix of standardized features or attributes (signs, symptoms, laboratory findings, etc.).

Nash claimed to have condensed a physician's memory of hundreds of pages of typical diagnostic tables to a little machine around a yard long.

Nash claimed that his Group Symbol Associator obeyed the "rule of mechanical experience conservation," which he coined.



"Will man crumble under the weight of the wealth of experience he has to bear and pass on to the next generation if our books and brains are reaching relative inadequacy?" he wrote. "I don't believe so. Power equipment and labor-saving gadgets took on the physical strain. Now is the time to usher in the age of thought-saving technologies" (Nash 1960b, 240).

Nash's equipment did more than just help him remember things.

He asserted that the machine participated in the logical analysis of the diagnostic procedure.

"Not only does the Group Symbol Associator represent the final results of various diagnostic classificatory thoughts, but it also displays the skeleton of the whole process as a simultaneous panorama of spectral patterns that correlate with changing degrees of completeness," Nash said.

"For each diagnostic occasion, it creates a map or pattern of the issue and functions as a physical jig to guide the mental process" (Paycha 1959, 661).

On October 14, 1953, a patent application for the invention was filed with the Patent Office in London.

At the 1958 Mechanization of Thought Processes Conference at the National Physical Laboratory (NPL) in the Teddington region of London, Nash conducted the first public demonstration of the Group Symbol Associator.

The NPL meeting in 1958 is notable for being just the second to be held on the topic of artificial intelligence.

In the late 1950s, the Mark III Model of the Group Symbol Associator became commercially available.

Nash hoped that when doctors were away from their offices and books, they would bring Mark III with them.

"The GSA is tiny, affordable to create, ship, and disseminate," Nash noted. "It is simple to use and does not need any maintenance. Even in outposts, ships, and other places, a person might have one" (Nash 1960b, 241).

Nash also published examples of "logoscopic photograms," based on xerography (dry photocopying), that achieved the same outcomes as his hardware device.

Medical Data Systems of Nottingham, England, produced the Group Symbol Associator in large quantities.

Yamanouchi Pharmaceutical Company distributed the majority of the Mark V devices in Japan.

In 1959, François Paycha, a French ophthalmologist and Nash's chief rival, explained the practical limits of Nash's Group Symbol Associator.

He pointed out that in the identification of corneal diseases, where there are roughly 1,000 differentiable disorders and 2,000 separate indications and symptoms, such a gadget would become highly cumbersome.

The instrument was examined in 1975 by R. W. Pain of the Royal Adelaide Hospital in South Australia, who found it to be accurate in just a quarter of instances.


Jai Krishna Ponnappan





See also: 


Computer-Assisted Diagnosis.


Further Reading:


Eden, Murray. 1960. “Recapitulation of Conference.” IRE Transactions on Medical Electronics ME-7, no. 4 (October): 232–38.

Nash, F. A. 1954. “Differential Diagnosis: An Apparatus to Assist the Logical Faculties.” Lancet 1, no. 6817 (April 24): 874–75.

Nash, F. A. 1960a. “Diagnostic Reasoning and the Logoscope.” Lancet 2, no. 7166 (December 31): 1442–46.

Nash, F. A. 1960b. “The Mechanical Conservation of Experience, Especially in Medicine.” IRE Transactions on Medical Electronics ME-7, no. 4 (October): 240–43.

Pain, R. W. 1975. “Limitations of the Nash Logoscope or Diagnostic Slide Rule.” Medical Journal of Australia 2, no. 18: 714–15.

Paycha, François. 1959. “Medical Diagnosis and Cybernetics.” In Mechanisation of Thought Processes, vol. 2, 635–67. London: Her Majesty’s Stationery Office.


Optical Computing Systems To Speed Up AI And Machine Learning.




Artificial intelligence and machine learning are influencing our lives in a variety of minor but significant ways right now. 

For example, AI and machine learning programs propose content from streaming services like Netflix and Spotify that we would appreciate. 

These technologies are expected to have an even greater influence on society in the near future, via activities such as driving completely driverless cars, allowing sophisticated scientific research, and aiding medical breakthroughs. 

However, the computers that are utilized for AI and machine learning use a lot of power. 


The need for computer power associated with these technologies is now doubling every three to four months. 


Furthermore, cloud computing data centers employed by AI and machine learning applications use more electricity each year than certain small nations. 

It's clear that this level of energy usage cannot be sustained. 

A research team lead by the University of Washington has created new optical computing hardware for AI and machine learning that is far quicker and uses much less energy than traditional electronics. 

Another issue addressed in the study is the 'noise' inherent in optical computing, which may obstruct computation accuracy. 

The team showcases an optical computing system for AI and machine learning in a new study, published in January in Science Advances, that not only mitigates noise but also utilizes part of the noise as input to help increase the creative output of the artificial neural network inside the system. 


Changming Wu, a UW doctorate student in electrical and computer engineering, stated, "We've constructed an optical computer that is quicker than a typical digital computer." 

"Moreover, this optical computer can develop new objects based on random inputs provided by optical noise, which most researchers have attempted to avoid." 

Optical computing noise is primarily caused by stray light particles, or photons, produced by the functioning of lasers inside the device as well as background heat radiation. 

To combat noise, the researchers linked their optical computing core to a Generative Adversarial Network, a sort of machine learning network. 

The researchers experimented with a variety of noise reduction strategies, including utilizing part of the noise created by the optical computing core as random inputs for the GAN. 


The researchers, for example, gave the GAN the job of learning how to handwrite the number "7" in a human-like manner. 


The number could not simply be printed in a predetermined typeface on the optical computer. 

It had to learn the task in the same way that a kid would, by studying visual examples of handwriting and practicing until it could accurately write the number. 

Because the optical computer lacked a human hand for writing, its "handwriting" consisted of creating digital pictures with a style close to but not identical to the examples it had examined. 

"Instead of teaching the network to read handwritten numbers, we taught it to write numbers using visual examples of handwriting," said senior author Mo Li, an electrical and computer engineering professor at the University of Washington. 

"We also demonstrated that the GAN can alleviate the detrimental effect of optical computing hardware noise by utilizing a training technique that is resilient to mistakes and noise, with the support of our Duke University computer science teammates. 

Furthermore, the network treats the noise as random input, which is required for the network to create output instances." 

The GAN practiced writing "7" until it could do it effectively after learning from handwritten examples of the number seven from a normal AI-training picture collection. 

It developed its own writing style along the way and could write numbers from one to ten in computer simulations. 
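The press reports don't include code, but the core idea (feeding the hardware's own noise to a generator as its random latent input) can be sketched numerically. This toy generator, its layer sizes, and its weights are invented for illustration and are not the team's actual network:

```python
import math
import random

# Toy sketch: a tiny generator maps a random "noise" vector to pixel values,
# mirroring how the optical system reuses hardware noise as the GAN's random
# input (all sizes and weights here are invented).
random.seed(0)

NOISE_DIM, HIDDEN_DIM, IMAGE_DIM = 8, 16, 64
w1 = [[random.gauss(0, 1) for _ in range(NOISE_DIM)] for _ in range(HIDDEN_DIM)]
w2 = [[random.gauss(0, 1) for _ in range(HIDDEN_DIM)] for _ in range(IMAGE_DIM)]

def generator(noise):
    """Map a noise vector to an 'image' of pixel values in (0, 1)."""
    hidden = [math.tanh(sum(w * n for w, n in zip(row, noise))) for row in w1]
    return [1 / (1 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
            for row in w2]

# In the optical system this vector would come from photon and thermal noise
# inside the hardware; here it is simulated with a pseudo-random draw.
hardware_noise = [random.gauss(0, 1) for _ in range(NOISE_DIM)]
fake_image = generator(hardware_noise)
print(len(fake_image))  # 64
```

In a full GAN, a discriminator would score such outputs against real handwriting examples and the generator's weights would be trained accordingly; the point of the sketch is only that the random input driving generation can come from physical noise rather than a software random-number generator.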


The next stage will be to scale up the gadget using existing semiconductor manufacturing methods. 


To attain wafer-scale technology, the team wants to employ an industrial semiconductor foundry rather than build the next iteration of the device in a lab. 

A larger-scale gadget will boost performance even further, allowing the study team to undertake more sophisticated activities such as making artwork and even films in addition to handwriting production. 

"This optical system represents a computer hardware architecture that can enhance the creativity of artificial neural networks used in AI and machine learning," Li explained. 

"More importantly, it demonstrates the viability of this system at a large scale where noise and errors can be mitigated and even harnessed. AI applications are using so much energy that it will be unsustainable in the future. This technique has the potential to minimize energy usage, making AI and machine learning more environmentally friendly—as well as incredibly quick, resulting in greater overall performance." 

Although many people are unaware of it, artificial intelligence (AI) and machine learning are now a part of our regular life online. 

Intelligent ranking algorithms, for example, help search engines like Google, video streaming services like Netflix utilize machine learning to customize movie suggestions, and cloud computing data centers employ AI and machine learning to help with a variety of services. 



The requirements for AI are many, diverse, and difficult. 



As these needs rise, so does the need to improve AI performance while also lowering its energy usage. 

The energy costs involved with AI and machine learning on a broad scale may be startling. 

Cloud computing data centers, for example, use an estimated 200 terawatt hours per year — enough energy to power a small nation — and this consumption is expected to expand enormously in the future years, posing major environmental risks. 

Now, a team lead by associate professor Mo Li of the University of Washington Department of Electrical and Computer Engineering (UW ECE) has developed a method in partnership with academics from the University of Maryland that might help speed up AI while lowering energy and environmental expenses. 

The researchers detailed an optical computing core prototype that employs phase-change material (a substance similar to what CD-ROMs and DVDs use to record information) in a publication published in Nature Communications on January 4, 2021. 

Their method is quick, energy-efficient, and capable of speeding up AI and machine learning neural networks. 

The technique is also scalable and immediately relevant to cloud computing, which employs AI and machine learning to power common software applications like search engines, streaming video, and a plethora of apps for phones, desktop computers, and other devices. 

"The technology we designed is geared to execute artificial neural network algorithms, which are a backbone method for AI and machine learning," Li said. 

"This breakthrough in research will make AI centers and cloud computing significantly more energy efficient and speedier." 

The team is one of the first in the world to employ phase-change material in optical computing to allow artificial neural networks to recognize images. 


Recognizing a picture in a photo is simple for humans, but it requires a lot of computing power for AI. 


Image recognition is a benchmark test of a neural network's computational speed and accuracy since it requires a lot of computation. 

This test was readily passed by the team's optical computing core, which was running an artificial neural network. 

"Optical computing initially surfaced as a concept in the 1980s, but it eventually died in the shadow of microelectronics," said Changming Wu, a graduate student in Li's group. 

"It has now been revived due to the end of Moore's law [the observation that the number of transistors in a dense integrated circuit doubles every two years], developments in integrated photonics, and the needs of AI computing. That's a lot of fun."

Optical computing is fast because it transmits data at incredible rates using light generated by lasers, rather than the considerably slower electricity used in typical digital electronics. 


The prototype built by the study team was created to speed up the computational speed of an artificial neural network, which is measured in billions and trillions of operations per second. 


Future incarnations of their technology, according to Li, have the potential to move much quicker. 

"This is a prototype, and we're not utilizing the greatest speed possible with optics just yet," Li said. 

"Future generations have the potential to accelerate by at least an order of magnitude."

In the ultimate real-world application of this technology, any program powered by optical computing over the cloud — such as search engines, video streaming, and cloud-enabled gadgets — will run faster, enhancing performance. 

Li's research team took their prototype a step further, transmitting light through phase-change material to store data and conduct computing operations. 

Unlike transistors in digital electronics, which need a constant voltage to represent and maintain the zeros and ones required in binary computing, phase-change material does not require any energy. 


When phase-change material is heated precisely by lasers, it shifts between a crystalline and an amorphous state, much like a CD or DVD. 


The material then retains that condition, or "phase," as well as the information that phase conveys (a zero or one), until the laser heats it again. 

"There are other competing schemes to construct optical neural networks," Li explained, "but we believe that using phase-changing material has a unique advantage in terms of energy efficiency, because the data is encoded in a non-volatile way, meaning that the device does not consume a constant amount of power to store the data." 

"Once the info is written there, it stays there indefinitely. You don't need to provide electricity to keep it in place." 

This energy savings is important because it is multiplied by millions of computer servers in hundreds of data centers throughout the globe, resulting in a huge decrease in energy consumption and environmental effect. 



By patterning the phase-change material used in their optical computing core into nanostructures, the team was able to improve it even further. 


These tiny structures increase the material's durability and stability, as well as its contrast (the ability to discriminate between zero and one in binary code) and computing capacity and accuracy. 

Li's research team also fully integrated the prototype's optical computing core with phase-change material. 

"We're doing all we can to incorporate optics here," Wu said. 

"We layer the phase-change material on top of a waveguide, which is a thin wire that we cut into the silicon chip to channel light. 

You may conceive of it as a light-emitting electrical wire or an optical fiber etched into the chip." 

Li's research group claims that the technology they created is one of the most scalable methods to optical computing technologies now available, with the potential to be used to massive systems like networked cloud computing servers in data centers across the globe. 

"Our design architecture is scalable to a much, much bigger network," Li added, "and can tackle hard artificial intelligence tasks ranging from massive, high-resolution image identification to video processing and video image recognition."

"We feel our system is the most promising and scalable to that degree. Of course, this will need large-scale semiconductor manufacturing. Our design and the prototype's material are both highly compatible with semiconductor foundry processes."


Looking forward, Li said he could see optical computing devices like the one his team produced boosting current technology's processing capacity and allowing the next generation of artificial intelligence. 


To take the next step in that direction, his research team will collaborate closely with UW ECE associate professor Arka Majumdar and assistant professor Sajjad Moazeni, both specialists in large-scale integrated photonics and microelectronics, to scale up the prototype they constructed. 


And, once the technology has been scaled up enough, it will lend itself to future integration with energy-intensive data centers, speeding up the performance of cloud-based software applications while lowering energy consumption. 

"The computers in today's data centers are already linked via optical fibers. 

This enables ultra-high bandwidth transmission, which is critical," Li said. 

"Because fiber optics infrastructure is already in place, it's reasonable to do optical computing in such a setup. It's fantastic, and I believe the moment has come for optical computing to resurface."


~ Jai Krishna Ponnappan




See also: 

Optical Computing, Optical Computing Core, AI, Machine Learning, AI Systems.


Further Reading:


Changming Wu et al, Harnessing optoelectronic noises in a photonic generative network, Science Advances (2022). DOI: 10.1126/sciadv.abm2956. www.science.org/doi/10.1126/sciadv.abm2956




