Artificial Intelligence - Who Is Dabbala Rajagopal "Raj" Reddy?

 


 


Dabbala Rajagopal "Raj" Reddy (1937–) is an Indian-American computer scientist who has made important contributions to artificial intelligence and is a recipient of the Turing Award.

He is the Moza Bint Nasser University Professor of Computer Science and Robotics at Carnegie Mellon University's School of Computer Science.

He has served on the faculties of Stanford and Carnegie Mellon, two of the world's leading universities for artificial intelligence research.

In the United States and in India, he has received honors for his contributions to artificial intelligence.

In 2001, the Indian government bestowed upon him the Padma Bhushan Award (the third highest civilian honor).

In 1984, he was also awarded the Legion of Honor, France's highest order of merit, created in 1802 by Napoleon Bonaparte.

In 1958, Reddy obtained his bachelor's degree from the University of Madras' Guindy Engineering College, and in 1960, he received his master's degree from the University of New South Wales in Australia.

He came to the United States for doctoral study and received his PhD in computer science from Stanford University in 1966.

Like many people from rural Indian households, he was the first member of his family to earn a university degree.

After working in industry as an Applied Science Representative at IBM Australia from 1960 to 1963, he entered academia in 1966, joining Stanford University as an Assistant Professor of Computer Science, a position he held until 1969.

He moved to Carnegie Mellon as an Associate Professor of Computer Science in 1969 and has remained on its faculty ever since.

He rose through the ranks at Carnegie Mellon, becoming a full professor in 1973 and a University Professor in 1984.

In 1991, he was appointed dean of the School of Computer Science, a post he held until 1999.

Many schools and institutions were founded as a result of Reddy's efforts.

In 1979, he founded the Robotics Institute and served as its first director, a position he held until 1991.

As dean, he was a driving force behind the establishment of the Language Technologies Institute, the Human-Computer Interaction Institute, the Center for Automated Learning and Discovery (now the Machine Learning Department), and the Institute for Software Research at CMU.

From 1999 to 2001, Reddy was a cochair of the President's Information Technology Advisory Committee (PITAC).

PITAC's functions were absorbed by the President's Council of Advisors on Science and Technology (PCAST) in 2005.

Reddy was the president of the American Association for Artificial Intelligence (AAAI) from 1987 to 1989.

The AAAI has since been renamed the Association for the Advancement of Artificial Intelligence, in recognition of the worldwide character of a research community that began with pioneers like Reddy.

The former logo, acronym (AAAI), and purpose have been retained.

Artificial intelligence, or the study of giving intelligence to computers, was the subject of Reddy's research.

He worked on voice control for robots, speaker-independent speech recognition, and unrestricted-vocabulary continuous speech dictation.

Reddy and his collaborators have also made significant contributions to computer analysis of natural scenes, task-oriented computer architectures, universal access to information (a project supported by UNESCO), and autonomous robotic systems.

With his colleagues, Reddy developed the Hearsay-II, Dragon, Harpy, and Sphinx-I/II speech recognition systems.

The blackboard model, one of the fundamental concepts to emerge from this work, has been widely adopted across many fields of AI.
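The blackboard pattern is simple enough to sketch in a few lines of code. The following Python fragment is a minimal, hypothetical illustration of the idea (not a reconstruction of Hearsay-II): independent "knowledge sources" watch a shared blackboard and opportunistically post new hypotheses to it until none of them has anything left to add.

```python
# Minimal sketch of the blackboard pattern (illustrative only, not Hearsay-II).
# Knowledge sources inspect a shared blackboard and post new hypotheses
# whenever their trigger condition is met; a simple controller loops until
# no source has anything left to contribute.

class Blackboard:
    def __init__(self):
        self.hypotheses = {}          # level -> value, e.g. "phonemes" -> [...]

    def post(self, level, value):
        self.hypotheses[level] = value

    def get(self, level):
        return self.hypotheses.get(level)


class KnowledgeSource:
    """Base class: can_contribute() checks a trigger, contribute() acts."""
    def can_contribute(self, bb):
        return False

    def contribute(self, bb):
        pass


class PhonemeRecognizer(KnowledgeSource):
    def can_contribute(self, bb):
        return bb.get("signal") is not None and bb.get("phonemes") is None

    def contribute(self, bb):
        # Hypothetical acoustic step: turn the raw signal into phoneme guesses.
        bb.post("phonemes", ["h", "eh", "l", "ow"])


class WordHypothesizer(KnowledgeSource):
    def can_contribute(self, bb):
        return bb.get("phonemes") is not None and bb.get("words") is None

    def contribute(self, bb):
        # Hypothetical lexical step: group phoneme guesses into a word.
        bb.post("words", ["hello"])


def run(blackboard, sources):
    # Control loop: keep cycling until no knowledge source can contribute.
    progress = True
    while progress:
        progress = False
        for ks in sources:
            if ks.can_contribute(blackboard):
                ks.contribute(blackboard)
                progress = True


bb = Blackboard()
bb.post("signal", [0.1, 0.4, 0.2])   # stand-in for a raw audio signal
run(bb, [PhonemeRecognizer(), WordHypothesizer()])
print(bb.hypotheses)                 # phoneme and word hypotheses now share the blackboard
```

Real blackboard systems such as Hearsay-II add confidence ratings and a scheduler that decides which knowledge source should fire next; the loop above is only the barest skeleton of the idea.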

Reddy was also interested in applying technology for the benefit of society, and he served as Chief Scientist at the Centre Mondial Informatique et Ressource Humaine in France.

He aided the Indian government in the establishment of the Rajiv Gandhi University of Knowledge Technologies, which focuses on low-income rural youth.

He serves on the governing council of the International Institute of Information Technology (IIIT), Hyderabad.

IIIT Hyderabad is a non-profit public-private partnership (N-PPP) focused on technology research and its applications.

He served on the board of directors of the Emergency Management and Research Institute (EMRI), a nonprofit public-private partnership that provides public emergency medical services.

EMRI has also assisted with emergency management in neighboring Sri Lanka.

In addition, he was associated with the Health Management and Research Institute (HMRI).

HMRI provides non-emergency health-care consultation to rural populations, particularly in Andhra Pradesh, India.

In 1994, Reddy and Edward A. Feigenbaum shared the Turing Award, the highest honor in computer science, and Reddy became the first person of Asian origin to receive the award.

He received the IBM Research Ralph Gomory Fellow Award in 1991, the Okawa Foundation's Okawa Prize in 2004, the Honda Foundation's Honda Prize in 2005, and the Vannevar Bush Award from the United States National Science Board in 2006.

Reddy is a fellow of the Institute of Electrical and Electronics Engineers (IEEE), the Acoustical Society of America, and the American Association for Artificial Intelligence, among other prestigious organizations.


~ Jai Krishna Ponnappan






See also: 


Autonomous and Semiautonomous Systems; Natural Language Processing and Speech Understanding.


References & Further Reading:


Reddy, Raj. 1988. “Foundations and Grand Challenges of Artificial Intelligence.” AI Magazine 9, no. 4 (Winter): 9–21.

Reddy, Raj. 1996. “To Dream the Possible Dream.” Communications of the ACM 39, no. 5 (May): 105–12.






Artificial Intelligence - AI Product Liability.

 



Product liability is a legal framework that holds the seller, manufacturer, distributor, and others in the distribution chain liable for the harm their products cause to consumers.

Victims are entitled to financial compensation from the responsible company.

The basic purpose of product liability legislation is to promote societal safety by discouraging wrongdoers from developing and distributing unsafe items to the general public.

Product users and third-party bystanders may also sue if certain conditions are satisfied, such as foreseeability of the harm.

Because product liability is governed by state rather than federal law in the United States, the applicable rules may vary depending on where the harm occurred.

Historically, to prevail in court and be compensated for their injuries, victims had to establish that the responsible firm was negligent, meaning that its conduct fell below the required standard of care.



Four elements must be shown to establish negligence.


  • First and foremost, the corporation must owe the customer a legal duty of care.
  • Second, that duty must have been breached, meaning the producer failed to meet the required standard of care.
  • Third, the breach must have caused the injury, meaning the manufacturer's actions led to the damage.
  • Finally, the victims must have suffered actual injuries.



Proving that the corporation was negligent is thus one way to be compensated for a product-related injury.



Product liability lawsuits may also be established by demonstrating that the corporation failed to uphold its guarantees to customers about the product's quality and dependability.


Express warranties may specify how long the product is covered by the warranty, as well as which components of the product are covered and which are not.

Implied warranties, which apply to all products, include promises that the product will function as advertised and serve the purpose for which the customer bought it.

In the great majority of product liability cases, courts apply strict liability, meaning the corporation is held accountable regardless of fault if certain conditions are satisfied.

This is because courts have determined that consumers would have a difficult time proving negligence, since the company has far greater expertise and resources.

Instead of proving that a duty was breached, consumers must show that the product contained an unreasonably dangerous defect; the defect caused the injury while the product was being used for its intended purpose; and the product was not substantially altered from the condition in which it was sold to consumers.


Design defects, manufacturing defects, and marketing defects (sometimes known as failure to warn) are the three categories of defect that may be claimed in a product liability suit.


When there are defects in the design of the product itself at the planning stage, this is referred to as a design defect.

If, at the time the product was being created, there was a foreseeable danger that it might cause harm when used by customers, the corporation can be held liable.


When there are issues during the production process, such as the use of low-quality materials or shoddy craftsmanship, the result is referred to as a manufacturing defect.


The final product falls short of the design's otherwise acceptable quality.

A failure-to-warn defect occurs when a product carries an inherent hazard, regardless of how well it was designed or made, yet the corporation failed to warn customers that the product could be harmful.

While product liability law was created to cope with the advent of more complicated technologies that can harm consumers, it is unclear whether the present law can apply to AI or whether it must be updated to fully protect consumers.




When it comes to AI, there are various areas where the law will need to be clarified or changed.


Product liability requires the presence of a product, and it is not always apparent whether software or an algorithm is a product or a service.


If software or an algorithm is classified as a product, product liability law would apply.

If it is treated as a service, consumers must instead rely on ordinary negligence claims.

Consumers' capacity to sue a manufacturer under product liability will be determined by the specific AI technology that caused the injury and what the court concludes in each case.

When AI technology is able to learn and behave independently of its initial programming, new problems arise.

Because the AI's behavior may not have been predictable in certain situations, it is unclear whether an injury can still be attributed to the product's design or manufacture.

Furthermore, since AI relies on probability-based predictions and will, at some point, make a decision that causes harm even when it is the optimal course of action, it may not be fair for the maker to bear the entire risk of harm that the AI is, by design, statistically bound to produce.



In response to these difficult questions, some commentators have recommended that AI be held to a different legal standard than the strict liability applied to conventional goods.


They propose, for example, that medical AI technology be regarded as if it were a reasonable human doctor or medical student, and that autonomous automobiles be treated as if they were a reasonable human driver.

AI products would still create liability for customer harm, but the threshold they would have to meet would be that of a reasonable person in the same circumstances.

The AI would be held accountable for the injuries only if a human in the identical scenario could have avoided inflicting the damage.

This raises the question of whether the designers or manufacturers would be held vicariously liable, since they had the right, ability, and obligation to control the AI, or whether the AI itself would be treated as a legal person responsible for compensating the victims on its own.



As AI technology advances, it will become more difficult to distinguish between traditional and more sophisticated products.

However, because there are currently no alternatives in the law, product liability will continue to be the legal framework for determining who is responsible and under what circumstances consumers must be financially compensated when AI causes injuries.



~ Jai Krishna Ponnappan






See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems; Calo, Ryan; Driverless Vehicles and Liability; Trolley Problem.



References & Further Reading:



Kaye, Timothy S. 2015. ABA Fundamentals: Products Liability Law. Chicago: American Bar Association.

Owen, David. 2014. Products Liability in a Nutshell. St. Paul, MN: West Academic Publishing.

Turner, Jacob. 2018. Robot Rules: Regulating Artificial Intelligence. Cham, Switzerland: Palgrave Macmillan.

Weaver, John Frank. 2013. Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws. Santa Barbara, CA: Praeger.






Artificial Intelligence - Predictive Policing.

 





Predictive policing refers to proactive police techniques that are based on forecasts produced by software programs, particularly forecasts of high-risk places and times.

Since the late 2000s, these tactics have been increasingly adopted in the United States and in a number of other nations around the world.

Predictive policing has sparked heated debates about its legality and effectiveness.

Deterrence work in policing has always depended on some type of prediction.





Furthermore, from its inception in the late 1800s, criminology has included the study of trends in criminal behavior and the prediction of at-risk persons.

As early as the late 1920s, predictions were used in the criminal justice system.

Since the 1970s, an increased focus on geographical components of crime research, particularly spatial and environmental characteristics (such as street lighting and weather), has helped to establish crime mapping as a useful police tool.





Since the 1980s, proactive policing techniques have progressively used "hot-spot policing," which focuses police resources (particularly patrols) in regions where crime is most prevalent.

Predictive policing is sometimes misunderstood to mean that it prevents crime before it happens, as in the science fiction film Minority Report (2002).

Unlike conventional crime analysis approaches, predictive policing techniques rely on predictive modeling algorithms built into software programs that statistically analyze police data and/or apply machine learning.





Perry et al. (2013) identified three sorts of predictions that such systems can make:

(1) the places and times at which crime is more likely to occur;

(2) the persons who are more likely to commit crimes; and

(3) the identities of offenders and of likely victims of crimes.


"Predictive policing," on the other hand, generally relates mainly to the first and second categories of predictions.






Two forms of modeling are available in predictive policing software tools.

The geospatial form shows when and where crimes are likely to occur (down to the area or even the block) and leads to the mapping of crime "hot spots." The second form is individual-based modeling.

Programs that offer this sort of modeling use variables such as age, criminal history, and gang involvement to estimate the likelihood of a person being engaged in criminal activity, particularly a violent one.

These forecasts are often made in conjunction with the adoption of proactive police measures (Ridgeway 2013).

Geospatial modeling naturally goes hand in hand with police patrols and restrictions in crime "hot spots."

In the case of individual-based modeling, individuals assessed as having a high risk of becoming involved in criminal behavior are placed under observation or flagged to the authorities.
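As an illustration of how simple the geospatial variant can be in principle, the toy Python sketch below ranks grid cells by recent incident counts and flags the top cells as "hot spots." The grid size, incident data, and count-based scoring are all hypothetical; real products use far more elaborate statistical and machine-learning models.

```python
# Toy hot-spot mapping sketch. Commercial products use far more elaborate
# models; this only illustrates the basic count-and-rank idea.
from collections import Counter

CELL_SIZE = 0.01          # hypothetical grid cell size in degrees of lat/lon
TOP_K = 3                 # number of cells to flag as hot spots

# Hypothetical recent incidents as (latitude, longitude) pairs.
incidents = [
    (40.712, -74.006), (40.713, -74.005), (40.712, -74.007),
    (40.730, -73.990), (40.731, -73.991),
    (40.750, -73.980),
]

def cell(lat, lon):
    """Map a coordinate to a grid cell identifier."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

counts = Counter(cell(lat, lon) for lat, lon in incidents)

# Cells with the most recorded incidents are flagged for extra patrols.
for cell_id, n in counts.most_common(TOP_K):
    print(f"cell {cell_id}: {n} recent incidents")
```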

Since the late 2000s, police agencies have increasingly adopted software tools from technology companies that help them generate forecasts and implement predictive policing methods.

With the deployment of PredPol in 2011, the Santa Cruz Police Department became the first in the United States to employ such a strategy.





This software tool, which was inspired by earthquake aftershock prediction techniques, offers daily (and occasionally hourly) maps of "hot zones." It was first restricted to property offenses, but it was subsequently expanded to encompass violent crimes.

More than sixty police agencies throughout the United States already employ PredPol.

In 2012, the New Orleans Police Department was one of the first to employ Palantir to perform predictive policing.

Since then, many more software programs have been created, including CrimeScan, which analyzes seasonal and weekday trends in addition to crime statistics, and Hunchlab, which employs machine learning techniques and adds weather patterns.

Some police agencies utilize software tools that enable individual-based modeling in addition to geographic modeling.

The Chicago Police Department, for example, has relied on the Strategic Subject List (SSL) since 2013, which is generated by an algorithm that assesses the likelihood of persons being engaged in a shooting as either perpetrators or victims.

Individuals with the highest risk ratings are referred to the police for preventative action.




Predictive policing has been used in countries other than the United States.


PredPol was originally used in the United Kingdom in the early 2010s, and the Crime Anticipation System, which was first utilized in Amsterdam, was made accessible to all Dutch police departments in May 2017.

Several concerns have been raised about the accuracy of predictions produced by software algorithms employed in predictive policing.

Some argue that software systems are more objective than human crime data analyzers and can anticipate where crime will occur more accurately.

Predictive policing, from this viewpoint, may lead to a more efficient allocation of police resources (particularly police patrols) and is cost-effective, especially when software is used instead of paying human crime data analysts.

Opponents counter that the software's forecasts embed systemic biases, since they depend on police data that is itself heavily skewed by two sorts of flaws.

First, crime records reflect law enforcement efforts rather than criminal activity itself.

Arrests for marijuana possession, for example, provide information on the communities and people targeted by police in their anti-drug efforts.

Second, not all victims report crimes to the police, and not all crimes are documented in the same way.

Sexual crimes, child abuse, and domestic violence, for example, are generally underreported, and U.S. citizens are more likely than non-U.S. citizens to report a crime.

For all of these reasons, some argue that predictions produced by predictive policing software may merely reproduce prior policing patterns, resulting in a feedback loop: in areas where the programs forecast greater criminal activity, policing becomes more active, resulting in more arrests.

To put it another way, predictive police software tools may be better at predicting future policing than future criminal activity.
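That feedback loop can be made concrete with a toy simulation. In the hypothetical Python sketch below, two areas have identical true crime rates but different historical records; because patrols are allocated in proportion to recorded crime, and recorded crime depends on how many patrols are present to observe it, the initial disparity in the records persists and the gap in recorded crime keeps widening, even though the underlying behavior never differs.

```python
# Toy simulation of the predictive-policing feedback loop described above.
# Both areas have the same true crime rate; only the initial records differ.
import random

random.seed(0)

TRUE_CRIME_RATE = 100        # actual offenses per period, identical in both areas
DETECTION_PER_PATROL = 0.02  # fraction of offenses recorded per patrol unit
TOTAL_PATROLS = 20

recorded = {"area_A": 30, "area_B": 10}   # historical records differ (bias)

for period in range(10):
    total_recorded = sum(recorded.values())
    for area in recorded:
        # Patrols are allocated in proportion to previously recorded crime.
        patrols = TOTAL_PATROLS * recorded[area] / total_recorded
        # What gets recorded depends on patrol presence, not on true differences.
        detection_prob = min(1.0, patrols * DETECTION_PER_PATROL)
        new_records = sum(random.random() < detection_prob
                          for _ in range(TRUE_CRIME_RATE))
        recorded[area] += new_records

# area_A accumulates far more recorded crime than area_B despite identical
# underlying crime rates, so the software keeps flagging it as the "hot" area.
print(recorded)
```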

Furthermore, others argue that predictive police forecasts are racially prejudiced, given how historical policing has been far from colorblind.

Furthermore, since race and location of residency in the United States are intimately linked, the use of predictive policing may increase racial prejudices against nonwhite communities.

However, evaluating the effectiveness of predictive policing is difficult since it creates a number of methodological difficulties.

In fact, there is no statistical proof that it has a more beneficial impact on public safety than previous or other police approaches.

Finally, others argue that predictive policing is ineffective at decreasing crime, since police patrols merely displace criminal activity.

Predictive policing has sparked several debates.

The constitutionality of predictive policing's implicit preemptive action, for example, has been questioned, since the hot-spot policing that commonly accompanies it may involve stop-and-frisks or unjustified stopping, searching, and questioning of persons.

Predictive policing raises ethical concerns about how it may infringe on civil freedoms, particularly the legal notion of presumption of innocence.

Those placed on lists like the SSL, critics argue, should at least be able to contest their inclusion.

Furthermore, police agencies' lack of openness about how they use their data has been criticized, as has software firms' lack of transparency surrounding their algorithms and predictive models.

Because of this opacity, individuals have no way of knowing why they are on lists like the SSL or why their neighborhood is so often patrolled.

Members of civil rights groups are becoming more concerned about the use of predictive policing technologies.

Predictive Policing Today: A Shared Statement of Civil Rights Concerns was published in 2016 by a coalition of seventeen organizations, highlighting the technology's racial biases, lack of transparency, and other serious flaws that lead to injustice, particularly for people of color and nonwhite neighborhoods.

In June 2017, four journalists sued the Chicago Police Department under the Freedom of Information Act, demanding that the department release all information on the algorithm used to create the SSL.

While police departments are increasingly implementing software programs that predict crime, their use may decline in the future due to their mixed results in terms of public safety.

In 2018, police agencies in Kent (United Kingdom) and New Orleans (Louisiana) terminated their contracts with predictive policing software companies.



~ Jai Krishna Ponnappan









References & Further Reading:



Collins, Francis S., and Harold Varmus. 2015. “A New Initiative on Precision Medicine.” New England Journal of Medicine 372, no. 2 (February 26): 793–95.

Haskins, Julia. 2018. “Wanted: 1 Million People to Help Transform Precision Medicine: All of Us Program Open for Enrollment.” Nation’s Health 48, no. 5 (July 2018): 1–16.

Madara, James L. 2016 “AMA Statement on Precision Medicine Initiative.” February 25, 2016. Chicago, IL: American Medical Association.

Morrison, S. M. 2019. “Precision Medicine.” Lister Hill National Center for Biomedical Communications. U.S. National Library of Medicine. Bethesda, MD: National Institutes of Health, Department of Health and Human Services.





Artificial Intelligence - The Precision Medicine Initiative.

 





Precision medicine, or preventative and treatment measures that account for individual variability, is not a new concept.

For more than a century, blood type has been used to guide blood transfusions.




However, the recent development of large-scale biologic databases (such as the human genome sequence), powerful methods for characterizing patients (such as proteomics, metabolomics, genomics, diverse cellular assays, and even mobile health technology), and computational tools for analyzing large data sets has significantly improved the prospect of expanding this application to broader uses (Collins and Varmus 2015, 793).

The Precision Medicine Initiative (PMI), which was launched by President Barack Obama in 2015, is a long-term research endeavor including the National Institutes of Health (NIH) and a number of other public and commercial research organizations.

The initiative's goal, as stated, is to learn how a person's genetics, environment, and lifestyle can help determine viable disease prevention, treatment, and mitigation strategies.





It consists of both short- and long-term objectives.

The short-term objectives include advancing precision medicine in cancer research.

Scientists at the National Cancer Institute (NCI), for example, want to employ a better understanding of cancer's genetics and biology to develop new, more effective treatments for diverse kinds of the illness.

The long-term objectives of PMI are to introduce precision medicine to all aspects of health and health care on a wide scale.

To that goal, the National Institutes of Health (NIH) created the All of Us Research Program in 2018, which enlists the help of at least one million volunteers from throughout the country.



Participants will provide genetic information, biological samples, and other health-related information.

Contributors will be able to view their health information, as well as research that incorporates their data, throughout the study to promote open data sharing.

Researchers will utilize the information to look at a variety of illnesses in order to better forecast disease risk, understand how diseases develop, and develop better diagnostic and treatment options (Morrison 2019, 6).

The PMI is designed to provide doctors with the information and assistance they need to incorporate personalized medicine services into their practices in order to accurately focus therapy and enhance health outcomes.

It will also work to enhance patient access to their medical records and assist physicians in using electronic technologies to make health information more accessible, eliminate inefficiencies in health-care delivery, cut costs, and improve treatment quality (Madara 2016, 1).

While the initiative explicitly states that participants will not get a direct medical benefit as a result of their participation, it also states that their participation may lead to medical breakthroughs that will benefit future generations.



By extending evidence-based disease models to include individuals from historically underrepresented communities, the program aims to generate substantially more effective health treatments that assure quality and equality, in support of efforts to both prevent illness and decrease premature mortality (Haskins 2018, 1).


~ Jai Krishna Ponnappan







See also: 


Clinical Decision Support Systems; Computer-Assisted Diagnosis.



References & Further Reading:



Collins, Francis S., and Harold Varmus. 2015. “A New Initiative on Precision Medicine.” New England Journal of Medicine 372, no. 2 (February 26): 793–95.

Haskins, Julia. 2018. “Wanted: 1 Million People to Help Transform Precision Medicine: All of Us Program Open for Enrollment.” Nation’s Health 48, no. 5 (July 2018): 1–16.

Madara, James L. 2016. “AMA Statement on Precision Medicine Initiative.” February 25, 2016. Chicago, IL: American Medical Association.

Morrison, S. M. 2019. “Precision Medicine.” Lister Hill National Center for Biomedical Communications. U.S. National Library of Medicine. Bethesda, MD: National Institutes of Health, Department of Health and Human Services.




Artificial Intelligence - AI And Post-Scarcity.

 





Post-scarcity is a controversial idea about a future global economy in which a radical abundance of products generated at low cost utilizing sophisticated technologies replaces conventional human labor and wage payment.

Engineers, futurists, and science fiction writers have proposed a wide range of alternative economic and social structures for a post-scarcity world.

Typically, these models rely on hyperconnected systems of artificial intelligence, robotics, and molecular nanofactories and manufacturing to overcome scarcity, a pervasive feature of the current capitalist economy.

In many scenarios, sustainable energy comes from nuclear fusion power plants or solar farms, while materials come from asteroids mined by self-replicating smart robots.







Other post-industrial conceptions of socioeconomic structure, such as the information society, knowledge economy, imagination age, techno-utopia, singularitarianism, and nanosocialism, exist alongside post-scarcity as a material and metaphorical term.

Experts and futurists have proposed a broad variety of dates for the transition from a post-industrial capitalist economy to a post-scarcity economy, ranging from the 2020s to the 2070s and beyond.

The "Fragment on Machines" unearthed in Karl Marx's (1818–1883) unpublished notebooks is a predecessor of post-scarcity economic theory.

Advances in machine automation, according to Marx, would diminish manual work, cause capitalism to collapse, and usher in a socialist (and ultimately communist) economic system marked by leisure, artistic and scientific inventiveness, and material prosperity.





The modern concept of a post-scarcity economy can be traced back to political economist Louis Kelso's (1913–1991) mid-twentieth-century descriptions of conditions in which automation causes a near-zero drop in the price of goods, personal income becomes superfluous, and self-sufficiency and perpetual vacations become commonplace.

Kelso advocated for more equitable allocation of social and political power through democratizing capital ownership distribution.

This is significant because in a post-scarcity economy, individuals who hold capital will also own the technologies that allow for plenty.

For example, entrepreneur Mark Cuban has predicted that the world's first trillionaire will come from the artificial intelligence industry.

Artificial intelligence serves as a constant and pervasive analytics platform in the post-scarcity economy, harnessing machine productivity.



AI directs the robots and other machinery that transform raw materials into completed products and run other critical services like transportation, education, health care, and water supply.

Smart technologies ultimately outperform humans at practically every work-related endeavor, in every field of industry and line of business.

Traditional professions and employment marketplaces are becoming extinct.

The void created by the disappearance of wages and salaries is filled by a government-sponsored universal basic income or guaranteed minimum income.

The outcomes of such a situation may be utopian, dystopian, or somewhere in between.

Post-scarcity AI may be able to meet practically all human needs and desires, freeing individuals up to pursue creative endeavors, spiritual contemplation, hedonistic urges, and the pursuit of joy.

Alternatively, the aftermath of an AI takeover might be a worldwide disaster in which all of the earth's basic resources are swiftly consumed by self-replicating robots that multiply exponentially.

K. Eric Drexler (1955–), a pioneer in nanotechnology, coined the phrase "gray goo event" to describe this kind of worst-case ecological calamity.

An intermediate result might entail major changes in certain economic areas but not others.

According to Andrew Ware of the University of Cambridge's Centre for the Study of Existential Risk (CSER), AI will have a huge impact on agriculture, altering soil and crop management, weed control, and planting and harvesting (Ware 2018).

According to a survey of data compiled by the McKinsey Global Institute, managerial, professional, and administrative tasks are among the most difficult for an AI to handle—particularly in the helping professions of health care and education (Chui et al. 2016).

Science fiction writers have long fantasized about societies in which clever machines churn out most material goods for pennies.

The matter duplicator in Murray Leinster's 1935 short tale "The Fourth Dimensional Demonstrator" is an early example.

Leinster imagines a duplicator-unduplicator that takes advantage of the fact that the four-dimensional world (the three-dimensional physical universe plus time) has some thickness.

The technology snatches fragments from the past and transports them to the present.

Pete Davidson, who inherits the equipment from his inventor uncle, uses it to reproduce a banknote put on the machine's platform.

The note stays when the button is pressed, but it is joined by a replica of the note that existed seconds before the button was pressed.

This can be verified because the duplicate bill bears the same serial number as the original.



Davidson uses the equipment to comic effect, duplicating gold and then (accidentally) removing pet kangaroos, girlfriends, and police officers from the fourth dimension.

With Folded Hands (1947) by Jack Williamson introduces the Humanoids, a race of thinking black mechanicals who serve as domestics, doing all of humankind's labor in keeping with their duty to "serve and obey, and guard men from harm" (Williamson 1947, 7).

The robots seem well intentioned, but they are slowly removing all meaningful work from the human beings of the town of Two Rivers.

The Humanoids give every convenience, but they also eliminate any human risks, such as sports and alcohol, as well as any motivation to accomplish things for themselves.

The mechanicals even remove the doorknobs from homes, since people should not have to manage their own entries and exits.

People get anxious, afraid, and eventually bored.

For a century or more, science fiction writers have envisaged economies joined together by post-scarcity and vast possibility.

Ralph Williams' novella "Business as Usual, During Alterations" (1958) investigates human greed when an extraterrestrial species secretly dumps a score of matter-duplicating machines on the planet.

The electrical machines are identical, each with two metal pans and a single red button.

"A press of the button fulfills your heart's wish," reads a written caution on the duplicator.

It's also a chip embedded in human society's underpinnings.

It will be brought down by a few billion of these chips.

It's all up to you" (Williams 1968, 288).

Williams' narrative is set on the day the gadget emerges, and it takes place in Brown's Department Store.

John Thomas, the store's manager, shows exceptional foresight, understanding that the machines will utterly disrupt retail by eliminating both scarcity and the value of goods.

Rather than attempting to create artificial scarcity, Thomas comes up with the idea of duplicating the duplicators themselves and selling them on credit to customers.

He also reorients the business to offer low-cost items that can be duplicated in the pan.

Rather than exposing humanity's selfishness, the extraterrestrial species is confronted with an economy of abundance based on a completely different model of production and distribution, one in which distinctive and varied items are valued above uniform ones.

The phrase "Business as Usual, During Changes" appears on occasion in basic economics course curricula.

In the end, Williams' story anticipates the long-tail distributions of more specialized products and services described by writers on the economic and social implications of high technology such as Clay Shirky, Chris Anderson, and Erik Brynjolfsson.

Leinster returned to the theme in 1964 with The Duplicators, a short novel in which the human civilization of the planet Sord Three has lost much of its technological prowess, along with all electrical devices, and has devolved into a rough approximation of feudal society.

Humans are only able to utilize their so-called dupliers to produce necessary items like clothing and silverware.

Dupliers have hoppers into which vegetable matter is deposited and from which raw ingredients are drawn to create other, more complicated goods, but the copies pale in comparison to the originals.

One of the characters speculates that this may be due to a missing ingredient or components in the feedstock.

It is also self-evident that when poor samples are duplicated, the copies will be poorer still.

The heavy weight of plentiful but shoddy products bears down on the whole community.

Electronics, for example, are utterly gone since machines cannot recreate them.

When the story's protagonist, Link Denham, arrives on the planet in unduplicated attire, the locals are taken aback.

"And dupliers released to mankind would amount to treason," Denham speculates in the story, referring to the potential untold wealth as well as the collapse of human civilization throughout the galaxy if the dupliers become known and widely used off the planet: "And dupliers released to mankind would amount to treason." If a gadget exists that can accomplish every kind of job that the world requires, people who are the first to own it are wealthy beyond their wildest dreams.

However, pride will turn wealth into a marketable narcotic.

Men will no longer work since their services are no longer required.

Men will go hungry because there is no longer any need to feed them" (Leinster 1964, 66–67).

Native "uffts," an intelligent pig-like species trapped in slavery as servants, share the planet alongside humans.

The uffts are adept at gathering the raw materials needed by the dupliers, but they don't have direct access to them.

They are completely reliant on humans for some of the commodities they barter for, particularly beer, which they like.

Link Denham utilizes his mechanical skill to unlock the secrets of the dupliers, allowing them to make high-value blades and other weapons, and finally establishes himself as a kind of Connecticut Yankee in King Arthur's Court.

Too shortsighted to take full advantage of Denham's rediscovery of the proper recipes and proportions, humans and uffts alike devastate the environment as they feed more and more vegetable matter into the dupliers to manufacture the improved products.

This bothers Denham, who had hoped that the machines could be used to reintroduce modern agricultural implements to the planet, after which they could be used solely for repairing and creating new electronic goods in a new economic system he devised, dubbed "Householders for the Restoration of the Good Old Days" by the local humans.

The good times end soon enough, as the humans plan the re-subjugation of the native uffts, prompting the uffts to form a Ufftian Army of Liberation.

Link Denham deflects the uffts at first with generous helpings of bureaucratic red tape, then liberates them by privately developing beer-brewing equipment, ending their dependence on trade with humans.

The Diamond Age is a Hugo Award-winning bildungsroman about a society governed by nanotechnology and artificial intelligence, written by Neal Stephenson in 1995.

The economy is based on a system of public matter compilers, which are essentially molecular assemblers that act as fabricating devices and function similarly to K. Eric Drexler's proposed nanomachines in Engines of Creation (1986), which "guide chemical reactions by positioning reactive molecules with atomic precision" (Drexler 1986, 38).

All individuals are free to use the matter compilers, and raw materials and energy are delivered from the Source, a massive hole in the earth, through the Feed, a centralized utility system.

"Whenever Nell's clothing were too small, Harv would toss them in the deke bin and have the M.C. sew new ones for her." 

Tequila would use the M.C. to create Nell a beautiful outfit with lace and ribbons if they were going somewhere where they would see other parents with other girls" (Stephenson 1995, 53).

Nancy Kress's short story "Nano Comes to Clifford Falls" (2006) examines the social consequences of nanotechnology that can grant every citizen's desires.

It recycles the old but dismal cliché of humans growing lazy and complacent when handed technological solutions, but adds the twist that men in a society suddenly free of poverty are at risk of losing their morals.

"Printcrime" (2006), a very short article initially published in the magazine Nature by Cory Doctorow, who, by no coincidence, releases free works under a liberal Creative Commons license.

The tale follows Lanie, an eighteen-year-old girl who remembers the day, ten years earlier, when the police came for her father's printer-duplicator, which he was using to illegally produce pricey, artificially scarce drugs.

One of his customers had "shopped" him, tipping off the police to his activities.

In the second part of the narrative, Lanie's father has just been released from jail.

He immediately asks where he can "get a printer and some goop." He acknowledges that printing "rubbish" in the past was a mistake, but then whispers to Lanie that this time he is going to print more printers, lots more printers, one for everyone; that, he says, is worth going to jail for, worth a lot.

Makers (2009), also by Cory Doctorow, is about a do-it-yourself (DIY) maker subculture that hacks technology, financial systems, and living arrangements to "find means of remaining alive and happy even while the economy is going down the toilet" (Doctorow 2009).

The impact of a contraband carbon nanotube printing machine on the world's culture and economy is the premise of pioneering cyberpunk author Bruce Sterling's novella Kiosk (2008).

Boroslav, the protagonist, operates a pop-up commercial kiosk in a poor nation, most likely a future Serbia.

He begins by obtaining a standard quick prototyping 3D printer.

Children buy cards to program the gadget and manufacture waxy, nondurable toys or inexpensive jewelry.

Boroslav eventually gets his hands on a smuggled fabricator that can create indestructible objects, though only in a single color.

Those who return their items to be recycled into fresh raw material are granted refunds.

He is later discovered to be in possession of a gadget without the necessary intellectual property license, and in exchange for his release, he offers to share the device with the government for research purposes.

However, before handing over the gadget, he uses the fabricator to duplicate itself and conceals the copy in the jungle until the moment is right for a revolution.

The expansive techno-utopian Culture series of books (1987–2012) by author Iain M. Banks involves superintelligences living alongside humans and aliens in a galactic civilization marked by space socialism and a post-scarcity economy.

Minds, benign artificial intelligences, manage the Culture with the assistance of sentient drones.

The sentient living creatures in the novels do not work, since the Minds are superior and provide everything the citizens need.

This reality precipitates all kinds of conflict, as the biological population indulges in hedonistic pleasures and confronts the meaning of life and fundamental ethical dilemmas in a utilitarian cosmos.



~ Jai Krishna Ponnappan







See also: 


Ford, Martin; Technological Singularity; Workplace Automation.



References & Further Reading:



Aguilar-Millan, Stephen, Ann Feeney, Amy Oberg, and Elizabeth Rudd. 2010. “The Post-Scarcity World of 2050–2075.” Futurist 44, no. 1 (January–February): 34–40.

Bastani, Aaron. 2019. Fully Automated Luxury Communism. London: Verso.

Chase, Calum. 2016. The Economic Singularity: Artificial Intelligence and the Death of Capitalism. San Mateo, CA: Three Cs.

Chui, Michael, James Manyika, and Mehdi Miremadi. 2016. “Where Machines Could Replace Humans—And Where They Can’t (Yet).” McKinsey Quarterly, July 2016. http://pinguet.free.fr/wheremachines.pdf.

Doctorow, Cory. 2006. “Printcrime.” Nature 439 (January 11). https://www.nature.com/articles/439242a.

Doctorow, Cory. 2009. “Makers, My New Novel.” Boing Boing, October 28, 2009. https://boingboing.net/2009/10/28/makers-my-new-novel.html.

Drexler, K. Eric. 1986. Engines of Creation: The Coming Era of Nanotechnology. New York: Doubleday.

Kress, Nancy. 2006. “Nano Comes to Clifford Falls.” In Nano Comes to Clifford Falls and Other Stories. Urbana, IL: Golden Gryphon Press.

Leinster, Murray. 1964. The Duplicators. New York: Ace Books.

Pistono, Federico. 2014. Robots Will Steal Your Job, But That’s OK: How to Survive the Economic Collapse and Be Happy. Lexington, KY: Createspace.

Saadia, Manu. 2016. Trekonomics: The Economics of Star Trek. San Francisco: Inkshares.

Stephenson, Neal. 1995. The Diamond Age: Or, a Young Lady’s Illustrated Primer. New York: Bantam Spectra.

Ware, Andrew. 2018. “Can Artificial Intelligence Alleviate Resource Scarcity?” Inquiry Journal 4 (Spring): n.p. https://core.ac.uk/reader/215540715.

Williams, Ralph. 1968. “Business as Usual, During Alterations.” In 100 Years of Science Fiction, edited by Damon Knight, 285–307. New York: Simon and Schuster.

Williamson, Jack. 1947. “With Folded Hands.” Astounding Science Fiction 39, no. 5 (July): 6–45.


What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...