
AI - Why Software Is Eating The World.

 




Marc Andreessen, developer of the Mosaic web browser, wealthy entrepreneur, and famous Silicon Valley venture investor, wrote an essay titled "Why Software is Eating the World" (Andreessen 2011).


The Wall Street Journal published the piece on August 20, 2011.

In it, Andreessen outlines the transition from a hardware-based to a software-based economy. Over the years, the article has been hailed as prescient, and its title has become an aphorism.

Much of the piece's influence, however, stems from its charismatic and highly regarded author.


Andreessen has been dubbed one of the most important intellectuals in Silicon Valley.


Silicon Valley, a roughly 150-square-mile area south of San Francisco, is often regarded as the world's center of technology innovation.

Andreessen personifies Silicon Valley's techno-optimism and its belief in disruptive innovation: armies of start-ups developing technologies that create new markets, disrupt existing industries, and eventually displace incumbents.



The background of Andreessen's piece is the economic process that economist Joseph Schumpeter dubbed "creative destruction." 


After graduating from the University of Illinois at Urbana-Champaign with a bachelor's degree in computer science in 1993, Andreessen co-created the Mosaic web browser with Eric Bina, a user-friendly browser that could run on a wide variety of computers.


Andreessen and Jim Clark launched Mosaic Communications Corporation, an internet start-up in Mountain View, California, in 1994 to capitalize on Mosaic's commercial potential.


The firm was renamed Netscape Communications, and the web browser was renamed Netscape Navigator; it went public in 1995, and Netscape's IPO is widely seen as the unofficial start of the dot-com era (1995–2000).

AOL later acquired the firm for $4.2 billion.

In 1999, Andreessen cofounded Loudcloud (later renamed Opsware), a pioneering cloud computing startup that provided software as a service, along with computing and hosting services, to internet and e-commerce businesses.

Hewlett-Packard bought Opsware for $1.6 billion in 2007.


Andreessen and Ben Horowitz, his longtime business partner at both Netscape and Loudcloud, founded a venture capital company in 2009.


Andreessen Horowitz, or a16z (the "a" in Andreessen and the "z" in Horowitz are separated by sixteen letters), has since invested in firms including Airbnb, Box, Facebook, Groupon, Instagram, Lyft, Skype, and Zynga.

A16z was created to invest in forward-thinking entrepreneurs with bold ideas, disruptive technology, and the ability to change the course of history.

Andreessen has been on the boards of Facebook, Hewlett-Packard, and eBay over his career.

He is an ardent supporter of artificial intelligence, and A16z has invested in a slew of AI-driven start-ups as a result.



"Software Eating the World" has been interpreted in popular and scholarly literature in terms of digitalization: 


The rise of the internet and the spread of smartphones, tablet computers, and other disruptive electronic devices will chew up industry after industry in a postindustrial economy, from media to financial services to health care.


In an essay headlined "Software is Not Eating the World," VentureBeat contributor Dylan Tweney gave an alternate viewpoint in October 2011, emphasizing the continued relevance of the hardware that underpins computer systems.

He said, "You'll pay Apple, RIM, or Nokia for your phone." 

"You'll continue to pay Intel for the chips, and Intel will continue to pay Applied Materials for the multimillion-dollar machines that produce those chips" (Tweney 2011).



To be clear, the persistence of conventional activities, such as tangible items and storefronts, and the rise of software-driven decision-making are not mutually exclusive.


In fact, technology may be the lifeblood of conventional business.

Andreessen pointed out in his piece that, in the not-too-distant future, a company's stock value would be determined by the quality of its software rather than by how many things it sells.


"Software is also consuming a large portion of the value chain of sectors that are commonly thought to reside largely in the physical world." 

"Software operates today's automobiles, regulates safety features, entertains passengers, leads drivers to their destinations, and links each car to mobile, satellite, and GPS networks," he said.



"The move toward hybrid and electric automobiles will further speed the software shift—electric cars are entirely controlled by computers." 


And Google and the main auto makers are already working on software-powered driverless vehicles" (Andreessen 2011).

In other words, a software-based economy will not do away with the visual appeal of great products, the magnetic attraction of great brands, or the benefits of a diversified portfolio, because companies will continue to build great products, brands, and businesses as they always have.


Software, on the other hand, will eventually supplant goods, brands, and financial strategies as the primary source of value generation for businesses.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Workplace Automation.


References & Further Reading:


Andreessen, Marc. 2011. “Why Software Is Eating the World.” The Wall Street Journal, August 20, 2011. https://www.wsj.com/articles/SB10001424053111903480904576512250915629460.

Christensen, Clayton M. 2016. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Third edition. Boston, MA: Harvard Business School Press.

Tweney, Dylan. 2011. “Dylan’s Desk: Software Is Not Eating the World.” VentureBeat, October 5, 2011. https://venturebeat.com/2011/10/05/dylans-desk-hardware/



Artificial Intelligence - AI And Post-Scarcity.

 





Post-scarcity is a controversial idea about a future global economy in which a radical abundance of products generated at low cost utilizing sophisticated technologies replaces conventional human labor and wage payment.

Engineers, futurists, and science fiction writers have proposed a wide range of alternative economic and social structures for a post-scarcity world.

Typically, these models rely on hyperconnected systems of artificial intelligence, robotics, and molecular nanofactories and manufacturing to overcome scarcity, a pervasive aspect of the current capitalist economy.

In many scenarios, sustainable energy comes from nuclear fusion power plants or solar farms, while materials come from asteroids mined by self-replicating smart robots.







Other post-industrial conceptions of socioeconomic structure, such as the information society, knowledge economy, imagination age, techno-utopia, singularitarianism, and nanosocialism, exist alongside post-scarcity as a material and metaphorical term.

Experts and futurists have proposed a broad variety of dates for the transition from a post-industrial capitalist economy to a post-scarcity economy, ranging from the 2020s to the 2070s and beyond.

The "Fragment on Machines" unearthed in Karl Marx's (1818–1883) unpublished notebooks is a predecessor of post-scarcity economic theory.

Advances in machine automation, according to Marx, would diminish manual work, cause capitalism to collapse, and usher in a socialist (and ultimately communist) economic system marked by leisure, artistic and scientific inventiveness, and material prosperity.





The modern concept of a post-scarcity economy can be traced back to political economist Louis Kelso's (1913–1991) mid-twentieth-century descriptions of conditions in which automation causes a near-zero drop in the price of goods, personal income becomes superfluous, and self-sufficiency and perpetual vacations become commonplace.

Kelso advocated for more equitable allocation of social and political power through democratizing capital ownership distribution.

This is significant because in a post-scarcity economy, individuals who hold capital will also own the technologies that allow for plenty.

For example, entrepreneur Mark Cuban has predicted that the world's first trillionaire will emerge from the artificial intelligence industry.

Artificial intelligence serves as a constant and pervasive analytics platform in the post-scarcity economy, harnessing machine productivity.



AI directs the robots and other machinery that transform raw materials into completed products and run other critical services like transportation, education, health care, and water supply.

In practically every work-related endeavor, field of industry, and line of business, smart technologies ultimately outperform humans.

Traditional professions and employment marketplaces are becoming extinct.

The void created by the disappearance of wages and salaries is filled by a government-sponsored universal basic income or guaranteed minimum income.

The outcomes of such a situation may be utopian, dystopian, or somewhere in between.

Post-scarcity AI may be able to meet practically all human needs and desires, freeing individuals up to pursue creative endeavors, spiritual contemplation, hedonistic urges, and the pursuit of joy.

Alternatively, the aftermath of an AI takeover might be a worldwide disaster in which all of the earth's basic resources are swiftly consumed by self-replicating robots that multiply exponentially.

K. Eric Drexler (1955–), a pioneer in nanotechnology, coined the phrase "gray goo event" to describe this kind of worst-case ecological calamity.

An intermediate result might entail major changes in certain economic areas but not others.

According to Andrew Ware of the University of Cambridge's Centre for the Study of Existential Risk (CSER), AI will have a huge impact on agriculture, altering soil and crop management, weed control, and planting and harvesting (Ware 2018).

According to a survey of data compiled by the McKinsey Global Institute, managerial, professional, and administrative tasks are among the most difficult for an AI to handle—particularly in the helping professions of health care and education (Chui et al. 2016).

Science fiction writers fantasize about societies in which clever machines churn out most material goods for pennies on the dollar.

The matter duplicator in Murray Leinster's 1935 short tale "The Fourth Dimensional Demonstrator" is an early example.

Leinster invents a duplicator-unduplicator that takes advantage of the fact that the four-dimensional world (the three-dimensional physical universe plus time) has some thickness.

The technology snatches fragments from the past and transports them to the present.

Pete Davidson, who inherits the equipment from his inventor uncle, uses it to reproduce a banknote put on the machine's platform.

The note stays when the button is pressed, but it is joined by a replica of the note that existed seconds before the button was pressed.

This can be verified because the duplicate bears the same serial number as the original.



Davidson uses the equipment to comic effect, duplicating gold and then (accidentally) removing pet kangaroos, girlfriends, and police officers from the fourth dimension.

With Folded Hands (1947) by Jack Williamson introduces the Humanoids, a species of thinking black mechanicals who serve as domestics, doing all of humankind's labor and adhering to their responsibility to "serve and obey, and defend men from danger" (Williamson 1947, 7).

The robots seem well-intentioned, but they slowly remove all meaningful work from the human beings in the village of Two Rivers.

The Humanoids provide every convenience, but they also eliminate any risky human activities, such as sports and alcohol, along with any motivation for people to do things for themselves.

The mechanicals even remove doorknobs from homes, since people should not have to manage their own entries and exits.

People get anxious, afraid, and eventually bored.

For a century or more, science fiction writers have envisioned economies defined by post-scarcity and vast possibility.

Ralph Williams's novella "Business as Usual, During Alterations" (1958) investigates human greed when an extraterrestrial species secretly dumps a score of matter-duplicating machines on the planet.

Each of the electrical machines is identical, with two metal pans and a single red button.

"A press of the button fulfills your heart's wish," reads a written caution on the duplicator.

It's also a chip embedded in human society's underpinnings.

It will be brought down by a few billion of these chips.

It's all up to you" (Williams 1968, 288).

Williams' narrative is set on the day the gadget emerges, and it takes place in Brown's Department Store.

John Thomas, the manager, has exceptional vision, understanding that the duplicators would utterly disrupt retail by eliminating both scarcity and the value of goods.

Rather than attempting to create artificial scarcity, Thomas comes up with the idea of duplicating the duplicators and selling them on credit to customers.

He also reorients the business to offer low-cost items that can be duplicated in the pan.

Instead of testing humanity's selfishness, the extraterrestrial species is presented with an abundant economy based on a completely different model of production and distribution, where distinctive and varied items are valued above uniform ones.

The phrase "Business as Usual, During Changes" appears on occasion in basic economics course curricula.

In the end, Williams's story anticipates the long-tail distributions of more specialized products and services described by writers on the economic and social implications of high technology, such as Clay Shirky, Chris Anderson, and Erik Brynjolfsson.

In 1964, Leinster returned with The Duplicators, a short book. In this novel, the planet Sord Three's human civilization has lost much of its technological prowess, as well as all electrical devices, and has devolved into a rough approximation of feudal society.

Humans are only able to utilize their so-called dupliers to produce necessary items like clothing and silverware.

Dupliers have hoppers into which vegetable matter is deposited and from which raw ingredients are harvested to create other, more complicated commodities, but the copies pale in comparison to the originals.

One of the characters speculates that this may be due to a missing ingredient or components in the feedstock.

It is also self-evident that when poor samples are duplicated, the copies are weaker still.

The heavy weight of plentiful but shoddy products bears down on the whole community.

Electronics, for example, are utterly gone since machines cannot recreate them.

When the story's protagonist, Link Denham, arrives on the planet in unduplicated attire, the locals are taken aback.

"And dupliers released to mankind would amount to treason," Denham speculates in the story, referring to the potential untold wealth as well as the collapse of human civilization throughout the galaxy if the dupliers become known and widely used off the planet: "And dupliers released to mankind would amount to treason." If a gadget exists that can accomplish every kind of job that the world requires, people who are the first to own it are wealthy beyond their wildest dreams.

However, pride will turn wealth into a marketable narcotic.

Men will no longer work since their services are no longer required.

Men will go hungry because there is no longer any need to feed them" (Leinster 1964, 66–67).

Native "uffts," an intelligent pig-like species trapped in slavery as servants, share the planet alongside humans.

The uffts are adept at gathering the raw materials needed by the dupliers, but they don't have direct access to them.

They are completely reliant on humans for some of the commodities they barter for, particularly beer, which they like.

Link Denham utilizes his mechanical skill to unlock the secrets of the dupliers, allowing them to make high-value blades and other weapons, and finally establishes himself as a kind of Connecticut Yankee in King Arthur's Court.

Humans and uffts alike devastate the environment as they feed more and more vegetable matter into the dupliers to manufacture the enhanced products, too shortsighted to take full advantage of Denham's rediscovery of the proper recipes and proportions.

This bothers Denham, who had hoped that the machines could be used to reintroduce modern agricultural implements to the planet, after which they could be used solely for repairing and creating new electronic goods in a new economic system he devised, dubbed "Householders for the Restoration of the Good Old Days" by the local humans.

The good times end soon enough, as humans plan the re-subjugation of the native uffts, prompting them to form a Ufftian Army of Liberation.

Link Denham deflects the uffts at first with generous helpings of bureaucratic red tape, then liberates them by privately developing beer-brewing equipment, ending their dependence on human trade.

The Diamond Age is a Hugo Award-winning bildungsroman about a society governed by nanotechnology and artificial intelligence, written by Neal Stephenson in 1995.

The economy is based on a system of public matter compilers, which are essentially molecular assemblers that act as fabricating devices and function similarly to K. Eric Drexler's proposed nanomachines in Engines of Creation (1986), which "guide chemical reactions by positioning reactive molecules with atomic precision" (Drexler 1986, 38).

All individuals are free to use the matter compilers, and raw materials and energy are supplied from the Source, a massive hole in the earth, through the Feed, a centralized utility system.

"Whenever Nell's clothing were too small, Harv would toss them in the deke bin and have the M.C. sew new ones for her." 

Tequila would use the M.C. to create Nell a beautiful outfit with lace and ribbons if they were going somewhere where they would see other parents with other girls" (Stephenson 1995, 53).

Nancy Kress's short story "Nano Comes to Clifford Falls" (2006) examines the societal consequences of nanotechnology that grants every citizen's desire.

It recycles the old but dismal cliche of humans becoming lazy and complacent when presented with technology solutions, but this time it adds the twist that males in a society suddenly free of poverty are at risk of losing their morals.

"Printcrime" (2006), a very short article initially published in the magazine Nature by Cory Doctorow, who, by no coincidence, releases free works under a liberal Creative Commons license.

The tale follows Lanie, an eighteen-year-old girl who remembers the day ten years earlier when the police came for her father's printer-duplicator, which he had been using to illegally produce pricey, artificially scarce drugs.

One of his customers had "shopped" him, alerting the authorities to his activities.

In the second part of the narrative, Lanie's father has just been released from jail.

He immediately asks where he can "get a printer and some goop," acknowledging that printing "rubbish" in the past was a mistake, but then whispers to Lanie that this time he is going to produce more printers, lots more printers, one for everyone; that, he says, is worth going to jail for.

Makers (2009), also by Cory Doctorow, is about a do-it-yourself (DIY) maker subculture that hacks technology, financial systems, and living arrangements to "find means of remaining alive and happy even while the economy is going down the toilet," as the author puts it (Doctorow 2009).

The impact of a contraband carbon nanotube printing machine on the world's culture and economy is the premise of pioneering cyberpunk author Bruce Sterling's novella Kiosk (2008).

Boroslav, the protagonist, operates a pop-up commercial kiosk in a poor nation, most likely a future Serbia.

He begins by obtaining a standard rapid-prototyping 3D printer.

Children buy cards to program the gadget and manufacture waxy, nondurable toys or inexpensive jewelry.

Boroslav eventually gets his hands on a smuggled fabricator that can create indestructible objects, though in only one color.

Those who return their items to be recycled into fresh raw material are granted refunds.

He is later discovered to be in possession of a gadget without the necessary intellectual property license, and in exchange for his release, he offers to share the device with the government for research purposes.

However, before handing over the gadget, he uses the fabricator to duplicate itself and hides the copy in the jungle until the moment is right for a revolution.

The expansive techno-utopian Culture series of books (1987–2012) by author Iain M. Banks involves superintelligences living alongside humans and aliens in a galactic civilization marked by space socialism and a post-scarcity economy.

Minds, benign artificial intelligences, manage the Culture with the assistance of sentient drones.

The sentient living creatures in the novels do not work, since the superior Minds provide everything the citizens need.

This reality precipitates all kinds of conflict as the biological population indulges in hedonistic pleasures and confronts the meaning of life and fundamental ethical dilemmas in a utilitarian cosmos.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.




See also: 


Ford, Martin; Technological Singularity; Workplace Automation.



References & Further Reading:



Aguilar-Millan, Stephen, Ann Feeney, Amy Oberg, and Elizabeth Rudd. 2010. “The Post-Scarcity World of 2050–2075.” Futurist 44, no. 1 (January–February): 34–40.

Bastani, Aaron. 2019. Fully Automated Luxury Communism. London: Verso.

Chase, Calum. 2016. The Economic Singularity: Artificial Intelligence and the Death of Capitalism. San Mateo, CA: Three Cs.

Chui, Michael, James Manyika, and Mehdi Miremadi. 2016. “Where Machines Could Replace Humans—And Where They Can’t (Yet).” McKinsey Quarterly, July 2016. http://pinguet.free.fr/wheremachines.pdf.

Doctorow, Cory. 2006. “Printcrime.” Nature 439 (January 11). https://www.nature.com/articles/439242a.

Doctorow, Cory. 2009. “Makers, My New Novel.” Boing Boing, October 28, 2009. https://boingboing.net/2009/10/28/makers-my-new-novel.html.

Drexler, K. Eric. 1986. Engines of Creation: The Coming Era of Nanotechnology. New York: Doubleday.

Kress, Nancy. 2006. “Nano Comes to Clifford Falls.” In Nano Comes to Clifford Falls and Other Stories. Urbana, IL: Golden Gryphon Press.

Leinster, Murray. 1964. The Duplicators. New York: Ace Books.

Pistono, Federico. 2014. Robots Will Steal Your Job, But That’s OK: How to Survive the Economic Collapse and Be Happy. Lexington, KY: Createspace.

Saadia, Manu. 2016. Trekonomics: The Economics of Star Trek. San Francisco: Inkshares.

Stephenson, Neal. 1995. The Diamond Age: Or, a Young Lady’s Illustrated Primer. New York: Bantam Spectra.

Ware, Andrew. 2018. “Can Artificial Intelligence Alleviate Resource Scarcity?” Inquiry Journal 4 (Spring): n.p. https://core.ac.uk/reader/215540715.

Williams, Ralph. 1968. “Business as Usual, During Alterations.” In 100 Years of Science Fiction, edited by Damon Knight, 285–307. New York: Simon and Schuster.

Williamson, Jack. 1947. “With Folded Hands.” Astounding Science Fiction 39, no. 5 (July): 6–45.


Artificial Intelligence - Speech Recognition And Natural Language Processing

 


Natural language processing (NLP) is a branch of artificial intelligence that entails mining human text and voice in order to produce or reply to human enquiries in a legible or natural manner.

To decode the ambiguities and opacities of genuine human language, NLP has needed advances in statistics, machine learning, linguistics, and semantics.

Chatbots will employ natural language processing to connect with humans across text-based and voice-based interfaces in the future.

Interactions between people with varying talents and demands will be supported by computer assistants.

By making search more natural, they will enable natural language searches of huge volumes of information, such as that found on the internet.

They may also incorporate useful ideas or nuggets of information into a variety of circumstances, including meetings, classes, and informal discussions.



They may even be able to "read" and react in real time to the emotions or moods of human speakers (so-called sentiment analysis).

By 2025, the market for NLP hardware, software, and services might be worth $20 billion per year.

Speech recognition, often known as voice recognition, has a long history.

Harvey Fletcher, a physicist who pioneered research showing the link between voice energy, frequency spectrum, and the perception of sound by a listener, initiated research into automated speech recognition and transcription at Bell Labs in the 1930s.

Most voice recognition algorithms nowadays are based on his research.

By 1940, Homer Dudley, another Bell Labs scientist, had received patents for the Voder voice synthesizer, which imitated human vocalizations, and for a parallel band-pass vocoder that could pass sound samples through narrow-band filters to identify their energy levels.

By putting the recorded energy levels through various filters, the latter gadget might convert them back into crude approximations of the original sounds.

By the 1950s, Bell Labs researchers had worked out how to build a system that could do more than mimic speech.

During that decade, digital technology had progressed to the point that the system could detect individual spoken word portions by comparing their frequencies and energy levels to a digital sound reference library.

In essence, the system made an informed guess about the utterance being spoken.

The pace of change was gradual.

By the mid-1950s, Bell Labs systems could distinguish around ten syllables uttered by a single speaker.

Toward the end of the decade, researchers at MIT, IBM, Kyoto University, and University College London were working on recognition systems that used statistics to detect words containing multiple phonemes.

Phonemes are sound units that are perceived as separate from one another by listeners.



Additionally, progress was being made on systems that could recognize the voices of multiple speakers.

Allen Newell headed the first professional automated speech recognition group, which was founded in 1971.

The research team split their time between acoustics, parametrics, phonemics, lexical ideas, sentence processing, and semantics, among other levels of knowledge generation.

Some of the issues examined by the group were investigated in the 1970s with funding from the Defense Advanced Research Projects Agency (DARPA).

DARPA was interested in the technology because it might be used to handle massive amounts of spoken data generated by multiple government departments and transform that data into insights and strategic solutions to challenges.

Techniques such as dynamic time warping and continuous speech recognition advanced during this period.

Computer technology progressed significantly, and numerous mainframe and minicomputer manufacturers started to perform research in natural language processing and voice recognition.

The Speech Understanding Research (SUR) project at Carnegie Mellon University was one of the DARPA-funded projects.



The SUR project, directed by Raj Reddy, produced numerous groundbreaking speech recognition systems, including Hearsay, Dragon, Harpy, and Sphinx.

Harpy is notable in that it employs the beam search approach, which has been a standard in such systems for decades.

Beam search is a heuristic search technique that explores a graph by expanding only a limited number of the most promising nodes at each step.

Beam search is an improved version of best-first search that uses less memory.

It's a greedy algorithm in the sense that it uses the problem-solving heuristic of making the locally best decision at each step in the hopes of obtaining a global best choice.

In general, graph search algorithms have served as the foundation for voice recognition research for decades, just as they have in the domains of operations research, game theory, and artificial intelligence.
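For illustration only, here is a minimal Python sketch of beam search applied to a toy word lattice of the kind a recognizer might produce; the candidate words, probabilities, and beam width are invented for the example and are not taken from Harpy or any real system.

import math
import heapq

# Toy "lattice": at each time step the recognizer proposes candidate words
# with made-up probabilities combining acoustic and language-model scores.
lattice = [
    {"recognize": 0.6, "wreck a nice": 0.4},
    {"speech": 0.7, "beach": 0.3},
]

def beam_search(lattice, beam_width=2):
    """Keep only the `beam_width` best partial hypotheses at each step."""
    beam = [(0.0, [])]  # (cumulative log-probability, word sequence)
    for step in lattice:
        candidates = []
        for logp, words in beam:
            for word, p in step.items():
                candidates.append((logp + math.log(p), words + [word]))
        # Prune everything except the most promising hypotheses.
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return beam

for logp, words in beam_search(lattice):
    print(" ".join(words), round(math.exp(logp), 3))

Because pruning discards low-scoring branches early, the search never enumerates every possible word sequence, which is what makes the approach tractable for large vocabularies.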

By the 1980s and 1990s, data processing and algorithms had advanced to the point where researchers could use statistical models to identify whole strings of words, even phrases.

The Pentagon remained the field's leader, but IBM's work had progressed to the point where the corporation was on the verge of manufacturing a computerized voice transcription device for its corporate clients.

Bell Labs had developed sophisticated digital systems for automatic voice dialing of telephone numbers.

Other applications that seemed to be within reach were closed captioned transcription of television broadcasts and personal automatic reservation systems.

The comprehension of spoken language has dramatically improved.

The Air Travel Information System (ATIS) was the first commercial system to emerge from DARPA funding.

New obstacles arose, such as "disfluencies," or natural pauses, corrections, casual speech, interruptions, and verbal fillers like "oh" and "um" that organically formed from conversational speaking.

In 1995, every copy of the Windows 95 operating system came with the Speech Application Programming Interface (SAPI).

SAPI (which comprised subroutine definitions, protocols, and tools) made it easier for programmers and developers to include speech recognition and voice synthesis into Windows programs.

SAPI, in particular, gave other software developers the ability to build and freely share their own speech recognition engines.

It gave NLP technology a big boost in terms of increasing interest and generating wider markets.

The Dragon line of voice recognition and dictation software programs is one of the most well-known mass-market NLP solutions.

The popular Dragon NaturallySpeaking program aims to provide automatic real-time, large-vocabulary, continuous-speech dictation with the use of a headset or microphone.

The software took fifteen years to create and was first released in 1997.

It is still widely regarded as the gold standard for personal computing today.

One hour of digitally recorded speech takes the program roughly 4–8 hours to transcribe, although dictation on screen is virtually instantaneous.

Similar software is packaged with voice dictation functions in smart phones, which converts regular speech into text for usage in text messages and emails.

The large amount of data accessible on the cloud, as well as the development of gigantic archives of voice recordings gathered from smart phones and electronic peripherals, have benefited industry tremendously in the twenty-first century.

Companies have been able to enhance acoustic and linguistic models for voice processing as a result of these massive training data sets.

To match observed and "classified" sounds, traditional speech recognition systems employed statistical learning methods.

Since the 1990s, Markov and hidden Markov model systems, combined with reinforcement learning and pattern recognition algorithms, have increasingly been used in speech processing.
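As a purely illustrative sketch of the hidden Markov model idea, the Python snippet below decodes a short sequence of invented acoustic labels with the Viterbi algorithm; the states, observations, and probabilities are made up and are not drawn from any real recognizer.

import math

# Hypothetical two-phoneme HMM: hidden states emit observed acoustic labels.
states = ["s", "iy"]
start_p = {"s": 0.6, "iy": 0.4}
trans_p = {"s": {"s": 0.7, "iy": 0.3}, "iy": {"s": 0.4, "iy": 0.6}}
emit_p = {"s": {"hiss": 0.8, "tone": 0.2}, "iy": {"hiss": 0.1, "tone": 0.9}}

def viterbi(observations):
    """Return the most likely hidden state sequence for the observations."""
    # Each trellis row maps a state to (best log-probability so far, backpointer).
    trellis = [{s: (math.log(start_p[s]) + math.log(emit_p[s][observations[0]]), None)
                for s in states}]
    for obs in observations[1:]:
        row = {}
        for s in states:
            best_prev = max(states,
                            key=lambda p: trellis[-1][p][0] + math.log(trans_p[p][s]))
            row[s] = (trellis[-1][best_prev][0] + math.log(trans_p[best_prev][s])
                      + math.log(emit_p[s][obs]), best_prev)
        trellis.append(row)
    # Trace the backpointers to recover the best path.
    last = max(states, key=lambda s: trellis[-1][s][0])
    path = [last]
    for row in reversed(trellis[1:]):
        path.append(row[path[-1]][1])
    return list(reversed(path))

print(viterbi(["hiss", "tone", "tone"]))  # ['s', 'iy', 'iy']

In a real recognizer the states would correspond to phoneme or sub-phoneme units and the probabilities would come from trained acoustic and language models, but the decoding step follows the same dynamic-programming logic.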

Because of the large amounts of data available for matching and the strength of deep learning algorithms, error rates have dropped dramatically in recent years.

Despite the fact that linguists argue that natural languages need flexibility and context to be effectively comprehended, these approximation approaches and probabilistic functions are exceptionally strong in deciphering and responding to human voice inputs.

The n-gram, a contiguous sequence of n elements from a given sample of text or speech, is now a foundation of computational linguistics.

Depending on the application, the elements might be phonemes, syllables, letters, words, or base pairs.

N-grams are usually gathered from text or voice.

In terms of proficiency, no other method presently outperforms this one.
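As a small, self-contained illustration (the sample sentence, tokenization, and counts are arbitrary), the Python snippet below extracts n-grams from text and uses bigram counts as a crude language model.

from collections import Counter

def ngrams(tokens, n):
    """Return the contiguous n-element sequences in a list of tokens."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

text = "software is eating the world and software is eating the value chain"
tokens = text.split()

unigram_counts = Counter(ngrams(tokens, 1))
bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts.most_common(3))

# A crude bigram "language model": the probability of "is" given "software".
p = bigram_counts[("software", "is")] / unigram_counts[("software",)]
print(p)  # 1.0 in this tiny sample

Production systems estimate such probabilities from billions of words and smooth them to handle unseen sequences, but the underlying counting idea is the same.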

For their virtual assistants, Google and Bing have indexed the whole internet and incorporate user query data in their language models for voice search applications.

Today's systems are starting to identify new terms from their datasets on the fly, analogous to what in humans is called "lifelong learning," although this remains a novel technique.

Companies working in natural language processing will desire solutions that are portable (not reliant on distant servers), deliver near-instantaneous response, and provide a seamless user experience in the future.

Richard Socher, a deep learning specialist and the founder and CEO of the artificial intelligence start-up MetaMind, is working on a strong example of next-generation NLP.

Based on massive chunks of natural language information, the company's technology employs a neural networking architecture and reinforcement learning algorithms to provide responses to specific and highly broad inquiries.

Salesforce, the digital marketing powerhouse, has since purchased the startup.

Text-to-speech analysis and advanced conversational interfaces in automobiles will be in high demand in the future, as will speech recognition and translation across cultures and languages, automatic speech understanding in noisy environments like construction sites, and specialized voice systems to control office and home automation processes and internet-connected devices.

To work, any of these applications for enhancing human speech will require the collection of massive data sets of natural language.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Natural Language Generation; Newell, Allen; Workplace Automation.


References & Further Reading:


Chowdhury, Gobinda G. 2003. “Natural Language Processing.” Annual Review of Information Science and Technology 37: 51–89.

Jurafsky, Daniel, and James H. Martin. 2014. Speech and Language Processing. Second edition. Upper Saddle River, NJ: Pearson Prentice Hall.

Mahavan, Radhika. n.d. “Natural Language Processing: Current Applications and Future Possibilities.” https://www.techemergence.com/nlp-current-applications-and-future-possibilities/.

Manning, Christopher D., and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.

Metz, Cade. 2015. “AI’s Next Frontier: Machines That Understand Language.” Wired, June 24, 2015. https://www.wired.com/2015/06/ais-next-frontier-machines-understand-language/.

Nusca, Andrew. 2011. “Say Command: How Speech Recognition Will Change the World.” ZDNet, November 2, 2011. https://www.zdnet.com/article/say-command-how-speech-recognition-will-change-the-world/.





Artificial Intelligence - Natural Language Generation Or NLG.

 




Natural Language Generation, or NLG, is the computer process by which information that cannot be easily comprehended by humans is converted into a message that is optimized for human comprehension, as well as the name of the AI area dedicated to its research and development.



In computer science and AI, the phrase "natural language" refers to what most people simply refer to as language, the mechanism by which humans interact with one another and, increasingly, with computers and robots.



Natural language is the polar opposite of "machine language," or programming language, which was created for the purpose of programming and controlling computers.

The input processed by NLG technology is data of some sort, such as scores and statistics from a sporting event, and the message created from this data may take different forms (text or voice), such as a news recap of the game.
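As a hedged illustration of this data-to-text idea (not a description of any particular commercial system), the short Python sketch below turns a made-up box score into a one-sentence recap using a simple template; all field names, team names, and wording are invented.

# Hypothetical structured input, e.g. a final score pulled from a sports data feed.
game = {
    "home": "Riverton Hawks",
    "away": "Lakeside Owls",
    "home_score": 3,
    "away_score": 1,
    "venue": "Riverton Arena",
}

def recap(game):
    """Render structured game data as a human-readable sentence (ties ignored for brevity)."""
    if game["home_score"] >= game["away_score"]:
        winner, loser = game["home"], game["away"]
        high, low = game["home_score"], game["away_score"]
    else:
        winner, loser = game["away"], game["home"]
        high, low = game["away_score"], game["home_score"]
    return f"{winner} beat {loser} {high}-{low} at {game['venue']}."

print(recap(game))  # Riverton Hawks beat Lakeside Owls 3-1 at Riverton Arena.

Real NLG systems go far beyond fixed templates, selecting content, planning sentence structure, and varying word choice, but the basic pipeline from structured data to readable message is the same.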

The origins of NLG may be traced back to the mid-twentieth century, when computers were first introduced.

Entering data into early computers and then deciphering the results was complex, time-consuming, and required highly specialized skills.

These difficulties with machine input and output were seen by researchers and developers as communication issues.



Communication is also essential for gaining knowledge and information, as well as exhibiting intelligence.

The answer suggested by researchers was to work toward adapting human-machine communication to the most "natural" form of communication, that is, people's own languages.

Natural Language Processing is concerned with how machines can understand human language, while Natural Language Generation is concerned with the creation of messages customized for people.

Some researchers in this field, like those working in artificial intelligence, are interested in developing systems that generate messages from data, while others are interested in studying the human process of language and message formation.

NLG is a subfield of Computational Linguistics, as well as being a branch of artificial intelligence.

The rapid expansion of NLG technologies has been facilitated by the proliferation of technology for producing, collecting, and linking enormous swaths of data, as well as advancements in processing power.



NLG has a wide range of applications in a variety of sectors, including journalism and media.

Large international and national news organizations throughout the globe have begun to incorporate automated news-writing tools based on NLG technology into their news production.

In this context, journalists use such software to create informative reports from diverse datasets, such as lists of local crimes, corporate earnings reports, and summaries of athletic events.

Companies and organizations may also utilize NLG systems to create automated summaries of their own or external data.

Two related areas of study are computational narrative and the development of automated narrative generation systems, which concentrate on producing fictional stories and characters for use in media and entertainment, such as video games, as well as in education and learning.



NLG is likely to improve further in the future, allowing future systems to create more sophisticated and nuanced messages across a wider range of textual conventions.

NLG's development and use are still in their early stages, thus it's unclear what the entire influence of NLG-based technologies will be on people, organizations, industries, and society.

Current concerns include whether NLG technologies will have a beneficial or detrimental impact on the workforce in the sectors where they are being implemented, as well as the legal and ethical ramifications of having computers rather than people generate both factual and fictional content.

There are also bigger philosophical questions around the connection between communication, language usage, and how humans have defined what it means to be human socially and culturally.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 



Natural Language Processing and Speech Understanding; Turing Test; Workplace Automation.


References & Further Reading:


Guzman, Andrea L. 2018. “What Is Human-Machine Communication, Anyway?” In Human-Machine Communication: Rethinking Communication, Technology, and Ourselves, edited by Andrea L. Guzman, 1–28. New York: Peter Lang.

Lewis, Seth C., Andrea L. Guzman, and Thomas R. Schmidt. 2019. “Automation, Journalism, and Human-Machine Communication: Rethinking Roles and Relationships of Humans and Machines in News.” Digital Journalism 7, no. 4: 409–27.

Licklider, J. C. R. 1968. “The Computer as Communication Device.” In In Memoriam: J. C. R. Licklider, 1915–1990, edited by Robert W. Taylor, 21–41. Palo Alto, CA: Systems Research Center.

Marconi, Francesco, Alex Siegman, and Machine Journalist. 2017. The Future of Augmented Journalism: A Guide for Newsrooms in the Age of Smart Machines. New York: Associated Press. https://insights.ap.org/uploads/images/the-future-of-augmented-journalism_ap-report.pdf.

Paris, Cecile L., William R. Swartout, and William C. Mann, eds. 1991. Natural Language Generation in Artificial Intelligence and Computational Linguistics. Norwell, MA: Kluwer Academic Publishers.

Riedl, Mark. 2017. “Computational Narrative Intelligence: Past, Present, and Future.” Medium, October 25, 2017. https://medium.com/@mark_riedl/computational-narrative-intelligence-past-present-and-future-99e58cf25ffa.




