
Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her research in the 1980s focused on how technology affects people's thinking, her work in the 2000s has become more critical of how technology is used at the expense of building and maintaining meaningful interpersonal connections.



She has used artificial intelligence products, such as children's toys and robotic pets for the elderly, to highlight what people lose out on when interacting with such things.


Turkle has been at the vanguard of AI breakthroughs as a professor at the Massachusetts Institute of Technology (MIT) and the founder of the MIT Initiative on Technology and Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual change in the understanding of AI that occurred between the 1960s and 1980s, one that substantially changed the way humans connect to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has given way to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent approach, according to Turkle, aims to emulate the way the human brain functions, helping to break down barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.
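
To make the rule-based/emergent contrast concrete, here is a minimal, invented sketch (not drawn from Turkle's work): a hand-written diagnostic rule next to a tiny perceptron whose behavior emerges from example data. The training data and threshold are assumptions made up for illustration.

```python
# Illustrative contrast between the two AI paradigms described above.
# The training data and rule threshold are invented for this example.

# Rule-based paradigm: the "intelligence" is extensively preprogrammed.
def rule_based_diagnosis(temp_f):
    if temp_f > 100.4:  # a rule hand-written by the programmer
        return "fever"
    return "normal"

# Emergent paradigm: behavior arises from a simple learning algorithm.
def train_perceptron(readings, labels, epochs=20, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for temp_f, y in zip(readings, labels):
            x = temp_f - 100.0                 # center the feature
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x           # nudge weights toward the data
            b += lr * (y - pred)
    return w, b

readings = [97.0, 98.6, 101.2, 103.5]          # toy training temperatures
labels = [0, 0, 1, 1]                          # 1 = fever
w, b = train_perceptron(readings, labels)

print(rule_based_diagnosis(102.0))             # behavior fixed by its author
print("fever" if w * (102.0 - 100.0) + b > 0 else "normal")  # learned behavior
```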



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's research and writing on the subject.


Turkle started to employ ethnographic research techniques to study the relationship between humans and their gadgets in two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008).


She emphasized in The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to get past the advertising-based clichés that are often employed when discussing technology.


This method involves setting aside time for quiet reflection so that participants may think thoroughly about their interactions with their devices.


Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

These issues are connected to the increased use of social media as a form of communication, as well as the growing familiarity and relatability of technology gadgets, which stems from the emergent AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human being across a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this strain of cybernetic thinking blurs the boundaries between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she calls relational artifacts, which are more broadly referred to as social machines in the literature.

The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


She provides the example of Adam in Alone Together, who enjoys the appreciation of the AI bots he rules over in the game Civilization.

Adam appreciates the fact that he is able to create something fresh when playing.

Turkle, however, is skeptical of this interaction, stating that Adam's playing is not actual creation but rather the sensation of creation, and that it is problematic because it lacks meaningful pressure or danger.

In Reclaiming Conversation, she expands on this point, suggesting that artificial social partners simply provide a perception of camaraderie.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the complete complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


  • People's expectations for companionship have been simplified as a result of this transformation, and the benefits that one seeks from relationships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and disagreements that are typical of relationships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication gives users less practice in learning to view the world through the eyes of another person, a crucial skill for empathy.


Drawing these numerous streams of argument together, Turkle argues that we are in a robotic moment in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


In fact, AI-based gadgets are confined to the literal meanings of the data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

There is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy for an AI-based device.
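
A minimal sketch can make this point concrete. The calendar entries below are invented for illustration: to the software, the two appointments are structurally identical records, and nothing in the data model carries their very different human significance.

```python
# Two calendar entries that differ enormously in human meaning but are
# indistinguishable in structure to the software that stores them.
appointments = [
    {"date": "2022-03-14", "time": "09:00", "title": "Car maintenance"},
    {"date": "2022-03-15", "time": "09:00", "title": "Chemotherapy"},
]

def reminders(entries):
    # A typical "smart" assistant operation. The device can parse every
    # field, but "understanding" here is only string formatting and date
    # arithmetic -- both entries receive exactly the same treatment.
    return [f"Reminder: {e['title']} at {e['time']}" for e in entries]

for message in reminders(appointments):
    print(message)
```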

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's recommendations focus on reducing the amount of time people spend on their phones, but AI's role in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.





Artificial Intelligence - AI And Robotics In The Battlefield.

 



Because of the growth of artificial intelligence (AI) and robotics and their application to military matters, generals on the contemporary battlefield are seeing a possible tactical and strategic revolution.

Unmanned aerial vehicles (UAVs), also known as drones, and other robotic devices played a key role in the wars in Afghanistan (2001–) and Iraq (2003–2011).

It is possible that future conflicts will be waged without the participation of humans.

Without human control or guidance, autonomous robots will fight in war on land, in the air, and beneath the water.

While this vision remains in the realm of science fiction, battlefield AI and robotics raise a slew of practical, ethical, and legal issues that military leaders, technologists, jurists, and philosophers must address.

When many people think about AI and robotics on the battlefield, the first image that springs to mind is "killer robots," armed machines that indiscriminately destroy everything in their path.

There are, however, a variety of applications for battlefield AI that do not include killing.

In recent wars, the most notable application of such technology has been peaceful in character.

UAVs are often employed for surveillance and reconnaissance.

Other robots, such as iRobot's PackBot (made by the same firm that produces the vacuum-cleaning Roomba), are employed to locate and assess improvised explosive devices (IEDs), making their safe disposal easier.

Robotic devices can navigate treacherous terrain, such as Afghanistan's caves and mountain crags, as well as areas too dangerous for humans, such as under a vehicle suspected of being rigged with an IED.

Unmanned Underwater Vehicles (UUVs) are also used to detect mines underwater.

IEDs and explosives are so common on today's battlefields that these robotic gadgets are priceless.

Another potential life-saving capacity of battlefield robots that has yet to be realized is in the realm of medicine.

Robots can safely collect injured troops on the battlefield in areas that are inaccessible to their human counterparts, without jeopardizing the lives of human rescuers.

Robots may also transport medical supplies and medications to troops on the battlefield, as well as conduct basic first aid and other emergency medical operations.

AI and robots have the greatest potential to change the battlefield—whether on land, at sea, or in the air—in the arena of deadly force.

The Aegis Combat System (ACS) is an example of an autonomous system used by several militaries across the globe aboard destroyers and other naval combat vessels.

Through radar and sonar, the system can detect approaching threats, such as missiles from the surface or air, mines, or torpedoes from the water.

The system is equipped with a powerful computer system and can use its own munitions to eliminate identified threats.

Despite the fact that Aegis is activated and supervised manually, it has the potential to operate autonomously in order to counter threats faster than humans could.

In addition to partly automated systems like the ACS and UAVs, completely autonomous military robots capable of making judgments and acting on their own may be developed in the future.

The most significant feature of AI-powered robotics is the development of lethal autonomous weapons (LAWs), sometimes known as "killer robots." Robot autonomy exists on a sliding scale.

At one extreme of the spectrum are robots that are designed to operate autonomously, but only in reaction to a specific stimulus and in one direction.

This degree of autonomy is shown by a mine that detonates autonomously when stepped on.

Remotely operated machines, which are unmanned yet controlled remotely by a person, also sit at the low end of the range.

Semiautonomous systems occupy the midpoint of the spectrum.

These systems may be able to work without the assistance of a person, but only to a limited extent.

A robot commanded to launch, go to a certain area, and then return at a specific time is an example of such a system.

The machine does not make any "decisions" on its own in this situation.

Semiautonomous devices may also be configured to accomplish part of a task before waiting for further inputs before moving on to the next step.

Full autonomy is the last step.

Fully autonomous robots are designed with a purpose and are capable of achieving it entirely on their own.

This might include the capacity to use deadly force without direct human guidance in warfare circumstances.
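
The sliding scale just described can be summarized as a simple taxonomy. The sketch below is purely illustrative: the level names and the human-authorization check are assumptions for the example, not any military system's actual interface.

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Points on the sliding scale of robot autonomy described above."""
    STIMULUS_RESPONSE = 1   # e.g., a mine that detonates when stepped on
    REMOTE_OPERATED = 2     # unmanned but steered by a person
    SEMIAUTONOMOUS = 3      # executes bounded tasks, then waits for input
    FULLY_AUTONOMOUS = 4    # pursues a goal with no human direction

def may_engage(level: Autonomy, human_authorized: bool) -> bool:
    # Hypothetical policy: anything short of full autonomy requires a
    # human decision before lethal force is used.
    if level < Autonomy.FULLY_AUTONOMOUS:
        return human_authorized
    return True  # the contested case the article goes on to discuss

print(may_engage(Autonomy.SEMIAUTONOMOUS, human_authorized=False))   # False
print(may_engage(Autonomy.FULLY_AUTONOMOUS, human_authorized=False)) # True
```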

Robotic gadgets that are lethally equipped, AI-enhanced, and totally autonomous have the ability to radically transform modern warfare.

Armies would expand with ground forces made up of both humans and robots, or entirely of robots with no humans at all.

Small, armed UAVs would not be constrained by the requirement for human operators, and they might be assembled in massive swarms to overwhelm bigger, but less mobile troops.

Such technological advancements will entail equally dramatic shifts in tactics, strategy, and even the notion of combat.

This technology will become less expensive as it becomes more widely accessible.

This might disturb the present military power balance.

Even minor governments, and maybe even non-state organizations like terrorist groups, may be able to develop their own robotic army.

Fully autonomous LAWs bring up a slew of practical, ethical, and legal issues.

One of the most pressing practical considerations is safety.

A completely autonomous robot with deadly armament that malfunctions might represent a major threat to everyone who comes into contact with it.

Fully autonomous missiles might theoretically wander off course and kill innocent people due to a mechanical failure.

Unpredictable technological faults and malfunctions may occur in any kind of apparatus.

Such issues offer a severe safety concern to individuals who deploy deadly robotic gadgets as well as unwitting bystanders.

Even in the absence of technical malfunctions, limitations in programming may result in potentially disastrous errors.

Programming robots to discriminate between combatants and noncombatants, for example, is a big challenge, and it's simple to envisage misidentification leading to unintentional fatalities.
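
To see why misidentification is so dangerous, consider a hedged sketch of the decision logic involved: a classifier that emits a probability, and a policy that must pick a cutoff. The numbers and labels below are invented; any real perception system would be vastly more complex, and the residual error rate is exactly where unintentional fatalities enter.

```python
# Illustrative only: a decision rule over an uncertain classifier output.
# The probabilities and threshold below are invented for this example.

def engagement_decision(p_combatant: float, threshold: float = 0.99) -> str:
    # Even a very conservative threshold leaves a residual error rate,
    # and the cost of a false positive here is an unintentional fatality.
    if p_combatant >= threshold:
        return "engage"
    return "hold (uncertain -- refer to human operator)"

for p in (0.40, 0.95, 0.995):
    print(f"P(combatant)={p:.3f} -> {engagement_decision(p)}")
```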

The greatest concern, though, is that robotic AI may grow too quickly and become independent of human control.

Sentient robots might turn their weapons on humans, as in popular science fiction movies and literature, fulfilling eminent scientist Stephen Hawking's grim forecast that the development of AI could end in humanity's annihilation.

LAWs may also lead to major legal issues.

The rules of war apply to human beings.

Robots cannot be held accountable for prospective law crimes, whether criminally, civilly, or in any other manner.

As a result, there's a chance that war crimes or other legal violations may go unpunished.

Here are some serious issues to consider: Can the programmer or engineer of a robot be held liable for the machine's actions? Could a person who gave the robot its "command" be held liable for the robot's unpredictability or blunders on a mission that was otherwise self-directed? Such questions must be thoroughly worked out before any completely autonomous deadly equipment is deployed.

Aside from legal issues of duty, a slew of ethical issues must be addressed.

The conduct of war necessitates split-second moral judgments.

Will self-driving robots be able to tell the difference between a kid and a soldier, or between a wounded and helpless soldier and an active combatant? Will a robotic military force always be seen as a cold, brutal, and merciless army of destruction, or can a robot be designed to behave kindly when the situation demands it? Because combat is riddled with moral dilemmas, LAWs involved in war will always be confronted with them.

Experts wonder whether dangerous autonomous robots can ever be trusted to do the right thing.

Moral action requires not just rationality—which robots may be capable of—but also emotions, empathy, and wisdom.

These latter qualities are much more difficult to implement in code.

Many individuals have called for an outright ban on research in this field because of the legal, ethical, and practical problems raised by the prospect of ever more powerful AI-powered robotic military hardware.

Others, on the other hand, believe that scientific advancement cannot be halted.

Rather than prohibiting such study, they argue that scientists and society as a whole should seek realistic answers to the difficulties.

Some argue that keeping continual human supervision and control over robotic military units may address many of the ethical and legal issues.

Others argue that direct supervision is unlikely in the long term because human intellect will be unable to match the pace with which computers think and act.

There will be an inevitable trend toward more and more autonomy, as the side that gives its robotic troops greater autonomy gains an overwhelming advantage over those who strive to preserve human control.

They warn that fully autonomous forces will always triumph.

Despite the fact that it is still in its early stages, the introduction of more complex AI and robotic equipment to the battlefield has already resulted in significant change.

AI and robotics on the battlefield have the potential to drastically transform the future of warfare.

It remains to be seen if and how this technology's technical, practical, legal, and ethical limits can be addressed.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Autonomous Weapons Systems, Ethics of; Lethal Autonomous Weapons Systems.


Further Reading

Borenstein, Jason. 2008. “The Ethics of Autonomous Military Robots.” Studies in Ethics, Law, and Technology 2, no. 1: n.p. https://www.degruyter.com/view/journals/selt/2/1/article-selt.2008.2.1.1036.xml.xml.

Morris, Zachary L. 2018. “Developing a Light Infantry-Robotic Company as a System.” Military Review 98, no. 4 (July–August): 18–29.

Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton.

Singer, Peter W. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. London: Penguin.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77.



AI - Smart Hotels And Smart Hotel Rooms.



In a competitive tourism sector, luxury hotels are using high tech and artificial intelligence to deliver the best experience for their visitors and grow their market share.


The experience economy, as it is known in the hospitality management business, is shaping artificial intelligence in hotels.



An experience is created by three major players: a product, a service, and a consumer.


The artifacts presented in the marketplaces are known as products.

Services are the tangible and intangible benefits of a single product, or a collection of goods, as delivered by frontline staff through a procedure.

The end user of these items or services is the client.

Customers are looking for items and services that will meet their requirements.

Hoteliers, for their part, must create extraordinary events that transform manufactured goods and services into real experiences in order to connect emotionally with their consumers.


In this way, experiences become a fungible commodity in the market, with the goal of retaining clients.



Robotics, data analysis, voice activation, face recognition, virtual and augmented reality, chatbots, and the Internet of Things (IoT) are all examples of artificial intelligence in the luxury hotel business.

Smart rooms are created for hotel guests by providing automated technology that seamlessly meets their typical demands.


Guests may use IoT to control the lights, curtains, speakers, and television in their rooms through a connected tablet; a sketch of how such control is commonly wired together follows the list below.


  • A nightlight system may detect when a person is awake and moving about.
  • Wellness gadgets that deliver sensory experiences are available in certain rooms for disabled visitors.
  • Smart rooms may capture personal information from customers and keep it in customer profiles in order to give better service during subsequent visits.
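
As promised above, here is a minimal sketch of how tablet-driven room control is commonly wired together, using the MQTT publish/subscribe protocol via the paho-mqtt library (1.x-style API). The broker address and topic layout are invented for the example; neither Hilton nor Marriott publishes its internal interfaces.

```python
# Minimal sketch of IoT room control over MQTT (pip install paho-mqtt).
# Broker host and topic names are invented for illustration.
import paho.mqtt.client as mqtt

BROKER = "broker.hotel.example"   # hypothetical in-room gateway
ROOM = "rooms/1207"

client = mqtt.Client()            # paho-mqtt 1.x constructor style
client.connect(BROKER, 1883)
client.loop_start()               # run the network loop in the background

# Commands a guest's tablet might publish to in-room devices.
client.publish(f"{ROOM}/lights/brightness", "40")      # dim to 40%
client.publish(f"{ROOM}/curtains/command", "close")
client.publish(f"{ROOM}/tv/input", "streaming")
client.publish(f"{ROOM}/thermostat/setpoint_c", "21")

client.loop_stop()
client.disconnect()
```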



In terms of smart room technology, the Hilton and Marriott worldwide luxury hotel companies are industry leaders.


One of Hilton's initial goals is to provide guests the ability to operate their room's features using their smartphone.


  • Guests may customize their stay according to their preferences utilizing familiar technologies in this manner.
  • Lights, TVs, the temperature, and the entertainment (streaming) service are all adjustable in typical Hilton smart rooms (Ting 2017).
  • A second goal is to provide services via mobile phone apps.
  • During their stay, guests may set their own preferences.
  • They may, for example, choose digital artwork or images from the room's display.
  • Voice-activation services are presently being developed for Hilton smart rooms (Burge 2017).


Marriott's smart rooms were created in collaboration with Legrand's Eliot technology and Samsung's Artik guest experience platform.


Marriott has deployed cloud-based hotel IoT technologies (Ting 2017).

Two prototype rooms for testing new smart systems have come from this partnership.



The first is a room with smart showers, mirrors, art frames, and speakers that is totally networked.

  • Guests may use voice commands to operate the lighting, air conditioning, curtains, paintings, and television.
  • A touchscreen shower is available, allowing visitors to write on the smart glass of the shower.
  • Shower notes may be converted into documents and sent to a specified address (Business Traveler 2018).
  • The quantity of oxygen in this Marriott room is controlled by sensors that monitor the number of people in the suite.
  • These sensors also assist guests who get up in the middle of the night by displaying the time and lighting the path to the restroom (Ting 2017).
  • A loyalty account allows guests to select their particular preferences ahead of arrival.



A second, lower-tech room is linked through a tablet and has just the Amazon Echo Dot voice-controlled smart speaker.


  • The television remote may be used to change the room's characteristics.
  • The benefit of this room is that it has very few implementation requirements (Ting 2017).
  • Hoteliers point to a number of benefits of smart rooms in addition to convenience and customization.
  • Smart rooms help to protect the environment by lowering energy consumption expenses.
  • They may also save money on wages by reducing the amount of time housekeeping and management spend with visitors.



Smart rooms have their own set of constraints.


It may be tough to grasp certain smart technology.


  • For starters, overnight visitors have little time to climb the learning curve during a short stay.
  • Second, the infrastructure and technology required for these rooms continues to be prohibitively costly.
  • Even if there are long-term cost and energy benefits, the initial investment expenses are significant.


Finally, there's the issue of data security.


Hotels must continue to evolve to meet the needs of new generations of paying customers.


Technology is deeply interwoven in the everyday behaviors of millennials and post-millennials.

Their smartphones, video games, and tablets are transforming the meaning of experience in a virtual world.


Luxury tourism already includes high-priced goods and services that are supported by cutting-edge technology.

The quality of future hotel smart room experiences will be influenced by visitor income levels and personal technological capabilities, creating new competitive marketplaces.



Customers expect high-tech comfort and service from hotels.


Hotel operators benefit from smart rooms as well, since the rooms serve as a source of big data.

Companies are rapidly collecting, storing, and using all accessible information on their customers in order to provide unique goods and services.

This technique aids businesses in creating twenty-first-century markets in which technology is as important as hotel guests and management.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Smart Cities and Homes.


References & Further Reading:


Burge, Julia. 2017. “Hilton Announces ‘Connected Room,’ The First Mobile-Centric Hotel Room, To Begin Rollout in 2018.” Hilton Press Center, December 7, 2017. https://newsroom.hilton.com/corporate/news/hilton-announces-connected-room-the-first-mobilecentric-hotel-room-to-begin-rollout-in-2018.

Business Traveler. 2018. “Smart Rooms.” Business Traveler (Asia-Pacific Edition), 11.

Imbardelli, A. Patrick. 2019. “Smart Guestrooms Can Transform Hotel Brands.” Hotel Management 234, no. 3 (March): 40.

Pine, B. Joseph, II, and James H. Gilmore. 1998. “Welcome to the Experience Economy.” Harvard Business Review 76, no. 4 (July–August): 97–105.

Swaminathan, Sundar. 2017. Oracle Hospitality Hotel 2025 Industry Report. Palm Beach Gardens, FL: International Luxury Hotel Association.

Ting, Deanna. 2017. “Hilton and Marriott Turn to the Internet of Things to Transform the Hotel Room Experience.” Skift, November 14, 2017. https://skift.com/2017/11/14/hilton-and-marriott-turn-to-the-internet-of-things-to-transform-the-hotel-room-experience/.


Artificial Intelligence - What Is A Group Symbol Associator?



In the early 1950s, Firmin Nash, director of the South West London Mass X-Ray Service, devised the Group Symbol Associator, a slide rule-like device that enabled a clinician to correlate a patient's symptoms against 337 predefined symptom-disease complexes and establish a diagnosis.

It modeled the cognitive processes of automated medical decision-making using multi-key look-up from inverted files.

The Group Symbol Associator has been dubbed a "cardboard brain" by Derek Robinson, a professor at the Ontario College of Art & Design's Integrated Media Program.

The approach recalls the inverted scriptural concordance of Hugo de Sancto Caro, a Dominican friar who finished his index in 1247.

Marsden Blois, an artificial intelligence in medicine professor at the University of California, San Francisco, rebuilt the Nash device in software in the 1980s.

Blois's diagnostic aid RECONSIDER, which is based on the Group Symbol Associator, performed as well as or better than other expert systems, according to his own testing.

Nash dubbed the Group Symbol Associator the "Logoscope" because it employed propositional calculus to analyze different combinations of medical symptoms.

The Group Symbol Associator is one of the early efforts to apply digital computers to diagnostic issues, in this instance by adapting an analog instrument.

Along the margin of Nash's cardboard rule, disease groupings chosen from mainstream textbooks on differential diagnosis are noted.

Each patient symptom or property has its own cardboard symptom stick with lines opposite the locations of illnesses that share that property.

There were a total of 82 sign and symptom sticks in the Group Symbol Associator.

Sticks that correspond to the state of the patient are chosen and entered into the rule.



Diseases crossed by a higher number of symptom lines are considered candidate diagnoses.

Nash's slide rule is simply a matrix with illnesses as columns and properties as rows.

Wherever a property is expected in an illness, a mark (such as an "X") is entered in the matrix.

Rows that describe symptoms that the patient does not have are removed.

The most probable or "best match" diagnosis is shown by columns with a mark in every cell.

When viewed as a matrix, the Nash device reconstructs information in the same manner as the peek-a-boo card retrieval systems used in the 1940s to manage knowledge stores.
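
The matrix logic Nash mechanized in cardboard is easy to express in software, in the spirit of Blois's later reimplementation. The sketch below is a fabricated miniature (a handful of made-up diseases and symptoms, not Nash's 337 complexes) illustrating the elimination-and-match procedure just described.

```python
# Miniature software reconstruction of the Group Symbol Associator's
# matrix logic. Diseases are columns, symptoms are rows; set membership
# marks an "X". The data here is invented purely for illustration.

disease_symptoms = {
    "measles":   {"fever", "rash", "cough"},
    "influenza": {"fever", "cough", "aches"},
    "allergy":   {"rash", "sneezing"},
}

def best_matches(patient_symptoms):
    # Keep only the rows (symptoms) the patient actually has, then pick
    # the columns (diseases) with a mark in every remaining cell, i.e.,
    # diseases whose profile accounts for all of the patient's symptoms.
    return [
        disease
        for disease, marks in disease_symptoms.items()
        if patient_symptoms <= marks
    ]

print(best_matches({"fever", "cough"}))          # ['measles', 'influenza']
print(best_matches({"fever", "rash", "cough"}))  # ['measles']
```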

The Group Symbol Associator is similar to Leo J. Brannick's analog computer for medical diagnosis, Martin Lipkin and James Hardy's McBee punch card system for diagnosing hematological diseases, Keeve Brodman's Cornell Medical Index Health Questionnaire, Vladimir K. Zworykin's symptom spectra analog computer, and other "peek-a-boo" card systems and devices.

The challenge that these devices are trying to solve is locating or mapping illnesses that are suited for the patient's mix of standardized features or attributes (signs, symptoms, laboratory findings, etc.).

Nash claimed to have condensed a physician's memory of hundreds of pages of typical diagnostic tables to a little machine around a yard long.

Nash claimed that his Group Symbol Associator obeyed the "rule of mechanical experience conservation," which he coined.



"Will man crumble under the weight of the wealth of experience he has to bear and pass on to the next generation if our books and brains are reaching relative inadequacy?" he wrote.

I don't believe so.

Power equipment and labor-saving gadgets took on the physical strain.

Now is the time to usher in the age of thought-saving technologies" (Nash 1960b, 240).

Nash's equipment did more than just help him remember things.

He asserted that the machine participated in the logical analysis of the diagnostic procedure.

"Not only does the Group Symbol Associator represent the final results of various diagnostic classificatory thoughts, but it also displays the skeleton of the whole process as a simultaneous panorama of spectral patterns that correlate with changing degrees of completeness," Nash said.

"For each diagnostic occasion, it creates a map or pattern of the issue and functions as a physical jig to guide the mental process" (Paycha 1959, 661).

On October 14, 1953, a patent application for the invention was filed with the Patent Office in London.

At the 1958 Mechanization of Thought Processes Conference at the National Physical Laboratory (NPL) in the Teddington region of London, Nash conducted the first public demonstration of the Group Symbol Associator.

The 1958 NPL meeting is notable for being only the second conference held on the topic of artificial intelligence.

In the late 1950s, the Mark III Model of the Group Symbol Associator became commercially available.

Nash hoped that doctors would bring the Mark III with them when they were away from their offices and books.

"The GSA is tiny, affordable to create, ship, and disseminate," Nash noted.

It is simple to use and does not need any maintenance.

Even in outposts, ships, and other places, a person might have one" (Nash 1960b, 241).

Nash also published examples of "logoscopic photograms" based on xerography (dry photocopying) that achieved the same results as his hardware device.

Medical Data Systems of Nottingham, England, produced the Group Symbol Associator in large quantities.

Yamanouchi Pharmaceutical Company distributed the majority of the Mark V devices in Japan.

In 1959, François Paycha, a French ophthalmologist and Nash's main critic, explained the practical limits of Nash's Group Symbol Associator.

He pointed out that such a gadget would become highly cumbersome in the identification of corneal diseases, where there are roughly 1,000 differentiable disorders and 2,000 separate signs and symptoms.

The instrument was examined in 1975 by R. W. Pain of the Royal Adelaide Hospital in South Australia, who found it to be accurate in just a quarter of instances.


~ Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Computer-Assisted Diagnosis.


Further Reading:


Eden, Murray. 1960. “Recapitulation of Conference.” IRE Transactions on Medical Electronics ME-7, no. 4 (October): 232–38.

Nash, F. A. 1954. “Differential Diagnosis: An Apparatus to Assist the Logical Faculties.” Lancet 1, no. 6817 (April 24): 874–75.

Nash, F. A. 1960a. “Diagnostic Reasoning and the Logoscope.” Lancet 2, no. 7166 (December 31): 1442–46.

Nash, F. A. 1960b. “The Mechanical Conservation of Experience, Especially in Medicine.” IRE Transactions on Medical Electronics ME-7, no. 4 (October): 240–43.

Pain, R. W. 1975. “Limitations of the Nash Logoscope or Diagnostic Slide Rule.” Medical Journal of Australia 2, no. 18: 714–15.

Paycha, François. 1959. “Medical Diagnosis and Cybernetics.” In Mechanisation of Thought Processes, vol. 2, 635–67. London: Her Majesty’s Stationery Office.


Cyber Security - What Methods Do Hackers Use To Hack In 2022?



    We devote a lot of effort to explaining to different organizations and people the many sorts of hackers that exist and how people are likely to come into contact with them, since this is the root concern from a social engineering perspective. 

    The sorts of hackers to be on the lookout for are listed below, along with some information on how they could attempt to take advantage of you or your company.


    Which Major Hacker Organizations Should We Be Aware Of?


    1. Nation States:

    We won't name names for political reasons, but you can probably guess which nations are engaged in global cyberwarfare and attempting to hack into pretty much anywhere they believe they can gain an advantage.

    Highly sophisticated, industrial-style espionage, sabotage, and ransom-attack operations will work through a managed list of target names, nations, and industries in accordance with the nation state's current agenda.

    Please keep in mind, though, that western governments won't be totally blameless in this.


    2. Organized Crime:

    Most of us are probably most familiar with organized crime: groups or individuals whose only goal is to steal money from anybody they can hack. It is rarely personal or political; they usually simply ask where they can get money from.


    3. Hacktivists: 

    While it can be difficult to forecast the kinds of targets these organizations will attack, they are in essence self-described cyber warriors who attack political, organizational, or private targets in order to promote their "activist" agendas.


    What Are The Most Likely Ways That You Could Be Hacked?


    1. Device Exploits: 

    This is one of the most typical methods of hacking. Basically, all that occurs is that you will get a link to click on that seems safe but really tries to execute some local malware to attack a weakness on your computer.

    Therefore, you are vulnerable if you haven't kept up with Windows Updates (or updates for any other device you're using), haven't handled vulnerabilities in the software you've installed, or have misconfigured that software (e.g., all macros enabled in Microsoft Office or something like that).

    Once the attacker has "got you," which is often done with a remote access trojan of some kind, they will look for another place to hide inside your network, prolonging their capacity to take advantage of you. 

    They will often search your network for anything they can obtain a remote shell on (e.g., a printer or an old switch), since they know that the way they got in initially (through your computer) can be readily fixed.
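
    The flip side of this attack path is patch management. As a toy illustration only (the advisory table below is invented, not a real vulnerability feed), here is a sketch that flags installed Python packages falling below a hypothetical known-patched version.

```python
# Illustrative sketch: flag installed Python packages older than a known
# patched version. The advisory table is invented for this example.
from importlib.metadata import distributions

ADVISORIES = {  # hypothetical "minimum safe version" data
    "requests": "2.31.0",
    "pillow": "10.0.1",
}

def parse(version_string):
    # Crude numeric parse, good enough for this toy comparison.
    return tuple(int(p) for p in version_string.split(".") if p.isdigit())

for dist in distributions():
    name = dist.metadata["Name"].lower()
    if name in ADVISORIES and parse(dist.version) < parse(ADVISORIES[name]):
        print(f"{name} {dist.version} is below patched {ADVISORIES[name]}")
```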


    2. IP address exploits:  

    Discovering your office's, data center's, or home's exterior endpoints is another frequent method of hacking. 

    Your IP addresses are initially determined using a variety of techniques; sadly, this is relatively easily done via internet lookups or, rather often, by simple social engineering.

    It would be simple for someone to call your workplace and claim to be from your IP service provider in an effort to persuade you to reveal your office's IP address. 

    Nation states and bigger organized criminal organizations will simply maintain databases of known ports and known vulnerable software running on those ports while continuously scanning through millions of IP addresses, depending on the countries and regions they are interested in.

    Millions upon millions of IP addresses, ports, and known vulnerabilities are indexed on Shodan, essentially a "hacker search engine," and are available for anyone to view and query at any time. 

    In reality, anybody with access to the Shodan API may quickly search across the whole Shodan database, gaining instant access to millions of entries.
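
    As an illustration of how easily this index can be queried, here is a minimal sketch using Shodan's official Python library (pip install shodan). The API key and query string are placeholders; querying Shodan only searches its existing index, and actively scanning infrastructure you don't own is illegal in most jurisdictions.

```python
# Minimal sketch of querying Shodan's index (pip install shodan).
# "SHODAN_API_KEY" is a placeholder; the query is just an example.
import shodan

api = shodan.Shodan("SHODAN_API_KEY")

# Search the existing index -- no packets are sent to the targets.
results = api.search("port:3389 country:GB")  # exposed RDP in one country

print(f"Total exposed services indexed: {results['total']}")
for match in results["matches"][:5]:
    print(match["ip_str"], match["port"], match.get("org", "n/a"))
```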


    3. Cloud / SaaS Phishing: 

    Multi-factor authentication is beginning to fend off this kind of attack, but many organizational accounts around the world still don't have it enabled.
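
    Since multi-factor authentication is the main defense here, a minimal sketch of time-based one-time passwords (the mechanism behind most authenticator apps) may be useful. This uses the pyotp library; the secret is generated on the fly purely for illustration.

```python
# Minimal sketch of TOTP-based MFA (pip install pyotp).
import pyotp

# Enrollment: the service generates a secret and shares it with the
# user's authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user submits the 6-digit code their app currently shows.
submitted_code = totp.now()  # simulating the user's authenticator here

# Verification: a phished password alone is useless without this code,
# which changes every 30 seconds.
print("MFA passed:", totp.verify(submitted_code))
```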

    In actuality, you or a member of your team might be the target of an attack on your Office 365, Google G-Suite, or even your online accounting platform. 

    In many cases, you will simply get a link to something that seems absolutely innocuous, inviting you to "re-enter" your login information for a crucial platform (something you wouldn't want the bad guys to have access to).

    Once inside the platform, the bad guys can do a wide range of things to take advantage of you; a popular tactic is to send emails impersonating a senior staff member in order to get money transferred to an account they control.

    The hackers will keep watching you in an attempt to uncover new ways to cause havoc in your digital life; they may even just discreetly forward a senior staff member's communications to an external anonymous account.

    In reality, anybody may strike you at any moment. However, how you should approach your defense will rely on your cyber security risk profile (i.e., what you could have that adversaries might attempt to exploit). 

    To begin with, it's wise to keep tabs on anyone you suspect of wanting to hack you, and on their motivations.


    What Are Some Example Techniques Used By Hackers?



    You are more likely to be targeted by a "Nation State" if you work for a government contractor on specialized intellectual property. 

    This doesn't have to be drugs or weapons; it might be anything that a Nation State would want to duplicate or own for itself.


    You're far more likely to be targeted by organized crime (which, granted, can also be a nation state) if you're the CEO of a corporation or work in the finance department. 

    Hackers target business leaders more frequently: in phishing campaigns and other attacks, bad guys use LinkedIn and Google to scrape information about people's job titles and seniority in order to aim their attacks more precisely at the most valuable targets.


    If you're the CEO of a large company, nation-state hackers may attempt to target your children's or family's gadgets in an effort to gain access to your home for espionage or similar operations. 

    This is why it makes sense to have a closed network at home/private spaces that are only for the gadgets of family/children.


    At the lower end of the spectrum, all of us are sometimes targeted by hackers using phishing emails. 

    As indicated above, emails asking us to click on links are also used to run remote access trojans that give the bad guys access to our workstations, so we need to be aware that phishing isn't only about our credentials (which multi-factor authentication may save us from). 

    Once a back door has been built, gangs may use it to manually disseminate ransomware.


    Hacktivists are likely to attack you if you work as an executive for a corporation that pollutes foreign rivers and ecosystems.


    The main goal of this blog isn't to spook people or incite worry; rather, we believe that a basic awareness of the many kinds of adversaries out there can help individuals frame how they should think about their own security.


    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read and learn more Cyber Security Systems here.



    Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...