Artificial Intelligence - What Is The State Of Biometric Security And Privacy?

 


Biometrics is a term derived from the Greek roots bio (life) and metrikos (measurement).

It originally referred to the statistical or mathematical analysis of data in the biological sciences.

In recent years, the term has taken on a more precise, high-tech sense, referring to the science of identifying people based on biological or behavioral features, as well as the artificial intelligence technologies employed to do so.

For ages, scientists have measured human physical characteristics or behaviors in order to identify individuals later.

The first documented application of biometrics may be found in the works of Portuguese historian Joao de Barros (1496–1570).

De Barros reported that Chinese merchants stamped children's palm prints and footprints in ink to record their identities.

Biometric methods were first used in criminal justice settings in the late nineteenth century.

Alphonse Bertillon (1853–1914), a police clerk in Paris, started gathering bodily measurements (head circumference, finger length, etc.) of prisoners in jail to keep track of repeat criminals, particularly those who used aliases or altered features of their appearance to prevent detection.

Bertillonage was the name given to his system.

It fell out of favor after the 1890s, when it became clear that many people shared identical measurements.

Edward Richard Henry (1850–1931) of Scotland Yard created a significantly more successful biometric technique based on fingerprinting in 1901.

On the tips of people's fingers and thumbs, he measured and categorized loops, whorls, and arches, as well as subcategories of these components.

Fingerprinting is still one of the most often utilized biometric identifiers by law enforcement authorities across the globe.

Fingerprinting systems are expanding in tandem with networking technology, using vast national and international databases as well as computer matching.

In the 1960s and 1970s, the Federal Bureau of Investigation collaborated with the National Bureau of Standards to automate fingerprint identification.

This included scanning existing paper fingerprint cards and creating minutiae feature extraction algorithms and automatic classifiers for comparing electronic fingerprint data.
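The core idea behind minutiae comparison can be illustrated in a few lines of code. The sketch below is a deliberately simplified toy matcher, not the FBI's production algorithm: it assumes the two prints are already aligned, reduces each minutia to an invented (x, y, ridge-angle) triple, and scores how many points pair up within distance and angle tolerances.

```python
import math

# A minutia is a ridge ending or bifurcation at (x, y) with a ridge angle.
# Simplified, illustrative matcher: real systems first align the prints.

def match_score(probe, candidate, dist_tol=12.0, angle_tol=math.radians(15)):
    """Fraction of minutiae that pair up within distance and angle tolerance."""
    matched = 0
    used = set()
    for (x1, y1, a1) in probe:
        for j, (x2, y2, a2) in enumerate(candidate):
            if j in used:
                continue
            close = math.hypot(x1 - x2, y1 - y2) <= dist_tol
            # Smallest difference between two angles, wrapped into [0, pi]
            da = abs((a1 - a2 + math.pi) % (2 * math.pi) - math.pi)
            if close and da <= angle_tol:
                matched += 1
                used.add(j)
                break
    return matched / max(len(probe), len(candidate))

probe = [(10, 12, 0.3), (40, 55, 1.2), (70, 20, 2.0)]
candidate = [(11, 13, 0.35), (39, 54, 1.25), (90, 90, 0.1)]
print(f"similarity: {match_score(probe, candidate):.2f}")  # ~0.67
```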

Because of the high cost of electronic storage at the time, the scanned fingerprint images themselves were not kept in digital form; only the classification data and minutiae were.

In 1980, the FBI made the M40 fingerprint matching technology operational.

In 1999, the Integrated Automated Fingerprint Identification System (IAFIS) went live.

In 2014, the FBI's Next Generation Identification system, an outgrowth of IAFIS, added palm print, iris, and facial recognition capabilities.

While biometric technology is often seen as a way to boost security at the expense of privacy, in certain cases it may also be used to help preserve privacy.

Many types of health-care workers in hospitals need access to a shared database of patient information.

The Health Insurance Portability and Accountability Act (HIPAA) emphasizes the need to prevent unauthorized individuals from accessing this sensitive data.

For example, the Mayo Clinic in Florida was a pioneer in biometric access to medical records.

In 1997, the clinic started utilizing digital fingerprinting to limit access to patient information.

Today, voice analysis, face and iris recognition, hand geometry, keystroke dynamics, gait, DNA, and even body odor combine with big data and artificial intelligence recognition software to rapidly identify or authenticate individuals.
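As a rough illustration of how scores from several such modalities might be combined for authentication, the following sketch performs simple weighted score-level fusion. The modality names, weights, scores, and threshold are all invented for the example; real systems derive them from training data.

```python
# Hypothetical per-modality match scores in [0, 1] from separate recognizers.
# Weights reflect an assumed reliability of each modality; both the scores
# and the weights here are made-up illustrative numbers.

weights = {"face": 0.4, "iris": 0.35, "voice": 0.15, "keystroke": 0.10}

def fused_decision(scores, threshold=0.7):
    """Weighted score-level fusion: accept the identity claim if the
    combined score clears the threshold."""
    fused = sum(weights[m] * scores[m] for m in weights)
    return fused, fused >= threshold

scores = {"face": 0.92, "iris": 0.88, "voice": 0.60, "keystroke": 0.75}
fused, accepted = fused_decision(scores)
print(f"fused score {fused:.2f} -> {'accept' if accepted else 'reject'}")
```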

The reliability of DNA fingerprinting has evolved to the point that it is widely recognized by courts.

Even in the absence of further evidence, criminals have been convicted based on DNA findings, while falsely incarcerated prisoners have been exonerated.

While biometrics is frequently employed by law enforcement agencies, courts, and other government agencies, it has also come under fire from the public for infringing on individual privacy rights.

Research on biometric artificial intelligence software at universities, government agencies, and commercial enterprises has risen in tandem with actual and perceived criminal and terrorist threats.

National Bank United used technology developed by biometric experts Visionics and Keyware Technologies to install iris recognition identification systems on three ATMs in Texas as an experiment in 1999.

At Super Bowl XXXV in Tampa, Florida, in January 2001, Viisage Technology presented the FaceFINDER system, an automatic face recognition device.

As fans entered the stadium, the technology scanned their faces and matched them to a database of 1,700 known criminals and terrorists.

Officials claimed to have identified a small number of offenders, but no major arrests or convictions resulted from these identifications.

At the time, the indiscriminate use of automatic face recognition sparked a lot of debate.

The game was even dubbed the "Snooper Bowl."

Following the terrorist events of September 11, 2001, a public policy discussion in the United States focused on the adoption of biometric technology for airport security.

Following 9/11, polls revealed that Americans were prepared to give up significant portions of their privacy in exchange for increased security.

Biometric technologies were already widely used in other nations, such as the Netherlands.

The Privium program for passenger iris scan verification has been in effect at Schiphol Airport since 2001.

In 2015, the Transportation Security Administration (TSA) of the United States started testing biometric techniques for identification verification.

In 2019, Delta Air Lines, in collaboration with US Customs and Border Protection, provided customers at Atlanta's Maynard Jackson International Terminal the option of face recognition boarding.

Thanks to the technology, passengers can get their boarding passes, check their own bags, and pass through TSA checkpoints and gates without interruption.

Only 2% of travelers chose to opt out during the initial rollout.

Biometric authentication systems are currently being used by financial institutions in routine commercial transactions.

They are already widely used to secure personal smartphone access.

As internet-connected smart home gadgets come to support financial transactions, intelligent security will become increasingly vital.

Opinions on biometrics often shift in response to changing circumstances and settings.

People who support the use of face recognition technology at airports to make air travel safer may be opposed to digital fingerprinting at their bank.

Some individuals believe that private companies' use of biometric technology dehumanizes them, treating them as goods rather than persons and following them in real time.

Community policing is often recognized as an effective technique to create connections between law enforcement personnel and the communities they police at the local level.

However, other opponents argue that biometric monitoring shifts the emphasis away from community formation and toward governmental socio-technical control.

The importance of context, on the other hand, cannot be overstated.

Biometrics in the workplace may be seen as a leveler, since it subjects white-collar employees to the same level of scrutiny as blue-collar workers.

Researchers are beginning to build video analytics AI software and smart sensors for use in cloud security systems.

In real-time monitoring of workplaces, public spaces, and residences, these systems can detect known persons, items, sounds, and movements.

They may also be programmed to warn users when they are in the presence of strangers.

Artificial intelligence algorithms that were once used to create biometric systems are now being utilized to thwart them.

Generative adversarial networks (GANs), for example, can mimic real human users of networked technologies and applications.

GANs have been used to build fictitious people's faces using biometric training data.

GANs typically consist of a generator ("creator") network that produces each new picture and a discriminator ("critic") network that iteratively judges whether the generated face can be distinguished from real photographs.
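The adversarial training loop itself is compact enough to sketch. The toy example below, written with PyTorch, trains a generator and discriminator on one-dimensional Gaussian data rather than faces; it is a minimal illustration of the creator/critic dynamic, not the StyleGAN architecture used for photorealistic headshots.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch on toy 1-D data (samples from N(4, 1.25)), showing only
# the skeleton of the generator/discriminator training dynamic.
torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(64, 1)   # real samples
    fake = G(torch.randn(64, 8))           # generator's forgeries

    # Train the critic: label real samples 1, fakes 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the creator: try to fool the critic into labeling fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean():.2f}, std {samples.std():.2f}")  # ~4, ~1.25
```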

In 2020, the firm Icons8 claimed that it could make a million phony headshots in a single day using just seventy human models.

The firm distributes stock images of the headshots made using its proprietary StyleGAN technology.

A university, a dating app, and a human resources agency have all been clients.

Rosebud AI distributes GAN-generated photographs to online shopping sites and small companies who can't afford to pay pricey models and photographers.

Deepfake technology uses machine learning algorithms to create convincing but counterfeit videos.

It has been used to perpetrate hoaxes and misrepresentations, fabricate fake news clips, and conduct financial fraud.

Facebook profiles with deepfake profile photographs have been used to boost political campaigns on social media.

Smartphones secured with face recognition locks can also be vulnerable to deepfake attacks.

Deepfake technology may also be used for good.

Such technology has been utilized in films to make performers seem younger in flashbacks or other similar scenarios.

Digital technology was also employed in films such as Rogue One: A Star Wars Story (2016) to incorporate the late Peter Cushing (1913–1994), reprising the role he had played in the original 1977 Star Wars film.

Face-swapping is available to recreational users via a number of software apps.

Users may submit a selfie and adjust their hair and facial expression with FaceApp.

The app can also simulate the aging of a person's features.

Zao is a deepfake app that takes a single photo and swaps the user's face onto stars in hundreds of movie and television clips.

Algorithms similar to those that create deepfakes are now being used to detect deepfake videos.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Biometric Technology.


Further Reading


Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” NIPS ’14: Proceedings of the 27th International Conference on Neural Information Processing Systems 2 (December): 2672–80.

Hopkins, Richard. 1999. “An Introduction to Biometrics and Large-Scale Civilian Identification.” International Review of Law, Computers & Technology 13, no. 3: 337–63.

Jain, Anil K., Ruud Bolle, and Sharath Pankanti. 1999. Biometrics: Personal Identification in Networked Society. Boston: Kluwer Academic Publishers.

Januškevič, Svetlana N., Patrick S.-P. Wang, Marina L. Gavrilova, Sargur N. Srihari, and Mark S. Nixon. 2007. Image Pattern Recognition: Synthesis and Analysis in Biometrics. Singapore: World Scientific.

Nanavati, Samir, Michael Thieme, and Raj Nanavati. 2002. Biometrics: Identity Verification in a Networked World. New York: Wiley.

Reichert, Ramón, Mathias Fuchs, Pablo Abend, Annika Richterich, and Karin Wenz, eds. 2018. Rethinking AI: Neural Networks, Biometrics and the New Artificial Intelligence. Bielefeld, Germany: Transcript-Verlag.

Woodward, John D., Jr., Nicholas M. Orlans, and Peter T. Higgins. 2001. Biometrics: Identity Assurance in the Information Age. New York: McGraw-Hill.




Artificial Intelligence - What Are AI Berserkers?

 


Berserkers are intelligent killing machines first described by science fiction and fantasy novelist Fred Saberhagen (1930–2007) in his 1962 short story "Without a Thought." Berserkers later appeared as frequent antagonists in many more of Saberhagen's stories and novels.

Berserkers are a sentient, self-replicating race of space-faring robots with the mission of annihilating all life.

They were built as an ultimate doomsday weapon in a long-forgotten interplanetary conflict between two extraterrestrial cultures (i.e., a weapon intended more as a threat or deterrent than for actual use).

The facts of how the Berserkers were released are lost to time, since they seem to have killed off their creators as well as their foes and have been ravaging the Milky Way galaxy ever since.

They come in a variety of sizes, from human-scale units to heavily armored planetoids (cf. the Death Star), and are equipped with a variety of weaponry capable of sterilizing worlds.

Any sentient species that fights back, such as humans, is a priority for the Berserkers.

They construct factories in order to duplicate and better themselves, but their basic objective of removing life remains unchanged.

It's uncertain how far they evolve; some individual units end up questioning or even changing their intentions, while others gain strategic brilliance (e.g., Brother Assassin, "Mr. Jester," Rogue Berserker, Shiva in Steel).

While the Berserkers' ultimate purpose of annihilating all life is evident, their tactical activities are uncertain owing to unpredictability in their cores caused by radioactive decay.

Their name is derived from Norse mythology's Berserkers, powerful human warriors who battled in a fury.

Berserkers depict a worst-case scenario for artificial intelligence: murdering robots that think, learn, and reproduce in a wild and emotionless manner.

They demonstrate the deadly arrogance of providing AI with powerful weapons, harmful purpose, and unrestrained self-replication, allowing it to escape its creators' comprehension and control.

If Berserkers are ever developed and released, they may represent an inexhaustible danger to living creatures over enormous swaths of space and time.

They're quite hard to get rid of after they've been unbottled.

This is owing to their superior defenses and weaponry, as well as their widespread distribution, ability to repair and multiply, autonomous functioning (i.e., without centralized control), capacity to learn and adapt, and limitless patience to lay in wait.

The discovery of Berserkers is so horrifying in Saberhagen's books that human civilizations are terrified of constructing their own AI for fear that it may turn against its creators.

Some astute humans, on the other hand, find a fascinating Berserker counter-weapon: Qwib-Qwibs, self-replicating robots designed to eliminate all Berserkers rather than all life ("Itself Surprised" by Roger Zelazny).

Humans have also utilized cyborgs as an anti-Berserker technique, pushing the boundaries of what constitutes biological intelligence (Berserker Man, Berserker Prime, Berserker Kill).

Berserkers also exemplify artificial intelligence's potential for inscrutability and strangeness.

Even while Berserkers can communicate with each other, their huge brains are generally unintelligible to sentient organic lifeforms fleeing or battling them, and they are difficult to study owing to their proclivity to self-destruct if caught.

What can be deduced from their reasoning is that they see life as a plague, a material illness that must be eradicated.

In consequence, the Berserkers lack a thorough understanding of biological intellect and have never been able to adequately duplicate organic life, despite several tries.

They do, however, sometimes enlist human defectors (dubbed "goodlife") to aid the Berserkers in their struggle against "badlife" (i.e., any life that resists extermination).

Nonetheless, Berserkers and humans think in almost irreconcilable ways, hindering attempts to reach a common understanding between life and nonlife.

The seeming contrasts between human and machine intellect are at the heart of most of the conflict in the tales (e.g., artistic appreciation, empathy for animals, a sense of humor, a tendency to make mistakes, the use of acronyms for mnemonics, and even fake encyclopedia entries made to detect plagiarism).

Berserkers have been known to be defeated by non-intelligent living forms such as plants and mantis shrimp ("Pressure" and "Smasher").

Berserkers may be seen as a specific example of the von Neumann probe, conceived by mathematician and physicist John von Neumann (1903–1957): self-replicating space-faring robots that might be deployed across the galaxy to explore it efficiently. In the Berserker tales, the Turing Test, developed by mathematician and computer scientist Alan Turing (1912–1954), is both investigated and upended.

In "Inhuman Error," human castaways compete with a Berserker to persuade a rescue crew that they are human, while in "Without a Thought," a Berserker tries to figure out whether its game opponent is human.

The Fermi paradox—the concept that if intelligent extraterrestrial civilizations exist, we should have heard from them by now—is also explained by Berserkers.

It's possible that extraterrestrial civilizations haven't contacted Earth because they were destroyed by Berserker-like robots or are hiding from them.

Berserkers, or anything like them, have featured in a number of science fiction books in addition to Saberhagen's (e.g., works by Greg Bear, Gregory Benford, David Brin, Ann Leckie, and Martha Wells; the Terminator series of movies; and the Mass Effect series of video games).

All of these instances demonstrate how the potential for existential risks posed by AI may be investigated in the lab of fiction.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

de Garis, Hugo; Superintelligence; The Terminator.


Further Reading


Saberhagen, Fred. 2015a. Berserkers: The Early Tales. Albuquerque: JSS Literary Productions.

Saberhagen, Fred. 2015b. Berserkers: The Later Tales. Albuquerque: JSS Literary Productions.

Saberhagen’s Worlds of SF and Fantasy. http://www.berserker.com.

The TAJ: Official Fan site of Fred Saberhagen’s Berserker® Universe. http://www.berserkerfan.org.




Artificial Intelligence - What Is The Asilomar Conference On Beneficial AI?

 


The social concerns around artificial intelligence and danger to people addressed by the Asilomar Conference on Beneficial AI were most famously portrayed in Isaac Asimov's Three Laws of Robotics.

“A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; a robot must protect its own existence as long as such protection does not conflict with the First or Second Law” (Asimov 1950, 40).

In subsequent books, Asimov added a fourth law, known as the Zeroth Law, often quoted as “A robot may not injure humanity, or, through inaction, allow humanity to come to harm”; it is detailed in Robots and Empire by the robot character Daneel Olivaw (Asimov 1985, chapter 18).

Asimov's zeroth rule sparked debate on how to judge whether or not something is harmful to mankind.

This was the topic of the 2017 Asilomar Conference on Beneficial AI, which went beyond the Three Laws and the Zeroth Law to propose twenty-three principles to protect mankind in the future of AI.

The conference's sponsor, the Future of Life Institute, has posted the principles on its website and has received 3,814 signatures from AI experts and other multidisciplinary supporters.

There are three basic kinds of principles: research questions, ethics and values, and long-term concerns.

These research guidelines are intended to guarantee that the aims of artificial intelligence continue to be helpful to people.

They're meant to help investors decide where to put their money in AI research.

To achieve beneficial AI, Asilomar signatories agree that research agendas should encourage and preserve openness and dialogue between AI researchers, policymakers, and developers.

Researchers interested in the development of artificial intelligence systems should work together to prioritize safety.

The proposed principles relating to ethics and values are intended to prevent harm and promote direct human control over artificial intelligence systems.

Parties to the Asilomar principles believe that AI should reflect human values such as individual rights, freedoms, and diversity acceptance.

Artificial intelligences, in particular, should respect human liberty and privacy, and should only be used to empower and enrich humanity.

Human social and civic norms must be adhered to by AI.

The Asilomar signatories believe that AI creators should be held accountable for their work.

One aspect that stands out is the likelihood of an autonomous weapons arms race.

Because of the high stakes, the designers of the Asilomar principles incorporated principles that addressed longer-term challenges.

They advised prudence, meticulous planning, and human supervision.

Superintelligences must be produced for the wider welfare of mankind, and not merely to further the aims of one industry or government.

The Asilomar Conference's twenty-three principles have sparked ongoing discussions about the need for beneficial AI and specific safeguards for the future of AI and humanity.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Accidents and Risk Assessment; Asimov, Isaac; Autonomous Weapons Systems, Ethics of; Campaign to Stop Killer Robots; Robot Ethics.



Further Reading


Asilomar AI Principles. 2017. https://futureoflife.org/ai-principles/.

Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.

Asimov, Isaac. 1985. Robots and Empire. New York: Doubleday.

Sarangi, Saswat, and Pankaj Sharma. 2019. Artificial Intelligence: Evolution, Ethics, and Public Policy. Abingdon, UK: Routledge.





Nanotech - Nano Resolution Color Imaging To Help Create Nano Electronics



Researchers at UC Riverside have developed a method for squeezing tungsten lamp light into a 6-nanometer area at the end of a silver nanowire. 

Rather than having to settle for detecting molecular vibrations, scientists may now achieve color imaging at an "unprecedented" level. 

The researchers tweaked an existing "superfocusing" technology (which was previously used to detect vibrations) to detect signals throughout the visible spectrum. 

Light travels along a conical path, similar to that of a flashlight. 

The device records the impact of an object on the form and color of the beam as the nanowire's tip passes over it (including through a spectrometer). 


The scientists can make color photographs of carbon nanotubes that would otherwise seem gray by using two sections of spectra for every 6nm pixel. 



Scientists have created new materials for next-generation electronics that are so small that individual pieces are indistinguishable when tightly packed; they also don't reflect enough light for even the most powerful optical microscopes to reveal minute features such as colors. 

Carbon nanotubes, for example, appear grey under an optical microscope. 

Because they can't differentiate small details and variations between individual pieces, scientists find it difficult to investigate nanomaterials' unique features and find methods to improve them for industrial application. 


Researchers from UC Riverside describe a revolutionary imaging technology that compresses lamp light into a nanometer-sized spot in a new paper published in Nature Communications. 


Like a Hogwarts student learning the "Lumos" spell, it keeps the light at the end of a silver nanowire and utilizes it to show previously unseen features, including colors. 

Scientists will be able to examine nanomaterials in enough detail to make them more useful in electronics and other applications thanks to the breakthrough, which improves color imaging resolution to an unparalleled 6 nanometer level. 

With a superfocusing approach developed by the team, Ming Liu and Ruoxue Yan, associate professors at UC Riverside's Marlan and Rosemary Bourns College of Engineering, created this unique instrument. 


Previous research has utilized the technology to examine molecular bond vibrations at 1-nanometer spatial resolution without the need of a focusing lens. 



Liu and Yan improved the method in the current paper to measure signals covering the whole visible wavelength range, which may be used to produce color and portray the object's electronic band structures rather than just molecular vibrations. 

Light from a tungsten lamp is squeezed into a silver nanowire with near-zero scattering or reflection, where it is conveyed by the oscillation wave of free electrons at the silver surface. 

The condensed light travels in a conical route from the silver nanowire tip, which has a radius of only 5 nanometers, similar to a flashlight's light beam. 

The impact of an item on the beam shape and color is detected and recorded as the tip passes over it. 

"It's like controlling the water spray from a hose with your thumb," Liu said. 

"You know how to change the thumb position to acquire the desired spraying pattern, and similarly, in the experiment, we read the light pattern to extract the specifics of the item obstructing the 5 nm-sized light nozzle." The light is then concentrated into a spectrometer, where it takes the shape of a small ring. 


The researchers can colorize absorption and scattering pictures by scanning the probe across an area and capturing two spectra for each pixel. 


The previously grey carbon nanotubes are photographed in color for the first time, and each carbon nanotube may now display its own distinct hue. 
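The colorization step can be pictured as reducing each pixel's measured spectrum to color channels. The sketch below is a generic illustration with synthetic data and crude R/G/B wavelength bands; it is not the authors' actual processing pipeline, which works from paired transmission and scattering spectra at each 6 nm pixel.

```python
import numpy as np

# Illustrative post-processing of per-pixel spectra into an RGB image.
# The data cube and the band boundaries below are invented assumptions.

wavelengths = np.linspace(400, 700, 31)        # nm, visible range
H, W = 64, 64
cube = np.random.rand(H, W, wavelengths.size)  # stand-in for measured spectra

bands = {"R": (600, 700), "G": (500, 600), "B": (400, 500)}

def spectra_to_rgb(cube):
    """Average each pixel's spectrum over coarse R/G/B wavelength bands."""
    rgb = np.zeros((H, W, 3))
    for i, (lo, hi) in enumerate(bands.values()):
        mask = (wavelengths >= lo) & (wavelengths < hi)
        rgb[..., i] = cube[..., mask].mean(axis=-1)
    return rgb / rgb.max()                     # normalize for display

image = spectra_to_rgb(cube)
print(image.shape, image.min(), image.max())
```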

"The imaging is dependent on the atomically clean sharp-tip silver nanowire and its almost scatterless optical coupling and focusing," Yan stated. 

"Otherwise, there would be a lot of stray light in the backdrop, which would sabotage the entire thing." The researchers believe the new approach will be useful in assisting the semiconductor sector in producing homogenous nanomaterials with consistent characteristics for use in electronic devices. 

The new full-color nano-imaging approach should help researchers learn more about catalysis, quantum optics, and nanoelectronics. 

Xuezhi Ma, who worked on the topic as part of his PhD research at UC Riverside, joined Liu and Yan in the study. 


The study is titled "6 nm super-resolution optical transmission and scattering spectroscopic imaging of carbon nanotubes employing a nanometer-scale white light source." 


Although the ability to compress light is impressive in and of itself, the creators believe it will play a significant role in nanotechnology. 

Semiconductor manufacturers may be able to create more consistent nanomaterials for use in chips and other tightly packed devices. 

The constricted light might also help mankind grasp nanoelectronics, quantum optics, and other scientific domains that haven't had this resolution before.


~ Jai Krishna Ponnappan


You May Also Want To Read More About Nano Technology here.



Quantum Computing - Enter Quantinuum!




Honeywell and quantum software company Cambridge Quantum have finalized a transaction in which Honeywell's Quantum Solutions branch was split off and combined with Cambridge Quantum to become Quantinuum. 


The agreement, which was revealed around six months ago, helped to stoke interest in quantum-related investment, IPOs, and acquisitions. 

We'll have to wait and watch what happens once Honeywell sends Quantinuum out into the world with up to $300 million in cash. 

Q-CTRL, a company that focuses on quantum computing control and software solutions, has announced a $25 million fundraising round led by Airbus Ventures and other investors. 


Airbus' involvement is unsurprising, given the aerospace and military industries provide some near-term commercial prospects for quantum computing applications. 



Quantum spin liquids appear to have nothing to do with liquid as we know it; according to a team of Harvard-led researchers, they are a form of magnetic matter in which spinning electrons, even when frozen, form a "fluctuating" solid. 

This might lead to more durable qubits. 


This week, Google AI highlighted a recent experiment with time crystals. 

While it sounds like something comic book heroes could use to travel around the multiverse, a time crystal contains layers of atoms in an oscillating pattern "made in time." 


Google's Sycamore quantum processor was used to prove that these time crystals may be seen. 


According to a blog post, "observing a time crystal reveals how quantum computers might be utilized to examine unique physical phenomena that have perplexed scientists for years." 

"Moving from theory to observation is a fundamental step that forms the basis of each scientific breakthrough. 

This kind of research opens the door to a slew of new experiments, not only in physics, but potentially in a variety of other domains as well..." 

Finland has officially entered the space race for quantum computing. 

According to quantum computing start-up IQM and Finland's VTT Technical Research Centre, the country's first operational 5-qubit quantum computer is up and running. 





Artificial Intelligence - AI And Robotics In The Battlefield.

 



Because of the growth of artificial intelligence (AI) and robots and their application to military matters, generals on the contemporary battlefield are seeing a possible tactical and strategic revolution.

Unmanned aerial vehicles (UAVs), also known as drones, and other robotic devices played a key role in the wars in Afghanistan (2001–) and Iraq (2003–2011).

It is possible that future conflicts will be waged without the participation of humans.

Without human control or guidance, autonomous robots will fight in war on land, in the air, and beneath the water.

While this vision remains in the realm of science fiction, battlefield AI and robotics raise a slew of practical, ethical, and legal issues that military leaders, technologists, jurists, and philosophers must address.

When many people think about AI and robotics on the battlefield, the first image that springs to mind is "killer robots," armed machines that indiscriminately destroy everything in their path.

There are, however, a variety of applications for battlefield AI that do not include killing.

In recent wars, the most notable application of such technology has been peaceful in character.

UAVs are often employed for surveillance and reconnaissance.

Other robots, such as the PackBot from iRobot (the same firm that makes the vacuum-cleaning Roomba), are employed to locate and assess improvised explosive devices (IEDs), making their safe disposal easier.

Robotic devices can navigate treacherous terrain, such as Afghanistan's caves and mountain crags, as well as areas too dangerous for humans, such as under a vehicle suspected of being rigged with an IED.

Unmanned Underwater Vehicles (UUVs) are also used to detect mines underwater.

IEDs and explosives are so common on today's battlefields that these robotic gadgets are priceless.

Another potential life-saving capacity of battlefield robots that has yet to be realized is in the realm of medicine.

Robots can safely collect injured troops on the battlefield in areas that are inaccessible to their human counterparts, without jeopardizing their own lives.

Robots may also transport medical supplies and medications to troops on the battlefield, as well as conduct basic first aid and other emergency medical operations.

AI and robots have the greatest potential to change the battlefield—whether on land, sea, or in the air—in the arena of deadly power.

The Aegis Combat System (ACS) is an example of an autonomous system used by several militaries across the globe aboard destroyers and other naval combat vessels.

Through radar and sonar, the system can detect approaching threats, such as missiles from the surface or air, mines, or torpedoes from the water.

The system is equipped with a powerful computer system and can use its own munitions to eliminate identified threats.

Despite the fact that Aegis is activated and supervised manually, it has the potential to operate autonomously in order to counter threats faster than humans could.
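The decision structure at stake, supervised versus autonomous engagement, can be caricatured in a few lines. The sketch below is emphatically not the Aegis system's actual logic; the track fields, scoring rule, and threshold are all invented to show how autonomy trades human oversight for reaction time.

```python
# Generic, highly simplified sketch of an automated threat-evaluation loop.
# NOT the Aegis Combat System's real logic; all values here are invented.

from dataclasses import dataclass

@dataclass
class Track:
    kind: str              # e.g. "missile", "aircraft", "torpedo"
    range_km: float
    closing_speed_ms: float

def threat_level(track: Track) -> float:
    """Score a track: faster-closing, nearer objects score higher."""
    urgency = max(track.closing_speed_ms, 0) / max(track.range_km, 0.1)
    return urgency * (2.0 if track.kind == "missile" else 1.0)

def decide(track: Track, auto_mode: bool, threshold: float = 50.0) -> str:
    if threat_level(track) < threshold:
        return "monitor"
    # In supervised mode a human authorizes; in auto mode the system acts,
    # trading oversight for reaction time.
    return "engage" if auto_mode else "alert operator"

print(decide(Track("missile", 8.0, 300.0), auto_mode=False))    # alert operator
print(decide(Track("aircraft", 120.0, 250.0), auto_mode=True))  # monitor
```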

In addition to partly automated systems like the ACS and UAVs, completely autonomous military robots capable of making judgments and acting on their own may be developed in the future.

The most significant feature of AI-powered robotics is the development of lethal autonomous weapons (LAWs), sometimes known as "killer robots." Robot autonomy exists on a sliding scale.

At one extreme of the spectrum are robots that are designed to operate autonomously, but only in reaction to a specific stimulus and in one direction.

This degree of autonomy is shown by a mine that detonates autonomously when stepped on.

Remotely operated machines, which are unmanned yet controlled remotely by a person, are also available at the lowest end of the range.

Semiautonomous systems occupy the midpoint of the spectrum.

These systems may be able to work without the assistance of a person, but only to a limited extent.

A robot commanded to launch, go to a certain area, and then return at a specific time is an example of such a system.

The machine does not make any "decisions" on its own in this situation.

Semiautonomous devices may also be configured to accomplish part of a task before waiting for further inputs before moving on to the next step.

Full autonomy is the last step.

Fully autonomous robots are designed with a purpose and are capable of achieving it entirely on their own.

This might include the capacity to use deadly force without direct human guidance in warfare circumstances.

Robotic gadgets that are lethally equipped, AI-enhanced, and totally autonomous have the ability to radically transform the current warfare.

Armies would be expanded by ground forces made up of both humans and robots, or entirely of robots with no humans at all.

Small, armed UAVs would not be constrained by the requirement for human operators, and they might be assembled in massive swarms to overwhelm bigger, but less mobile troops.

Such technological advancements will entail equally dramatic shifts in tactics, strategy, and even the notion of combat.

This technology will become less expensive as it becomes more widely accessible.

This might disturb the present military power balance.

Even minor governments, and maybe even non-state organizations like terrorist groups, may be able to develop their own robotic army.

Fully autonomous LAWs bring up a slew of practical, ethical, and legal issues.

One of the most pressing practical considerations is safety.

A completely autonomous robot with deadly armament that malfunctions might represent a major threat to everyone who comes in contact with it.

Fully autonomous missiles might theoretically wander off course and kill innocent people due to a mechanical failure.

Unpredictable technological faults and malfunctions may occur in any kind of apparatus.

Such issues offer a severe safety concern to individuals who deploy deadly robotic gadgets as well as unwitting bystanders.

Even in the absence of malfunctions, limitations in programming may result in potentially disastrous errors.

Programming robots to discriminate between combatants and noncombatants, for example, is a big challenge, and it's simple to envisage misidentification leading to unintentional fatalities.

The greatest concern, though, is that robotic AI may grow too quickly and become independent of human control.

Sentient robots might turn their armament on humans, like in popular science fiction movies and literature, and in fulfillment of eminent scientist Stephen Hawking's grim forecast that the development of AI could end in humanity's annihilation.

LAWs may also lead to major legal issues.

The rules of war apply to human beings.

Robots cannot be held accountable for prospective law crimes, whether criminally, civilly, or in any other manner.

As a result, there's a chance that war crimes or other legal violations may go unpunished.

Here are some serious issues to consider: Can the programmer or engineer of a robot be held liable for the machine's actions? Could a person who gave the robot its "command" be held liable for the robot's unpredictability or blunders on a mission that was otherwise self-directed? Such considerations must be thoroughly considered before any completely autonomous deadly equipment is deployed.

Aside from legal issues of duty, a slew of ethical issues must be addressed.

The conduct of war necessitates split-second moral judgments.

Will self-driving robots be able to tell the difference between a kid and a soldier, or between a wounded and helpless soldier and an active combatant? Will a robotic military force always be seen as a cold, brutal, and merciless army of destruction, or can a robot be designed to behave kindly when the situation demands it? Because combat is riddled with moral dilemmas, LAWs involved in war will always be confronted with them.

Experts doubt whether deadly autonomous robots can ever be trusted to do the right thing.

Moral action requires not just rationality—which robots may be capable of—but also emotions, empathy, and wisdom.

These latter qualities are much more difficult to implement in code.

Because of the legal, ethical, and practical problems raised by the prospect of ever more powerful AI-powered robotic military hardware, many individuals have called for an absolute ban on research in this field.

Others, on the other hand, believe that scientific advancement cannot be halted.

Rather than prohibiting such study, they argue that scientists and society as a whole should seek realistic answers to the difficulties.

Some argue that keeping continual human supervision and control over robotic military units may address many of the ethical and legal issues.

Others argue that direct supervision is unlikely in the long term because human intellect will be unable to match the pace with which computers think and act.

As the side that gives its robotic troops more autonomy gains an overwhelming advantage over those who strive to preserve human control, there will be an inevitable trend toward more and more autonomy.

They warn that fully autonomous forces will always triumph.

Despite the fact that it is still in its early stages, the introduction of more complex AI and robotic equipment to the battlefield has already resulted in significant change.

AI and robotics on the battlefield have the potential to drastically transform the future of warfare.

It remains to be seen if and how this technology's technical, practical, legal, and ethical limits can be addressed.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Autonomous Weapons Systems, Ethics of; Lethal Autonomous Weapons Systems.


Further Reading

Borenstein, Jason. 2008. “The Ethics of Autonomous Military Robots.” Studies in Ethics, Law, and Technology 2, no. 1: n.p. https://www.degruyter.com/view/journals/selt/2/1/article-selt.2008.2.1.1036.xml.xml.

Morris, Zachary L. 2018. “Developing a Light Infantry-Robotic Company as a System.” Military Review 98, no. 4 (July–August): 18–29.

Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton.

Singer, Peter W. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. London: Penguin.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77.



Artificial Intelligence - AI Systems That Are Autonomous Or Semiautonomous.

 



Autonomous and semiautonomous systems are characterized by the degree to which their decision-making depends on external commands.

They have something in common with conditionally autonomous and automated systems.

Semiautonomous systems depend on a human user somewhere "in the loop" for decision-making, behavior management, or contextual interventions, while autonomous systems may make decisions within a defined region of operation without human input.

Under some situations, conditionally autonomous systems may operate independently.

Semiautonomous and autonomous systems (autonomy) also differ from automated systems (automation).

An automated system's actions are predefined sequences tied directly to specific inputs, whereas an autonomous system selects among actions through internal decision-making.

When a system's actions and possibilities for action are established in advance as reactions to certain inputs, it is termed automated.

A garage door that automatically stops closing when a sensor detects an impediment in its path is an example of an automated system.

Sensors and user interaction may both be used to collect data.

An automated dishwasher or clothes washer, for example, is a user-initiated automatic system in which the human user sets the sequences of events and behaviors via a user interface, and the machine subsequently executes the commands according to established mechanical sequences.

Autonomous systems, on the other hand, are ones in which the capacity to evaluate conditions and choose actions is intrinsic to the system.

The autonomous system, like an automated system, depends on sensors, cameras, or human input to give data, but its responses are marked by more complicated decision-making based on the contextual evaluation of many simultaneous inputs such as user intent, environment, and capabilities.
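The contrast can be made concrete with a toy sketch: the automated garage door applies one fixed input-to-action rule, while the hypothetical autonomous mower weighs several simultaneous inputs before choosing an action. All sensor names and rules here are invented for illustration.

```python
# Illustrative contrast between the two categories defined above.

def automated_garage_door(obstacle_detected: bool, closing: bool) -> str:
    # Automated: a fixed, predefined response to a specific input.
    return "stop" if (closing and obstacle_detected) else "continue"

def autonomous_step(user_goal, battery, obstacle_distance_m, weather):
    # Autonomous: contextual evaluation of several simultaneous inputs.
    if battery < 0.1:
        return "return to charger"   # capability constraint overrides goal
    if obstacle_distance_m < 0.5:
        return "replan route"        # environmental constraint
    if weather == "storm" and user_goal == "mow lawn":
        return "defer task"          # goal vs. conditions trade-off
    return f"proceed: {user_goal}"

print(automated_garage_door(obstacle_detected=True, closing=True))  # stop
print(autonomous_step("mow lawn", 0.8, 3.0, "clear"))  # proceed: mow lawn
```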

When it comes to real-world instances of systems, the terms automated, semiautonomous, and autonomous are used interchangeably depending on the nature of the tasks at hand and the intricacies of decision-making.

These categories aren't usually defined clearly or exactly.

Finally, the degree to which these categories apply is determined by the size and scope of the activity in question.

While the above-mentioned basic differences between automated, semiautonomous, and autonomous systems are widely accepted, there is some dispute as to whether these system types exist in real systems.

The degrees of autonomy established by SAE (previously the Society of Automotive Engineers) for autonomous automobiles are one example of such ambiguity.

Depending on road or weather conditions, as well as situational indices like the existence of road barriers, lane markings, geo-fencing, adjacent cars, or speed, a single system may be Level 2 partly autonomous, Level 3 conditionally autonomous, or Level 4 autonomous.
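This condition-dependence can be sketched with invented rules: SAE J3016 defines the levels themselves, but the mapping below from conditions to an effective level is purely illustrative.

```python
# Sketch of how one vehicle system's effective SAE level can shift with
# conditions. The condition names and rules are invented for illustration.

def effective_level(geofenced: bool, weather_ok: bool, lane_markings: bool) -> int:
    if geofenced and weather_ok and lane_markings:
        return 4   # high automation within its designed operational domain
    if weather_ok and lane_markings:
        return 3   # conditional: human must be ready to take over
    if lane_markings:
        return 2   # partial: human supervises continuously
    return 1       # driver assistance only

print(effective_level(geofenced=True, weather_ok=True, lane_markings=True))    # 4
print(effective_level(geofenced=False, weather_ok=False, lane_markings=True))  # 2
```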

The degree of autonomy may also be determined by how an automobile job is characterized.

In this sense, a system's categorization is determined as much by its technical structure as by the conditions of its operation or the characteristics of the activity focus.



EXAMPLES OF AUTONOMOUS AI SYSTEMS



Vehicles That Are Self-Driving

Automated, semiautonomous, conditionally autonomous, and fully autonomous car systems illustrate the contrasts between these categories.

Automated technology, such as conventional cruise control, is one example.

The user specifies a vehicle speed goal, and the vehicle maintains that speed while adjusting acceleration and deceleration as needed by the terrain.

However, in the case of semiautonomous vehicles, a vehicle may be equipped with an adaptive cruise control feature (one that regulates a vehicle's speed in relation to a leading vehicle and to a user's input), as well as lane keeping assistance, automatic braking, and collision mitigation technology.

Semiautonomous cars are now available on the market.

Many possible inputs (surrounding cars, lane markings, human input, impediments, speed restrictions, etc.) may be interpreted by systems, which can then regulate longitudinal and latitudinal control to semiautonomously direct the vehicle's trajectory.
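The longitudinal half of such a system can be sketched as a simple proportional controller, as below. The gains, time gap, and comfort limits are invented illustrative values; production adaptive cruise control relies on far more sophisticated estimation and actuation.

```python
# Minimal adaptive-cruise-control sketch with assumed proportional gains.

def acc_command(ego_speed, set_speed, gap_m, lead_speed, time_gap_s=1.8):
    """Return an acceleration command (m/s^2), clipped to comfort limits."""
    desired_gap = max(ego_speed * time_gap_s, 5.0)  # speed-dependent following gap
    if gap_m < 1.5 * desired_gap:
        # Follow the lead vehicle: close the speed difference and gap error.
        accel = 0.5 * (lead_speed - ego_speed) + 0.1 * (gap_m - desired_gap)
    else:
        # No relevant lead vehicle: track the driver's set speed.
        accel = 0.4 * (set_speed - ego_speed)
    return max(-3.0, min(accel, 2.0))               # comfort/safety clipping

# Closing fast on a slower lead vehicle: expect firm braking (clipped to -3.0).
print(acc_command(ego_speed=30.0, set_speed=33.0, gap_m=40.0, lead_speed=25.0))
```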

The human user is still involved in decision-making, monitoring, and interventions in this system.

Conditional autonomy refers to a system that allows a human user to "leave the loop" of control and decision-making under certain situations.

The vehicle analyzes emergent inputs and controls its behavior to accomplish the objective without human supervision or intervention after a goal is set (e.g., to continue on a route).

Internal to the activity (defined by the purpose and accessible methods), behaviors are governed and controlled without the involvement of the human user.

It's crucial to remember that every categorization is conditional on the aim and activity being operationalized.

Finally, an autonomous system has fewer constraints than conditional autonomy and is capable of controlling all tasks in a given activity.

An autonomous system, like conditional autonomy, functions inside the activity structure without the involvement of a human user.



Autonomous Robotics


For a number of reasons, autonomous systems may be found in the area of robotics.

There are a variety of reasons why autonomous robots should be used to replace or augment humans, including safety (for example, spaceflight or planetary surface exploration), undesirable circumstances (monotonous tasks such as domestic chores and strenuous labor such as heavy lifting), and situations where human action is limited or impossible (search and rescue in confined conditions).

Robotics applications, like automobile applications, may be deemed autonomous within the confines of a carefully defined domain or activity area, such as a factory assembly line or a residence.

As with autonomous cars, the degree of autonomy depends on the specific domain and, in many situations, excludes maintenance and repair.

Unlike automated systems, however, an autonomous robot inside such a defined activity structure would behave to achieve a set objective by sensing its surroundings, analyzing contextual inputs, and regulating behavior appropriately without the need for human interaction.

Autonomous robots are now used in a wide range of applications, including domestic uses such as autonomous lawn care robots and interplanetary exploration applications such as the Mars rovers MER-A and MER-B.




Semiautonomous Weapons


Autonomous and semiautonomous weapon systems are now being developed as part of contemporary military capabilities.

The definition of, and difference between, autonomous and semiautonomous changes significantly depending on the operationalization of the terminology, the context, and the sphere of activity, much as it does in the preceding automobile and robotics instances.

Consider a landmine: an example of an automated weapon that is not autonomous.

It reacts with fatal force when a sensor is activated, and there is no decision-making capabilities or human interaction.

A semiautonomous system, on the other hand, processes inputs and acts on them for a collection of tasks that form weaponry activity in collaboration with a human user.

The weapons system and the human operator must work together to complete a single task.

To put it another way, the human user is "in the loop." Identifying a target, aiming, and shooting are examples of these tasks.

Navigation toward a target, placement, and reloading are all possible.

These duties are shared between the system and the human user in a semiautonomous weapon system.

An autonomous system, on the other hand, would be accountable for all of these duties without the need for human monitoring, decision-making, or intervention after the objective was determined and the parameters provided.
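The division of labor can be sketched as a confirmation gate: the system proposes, the human disposes. Everything in the snippet below (the function names, the tracked target, the prompt) is hypothetical; the point is only where the human sits in the loop.

```python
# Sketch of the task split in a semiautonomous weapon system: the system
# handles detection, tracking, and aiming, but a human "in the loop" must
# confirm before any engagement. All names and stubs are hypothetical.

def confirm(prompt: str) -> bool:
    """Stand-in for a human operator's decision."""
    return input(f"{prompt} [y/N]: ").strip().lower() == "y"

def engagement_cycle(detected_targets):
    for target in detected_targets:                 # system task: detection/tracking
        solution = f"firing solution for {target}"  # system task: aiming
        # Human task: authorization. A fully autonomous system would skip
        # this gate, which is precisely what raises the concerns below.
        if confirm(f"Engage {target}?"):
            print(f"Executing {solution}")
        else:
            print(f"Holding fire on {target}")

engagement_cycle(["track-017"])
```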

There are presently no completely autonomous weapons systems that meet these requirements.

These meanings, as previously stated, are technologically, socially, legally, and linguistically dependent.

The distinction between semiautonomous and autonomous systems has ethical, moral, and political implications, particularly in the case of weapons systems.

This is particularly relevant for assessing accountability, since causal agency and decision-making may be distributed across developers and consumers.

As in the case of machine learning algorithms, the sources of agency and decision-making may also be ambiguous.



USER-INTERFACE CONSIDERATIONS.

 

The various obstacles in building optimum user interfaces for semiautonomous and autonomous systems are mirrored in the ambiguity of their definitions.

For example, in the case of automobiles, ensuring that the user and the system (as designed by the system's designers) have a consistent model of the capabilities being automated (as well as the intended distribution and degree of control) is crucial for the safe transfer of control responsibility.

Autonomous systems pose similar user-interface issues in the sense that, once an activity domain is specified, control and responsibility are binary (either the system or the human user is responsible).

The problem is reduced to defining the activity and relinquishing control in this case.

Because the description of an activity domain has no required relationship to the composition, structure, and interaction of constituent activities, semiautonomous systems create more difficult issues for the design of user interfaces.

Particular tasks (such as a car maintaining lateral position in a lane) may be decided by an engineer's use of specific technical equipment (and the restrictions that come with it) and therefore have no link to the user's mental representation of that work.

An obstacle detection task, in which a semiautonomous system moves about an environment by avoiding impediments, is an example.

The machine's obstacle detection technologies (camera, radar, optical sensors, touch sensors, thermal sensors, mapping, and so on) define what is and isn't an impediment, and such restrictions may be opaque to the user.

As a consequence of the ambiguity, the system must communicate with a human user when assistance is required, and the system (and its designers) must recognize and anticipate any conflict between system and user models.

Other considerations for designing semiautonomous and autonomous systems (specifically in relation to the ethical and legal dimensions complicated by the distribution of agency among developers and users) include identification and authorization methods and protocols, in addition to the issues raised above.

The difficulty of identifying and approving users for autonomous technology activation is crucial since once activated, systems no longer need continuous monitoring, intermittent decision-making, or interaction.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Autonomy and Complacency; Driverless Cars and Trucks; Lethal Autonomous Weapons Systems.


Further Reading

Antsaklis, Panos J., Kevin M. Passino, and Shyh Jong Wang. 1991. “An Introduction to Autonomous Control Systems.” IEEE Control Systems 11, no. 4 (June): 5–13.

Bekey, George A. 2005. Autonomous Robots: From Biological Inspiration to Implementation and Control. Cambridge, MA: MIT Press.

Norman, Donald A., Andrew Ortony, and Daniel M. Russell. 2003. “Affect and Machine Design: Lessons for the Development of Autonomous Machines.” IBM Systems Journal 42, no. 1: 38–44.

Roff, Heather. 2015. “Autonomous or ‘Semi’ Autonomous Weapons? A Distinction without a Difference?” Huffington Post, January 16, 2015. https://www.huffpost.com/entry/autonomous-or-semi-autono_b_6487268.

SAE International. 2014. “Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems.” J3016. SAE International Standard. https://www.sae.org/standards/content/j3016_201401/.



