
Artificial Intelligence - Smart Homes and Smart Cities.

 



Projects to develop smart city and smart home infrastructure involve public authorities, professionals, business leaders, and residents all around the world.


These smart cities and houses make use of information and communication technology (ICT) to enhance quality of life, local and regional economies, urban planning and transportation, and government.


Urban informatics is a new area that gathers data, analyzes patterns and trends, and utilizes the information to implement new ICT in smart cities.

Data may be gathered from a number of different sources.

Surveillance cameras, smart cards, internet of things sensor networks, smart phones, RFID tags, and smart meters are just a few examples.

Almost any kind of data may be captured in real time.

Passenger occupancy and flow may be used to obtain data on mass transit utilization.

Road sensors can count cars on the road or in parking lots.



Cities may also use urban machine vision technologies to determine individual wait times for local government services.


From public thoroughfares and sidewalks, license plate numbers and people's faces may be identified and documented.

Tickets may be issued, and statistics on crime can be gathered.

The information gathered in this manner may be compared to other big datasets on neighborhood income, racial and ethnic mix, utility reliability statistics, and air and water quality indices.



Artificial intelligence (AI) may be used to build or improve city infrastructure.




Signal timing at intersections is adjusted and optimized based on data acquired about traffic movements.


This is known as intelligent traffic signaling, and it has been found to cut travel and wait times, as well as fuel consumption, significantly.
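The core mechanism can be sketched in a few lines. The following is a minimal, hypothetical illustration of adaptive signal timing, not any vendor's actual system: green time is apportioned to each approach in proportion to sensor-reported queue lengths, within safety bounds. All names and numbers are invented.

```python
# Minimal sketch of adaptive (intelligent) traffic signaling.
# Green time for each approach is allocated in proportion to the
# queue lengths reported by road sensors. All numbers are invented
# for illustration; real systems are far more sophisticated.

CYCLE_SECONDS = 90             # total length of one signal cycle
MIN_GREEN, MAX_GREEN = 10, 60  # safety bounds on any one phase


def allocate_green_time(queues: dict[str, int]) -> dict[str, float]:
    """Split the cycle among approaches in proportion to demand."""
    total = sum(queues.values()) or 1  # avoid division by zero
    plan = {}
    for approach, queue_length in queues.items():
        share = CYCLE_SECONDS * queue_length / total
        plan[approach] = min(max(share, MIN_GREEN), MAX_GREEN)
    return plan


# Hypothetical sensor counts: vehicles waiting on each approach.
sensor_counts = {"northbound": 12, "southbound": 4,
                 "eastbound": 20, "westbound": 9}
print(allocate_green_time(sensor_counts))
```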

Smart parking structures assist cars in quickly locating available parking spaces.


Law enforcement is using license plate identification and face recognition technologies to locate suspects and witnesses at crime scenes.

ShotSpotter, a company that triangulates the position of gunfire using a sensor network mounted in special streetlights, tracked more than 75,000 rounds fired in 2018 and alerted police agencies to them.

Information on traffic and pedestrian deaths is also being mined via big data initiatives.

Vision Zero is a global highway safety initiative that aspires to decrease road fatalities to zero.

Data analysis using algorithms has resulted in road safety efforts as well as road redesign that has saved lives.



Cities have also been able to respond more swiftly to severe weather events thanks to ubiquitous sensor technology.


In Seattle, for example, conventional radar data is combined with RainWatch, a network of rain gauges.

Residents get warnings from the system, and maintenance staff are alerted to possible problem places.

Interconnected transportation enabling completely autonomous cars is one long-term aim for smart cities.

At best, today's autonomous cars can monitor their surroundings to make judgments and avoid crashes with other vehicles and numerous road hazards.

However, fully autonomous driving systems are likely to require cars that communicate with one another and with the surrounding infrastructure.

In these systems, collisions are not merely mitigated after the fact but anticipated and prevented.


Planners often mention smart cities in conjunction with smart economy initiatives and foreign investment development.


Data-driven entrepreneurial innovation, along with productivity analysis and evaluation, can be indicators of smart economy initiatives.

Some smart city projects aim to emulate Silicon Valley's success.

Neom, Saudi Arabia, is one such project.

It is a proposed megacity that is expected to cost half a trillion dollars to build.

Artificial intelligence is seen as the new oil in the city's ambitions, despite sponsorship by Saudi Aramco, the state-owned petroleum giant.

Everything will be controlled by interconnected computer equipment and future artificial intelligence decision-making, from home technology to transportation networks and electronic medical record distribution.


AI vision technologies have already been entrusted with one of Saudi Arabia's most significant cultural responsibilities: monitoring the density and pace of pilgrims around the Kaaba in Mecca.

The AI is intended to avert a disaster on the scale of the 2015 Mina Stampede, which claimed the lives of 2,000 pilgrims.

The use of highly data-driven and targeted public services is another trademark of smart city programs.

When information-driven agencies work together, the result is frequently referred to as "smart government" or "e-government."


Open data projects that encourage transparency and shared engagement in local decision-making can be part of smart governance.


Local governments will collaborate with contractors to develop smart utility networks for the provision of electricity, telecommunications, and the internet.

In Barcelona, smart waste management and recycling initiatives link waste bins to the global positioning system and cloud servers, so that collection vehicles are alerted when garbage is ready for pickup.

In some areas, lamp poles have been converted into community Wi-Fi hotspots or mesh network nodes and provide pedestrians with dynamic safety lighting.

Forest City in Malaysia, Eko Atlantic in Nigeria, Hope City in Ghana, Kigamboni New City in Tanzania, and Diamniadio Lake City in Senegal are among the high-tech centres proposed or under development.


Artificial intelligence is predicted to be the brain of the smart city in the future.


Artificial intelligence will personalize city experiences to match the demands of specific inhabitants or tourists.

Through customized glasses or heads-up displays, augmented systems may give virtual signs or navigational information.

Based on previous use and location data, intelligent smartphone agents are already capable of predicting user movements.
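A common baseline for this kind of prediction is a first-order Markov model over visited places: the agent predicts the next location from transition frequencies in the user's history. A minimal sketch, with an invented location history:

```python
# Minimal sketch of next-location prediction with a first-order
# Markov model, the kind of baseline a smartphone agent might use.
# The location history below is invented for illustration.
from collections import Counter, defaultdict

history = ["home", "cafe", "office", "cafe", "office", "gym",
           "home", "cafe", "office", "gym", "home"]

# Count transitions between consecutive locations.
transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1


def predict_next(location: str) -> str | None:
    """Return the most frequent successor of the given location."""
    followers = transitions[location]
    return followers.most_common(1)[0][0] if followers else None


print(predict_next("cafe"))  # -> 'office' in this invented history
```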


Artificial intelligence technologies are used in smart homes in a similar way.


Google Home and other smart hubs now integrate with over 5,000 different types of smart gadgets sold by 400 firms to create intelligent environments in people's homes.

Amazon Echo is Google Home's main rival.

These kinds of technologies can regulate heating, ventilation, and air conditioning, along with lighting, security, and household products like smart pet feeders.

In the early 2000s, game-changing developments in home robotics led to widespread consumer acceptance of iRobot's Roomba vacuum cleaner.

Obsolescence, proprietary protocols, fragmented platforms and interoperability issues, and unequal technological standards have all plagued such systems in the past.


Smart homes, in turn, are pushing machine learning forward.


Smart technology's analytical and predictive capabilities are generally regarded as the backbone of one of the most rapidly developing and disruptive commercial sectors: home automation.

To function properly and keep improving, the smarter connected home of the future needs to collect fresh data on a regular basis.

In buildings with smart components installed, smart home systems continually monitor the interior environment and use aggregated historical data to establish settings and functionalities.
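As a concrete illustration of settings derived from aggregated past data, consider a hypothetical thermostat that learns an hourly setpoint by averaging the occupant's past manual adjustments. The log below is invented:

```python
# Sketch: a smart thermostat deriving hour-by-hour setpoints from
# aggregated historical adjustments. The log entries are invented.
from collections import defaultdict
from statistics import mean

# (hour of day, temperature the occupant manually selected, in C)
adjustment_log = [(7, 21.0), (7, 21.5), (8, 21.0), (18, 22.5),
                  (18, 23.0), (22, 19.0), (22, 19.5)]

by_hour = defaultdict(list)
for hour, temp in adjustment_log:
    by_hour[hour].append(temp)

# Learned schedule: the mean past choice per hour, else a default.
DEFAULT_SETPOINT = 20.0
schedule = {hour: round(mean(temps), 1) for hour, temps in by_hour.items()}


def setpoint(hour: int) -> float:
    return schedule.get(hour, DEFAULT_SETPOINT)


print(setpoint(18))  # 22.8 -> learned evening preference
```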

Smart houses may one day anticipate their owners' requirements, such as automatically changing blinds as the sun and clouds move across the sky.

A smart house may produce a cup of coffee at precisely the correct time, order Chinese takeout, or play music based on the resident's mood as detected by emotion detectors.


Pervasive, sophisticated technologies are used in smart city and household AI systems.


The benefits of smart cities are many.

Smart cities pique people's curiosity because of their promise of increased efficiency and convenience.

It's enticing to live in a city that anticipates and easily fulfills personal wants.

Smart cities, however, are not without their detractors.

Smart homes, if left uncontrolled, have the ability to cause major privacy invasions via continuous video recording and always-on microphones.

According to reports in 2019, Google contractors were able to listen to recordings of users' exchanges with its popular Google Assistant artificial intelligence system.


The influence of smart cities and households on the environment is as yet unknown.


Biodiversity considerations are often ignored in smart city ideas.


Critical habitat is routinely destroyed in order to create space for the new cities that tech entrepreneurs and government officials desire.

Conventional fossil-fuel transportation methods continue to reign supreme in smart cities.

The future viability of smart homes is likewise up in the air.

A recent study in Finland found that improved metering and consumption monitoring did not successfully cut smart home power use.


In reality, numerous smart cities that were built from the ground up are now almost completely empty.


Many years after their initial construction, China's so-called ghost cities, such as Ordos Kangbashi, have attained occupancy in only about one-third of their housing units.

Despite direct, automated vacuum waste collection tubes in individual apartments and building elevators timed to the arrival of residents' automobiles, Songdo, Korea, an early "city in a box," has not lived up to promises.


Smart cities are often portrayed as impersonal, elitist, and costly, which is the polar opposite of what the creators intended.

Songdo exemplifies the smart city trend in many aspects, with its underpinning structure of ubiquitous computing technologies that power everything from transportation systems to social networking channels.

The unrivaled integration and synchronization of services is made possible by the coordination of all devices.

As a result, by turning the city into an electronic panopticon, a surveillance state for observing and controlling residents, the smart city simultaneously weakens the protective advantages of anonymity in public settings.


Authorities studying smart city infrastructures are now fully aware of the computational biases of proactive and predictive policing.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Biometric Privacy and Security; Biometric Technology; Driverless Cars and Trucks; Intelligent Transportation; Smart Hotel Rooms.


References & Further Reading:


Albino, Vito, Umberto Berardi, and Rosa Maria Dangelico. 2015. “Smart Cities: Definitions, Dimensions, Performance, and Initiatives.” Journal of Urban Technology 22, no. 1: 3–21.

Batty, Michael, et al. 2012. “Smart Cities of the Future.” European Physical Journal Special Topics 214, no. 1: 481–518.

Friedman, Avi. 2018. Smart Homes and Communities. Mulgrave, Victoria, Australia: Images Publishing.

Miller, Michael. 2015. The Internet of Things: How Smart TVs, Smart Cars, Smart Homes, and Smart Cities Are Changing the World. Indianapolis: Que.

Shepard, Mark. 2011. Sentient City: Ubiquitous Computing, Architecture, and the Future of Urban Space. New York: Architectural League of New York.

Townsend, Antony. 2013. Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia. New York: W. W. Norton & Company.





Artificial Intelligence - Person of Interest (2011–2016), The CBS Sci-Fi Series

 



The television program Person of Interest ran on CBS for five seasons, from 2011 to 2016.

Although the show's early episodes resembled a serial crime drama, the story developed into science fiction that probed ethical questions surrounding artificial intelligence development.

The show's central concept revolves around a surveillance system known as "The Machine," which was developed for the United States government by billionaire Harold Finch, portrayed by Michael Emerson.

This technology was created largely to prevent terrorist acts, but it has evolved to the point where it can anticipate crimes before they happen.

However, owing to its architecture, it discloses only the social security number of a "person of interest," who might be either the victim or the perpetrator.

Each episode normally centers on a single person-of-interest number that the system has produced.

Although the ensemble increases in size over the seasons, Finch first employs ex-CIA agent John Reese, portrayed by Jim Caviezel, to assist him in investigating and preventing these atrocities.

Person of Interest is renowned for emphasizing and dramatizing ethical issues surrounding both the invention and deployment of artificial intelligence.

Season four, for example, delves deeply into how Finch constructed The Machine in the first place.

Finch took enormous pains to ensure that The Machine had the correct set of values before exposing it to actual data, as shown by flashbacks.

As Finch strove to get the settings just right, viewers were able to see exactly what might go wrong.

In one flashback, The Machine altered its own programming before lying about it.

When these failures arise, Finch deletes the faulty code, mindful that The Machine will otherwise possess unrivaled capabilities.

The Machine quickly responds by overriding its own deletion procedures and even attempting to murder Finch.

"I taught it how to think," Finch says as he reflects on the process.

"All I have to do now is educate it how to be concerned."

Finally, Finch is able to program The Machine successfully with the proper set of values, including the preservation of human life.

The interaction of numerous AI beings is a second key ethical subject that runs through seasons three through five.

In season three, a competing AI surveillance system, Samaritan, is built.

This system does not care about human life in the same way as The Machine does, and as a result, it causes enormous harm and turmoil in order to achieve its goals, which include sustaining the United States' national security and its own survival.

As a result of their differences, Samaritan and The Machine find themselves at odds.

The Machine finally defeats Samaritan, despite the fact that the show implies Samaritan is more powerful owing to its newer technology.

The show was mainly a critical success; nevertheless, declining ratings led to its cancellation after a shortened fifth season of just thirteen episodes.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Biometric Privacy and Security; Biometric Technology; Predictive Policing.



References & Further Reading:



McFarland, Melanie. 2016. “Person of Interest Comes to an End, but the Technology Central to the Story Will Keep Evolving.” Geek Wire, June 20, 2016. https://www.geekwire.com/2016/person-of-interest/.

Newitz, Annalee. 2016. “Person of Interest Remains One of the Smartest Shows about AI on Television.” Ars Technica, May 3, 2016. https://arstechnica.com/gaming/2016/05/person-of-interest-remains-one-of-the-smartest-shows-about-ai-on-television/.



Artificial Intelligence - Who Is Helen Nissenbaum?

 



In her research, Helen Nissenbaum (1954–), who holds a PhD in philosophy, examines the ethical and political consequences of information technology.

She's worked at Stanford University, Princeton University, New York University, and Cornell Tech, among other places.

Nissenbaum has also worked as the primary investigator on grants from the National Security Agency, the National Science Foundation, the Air Force Office of Scientific Research, the United States Department of Health and Human Services, and the William and Flora Hewlett Foundation, among others.

According to Nissenbaum, big data, machine learning, algorithms, and models all shape real-world outcomes.

Her primary issue, which runs across all of these themes, is privacy.

Nissenbaum explores these problems in her 2010 book, Privacy in Context: Technology, Policy, and the Integrity of Social Life, by using the concept of contextual integrity, which views privacy in terms of acceptable information flows rather than merely prohibiting all information flows.

In other words, she's interested in establishing an ethical framework within which data may be obtained and utilized responsibly.

The challenge with developing such a framework, however, is that when many data sources are combined, or aggregated, it becomes possible to learn more about the people from whom the data was obtained than any individual source of data would allow.

Such aggregated data is used to profile consumers, allowing credit and insurance businesses to make judgments based on the information.

Outdated data regulation regimes make such activities even harder to police.

One big issue is that the distinction between monitoring users to construct profiles and targeting adverts to those profiles is blurry.

To make things worse, adverts are often supplied by third-party websites other than the one the user is currently on.

This leads to the ethical dilemma of many hands, a quandary in which numerous parties are involved and it is unclear who is ultimately accountable for a certain issue, such as maintaining users' privacy in this situation.

Furthermore, because so many organizations may receive this information and use it for a variety of tracking and targeting purposes, it is impossible to adequately inform users about how their data will be used and allow them to consent or opt out.

In addition to these issues, the AI systems that use this data are themselves biased.

This bias, however, is a social issue rather than a computational one, which means that much of the scholarly effort focused on resolving computational bias has been misplaced.

As an illustration of this prejudice, Nissenbaum cites Google's Behavioral Advertising system.

When a search contains a name that is traditionally African American, the Google Behavioral Advertising algorithm will show advertising for background checks more often.

This sort of racism isn't encoded in the software itself; rather, it develops through social interaction with the ads, since those searching for traditionally African-American names are more likely to click on background-check links.

Correcting these bias-related issues, according to Nissenbaum, would need considerable regulatory reforms connected to the ownership and usage of big data.

In light of this, and with few data-related legislative changes on the horizon, Nissenbaum has worked to devise measures that can be implemented right now.

Obfuscation, which comprises purposely adding superfluous information that might interfere with data gathering and monitoring procedures, is the major framework she has utilized to construct these tactics.

She claims that this is justified by the uneven power dynamics that have resulted in near-total monitoring.

Nissenbaum and her partners have created a number of useful internet browser plug-ins based on this obfuscation technology.

TrackMeNot was the first of these obfuscating browser add-ons.

This plug-in makes random queries to a number of search engines in an attempt to contaminate the stream of data obtained and prevent search businesses from constructing an aggregated profile based on the user's genuine searches.

This plug-in is designed for people who are dissatisfied with existing data rules and want to take quick action against companies and governments who are aggressively collecting information.
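The underlying mechanism, issuing decoy queries so genuine searches are lost in noise, can be sketched in a few lines. This is a hypothetical illustration of the idea, not TrackMeNot's actual code; the query list and pacing are invented:

```python
# Hypothetical sketch of TrackMeNot-style query obfuscation:
# decoy searches are issued at random intervals so that a profiler
# cannot tell genuine queries from noise. Not the plugin's real code.
import random
import time

DECOY_QUERIES = ["weather radar", "banana bread recipe", "used bicycles",
                 "tide tables", "local election results"]  # invented list


def emit_decoy_query() -> str:
    """Pick and 'send' one decoy query (here we only print it)."""
    query = random.choice(DECOY_QUERIES)
    print(f"decoy search issued: {query!r}")
    return query


if __name__ == "__main__":
    for _ in range(3):
        emit_decoy_query()
        time.sleep(random.uniform(0.1, 0.5))  # randomized pacing
```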

This approach adheres to the obfuscation theory since, rather than concealing the original search phrases, it simply hides them among other search terms, which Nissenbaum refers to as "ghosts."

Adnostic is a prototype Firefox browser plugin aimed at addressing the privacy issues related to online behavioral advertising tactics.

Currently, online behavioral advertising is accomplished by recording a user's activity across numerous websites and then placing the most relevant adverts at those sites.

Multiple websites gather, aggregate, and keep this behavioral data forever.

Adnostic provides a technology that enables profiling and targeting to take place exclusively on the user's computer, with no data exchanged with third-party websites.

Although the user continues to get targeted advertisements, third-party websites do not gather or keep behavioral data.

AdNauseam is yet another obfuscation-based plugin.

This program, which runs in the background, clicks all of the ads on the pages a user visits.

The declared goal of this activity is to contaminate the data stream, making targeting and monitoring ineffective.

Advertisers' expenses will almost certainly rise as a result.

This project proved controversial, and in 2017, it was removed from the Chrome Web Store.

Although workarounds exist to enable users to continue installing the plugin, its loss of availability in the store makes it less accessible to the broader public.

Nissenbaum's work goes into great depth on the ethical challenges surrounding big data and the AI systems that are built on top of it.

Nissenbaum has built realistic obfuscation tools that may be accessed and utilized by anybody interested, in addition to offering specific legislative recommendations to solve troublesome privacy issues.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Biometric Privacy and Security; Biometric Technology; Robot Ethics.


References & Further Reading:


Barocas, Solon, and Helen Nissenbaum. 2009. “On Notice: The Trouble with Notice and Consent.” In Proceedings of the Engaging Data Forum: The First International Forum on the Application and Management of Personal Electronic Information, n.p. Cambridge, MA: Massachusetts Institute of Technology.

Barocas, Solon, and Helen Nissenbaum. 2014. “Big Data’s End Run around Consent and Anonymity.” In Privacy, Big Data, and the Public Good, edited by Julia Lane, Victoria Stodden, Stefan Bender, and Helen Nissenbaum, 44–75. Cambridge, UK: Cambridge University Press.

Brunton, Finn, and Helen Nissenbaum. 2015. Obfuscation: A User’s Guide for Privacy and Protest. Cambridge, MA: MIT Press.

Lane, Julia, Victoria Stodden, Stefan Bender, and Helen Nissenbaum, eds. 2014. Privacy, Big Data, and the Public Good. New York: Cambridge University Press.

Nissenbaum, Helen. 2010. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA: Stanford University Press.


Artificial Intelligence - What Is Biometric Technology?

 


The measuring of a human attribute is referred to as a biometric.

It might be physiological, like fingerprint or face identification, or behavioral, like keystroke pattern dynamics or walking stride length.

Biometric characteristics are defined by the White House National Science and Technology Council's Subcommittee on Biometrics as "measurable biological (anatomical and physiological) and behavioral traits that may be employed for automated recognition" (White House, National Science and Technology Council 2006, 4).

Biometric technologies are "technologies that automatically confirm the identity of people by comparing patterns of physical or behavioral characteristics in real time against enrolled computer records of those patterns," according to the International Biometrics and Identification Association (IBIA) (International Biometrics and Identification Association 2019).

Many different biometric technologies are either in use or being developed.

Fingerprints are now used to access personal smartphones, pay for goods and services, and verify identities for various online accounts and physical facilities.

Fingerprint recognition is the most well-known biometric technology.

Ultrasound, thermal, optical, and capacitive sensors may all be used to acquire fingerprint image collections.

In order to find matches, AI software applications often use minutiae-based matching or pattern matching.
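A toy sketch conveys the minutiae-matching idea: each print is reduced to a set of feature points (ridge endings and bifurcations), and two prints match when enough points align within a small tolerance. The coordinates and thresholds below are invented; production matchers also compare ridge angles and handle rotation and distortion:

```python
# Toy sketch of minutiae-based fingerprint matching: two prints
# "match" when enough minutiae points align within a tolerance.
# Coordinates are invented; real matchers also use ridge angles
# and are invariant to rotation, translation, and skin distortion.
import math

TOLERANCE = 5.0      # max pixel distance for two minutiae to align
MATCH_THRESHOLD = 3  # minimum aligned pairs to declare a match


def count_aligned(print_a, print_b):
    """Count minutiae in print_a with a near neighbor in print_b."""
    aligned = 0
    for (xa, ya) in print_a:
        if any(math.dist((xa, ya), (xb, yb)) <= TOLERANCE
               for (xb, yb) in print_b):
            aligned += 1
    return aligned


enrolled = [(10, 12), (40, 44), (70, 20), (55, 80)]   # stored template
candidate = [(11, 13), (41, 42), (69, 22), (90, 90)]  # live scan

aligned = count_aligned(enrolled, candidate)
print("match" if aligned >= MATCH_THRESHOLD else "no match", aligned)
```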

Vascular pattern identification is now feasible as well: by illuminating the palm, sensors capture images of the veins beneath the skin.

Other common biometrics are based on facial, iris, or voice characteristics.

With facial recognition AI technology, individual identification, verification, detection, and characterization may all be possible.

Detection and characterization processes rarely involve determining an individual's identity.

Although current systems have great accuracy rates, privacy problems arise since a face might be gathered passively, that is, without the subject's awareness.

Iris identification makes use of near-infrared light to extract the iris's distinct structural characteristics.

Retinal technology examines the blood vessels of the retina using a bright light.

The scanned eye is compared to the stored image to evaluate recognition.

Voice recognition is a more advanced technology than voice activation, which identifies speech content.

Voice recognition must be able to identify each individual user.

To date, the technology has not been sufficiently precise to allow for trustworthy identification in many situations.

For security and law enforcement applications, biometric technology has long been accessible.

However, in the private sector, these systems are increasingly being employed as a verification mechanism for authentication that formerly needed a password.

The introduction of Apple's iPhone fingerprint scanner in 2013 raised public awareness.

The company's newer models have shifted to face recognition access, which further normalizes the notion.

Financial services, transportation, health care, facility access, and voting are just a few of the industries where biometric technology is being used.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Biometric Privacy and Security.


Further Reading

International Biometrics and Identity Association. 2019. “The Technologies.” https://www.ibia.org/biometrics/technologies/.

White House. National Science and Technology Council. 2006. Privacy and Biometrics: Building a Conceptual Foundation. Washington, DC: National Science and Technology Council. Committee on Technology. Committee on Homeland and National Security. Subcommittee on Biometrics.




Artificial Intelligence - What Is The State Of Biometric Security And Privacy?

 


Biometrics is a phrase derived from the Greek roots bio (life) and metrikos (measurement).

It is used to examine data in the biological sciences using statistical or mathematical techniques.

In recent years, the phrase has been used in a more precise, high-tech sense to refer to the science of identifying people based on biological or behavioral features, as well as the artificial intelligence technologies that are employed to do so.

For ages, scientists have been measuring human physical characteristics or behaviors in order to identify them afterwards.

The first documented application of biometrics may be found in the works of Portuguese historian Joao de Barros (1496–1570).

De Barros described how Chinese merchants recorded children's palm prints and footprints in ink.

Biometric methods were first used in criminal justice settings in the late nineteenth century.

Alphonse Bertillon (1853–1914), a police clerk in Paris, started gathering bodily measurements (head circumference, finger length, etc.) of prisoners in jail to keep track of repeat criminals, particularly those who used aliases or altered features of their appearance to prevent detection.

Bertillonage was the name given to his system.

It fell out of favor after the 1890s, when it became clear that many people had nearly identical measurements.

Edward Richard Henry (1850–1931), of Scotland Yard, created a significantly more successful biometric technique based on fingerprinting in 1901.

On the tips of people's fingers and thumbs, he measured and categorized loops, whorls, and arches, as well as subcategories of these components.

Fingerprinting is still one of the most often utilized biometric identifiers by law enforcement authorities across the globe.

Fingerprinting systems are expanding in tandem with networking technology, using vast national and international databases as well as computer matching.

In the 1960s and 1970s, the Federal Bureau of Investigation collaborated with the National Bureau of Standards to automate fingerprint identification.

This included scanning existing paper fingerprint cards and creating minutiae feature extraction algorithms and automatic classifiers for comparing electronic fingerprint data.

Because of the high expense of electronic storage, the scanned images of the fingerprints themselves were not kept in digital form, only the classification data and minutiae.

In 1980, the FBI made the M40 fingerprint matching technology operational.

In 1999, the Integrated Automated Fingerprint Identification System (IAFIS) became live.

In 2014, the FBI's Next Generation Identification system, an outgrowth of IAFIS, was used to record palm print, iris, and face identification.

While biometric technology is often seen as a way to boost security at the price of privacy, it may also be utilized to assist retain privacy in specific cases.

Many sorts of health-care employees in hospitals need access to a shared database of patient information.

The Health Insurance Portability and Accountability Act (HIPAA) emphasizes the need to prevent unauthorized individuals from accessing this sensitive data.

For example, the Mayo Clinic in Florida was a pioneer in biometric access to medical records.

In 1997, the clinic started utilizing digital fingerprinting to limit access to patient information.

Today, big data and artificial intelligence recognition software can rapidly identify or authenticate individuals based on voice analysis, face or iris recognition, hand geometry, keystroke dynamics, gait, DNA, and even body odor.
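One common way software combines several such modalities is score-level fusion: each modality produces a similarity score, and a weighted combination is compared to a decision threshold. A minimal sketch, with invented scores, weights, and threshold:

```python
# Sketch of score-level fusion for multimodal biometric
# authentication: per-modality similarity scores (0..1) are
# combined with weights and compared to a decision threshold.
# The scores, weights, and threshold are invented for illustration.

WEIGHTS = {"face": 0.4, "voice": 0.25, "gait": 0.15, "keystroke": 0.2}
ACCEPT_THRESHOLD = 0.7


def authenticate(scores: dict[str, float]) -> bool:
    """Weighted fusion of whichever modality scores are available."""
    available = {m: s for m, s in scores.items() if m in WEIGHTS}
    if not available:
        return False
    total_weight = sum(WEIGHTS[m] for m in available)
    fused = sum(WEIGHTS[m] * s for m, s in available.items()) / total_weight
    return fused >= ACCEPT_THRESHOLD


# A hypothetical session: strong face and voice match, weaker gait.
print(authenticate({"face": 0.92, "voice": 0.81, "gait": 0.55}))
```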

The reliability of DNA fingerprinting has evolved to the point that it is widely recognized by courts.

Even in the absence of further evidence, criminals have been convicted based on DNA findings, while falsely incarcerated prisoners have been exonerated.

While biometrics is frequently employed by law enforcement agencies, courts, and other government agencies, it has also come under fire from the public for infringing on individual privacy rights.

Research on biometric artificial intelligence software at universities, government agencies, and commercial enterprises has risen in tandem with actual and perceived criminal and terrorist threats.

National Bank United used technology developed by biometric experts Visionics and Keyware Technologies to install iris recognition identification systems on three ATMs in Texas as an experiment in 1999.

At Super Bowl XXXV in Tampa, Florida, Viisage Technology deployed its FaceFINDER system, an automatic face recognition device.

As fans entered the stadium, the technology scanned their faces and matched them to a database of 1,700 known criminals and terrorists.

Officials claimed to have identified a limited number of offenders, but there have been no big arrests or convictions as a result of such identifications.

At the time, the indiscriminate use of automatic face recognition sparked a lot of debate.

The game was even dubbed the "Snooper Bowl."

Following the terrorist events of September 11, 2001, a public policy discussion in the United States focused on the adoption of biometric technology for airport security.

Following 9/11, polls revealed that Americans were prepared to give up significant portions of their privacy in exchange for increased security.

Biometric technologies were already widely used in other nations, such as the Netherlands.

The Privium program for passenger iris scan verification has been in effect at Schiphol Airport since 2001.

In 2015, the Transportation Security Administration (TSA) of the United States started testing biometric techniques for identification verification.

In 2019, Delta Air Lines, in collaboration with US Customs and Border Protection, provided customers at Atlanta's Maynard Jackson International Terminal the option of face recognition boarding.

Thanks to the technology, passengers can obtain their boarding passes, check their own bags, and move through TSA checkpoints and gates without interruption.

Only 2 percent of travelers chose to opt out during the initial launch.

Biometric authentication systems are currently being used by financial institutions in routine commercial transactions.

They are already widely used to secure personal smart phone access.

As smart home gadgets linked to the internet need support for safe financial transactions, intelligent security will become increasingly more vital.

Opinions on biometrics often shift in response to changing circumstances and settings.

People who support the use of face recognition technology at airports to make air travel safer may be opposed to digital fingerprinting at their bank.

Some individuals believe that private companies' use of biometric technology dehumanizes them, treating them as goods rather than persons and following them in real time.

Community policing is often recognized as an effective technique to create connections between law enforcement personnel and the communities they police at the local level.

However, other opponents argue that biometric monitoring shifts the emphasis away from community formation and toward governmental socio-technical control.

The importance of context, on the other hand, cannot be overstated.

Biometrics in the workplace may be seen as a leveler, since it subjects white-collar employees to the same level of scrutiny as blue-collar workers.

For usage in cloud security systems, researchers are starting to build video analytics AI software and smart sensors.

In real-time monitoring of workplaces, public spaces, and residences, these systems can detect known persons, items, sounds, and movements.

They may also be programmed to warn users when they are in the presence of strangers.

Artificial intelligence algorithms that were once used to create biometric systems are now being utilized to thwart them.

Generative adversarial networks (GANs), for example, can replicate human users of network technologies and applications.

GANs have been used to build fictitious people's faces using biometric training data.

GANs are typically made up of a generator system that creates each new picture and a critic (discriminator) system that iteratively compares the fake face to original photographs.
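A compact sketch of that generator-critic loop follows. To stay small and runnable it learns a simple one-dimensional distribution rather than faces, and it assumes PyTorch is installed; it illustrates the GAN idea, not any company's production system:

```python
# Minimal GAN sketch (PyTorch): a generator learns to mimic samples
# from a target distribution while a critic (discriminator) learns
# to tell real samples from fakes. Here the "data" is a 1-D
# Gaussian rather than face images, to keep the example tiny.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
critic = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                       nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0  # target: mean 4, std 1.5
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Critic step: score real samples as 1, generated samples as 0.
    d_loss = (bce(critic(real), torch.ones(64, 1)) +
              bce(critic(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: fool the critic into scoring fakes as real.
    g_loss = bce(critic(generator(noise)), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

samples = generator(torch.randn(1000, 8))
# Should drift toward roughly mean 4 and std 1.5 as training succeeds.
print(f"generated mean={samples.mean().item():.2f}, "
      f"std={samples.std().item():.2f}")
```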

In 2020, the firm Icons8 claimed that it could make a million phony headshots in a single day using just seventy human models.

The firm distributes stock images of the headshots made using its proprietary StyleGAN technology.

A university, a dating app, and a human resources agency have all been clients.

Rosebud AI distributes GAN-generated photographs to online shopping sites and small companies that can't afford to pay pricey models and photographers.

Deepfake technology has been used to perpetrate hoaxes and misrepresentations, make fake news clips, and conduct financial fraud.

It uses machine learning algorithms to create convincing but counterfeit videos.

Facebook profiles with deepfake profile photographs have been used to boost political campaigns on social media.

Deepfake hacking is possible on smartphones with face recognition locks.

Deepfake technology may also be used for good.

Such technology has been utilized in films to make performers seem younger in flashbacks or other similar scenarios.

Digital technology was also employed in films like Rogue One: A Star Wars Story (2016) to incorporate the late Peter Cushing (1913–1994), who portrayed the same role from the original 1977 Star Wars picture.

Face-swapping is available to recreational users via a number of software apps.

Users may submit a selfie and adjust their hair and facial expression with FaceApp.

In addition, the computer may mimic the aging of a person's features.

Zao is a deepfake app that takes a single picture and swaps the user's face onto stars in hundreds of movie and television video clips.

Deepfake algorithms are now also being used to detect deepfake videos themselves.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Biometric Technology.


Further Reading


Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” NIPS ’14: Proceedings of the 27th International Conference on Neural Information Processing Systems 2 (December): 2672–80.

Hopkins, Richard. 1999. “An Introduction to Biometrics and Large-Scale Civilian Identification.” International Review of Law, Computers & Technology 13, no. 3: 337–63.

Jain, Anil K., Ruud Bolle, and Sharath Pankanti. 1999. Biometrics: Personal Identification in Networked Society. Boston: Kluwer Academic Publishers.

Januškevič, Svetlana N., Patrick S.-P. Wang, Marina L. Gavrilova, Sargur N. Srihari, and Mark S. Nixon. 2007. Image Pattern Recognition: Synthesis and Analysis in Biometrics. Singapore: World Scientific.

Nanavati, Samir, Michael Thieme, and Raj Nanavati. 2002. Biometrics: Identity Verification in a Networked World. New York: Wiley.

Reichert, Ramón, Mathias Fuchs, Pablo Abend, Annika Richterich, and Karin Wenz, eds. 2018. Rethinking AI: Neural Networks, Biometrics and the New Artificial Intelligence. Bielefeld, Germany: Transcript-Verlag.

Woodward, John D., Jr., Nicholas M. Orlans, and Peter T. Higgins. 2001. Biometrics: Identity Assurance in the Information Age. New York: McGraw-Hill.




Artificial Intelligence - What Is Algorithmic Error and Bias?

 




Bias in algorithmic systems has emerged as one of the most pressing issues surrounding artificial intelligence ethics.

Algorithmic bias refers to a computer system's recurrent and systemic flaws that discriminate against certain groups or people.

It's crucial to remember that bias isn't necessarily a bad thing: it may be deliberately built into a system in order to correct an unjust system or reality.

Bias causes problems when it leads to an unjust or discriminating conclusion that affects people's lives and chances.

Individuals and communities that are already vulnerable in society are often most at risk from algorithmic bias and error.

As a result, algorithmic prejudice may exacerbate social inequality by restricting people's access to services and goods.

Algorithms are increasingly being utilized to guide government decision-making, notably in the criminal justice sector for sentencing and bail, as well as in migration management using biometric technology like face and gait recognition.

When a government's algorithms are shown to be biased, individuals may lose faith in the AI system as well as its usage by institutions, whether they be government agencies or private businesses.

There have been several incidents of algorithmic prejudice during the past few years.

A high-profile example is Facebook's targeted advertising, which is based on algorithms that identify which demographic groups a given advertisement should be viewed by.

Indeed, according to one study, job advertisements for janitors and related occupations on Facebook are often targeted at lower-income groups and minorities, while ads for nurses or secretaries are targeted at women (Ali et al. 2019).

This involves successfully profiling persons in protected classifications, such as race, gender, and economic bracket, in order to maximize the effectiveness and profitability of advertising.

Another well-known example is Amazon's algorithm for sorting and evaluating resumes, intended to increase efficiency and, ostensibly, impartiality in the recruiting process.

Amazon's algorithm was trained using data from the company's previous recruiting practices.

However, once the algorithm was implemented, it became evident that it was prejudiced against women, with résumés that contained the terms "women" or "gender" or indicated that the candidate had attended a women's institution receiving worse rankings.

Little could be done to address the algorithm's prejudices since it was trained on Amazon's prior recruiting practices.

While the algorithm was plainly prejudiced, this example demonstrates how such biases may mirror social prejudices, including, in this instance, Amazon's deeply established biases against employing women.

Indeed, bias in an algorithmic system may develop in a variety of ways.

Algorithmic bias occurs when a group of people and their lived experiences are not taken into consideration while the algorithm is being designed.

This can happen at any point during the algorithm development process, from collecting data that isn't representative of all demographic groups to labeling data in ways that reproduce discriminatory profiling to the rollout of an algorithm that ignores the differential impact it may have on a specific group.
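One simple audit that can surface such problems before rollout is a demographic parity (disparate impact) check, comparing the rate of favorable outcomes across groups. A minimal sketch with invented predictions and group labels:

```python
# Minimal sketch of a demographic parity audit: compare the rate of
# favorable outcomes across groups and flag a large gap. Predictions
# and group labels are invented; the 0.8 ratio echoes the common
# "four-fifths rule" used in US employment-discrimination analysis.

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0]   # 1 = favorable
groups      = ["a", "a", "a", "a", "a", "a",
               "b", "b", "b", "b", "b", "b"]


def favorable_rate(group: str) -> float:
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)


rate_a, rate_b = favorable_rate("a"), favorable_rate("b")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("warning: possible disparate impact")
```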

In recent years, partly in response to significant publicity around algorithmic biases, there has been a proliferation of policy documents addressing the ethical responsibilities of state and non-state bodies that use algorithmic processing, aiming to ensure against unfair bias and other negative effects (Jobin et al. 2019).

The European Union's "Ethics Guidelines for Trustworthy AI," issued in 2019, is one of the most important guidelines in this area.

The EU statement sets forth seven principles for fair and ethical AI and algorithmic processing regulation.

Furthermore, with the adoption of the General Data Protection Regulation (GDPR) in 2018, the European Union has been in the forefront of legislative responses to algorithmic processing.

According to the GDPR, which applies in the first instance to the processing of all personal information inside the EU, a corporation may be fined up to 4 percent of its annual worldwide turnover if it uses an algorithm found to be biased on the basis of race, gender, or another protected category.

The difficulty of determining where a bias occurred and what dataset caused prejudice is a persisting challenge for algorithmic processing regulation.

This is sometimes referred to as the algorithmic black box problem: an algorithm's deep data-processing layers are so intricate and numerous that a human cannot comprehend them.

One response, based on the GDPR's right to an explanation for those subject to automated decisions, has been to identify where the bias occurred via counterfactual explanations: different data is fed into the algorithm to observe where the unequal results emerge (Wachter et al. 2018).
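The probing mechanics can be illustrated with a toy model: hold an applicant's features fixed, vary one attribute, and observe where the decision flips. The scoring rule below is invented purely for illustration:

```python
# Toy sketch of counterfactual probing: vary one input attribute at a
# time and observe whether a model's decision flips. The scoring rule
# here is invented purely to demonstrate the mechanics.

def loan_model(applicant: dict) -> str:
    """A stand-in 'black box' credit model (invented logic)."""
    score = (0.5 * applicant["income"] / 1000
             + 2.0 * applicant["years_employed"])
    return "approve" if score >= 40 else "deny"


applicant = {"income": 52000, "years_employed": 3}
baseline = loan_model(applicant)

# Probe: what is the smallest income change that flips the decision?
for income in range(40000, 90001, 5000):
    probe = dict(applicant, income=income)
    decision = loan_model(probe)
    if decision != baseline:
        print(f"decision flips to {decision!r} at income {income}")
        break
else:
    print("no flip found in the probed range")
```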

In addition to legal and legislative instruments for tackling algorithmic bias, technical solutions to the issue include building synthetic datasets that seek to repair naturally occurring biases in datasets or to provide an unbiased and representative dataset.

While such channels for redress are vital, one of the most comprehensive solutions to the issue is to have far more diverse human teams designing, building, deploying, and monitoring the effects of algorithms.

A mix of life experiences within diverse teams makes it more likely that prejudices will be discovered and corrected sooner.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Biometric Technology; Explainable AI; Gender and AI.

Further Reading

Ali, Muhammed, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. 2019. “Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Skewed Outcomes.” In Proceedings of the ACM on Human-Computer Interaction, vol. 3, CSCW, Article 199 (November). New York: Association for Computing Machinery.

European Union. 2018. “General Data Protection Regulation (GDPR).” https://gdpr-info.eu/.

European Union. 2019. “Ethics Guidelines for Trustworthy AI.” https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (September): 389–99.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2018. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harvard Journal of Law & Technology 31, no. 2 (Spring): 841–87.

Zuboff, Shoshana. 2018. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.



