
Artificial Intelligence - Machine Learning Regressions.

 


"Machine learning," a phrase originated by Arthur Samuel in 1959, is a kind of artificial intelligence that produces results without requiring explicit programming.

Instead, the system learns from a database on its own and improves over time.

Machine learning techniques have a wide range of applications (e.g., computer vision, natural language processing, autonomous gaming agents, classification, and regression) and are used in practically every sector (e.g., tech, finance, research, education, gaming, and navigation) owing to their robustness and ease of implementation.

Despite their vast range of applications, machine learning algorithms can be broadly classified into three learning types: supervised, unsupervised, and reinforcement.

Machine learning regressions are an example of supervised learning: their algorithms are trained on data labeled with continuous numerical outputs.

The quantity of training data and the validation criteria required to suitably train and verify a regression algorithm depend on the problem being addressed.

For data with comparable input structures, the newly developed predictive models give inferred outputs.

These models are not static: they can be updated regularly with new training data or by supplying the actual correct outputs for previously unlabeled inputs.

Despite the generalizability of machine learning methods, no single algorithm is optimal for all regression problems.

Many factors must be considered when choosing the best machine learning regression method for a given problem (e.g., programming language, available libraries, algorithm type, data size, and data structure).





Some machine learning programs employ single- or multi-variable linear regression approaches, much like classic statistical methods.

These models represent the connections between a single or several independent feature variables and a dependent target variable.

The models provide linear representations of the combined input variables as their output.

These models are only suitable for small, noncomplex datasets.

Polynomial regressions may be used with nonlinear data.

However, this requires the programmer to already know the data's structure, which is often precisely what machine learning models are meant to discover in the first place.

These methods are unlikely to be appropriate for most real-world data, but they give a basic starting point and might provide users with models that are straightforward to understand.
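As a concrete illustration, a minimal scikit-learn sketch of fitting linear and polynomial regressions might look like the following; the data is synthetic and invented purely for this example.

```python
# Minimal sketch: linear and polynomial regression with scikit-learn.
# All data here is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Noiseless linear data (y = 3x + 2), so the fit recovers the coefficients
X = np.arange(10).reshape(-1, 1)
y = 3 * X.ravel() + 2

linear = LinearRegression().fit(X, y)
print(linear.coef_[0], linear.intercept_)  # recovers slope 3 and intercept 2

# Nonlinear data can be handled by expanding features polynomially first
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
quadratic = LinearRegression().fit(X_poly, X.ravel() ** 2)
```

Because the relationship is exactly linear, the fitted coefficients match the generating formula; with noisy real-world data, they would only approximate it.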

Decision trees, as the name implies, are tree-like structures that map the input features/attributes of programs to determine the eventual output goal.

A decision tree algorithm starts at the root node (an input variable); the answer to that node's condition splits into edges.

A node that no longer splits is called a leaf; a node that continues to split is called an internal node.

For example, age, weight, and family diabetic history might be used as input factors in a dataset of diabetic and nondiabetic patients to estimate the likelihood of a new patient developing diabetes.

The age variable might be used as the root node (e.g., age 40), with the dataset being divided into those who are more than or equal to 40 and those who are 39 and younger.

If the next internal node along the 40-and-over branch asks whether a parent has or had diabetes, and the leaf for an affirmative answer estimates a 60 percent likelihood of this patient developing diabetes, the model provides that leaf as the final output.

This is a very basic decision tree that demonstrates the decision-making process.

Thousands of nodes may readily be found in a decision tree.
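The toy diabetes tree above can be sketched with scikit-learn. The patient records below are fabricated solely to show the mechanics and carry no clinical meaning.

```python
# Hedged sketch of the decision tree described above, using scikit-learn.
# The patient records are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# features: [age, parent has/had diabetes (0/1)]; label: 1 = diabetic
X = [[55, 1], [62, 1], [45, 0], [38, 0], [30, 0], [70, 1], [25, 0], [50, 1]]
y = [1, 1, 0, 0, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A new 58-year-old patient whose parent had diabetes
print(tree.predict([[58, 1]]))  # predicts the diabetic class (1)
```

Because this toy data is perfectly separable, a depth of 2 suffices; real clinical data would need far more records and careful validation.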

Random forest algorithms are ensembles of decision trees.

They are made up of hundreds of decision trees, and their final output is the average of the individual trees' outputs.

Although decision trees and random forests are excellent at learning very complex data structures, they are prone to overfitting.

With adequate pruning (e.g., setting minimum sample counts for splits and leaves) and large enough random forests, overfitting can be reduced.

Machine learning techniques inspired by the neural connections of the human brain are known as neural networks.


Neurons are the basic unit of neural network algorithms, much as they are in the human brain, and they are organized into numerous layers.

The input layer contains the input variables, the hidden layers contain the neurons (there may be many hidden layers), and the output layer contains the final neuron.

A single neuron in a feedforward process 

(a) takes the input feature variables, 

(b) multiplies the feature values by a weight, 

(c) adds the resultant feature products, together with a bias variable, and 

(d) passes the sums through an activation function, most often a sigmoid function.


The weights and biases of each neuron are adjusted using partial derivatives propagated backward through the network's layers.

This practice is termed backpropagation.


The output of the activation function of a single neuron is distributed to all neurons in the next hidden layer or final output layer.

As a result, the final neuron's output is the predicted value.
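Steps (a) through (d) for a single neuron can be sketched in a few lines of NumPy; the inputs, weights, and bias below are arbitrary illustrative values.

```python
# Minimal single-neuron feedforward pass. All values are arbitrary.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # (a) input feature values
w = np.array([0.4, 0.1, -0.6])   # (b) one weight per input feature
b = 0.2                          # bias variable

z = np.dot(w, x) + b             # (c) weighted sum plus the bias
a = sigmoid(z)                   # (d) activation output, passed onward
print(z, a)
```

The scalar `a` would be fed to every neuron in the next layer; a full network simply repeats this computation across many neurons and layers.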

Because neural networks are exceptionally adept at learning exceedingly complicated variable associations, programmers may spend less time restructuring their data.

Neural network models, on the other hand, are difficult to interpret due to their complexity, and the intervariable relationships are largely hidden.

Neural networks perform best on extremely large datasets, but they require meticulous hyperparameter tuning and considerable processing power.

For data scientists attempting to comprehend massive datasets, machine learning has become the standard technique.

Researchers are constantly improving the accuracy and usability of machine learning systems.

However, machine learning algorithms are only as useful as the data used to train the model.

Poor data produces dramatically erroneous outcomes, while biased data combined with a lack of knowledge deepens societal disparities.

 



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Algorithmic Bias and Error; Automated Machine Learning; Deep Learning; Explainable AI; Gender and AI.



Further Reading:


Garcia, Megan. 2016. “Racist in the Machine: The Disturbing Implications of Algorithmic Bias.” World Policy Journal 33, no. 4 (Winter): 111–17.

Géron, Aurélien. 2019. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. Sebastopol, CA: O’Reilly.



Artificial Intelligence - Gender and Artificial Intelligence.

 



Artificial intelligence and robots are often thought to be sexless and genderless in today's society, but this is not the case.

In fact, humans encode gender and stereotypes into artificial intelligence systems in much the way that gender is woven into language and culture.

The data used to train artificial intelligences has a gender bias.

Biased data may cause significant discrepancies in computer predictions and conclusions.

These differences would be said to be discriminating in humans.

AIs are only as good as the data that people provide to machine learning systems, and only as ethical as the programmers who create and supervise them.

Machines presume gender prejudice is normal (if not acceptable) human behavior when individuals exhibit it.

When utilizing numbers, text, graphics, or voice recordings to teach algorithms, bias might emerge.

Machine learning is the use of statistical models to evaluate and categorize large amounts of data in order to generate predictions.

Deep learning is the use of neural network topologies intended to imitate the human brain.

Data is labeled using classifiers based on previous patterns.

Classifiers have a lot of power.

By studying data from automobiles visible in Google Street View, they can precisely forecast income levels and political leanings of neighborhoods and cities.

The language individuals employ reveals gender prejudice.

This bias may be apparent in the names of items as well as how they are ranked in significance.

Descriptions of men and women are skewed, beginning with the frequency with which their respective titles are used and whether they are referred to as men and women versus boys and girls.

The analogies and words employed are skewed as well.

Biased AI may influence whether or not individuals of particular genders or ethnicities are targeted for certain occupations, whether or not medical diagnoses are correct, whether or not they are able to acquire loans, and even how exams are scored.

"Woman" and "girl" are more often associated with the arts than with mathematics in AI systems.

Similar biases have been discovered in Google's AI systems for finding employment prospects.



Facebook and Microsoft's algorithms regularly correlate pictures of cooking and shopping with female activity, whereas sports and hunting are associated with masculine activity.

Researchers have discovered instances when gender prejudices are purposefully included into AI systems.

Men, for example, are more often provided opportunities to apply for highly paid and sought-after positions on job sites than women.

Female-sounding names for digital assistants on smartphones include Siri, Alexa, and Cortana.

According to Alexa's creator, the name came from negotiations with Amazon CEO Jeff Bezos, who wanted a virtual assistant with the personality and gender of the Enterprise starship computer from the Star Trek television program, which has a female voice.

Deborah Harrison, the Cortana project's head, claims that the assistant's female voice arose from studies demonstrating that people respond better to female voices.

However, when BMW introduced a female voice to its in-car GPS route planner, it experienced instant backlash from males who didn't want their vehicles to tell them what to do.

Female voices should seem empathic and trustworthy, but not authoritative, according to the company.

Affectiva, a startup that specializes in artificial intelligence, utilizes photographs of six million people's faces as training data to attempt to identify their underlying emotional states.

The startup is now collaborating with automakers to utilize real-time footage of drivers to assess whether or not they are weary or furious.

The automobile would advise these drivers to pull over and take a break.

However, the organization has discovered that women seem to "laugh more" than males, which complicates efforts to accurately estimate the emotional states of normal drivers.

The same biases can be found in hardware.

A disproportionate percentage of female robots are created by computer engineers, who are still mostly male.

NASA's Valkyrie humanoid robot has breasts.

Jia, a shockingly human-looking robot created at China's University of Science and Technology, has long wavy black hair, pale complexion, and pink lips and cheeks.

She maintains her eyes and head inclined down when initially spoken to, as though in reverence.

She wears a tight gold gown that is slender and busty.

"Yes, my lord, what can I do for you?" she says as a welcome.

"Don't get too near to me while you're taking a photo," Jia says when asked to snap a picture.

It will make my face seem chubby." In popular culture, there is a strong prejudice against female robots.

Fembots in the 1997 film Austin Powers discharged bullets from their breast cups, weaponizing female sexuality.

The majority of robots in music videos are female robots.

Duran Duran's "Electric Barbarella" was the first song accessible for download on the internet.

Bjork's video "The Girl And The Robot" gave birth to the archetypal white-sheathed robot seen today in so many places.

Marina and the Diamonds' protest that "I Am Not a Robot" is met by Hoodie Allen's fast answer that "You Are Not a Robot." In "The Ghost Inside," by the Broken Bells, a female robot sacrifices plastic body parts to pay tolls and reclaim paradise.

The skin of Lenny Kravitz's "Black Velveteen" is titanium.

Hatsune Miku and Kagamine Rin are anime-inspired holographic vocaloid singers.

Daft Punk is the notable exception, where robot costumes conceal the genuine identity of the male musicians.

Sexy robots are the principal love interests in films like Metropolis (1927), The Stepford Wives (1975), Blade Runner (1982), Ex Machina (2014), and Her (2013), as well as television programs like Battlestar Galactica and Westworld.

Meanwhile, "killer robots," or deadly autonomous weapons systems, are hypermasculine.

Atlas, Helios, and Titan are examples of rugged military robots developed by the Defense Advanced Research Projects Agency (DARPA).

Achilles, Black Knight, Overlord, and Thor PRO are some of the names given to self-driving automobiles.

The HAL 9000 computer embedded in the spacecraft Discovery in 2001: A Space Odyssey (1968), the most renowned autonomous vehicle of all time, is masculine and deadly.

In the field of artificial intelligence, there is a clear gender disparity.

The head of the Stanford Artificial Intelligence Lab, Fei-Fei Li, revealed in 2017 that her team was mostly made up of "men in hoodies" (Hempel 2017).

Women make up just approximately 12% of the researchers who speak at major AI conferences (Simonite 2018b).

In computer and information sciences, women have 19% of bachelor's degrees and 22% of PhD degrees (NCIS 2018).

Women now have a lower proportion of bachelor's degrees in computer science than they did in 1984, when they had a peak of 37 percent (Simonite 2018a).

This is despite the fact that the earliest "computers," as shown in the film Hidden Figures (2016), were women.

There is significant dispute among philosophers over whether un-situated, gender-neutral knowledge may exist in human society.

Users projected gender preferences on Google and Apple's unsexed digital assistants even after they were launched.

White males developed centuries of professional knowledge, which was eventually unleashed into digital realms.

Will machines be able to build and employ rules based on impartial information for hundreds of years to come? In other words, does scientific knowledge have a gender? Is it masculine or feminine? Alison Adam is a Science and Technology Studies researcher who is more concerned with the gender of the ideas produced than with the gender of the people involved.

Sage, a British corporation, recently employed a "conversation manager" tasked with building a gender-neutral digital assistant, eventually dubbed "Pegg." To guide its programmers, the company has also formalized "five key principles" in an "ethics of code" paper.

According to Sage CEO Kriti Sharma, "by 2020, we'll spend more time talking to machines than our own families," thus getting technology right is critical.

Microsoft recently established Aether (AI and Ethics in Engineering and Research), an internal ethics panel for artificial intelligence.

Gender Swap is a project that employs a virtual reality system as a platform for embodiment experience, a kind of neuroscience in which users may sense themselves in a new body.

Human partners utilize the immersive Head Mounted Display Oculus Rift and first-person cameras to generate the brain illusion.

Both users coordinate their motions to generate this illusion.

The embodiment experience will not work if one partner's movements do not match the other's.

This means that every move they make together must be agreed upon by both users.

On a regular basis, new causes of algorithmic gender bias are discovered.

Joy Buolamwini, an MIT computer science graduate student, discovered gender and racial prejudice in the way AI detected individuals' looks in 2018.

With the help of other researchers, she discovered that the datasets behind skin-type classification systems using the dermatologist-approved Fitzpatrick scale were primarily made up of lighter-skinned subjects (up to 86 percent).

The researchers developed a skin type system based on a rebalanced dataset and used it to compare three gender categorization systems available off the shelf.

They discovered that darker-skinned women are the most misclassified group in all three commercial systems.
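The disparity itself is a simple per-group accuracy comparison. The counts below are hypothetical illustrations, not the study's actual figures.

```python
# Hedged sketch of computing a per-group accuracy gap. All counts are
# invented; see the Gender Shades paper for the real reported figures.
results = {  # group -> (correct classifications, total examples)
    "lighter-skinned men":  (990, 1000),
    "darker-skinned women": (650, 1000),
}

rates = {group: correct / total for group, (correct, total) in results.items()}
gap = rates["lighter-skinned men"] - rates["darker-skinned women"]
print(rates, gap)  # the gap quantifies the disparity between subgroups
```

Audits like Gender Shades report exactly this kind of subgroup breakdown, which a single overall accuracy number would hide.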

Buolamwini founded the Algorithmic Justice League, a group that fights unfairness in decision-making software.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 

Algorithmic Bias and Error; Explainable AI.


Further Reading:


Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research: Conference on Fairness, Accountability, and Transparency 81: 1–15.

Hempel, Jessi. 2017. “Melinda Gates and Fei-Fei Li Want to Liberate AI from ‘Guys With Hoodies.’” Wired, May 4, 2017. https://www.wired.com/2017/05/melinda-gates-and-fei-fei-li-want-to-liberate-ai-from-guys-with-hoodies/.

Leavy, Susan. 2018. “Gender Bias in Artificial Intelligence: The Need for Diversity and Gender Theory in Machine Learning.” In GE ’18: Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, 14–16. New York: Association for Computing Machinery.

National Center for Education Statistics (NCIS). 2018. Digest of Education Statistics. https://nces.ed.gov/programs/digest/d18/tables/dt18_325.35.asp.

Roff, Heather M. 2016. “Gendering a Warbot: Gender, Sex, and the Implications for the Future of War.” International Feminist Journal of Politics 18, no. 1: 1–18.

Simonite, Tom. 2018a. “AI Is the Future—But Where Are the Women?” Wired, August 17, 2018. https://www.wired.com/story/artificial-intelligence-researchers-gender-imbalance/.

Simonite, Tom. 2018b. “AI Researchers Fight Over Four Letters: NIPS.” Wired, October 26, 2018. https://www.wired.com/story/ai-researchers-fight-over-four-letters-nips/.

Søraa, Roger Andre. 2017. “Mechanical Genders: How Do Humans Gender Robots?” Gender, Technology, and Development 21, no. 1–2: 99–115.

Wosk, Julie. 2015. My Fair Ladies: Female Robots, Androids, and Other Artificial Eves. New Brunswick, NJ: Rutgers University Press.



Artificial Intelligence - What Is Explainable AI Or XAI?

 




Explainable AI (XAI) refers to approaches or design decisions used in automated systems so that artificial intelligence and machine learning produce outputs with a logic that humans can understand and explain.




The extensive usage of algorithmically assisted decision-making in social situations has raised considerable concerns about the possibility of accidental prejudice and bias being encoded in these decisions.




Furthermore, the application of machine learning in domains that need a high degree of accountability and transparency, such as medicine or law enforcement, emphasizes the importance of outputs that are easy to understand.

The fact that a human operator is not involved in automated decision-making does not rule out the possibility of human bias being embedded in the outcomes produced by machine computation.




Artificial intelligence's already limited accountability is exacerbated by the lack of due process and human logic.




The consequences of algorithmically driven processes are often so complicated that even their engineering designers are unable to understand or predict them.

The black box of AI is a term that has been used to describe this situation.

To address these flaws, the General Data Protection Regulation (GDPR) of the European Union contains a set of regulations that provide data subjects the right to an explanation.

The most relevant provisions are Article 22, which deals with automated individual decision-making, and Articles 13, 14, and 15, which deal with transparency rights in relation to automated decision-making and profiling.


When a decision based purely on automated processing has "legal implications" or "similarly substantial" effects on a person, Article 22 of the GDPR reserves a "right not to be subject to a decision based entirely on automated processing" (GDPR 2016).





It also provides three exceptions to this right: when automated processing is required for a contract, when a member state of the European Union has passed a law establishing an exemption, or when a person has expressly consented to algorithmic decision-making.

Even if an exemption to Article 22 applies, the data subject has the right to "request human involvement on the controller's side, to voice his or her point of view, and to challenge the decision" (GDPR 2016).





Articles 13 through 15 of the GDPR provide a number of notification rights when personal data is obtained directly from the data subject (Article 13) or from third parties (Article 14), as well as the right to access such data at any time (Article 15), including “meaningful information about the logic involved” (GDPR 2016).

Recital 71 protects the data subject's right to "receive an explanation of the conclusion taken following such evaluation and to contest the decision" where an automated decision is made that has legal consequences or has a comparable impact on the person (GDPR 2016).





Recital 71 is not legally binding, but it does give advice on how to interpret relevant provisions of the GDPR.

The question of whether a mathematically interpretable model is sufficient to account for an automated judgment and provide transparency in automated decision-making is gaining traction.

Ex-ante/ex-post auditing is an alternative technique that focuses on the processes around machine learning models rather than the models themselves, which may be incomprehensible and counterintuitive.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Algorithmic Bias and Error; Deep Learning.


Further Reading:


Brkan, Maja. 2019. “Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond.” International Journal of Law and Information Technology 27, no. 2 (Summer): 91–121.

GDPR. 2016. European Union. https://gdpr.eu/.

Goodman, Bryce, and Seth Flaxman. 2017. “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation.’” AI Magazine 38, no. 3 (Fall): 50–57.

Kaminski, Margot E. 2019. “The Right to Explanation, Explained.” Berkeley Technology Law Journal 34, no. 1: 189–218.

Karanasiou, Argyro P., and Dimitris A. Pinotsis. 2017. “A Study into the Layers of Automated Decision-Making: Emergent Normative and Legal Aspects of Deep Learning.” International Review of Law, Computers & Technology 31, no. 2: 170–87.

Selbst, Andrew D., and Solon Barocas. 2018. “The Intuitive Appeal of Explainable Machines.” Fordham Law Review 87, no. 3: 1085–1139.


