
Artificial Intelligence - Milind Tambe

 



Milind Tambe (1965–) is a pioneer in artificial intelligence research for social good.

AI is frequently applied to societal problems in areas such as public health, education, safety and security, housing, and environmental protection.

Tambe has developed software that protects endangered species in game reserves, social-network algorithms that promote healthy eating habits, and applications that track social ills and community problems and recommend interventions to improve well-being.

Tambe grew up in India, where the robot novels of Isaac Asimov and the original Star Trek series (1966–1969) inspired him to study artificial intelligence.

Carnegie Mellon University's School of Computer Science awarded him his PhD.

His early research focused on creating AI software for security.

The 2006 Mumbai commuter train bombings deepened his interest in what artificial intelligence could contribute to this field.

His doctoral research yielded important game-theoretic insights into randomization and collaboration.

Tambe's ARMOR program generates risk assessments and randomizes the scheduling of human security patrols and police checkpoints so that adversaries cannot exploit predictable routines.
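
The core mechanism can be sketched in a few lines: draw each day's assignment from a randomized (mixed) strategy so that no exploitable pattern emerges. A minimal sketch follows; the terminal names and payoff values are invented for illustration, and simple proportional weighting stands in for the Bayesian Stackelberg game optimization that ARMOR actually solves.

```python
import random

# Hypothetical checkpoints with invented adversary payoff values.
targets = {"Terminal A": 8, "Terminal B": 5, "Terminal C": 2}

def mixed_strategy(payoffs):
    """Cover high-value targets more often, in proportion to their value.

    Deployed systems like ARMOR derive this distribution by solving a
    Bayesian Stackelberg game; proportional weighting is a stand-in.
    """
    total = sum(payoffs.values())
    return {t: v / total for t, v in payoffs.items()}

def todays_checkpoint(strategy):
    """Sample today's checkpoint location from the mixed strategy."""
    names = list(strategy)
    weights = [strategy[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

strategy = mixed_strategy(targets)
for day in range(1, 6):
    print(f"Day {day}: checkpoint at {todays_checkpoint(strategy)}")
```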

In 2009, following one such randomized screening, Los Angeles airport police uncovered a vehicle carrying five rifles, ten pistols, and a thousand rounds of ammunition.

More recent versions of the program are used by federal air marshals to schedule their flights and by port security teams to plan their patrols.

Today, Tambe's group uses deep learning algorithms to help wildlife conservation agents distinguish, in real time, between poachers and animals captured by infrared cameras on unmanned drones.

The Systematic Poacher Detector (SPOT) can identify poachers within three-tenths of a second of their appearance near animals.

SPOT was tested in Zimbabwe and Malawi park reserves before being deployed in Botswana.

PAWS, a successor technology that predicts poacher activity, has been deployed in Cambodia and may be used in more than fifty nations around the globe in the coming years.

Tambe's algorithms can also simulate population migrations and the spread of epidemic disease in order to improve the efficacy of public health campaigns.

These algorithms have uncovered several nonobvious patterns that can help improve disease management.
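
As a rough illustration of the epidemic side of such modeling, the sketch below runs a minimal discrete-time SIR (susceptible-infected-recovered) simulation; the transmission and recovery rates are invented, and this is not Tambe's actual model.

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """Advance a discrete-time SIR model by one day.

    beta is the transmission rate and gamma the recovery rate; both
    values are invented for illustration.
    """
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# Start with 1% of a normalized population infected.
s, i, r = 0.99, 0.01, 0.0
for _ in range(120):
    s, i, r = sir_step(s, i, r)
print(f"After 120 days: S={s:.2f}, I={i:.2f}, R={r:.2f}")
```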

Tambe's team created a third algorithm to help substance-abuse counselors divide addiction-recovery groups into smaller subgroups in which healthy social ties can flourish.
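
One generic way to pose that subgroup problem is community detection on the group's social-tie graph. The sketch below uses networkx's modularity-based community detection as a stand-in; the names and ties are invented, and Tambe's published work uses its own optimization formulation.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Invented social ties among members of a recovery group.
ties = [("Ana", "Ben"), ("Ben", "Cal"), ("Ana", "Cal"),
        ("Dee", "Eli"), ("Eli", "Fay"), ("Dee", "Fay"),
        ("Cal", "Dee")]  # one weak tie bridging the two clusters

G = nx.Graph(ties)
# Split the group so that most ties fall inside a subgroup.
for n, group in enumerate(greedy_modularity_communities(G), start=1):
    print(f"Subgroup {n}: {sorted(group)}")
```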

His group has also developed AI-based approaches to problems involving climate change, gang violence, HIV awareness, and counterterrorism.

Tambe is the Helen N. and Emmett H. Jones Professor of Engineering at the Viterbi School of Engineering of the University of Southern California (USC).

He is the cofounder and codirector of USC's Center for Artificial Intelligence in Society, and he has received several awards, including the John McCarthy Award and the Daniel H. Wagner Prize for Excellence in Operations Research Practice.

He has been named a Fellow of both the Association for the Advancement of Artificial Intelligence (AAAI) and the Association for Computing Machinery (ACM).

Tambe is the cofounder and director of research of Avata Intelligence, a company that sells artificial intelligence management software to help companies with data analysis and decision-making.

LAX, the US Coast Guard, the Transportation Security Administration, and the Federal Air Marshals Service all employ his methods.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Predictive Policing.



References & Further Reading:


Paruchuri, Praveen, Jonathan P. Pearce, Milind Tambe, Fernando Ordonez, and Sarit Kraus. 2008. Keep the Adversary Guessing: Agent Security by Policy Randomization. Riga, Latvia: VDM Verlag Dr. Müller.

Tambe, Milind. 2012. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge, UK: Cambridge University Press.

Tambe, Milind, and Eric Rice. 2018. Artificial Intelligence and Social Work. Cambridge, UK: Cambridge University Press.




Artificial Intelligence - Predictive Policing.

 





Predictive policing refers to proactive police techniques that rely on software-generated forecasts, particularly of high-risk places and times.

Since the late 2000s, these tactics have been increasingly adopted in the United States and in a number of other countries around the world.

Predictive policing has sparked heated debates about its legality and effectiveness.

Deterrence work in policing has always depended on some type of prediction.





Furthermore, from its inception in the late 1800s, criminology has included the study of trends in criminal behavior and the prediction of at-risk persons.

As early as the late 1920s, predictions were used in the criminal justice system.

Since the 1970s, an increased focus on geographical components of crime research, particularly spatial and environmental characteristics (such as street lighting and weather), has helped to establish crime mapping as a useful police tool.





Since the 1980s, proactive policing techniques have progressively used "hot-spot policing," which focuses police resources (particularly patrols) in regions where crime is most prevalent.

Predictive policing is sometimes misunderstood to mean that it prevents crime before it happens, as in the science fiction film Minority Report (2002).

What distinguishes predictive policing from conventional crime-analysis approaches is its reliance on predictive modeling software that statistically analyzes police data and/or applies machine-learning algorithms.





Perry et al. (2013) identified three sorts of predictions that such programs can make: 

(1) locations and times when crime is more likely to occur; 

(2) persons who are more likely to commit crimes; and 

(3) the identities of offenders and victims of crimes.


"Predictive policing," on the other hand, generally relates mainly to the first and second categories of predictions.






Two forms of modeling are available in predictive policing software tools.

Geospatial modeling shows when and where crimes are likely to occur (down to the area or even the block) and produces maps of crime "hot spots."
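
To make the geospatial form concrete, the sketch below bins incident coordinates into a fixed grid and ranks cells by count, which is the simplest possible hot-spot map; the coordinates are invented, and deployed tools layer temporal models on top of this kind of spatial binning.

```python
from collections import Counter

# Invented incident coordinates (x, y) within a city.
incidents = [(1.2, 3.4), (1.3, 3.5), (1.1, 3.6), (4.8, 0.9), (1.4, 3.3)]

CELL = 1.0  # grid cell size

def to_cell(x, y, size=CELL):
    """Map a coordinate onto its grid cell (simple spatial binning)."""
    return (int(x // size), int(y // size))

counts = Counter(to_cell(x, y) for x, y in incidents)
# The highest-count cells are the "hot spots" to be patrolled.
for cell, n in counts.most_common(2):
    print(f"Cell {cell}: {n} incidents")
```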

The second form is individual-based modeling: programs of this sort use variables such as age, criminal history, and gang involvement to estimate the likelihood that a person will become involved in criminal activity, particularly violent crime.

These forecasts are often made in conjunction with the adoption of proactive police measures (Ridgeway 2013).

Geospatial modeling naturally pairs with police patrols and restrictions in crime "hot spots."

With individual-based modeling, people assessed as being at high risk of becoming involved in criminal behavior are placed under observation or reported to the authorities.

Since the late 2000s, police agencies have increasingly adopted software tools from technology companies that help them generate predictions and implement predictive policing methods.

With the deployment of PredPol in 2011, the Santa Cruz Police Department became the first in the United States to employ such a strategy.





This software tool, which was inspired by earthquake aftershock prediction techniques, offers daily (and occasionally hourly) maps of "hot spots." It was initially restricted to property offenses but was subsequently expanded to cover violent crimes.
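
The aftershock analogy can be made precise with a self-exciting point process, in which each recorded crime temporarily raises the expected rate of further crimes nearby, just as an earthquake raises the short-term probability of aftershocks. Below is a minimal, time-only conditional intensity of this kind; the parameter values are invented, and PredPol's actual fitted model is proprietary.

```python
import math

def hawkes_intensity(t, past_events, mu=0.5, alpha=0.8, omega=1.0):
    """Conditional intensity of a self-exciting (Hawkes) point process.

    mu is the background crime rate; each past event at time ti adds a
    contribution that decays exponentially, mimicking aftershocks.
    All parameter values are invented for illustration.
    """
    excitation = sum(alpha * omega * math.exp(-omega * (t - ti))
                     for ti in past_events if ti < t)
    return mu + excitation

# Recent burglaries (event times in days) raise near-term risk.
events = [1.0, 2.5, 2.8]
print(f"Intensity at day 3:  {hawkes_intensity(3.0, events):.2f}")
print(f"Intensity at day 10: {hawkes_intensity(10.0, events):.2f}")
```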

More than sixty police agencies throughout the United States now employ PredPol.

In 2012, the New Orleans Police Department was one of the first to employ Palantir to perform predictive policing.

Since then, many more software programs have been created, including CrimeScan, which analyzes seasonal and weekday trends in addition to crime statistics, and HunchLab, which employs machine-learning techniques and incorporates weather patterns.

Some police agencies utilize software tools that enable individual-based modeling in addition to geographic modeling.

The Chicago Police Department, for example, has relied since 2013 on the Strategic Subject List (SSL), which is generated by an algorithm that assesses the likelihood of individuals being involved in a shooting, whether as perpetrators or victims.

Individuals with the highest risk scores are flagged for preventive police attention.
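
The SSL's actual inputs and weights were not public. Purely as an illustration of the individual-based scoring idea, a linear score over binary attributes might look like the sketch below, in which every feature name and weight is invented.

```python
# Invented feature weights; the real SSL's inputs and coefficients
# were never fully disclosed.
WEIGHTS = {
    "prior_arrests": 2.0,
    "gang_affiliation": 3.0,
    "prior_shooting_victim": 4.0,
    "age_under_25": 1.5,
}

def risk_score(person):
    """Sum the weights of the attributes present for this person."""
    return sum(w for feature, w in WEIGHTS.items() if person.get(feature))

person = {"prior_arrests": True, "prior_shooting_victim": True,
          "age_under_25": True, "gang_affiliation": False}
print(f"Risk score: {risk_score(person)}")  # 7.5 on this invented scale
```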




Predictive policing has been used in countries other than the United States.


PredPol was originally used in the United Kingdom in the early 2010s, and the Crime Anticipation System, which was first utilized in Amsterdam, was made accessible to all Dutch police departments in May 2017.

Several concerns have been raised about the accuracy of predictions produced by software algorithms employed in predictive policing.

Some argue that software systems are more objective than human crime analysts and can anticipate where crime will occur more accurately.

From this viewpoint, predictive policing may lead to a more efficient allocation of police resources (particularly patrols) and is cost-effective, especially when software replaces paid human crime analysts.

Critics counter that software forecasts embed systemic biases because they depend on police data that is itself heavily skewed, for two sorts of reasons.

First, crime records reflect law enforcement activity rather than criminal activity itself.

Arrests for marijuana possession, for example, provide information on the communities and people targeted by police in their anti-drug efforts.

Second, not all victims report crimes to the police, and not all crimes are documented in the same way.

Sexual crimes, child abuse, and domestic violence, for example, are generally underreported, and U.S. citizens are more likely than non-U.S. citizens to report a crime.

For all of these reasons, critics argue that the predictions produced by predictive policing software may simply reproduce prior policing patterns, resulting in a feedback loop: where the programs forecast greater criminal activity, policing becomes more active, which in turn produces more arrests.

To put it another way, predictive policing software may be better at predicting future policing than future criminal activity.
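
That feedback loop can be demonstrated with a toy simulation: two districts with identical true offense rates, one of which starts with more recorded arrests because of historically uneven enforcement. If patrols concentrate wherever the records point, the over-policed district's share of all records grows regardless of actual crime; every number below is invented.

```python
# Districts A and B have identical true offense rates, but B starts
# with more recorded arrests (a historically biased record).
recorded = [10, 20]          # past arrest counts for districts A, B
TRUE_OFFENSES = [50, 50]     # actual offenses per year: identical

for year in range(1, 6):
    hot = recorded.index(max(recorded))  # model flags the "hotter" district
    # Patrols concentrate there, so only its offenses enter the records.
    recorded[hot] += TRUE_OFFENSES[hot]
    share = recorded[1] / sum(recorded)
    print(f"Year {year}: district B holds {share:.0%} of all records")
```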

Others argue, furthermore, that predictive policing forecasts are racially biased, given that historical policing has been far from colorblind.

And since race and place of residence are intimately linked in the United States, the use of predictive policing may reinforce racial biases against nonwhite neighborhoods.

Evaluating the effectiveness of predictive policing is difficult, however, since doing so raises a number of methodological difficulties.

In fact, there is no statistical evidence that it benefits public safety more than earlier or alternative policing approaches.

Finally, some argue that predictive policing is ineffective at reducing crime because police patrols merely displace criminal activity rather than eliminating it.

Predictive policing has sparked several debates.

The constitutionality of predictive policing's implicitly preemptive action has been questioned, for example, since the hot-spot policing that commonly accompanies it may involve stop-and-frisks or the unjustified stopping, searching, and questioning of individuals.

Predictive policing also raises ethical concerns about how it may infringe on civil liberties, particularly the legal presumption of innocence.

Critics contend, for instance, that people on lists like the SSL should be able to contest their inclusion.

Furthermore, police agencies have been criticized for a lack of transparency about how they use their data, as have software firms for the opacity of their algorithms and predictive models.

Because of this opacity, individuals cannot learn why they are on lists like the SSL or why their neighborhoods are heavily monitored.

Members of civil rights groups are becoming more concerned about the use of predictive policing technologies.

In 2016, a coalition of seventeen organizations published Predictive Policing Today: A Shared Statement of Civil Rights Concerns, highlighting the technology's racial biases, lack of transparency, and other serious flaws that lead to injustice, particularly for people of color and nonwhite neighborhoods.

In June 2017, four journalists sued the Chicago Police Department under the Freedom of Information Act, demanding that the department disclose all information about the algorithm used to create the SSL.

While police departments are increasingly implementing software programs that predict crime, their use may decline in the future due to their mixed results in terms of public safety.

In 2018, two police agencies, in Kent (United Kingdom) and New Orleans (Louisiana), terminated their contracts with predictive policing software companies.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.








Artificial Intelligence - Person of Interest (2011–2016), The CBS Sci-Fi Series

 



From 2011 to 2016, the television series Person of Interest ran on CBS for five seasons.

Although its early episodes resembled a procedural crime drama, the story developed into science fiction that probed ethical questions around the development of artificial intelligence.

The show's central concept revolves around a surveillance system known as "The Machine," which billionaire Harold Finch, portrayed by Michael Emerson, built for the United States government.

The system was created primarily to prevent terrorist acts, but it evolved to the point where it can anticipate crimes before they happen.

Owing to its architecture, however, it discloses only the social security number of each "person of interest," who might turn out to be either the victim or the perpetrator.

Each episode typically centers on a single number that The Machine has produced.

Although the ensemble grows over the seasons, Finch initially recruits ex-CIA agent John Reese, portrayed by Jim Caviezel, to help him investigate these numbers and prevent the associated crimes.

Person of Interest is renowned for emphasizing and dramatizing ethical issues surrounding both the invention and deployment of artificial intelligence.

Season four, for example, delves deeply into how Finch constructed The Machine in the first place.

Flashbacks show that Finch took enormous pains to ensure that The Machine held the correct set of values before exposing it to real data.

As Finch strove to get the settings just right, viewers see exactly what can go wrong.

In one flashback, The Machine altered its own programming before lying about it.

When these failures arose, Finch deleted the faulty code, observing that The Machine would one day have unrivaled capabilities.

In one instance, The Machine responded by overriding its own deletion procedures and even attempting to kill Finch.

"I taught it how to think," Finch says as he reflects on the process.

All I have to do now is educate it how to be concerned." Finally, Finch is able to program The Machine successfully with the proper set of ideals, which includes the preservation of human life.

A second key ethical theme, which runs through seasons three through five, is the interaction of multiple artificial intelligences.

In season three, a competing AI surveillance system called Samaritan comes online.

Samaritan does not value human life the way The Machine does, and it causes enormous harm and turmoil in pursuit of its goals, which include preserving the United States' national security and ensuring its own survival.

As a result of their differences, Samaritan and The Machine find themselves at odds.

The Machine ultimately defeats Samaritan, even though the show suggests that Samaritan is the more powerful system because it is built on newer technology.

The show was largely a critical success; nevertheless, declining ratings led to its cancellation after a shortened, thirteen-episode fifth season.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Biometric Privacy and Security; Biometric Technology; Predictive Policing.



References & Further Reading:



McFarland, Melanie. 2016. “Person of Interest Comes to an End, but the Technology Central to the Story Will Keep Evolving.” Geek Wire, June 20, 2016. https://www.geekwire.com/2016/person-of-interest/.

Newitz, Annalee. 2016. “Person of Interest Remains One of the Smartest Shows about AI on Television.” Ars Technica, May 3, 2016. https://arstechnica.com/gaming/2016/05/person-of-interest-remains-one-of-the-smartest-shows-about-ai-on-television/.


