Improving access to healthcare and predicting outbreaks of disease: significant progress has already been made in detecting and preventing disease through the use of AI. AI is also used to expand access to healthcare in places where coverage is incomplete. In disease outbreaks, AI can flag early warning signs, allowing health officials to intervene before an outbreak spreads widely.
Making life easier for visually impaired people: Image recognition apps help people who are visually impaired to better navigate the internet and the real world.
Mitigating climate change, forecasting natural disasters and conserving wildlife: given the global impact of climate change, machine learning is being used to build more reliable climate models for scientists. AI is already used to evaluate climate models, forecast extreme weather events, and improve responses to natural disasters. AI is also useful for detecting and apprehending poachers and for tracking disease-carrying animals.
Making government services more effective and available: despite sometimes sluggish adoption of new technology, governments around the world, from the regional to the national level, are using AI to make public services more productive and usable, with an emphasis on smart cities. AI is also being used to allocate government resources and optimize budgets.
Perpetuating discrimination in criminal justice: some of the most widely documented cases of AI gone wrong have occurred in the criminal justice system. AI in this context is used in two main areas: risk scores that assess whether a defendant is likely to re-offend, informing decisions on probation and bail, and so-called “predictive policing,” which uses data from various sources to anticipate where or when crime will happen and guide law enforcement action accordingly.
In many cases, these tools are well-intentioned. Risk-scoring machine learning for defendants is marketed as eliminating the known human biases of judges in their sentencing and bail decisions. Similarly, predictive policing efforts aim to deploy often-limited police resources more effectively to prevent crime, though there is always a high risk of mission creep. In practice, however, the suggestions of these AI systems can reinforce the very biases they aim to combat, whether explicitly or through variables that serve as proxies for bias.

Assisting the dissemination of disinformation: AI can be used to generate and spread targeted misinformation, a problem compounded by AI-powered, engagement-driven social media algorithms that promote the content most likely to be clicked on. Machine learning allows social media companies to analyze user data and build profiles for targeted advertising. In addition, bots posing as real users spread disinformation beyond these targeted networks, both by sharing links to false sources and by communicating directly with users as chatbots using natural language processing. Finally, the specter of “deep fakes,” AI systems capable of creating authentic-looking video and audio recordings of real people, has led many to expect that the technology will be used to produce faked footage of world leaders for malicious purposes. Although deep fakes appear not yet to have been used in actual misinformation or disinformation campaigns, and faked audio and video are still not convincing enough to pass as fully real, the AI behind deep fakes continues to advance, and