Dr. Michael Best recently finished his role as the first director of the United Nations University Institute on Computing and Society, a position he held from 2015, in which he communicated the ethical and social implications of artificial intelligence to policymakers at the UN. He is an associate professor at the Georgia Institute of Technology, where he focuses on the role of computing technologies in social, economic, and political development. In his talk, Dr. Best discussed the potential of AI to advance society if we can avoid its many pitfalls, and the challenges that researchers and policymakers face in steering the technology away from a dystopian course.
The current Secretary-General of the UN, António Guterres, has a background in electrical engineering. However, most policymakers are intimidated by computing, AI in particular, and depend on those with technical backgrounds to guide their decision-making. Dr. Best recalls attending meetings where he was the only engineer in the room. The communication gap becomes apparent when trying to explain topics such as gradient descent to a group of regulators.
Due to the challenge of discussing AI with a non-technical audience, Dr. Best has developed a value-driven framework. He chose to frame the conversation around four topics: livelihood and work, diversity and discrimination, data privacy and protection, and peace and physical security. For example, a discussion about livelihood and work would contrast the potential rewards of economic growth and expanded leisure against competition and labor disruption as AI technologies expand. This value-driven approach empowers strategists to think holistically about the impacts of AI, instead of breaking up the technology into sectors such as transportation and security, which are actually interconnected.
Especially among older UN members, there is a tendency to lump AI together with past technological rises, such as the internet and telecommunications. Dr. Best, however, highlights three characteristics of this developing technology that could cause unprecedented impact. The first is the potential of a singularity, an event horizon beyond which we cannot fathom the capabilities of AI. The second is AI's ability to develop through recursive self-improvement, which poses both a threat and an opportunity. The third is the possibility of an ethical AI, where we equip the technology with moral agency. None of these prospects existed with the internet or telecommunications, which underscores the need to approach AI differently from other technologies.
Pervasive social issues with AI also arise when evaluating the disparity between the creators and users of AI. "AI design is done in a few places," Dr. Best said, "but its impact is global." AI technology is predominantly created by a handful of countries, notably the US and China. However, Dr. Best enumerated a number of AI organizations based in other countries, including FarmDrive, a Kenyan company using a machine learning–based approach to connect farmers to loans and financial management tools, and the Not Company, a Chilean startup using AI to develop plant-based substitutes for dairy and meat products. To ensure similar startups continue to develop across the world, policymakers must invest in education, infrastructure, and job opportunities. Representation in AI is also lacking along gender lines. AI researchers have asserted that the field has a "white guy problem" and estimated that female representation stands at around 13.5 percent. Dr. Best calculated that the gender breakdown among USC's own AI faculty is a mere 16 percent. When developing any technology, developers leave artifacts of their own biases, preconceptions, and thought patterns that may be hard to identify but raise many social concerns, and AI is no exception. Supporting gender and cultural diversity in the field is crucial to eliminating these artifacts.
Dr. Best also weighed the trade-off between explainability and performance when evaluating AI. Some researchers demand total explainability, which is impossible in fields such as deep learning. "We cannot go to an image-recognition algorithm and ask why it classifies an image as a cat," Dr. Best reasoned. Other researchers, unwilling to give up performance, wish to relax these standards. They point out that sacrificing performance would reduce the capabilities of AI to diagnose disease, identify causes of climate change, and individualize our education system. In their words, demanding explainability would make artificial intelligence "artificially stupid." One compromise would be to demand auditability instead, ensuring that AI algorithms meet fairness standards we set beforehand. For example, we could require that an AI show no bias across race, gender, LGBTQ status, and socioeconomic status.
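To make the idea of auditability concrete, here is a minimal, hypothetical sketch of one such check: demographic parity, which asks whether a model's positive-decision rate is similar across groups. The function name, the tolerance threshold, and the toy data are all illustrative assumptions, not part of Dr. Best's framework.

```python
# Hypothetical auditability check: demographic parity.
# Given a model's binary decisions and a protected attribute for each
# case, compare positive-decision rates across groups and flag any
# disparity larger than a chosen tolerance (an assumed policy value).

def audit_demographic_parity(decisions, groups, tolerance=0.1):
    """Return (passed, rates): rates maps each group to its
    positive-decision rate; passed is True if the largest gap
    between any two groups is within the tolerance."""
    counts = {}
    for decision, group in zip(decisions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + decision)
    rates = {g: pos / tot for g, (tot, pos) in counts.items()}
    gap = max(rates.values()) - min(rates.values())
    return gap <= tolerance, rates

# Toy example: group "a" is approved 2/3 of the time, group "b" 1/3,
# so the audit fails at a 0.1 tolerance.
passed, rates = audit_demographic_parity(
    decisions=[1, 1, 0, 1, 0, 0],
    groups=["a", "a", "a", "b", "b", "b"],
)
```

A real audit would use standards agreed on in advance, cover multiple fairness definitions, and examine intersections of attributes, but the point stands: an auditor needs only the model's inputs and outputs, not an explanation of its internals.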
Overall, Dr. Best's work at the UN has been to lay out a value-driven framework for AI that attempts to balance the risks and rewards of this growing technology. His own research focuses on monitoring social media during critical events, such as the 2016 Ghana national elections, to enable real-time responses ranging from delivering additional ballots to polling stations to investigating a bomb threat.