Artificial Intelligence (AI) is going to have a huge impact—in fact, it already has. Now is the time to make sure it has the right impact: a positive and inclusive one. How do we get there? Retraining, standards, and a concentrated effort to include diverse voices and viewpoints.
AI is being called “the new electricity,” and the global economic impact of AI applications is expected to reach $2.95 trillion by 2025. The effects of AI won’t simply be concentrated among the companies developing the technology or among the tech-savvy, however; the impacts will reach almost everyone. We’re already seeing AI incorporated into areas like medical diagnosis, chatbots and AI personal assistants, self-driving cars, and language translation. A recent Gallup and Northeastern University survey showed that nearly 9 out of 10 Americans use AI in some form, whether through connected home devices like thermostats, navigation services like Google Maps, or video streaming services that provide automated content recommendations. While AI advancements hold incredible promise for positive societal benefit, the true impacts of these systems are up to us to shape. The train has left the station, and it’s moving fast.
So what are some of the outcomes we can expect to see from the widespread adoption of AI? Two critical questions dominate the AI zeitgeist: 1) What will happen to jobs that are changed or disrupted by automation and AI? And 2) What will AI be used for, and how will it be used? These two conversations often get conflated, and it’s important to look at them separately.
Disruption
We know AI and automation will impact jobs and the economy, but no one can agree on what to expect. A January 2018 article in MIT Technology Review looked at 19 studies about automation-fueled job change from groups like McKinsey and Gartner and found that the predictions were all over the map. Despite that uncertainty, we do know a few things with relative confidence. We know that many jobs will not be replaced by automation but will instead shift as they incorporate it (e.g., spreadsheets aid, but don’t replace, accountants). We also know that some jobs are more likely than others to be displaced by automation; think of taxi drivers and self-driving cars. Now is the time to proactively invest in retraining, apprenticeships, and upskilling those most at risk of having their jobs automated. Some of this work is already starting to happen, through national initiatives like TechHire and corporate retraining initiatives like AT&T’s new investment, IBM’s P-Tech, or Capital One’s Tech College. Workforce development initiatives, employers, and educators must proactively invest in and scale these efforts to mitigate the potential negative impacts of widespread job change.
Deployment
The second thread in the AI impacts conversation is deployment: how can we ensure that this important and increasingly ubiquitous technology is used responsibly and ethically? Because AI systems are only as fair and accurate as the data they’re trained on, we’re already seeing human and systemic bias reflected in AI products. This is especially problematic when you take into account the overwhelming racial, gender, and educational homogeneity of the field. A recent survey found that only 13% of CEOs at AI companies are women, and the numbers for racial minorities of any gender are even worse. Examples of bias are cropping up everywhere, from facial recognition software that can only reliably recognize white male faces to sentencing software that erroneously predicts that black males are more likely to reoffend than their white counterparts.
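To make that point concrete, here is a minimal, purely illustrative sketch of how a model trained on data that under-represents one group can end up noticeably less accurate for that group. The features, group sizes, and thresholds below are synthetic assumptions chosen only to demonstrate the effect; they are not drawn from any real product or dataset.

```python
# Illustrative only: synthetic data showing how under-representation in
# training data can translate into worse accuracy for the smaller group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic features; the "correct" decision boundary differs by group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is badly under-represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(200, shift=1.5)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples: accuracy is typically much lower for the group
# the model saw far less of during training.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

In this toy setup the model simply never sees enough of group B’s data to learn a boundary that works for it, and that is a basic dynamic behind disparities like the facial recognition and sentencing examples above.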
What can be done to minimize the disparate impacts of the same software on different groups? First, we need to ensure that a greater diversity of voices is reflected in the AI field, in technical roles, in leadership roles, and in humanities and social science roles. By broadening access to the field, we can ensure that AI is being used to its fullest potential. When the field includes people who don’t fit into that existing homogeneous mold, we see new lines of inquiry open up, new questions addressed, and more useful products created.
In addition to pushing for greater diversity in the field, we must also create and adopt ethical standards and training for the development and use of AI. Though no single set of ethical standards has yet been universally agreed upon, several groups are working on or advocating for them, including the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; Fairness, Accountability, and Transparency in Machine Learning; the Partnership on AI; and AI Now. To craft the most broadly useful standards and policy, we need to take an interdisciplinary, human-centered approach, calling on people from a variety of disciplines to contribute to the responsible development and regulation of AI.
What next?
Earlier, I mentioned that the AI train has left the station and is moving fast. While that’s true, it’s still early: we still have time to address the key challenges of disruption and deployment. I believe that AI can truly be used to solve some of the big challenges facing the world today. I will be speaking about the great potential of AI at the upcoming IEEE Women in Engineering International Leadership Conference. It is my hope that we can bring the right people to the table to join in the discussion so that we don’t miss out on the AI opportunity.