Artificial intelligence (AI) is going to have a huge impact—in fact, it already has. And now is the time to make sure it has the right impact; a positive and inclusive one. How do we get there? With retraining, high standards, and a concentrated effort to include diverse voices and viewpoints.

AI is being called “the new electricity,” and the global economic impact of AI applications is expected to reach $2.95 trillion by 2025. The effects of AI won’t simply be concentrated among the companies developing the technology or among the tech savvy, however. The impacts will reach almost everyone. We’re already seeing AI incorporated across fields and activities as varied as medical diagnosis, chatbots and AI personal assistants, self-driving cars, and language translation, to name a few. A recent Gallup and Northeastern University survey showed that nearly 9 out of 10 Americans use AI in some form, whether it’s through connected home devices like thermostats, navigation services like Google Maps, or video streaming services that provide automated content recommendations. While AI advancements hold incredible promise for positive societal benefit, the true impacts of these systems are up to us to shape.

The train has left the station, and it’s moving fast.

What are some of the outcomes we can expect to see from the widespread adoption of AI? There are two critical questions dominating the AI zeitgeist: (1) what will happen to jobs that are changed or disrupted by automation and AI, and (2) what will AI be used for and how will it be used? These conversations often get conflated, but it’s important to look at them separately.

Disruption

We know AI and automation will impact jobs and the economy, but no one can agree on what to expect. A January 2018 article in MIT Technology Review looked at 19 studies about automation-fueled job change from research institutes like McKinsey and Gartner, and found that the predictions were all over the map. Despite this uncertainty, there are a few things we do know. We know, for example, that many jobs will not be replaced by automation but instead will change through incorporating automation (e.g., spreadsheets aid, but don’t replace, accountants). We also know that some jobs are more likely than others to be displaced by automation; think about taxi drivers and self-driving cars. So now is the time to proactively invest in the retraining and upskilling of those whose jobs are most at risk from automation. Some of this work is already starting to happen through U.S. national initiatives like TechHire and corporate retraining initiatives like AT&T’s new investment, IBM’s P-Tech, and Capital One’s Tech College. Employers and educators must proactively invest in workforce development initiatives such as these, and scale these efforts to mitigate the potential negative impacts of widespread job change.



Deployment

The second thread in the AI impacts conversation is deployment: how can we ensure that this important and increasingly ubiquitous technology is used responsibly and ethically? Because AI systems are only as fair and accurate as the data they’re trained on, we’re already seeing human and systemic bias reflected in AI products. This is especially problematic when you take into account the overwhelming racial, gender, and educational homogeneity of the field. A recent survey found that only 13% of CEOs at AI companies are women, and the numbers for racial minorities of any gender are even worse. Examples of bias are cropping up everywhere—from facial recognition software that can only reliably recognize white male faces to sentencing software that erroneously predicts that black males are more likely to reoffend than their white counterparts.

We need to think about what can be done to minimize the disparate impacts of the same software on different groups. Importantly, we need to ensure that a greater diversity of voices is reflected in the AI field: in technical roles, leadership roles, and in humanities and social science roles. By broadening access to the field, we can ensure that AI is being used to its fullest potential. When the field includes people who don’t fit into that existing homogeneous set, we will see new lines of inquiry opening up, new questions being addressed, and more useful products being created.
In addition to pushing for greater diversity, we must also create and adopt ethical standards governing the development and use of AI. Though no single set of ethics or standards has yet been universally adopted, there are groups working on or advocating for ethical standards in this field, including the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; Fairness, Accountability, and Transparency in Machine Learning; the Partnership on AI; and AI Now. In order to make the most broadly useful standards and policy, we need to take an interdisciplinary and human-centered approach, calling on people from a variety of disciplines to contribute to the responsible development and regulation of AI.

What next?

Earlier, I mentioned that the AI train has left the station and is moving fast. While this is true, it’s still early. We still have time to address the key challenges of disruption and deployment. I believe that AI can truly be used to solve some of the big challenges facing the world today. I will be speaking about the great potential of AI at the upcoming IEEE Women in Engineering International Leadership conference. It is my hope that we can bring the right people to the table to join in the discussion so that we don’t miss out on the AI opportunity.