Artificial Intelligence (AI) is going to have a huge impact—in fact, it already has. Now is the time to make sure it has the right impact: a positive and inclusive one. How do we get there? Retraining, standards, and a concentrated effort to include diverse voices and viewpoints.

AI is being called “the new electricity” and the global economic impact of AI applications is expected to reach $2.95 trillion by 2025. The effects of AI won’t simply be concentrated among the companies developing the technology or among the tech savvy, however. The impacts will reach almost everyone. We’re already seeing AI incorporated into areas like medical diagnosis, chatbots and AI personal assistants, self-driving cars, and language translation. A recent Gallup and Northeastern University survey showed that nearly 9 out of 10 Americans use AI in some form–whether it’s through connected home devices like thermostats, through navigation services like Google Maps, or through video streaming services that provide automated content recommendations. While AI advancements hold incredible promise for positive societal benefit, the true impacts of these systems are up to us to shape. The train has left the station, and it’s moving fast.

So what are some of the outcomes we can expect to see from the widespread adoption of AI? Two critical questions dominate the AI zeitgeist: 1) What will happen to jobs that are changed or disrupted by automation and AI? And 2) What will AI be used for, and how will it be used? These two conversations often get conflated, and it's important to look at them separately.


“Now is the time to proactively invest in retraining, apprenticeship and upskilling those most at risk for having their jobs automated.”

We know AI and automation will impact jobs and the economy, but no one can agree on what to expect. A January 2018 article in MIT Technology Review looked at 19 studies about automation-fueled job change from groups like McKinsey and Gartner and found that the predictions were all over the map. Despite this uncertainty, we do know a few things with relative confidence. We know that many jobs will not be replaced by automation, but instead will shift as they incorporate it (e.g., spreadsheets aid, but don't replace, accountants). We also know that some jobs are more likely than others to be displaced by automation: think of taxi drivers and self-driving cars. Now is the time to proactively invest in retraining, apprenticeship, and upskilling those most at risk of having their jobs automated. Some of this work is already starting to happen, through national initiatives like TechHire or corporate retraining initiatives like AT&T's new investment, IBM's P-Tech, or Capital One's Tech College. Workforce development initiatives, employers, and educators must proactively invest in and scale these efforts to mitigate the potential negative impacts of widespread job change.


The second thread in the AI impacts conversation is deployment: how can we ensure that this important and increasingly ubiquitous technology is used responsibly and ethically? Because AI systems are only as fair and accurate as the data they're trained on, we're already seeing human and systemic bias reflected in AI products. This is especially problematic when you take into account the overwhelming racial, gender, and educational homogeneity of the field. A recent survey found that only 13% of CEOs at AI companies are women, and the numbers for racial minorities of any gender are even worse. Examples of bias are cropping up everywhere, from facial recognition software that can only reliably recognize white male faces to sentencing software that erroneously predicts that black males are more likely to reoffend than their white counterparts.

What can be done to minimize the disparate impacts of the same software on different groups? First and foremost, we need to ensure that a greater diversity of voices is reflected in the AI field, across technical roles, leadership roles, and humanities and social science roles. By broadening access to the field, we can ensure that AI is being used to its fullest potential. When the field includes people who don't fit the existing homogeneous mold, we see new lines of inquiry open up, new questions addressed, and more useful products created.

In addition to pushing for greater diversity in the field, we must also create and adopt ethical standards and training for the development and use of AI. Though no single set of ethics or standards has been universally adopted, several groups are working on or advocating for ethical standards, including the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; Fairness, Accountability, and Transparency in Machine Learning; the Partnership on AI; and AI Now. To make the most broadly useful standards and policy, we need to take an interdisciplinary and human-centered approach, calling on people from a variety of disciplines to contribute to the responsible development and regulation of AI.

What next?

Earlier, I mentioned that the AI train has left the station and it’s moving fast. While this is true, it’s still early. We still have time to address some of the key challenges of displacement and deployment. I believe that AI can truly be used to solve some of the big challenges facing the world today. I will be speaking about the great potential of AI at the upcoming IEEE Women in Engineering International Leadership conference. It is my hope that we can bring the right people to the table to join in the discussion so that we don’t miss out on the AI opportunity.

Tess Posner is a social entrepreneur focused on increasing equity and inclusion in the tech economy. As Executive Director of AI4ALL, she is creating pathways for underrepresented K-12 students to chart a future in the artificial intelligence field through education programs at top universities across North America. Before joining AI4ALL, she was Managing Director of TechHire at Opportunity@Work, a national initiative launched out of the White House to increase diversity and inclusion in the tech economy, where she oversaw the network of 72 cities, states, and rural areas and 1,300+ companies implementing the TechHire model. Prior to joining Opportunity@Work, Tess built and ran Samaschool, a social enterprise within the SamaGroup that equips low-income people to find work in the digital economy. Tess grew the program from a pilot in San Francisco in 2013 to international adoption, now with over a dozen locations from New York City to rural Arkansas and East Africa, and launched an online training platform now serving 15,000 students in 70+ countries. Samaschool's approach to giving low-income populations 21st-century skills to tap into the digital economy led it to be named one of the most innovative schools in the world by Business Insider, to be featured in TechCrunch and Fast Company, and to receive funding from organizations such as the Tipping Point Community, JPMorgan Chase, the California Endowment, and the Robin Hood Foundation. Prior to Samaschool, Tess led the employment and education programs at First Place for Youth, a nationally recognized model that helps foster youth find housing, get their first job, and stay in school. Before First Place, she managed programs that taught competitive debate as a literacy and empowerment tool in underserved public schools in New York City. Tess holds a master's degree in Social Enterprise Administration from Columbia University and a bachelor's in liberal arts.

