There’s a lot of excitement in the air these days about Artificial Intelligence, along with a lot of speculation about who will be impacted in the workplace and by how much.
We take it for granted that AI is in the process of transforming manufacturing along with many forms of blue-collar work.
What about white-collar work, such as psychotherapy? In fact, we have already seen AIs taking over some of the work in such white-collar fields as journalism, education, and the law, among others. The current consensus seems to be that AI will not totally replace human workers in white-collar jobs, at least not in the short term. The more likely scenario, which we are already witnessing, is that AI will be used to create tools that assist white-collar workers by taking away the drudgery of repetitive tasks. Not only can AI handle repetitive tasks, but it can do so at superhuman speed. By using these sorts of intelligent assistants, white-collar workers can work more efficiently, freeing them to do more work of the high-level thinking variety. Rather than being replaced, human beings are freed up to do the parts where humans excel: insight, empathy, creativity, and contextual understanding.
There is so much enthusiasm about the promise of AI that it is being hyped and oversold. We can expect a major marketing assault from a variety of companies peddling products and services. Much like we’ve seen with soap powders and toothpaste, I predict we will be seeing “New and improved! Contains AI!” Buyer beware of rampant puffery.
AI does have its talents, however, and one of the most useful so far is pattern recognition. When large data sets are fed into an AI system, it is sometimes able to recognize patterns with greater accuracy than human experts. One example I've written about previously: an AI that was provided with a large number of photos of possible skin cancers has been shown to outperform human experts in recognizing those that are most likely to be deadly. And speaking of deadly, there is a recent report that Google has developed an AI that can predict with 95% accuracy when a patient in the hospital will die.
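For technically inclined readers, here is what that kind of pattern recognition looks like under the hood: a classifier is fit to a large set of labeled examples and then judged on cases it has never seen. This is only a minimal sketch in Python using the scikit-learn library, with randomly generated stand-in data rather than the actual skin-cancer or hospital datasets described above.

```python
# Minimal sketch of supervised pattern recognition, the technique behind
# systems like the skin-cancer classifier mentioned above.
# The data here is a synthetic stand-in, not real patient data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))                # 1,000 cases, 32 features each
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels: 1 = "deadly"

# Hold out 20% of cases so accuracy is measured on unseen examples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                    # learn patterns from labeled cases

preds = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, preds):.2f}")
```

The held-out test set is exactly what gives the accuracy figures quoted in these studies their meaning: the system is scored on cases it has never seen, which is what makes comparisons against human experts fair.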
Two examples that are more in the mental health domain relate to schizophrenia and depression. According to a post at IBM.com, “IBM scientists collaborated with researchers at the University of Alberta and the IBM Alberta Centre for Advanced Studies (CAS) to publish new research regarding the use of AI and machine learning algorithms to predict instances of schizophrenia with a 74 percent accuracy.” They go on to predict, “‘computational psychiatry’ can be used to help clinicians more quickly assess – and therefore treat – patients with schizophrenia.”
TechEmergence.com notes, “Depression is a leading mental disorder impacting about 16 million Americans. According to the World Health Organization, the annual global economic impact of depression is estimated at $1 trillion and is projected to be the leading cause of disability by 2020.”
The University of Texas at Austin is using a supercomputer called Stampede to develop a machine-learning algorithm to detect depression. According to DigitalJournal.com, “The program is intended to identify commonalities among patients using Magnetic Resonance Imaging (MRI) brain scans, genomics data and other factors in order to make predictions of risk for those with depression and anxiety. From the analysis of hundreds of patient data inputs (taken from 52 treatment-seeking participants with depression, and 45 healthy control participants), the researchers have successfully classified individuals with major depressive disorder with a 75 percent accuracy. This provides the basis for a workable diagnostic tool.”
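For the curious, here is how a figure like that “75 percent accuracy” is typically produced: the patients-versus-controls labels and extracted features (from MRI, genomics, and so on) are run through cross-validation, so every participant is scored by a model that never trained on them. This is a hedged sketch with synthetic stand-in numbers, not the UT Austin team's actual pipeline or data.

```python
# Sketch of cross-validated diagnostic classification, the kind of analysis
# behind the quoted 75 percent accuracy figure. Features are random
# stand-ins for MRI and genomic measures; group sizes echo the quoted study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_controls, n_features = 52, 45, 20
X = rng.normal(size=(n_patients + n_controls, n_features))
y = np.array([1] * n_patients + [0] * n_controls)  # 1 = depression, 0 = control
X[y == 1] += 0.4                                   # inject a synthetic group difference

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)          # 5-fold cross-validation
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

Cross-validation matters here because with fewer than a hundred participants, a model scored on its own training data would look far more accurate than it really is.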
What sorts of incursions into the practice of psychotherapy can we expect?
These reflections were triggered, in part, by my recent Shrink Rap Radio interview with Silja Litvin (http://tinyurl.com/yc4kmx5k). Silja is a UK psychologist who is also founder and CEO of PsycApps. Her company’s first product is eQuoo, pronounced “E.Q.” because of Litvin’s conviction that our emotional I.Q. is actually more important for success in today’s world than our intellectual I.Q. I think it clearly could be marketed as a “therapy” app. Interestingly, at this point they’re NOT marketing it as a therapy app, but rather as an “Emotional Fitness Game” for mobile devices, to widen its acceptance and its potential audience. Unfortunately, the word “therapy” still carries a certain stigma for the general population, so “emotional fitness” education is more palatable. Either way, they’ve succeeded in making it educational yet compelling through a mix of storytelling, gamification, and animation. In fact, one of their goals was to overcome the major drawback of competitors in this category, which is that people tend to drop out too soon. The app leads users painlessly through evidence-based information drawn from positive psychology, cognitive-behavioral therapy, couples therapy, brain research, Big Five personality theory, and other psychological approaches that build resilience, optimism, and hope. Having spent time “playing” the game myself, I think eQuoo spearheads a revolution in mental health apps that eventually will incorporate AI and machine learning.
Right now, Litvin is focusing on apps/games that help users to help themselves. It’s clear to me, however, that eQuoo could already be a leader in games/apps that are used in tandem with traditional person-to-person therapies, e.g., as homework between sessions.
Silja Litvin made me aware of another phenomenon that is more or less in this same space: the emergence of social media apps and websites designed to put mental health sufferers in touch with others who share their particular condition. For example, MIT researcher Robert Morris wanted a faster and less expensive way to improve mental health than what he’d seen in his experience with traditional therapy. For his doctoral dissertation, he created the temporary Panoply site, where people could post their problems and other users could reframe them in less condemning ways. The site is no longer open to the general public, but it led to a social networking app called Koko. According to the Huffington Post,
Koko operates just like any other social networking app in which you can post statuses and respond to other users’ content. The difference lies in what comes after you publish what’s on your mind. App users see your post and use a research-backed technique called “reframing” to make you think about an anxiety in a new way… “Reframing is all about changing how we think to change how we feel. When we’re stressed, we often become our worst enemy. We tell ourselves we can’t do it.”
In my interview with Silja, she also made reference to the growing number of meditation apps such as Calm, Headspace, Aura, and 10% Happier. While, strictly speaking, these are not therapy apps per se, they certainly can be considered adjunctive. It’s clear that we will see more and more apps that serve an adjunctive role.
This raises the question, however, as to when or whether an AI will have the sort of general intelligence to actually stand in for a psychotherapist in a truly convincing way. I believe it’s unlikely to happen in our lifetimes, if ever. The problem is that AIs don’t have a wide enough understanding of the world. They can be very smart in defined contexts, but they don’t have the breadth of experience or the understanding of nuance that human beings possess. Amazon, in its quest to push that envelope, is offering a $3.5 million prize to developers in its Alexa Prize competition to build an AI that can chat like a human (https://www.theverge.com/2018/6/13/17453994/amazon-alexa-prize-2018-competition-conversational-ai-chatbots). The steepness of the challenge is revealed in the mistakes the AIs make. For example, one chatbot in a conversation about Christmas said, “You know what I realized the other day? Santa Claus is the most elaborate lie ever told.” At one level, that’s an amazing and provocative statement for a machine to make, but imagine what a spoiler it would be if it were holding this conversation with a small child. The machine simply doesn’t have enough real-world input (experience) to have the wisdom to anticipate how inappropriate that statement might be.
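To make the context problem concrete, consider how shallow a simple conversational system can be. The toy chatbot below (purely illustrative, in Python; it is not how any actual Alexa Prize entry works) maps keywords to canned replies with no model whatsoever of who is listening:

```python
# Toy keyword-matching chatbot, illustrating the context blindness
# discussed above. Purely illustrative; real conversational AIs are far
# more sophisticated, but the underlying problem is the same.
RESPONSES = {
    "christmas": "You know what I realized? Santa Claus is the most elaborate lie ever told.",
    "stressed": "Reframing can help: what would you tell a friend in your place?",
}
DEFAULT = "Tell me more."

def reply(utterance: str) -> str:
    text = utterance.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response  # no idea whether the listener is a small child
    return DEFAULT

print(reply("I can't wait for Christmas!"))
```

Nothing in that lookup table knows, or can know, who is on the other end of the conversation, and that, in miniature, is the wisdom gap I’m describing.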
So I think all you psychoanalysts and your ilk can rest easy for some time.
Related:
- The Promise and Perils of AI
- The Impact of Machines on Jobs
- Can We Create Consciousness In A Machine?
- Learning to see: New artificial intelligence technique dramatically improves the quality of medical imaging