Artificial intelligence is finding a place in nearly every aspect of human life, from manufacturing to driving vehicles. It only makes sense that it would eventually move into healthcare. Here, AI shines in applications for diagnosis and data analysis, allowing doctors to confirm diagnoses and to spot behaviors they might otherwise have missed.
Advancements like computer vision, natural language processing, and facial recognition increasingly allow algorithms to diagnose problems, flag issues for medical review, and analyze datasets to reveal patterns that might otherwise not be visible. This is the same family of technology that allows Google’s DeepMind to play computer games like StarCraft, helps researchers identify calcifications in medical images, and supports pathology diagnosis. Most AI algorithms achieve results by reviewing datasets to find patterns and classify data based on those patterns. This can range from oncological data, like the calcification of tumors, to broken speech patterns in persons with schizophrenia. These advances are not yet ready to replace mental healthcare, but they are nearing the point where they could help mental health professionals offer earlier, better, and stronger diagnoses, plus additional support.
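As a simplified illustration of this pattern-classification idea (not any particular product's method), an algorithm can learn the "average pattern" of each class from labeled examples and then assign new data to the closest pattern. The features, labels, and numbers below are invented purely for illustration:

```python
from math import dist

# Toy nearest-centroid classifier: learn the average pattern of each
# class from labeled examples, then classify new points by proximity.
# All feature values and labels here are invented for illustration.

def fit_centroids(examples):
    """examples: list of (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Return the label whose centroid is closest to the features."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Hypothetical two-feature training data, e.g. (density, size) of a lesion.
training = [
    ([0.9, 0.8], "suspicious"),
    ([0.8, 0.9], "suspicious"),
    ([0.1, 0.2], "benign"),
    ([0.2, 0.1], "benign"),
]
centroids = fit_centroids(training)
print(classify(centroids, [0.85, 0.75]))  # → suspicious
```

Real diagnostic systems use far richer models, but the core loop is the same: summarize labeled data into patterns, then match new cases against them.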
Chatbots and AI Apps in Mental Health
AI-based chatbots are quickly becoming popular in mental health apps. Individuals talk to an AI, sharing information, and can engage with the bot and receive help whenever needed, with no wait time. The AI either serves as a stand-in until the user can talk with a licensed counselor or therapist, or flags conversations showing that the individual needs more help. Other applications automatically track user data, recommend information, and offer a tentative diagnosis based on live biometric sensor data, including heart rate, sleep, and stress levels.
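The flagging step can be pictured with a minimal triage sketch. The risk terms and threshold below are hypothetical and far cruder than any real app's model; they only show the shape of the logic:

```python
# Minimal sketch of a chatbot's escalation check: scan each user message
# for risk indicators and flag the conversation for human review when
# enough of them appear. Keywords and threshold are hypothetical.

RISK_TERMS = {"hopeless", "worthless", "can't go on", "hurt myself"}

def needs_escalation(messages, threshold=2):
    """Return True if the conversation should be routed to a human."""
    hits = sum(
        term in message.lower()
        for message in messages
        for term in RISK_TERMS
    )
    return hits >= threshold

conversation = [
    "I feel hopeless lately",
    "Like I'm worthless and nothing helps",
]
print(needs_escalation(conversation))  # → True
```

Production systems replace the keyword list with trained language models, but the design decision is the same: the bot handles routine support and hands risky conversations to a person.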
Woebot Health is one app that offers a chatbot to help users manage their mental health. The app combines AI with principles from cognitive behavioral therapy to deliver a self-help tool designed to help users relax and help themselves. Others, like Quartet, flag health risks based on short conversations, directing individuals to seek professional assistance if they show warning signs of depression.
Tools by companies like Babylon Health dominate this space, with a chatbot offering assistance with mental and physical health symptoms. The chatbot talks with users to work through symptoms and offer a tentative diagnosis before forwarding all information to a doctor. Babylon Health was trialed by 21,500 people in hospitals throughout the UK in 2015, effectively reducing wait times and diagnosis times. Babylon Health straddles the line between the two major offerings in mental health: chatbot support and diagnosis.
AI Mental Health Diagnosis
Human doctors are often ill-equipped to diagnose mental health problems in a single sitting. Most require in-depth contact with the patient, an understanding of their personal and medical history, and multiple touchpoints before a reasonable diagnosis can be given. AI is very likely to play a strong role in the future of mental health diagnosis, because machines can track and recognize patterns in human behavior and emotion.
Today, numerous projects are realizing this kind of diagnosis, ranging from the World Well-Being Project to work at Vanderbilt University. Some of these applications are more effective at specific diagnoses than human doctors. For example, algorithms designed to predict self-harm and suicide attempts were more accurate than most doctors working without a significantly developed client relationship, predicting attempts within a 7-day window with 85% accuracy. Another algorithm successfully predicts psychosis in high-risk youth based on simple speech analysis, and the same technology identifies psychosis and schizophrenia with remarkable accuracy in adult patients. Speech analysis also helps doctors predict the risk of suicide in new patients, with higher accuracy than the doctors themselves. A related project, funded by the World Well-Being Project, identifies depression-associated language patterns.
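To make "depression-associated language patterns" concrete, here is a deliberately simple sketch of one signal family reported in this line of research: elevated use of first-person pronouns and absolutist words. The word lists, scoring rule, and sample sentence are illustrative assumptions, not any project's actual model:

```python
import re

# Sketch of language-pattern screening: score a transcript by how often
# it uses first-person pronouns and absolutist words. The word lists
# and scoring rule are illustrative, not a real clinical model.

FIRST_PERSON = {"i", "me", "my", "myself"}
ABSOLUTIST = {"always", "never", "completely", "nothing", "everything"}

def language_risk_score(transcript):
    """Fraction of words that fall into either signal family."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    hits = sum(w in FIRST_PERSON or w in ABSOLUTIST for w in words)
    return hits / len(words)

sample = "I always feel like nothing I do matters and I never get it right"
print(round(language_risk_score(sample), 2))  # → 0.43
```

A real system would feed thousands of such features into a trained model and validate against clinical outcomes; the sketch only shows why transcribed speech is machine-readable evidence at all.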
While AI, including apps, chatbots, and diagnostic tooling, shows great promise, it is not yet in widespread use. There are several reasons why, including long-term accuracy, patient privacy, and a lack of data sharing.
Accuracy – Studies show that algorithms are more accurate at diagnosing mental illness than most doctors. At the same time, most studies have been relatively small scale. Further research is needed to ensure accuracy over the long term, including in patients with multiple diagnoses.
Patient Privacy – AI needs data to work. This means collecting personal data and analyzing it. In some cases, AI asks users to wear fitness bracelets or trackers to collect heart rate data, sleep data, and other information.
Data Sharing – Most medical institutions are notoriously strict about sharing data, meaning one medical organization will be very reluctant to share data with another. These systems are designed to protect patient privacy. Yet, according to Johns Hopkins University, 10% of all preventable patient deaths are linked to poor communication between medical organizations. Data sharing is essential to the success of AI diagnosis and treatment, but nearly impossible within the current climate.
Essentially, to adopt AI for mental healthcare, existing algorithms have to pass more rigorous testing and refinement. Patients also have to individually opt in, making decisions about how much data they want to share and when. Long-term predictive analytics could be built into mobile phones and fitness devices, or opted into separately by users. These could flag signs of mental illness and alert the individual’s GP or psychologist before issues become major. Predictive diagnosis could also help doctors offer early diagnosis before symptoms start to show, without invasive long-term monitoring. Most importantly, AI could eventually enhance the quality of existing diagnostics by adding AI analysis to sessions. For now, these applications remain in development, but they are not far off.
While AI is relatively new to the field of mental health, it can offer a great deal. Most of us can already access some AI in healthcare through online apps and tools. These can be helpful, but they do not replace the assistance or insight of a human psychologist. Some can and do aid medical professionals in offering diagnosis, in functioning as first line support, and in offering more information than doctors can collect alone. All of this is important and can prove to be valuable.
At the same time, AI is ultimately just a tool. AI may never replace the insight of human cognition. AI uses datasets for predictive analysis, and if given the wrong dataset, the prediction will be wrong. It’s crucial that human involvement continue to be a factor in diagnosis and especially treatment. This means AI will likely never be a standalone form of treatment for mental illness.
Today, you cannot seek out AI-led diagnosis or AI-led treatment. But human empathy, concern, and connection are often crucial factors in recovery and motivation anyway. Connecting with people, including peers, therapists, and other staff, remains a critical part of recovery, of learning to participate in positive social behavior, and of learning positive engagement with others. AI will eventually contribute to ensuring accuracy and to predicting the signals you are showing, but for now, old-fashioned human-led treatment remains highly effective as well.