Big Data in Psychiatry – Brave New World?
Every day, in the daily press as well as in the medical literature, we read and hear about the promises of “Big Data”, “Precision Medicine” and “Machine Learning” for medicine. These proclamations usually begin with sentences such as: “Mental health (including substance abuse) is the fifth greatest contributor to the global burden of disease, with an economic cost of $ 2.5 trillion in 2010, and expected to double by 2030.” (Conway and O’Connor, 2016).
In my post on October 7, 2017, I described what “digital phenotyping” means and what great expectations are associated with it. I also expressed my skepticism. “Big Data” in psychiatry goes beyond that. The hope is that our state of mind can be inferred from the pictures we post on Facebook or Instagram. Initial studies have already been published in which an algorithm, based on Instagram photos or Twitter posts, arrived at the diagnoses “depression” or “post-traumatic stress disorder” in a high percentage of cases, sometimes long before a clinical diagnosis was made. Soon, machines are expected to analyze speech in order to derive diagnoses such as depression or incipient dementia. The music we listen to is likewise supposed to allow conclusions about our emotional state. There is a serious hope that, by analyzing all the data collected about us – not only our digital traces, but also biological data: genes, epigenetic patterns, hormone levels, everything that can be “measured” – mental illnesses can be detected so early that they no longer even occur.
If you want to get an idea of what this vision might mean, watch Steven Spielberg’s film “Minority Report”, in which crimes are prevented before they are committed. The future vision of “Big Data Psychiatry”, however, goes far beyond that, and it raises many questions: Who will make a medical diagnosis in the future? A doctor? Or the machines of Google and Apple? And if the data collectors find evidence in all my data that I am suffering from depression, who will be informed? A public health system? A higher “authority for mental health”? Will I be contacted by this authority for treatment? And if I do not want that, will I be “monitored” to prevent my possible suicide? What happens to someone whose data suggest, with 90% certainty, that he will be diagnosed with psychosis within the next six months? And if we believe – as some actually do, who regard humans as deterministic biological machines – that it will occur with 100% certainty, then what? Do we treat him prophylactically? Do we even have the right to warn him?
Who will define what is “normal”? When does a “depression” require treatment if a machine makes the “diagnosis”? In a thoughtful article, Manrai, Patel (both Harvard University) and Ioannidis (Stanford University) recently asked the question, “In the Era of Precision Medicine and Big Data, Who Is Normal?” (Manrai et al., JAMA 2018). The concept of the Research Domain Criteria (RDoC) likewise suggests that in the future – perhaps somewhat exaggerated – one will no longer treat the suffering person, but the disturbed brain function. Will there be cut-off values, as is usual in laboratory medicine, outside of which treatment should be advised?
Finally: Emotional states like depression, fear or despair have an evolutionary meaning. Western industrialized societies in particular tend to regard them as unwanted and seek to switch them off at any cost. I am convinced that this is one reason why the use (or perhaps better: consumption?) of antidepressants has increased dramatically over the last twenty years and continues to increase each year. Have we become healthier? The answer can be found in the first paragraph of this post. Big Data psychiatry is a response to social developments. It causes many people at least as much discomfort as those developments themselves.