New technologies and digitalisation are at the centre of medical debates. Imaging techniques, big data, artificial intelligence and their use in medical practice are controversial topics that raise considerable questions, above all ethical ones.
Translated and adapted from the original German version.
The doorbell rings. I open it and look into the faces of two friendly but unfamiliar people. "Good afternoon, Ms. Zakirova. We are here on behalf of the psychiatric observation centre for crisis prevention and mental health and would like to carry out a wellness check on you. We have data analyses according to which it is very likely that you will experience a psychosis in the next few days. We see from your patient file that you have previously sought psychiatric help for depressive episodes, so you fall into the target group we want to observe. We would like to ask you to come with us to the clinic for clarification."
The near future of AI-assisted psychiatry could look something like this. What seems unimaginable has long been technically possible. So far, the realisation of these possibilities has been held back (at least in Germany) by legislation. Uncertainty about data management persists, among physicians as well.
One thing is certain: digitalisation has triggered a development that is rapidly conquering more and more areas that were previously considered specifically human. This is particularly true of artificial intelligence (AI), which marks an epochal turning point in all areas of life and is becoming increasingly important for medicine as well. One example is deep learning: machine learning with artificial neural networks loosely modelled on the human brain.
Deep learning uses self-adapting algorithms that learn from data streams and can thereby support clinical decisions. In this way, it can improve diagnostics in many areas, opening up possibilities for detecting and treating diseases at an earlier stage than would otherwise be possible. In the field of biomarkers in particular, AI helps to identify subgroups within heterogeneous disease presentations, which can then be treated in a targeted way. The fields of application in which AI offers potential for optimising diagnostic and therapeutic practice are diverse and complex.
Predictive and prognostic indicators also raise the question of where to draw boundaries: When is intervention necessary? Who determines this necessity, and who is responsible for it? Could a situation arise in which it is no longer the person, who is not (yet) suffering, that is being treated, but his or her biomarker profile?
Back to the fictional opening scene: Am I, as a patient, glad that my psychosis, expected to set in within the next few days, has been recognised by the algorithm and that a support system has been set in motion to provide me with the best possible care? Or do I feel forced into a situation that does not correspond to my actual perceived condition? Was the intervention preceded by my consent to data collection and analysis? Under what circumstances did I consent? Who manages my data, to whom is it accessible, and where does it come from? Who holds data sovereignty, what monetary interests are involved, and which networks and collaborations create the biomarker profile that is then analysed, evaluated and ultimately taken to describe me as a person?
Technological enthusiasm and technological pessimism seem to be the two poles between which medical discourse meanders. The discussion about what is and is not possible, and the question "Where does the road lead?", must above all be conducted with an eye to navigation.