Computers can’t hold a candle to a 10-year-old’s mastery of human language. But despite their shortcomings, they continue to surprise us with their ability to mimic the way we communicate.
One reason natural language processing (NLP) confuses many clinicians is that the term itself is a bit misleading. It’s really about human language processing: a branch of AI that uses algorithms to understand the complexities of what we humans take for granted, and then to convert that language into machine-readable text.
That’s no small task, considering all the ways in which we communicate our thoughts and feelings, including idioms, metaphors, grammatical and usage exceptions, strange word spellings, multiple meanings for individual words, and unusual sentence structures. We have no trouble understanding that noses can run and feet can smell, but then again, noses can smell and feet can run. And most of us don’t struggle with expressions like ‘the ball is in your court,’ ‘I’m under the weather,’ or ‘break a leg,’ which make no sense if interpreted literally.
Similarly, when clinicians use casual terms in their EHR notes, like ‘snake oil remedy’ or ‘the patient can’t kick the habit,’ it’s easy to see how a computer with only an elementary knowledge of human speech might get it wrong.
NLP’s role as translator
Fortunately, there are several NLP algorithms that deal with these peculiarities. They use semantic analysis to figure out the difference between the verb ‘make’ and the noun ‘make,’ as in “What make of car is that?”, a process called part-of-speech tagging. Similarly, word sense disambiguation enables an NLP algorithm to differentiate between ‘make the grade’ and ‘make a bet.’
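The ‘make’ example above can be sketched in a few lines of code. This is a deliberately simplified toy, not a production tagger (real systems such as spaCy or NLTK use statistical or neural models trained on large corpora); the heuristic and word lists here are illustrative assumptions only.

```python
# Toy part-of-speech disambiguation for the word 'make'.
# Heuristic (illustrative only): 'make' preceded by a determiner
# ("what make of car") is a noun; otherwise treat it as a verb.

DETERMINERS = {"what", "which", "the", "a", "an", "that"}

def tag_make(tokens):
    """Return (token, tag) pairs, disambiguating the word 'make'."""
    tagged = []
    for i, tok in enumerate(tokens):
        word = tok.lower()
        if word == "make":
            prev = tokens[i - 1].lower() if i > 0 else ""
            tag = "NOUN" if prev in DETERMINERS else "VERB"
        else:
            tag = "OTHER"  # a real tagger labels every token, not just one
        tagged.append((tok, tag))
    return tagged

print(tag_make("what make of car is that".split()))
# 'make' is tagged NOUN
print(tag_make("they make the grade".split()))
# 'make' is tagged VERB
```

A genuine tagger considers far more context than the previous word, but the shape of the task is the same: assign each token a grammatical role before deeper analysis runs.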
There are also algorithms that perform sentiment analysis, which attempts to decipher the emotions behind a person’s words. In its simplest form, sentiment analysis labels a statement as either positive or negative, loosely suggesting love, kindness, and respect on the one hand, or hate, suffering, and disgust on the other.
Harrison et al. explain that these rudimentary algorithms rely on lexicons containing words or phrases that are typically associated with the relevant emotions. Such digital tools have the potential to improve population health analytics and have been used to track disease outbreaks and detect patterns in social media posts for specific disorders. Unfortunately, they are easily thwarted when survey respondents are sarcastic or use negative statements about the term being analyzed.
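A minimal sketch of the lexicon approach described above shows both how it works and why sarcasm defeats it. The word lists here are tiny illustrative stand-ins, not a real sentiment lexicon.

```python
# Minimal lexicon-based sentiment scorer (illustrative sketch).
# Count matches against positive and negative word lists and compare.

POSITIVE = {"love", "kindness", "respect", "relief", "improved"}
NEGATIVE = {"hate", "suffering", "disgust", "pain", "worse"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("the staff showed real kindness and respect"))
# → positive
print(sentiment("oh I just love being on hold with no relief"))
# also scores positive: the lexicon counts 'love' and 'relief'
# at face value and misses the sarcasm entirely
```

The second example is the failure mode noted above: a word-counting lexicon has no notion of irony, negation scope, or tone, which is why more sophisticated models are needed for survey and social media text.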
In its early iterations, NLP relied on rules-based algorithms to interpret human communication, but more recently these have been replaced by machine learning systems, including deep learning algorithms such as convolutional neural networks. No doubt the recent introduction of large language models will have an even more profound impact on NLP systems.
Making a case
Use cases for NLP algorithms are plentiful. They serve as the backbone for translation and clinical transcription services, and play a role in AI systems that convert 30-page EHR reports into manageable-length summaries. As mentioned previously, they are being used in population health analytics. Aramaki et al. also highlight their value in numerous medical specialties, noting: “NLP can support pathologists by providing a better computer-based image retrieval system incorporating pathology reports or by automated pathology reporting… NLP can help screen patients for contraindications to diagnostic imaging… NLP can help in the screening, early diagnosis, or severity estimation of various diseases such as depression, bipolar disorder, dementia, psychosis, and schizophrenia…”
Computers will never fully duplicate the poetry, reasoning, and eloquence of the human mind. As Shakespeare once said: “What a piece of work is a man, How noble in reason, how infinite in faculty, In form and moving how express and admirable, In action how like an Angel, In apprehension how like a god.” NLP systems may not have godlike qualities, but it’s hard to imagine a healthcare ecosystem without them.