Nowhere to hide: AI mind-reading tech turns thoughts into text

As recent developments in neurotech open up new opportunities for people with paralysis, they also raise ethical concerns related to privacy and consent.
  • New brain implant enables ALS patient to communicate using her thoughts
  • Can brain scans reveal what someone is thinking of?
  • Mind: the final privacy frontier

Since time immemorial, humans have been captivated by the tantalising concept of mind reading. The idea of someone or something possessing the ability to delve into our innermost thoughts and feelings — be it a telepath with supernatural abilities or a superintelligent computer capable of decoding neural patterns — has been a recurring theme in our stories, myths, and popular culture. This fascination stems from our inherent desire to understand and connect with each other on a deeper, more intimate level and, perhaps, from our innate curiosity about the mysteries of the human mind itself. In the realm of science fiction, mind-reading entities often serve as a metaphor for our fears and hopes about technology and its potential to invade or enhance our personal lives. But what if this concept, once confined to the pages of fantastical narratives, could become a reality? What if technology could indeed ‘read’ our minds, not as an invasive force, but as a tool for communication, understanding, and even healing?

As we stand on the precipice of a new era in neurotechnology, these questions are no longer purely speculative. The development of mind-reading brain implants is rapidly transforming from a far-fetched dream into a tangible reality. These devices, designed to interpret the complex language of our neurons, promise to revolutionise our understanding of the brain, potentially unlocking unprecedented avenues for treating neurological disorders, enhancing cognitive abilities, and even redefining the boundaries of human communication. In this article, we will delve into the fascinating world of mind-reading brain implants and explore their potential applications, ethical implications, and the challenges that lie ahead in this uncharted territory of neuroscience that could forever change our relationship with our own minds.

“Our results show a feasible path forward to restore communication to people with paralysis at conversational speeds”.

Stanford University researchers

New brain implant enables ALS patient to communicate using her thoughts

A team of researchers from Stanford University led by Krishna Shenoy recently published a paper in which they claim their new brain implant enabled a paralysed person to communicate using only her thoughts. The person in question lost her ability to speak due to ALS — also known as Lou Gehrig’s disease — nearly a decade ago and had to rely on an iPad or a writing board to communicate with others. The new device enabled her to translate her thoughts into text at an impressive 62 words per minute. While this figure still pales in comparison to the speed of normal human speech, which is around 160 words per minute, it’s a notable improvement over the previous record of 18 words per minute, set by another of Shenoy’s volunteers in 2019. “Our results show a feasible path forward to restore communication to people with paralysis at conversational speeds”, write the researchers.

To make this possible, the researchers implanted a brain-computer interface, consisting of a small metal pad of sharp electrodes, into the woman’s motor cortex, the part of the brain responsible for movement. They then asked the patient to imagine herself speaking and used the device to record her neural activity. The system analysed this activity to identify patterns indicating which words the patient was thinking of and displayed them on a computer screen. The researchers also equipped the system with machine learning software that predicts which word is most likely to come next in a sentence, which significantly improved accuracy.
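The decode-then-predict idea is easy to illustrate in a few lines of code. The sketch below is a deliberately toy version, assuming a hypothetical decoder output and a tiny hand-built bigram language model; it is not the Stanford team’s actual system, which works over a far larger vocabulary with far more sophisticated models.

```python
# Toy illustration of how a language model can correct a neural decoder's
# word choices. All probabilities here are invented for the example.

# Suppose the decoder is unsure between two neurally similar words after
# the phrase "I want to":
decoder_scores = {"drink": 0.45, "drift": 0.55}  # raw decoder confidence

# A minimal bigram language model: P(word | previous word)
bigram_lm = {
    ("to", "drink"): 0.30,
    ("to", "drift"): 0.01,
}

prev_word = "to"

def combined_score(word):
    """Multiply decoder confidence by language-model likelihood."""
    return decoder_scores[word] * bigram_lm[(prev_word, word)]

best = max(decoder_scores, key=combined_score)
print(best)  # the language model overrules the raw decoder: "drink"
```

Even though the decoder slightly prefers “drift”, the language model knows that “drink” is far more plausible after “I want to”, so the combined score flips the decision, which is the intuition behind the accuracy gains described above.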

Can brain scans reveal what someone is thinking of?

While the device described in the previous example is impressive in its own right, it does still require the patient to undergo a highly invasive procedure, where electrodes are embedded directly into the brain. Unless the patient is suffering from ALS or a similar condition, the risks involved with such a procedure would likely outweigh any potential benefits. But what if there was a way to identify which words someone is thinking of just by looking at their brain scans? Sounds like science fiction, right? Well, it isn’t any longer. A team of researchers from the University of Texas at Austin recently published a groundbreaking study in which they describe a new AI system that can accurately translate a person’s thoughts into a continuous stream of text by analysing images of their brain activity captured with functional magnetic resonance imaging (fMRI). To achieve this, the researchers first trained OpenAI’s GPT-1 model on a large dataset consisting of English sentences obtained from Reddit, transcriptions of the Modern Love podcast produced by the New York Times, and 240 stories from The Moth Radio Hour.

They then placed three participants in an MRI scanner and had them listen to multiple episodes of the same podcast — 16 hours in total. The brain scans were then fed into the AI system, which was instructed to guess which words were responsible for triggering which brain activity. The output was then compared to the actual words participants heard, to determine how accurate the system was and fine-tune it further. The researchers then played new episodes for the participants to listen to and asked the model to guess what they were hearing. The model proved remarkably accurate, returning something very similar to what the participants actually heard about 50 per cent of the time. In some cases, it even managed to identify exact words and phrases. “For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences”, explains Alexander Huth, assistant professor of neuroscience and computer science at the University of Texas at Austin, who led the project.
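One way to build intuition for this matching process is a toy version of candidate scoring: a language model proposes possible phrases, a predictive model estimates the brain response each phrase would evoke, and the system keeps the candidate whose prediction best matches the signal actually measured in the scanner. Everything below (the phrases, the voxel vectors, the lookup standing in for a trained encoding model) is an invented stand-in for illustration, not the UT Austin team’s actual code.

```python
# Toy sketch of scoring candidate phrases against a measured fMRI response.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Stand-in for a trained encoding model: maps a candidate phrase to the
# voxel-response vector it is predicted to evoke (toy numbers).
predicted_response = {
    "went for a run": [0.9, 0.1, 0.4],
    "ate some breakfast": [0.2, 0.8, 0.5],
}

# The response actually measured while the participant listened to the story:
measured = [0.85, 0.15, 0.35]

best = max(predicted_response,
           key=lambda p: cosine(predicted_response[p], measured))
print(best)  # prints "went for a run"
```

The phrase whose predicted response lines up most closely with the measurement wins, which is why the system can recover the gist of what a participant heard without ever reading words directly off the scan.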

“Before neurotechnology is used at scale in society, we need to protect humanity with a right to self-determination over our brains and mental experiences”.

Nita Farahany, author of The Battle for Your Brain

Mind: the final privacy frontier

As impressive as this technology is, it will be a while before it becomes commercially available. For one, fMRI machines are prohibitively expensive and can cost as much as $3 million apiece. They are also rather bulky, which means that the use of this technology would remain restricted to the lab environment. But even more importantly, the idea of a technology able to read our minds raises a host of serious ethical concerns. What if, rather than using it to improve the quality of life of paralysed people, governments and law enforcement agencies used the technology for surveillance and interrogation instead? It’s not as far-fetched a scenario as it may sound. Convinced that the technology will inevitably be misused, some neuroethicists are already calling for human rights laws to be amended to account for this possibility. “This research shows how rapidly generative AI is enabling even our thoughts to be read”, says Nita Farahany, author of The Battle for Your Brain. “Before neurotechnology is used at scale in society, we need to protect humanity with a right to self-determination over our brains and mental experiences”.

While the researchers themselves have acknowledged these concerns and are fully aware that some people may try to use their technology for nefarious purposes, they have also offered assurances that it only works on willing participants who were involved in the training of the model; applied to anyone else, it would produce unusable results. That said, they do admit that this could change as the technology continues to develop. “We think that mental privacy is really important, and that nobody’s brain should be decoded without their cooperation”, says Jerry Tang, a PhD student at the university who worked on the project. “We believe it’s important to keep researching the privacy implications of brain decoding, and enact policies that protect each person’s mental privacy”.

In closing

The rapid advancements in neurotechnology, as evidenced by the groundbreaking work from Stanford University and the University of Texas at Austin, underscore the incredible potential of mind-reading devices. These innovations promise to revolutionise communication, especially for those with debilitating conditions, and offer a deeper understanding of the intricate workings of the human brain. However, with such potential also come profound ethical dilemmas. The very idea of decoding thoughts, once confined to the realm of science fiction, now presents serious real-world concerns related to privacy, consent, and potential misuse. As we stand at the cusp of this new frontier, it is imperative to strike a balance between harnessing the transformative power of these technologies and safeguarding the sanctity of our innermost thoughts. The journey ahead requires not just technological prowess but also a deep reflection on our values, ensuring that our advancements respect and uphold the dignity and rights of every individual.


