
A Brief History of Neurotechnology

by admin

Elon Musk recently announced on X (formerly Twitter) that his brain-computer interface company Neuralink had implanted its “first” “chip” in a human test subject and that the patient was doing well after the operation. In another post, Musk said the patient had successfully moved “a mouse”, by which he meant a cursor on a screen. Musk and Neuralink’s stated goal: “Imagine if Stephen Hawking could communicate faster than a speed typist or auctioneer”. There is no evidence for Musk’s claims, and the news value of these posts is close to zero:

Quite apart from the fact that the only source for the claims is Musk himself, these alleged advances in brain-computer interface (BCI) technology are old hat. As brain researcher Anil Seth points out in his Guardian article on whether we should all have chips put into our heads, Neuralink is a newcomer to the field, and the first on-screen cursors were mind-controlled back in the 1990s. Neuralink’s actual innovation, a surgical robot that inserts the brain implants, remains completely obscure in recent reports, and neuroscientists are expressing concern.

It’s obvious why Elon Musk is working on BCI technology with Neuralink: the field is making rapid progress (even without Musk), and an automated implantation system combining BCI and surgical robot promises gigantic profits. In the last 12 months alone, neuroscientists working with artificial intelligence have made a paraplegic man walk again, enabled another paralyzed patient to use his arms again and feel with his hands, allowed a woman paralyzed after a stroke to speak again at a “brain-to-text (…) rate of 78 words a minute”, reconstructed music from brain activity, and reconstructed images from visual cortex activity. In previous years, digital “telepathy” had already been demonstrated, and the brains of three participants were connected to form a neuro-network.


Psychologist Gary Lupyan and philosopher Andy Clark wrote an essay at Aeon about why they don’t believe in direct digital “telepathy”: in short, because neural activity is too idiosyncratic and individual to be reliably transferred from one person to another. However, they overlook the fact that AI could function as an interpreter between the different, well, wavelengths of the participants. That, though, is still a thing of the future and (hopefully) a long way off, because beyond the superficial sci-fi coolness of digital “telepathy”, the question naturally arises whether I really want a direct line to other people’s thoughts, with all their intrusive thoughts and the whole neuro-chaos that makes up a person’s inner world, or how one could technically prevent access to “private thoughts”. Maybe I don’t want to know what my neighbor really thinks about me, and vice versa.

In the text mentioned above, Anil Seth also asks the important question of whether people will really open their heads to such neuro-gimmicks just to be able to read the chaotic thoughts of their brain-implanted neighbors or to receive a few cognitive prostheses. His answer: No. However, it appears that invasive surgery will not be necessary in the future.

Last December, researchers at the University of Sydney unveiled a non-invasive brain scanner worn by patients as a simple EEG “cap.” Although its signal is noisier, the experiment still achieved “state-of-the-art performance” for brain-to-text output, albeit with a relatively high error rate. It can nevertheless be expected that these non-invasive technologies will progress just as rapidly, and there is already a first hint of future mass-market applications: in the summer of 2023, Apple filed a patent for EEG AirPods, i.e. in-ear headphones that can read electrical signals from the brain. You no longer have to be a sci-fi nerd to imagine the next generations of Apple’s VR/AR glasses with brain interfaces.


All of these massive advances in BCI technology make it clear that we need a broad public debate about the ethics of brain interfaces. There are already thousands of brain-implant patients left with no tech support, outdated code and dead batteries after a startup goes bankrupt, and Technology Review writes about the case of Rita Leggett, a patient with severe epileptic seizures who was able to live a near-normal life thanks to a new type of implant, which had to be removed after the company went bankrupt, a possible violation of her human rights.

In July 2023, UNESCO organized the first conference on neuroethics, which called for a human-rights framework for neurotechnologies and discussed the concept of neuro-rights. In 2021, Chile became the first country in the world to amend its constitution to add explicit neuro-privacy rights.

The scientific discourse on neuroethics, on the other hand, has been going on for several decades; the International Neuroethics Society was founded in 2006. The 2021 paper on neurorights and the 2020 paper “Ethical Aspects of BCI Technology: What Is the State of the Art?” provide a good overview of the current state of affairs.

For me personally, these ethical considerations don’t go far enough: a few days ago, AI researchers presented a computer-vision algorithm that can read lips, i.e. decode speech from a video signal. What happens in a society in which I can listen in on every conversation just by looking at the speaker, not only via a brain implant but also with a smartphone and its high-resolution super cameras?


It’s entirely conceivable, given the rapid advances in BCI technology, that visual cortex signals could be fed directly into lip-reading AI, a kind of technological superhearing. For several years now, we have been discussing phenomena such as peer surveillance, in which people use new technologies to monitor, stalk and “doxx” others, or in which TikTokkers publish apparently private gossip overheard from strangers. In the coming decades, neurotechnology combined with artificial intelligence has the potential to turn entire societies into literally supernatural super-recognizers and to make such stories look like harmless banter. The “Big Brother” of Orwell’s 1984 becomes a “Big Family” in which everyone is under surveillance by everyone else.

The debates about the security of neurodata from today’s BCI pioneers can therefore only be the beginning of a broad social discussion about where we set the limits for the new brain-enhanced AI/BCI cyborgs. Nicholas J. Kelley, Stephanie Sheir and Timo Istace now take all of this as an opportunity to tell a brief history of neurotechnology on The Conversation and to point out its immense ethical implications.

Because: “Thoughts are free, no one can guess them” may no longer be true in a few years.
