The ethics of mind-reading

By Jared Genser, Georgetown University Law Center, Washington, September 21 (360info) — Tapping into someone’s mind may soon be technically possible. Our institutions are ill-equipped to deal with the resulting human rights abuses.

It used to be science fiction, but brain-machine interfaces — devices that connect a person’s brain to a computer, machine, or other device such as a smartphone — are advancing rapidly. In science and medicine, brain-machine interfaces have revolutionized communication and mobility, helping people overcome immense mental and physical challenges. Brain-machine interfaces have helped a paralyzed, nonverbal man communicate at a speed of 18 words per minute with up to 94 percent accuracy; enabled a person with quadriplegia to drive a Formula One race car; and allowed a man with paraplegia, wearing a mind-controlled robotic exoskeleton, to take the opening kick at the 2014 FIFA World Cup. In consumer products, CTRL-Labs has developed a wristband that lets users control a computer cursor with their thoughts, and Kernel’s wearable helmet, Flow, maps brain activity with unprecedented accuracy.

While these developments are promising, brain-machine interfaces also pose new human rights challenges. Other technologies use algorithms to collect and extrapolate data about users’ personal preferences and location, but brain-machine interfaces offer something different in kind: they connect the brain directly to machine intelligence. Because the brain is the seat of human memory, perception, and personality, brain-machine interfaces threaten not only the privacy of our minds, but also our sense of self and free will.

In 2017, the Morningside Group, a group of 25 global experts, identified five “neurorights” to characterize how current and future neurotechnology (methods of reading and recording brain activity, including brain-machine interfaces) might violate human rights. These are the right to personal identity, or a “sense of self”; the right to agency, or “free will”; the right to mental privacy; the right to fair access to mental augmentation; and the right to protection from algorithmic bias, such as when neurotechnology is combined with artificial intelligence (AI). By protecting neurorights, societies can maximize the benefits of brain-machine interfaces while preventing misuse and abuse that violate human rights.

Brain-machine interfaces are already being misused and abused. For example, a US neurotechnology startup supplied wearable brain-tracking headbands to a school in China, where they were used in 2019 to monitor students’ attention without consent. Workers at a Chinese factory have reportedly worn caps and helmets that use brain signals to decode their emotions, with an algorithm analyzing emotional changes that affect productivity. While the accuracy of this technology is debatable, it sets a disturbing precedent.

Misuse and abuse of brain-machine interfaces could also occur in democratic societies. Fearing that non-invasive (non-surgical), wearable brain-machine interfaces could one day be used by law enforcement on criminal suspects in the US, some experts have advocated expanding constitutional doctrines to protect civil liberties. The rise of consumer neurotechnology underscores the need for laws and regulations that keep pace with the technology.
In the US, brain-machine interfaces that do not require implantation in the brain, such as wearable helmets and headbands, are already marketed as consumer products, with claims that they support meditation and wellness or improve learning efficiency and brain health. Unlike implantable devices, which are regulated as medical devices, these “wellness” devices are consumer products subject to minimal or no regulation.


Consumers may not be aware of the ways in which using these devices can violate their human rights, including the right to privacy. The data collected by consumer neurotechnology may be stored insecurely or even sold to third parties. User agreements are long and technical, and they contain provisions that allow companies to keep users’ brain scans indefinitely and sell them to third parties without the kind of informed consent that would protect individuals’ human rights. Today, only part of a brain scan can be interpreted, but that share will only grow as brain-machine interfaces evolve.


The human rights challenges posed by brain-machine interfaces need to be addressed to ensure their safe and effective use. At the global level, the UN Human Rights Council, a body of 47 member states, is set to vote on, and is expected to approve, the first major UN study on neurorights, neurotechnology, and human rights. UN leadership on neurorights would build international consensus on a definition of neurorights and prompt new legal frameworks and resolutions to address them.

Expanding the interpretation of existing international human rights treaties to protect neurorights is another important way forward.

The Neurorights Foundation, a US non-profit organization dedicated to protecting human rights in the ethical development of neurotechnology, released a first-of-its-kind report showing that existing international human rights treaties are ill-equipped to protect neurorights. For example, the Convention against Torture and the International Covenant on Civil and Political Rights were drafted before the advent of brain-machine interfaces; terms and legal standards such as “pain”, “liberty and security of person”, and “freedom of thought and conscience” need to be reinterpreted in new language that addresses neurorights. Updating international human rights treaties would also legally oblige ratifying states to enact national laws protecting neurorights.


Another important step is developing a global code of conduct for companies, which would also help set standards for the collection, storage, and sale of brain data. For example, making the privacy of brain data the default for consumer neurotechnology — one that users must actively opt out of — would protect informed consent by letting users choose when their brain activity is monitored. Such a standard could easily be replicated in national and industry-level regulations.

Effective multilateral cooperation, national attention, and industry engagement are all required, simultaneously, to address neurorights and close the “protection gaps” in international human rights law. Ultimately, these approaches will help guide the ethical development of neurotechnology while offering the strongest avenues to prevent misuse and abuse of the technology.

