
This article explores the ethical considerations surrounding brain-computer interfaces for enhancing human cognition and communication.
Article by Suhani Tikkisetty
Member- Junior
Brain-Computer Interfaces (BCIs) are devices that receive, analyze, and translate brain signals into commands that people can use to perform particular tasks. The primary goal of a BCI is to assist individuals with neuromedical disorders in performing tasks that would otherwise be challenging for them. With this new technology, these individuals are able to interact with, influence, and modify their environments more effectively; some who were previously unable to speak or use their limbs can now do so. While brain-computer interfaces have many beneficial applications, their use raises a number of ethical questions, such as the privacy of the user, the amount of self-agency the user retains, and the responsibility for actions the device may perform. Because of these ethical concerns, Brain-Computer Interfaces are not developed enough to enhance human cognition and communication.
One major ethical concern surrounding Brain-Computer Interfaces is the self-agency of the individuals who use them. Self-agency is “the feeling of control over actions and their consequences” (Moore). Self-agency is a complicated matter when it comes to BCIs: because the BCI is technically the one completing a particular task, the BCI would technically be responsible, and the person would not feel self-agency. However, the BCI's actions are based on the person's thoughts, which express their desired action, and this gives the person some degree of self-agency. Moreover, because BCIs are a new technology with much still to be discovered, it is difficult to determine whether a person actually feels a sense of self-agency. BCIs are “machine learning software” that “[decode] neural data” through “complex transformations” that “cannot be fully understood or predicted by humans”. Because humans cannot truly understand the way a BCI “[decodes] neural data,” there is essentially a “black box between” a “person’s thoughts and technology that is acting on their behalf” (Journal of Child Neurology). Since humans cannot fully predict how a BCI will translate neural signals and complete actions, there is no definite way to confirm that the BCI's actions correspond to the individual's thoughts. Given this discrepancy, individuals naturally would not feel self-agency over their actions, because there is no confirmation that the BCI did what they wanted it to do. This concern demonstrates that, because a BCI cannot carry out the tasks an individual desires with complete accuracy, Brain-Computer Interfaces are not fully developed. Furthermore, the fact that “no matter the recording method, the signal type, or the signal-processing algorithm, BCI reliability for all but the simplest applications remains poor” shows that BCIs are unreliable in most cases. Because a BCI remains unreliable for most applications, the actions it completes will fail to give someone the feeling of self-agency: the device will often fail to complete the person's main desired action, leaving them feeling that they are not responsible for its consequences. The inability of BCIs to deliver desired actions accurately also demonstrates that they need further development, since their unreliability for most applications means they fail to enhance human cognition and communication.
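To make the “black box” point concrete, the sketch below is a purely illustrative toy decoder, not how any real BCI works: it maps simulated neural-signal features to a handful of hypothetical wheelchair commands using an off-the-shelf machine-learning model. The feature counts, command names, and data are all invented for illustration.

```python
# Illustrative sketch only (hypothetical data and commands), assuming numpy
# and scikit-learn are available. A toy "decoder" that turns simulated
# neural-signal features into wheelchair commands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Simulated training data: each row stands in for a vector of neural-signal
# features (e.g., band-power values from several electrodes); each label is
# one of four made-up commands.
X_train = rng.normal(size=(200, 16))      # 200 samples, 16 features
y_train = rng.integers(0, 4, size=200)    # 4 commands: stop/left/right/forward

decoder = RandomForestClassifier(n_estimators=100, random_state=0)
decoder.fit(X_train, y_train)

# At use time, a new window of brain activity is turned into a command.
new_signal = rng.normal(size=(1, 16))
command = decoder.predict(new_signal)[0]
print("decoded command:", ["stop", "left", "right", "forward"][command])
```

Even in this toy version, the mapping from signal to command is spread across hundreds of learned decision trees; no single rule explains why a given signal produced a given command, which is exactly the opacity between a person's thoughts and the device's actions that the paragraph above describes.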
Another large ethical concern regarding Brain-Computer Interfaces is privacy. Because BCIs read a person's thoughts in order to perform commands, they gradually collect information about that individual, which creates privacy risks. For example, if someone who is unable to control their wheelchair uses a BCI to do so, the BCI will naturally pick up that person's food preferences, the times they tend to get hungry or thirsty, and the places they tend to go (Yin). Although these small privacy breaches may not seem significant on their own, the information builds up as the device continues to aid the person; eventually, the BCI will know nearly everything about them. Worse, with “readily available basic hardware tools, information can be captured, read, and manipulated” (Sundararajan). Essentially, anyone who can access the BCI can steal, use, and manipulate this information for harmful purposes. Although brain-computer interfaces can help enhance communication and cognition, they are not worth the risk if many people can obtain a person's private information and everyday habits. Because the core purpose of a BCI is to read thoughts, any thoughts the person has, even ones unrelated to the commands the BCI is meant to carry out, can also be accessed. As long as someone has basic hardware tools, they can capture this information; with a BCI, no information is safe. Beyond information being exposed, “various types of attacks” “can be performed,” such as passive eavesdropping, active interception, denial of service, and data modification attacks (Sundararajan). Because the information is so easy to access, an attacker can manipulate it to disrupt the BCI's service, as in a denial-of-service attack, or simply gather information about a person to use against them later. These ethical drawbacks, one of which literally denies the main purpose of the device, demonstrate how a BCI can actually fail to enhance communication and human cognition for individuals.
Although BCIs could greatly enhance human communication and cognition, the ethical considerations outweigh their actual effectiveness, which means BCIs are not yet able to enhance human communication and cognition. Brain-Computer Interfaces raise many different ethical concerns, self-agency and privacy among them. These concerns disrupt the potential enhancement of communication and human cognition more than the BCI would benefit the user. Brain-Computer Interfaces have the potential to be a wonderful tool, but they must be further researched and developed before they can truly help humans. For now, BCIs fail to enhance human communication and cognition due to the ethical considerations surrounding them.
Sources: