Human brain partially connected to circuit board, illustration by Prostock-studio/Adobe Stock

TechLab Narrative

The Science Fact and Policy Future of Brain-Digital Interfaces

August 2, 2019

Brain-to-digital interfaces have been the stuff of science fiction for decades, but they are edging closer to science fact. The recent use of high-density electrocorticography to achieve ~60% success rates in “mind reading” points toward these capabilities becoming reality sooner rather than later. When combined with IoT sensor networks (including body computing and wearables) and rapidly evolving machine learning techniques, developers expect to have a viable capability in the lab within the next five years. It will likely take longer to move from the lab to commercial deployment for a variety of reasons, but the path is likely to follow that of other technological developments.
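
At its core, this kind of “mind reading” is a supervised machine learning problem: map features of recorded cortical activity to an intended word or action. As a rough sketch of that general recipe (not the cited study’s actual pipeline), the following Python example trains a classifier on synthetic data. The 64-electrode grid, the 10-word vocabulary, and the logistic-regression baseline are all illustrative assumptions; real neural recordings are far noisier and require extensive signal processing.

```python
# Toy sketch of neural decoding as supervised classification.
# All numbers are hypothetical; the data is a synthetic stand-in
# for features such as per-electrode high-gamma band power.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_classes = 400, 64, 10  # assumed grid size and vocabulary

# Give each "word" a distinct mean activity pattern across electrodes.
class_patterns = rng.normal(size=(n_classes, n_electrodes))
labels = rng.integers(0, n_classes, size=n_trials)
features = class_patterns[labels] + rng.normal(scale=3.0, size=(n_trials, n_electrodes))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# A linear classifier is a common baseline for trial-level decoding.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoding accuracy: {decoder.score(X_test, y_test):.0%} (chance: 10%)")
```

On clean synthetic data a decoder like this scores far above chance; the gap between such toy results and the ~60% rates reported on real recordings is exactly the engineering distance the paragraphs below describe.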

The first developments will be technical and address a set of problems we’ve seen and solved before: the technology needs to be lighter, smaller, faster, less obtrusive, untethered, and so forth. Cell phones evolved from shoebox-sized handsets to the current smartphone; augmented and virtual reality (AR/VR) devices are in the middle of that technological improvement, and brain interfaces are in the earlier stages of the same arc. Scientists know what technical “right answers” look like, so they can work toward them.

The second set of developments will concern the user experience. New interfaces and media require rethinking what humans do, how they interact with technology and the world, and why they should embrace a new capability. Just as the first movies were crude, the first computer user interfaces clunky, and the first AR/VR experiences exciting but not particularly useful or compelling, brain interfaces will need to evolve a new set of interactions to move from curiosity to compelling to necessary. It’s unclear what “right answers” look like here, so this is a more challenging area.

The third set of developments will involve improving accuracy, efficacy, and safety. Digital technologies now touch every stage of life, from cradle to grave, and adoption rates continue to climb. As with any consumer product, brain interface designers will need to sort out which agencies oversee which aspects of how these devices affect humans, both individually and at a societal level. This is where independent standards organizations like IEEE can play an important role. Evidence-based analysis, not just marketing and hype, is needed as society moves deeper into the digital epoch.

The fourth set of developments dovetails with the above and will center on policy. Which situations are appropriate for applying these interfaces, and which aren’t? New societal norms and ethical mores will need to be considered. Because of the complexity of the technology, it is critical that scientists, engineers, and developers be involved in the conversation along with other stakeholders. The future of technology needs informed input from technologists, policy makers, and the public. It is not solely a government decision, nor does it rest entirely with corporations. It is a human question that needs engagement from all sides.

These emerging technologies raise the question, “What does it mean to be human?” We may not be creating these interfaces in the Pardee RAND Tech & Narrative Lab, but our work to date confirms our belief that the answer — to this and many other technology questions — needs to emerge from an ongoing negotiation among all sectors of society.


Todd Richmond is director of the Pardee RAND Tech & Narrative Lab and a professor at the Pardee RAND Graduate School.

An earlier version of this commentary appeared on toddrichmond.com on August 1, 2019. The TechLab Narrative gives Pardee RAND students and faculty a platform to share insights based on their explorations and experiments within the school's Tech and Narrative Lab.