During a keynote session, Thomas Reardon, cofounder and CEO of CTRL-Labs, presented his company’s cutting-edge work on neural interfaces. His team has developed a bracelet that picks up signals from the body’s motor units and uses neural networks to interpret intended movements—what Reardon calls “intention capture.” The results were stunning. In one video demonstration, a person wearing the bracelet played the Atari game Asteroids with his hand resting on a desk, making virtually no hand movements—the bracelet simply picked up and executed the moves the player thought about making and signaled through his muscle fibers. In another demonstration, a bracelet wearer “typed” using finger movements but no actual keyboard. Reardon explained that he expects his technology will one day eliminate the need for people to interact with physical devices such as keyboards and cell phones. He said CTRL-Labs will sell the first version of the bracelet by the end of the year.
Session topics and attendee questions reflected concern about some of the issues swirling around artificial intelligence. AI academics and practitioners alike spoke about the difficulty of preventing machine-learning models from becoming biased. Jana Eggers, CEO of AI platform provider Nara Logics, explained that one of the best defenses is having a culturally diverse set of people working on AI data collection and models. Still, that approach isn’t foolproof, she said; only through extensive testing can biases be surfaced and corrected. Olga Russakovsky, an assistant professor of computer science at Princeton and a computer vision specialist, pointed out the virtual impossibility of manually sifting through, for example, image data to surface and weed out bias. “You need a model to detect bias, but then how do you know if the model itself is unbiased?” she said.
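To make the testing point concrete: long before reaching for a learned bias detector of the kind Russakovsky questions, one crude first-pass check is a representation audit of dataset metadata. The sketch below is purely illustrative and not anything the speakers described; the `group` field and the 10 percent threshold are assumptions chosen for the example. It simply flags any group whose share of the dataset falls below the threshold, the sort of imbalance that extensive testing aims to surface.

```python
from collections import Counter

def representation_audit(samples, group_key, threshold=0.1):
    """Return the share of each group that falls below `threshold`.

    `samples` is a list of dicts; `group_key` names a hypothetical
    metadata field (e.g., an annotator-assigned category tag).
    """
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    # Keep only the underrepresented groups and their dataset share.
    return {g: n / total for g, n in counts.items() if n / total < threshold}

# Toy metadata with a deliberately skewed group distribution.
data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10 + [{"group": "C"}] * 2
flagged = representation_audit(data, "group")
print(flagged)  # groups "B" and "C" fall below the 10% threshold
```

A check like this catches only gross imbalance in labeled metadata; it says nothing about subtler correlations, which is exactly why the panelists framed bias detection as an open problem.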