Time to move the Ethics of AI to the Front of the Line, by Jacqueline Ganim-DeFalco
Artificial intelligence is part of our lives, but by the time we see it at work, it may be too late to understand its impact. Visionaries like Cansu Canca, Founder and Director of the AI Ethics Lab, and leaders like Rana el Kaliouby, CEO and Founder of Affectiva, and Matthew Wansley, General Counsel of nuTonomy, are at the forefront of the ethics of AI – understanding both its power and its limitations. I was fortunate to attend an event hosted by the Harvard Digital Initiative (#DigIn) this past Thursday. The challenge of how to plan for and monitor AI applications is rising to the top of many digitalization arenas. The most common discussions have been in Health Care, Pharma, and BioTech, and the frameworks that exist there are part of a dense infrastructure that is not nimble enough to keep up with the pace of innovation in AI. According to experts like Cansu Canca, ethics begins in R & D, where ethical dilemmas should first be flagged. That alone requires retraining across the disciplines of research, software development, and product development; ethics analysis effectively needs to be built into the product development lifecycle. She gave the example of a “wearable” T-shirt designed to work with an app and coach a person through a healthy lifestyle. By collecting intimate data about the individual’s lifestyle, it could begin to identify bad habits and go as far as proselytizing to and harassing the person. These outcomes must be anticipated and discussed up front, not held for later review by an ethics board or surfaced in a newsworthy scandal.
Matthew Wansley, General Counsel of nuTonomy, deconstructed the Arizona accident involving an Uber self-driving vehicle and its human safety driver. His discussion focused on the astounding AI capabilities that can make driving safer – including understanding the condition of the driver. However, he drew the line at human “judgement,” which is not yet replicable. More compelling was his look across the entire spectrum of transportation, examining the extent to which traditional infrastructure needs to be re-tooled to take full advantage of what is being added to vehicles. The automotive industry cannot tackle this alone; it will take many unique partnerships. Everything from the digitization of the rules of the road to the roads themselves requires redesign over the long haul. In the meantime, one can be assured that riding in an autonomous vehicle today is closer to riding with your grandparents than with Mario Andretti. Truly, they are erring on the side of caution.
Rana el Kaliouby and her team at Affectiva are tackling the vast data set of human emotions – effectively creating a global emotion repository. The software captures facial expressions and other physically trackable changes in the body drawn from typical communication between humans. Her company advocates for a new “social contract between humans and AI based on reciprocal trust.” Affectiva is not only building powerful tools but also holding itself to incredibly high standards in how the data and technology are applied – staying out of potentially compromising areas like “security.” All of the research is “opt-in,” and its employee base is a microcosm of the cultures it hopes to understand through the research. The primary industry using this data today is automotive; ironically, the Higher Education segment seems hesitant to dive into this research due to privacy issues. Overcoming those privacy concerns and amplifying the benefits of this highly human-centered approach to AI is key to growth in this area. We might just be kidding ourselves about whether there is really a trade-off at all. Nearly every device we own is already collecting valuable data. Personally, I’d rather channel it proactively into the right hands, but this is a question for thought leaders like Rana to help us better understand and embrace. Many thanks to the organizers at Harvard’s Digital Initiative for this engaging and timely forum.