CyLab Researchers Develop a Taxonomy for AI Privacy Risks
Privacy is a key principle for developing ethical AI technologies. But with technological advances in AI far outpacing regulation, the responsibility for mitigating privacy risks in goods and services that incorporate these technologies falls primarily on their developers.
That's a tricky proposition for AI practitioners, and it starts with tangibly defining AI-driven privacy risks in order to address them in the research and development stage of new technologies.
And while established privacy taxonomies already exist, it's likely that groundbreaking AI technological advancement will bring with it unprecedented privacy risks that are unique to these new technologies.
"Practitioners need more guidance on how to protect privacy when they're creating AI products and services," said Sauvik Das, assistant professor at Carnegie Mellon University.
"There's a lot of hype about what risks AI does or doesn't pose and what it can or can't do. But there's not a definitive resource on how modern advances in AI change privacy risks in some meaningful way, if at all."
In their paper, Das and a team of researchers seek to build the foundation for this definitive resource.
The research team, which also features fellow Carnegie Mellon researchers, constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. The team's goal was to codify how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter known risks.
Das and his team used Daniel J. Solove's 2006 privacy taxonomy as a baseline of traditional privacy risks that predate modern advances in AI. They then cross-referenced the documented AI privacy incidents to see whether, and how, they fit within Solove's taxonomy.
"If an incident where we're seeing AI causing harm challenges that taxonomy, then that's an instance where AI has changed privacy harm in some way," explained Das. "But if the incident fits neatly into the taxonomy, then that's an instance where maybe it's just exacerbated the existing harm, or maybe it hasn't meaningfully changed that privacy harm at all."
In examining the documented AI privacy incidents through the lens of Solove's taxonomy, the team identified 12 high-level privacy risks that AI technologies either newly created or exacerbated, outlined in the table below.
The researchers identified 12 privacy risks that the unique capabilities and/or requirements of AI can entail. For example, the capabilities of AI create new risks (purple) of identification, distortion, physiognomy and unwanted disclosure; the data requirements of AI can exacerbate risks (light blue) of surveillance, exclusion, secondary use and data breaches owing to insecurity.
"We set a divide, as it relates to products and services, in two ways that pipe into the taxonomy: the requirements of AI and the capabilities of AI," said Das.
"The requirements of AI refers to ways that the data and infrastructural requirements of AI exacerbate privacy risks already captured in Solove's taxonomy.
"The capabilities of AI refers to its ability to do things like infer information about users to predict where they're going to go next or what they're going to do next."
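As an illustrative sketch (not an artifact of the paper), the two-axis framing above can be encoded as a small data structure. Only the eight risk categories explicitly named in this article are listed; the field names and the structure itself are assumptions made for illustration.

```python
# Illustrative sketch only: encodes the article's two-axis framing of
# AI privacy risks (new vs. exacerbated; capability- vs. requirement-driven).
# Category names come from the article; the structure is an assumption.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIPrivacyRisk:
    name: str
    status: str   # "new" or "exacerbated", relative to Solove's taxonomy
    driver: str   # "capability" (what AI can do) or "requirement" (the data AI needs)

RISKS = [
    # The capabilities of AI create new risks.
    AIPrivacyRisk("identification", "new", "capability"),
    AIPrivacyRisk("distortion", "new", "capability"),
    AIPrivacyRisk("physiognomy", "new", "capability"),
    AIPrivacyRisk("unwanted disclosure", "new", "capability"),
    # The data requirements of AI exacerbate existing risks.
    AIPrivacyRisk("surveillance", "exacerbated", "requirement"),
    AIPrivacyRisk("exclusion", "exacerbated", "requirement"),
    AIPrivacyRisk("secondary use", "exacerbated", "requirement"),
    AIPrivacyRisk("insecurity", "exacerbated", "requirement"),
]

def risks_by(status: str) -> list[str]:
    """Return the names of risks with the given status."""
    return [r.name for r in RISKS if r.status == status]

print(risks_by("new"))
# → ['identification', 'distortion', 'physiognomy', 'unwanted disclosure']
```

In this framing, querying by status or by driver cleanly separates risks that call for fundamentally new mitigations from those where existing privacy protections need strengthening.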
Two examples of newly created privacy risks resulting from AI technologies that the researchers identified are physiognomy (the long-debunked pseudoscientific practice of judging one's character from facial characteristics) and the proliferation of deepfake pornography.
"There's a 'distortion' category in Solove's taxonomy which addresses instances where information about you can be used against you, and as a general class that would capture this use of deepfakes," said Das. "But there's something fundamentally new about the capability of AI to take information about you in one context and generate photorealistic content about you in another context, something information and computing technology wasn't able to do in the past, or at least not without a lot of effort. It represents a new category of distortion risks that never existed in the past, and AI has fundamentally changed that."
Das and his team will present their findings in May at the 2024 ACM CHI Conference on Human Factors in Computing Systems (CHI 2024) in Honolulu. They hope to build on their current research to make it easier for practitioners and regulators to use their taxonomy to mitigate privacy risks when developing and managing these technologies.
"Soon, we're going to have a web version of this taxonomy, so that should make it a little bit more accessible," said Das. "Our hope is that this taxonomy gives practitioners a clear roadmap of the types of privacy risks that AI, specifically, can entail."