USENIX Enigma 2023 - What Public Interest AI Auditors Can Learn from Security Testing: Legislative and Practical Wins
USENIX Enigma Conference

Published on Feb 22, 2023

What Public Interest AI Auditors Can Learn from Security Testing: Legislative and Practical Wins

Justin Brookman, Consumer Reports

Public interest researchers (such as journalists, academics, or even concerned citizens) testing for algorithmic bias and other harms can learn a great deal from security testing practice. The Computer Fraud and Abuse Act (CFAA), while intended to deter hacking, can be a legal barrier to public interest security testing (although the courts have recently cleared up much of this ambiguity). Researchers tinkering with algorithms to test for bias and other harms in the AI space run into similar CFAA barriers. AI researchers can draw on the legal and practical techniques that security researchers have used in the past, including applying for DMCA exemptions for narrowly tailored objectives, promoting bug bounty programs aimed at AI harms rather than security flaws, and more. We offer practical and policy recommendations, drawn from the experience of security researchers, that AI testing experts can advocate for to remove the legal and practical barriers that stand in the way of this kind of research.

View the full Enigma 2023 program at https://www.usenix.org/conference/eni...

