The use of artificial intelligence (AI) has been raising legal and ethical questions across the country. The AI Now Institute at New York University is a research center dedicated to studying the social implications of artificial intelligence. This week, two of its members, along with other experts, will testify in Congressional hearings about the growing accountability gap in algorithmic systems and AI.
This comes on the heels of the introduction of the Algorithmic Accountability Act by Senators Ron Wyden (D-OR) and Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY). The bill would require companies to examine the repercussions, fairness, and possible bias of the algorithms they develop, with this and other regulations monitored and enforced by the Federal Trade Commission (FTC). It signals that not only the city of San Francisco, but the country as a whole, is concerned about the rapidly expanding use of AI.
Parentology recently reported on San Francisco’s ban on the use of facial recognition software by all law enforcement agencies. That raises the question: when and how will these issues be resolved as the use of AI increases across the country?
Artificial intelligence is being used for everything from facial recognition, emergency rescue tactics, and social media to college admissions and crime tracking. Often these algorithms affect people without their knowledge, gathering available online data to create “profiles” that can be used positively, as in the recovery of missing children, or negatively, such as racially or economically profiling people.
The Algorithmic Accountability Act wouldn’t eliminate the use of AI, but would create a regulatory system around it. This week’s hearings are designed to inform lawmakers about the capabilities of the technologies that currently exist, along with the risks that future AI development could pose to the public at large.
Technology companies believe there is a way to utilize this technology while still maintaining a code of ethics, but many civil rights organizations aren’t so sure. The A.C.L.U. of Northern California made that clear to Parentology regarding the use of AI-powered facial recognition software: “You can’t build a face recognition system for investigative purposes that can’t also be used for unprecedented mass surveillance. History shows that if we put this technology in government hands, agencies like police or ICE will inevitably use it to target communities of color, round up immigrants, and track people in their daily, private lives.”
AI is already widely used and growing at a rapid rate, but the discussion is just getting started on Capitol Hill. Will legislators be able to keep up with, or stay ahead of, the AI curve?