WASHINGTON — Senate Majority Leader Charles E. Schumer’s outline for potential federal legislation on artificial intelligence systems was widely welcomed by tech companies for its light touch and promise of federal spending.
Civil society groups have been far less welcoming. They say it fails to adequately address the harms that AI systems may pose and lacks the specifics needed to develop strong federal policy.
“The Senate’s bipartisan AI roadmap is scant on details, especially the guardrails needed to prevent and mitigate harm,” Maya Wiley, president of The Leadership Conference on Civil and Human Rights, said in an email. Senators “missed an opportunity to present real ideas on how to protect us all from defective and harmful AI.”
In presenting the 31-page outline, which encourages congressional committees to consider varied legislation appropriate to their jurisdictions, Schumer, D-N.Y., called it “balanced” between supporting innovation and mitigating risks. The outline was put together by the Senate AI Working Group, composed of Schumer and Sens. Martin Heinrich, D-N.M., Todd Young, R-Ind., and Mike Rounds, R-S.D.
The group proposed that Congress appropriate at least $32 billion annually for nondefense-related AI systems. A similar amount could be spent on defense-related AI as well, Schumer said.
That has the tech industry cheering.
“The Senate’s bipartisan roadmap for AI policy places a welcome focus on promoting innovation in technology and recognizes the benefits of AI across the economy and society,” Victoria Espinel, president of BSA Software Alliance, said in a statement. The trade group, which represents more than 40 companies including Cisco Systems Inc., Oracle Corp. and Microsoft Corp., said it also welcomed the outline’s call to advance a federal data privacy law.
Schumer said the Senate group knew from the outset that its work was meant to “supplement, not supplant” the work of the committees, adding that he hoped they would “lay down a base of bipartisan policy that will harness AI’s potential while safeguarding its risks using our road map as a foundation to build on.”
A year ago, when Schumer began convening meetings with tech company CEOs, scientists and other experts on AI, a group of 350 researchers, executives and engineers added urgency to congressional action by warning of grave dangers.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the group, which came together under a nonprofit called Center for AI Safety, said. AI systems could help criminals and malicious actors create chemical weapons and spread misinformation, perpetuate inequalities by helping small groups of people gain a lot of power, and deceive human overseers and seek power for themselves, the group said.
Schumer appeared to have felt the conflicting impulses, saying in a floor speech in May 2023 that “we have got to move fast” while cautioning that “we can’t move so fast that we do flawed legislation.”
A year later, after meetings involving dozens of tech leaders and civil society groups, Schumer’s report made no mention of extinction risks.
“This report shows very clearly that Schumer is not taking AI seriously, which is disappointing given his previous capacity for honesty, problem-solving and leadership on the issue,” Rashad Robinson, president of Color Of Change, an online racial justice advocacy group, said in a statement. “It’s imperative that the legislature not only establishes stronger guardrails for AI in order to ensure it isn’t used to manipulate, harm and disenfranchise Black communities, but that they recognize and quickly respond to the risky, unchecked proliferation of bias AI poses.”
Some safety advocates say the Senate group should have focused on the day-to-day harms the technology poses alongside the biggest risks.
“People can be denied housing, job opportunities and access to education by ‘black box’ AI systems that even developers cannot fully explain the reasoning behind certain decisions,” said Wiley, one of several civil rights advocates who participated in the AI system discussions with lawmakers. “Instead of centering protections against known harms that occur every day, this framework focuses almost solely on the type of investment needed to bolster Big Tech.”
Wiley said industry groups pushed hard for innovation in those discussions, rather than for protections for people who may be marginalized by the technology.
Data compiled by OpenSecrets, a nonprofit group that tracks lobbying and political campaign spending, shows that the number of organizations lobbying Congress and the federal government on AI nearly tripled to 460 last year from 158 the year prior, ranging from AARP to Zillow Group.
Accountability
The road map leaves it to congressional committees to consider whether those who develop AI applications and those who deploy them should be held accountable if their products or actions cause harm.
It’s an important task, said Casey Mock, chief policy and public affairs officer at the Center for Humane Technology, a nonprofit group that exposes the effects of harmful technologies.
“We hope to see more in the coming weeks on what specific next steps committees will plan to prioritize this year, and hope the liability piece gets the attention it deserves,” Mock said in an email.
In an election year in which voters are primarily concerned about inflation and economic issues, it is unlikely the Schumer-led group could have advanced specific legislation, Rumman Chowdhury, the State Department’s 2024 U.S. science envoy, said in an interview.
Even so, senators could have engaged more on specific issues of concern, she said. The group ought to have addressed AI’s use in criminal justice systems as well as apps and websites that track women’s health and abortion access, said Chowdhury, who is also among the experts who briefed the Senate AI Working Group.
“We know that people are denied jobs” because of ratings developed using an AI system, Chowdhury said. “We know that Medicare and Medicaid fraud detection models adversely impact people with disability.”
Chowdhury said the group also failed to adequately address the use of AI in election campaigns and its potential to spread disinformation.
The outline briefly mentions the need to develop technologies that can identify AI-generated audio and video material. Schumer also pointed to legislation sponsored by Sen. Amy Klobuchar, D-Minn., chairwoman of the Senate Rules and Administration Committee, which advanced three measures to the full Senate designed, among other things, to keep deepfake misinformation out of campaigns and elections.
However, Chowdhury said she anticipated the outline’s shortcomings.
“It’s not regulation, it’s not legislation, it’s a guidance document,” she said. “It’s made by a bipartisan group, intended for senators, most of whom are going to be up for reelection.”
___
©2024 CQ-Roll Call, Inc., All Rights Reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.