Last week, we shared an anonymous report that Valve was blocking at least some games that make use of AI-generated artwork from Steam. Over the weekend, Valve confirmed that report, telling Ars in an e-mailed statement that the company is blocking games that use AI-generated content unless developers can prove those AI models were trained with data that does not “infringe on existing copyrights.”
“The introduction of AI can sometimes make it harder to show that a developer has sufficient rights in using AI to create assets, including images, text, and music,” Valve spokesperson Kaci Boyle told Ars. “In particular, there is some legal uncertainty relating to data used to train AI models. It is the developer’s responsibility to make sure they have the appropriate rights to ship their game.”
Boyle stressed in the statement that Valve’s “goal is not to discourage the use of [AI-generated content] on Steam” and that the company’s “priority, as always, is to try to ship as many of the titles we receive as we can.” Generative AI is “bound to create new and exciting experiences in gaming,” Valve continued.
At the same time, the company says its hands are tied by the current state of the law. “Stated plainly, our review process is a reflection of current copyright law and policies, not an added layer of our opinion. As these laws and policies evolve over time, so will our process,” Valve said.
Far from settled
Despite Valve’s glib assertion that it is not using “opinion” to interpret “current copyright law and policies,” the copyright status of most AI models is far from certain. That’s largely because current copyright laws were written long before this kind of large-scale AI modeling was technically possible.
The companies running these AI models argue that machine learning based on copyrighted works is covered under fair use, akin to human artists being influenced by the art they study, reference, and remix. But a number of high-profile lawsuits brought by artists and stock art companies are vehemently contesting that argument, saying these AI models copied their content wholesale without permission.
Until those lawsuits generate some modern case law on the subject, what “current copyright law” actually says about this content is far from clear. “I’m more unsettled than I’ve ever been about whether training is fair use in cases where AIs are producing outputs that could compete with the input they were trained on,” Cornell legal scholar James Grimmelmann told Ars in April.
Given that uncertain legal environment, organizations like Getty Images, Newgrounds, and science journal Nature have explicitly banned contributors from using AI-generated art. At the same time, companies from Marvel to DeviantArt have embraced the use of the technology to one degree or another.
Valve is taking the more conservative route, avoiding what it calls “some legal uncertainty” by simply rejecting AI content trained on copyrighted material altogether. And there’s nothing wrong with that. But in doing so, the company is effectively applying its own interpretation of copyright law, even as it insists the decision adds no “layer of our opinion.”