As generative AI tools continue to proliferate, more questions are being raised about the risks they pose, and what regulatory measures can be implemented to protect people from copyright violation, misinformation, defamation, and more.
And while broader government regulation would be the ideal step, it would also require global cooperation, which, as we've seen in past digital media debates, is difficult to establish, given the varying approaches and opinions on the responsibilities and actions required.
As such, it'll most likely come down to smaller industry groups and individual companies to implement control measures and rules to mitigate the risks associated with generative AI tools.
Which is why this could be a significant step: today, Meta and Microsoft, the latter now a key investor in OpenAI, both signed onto the Partnership on AI (PAI) Responsible Practices for Synthetic Media initiative, which aims to establish industry agreement on responsible practices in the development, creation, and sharing of media created via generative AI.
As per PAI:
“The first-of-its-kind Framework was launched in February by PAI and backed by an inaugural cohort of launch partners including Adobe, BBC, CBC/Radio-Canada, Bumble, OpenAI, TikTok, WITNESS, and synthetic media startups Synthesia, D-ID, and Respeecher. Framework partners will gather later this month at PAI’s 2023 Partner Forum to discuss implementation of the Framework through case studies and to create additional practical recommendations for the field of AI and Media Integrity.”
PAI says that the group will also work to clarify its guidance on responsible synthetic media disclosure, while also addressing the technical, legal, and social implications of its recommendations around transparency.
As noted, this is a rapidly rising area of concern, and one that US Senators are now also looking to get on top of before the technology becomes too big to regulate.
Earlier today, Republican Senator Josh Hawley and Democrat Senator Richard Blumenthal introduced new legislation that would remove Section 230 protections for social media companies that facilitate the sharing of AI-generated content, meaning the platforms themselves could be held liable for spreading harmful material created via AI tools.
There's still a lot to be worked out in that bill, and it'll be difficult to get it approved. But the fact that it's even being proposed underlines the rising concerns among regulators, particularly around the adequacy of existing laws to cover generative AI outputs.
PAI isn't the only group working to establish AI guidelines. Google has already published its own 'Responsible AI Principles', while LinkedIn and Meta have also shared their guiding rules for the use of such tools. The latter two will likely reflect much of what this new group agrees on, given that both are (effectively) signatories to the framework.
It's an important area to consider, and as with misinformation in social apps, it really shouldn't come down to a single company, and a single exec, making calls on what is and is not acceptable. That's why industry groups like this offer some hope of more wide-reaching consensus and implementation.
But even so, it'll take some time, and we don't yet know the full risks associated with generative AI. The more it gets used, the more challenges will arise, and over time, we'll need adaptive rules to tackle potential misuse, and to combat the rise of spam and junk content being churned out through the abuse of such systems.