AI desperately needs global oversight
Every time you post a photo, respond on social media, make a website, or possibly even send an email, your data is scraped, stored, and used to train generative AI technology that can create text, audio, video, and images with just a few words. This has real consequences: OpenAI researchers studying the labor market impact of their language models estimated that approximately 80 percent of the US workforce could have at least 10 percent of their work tasks affected by the introduction of large language models (LLMs) like ChatGPT, while around 19 percent of workers may see at least half of their tasks affected. Image generation is driving an immediate labor market shift as well. In other words, the data you created may be putting you out of a job.
When a company builds its technology on a public resource, the internet, it is sensible to argue that the resulting technology should be available and open to all. But critics have noted that GPT-4 shipped without any clear information or specifications that would enable anyone outside the organization to replicate, test, or verify any aspect of the model. Some of these companies have received vast sums of funding from other major corporations to build commercial products. For some in the AI community, this is a dangerous sign that these companies will put profit above public benefit.
Code transparency alone is unlikely to ensure that these generative AI models serve the public good. There is little conceivable immediate benefit to a journalist, policy analyst, or accountant (all “high exposure” professions, according to the OpenAI study) if the data underpinning an LLM is made available. We increasingly have laws, such as the Digital Services Act, that would require some of these companies to open their code and data to expert auditors for review. And open-source code can sometimes enable malicious actors, allowing hackers to subvert the safety precautions that companies build in. Transparency is a laudable objective, but on its own it won’t ensure that generative AI is used to better society.
To truly create public benefit, we need mechanisms of accountability. The world needs a global governance body for generative AI to address these social, economic, and political disruptions, a task beyond what any individual government is capable of, what any academic or civil society group can implement, or what any corporation is willing or able to do. There is already precedent for global cooperation by companies and countries to hold themselves accountable for technological outcomes. We also have examples of independent, well-funded expert groups and organizations that can make decisions on behalf of the public good. Such an entity would be tasked with thinking of benefits to humanity. Let’s build on these ideas to tackle the fundamental issues that generative AI is already surfacing.