News
Living Guidelines for generative AI – why scientists must oversee its use
2023-11-20



Nearly one year after the technology firm OpenAI released the chatbot ChatGPT, companies are in an arms race to develop ‘generative’ artificial-intelligence (AI) systems that are ever more powerful. Each version adds capabilities that increasingly encroach on human skills. By producing text, images, videos and even computer programs in response to human prompts, generative AI systems can make information more accessible and speed up technology development. Yet they also pose risks.


AI systems could flood the Internet with misinformation and ‘deepfakes’ — videos of synthetic faces and voices that can be indistinguishable from those of real people. In the long run, such harms could erode trust between people, politicians, the media and institutions.


The integrity of science itself is also threatened by generative AI, which is already changing how scientists look for information, conduct their research and write and evaluate publications. The widespread use of commercial ‘black box’ AI tools in research might introduce biases and inaccuracies that diminish the validity of scientific knowledge. Generated outputs could distort scientific facts, while still sounding authoritative.


The risks are real, but banning the technology seems unrealistic. How can we benefit from generative AI while avoiding the harms?


Governments are beginning to regulate AI technologies, but comprehensive and effective legislation is years off (see Nature 620, 260–263; 2023). The draft European Union AI Act (now in the final stages of negotiation) demands transparency, such as disclosing that content is AI-generated and publishing summaries of copyrighted data used for training AI systems. The administration of US President Joe Biden aims for self-regulation. In July, it announced that it had obtained voluntary commitments from seven leading tech companies “to manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety”. Digital ‘watermarks’ that identify the origins of a text, picture or video might be one such mechanism. In August, the Cyberspace Administration of China announced that it will enforce AI regulations, including requiring that generative-AI developers prevent the spread of misinformation or of content that challenges Chinese socialist values. The UK government, too, is organizing a summit in November at Bletchley Park near Milton Keynes in the hope of establishing intergovernmental agreement on limiting AI risks.


In the long run, however, it is unclear whether legal restrictions or self-regulation will prove effective. AI is advancing at breakneck speed in a sprawling industry that is continuously reinventing itself. Regulations drawn up today will be outdated by the time they become official policy, and might not anticipate future harms and innovations.


In fact, controlling developments in AI will require a continuous process that balances expertise and independence. That’s why scientists must be central to safeguarding society from the impacts of this emerging technology. Researchers must take the lead in testing, proving and improving the safety and security of generative AI systems — as they do in other policy realms, such as health. Ideally, this work would be carried out in a specialized institute that is independent of commercial interests.


However, most scientists don’t have the facilities or funding to develop or evaluate generative AI tools independently. Only a handful of university departments and a few big tech companies have the resources to do so. For example, Microsoft invested US$10 billion in OpenAI and its ChatGPT system, which was trained on hundreds of billions of words scraped from the Internet. Companies are unlikely to release details of their latest models for commercial reasons, precluding independent verification and regulation.


Society needs a different approach (ref. 1). That’s why we — specialists in AI, generative AI, computer science and psychological and social impacts — have begun to form a set of ‘living guidelines’ for the use of generative AI. These were developed at two summits at the Institute for Advanced Study at the University of Amsterdam in April and June, jointly with members of multinational scientific institutions such as the International Science Council, the University-Based Institutes for Advanced Study and the European Academy of Sciences and Arts. Other partners include global institutions (the United Nations and its cultural organization, UNESCO) and the Patrick J. McGovern Foundation in Boston, Massachusetts, which advises the Global AI Action Alliance of the World Economic Forum (see Supplementary information for co-developers and affiliations). Policy advisers also participated as observers, including representatives from the Organisation for Economic Co-operation and Development (OECD) and the European Commission.


Here, we share a first version of the living guidelines and their principles (see ‘Living guidelines for responsible use of generative AI in research’). These adhere to the Universal Declaration of Human Rights, including the ‘right to science’ (Article 27). They also comply with UNESCO’s Recommendation on the Ethics of AI, and its human-rights-centred approach to ethics, as well as the OECD’s AI Principles.