Cambridge establishes AI ethics policy for research

Cambridge University has launched an AI research ethics policy

Cambridge University has established new guidelines allowing researchers to use generative AI tools while maintaining the required standards for accuracy, integrity, plagiarism, and authenticity.

Cambridge University set out these rules in its first AI ethics policy, which will apply to books, scholarly writings, and research papers. The policy states that AI cannot be credited as an author of books or academic papers published by Cambridge University Press. The decision gives academics clarity amid confusion over the use of large language models such as ChatGPT in research.

According to Mandy Hill, Managing Director for Academic at Cambridge University Press, generative AI will allow researchers to embark on new methods of experimentation and research.

She added that, just as researchers, academic writers, and editors use other research tools, they are free to use new technologies as they see fit, provided they meet the required standards.

“We want our new policy to help the thousands of researchers we publish each year, and their many readers. We will continue to work with them as we navigate the potential biases, flaws and compelling opportunities of AI.”

Mandy Hill, Managing Director for Academic at Cambridge University Press

Like its academic community, the press is approaching the emerging technology in a spirit of critical engagement. By putting integrity, accuracy, and authenticity first, it aims to guide how generative AI is used in research.

Michael Alvarez, Professor of Political and Computational Social Science at the university, said that generative AI raises a variety of issues for educators and academic writers. He hopes the policy will open a conversation about the opportunities and shortcomings of using generative AI in publishing for many years to come.

Cambridge’s AI ethics policy

Cambridge’s ethics policy for generative AI in published research includes the following:

● Authors must declare the use of AI in their research writing, just as they would any other software or tool.

● AI cannot be regarded as an author because authorship requires accountability. Cambridge will not list AI or LLMs as an author in any of its published works.

● The use of AI must not breach the university’s plagiarism policy. The work must be the author’s own and must not present another person’s data or writing without proper referencing.

● Authors remain accountable for the integrity, accuracy, and authenticity of their work, including any parts produced with AI.

Photo Credits: Tim WildSmith (Unsplash)
