Cambridge launches AI research ethics policy

Our new guidelines will help researchers use generative AI tools like ChatGPT, while upholding academic standards around transparency, plagiarism, accuracy and originality.  

The rules are set out in the first AI ethics policy from Cambridge University Press and apply to research papers, books and other scholarly works.  

They include a ban on AI being treated as an 'author' of academic papers and books we publish.  

Mandy Hill, Cambridge's Managing Director for Academic Publishing, said: "Generative AI can enable new avenues of research and experimentation. Researchers have asked us for guidance to navigate its use."

R. Michael Alvarez, Professor of Political and Computational Social Science at the California Institute of Technology, said: "Generative AI introduces many issues for academic researchers and educators. 

"As a series editor for Cambridge University Press, I appreciate the leadership the Press is taking to outline guidelines and policies for how we can use these new tools in our research and writing."
