In this section you will find guidance and suggestions for using generative AI tools responsibly in your studies. You'll also find some information about the limitations of these powerful tools: tempting as they are to use, they can sometimes produce unexpected outputs!
When using generative AI tools you should also consider the concerns listed on the Ethical and Legal Implications page of this LibGuide.
AI research tools can help you organise yourself, spark discovery of new sources, and potentially help you improve your writing. We have collated some suggestions below from our colleagues. These are not endorsements, just examples that we have found useful. All of these have free-to-use options.
Ownership of data following inputting into some AI tools becomes murky, raising ethical and data privacy concerns. Please refer to the Ethical and Legal Implications page before using AI tools for data analysis.
The Library recommends you thoroughly read any publisher agreements prior to publication. There have been recent instances where publishers have used authors' work to train third-party AI models.
Andy Stapleton makes excellent videos on using generative AI tools in higher education - make sure to check out his channel! He has a slight STEM focus, but we think his demos of tools apply to other subject areas too.
Just like failing to declare sources and existing research in your work constitutes academic misconduct, the same goes for failing to declare your use of generative AI.
Goldsmiths requires that all staff and students abide by its values of honesty and trust within their work. Students must also follow the college's academic misconduct policy and procedure regarding AI tool usage:
Use of Artificial Intelligence (AI) tools to produce work that is presented as the student’s own. The use of AI tools may be legitimate within some assessments, where this is the case, this will be set out in the module specification or assessment brief. The use of AI tools in this instance will be appropriately cited within the student’s submitted assessment. (Goldsmiths, University of London, 2023)
You can reference an AI tool if usage is allowed. It is also useful to note the date and version of the tool, as technology in this area develops and changes very rapidly. However, please remember that it is not considered an academic resource and usually lacks references. Do not copy directly from an AI as this is a form of plagiarism.
Goldsmiths, University of London. (2023). Academic Misconduct Policy and Procedures. https://www.gold.ac.uk/media/docs/gam/Academic-Misconduct-Policy-and-Procedures.pdf
Your course convenor will have selected from 3 options on your module VLE page to let you know how you can use generative AI tools in your assessment submissions:
Turnitin checks for similarity between a student's submitted work and published work. In April 2023, Turnitin also introduced an AI detection tool. However, Goldsmiths and other universities have not enabled it, mostly due to accuracy concerns and unfairness towards those who have used AI-powered translation tools in their submissions.
Prompt engineering is the process of writing and refining inputs for genAI tools like LLMs and image or video generators - it is more than simply typing one-line questions and accepting the output. Done well, prompt engineering can improve the robustness, accuracy and relevance of your outputs. There are many different strategies and frameworks you can use - we have identified the CAST and CREATE frameworks as good places to start. Try experimenting with these frameworks across different LLMs and compare the results you get - you may be surprised at the differences in outputs.
Using the CAST framework (Hayes Jacobs and Fisher, 2023) will give your prompts a 3-element structure, followed by a process of testing and review. You don't need to separate your prompts into 3 sections or paragraphs, but a good prompt using this framework should touch on all of the elements outlined below.
| Element | Description |
| --- | --- |
| Criteria | Shape the output. Define any rules or norms to be followed, including the type of language to use, vocabulary, length, format, purpose or destination. |
| Audience | Define the target audience. Who is the output intended for? Consider expectations or conventions around language, tone or format for your audience. |
| Specifications | Add context to your prompt through details and description. You may want to review and iterate this information depending on your outputs. This is where you might include a research topic. |
| Testing | Review and evaluate the output. Is it what you expected? Is it too general or too specific? Have any assumptions been made, or any biases identified? If so, challenge these and prompt against them (or try another genAI model!). |
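If you write prompts often, it can help to treat the CAST elements as a reusable template. The sketch below shows one possible way to do this in Python; the function name, wording and example topic are our own illustrations, not part of the framework itself.

```python
# A minimal sketch: assembling a CAST-style prompt from its three written
# elements. The fourth element, Testing, happens after you receive the
# output: review it, check for assumptions or bias, and refine the prompt.

def build_cast_prompt(criteria: str, audience: str, specifications: str) -> str:
    """Combine the Criteria, Audience and Specifications elements
    into a single prompt string ready to paste into an LLM."""
    return (
        f"{criteria} "
        f"The output is intended for {audience}. "
        f"{specifications}"
    )

# Illustrative values only - substitute your own topic and constraints.
prompt = build_cast_prompt(
    criteria="Write a 200-word summary in plain, formal English.",
    audience="first-year undergraduate students new to the topic",
    specifications=(
        "The topic is the ethics of generative AI in higher education; "
        "focus on academic integrity and data privacy."
    ),
)
print(prompt)
```

Keeping the elements as separate arguments makes the Testing step easier: you can revise one element (say, the audience) and regenerate the prompt without rewriting the whole thing.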
The CREATE framework (Birss, 2023) is useful for producing an output from a particular perspective in a particular format. Birss's full prompting guide is listed in the references below.
| Element | Description |
| --- | --- |
| Character | Outline the character, role or persona you want the AI tool to play: perhaps an academic or expert in an area, perhaps a complete layperson or beginner. |
| Request | Specify what you want the AI to do. |
| Examples | Provide some examples of what you expect the output to look like or include. |
| Adjustments | Give some instructions on how to shape the output: information to disregard, key information to highlight, or specific sources to restrict the AI to. |
| Types of output | Outline the format for the output: bullet points? A word count? Language, tone and vocabulary. |
| Extras | Any additional information, instructions or limitations you would like the AI to consider. |
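The CREATE elements map naturally onto the system/user message structure used by many chat-based LLM interfaces, where the Character works well as a system message. The sketch below is illustrative: the function, field wording and example values are our own, and the message format merely resembles common chat APIs rather than any specific product.

```python
# A minimal sketch: mapping the CREATE elements onto a system/user
# message pair. All example values below are illustrative only.

def build_create_messages(character, request, examples,
                          adjustments, output_type, extras):
    # Character: the persona or role, expressed as a system message.
    system = f"You are {character}."
    # The remaining five elements combine into the user message.
    user = "\n".join([
        request,
        f"Examples of what I expect: {examples}",
        f"Adjustments: {adjustments}",
        f"Type of output: {output_type}",
        f"Extras: {extras}",
    ])
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_create_messages(
    character="a librarian experienced in academic referencing",
    request="Suggest search terms for a literature review on AI ethics.",
    examples="'algorithmic bias', 'data provenance'",
    adjustments="Avoid vendor-specific jargon and highlight key terms.",
    output_type="A bulleted list of no more than ten terms.",
    extras="Note any terms that are contested in the literature.",
)
for message in messages:
    print(message["role"], "->", message["content"])
```

Even if you never call an API and simply paste prompts into a chat window, working through each CREATE element in turn like this helps ensure none of them is forgotten.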
Birss, D. (2023) ‘The Prompt Collection’. Available at: https://davebirss.com/documents/the_prompt_guide.pdf (Accessed: 15 September 2025).
Hayes Jacobs, H. and Fisher, M. (2023) Prompt Literacy: A Key for AI-Based Learning, ASCD. Available at: https://www.ascd.org/el/articles/prompt-literacy-a-key-for-ai-based-learning (Accessed: 15 September 2025).
Critical use of any resource is necessary within your academic work. When using any generative AI tool (from AI writing tools to AI image generators), we must assess the suitability for our work, management and ownership of the tools, and the wider impact of using such tools. You can find more information on some of the areas to consider on the Ethical and Legal Implications page of this LibGuide.
Recent research published by researchers at Microsoft and Carnegie Mellon University (Lee et al., 2025) identified a worrying link between overuse of genAI tools and a decline in critical thinking. They found that those with higher confidence in genAI capability and outputs were less likely to critically evaluate and interrogate those outputs, while those who had more self-confidence in their own skills, abilities and knowledge were more likely to engage in critical thinking when using genAI tools.
Below is a set of questions to support you in critically evaluating the AI tools you might use within your work. LibrAIry's original test has been modified slightly to include additional considerations. It is recommended you ask these questions before and throughout your research project.
In addition to the above questions, you will also want to consider the potential environmental impacts of using AI. Developing and running AI tools requires a substantial amount of energy and resources, which carries a significant environmental cost.
Hallucinations occur when an AI tool produces incorrect or non-existent information. Often this information can look real or factually correct.
It is important to always double-check the information returned by an AI tool. There are various cautionary tales in academia and industry about its usage, including a lawyer who cited fake cases generated by ChatGPT and faced sanctions for presenting false information to the court.
Here at Goldsmiths, multiple staff have reported LLMs producing false sources and references - presented in a list that looks correct and professional and adheres to referencing styles, but further exploration reveals non-existent sources such as journal articles and book chapters. So the moral of this story: always fact-check your outputs!
Lee, H.-P. et al. (2025) ‘The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers’, in Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. CHI 2025: CHI Conference on Human Factors in Computing Systems, Yokohama Japan: ACM, pp. 1–22. Available at: https://doi.org/10.1145/3706598.3713778.
Coming soon
It can be really tempting to use generative AI tools in many different aspects of your work - but remember, these tools have been trained on existing human knowledge and data. Therefore, we should question whether or not they can actually offer meaningful new perspectives and insights in our areas of study. You may be able to use tools to help digest complex theories or uncover new sources - but a human is still required to be creative, provide nuance to intricate arguments and open up new, undiscovered avenues for investigation. Generative AI tools are inherently generic and derivative, and can never be a substitute for your own creativity.
When generative AI tools first started appearing, there was much debate about why image generation tools were so bad at generating human hands. This video from Vox gives an overview of why this is - and also provides insight into how image generative AI tools work.