In this section you will find guidance and suggestions for using generative AI tools responsibly in your studies. You'll also find some information about the limitations of these powerful tools - however tempting they are to use, they can sometimes produce unexpected outputs!
When using generative AI tools you should also consider the concerns listed on the Ethical and Legal Implications page of this LibGuide.
AI research tools can help you organise yourself, spark discovery of new sources, and potentially help you improve your writing. We have collated some suggestions below from our colleagues. These are not endorsements of these tools, just some examples that we have found useful. All of these have free-to-use options.
Ownership of data becomes murky once it has been entered into some AI tools, raising ethical and data privacy concerns. Please refer to the Ethical and Legal Implications page before using AI tools for data analysis.
The Library recommends you thoroughly read any publisher agreements prior to publication. There have been recent instances where publishers have used authors' work to train third-party AI models.
Andy Stapleton does excellent videos on using generative AI tools in higher education - make sure to check out his channel! He has a slight STEM focus, but we think his demos of tools apply to other subject areas too!
Just as failing to declare sources and existing research in your work constitutes academic misconduct, so does failing to declare your use of generative AI.
Goldsmiths requires that all staff and students abide by its values of honesty and trust within their work. For students, this means following the college's academic misconduct policy and procedure regarding AI tool usage:
Use of Artificial Intelligence (AI) tools to produce work that is presented as the student’s own. The use of AI tools may be legitimate within some assessments, where this is the case, this will be set out in the module specification or assessment brief. The use of AI tools in this instance will be appropriately cited within the student’s submitted assessment. (Goldsmiths, University of London, 2023)
You can reference an AI tool if usage is allowed. It is also useful to note the date and version of the tool, as technology in this area develops and changes very rapidly. However, please remember that it is not considered an academic resource and usually lacks references. Do not copy directly from an AI as this is a form of plagiarism.
Goldsmiths, University of London. (2023). Academic Misconduct Policy and Procedures. https://www.gold.ac.uk/media/docs/gam/Academic-Misconduct-Policy-and-Procedures.pdf
Your course convenor will have selected from 3 options on your module VLE page to let you know how you can use generative AI tools in your assessment submissions:
Turnitin checks for similarity between a student's submitted work and published work. Turnitin also introduced an AI detection tool in April 2023. However, Goldsmiths and other universities have not enabled it, mostly due to accuracy concerns and unfairness towards students who have used AI-powered translation tools in their submissions.
CW: some of the articles cited refer to suicide, mental health crises, abuse and sexual assault.
If you are struggling with your wellbeing and would like support, please visit Goldsmiths’ student wellbeing support service for advice: https://www.gold.ac.uk/students/wellbeing/
The sudden advancement and deployment of genAI tools into our lives and society can be overwhelming and, for some, disheartening – particularly for those in the creative sector. Remember: genAI lacks human creativity and genuine inspiration (despite what big tech and the media tell you!).
When using genAI tools you may want to consider the following information to stay safe and protect your wellbeing:
De Freitas, J. et al. (2024) ‘Chatbots and mental health: Insights into the safety of generative AI’, Journal of Consumer Psychology, 34(3), pp. 481–491. Available at: https://doi.org/10.1002/jcpy.1393.
Kleinman, Z. (2025) Microsoft boss troubled by rise in reports of ‘AI psychosis’, BBC News. Available at: https://www.bbc.com/news/articles/c24zdel5j18o (Accessed: 19 September 2025).
Kooli, C., Kooli, Y. and Kooli, E. (2025) ‘Generative artificial intelligence addiction syndrome: A new behavioral disorder?’, Asian Journal of Psychiatry, 107, p. 104476. Available at: https://doi.org/10.1016/j.ajp.2025.104476.
McMahon, L. (2025) Hundreds of thousands of Grok chats exposed in Google results, BBC News. Available at: https://www.bbc.com/news/articles/cdrkmk00jy0o (Accessed: 19 September 2025).
Valtonen, A. et al. (2025) ‘AI and employee wellbeing in the workplace: An empirical study’, Journal of Business Research, 199, p. 115584. Available at: https://doi.org/10.1016/j.jbusres.2025.115584.
Generative AI tools offer some exciting possibilities around accessibility and inclusivity – particularly from an assistive technology and neurodiversity perspective. However, there are still concerns around the representation of minoritised communities in the outputs of genAI tools, due to the Western-focussed, Eurocentric data used to train generative AI models. There is also inequality of access to digital and online knowledge, skills and tools across the world – known as the digital divide – and this extends to generative AI tools. As such, can we label genAI tools as authentically inclusive and accessible when so many people worldwide simply have no means to access them?
That said, here are some potential applications for genAI as an assistive technology:
Henneborn, L. and Burden, A. (2024) Inclusivity in Generative AI Should Be an Attribute, Not an Add-On, Stanford Social Innovation Review. Available at: https://ssir.org/articles/entry/inclusive-generative-artificial-intelligence (Accessed: 26 September 2025).
Mainstreaming accessibility and inclusivity in AI and digital (2024) UNESCO. Available at: https://www.unesco.org/en/articles/mainstreaming-accessibility-and-inclusivity-ai-and-digital-technologies (Accessed: 26 September 2025).
Syed, W. (2025) ‘Advancing Accessibility and Social Inclusion through Generative AI and Machine Learning-Powered Multi-Modality on Mobile Platforms’, International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 11(1), pp. 475–483. Available at: https://doi.org/10.32628/CSEIT25111254.
Welker, Y. (2023) Generative AI holds great potential for those with disabilities - but it needs policy to shape it, World Economic Forum. Available at: https://www.weforum.org/stories/2023/11/generative-ai-holds-potential-disabilities/ (Accessed: 26 September 2025).
Prompt engineering is the process of writing and refining inputs for genAI tools like LLMs and image or video generators - it is more than simply asking one-line questions and accepting the output. Done well, prompt engineering can improve the robustness, accuracy and relevance of your outputs. There are many different strategies and frameworks you can use - we have identified the CAST and CREATE frameworks as good places to start. Try experimenting with these frameworks across different LLMs and compare the results you get - you may be surprised at the differences in outputs.
Using the CAST framework (Hayes Jacobs and Fisher, 2023) will give your prompts a 3-element structure, followed by a process of testing and review. You don't need to separate your prompts into 3 sections or paragraphs, but a good prompt using this framework should touch on all of the elements outlined below (a worked example follows the table).
| Element | Description |
| --- | --- |
| Criteria | Shape the output. Define any rules or norms to be followed, including the type of language to use, vocabulary, length, format, purpose or destination. |
| Audience | Define the target audience. Who is the output intended for? Consider expectations or conventions around language, tone or format for your audience. |
| Specifications | Add context to your prompt through details and description. You may want to review and iterate this information depending on your outputs. This is where you might include a research topic. |
| Testing | Review and evaluate the output. Is it what you expected? Is it too general or too specific? Have any assumptions been made, or any biases identified? If so, challenge these and prompt against them (or try another genAI model!). |
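To make this concrete, here is what a CAST-style prompt might look like in practice. The topic and details below are our own invented example, not part of Hayes Jacobs and Fisher's framework:

```
Write a 300-word overview of the arguments for and against universal basic
income. Use formal academic English and structure it as three short
paragraphs (Criteria). The overview is for first-year undergraduate
politics students with no economics background, so avoid technical jargon
(Audience). Focus on debates in the UK over the last decade and briefly
mention any well-known pilot schemes (Specifications).
```

The Testing element then happens on the output: check it for accuracy, over-generalisation and bias before relying on it - and fact-check any pilot schemes the tool names!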
The CREATE framework (Birss, 2023) is useful for obtaining an output from a particular perspective in a particular format; a worked example follows the table. You can access Birss' full prompting guide here.
| Element | Description |
| --- | --- |
| Character | Outline the character / role or persona you want the AI tool to play: maybe an academic or expert in an area, maybe a complete layman or beginner. |
| Request | Specify what you want the AI to do. |
| Examples | Provide some examples of what you expect the output to look like or include. |
| Adjustments | Give some instructions on how to shape the output: information to disregard, key information to highlight, or specific sources to restrict the AI to. |
| Types of output | Outline the format for the output: bullet points? Word count? Language, tone, vocabulary. |
| Extras | Any additional information, instructions or limitations you would like the AI to consider. |
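Again, as an invented illustration (our own example, not Birss'), a CREATE-style prompt might look something like this:

```
You are an experienced academic librarian (Character). Suggest search terms
I could use to find literature on social media use among teenagers
(Request). For instance: "adolescent social media engagement", "teen
screen time" (Examples). Ignore sources about adults, and highlight any
terms that work particularly well in academic databases (Adjustments).
Present the results as a bulleted list of no more than ten terms (Types of
output). I am new to literature searching, so briefly explain why each
term is useful (Extras).
```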
Birss, D. (2023) ‘The Prompt Collection’. Available at: https://davebirss.com/documents/the_prompt_guide.pdf (Accessed: 15 September 2025).
Hayes Jacobs, H. and Fisher, M. (2023) Prompt Literacy: A Key for AI-Based Learning, ASCD. Available at: https://www.ascd.org/el/articles/prompt-literacy-a-key-for-ai-based-learning (Accessed: 15 September 2025).
Critical use of any resource is necessary within your academic work. When using any generative AI tool (from AI writing tools to AI image generators), we must assess its suitability for our work, the management and ownership of the tool, and the wider impact of using such tools. You can find more information on some of the areas to consider on the Ethical and Legal Implications page of this LibGuide.
Recent research from Microsoft and Carnegie Mellon University (Lee et al., 2025) identified a worrying link between overuse of genAI tools and a decline in critical thinking. The researchers found that those with higher confidence in genAI's capabilities and outputs were less likely to critically evaluate and interrogate those outputs, while those with more confidence in their own skills, abilities and knowledge were more likely to engage in critical thinking when using genAI tools.
Below is a set of questions to support you in critically evaluating the AI tools you might use within your work. LibrAIry's original test has been modified slightly to include additional considerations. It is recommended you ask these questions before and throughout your research project.
In addition to the above questions, you may want to consider the environmental impact of using AI. Developing and using AI requires a substantial amount of energy and resources, with significant environmental consequences.
Hallucinations occur when an AI tool produces incorrect or non-existent information. Often this information can look real or factually correct.
It is important to always double-check the information returned by an AI tool. There are various cautionary tales in academia and industry, including a lawyer who cited fake articles recommended by ChatGPT and who may now be facing sanctions for presenting false information to the court.
Here at Goldsmiths, multiple staff have reported LLMs producing false sources and references - presented in a list that looks correct and professional and adheres to referencing styles, but on further exploration turns out to contain non-existent sources such as journal articles and book chapters. The moral of the story: always fact-check your outputs!
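One quick way to catch fabricated references is to check that any DOI an AI tool gives you actually resolves. Below is a minimal sketch of how you might automate this in Python - it assumes you have the third-party requests library installed, and the doi_exists helper is our own illustrative name:

```python
import requests  # third-party: pip install requests

def doi_exists(doi: str) -> bool:
    """Return True if doi.org recognises the DOI.

    doi.org answers a known DOI with a redirect (3xx) to the
    publisher's page, and an unknown DOI with a 404.
    """
    resp = requests.head(f"https://doi.org/{doi}",
                         allow_redirects=False, timeout=10)
    return 300 <= resp.status_code < 400

# A real DOI from this guide's reference list, and an invented one:
print(doi_exists("10.1145/3706598.3713778"))      # expected: True
print(doi_exists("10.9999/definitely.not.real"))  # expected: False
```

Note the limits of this check: a DOI that resolves only proves the DOI exists. You should still open the source and confirm that the authors, title and content match what the AI tool claimed.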
Lee, H.-P. et al. (2025) ‘The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers’, in Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. CHI 2025: CHI Conference on Human Factors in Computing Systems, Yokohama Japan: ACM, pp. 1–22. Available at: https://doi.org/10.1145/3706598.3713778.
Given the language processing power of LLMs, they can be a useful tool for translating text from one language to another. However, you should consider the following if you are planning to use an LLM in this way (we have also included an example translation prompt after the references below):
He, Z. et al. (2024) ‘Exploring Human-Like Translation Strategy with Large Language Models’, Transactions of the Association for Computational Linguistics, 12, pp. 229–246. Available at: https://doi.org/10.1162/tacl_a_00642.
Resende, N. and Hadley, J. (2024) ‘The Translator’s Canvas: Using LLMs to Enhance Poetry Translation’, in 16th Conference of the Association for Machine Translation in the Americas. 16th Conference of the Association for Machine Translation in the Americas, Chicago: Association for Machine Translation in the Americas, pp. 178–189. Available at: https://aclanthology.org/2024.amta-research.16.pdf (Accessed: 15 September 2025).
Zhu, W. et al. (2024) ‘Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis’. arXiv. Available at: https://doi.org/10.48550/arXiv.2304.04675.
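For illustration, a translation prompt that gives the LLM helpful context might look like this - the passage and scenario are our own invented example:

```
Translate the following passage from French into British English. It comes
from a formal academic essay on cinema, so keep the formal register and
preserve any technical film terminology rather than simplifying it. Flag
any phrases where you are unsure of the intended meaning instead of
guessing.

"Le montage est le fondement même de l'art cinématographique."
```

Asking the model to flag uncertainty reduces (but does not eliminate) the risk of silent mistranslation - where possible, have the output checked by someone who reads both languages.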
It can be very tempting to use generative AI tools for transcription – especially with research interviews, focus groups, online meetings or lectures! If you must use a transcription tool, you should always request consent from all interviewees, attendees and/or the presenter before activating it – both as a courtesy and from a data protection and privacy perspective! It is important to note that research into generative AI transcription tools has raised significant concerns regarding their accuracy, reliability and tendency to hallucinate, and that human transcription mitigates these issues in most cases.
Remember there are built-in dictation and transcription tools in Microsoft products. As part of Goldsmiths' contract with Microsoft, certain safety measures and guardrails are in place to minimise the transfer of personal and sensitive data as defined under the UK GDPR.
As with all generative AI tools, you should be mindful of hallucinations – where generative models make up information and present it as fact. Research conducted by Koenecke et al. (2024) identified that 40% of the hallucinations fabricated by OpenAI's Whisper tool could be considered harmful or concerning because speakers were misinterpreted or misrepresented. You should try to retain recordings of what you are transcribing so you can check that transcriptions are accurate.
Generative AI transcription tools can also struggle with understanding accents or specialist vocabulary. You should be mindful of this if you plan on transcribing speech using genAI.
Burke, G. and Schellmann, H. (2024) Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said, AP News. Available at: https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14 (Accessed: 23 September 2025).
Koenecke, A. et al. (2024) ‘Careless Whisper: Speech-to-Text Hallucination Harms’, in The 2024 ACM Conference on Fairness, Accountability, and Transparency. FAccT ’24: The 2024 ACM Conference on Fairness, Accountability, and Transparency, Rio de Janeiro Brazil: ACM, pp. 1672–1681. Available at: https://doi.org/10.1145/3630106.3658996.
Patel, R. (2025) ‘GenAI vs NMT: Understanding the Future of Language Translation’, AiThority, 23 January. Available at: https://aithority.com/machine-learning/genai-vs-nmt-understanding-the-future-of-language-translation/ (Accessed: 23 September 2025).
Siegel, R. et al. (2023) ‘Poster: From Hashes to Ashes - A Comparison of Transcription Services’, in Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. CCS ’23: ACM SIGSAC Conference on Computer and Communications Security, Copenhagen Denmark: ACM, pp. 3534–3536. Available at: https://doi.org/10.1145/3576915.3624380.
It can be really tempting to use generative AI tools in many different aspects of your work - but remember, these tools have been trained on existing human knowledge and data. We should therefore question whether they can actually offer meaningful new perspectives and insights in our areas of study. You may be able to use tools to help digest complex theories or uncover new sources - but a human is still required to be creative, to bring nuance to intricate arguments and to open up undiscovered avenues for investigation. Generative AI tools are inherently generic and derivative and can never be a substitute for your own creativity.
When generative AI tools first started appearing, there was much debate about why image generation tools were so bad at generating human hands. This video from Vox explains why - and also provides insight into how generative AI image tools work.