On this page we offer guidance, and questions to ask, around the ethical, legal and moral implications of genAI, to help you use these tools responsibly and make informed decisions about whether genAI is appropriate for a given task.
The UK General Data Protection Regulation (GDPR) is one of the main pieces of data protection and data privacy legislation in the UK (the other being the Data Protection Act 2018). The UK GDPR lays out the rights of the individual with regard to their personal data and the responsibilities of organisations in the collection, processing and storage of that data. Under the UK GDPR, personal data is defined as data that can be used to identify a person directly, or to identify a person indirectly when combined with other information.
Responsibility for administering the UK GDPR falls to the Information Commissioner's Office (ICO). You can check out their website for more information on the legislation and your individual rights. The UK GDPR is a hugely complex piece of legislation, so we will only include information that applies to personal data and the use of genAI tools.
Individuals are entitled to the following rights:
Organisations are required to demonstrate the following principles:
Well, let's think hypothetically about this. Suppose we have the results of an online survey we've undertaken for our dissertation: we collected names, email addresses and demographic information, as well as participants' experiences with mental health, and we input this into an LLM (like ChatGPT or DeepSeek) for analysis. As soon as we press the generate button we have the potential for conflict: the data we entered has now been sent to a server somewhere in the world for processing before a response is sent back to our device. We have no reasonable or easy way to confirm where that remote server is located geographically, so we may have transferred that data internationally, perhaps to the US or somewhere in the EU. The point is that we don't know where the data has gone, and therefore we don't know whether we should implement additional guardrails to ensure it is processed securely.
There is another issue arising here too: what happens if a participant wishes to withdraw their consent and have their data removed from the survey or, in GDPR terms, exercise their right to erasure? We are unable to delete the data that has been sent to the LLM server (and we cannot simply ask the LLM to delete it either!), which would mean a breach of the GDPR, as the individual's rights have been violated. The same thinking applies to the right to rectification, an individual's right to have their data kept accurate and up to date: we cannot rewrite a previous LLM request to update the data. There are also concerns around the right to restrict processing, and around automated processing, when it comes to using LLMs.
Let's also consider some of the organisational responsibilities, as these may affect you in future, in the workplace. Organisations must be transparent about why they are collecting personal data and what they intend to use it for; you'll often find this information in a data collection or privacy notice, and it is against the UK GDPR to use personal data in any way not specified in that notice. Organisations cannot keep personal data indefinitely either: personal data must be deleted or destroyed when it is no longer needed, which is almost impossible to do once data has been entered into an LLM.
So, as you can see, there are a lot of implications around generative AI and data protection. You should never enter personal data (either your own or other people's) into a generative AI tool. You should also be aware that some tools may train themselves on data inputted by users, another reason not to put anything personal into an LLM or generative AI tool: you don't want your personal details or information surfacing in response to another user's request!
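To make this concrete, here is a minimal sketch (in Python, using only the standard library) of the kind of automated redaction you might run over free-text survey responses before any third-party processing. The patterns and the sample text are illustrative assumptions, not a complete solution, and they deliberately show why redaction alone is not enough.

```python
import re

# Illustrative patterns only (assumptions): they catch some obvious direct
# identifiers but will miss names, addresses and indirect identifiers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
UK_PHONE = re.compile(r"\b0\d{4}\s?\d{3}\s?\d{3}\b")  # rough UK-style number

def redact(text: str) -> str:
    """Replace obvious direct identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = UK_PHONE.sub("[PHONE]", text)
    return text

sample = "Contact Jane Doe at jane.doe@example.com or 01234 567 890."
print(redact(sample))
# -> Contact Jane Doe at [EMAIL] or [PHONE].
# The name is still there, which is exactly why automated redaction alone
# does not make survey data safe to paste into an LLM.
```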
When using generative AI tools to make decisions you should always consider who has responsibility for any decisions made and actions taken. Ultimately, an AI model itself cannot be held responsible for its generated outputs: it is not a human, it is not sentient. Generative AI models only produce a statistically probable output. So who should (or can) be held accountable? Is it the end user? The AI development company? Governments or regulators?
Who do we hold accountable for the reliability of outputs, especially when factually incorrect information or dangerously biased content influences decision making? As end users of generative AI tools, we should take care to use these tools as responsibly as we can, by challenging biases, fact-checking outputs and engaging our critical thinking skills, to ensure that we ourselves are accountable.
Explainable AI (XAI) is an approach that aims to make AI tools and their outputs more trustworthy by allowing humans to better understand an AI model's decision-making processes. If humans can understand how an AI model reaches its decisions, then we can correct the stages in the process where it gets things wrong or makes biased assumptions.
What is Explainable AI (XAI)? (2023) IBM. Available at: https://www.ibm.com/think/topics/explainable-ai (Accessed: 12 March 2025).
AI is a human invention, trained on data collected, interpreted, processed and curated by humans, so human biases are ingrained within AI systems and can surface in generative AI outputs. When we use generative AI tools we must be aware that these biases exist and be prepared to challenge them. Have a look at this example with gendered explanations for the economy and lightbulbs:
You should also remember that biases can surface in other types of generative AI tool, not just LLMs: image and video generators can reinforce harmful stereotypes in the content they generate too. Development companies have also run into problems when trying to mitigate these biases. Have a look at this article on Al Jazeera about the backlash Google Gemini received for only generating depictions of people of colour, whether appropriate or not. Google temporarily disabled Gemini's image generation of people following the controversy.
Recent research by our colleagues at UCL has, worryingly, identified that biases in AI systems can actually amplify our own biases:
People are inherently biased, so when we train AI systems on sets of data that have been produced by people, the AI algorithms learn the human biases that are embedded in the data. AI then tends to exploit and amplify these biases to improve its prediction accuracy. (Bias in AI amplifies our own biases, 2024)
Here, we’ve found that people interacting with biased AI systems can then become even more biased themselves, creating a potential snowball effect wherein minute biases in original datasets become amplified by the AI, which increases the biases of the person using the AI. (ibid.)
You can find out more about this research on the UCL blog here (and access the academic paper).
Bias in AI amplifies our own biases (2024) UCL News. Available at: https://www.ucl.ac.uk/news/2024/dec/bias-ai-amplifies-our-own-biases (Accessed: 17 February 2025).
Shamim, S. (2024) Why Google’s AI tool was slammed for showing images of people of colour, Al Jazeera. Available at: https://www.aljazeera.com/news/2024/3/9/why-google-gemini-wont-show-you-white-people (Accessed: 22 July 2025).
One of the biggest objections to the use of generative AI tools stems from concerns around copyright and ownership. Here at Goldsmiths these concerns are often cited as one of the reasons that many students choose not to use generative AI tools - because of the dubious practices involved in obtaining data to train generative AI models.
Hopefully you are aware that generative AI models are trained on huge amounts of data, scraped from across the internet - and this often includes data that is protected by copyright. Copyright owners are rarely (if ever) appropriately contacted, consulted or remunerated for the inclusion of their work in generative AI model training processes. AI companies often do not disclose where the data used to train their models comes from, nor is it always possible to prove that a creative's work has been used to train a model - which can make mounting legal challenges tricky.
As always, regulation is slow to catch up with technological advancement, if there is even an appetite to regulate at all. Here in the UK, the current government is doing very little to allay the concerns of creatives. In fact, at the time of writing, there is an ongoing "ping-pong" (disagreement) between the House of Commons and the House of Lords over allowing tech companies to use copyrighted material to train their models: the Lords are pushing for more robust protections for artists and creatives, including around transparency, that is, ensuring copyright owners are able to see where their content has been used and by whom (Kleinman, 2025). You can read more about this on the BBC here. The EU is taking a slightly more transparency-orientated approach: all copyrighted material used to train models should be summarised and made publicly available (EU AI Act: first regulation on artificial intelligence, 2023).
You should check the terms and conditions of any generative AI tools you use to confirm who owns the content that is generated, and whether content you upload is used to train models in future. This has implications too. For example, OpenAI says that end users own the outputs created in ChatGPT (OpenAI, 2025), but can we realistically expect OpenAI to have asked all of the creatives whose work was used to train its tools whether they consent to this?
To support artists in protecting their works, tools like Nightshade and Glaze have been created; they overlay images with subtle alterations which, when the images are ingested for AI training, will poison the resulting datasets.
For some further information on this contentious area, have a look at this article from intellectual property firm Marks Clerk on the state of AI-generated content ownership in the UK and this blog post from the University of Portsmouth.
Acheampong, T.I. (2025) Who owns the content generated by AI? Marks & Clerk. Available at: https://www.marks-clerk.com/insights/latest-insights/102k38x-who-owns-the-content-generated-by-ai/ (Accessed: 12 March 2025).
EU AI Act: first regulation on artificial intelligence (2023) Topics | European Parliament. Available at: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (Accessed: 23 July 2025).
Kleinman, Z. (2025) Government AI copyright plan suffers fourth House of Lords defeat, BBC News. Available at: https://www.bbc.com/news/articles/clyrgv2n190o (Accessed: 23 July 2025).
OpenAI (2025) EU Terms of Use, OpenAI. Available at: https://openai.com/policies/eu-terms-of-use/ (Accessed: 23 July 2025).
Sekhon, J., Ozcan, O. and Ozcan, S. (2023) ChatGPT: what the law says about who owns the copyright of AI-generated content, The Conversation. Available at: http://theconversation.com/chatgpt-what-the-law-says-about-who-owns-the-copyright-of-ai-generated-content-200597 (Accessed: 28 March 2025).
Access to AI tools is not equal: there are barriers between those who can afford to pay and those who cannot, part of the wider digital divide. Consider: should you use AI for a task if someone else cannot?
UN Secretary-General António Guterres says this about AI: “[It] must benefit everyone, including the one third of humanity who are still offline. Human rights, transparency and accountability must light the way.”
Job displacement: the UK government has announced plans for AI to replace some of the work of civil servants: https://www.theguardian.com/technology/2025/mar/12/ai-should-replace-some-work-of-civil-servants-under-new-rules-keir-starmer-to-announce
See also this InformationWeek article on how AI is deepening the digital divide: https://www.informationweek.com/machine-learning-ai/ai-is-deepening-the-digital-divide
Content warning: child abuse.
Generative AI can be used to create harmful material, e.g. deepfakes and disinformation. There are also cybersecurity implications, as well as scams, fraud and plagiarism.
Using AI to deceive: deepfakes and disinformation
The UK has become the first country to outlaw the creation of obscene images using AI.
Anti-intellectualism: there is a perception that society has 'had enough of experts' and, much like boomers believing everything they read on Facebook, people are now believing everything that is output from a genAI tool.
Where does training data come from? How transparent are tech companies when disclosing where their data comes from?
The Content Authenticity Initiative brings together technology companies, media and cultural organisations and institutions with the aim of 'restoring trust and transparency in the age of AI'. Members include the BBC, Adobe and Nikon.
The initiative offers a tool to check the content credentials of AI-generated content, but beware: not all AI models label their content as AI-generated!
Using AI (and other cloud computing services) is hugely energy intensive, not to mention the raw materials used in the production of hardware.
ChatGPT (GPT-4) writing a 100-word email uses enough energy to fully charge seven iPhone Pro Maxes (roughly 0.14 kWh).
The same email uses around 500 ml of water.
Training GPT-3 used an estimated 5.4 million litres of water, the annual usage of around 26 UK households. GPT-3 is nearly 10 times smaller than GPT-4 in terms of parameters (175 billion vs an estimated 1.7 trillion).
By 2027, global AI is expected to use 4.2 to 6.6 billion cubic metres of water a year, equivalent to around half of the UK's annual water usage.
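As a rough back-of-envelope check of how figures like these scale, here is a short Python sketch. The per-email figures are taken from the sources linked below, and the "3.3 million people" scenario is purely an assumption for illustration, so treat the outputs as indicative only.

```python
# Back-of-envelope arithmetic only: the input figures are estimates reported
# in the sources below, not measurements we have verified ourselves.
KWH_PER_EMAIL = 0.14          # est. energy for GPT-4 to draft a 100-word email
LITRES_PER_EMAIL = 0.5        # est. water for the same email
GPT3_TRAINING_WATER = 5.4e6   # litres, reported figure for training GPT-3

# Hypothetical scenario: 3.3 million people each send one such email a day.
people = 3_300_000
daily_kwh = people * KWH_PER_EMAIL
print(f"Daily energy: {daily_kwh:,.0f} kWh")  # -> Daily energy: 462,000 kWh

# How many such emails would use as much water as training GPT-3?
emails = GPT3_TRAINING_WATER / LITRES_PER_EMAIL
print(f"Email equivalent of GPT-3 training water: {emails:,.0f}")  # -> 10,800,000
```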
Sources:
https://www.theecoexperts.co.uk/news/deepseek-ai-environment
https://www.theverge.com/climate-change/603622/deepseek-ai-environment-energy-climate
https://www.businessenergyuk.com/knowledge-hub/chatgpt-energy-consumption-visualized/