So is generative AI a good thing?

Yasmin Dwiputri & Data Hazards Project / Better Images of AI / Managing Data Hazards / CC-BY 4.0

This sprint is going to introduce some of the things that you should know about generative AI tools.

We don’t want to put you off using them, but it’s important to be aware of their limitations and the concerns people have about them. First, though, let’s consider the ways in which AI can have a positive impact. Choose one of the stories below to watch:

You might be tempted to use generative AI tools in your studies. They may help you brainstorm ideas, outline essays, and generate drafts or assist in proofreading and suggesting improvements to your writing. But just because you can use them doesn’t always mean you should. There are some risks to using generative AI; being aware of these is an important part of making informed decisions about when and what to use them for.

There are also a number of AI tools that are recommended to students for accessibility purposes, such as Dragon. If you have been advised to use a specific tool as part of a personal learning plan, you should continue to use it.


Generative AI tools are trained on HUGE amounts of data. The companies that develop them are not very open about where that data comes from, but we know that some of the earlier language models were trained on data taken from the internet. This was largely done without the owners of that data being asked whether it was okay, which has led to concerns about copyright. The data isn’t always up to date either. For example, when ChatGPT was first released to the public, it did not know who the current UK prime minister was. Biases in the training data can also be present in the outputs of generative AI tools. So it’s really important that you don’t take the outputs of these tools at face value.

You might look at some text generated by generative AI and be unable to distinguish it from text written by a human. This is a result of the trial-and-error process used to train the language model: humans are shown the generated text and rate its quality. If it’s good, the model is adjusted to do more of that; if it’s bad, it is adjusted to do less. Over time, the model moves from generating quite random responses to coherent ones.

Although the generated text can seem like a human has written it, it’s important to remember that it hasn’t. AI can’t think, feel or experience things. A good way to think about generative AI tools is as a really advanced autocomplete. In the same way that when you type a message on your phone, you are shown the most likely next word, tools that generate text work by predicting a series of words. They don’t have a way to check that the information in the text they generate is correct. You should assume that it is wrong and confirm the information using more reliable sources.
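The “advanced autocomplete” idea above can be sketched with a toy example. This is a deliberately simplified illustration (the tiny text and word list are made up for demonstration): it counts which word most often follows each word in a sample, then “predicts” the most common follower. Real language models work on the same predict-the-next-word principle, just at a vastly larger scale and with far more context.

```python
from collections import Counter

# Toy training text: a tiny, made-up sample for illustration only.
text = "the cat sat on the mat the cat ran to the door".split()

# Count which word follows each word in the sample.
followers = {}
for current, nxt in zip(text, text[1:]):
    followers.setdefault(current, Counter())[nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word`, or None if unseen.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

Notice that the prediction is purely statistical: the code has no idea what a cat or a door is, and no way to check whether its output is true. The same is fundamentally the case for generative AI tools, however fluent their output looks.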

Other concerns have been raised about the way that these tools have been developed. They require a lot of energy and so there is an environmental impact. There are also some concerns about the tools being used for unethical purposes. Generative AI tools are coded to refuse to create many types of harmful content, but these guardrails are not perfect.

These issues are covered in more detail in the Considering AI topic later on in the study pack.

We will now move on to looking specifically at how generative AI tools can be used in your studies. The next two sections are split into thinking about their use in learning and their use in assessment.