Section 1.6 Large Language Models
LLMs are the algorithms that make generative AI work: ChatGPT, DeepSeek, Gemini, parts of Alexa, Siri, Copilot, and many others. LLMs are the text generators, the pieces that actually take a prompt and produce the response (text, image, video, music, etc.). Obviously, LLMs have quickly become a part of how many of us live and work. I have specific things to say about them as a tool in mathematics.
LLMs can imitate two things I’ve already talked about: educational resources and computer algebra systems. If you ask, they will produce both explanations of mathematical concepts and answers to specific calculations. The same general principles therefore apply. Having an LLM do your work for you defeats the purpose of an assignment, and just as with those other tools, this is an academic honesty offense.
So what about using LLMs in place of other resources or calculation tools, in the appropriate ways I’ve described above? The answer depends on the course.
- In courses where assignments are graded normally, the use of LLMs as resources is prohibited.
- In courses where assignments are graded only on completion, the use of LLMs is not prohibited, but it is very strongly discouraged.
Why this policy? It comes down to how an LLM works. An LLM is a statistical pattern-matching machine. It produces text that it predicts matches the patterns of everything it has read in its training data. What it can do with these patterns is pretty incredible, for sure. But it is all about patterns in the training data. It’s not at all about what is true or correct -- that’s not what an LLM is designed to do.
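To make the “pattern matching” idea concrete, here is a toy sketch in Python. This is not how any real LLM is built (real models are enormously larger and work on sub-word tokens), but the principle is the same: it counts which word tends to follow which in a small made-up corpus, then generates text by sampling whatever looks statistically likely. Notice that the corpus deliberately contains a false statement, and the model will happily reproduce it, because likelihood is all it knows.

```python
import random
from collections import Counter, defaultdict

# A tiny made-up "training corpus". Note it contains a FALSE statement
# ("two plus three is seven"). A real LLM trains on trillions of tokens.
corpus = ("two plus two is four . two plus three is seven . "
          "three plus three is six .").split()

# Count which word follows which (a bigram model, the simplest pattern matcher).
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Sample a next word in proportion to how often it appeared after `word`."""
    candidates = follows[word]
    return random.choices(list(candidates), weights=candidates.values())[0]

# Generate text by repeatedly sampling a statistically plausible next word.
word = "two"
sentence = [word]
for _ in range(4):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "two plus three is seven" -- fluent, but false
```

The output is perfectly fluent-looking and sometimes true, but nothing in the machinery checks truth. That, scaled up by many orders of magnitude, is the situation with an LLM.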
LLMs are pretty good these days at getting information right, almost by accident. If you ask first-year-level mathematics questions, you’ll get correct information much of the time. But not always, since generating verifiably true statements is not what an LLM is for. They are therefore unreliable.
If you ask an LLM to explain a math concept, its explanation may simply be wrong. That’s of no help to a student, particularly since you can’t tell when it is accurate. For conceptual questions, you should instead rely on known sources: texts written by reputable mathematicians, videos by the same, and so on. The source is important. You need to know that the information was produced by a person who knows the mathematics and intends to share it with the world accurately.
Similarly, if you ask an LLM to do a math calculation, it might simply do it wrong. It doesn’t have mathematical logic built into it; it’s just predicting text. It will get many calculations correct, but far from all of them. Instead, ask a computer algebra system, as discussed above. These are programmed with the logic of mathematics, and will consistently produce correct calculations.
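For contrast, here is what asking a computer algebra system looks like. This minimal sketch uses SymPy, one freely available Python CAS (my choice for illustration; any CAS works the same way in spirit). Its answers come from implemented algebraic rules, not from text prediction, so they are computed rather than guessed.

```python
from sympy import symbols, integrate, factor, solve

x = symbols('x')

# A CAS applies the actual rules of algebra and calculus,
# so these answers are derived, not predicted from patterns.
print(integrate(x**2, x))        # x**3/3
print(factor(x**2 - 5*x + 6))    # (x - 2)*(x - 3)
print(solve(x**2 - 5*x + 6, x))  # [2, 3]
```

Run it a thousand times and you get the same correct answers a thousand times. That consistency is exactly what an LLM cannot promise.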
This is my reasoning for either prohibiting or very strongly discouraging LLMs. Please take this to heart and make use of the other resources instead.