About generative AI

NIH's policy on the use of artificial intelligence (AI) text generators, such as ChatGPT and others, in the academic environment and the classroom. Remember that AI text generators are language models. Use them as text assistants, not as a source of knowledge.

By Beate Torset
Published Sep. 21, 2023 9:25 AM - Last modified Jan. 8, 2024 12:28 PM
Access to secure AI-chat for students

Students and staff now have access to an AI chat designed for use in teaching. Log in to Sikt KI-chat through Feide, and you're ready to go.

The benefits of Sikt AI chat include:

  • user privacy is maintained
  • the chat is secure - only you have access to your chats
  • conversations are stored in the users' browsers and are deleted after a maximum of 30 days
  • Microsoft does not know who the end user is, and Sikt does not store chat history
  • the data is not used to further train the AI model

First, a clarification

Academic integrity is central in higher education. It is about trusting your own skills, making independent assessments, forming your own standpoint, and being able to argue for it. This applies both to assignments and requirements during courses and to exams. As a general rule, assignment and exam answers must be independently produced text. This means that submitting a text that is entirely or partially generated by others or by AI tools (AI chat, ChatGPT, etc.) will be considered cheating, unless it is disclosed in an honest manner.

We would also like to emphasize that the assessment of whether, which, how, and why you use AI chatbots and similar tools in your course work is one you must make yourself. Make sure you adhere to any guidelines given in the specific course and for the specific assignment or exam. You are 100% responsible for the content of your answers, regardless of the source or tool you use. Be aware that using AI tools presupposes that you know the material well and are able to distinguish correct from incorrect information.

NIH policy for good use

As of March 2023, the Norwegian School of Sport Sciences (NIH) has developed an institutional policy for the proper use of these tools:

  • NIH recognizes that AI text generators (ChatGPT and others) are here to stay, are evolving, and can serve as aids in learning processes.
  • All use of such tools in submissions (exams, portfolio assessments, assignments) must be transparent and disclosed. It should be stated which tool was used, in which part(s) of the text, and how.
  • It is up to each course responsible to assess how AI text generators can be used in the course, in teaching, assignments, and other assessments. This must be clarified for students in the course, and course policy can be developed in dialogue with the students.

Note: AI chatbots are not sources and should never be cited as a source of information or facts.

Specifications regarding the "policy"

Such a "course policy" can range from "free use", provided that guidelines for academic writing and reliability are followed, to a "ban", because independent thinking and creativity are essential in the course. A prohibition will, of course, not be enforceable or verifiable, but it signals to you as a student what is required to achieve the learning outcomes in the subject.

What can be regarded as attempted cheating

Cheating or attempts at cheating are met with strict sanctions out of consideration for fellow students, future employers, and the reputation of NIH as an educational institution.

You must familiarize yourself with the rules that apply to aids for exams and for the use of sources and citations. Breaching these rules may lead to suspicions of cheating or attempted cheating.

Cheating is a breach of academic integrity. Assignments and responses must be your own independent work. Academic integrity means making it clear what are your own thoughts and ideas and what has been borrowed from others.

With regard to AI, you may be suspected of cheating if:

  • you use fabricated/made-up data. 
  • text from artificial intelligence text generators (ChatGPT, etc.) is copied directly into the response. Usage must be accounted for (which tool, which part of the task, and how it was used).

Citation / APA style

The APA manual has now been updated with guidelines on citing AI. These guidelines can be found here: https://apastyle.apa.org/blog/how-to-cite-chatgpt 

Use in learning processes

If you are considering using AI tools in your coursework, remember:

An exam assignment is an individual, independent piece of work. We understand, however, that AI tools are used by students for many different purposes, including exam assignments, and therefore we want you to be both conscious and open about this usage with us. Follow the rules given in the APA manual and any additional information specified in the individual assignment/exam.

Source references: We do not consider AI tools to be relevant sources of academic content (cf. the point about source references in APA 7). This means that you must find references in credible academic literature for your work, and not cite chatbots such as ChatGPT for definitions, quotes, or academic arguments. Be careful here!

Academic language: Be aware that AI tools do not always present the topic correctly, and the terminology they use can be unusual. We want you to use the academic language that we have used in the course and that is generally used in Norway on this topic.

Fake sources: BEWARE! AI can provide false sources, and citing these as credible sources has led to several cases of cheating in higher education.

Good ways to use ChatGPT and other AI tools

  • Using it as a sparring partner, e.g., about the issue or the content (keywords) to include in a text, video, or podcast
  • Generating code
  • Compiling and paraphrasing texts
  • Getting feedback on a text
  • Overcoming writer's block
  • Explaining simple concepts
  • Translating
  • Correcting grammar and spelling errors

Limitations

  • Not 100% truthful.
  • Might make up answers or hallucinate.
  • Doesn't know what it's talking about; it just suggests words based on probability.
  • Only trained on data up to 2021 (as of March 2023).

(Possible) advantages of use

  • Time-saving
  • Can, when used correctly, provide practice in critical thinking
  • Useful tool for writing assistance, e.g., for dyslexia, etc.
  • Can serve as a personal assistant for explaining simple concepts
  • Can serve as a mentor that asks questions about your text
  • Can generate self-test quizzes, etc., from a text

Good "prompting" = better result

Are AI tools new to you? These five short films (approx. 12 min. each) provide a good introduction: Practical AI for Instructors and Students

Six simple tips for designing a prompt

  • Be specific: The more concrete and clear you are, the better the result from the digital assistant will be.
  • Mind the language: Consider sentence structure, write simply and avoid negation (i.e., words like "not" and "no"). A digital assistant often responds best to English. Most language models are based on English training data, and therefore are better suited to understand the nuances of the English language. You can ask it to translate to Norwegian afterwards, if you need to use some of the answer directly.
  • Work on priming: Priming can be explained as improving or warming up the model - or getting it to "lean the right way" so that it interprets the task in the right direction. It's about giving the assistant a hint about how it should respond or solve a task, as part of the prompt.
  • Instruct step by step: Language models provide better explanations if they get the opportunity to respond step by step. By adding the instruction "let's think step by step" in a prompt, the model itself can add context and explain how it arrived at the answer.
  • Provide examples: Give the digital assistant examples of what you want it to achieve. In this way, it can learn from the examples and use this knowledge to generate tailored results based on similar examples or situations.
  • Try, fail - and try again: Adjust, adapt, and experiment with the instructions. Identify what works well and what can be improved. Is the answer inaccurate? Do you want a different format?
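The tips above can be sketched as a small helper that assembles a prompt from its parts. This is only an illustration: the function name, role text, examples, and task below are assumptions made for the sketch, not part of NIH's or any tool's guidance.

```python
# Hypothetical sketch: combining priming, examples, a specific task,
# and a step-by-step instruction into one prompt string.

def build_prompt(role: str, task: str, examples: list[str]) -> str:
    """Assemble a prompt that applies the tips above."""
    parts = [
        f"You are {role}.",               # priming: lean the model the right way
        "Examples of the style I want:",  # provide examples
    ]
    parts += [f"- {ex}" for ex in examples]
    parts.append(task)                    # be specific about the task
    parts.append("Let's think step by step.")  # instruct step by step
    return "\n".join(parts)

prompt = build_prompt(
    role="a tutor in sports physiology",
    task="Explain the difference between aerobic and anaerobic exercise in plain English.",
    examples=["Short, concrete explanations", "No unexplained jargon"],
)
print(prompt)
```

Pasting the resulting text into a chatbot, reading the answer, and then adjusting the role, examples, or task is exactly the "try, fail - and try again" loop described above.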

The resource aiforeducation has many suggestions for prompts you can use, including some that are specific to students.

Ethical challenges

Beyond issues of trustworthiness, privacy, and copyright, there are several other challenges with using these tools:

  • OpenAI is not so open after all, i.e., they share little about how the language model is trained.
  • The Moderation API ("culture filter") is trained by underpaid employees who have had to go through vast amounts of data to teach the model what is, for example, racist, pornographic, etc. (which can sometimes be a traumatizing job).
  • Resource consumption in terms of electricity results in significant CO2 emissions and requires large storage capacity.
  • When users "talk to" ChatGPT, they work for OpenAI free of charge, since OpenAI uses the information they provide to improve the model.
  • Ethnic and gender imbalance in the data the model is trained on.

You can now turn off the option for your chats to be stored and used in training the chatbot: How to manage your data in ChatGPT