The AI in Classes Experiment - Current Status

Author(s): Brandy A. B. Perkl, Ph.D. - Feel free to use/adapt with credit! | Originally posted: December 17, 2024

1) My AI Use Course Policy (via UArizona)

In my own words (vs. the University's words): You can use AI on ANY assignment, but ONLY with active transparency.

Basically: If you so much as LOOK at an AI tool while doing your work, even Grammarly, tell me when, where, what, and how you used it in a disclosure. There will be no penalty for usage unless an assignment specifically states that you cannot use AI (very rare circumstances) or you clearly used it without disclosure.

Active transparency means: Noting briefly (either via a citation or reflective comments) how and when you used AI in your process. 

My hope: That we will help each other learn when AI is helpful, how to use it ethically, and when it is useless or even harmful. To do that, we have to disclose our usage. As a leadership student, you may also want to include a brief critique of how well it did at 'helping' you complete your work, so we all learn together what to be wary of and when to use it with minimal concern.

Possible penalties I hope I never need to use: I reserve the right to impose a significant penalty for the unreflective reuse of material generated by AI tools, and to assign zero points for merely reproducing the output of AI tools without citation, reflection, or critique.

NOTE: Your written work may be shared with one or more AI-detection tools designed to predict if the text was created by a generative AI/large-language model like ChatGPT. 

My pledge: I am committed to being fair and maintaining transparency in my grading, so if I ever exercise the penalty rights noted above, it will begin with a conversation with the student. (I'm hoping we won't have a problem in this regard, but I want to make sure the expectations are clear so that we can spend the semester learning things together, not worrying about the origins of your work. If any part of this is confusing or uncertain, please reach out to me for a conversation before submitting!)

1a) The 'official words' version of the policy above:

Related University resources can be found here: Artificial Intelligence in Teaching and Learning

In this course you are welcome and expected to use generative artificial intelligence/large language model tools, e.g., ChatGPT, DALL-E, Bard, Perplexity. Using these tools aligns with my teaching goal of training leaders for our current and future society.

Citation Example

OpenAI. (2023). ChatGPT (July 2023 version) [Large language model]. https://chat.openai.com/share/e5416379-c4a3-47d2-8cb9-2a603d1569b4

Reflective Comments Example

Reflecting on my creation of this page. 

AI Acknowledgement Section: No site copy for this page was generated using AI; however, ChatGPT was consulted regarding when students should be penalized for using it in college courses. https://chat.openai.com/share/e5416379-c4a3-47d2-8cb9-2a603d1569b4

Important Caveat: Be aware that other classes will have different policies, and some may forbid AI use altogether! Using AI when it is prohibited, or in ways that do not comply with a course's policy, could violate the Honor Code and have actionable consequences at the class, college, or University level.

2) Prioritize ethical considerations! 

Your reputation is on the line every time you use AI (unless you get consent and disclose). 

You can also use this list to think it through + to guide your checks of the AI's work!

As a future leader, your reputation is your currency, built on trust, integrity, and authenticity. When you bring AI into your work, doing things the right way is a MUST.

Remember, even the hint of doing something shady with AI can hurt your reputation. By tackling ethical concerns head-on, you show that you're serious about responsible leadership and inspire others to do the same. 

3) My Top Reminder: Don't Surrender Your Voice

In leadership over the past decade-plus, there has been a continual push for more transparent, authentic leaders. These leaders are often most distinguishable, and most prized, for their 'voice'. We value leaders for helping us see and share in a goal and for illuminating the path to it. We want their words to feel REAL and ALIVE to us and to help us see shared visions. That is difficult for an AI to do, at least so far, so I encourage you not to surrender your voice to save time + effort.

Excerpt from Prof. Loewe's (St. Edward's University) Policy for Ethical Use of Generative AI Technologies... (Loewe, 2023)

"...Maybe GAIs will give you a useful suggestion, organizational idea, or other help, but their outputs, especially in well-established genres, are often bland pabulum—or worse. 

GAIs such as Chat GPT and Bard are trained on text found online. You already know from using the Internet that much of what appears online is wrong, banal, or generated by copycats; is glib brand-building fluff, clickbait trash, or political hype; or is written in a voice-of-the-committee style. 

As a result of the training data, some GAI output is like an OK-looking but ultimately unsatisfying (or even slightly gross) gumbo made from mystery ingredients. 

GAIs are interesting tools that can help you improve your writing in some ways, but you retain both the privileges and the responsibilities of a human being who can make choices in using words..."

Note that the risks of using AI rise as its involvement increases, particularly in critical-thinking tasks where you need to differentiate junk from quality, which requires an existing knowledge base. Asking for proofing to Standard Edited Academic English is fairly low risk; the risks of inaccuracies, hallucinations, biases, and loss of stylistic impact (voice!) grow as the level of assistance grows.

4) Why AI at all?? Because it's an emerging skill.

Learning to use AI is an emerging skill, and those who develop competence in it will likely be sought out for future employment over those who do not, particularly those with 'prompt engineering' skills (Visé & Klar, 2023).

If you want to know more, including Ideal Class Uses, Limits, Ethics, links to types of GAI for different uses, etc., you can explore Pt. 1 of this experiment here: https://www.brandyabrown.com/posts/the-ai-in-classes-experiment-pt-1-fall-2023

We will also explore and discuss in class!

Acknowledgements

While I did not end up using GAI to generate this page, I use Grammarly often when preparing course materials (particularly to attempt to be more concise and clear; I am wordy by nature + trade). This round, I used Gemini Advanced to edit down my original words on ethics. But here is an ideal list of what to consider for ethics, which I use to refine and guide these conversations.

Much of my text for this page was adapted from text provided in the document below by other professors who generously support one another in current efforts to adapt to AI, and I welcome other professors to do the same with my work as well:

Eaton, L. (Ed.). (n.d.). Classroom policies for AI generative tools [Crowd-sourced document]. https://docs.google.com/document/d/1RMVwzjc1o0Mi8Blw_-JUTcXv02b2WRH86vw7mi16W3U/edit

References

Loewe, D. M. (2023, July 15). Policy for ethical use of generative AI technologies. https://docs.google.com/document/d/1onwUP12kIqcU2-s-xjEMY-UJ4cWf-8xApCE3gxTcQB0/edit

McAdoo, T. (2023, April 7). How to cite ChatGPT. American Psychological Association. https://apastyle.apa.org/blog/how-to-cite-chatgpt <-- HOW TO CITE CHATGPT! Though my preference would be to use their newer option to share a link to your prompts rather than a generic link to ChatGPT, as I feel that's more transparent (e.g., https://chat.openai.com/share/ef0a23e0-9384-4319-8b33-99f76a71bb6b), and to include an AI Disclosure section where you narrate the how, why, and what of your usage of all GAIs.

Visé, D. de, & Klar, R. (2023, April 18). Nine in 10 companies want employees with ChatGPT skills. The Hill. https://thehill.com/policy/technology/3955384-ai-employees-companies-chatgpt-skills/

Image sourced from Adobe Stock (licensed via UArizona).