Understanding & Using AI

A brief introduction to generative artificial intelligence covering how it works, types of tools, and tips for responsible AI use.

Introduction

This page is specifically geared toward instructors; it collects resources to help you craft AI policies, discuss AI use with your students, and find further readings if you want to dive deeper. 

Crafting AI policies

Educators across disciplines are grappling with how to negotiate and set boundaries on AI use in their classrooms. As you consider what policies, guidelines, and restrictions feel authentic and meaningful for your course context, you may find it helpful to browse language from other instructors. Professors across institutions have submitted their syllabus policies to a publicly available Google Doc (linked below) to serve as inspiration and guidance for others.

Discussion prompts: Talking about AI in the classroom

Frank, open conversations with students about AI can help build consensus in your class community. Below are some prompts to help you begin these conversations in your own classroom.

  1. What does responsible AI use mean to you? 
  2. What skills does a person need to use AI ethically and effectively? How would you define AI literacy?
  3. How (if at all) do you think AI should be incorporated into your college curriculum?
  4. Have you used AI tools to help you learn or master new content? Share how you've leveraged these tools to help you learn.
  5. In what ways do you see AI becoming relevant for your chosen career path or area of interest? How has this changed your perception of what skills you need to develop in order to succeed in your chosen career?
  6. In your own use of AI tools, what capabilities and limitations have you noticed? What have you found that AI tools do really well, and what shortcomings have you discovered?
  7. What troubles you about AI? Are there concerns you've been grappling with? Jumping-off points for discussion could include data privacy, algorithmic bias, labor, the digital divide, etc. 

What about AI detectors?

As of December 2025, URI does not provide a subscription to any AI detection software. 

If you are considering AI detection tools, it's important to establish the following before proceeding:

  • Acceptable levels of AI use: What level of AI assistance will you accept? Is the use of AI-assisted grammar tools acceptable in your course? 
  • Reliability: Has the AI detection tool been evaluated for effectiveness by other users or researchers? Search for articles discussing any tool's reliability before making a decision.
  • Protection of intellectual property:
    • If you are planning on uploading student work to an AI detection tool, do you have permission to upload that work?
    • What happens to that student's work once it has been uploaded? Does it become part of the training data without the student's consent?
  • Protection of student information: Has the content been anonymized to avoid conflicts with student privacy regulations (e.g., FERPA)?
  • Bias: Much of AI's training data comes from content written by English speakers, and the algorithms prioritize certain language patterns based on their developers' choices. If English is not a student's first language, and/or if they have used grammar or translation features built into the tools they use to write, will this assistance be flagged as AI-generated? 
  • Evidence: Does the tool provide clear evidence for its claims in the output?

Companies trying to sell AI detection software will make claims that may not be supported by evidence. Thoroughly vet any tool's claims before proceeding, and make sure that your use of AI detectors doesn't compromise students' ownership of their work or violate University policy.