AI Syllabus Guide
Drop-in syllabus language for faculty across four policy stances: (1) no written assignments, (2) limited AI use, (3) limited AI use with attribution and a written AI-use statement (Professor Cathie Struve’s Fall 2025 seminar version), and (4) complete prohibition. Also included: answers to the eight questions students most often ask about AI policies. Copy these blocks, adapt where noted, and adjust to your course specifics.
About this document
This guide helps Penn Carey Law faculty think through the pedagogical issues that current generative AI tools present, in advance of the semester.
This document is not official Law School policy. The point is to make sure every faculty member has at least considered generative AI as part of course planning — and addressed it clearly in the syllabus or other introductory materials.
Things to keep in mind
Generative AI is increasingly present in everyday tools. “AI” is not a single form or application — it is spreading across the digital toolset, and avoiding it is increasingly impractical. A few examples:
- Microsoft 365 Copilot (which uses OpenAI’s GPT models among others) is available to students and faculty through the Law School’s Office 365 — including Word and Excel.
- Students using Google Docs or Sheets can access Google Gemini.
- Westlaw and Lexis offer AI-enabled versions of their legal-research tools (Westlaw’s CoCounsel; Lexis+ with Protégé).
Students will be familiar with AI tools. Fluency varies, but you should assume most students have used general-purpose tools like ChatGPT or Claude. Many will have used legal-specific AI tools (CoCounsel, Lexis+ with Protégé, Bloomberg Law’s AI Assistant, Harvey) during summer or externship work, and some will have continuing access. You will not be able to avoid the topic. We recommend addressing AI explicitly in the syllabus or introductory materials.
Students will want clarity. What clarity looks like depends on the course. For a traditional lecture course with no written assignments before the exam, there’s not much to say — detailed exam-use questions can be deferred, though we suggest a brief note on your tentative exam approach to set expectations. For a course with written assignments — responsive essays, problem sets, research papers, presentations — students need specific policy statements right away. Generative AI can implicate academic integrity in ways that require clear faculty direction.
Consider helpful AI use cases. Even if you broadly limit or prohibit AI for submitted written work, there are use cases worth allowing — and in some cases worth affirmatively suggesting:
- Class preparation. Organizing notes, clarifying concepts, predicting in-class discussion questions.
- Brainstorming paper topics. General-purpose models are effective at generating topic ideas given prompts about areas of law or issues of interest.
- Developing practice questions. AI can produce hypothetical scenarios that help students solidify subject-matter understanding.
- Summarizing materials and creating outlines. Models can distill or outline supplied materials. Remind students that general-purpose AI can hallucinate, and that any AI-generated information should be confirmed.
- Editing and refining writing. Many tools edit well for clarity and organization. This can be especially useful for students whose first language is not English.
Generative AI complicates your guidance. If a student uses an AI tool to create a first draft of a short response paper, then heavily edits it before submission, has the student violated your policy? If you allow student-to-student collaboration, does collaboration with an AI tool count? What if the student writes a first draft alone and then has AI edit it? “Use” of an AI tool covers a wide range of activity — much of it potentially beneficial to learning and to the quality of submitted work. Your policy needs to anticipate that range.
Consider equity issues. As of the current academic year, all Penn Carey Law students can access at least the free public versions of ChatGPT and Claude, plus the AI tools built into Lexis and Westlaw through the Law School’s subscriptions. Through the University’s ChatGPT Edu program, incoming 1L students, teaching assistants for 1L courses, and Littleton Fellows currently receive accounts that provide a secure environment with access to current OpenAI foundation models. Familiarity and comfort still vary. If AI is integrated into your course, consider specifying a particular tool, deciding which advanced legal tools (CoCounsel, Lexis+ with Protégé, etc.) are in or out of scope, and whether to provide an AI guide or background materials. Contact us if you’d like help.
Students will need to learn how to use AI. AI tools will be an integral part of legal practice. Future lawyers need to be fluent — knowing how to use these tools, their strengths and weaknesses, and the privacy, security, and ethical considerations. Think through how AI might affect your course, and whether some uses of AI might improve learning while letting students experiment and build confidence with the tools.
Questions you might get asked
Eight questions students are most likely to raise about an AI policy:
- What is your official policy on the use of generative AI tools in this course?
- Are there any specific assignments or tasks where the use of AI is prohibited or discouraged?
- If we use AI to assist in our work, to what extent must we disclose its involvement in our assignments or projects?
- How do you differentiate between students using AI as a tool to assist in understanding versus students relying too heavily on AI-generated content?
- Can we use generative AI to help draft or refine our papers, arguments, or other assignments? If so, are there any limits or guidelines?
- How might the use of AI affect our grading, especially if an assignment reflects a blend of AI-generated content and our own work?
- Are there any specific AI platforms or tools that you recommend or discourage us from using in this course?
- How should we cite or acknowledge AI-generated content or insights in our work?
A clear AI policy in the syllabus answers most of these on its own. The sample language below is written to do that.
Sample syllabus language
The four blocks below are starting points. Reproduce, adapt, and adjust to your course. The policy text in each block is the sample language to copy; the surrounding prose is editorial framing.
1. Traditional lecture course with no written assignments
For a doctrinal course where the only graded work is a final exam, the policy can be brief — defer specifics about the exam itself, but flag that AI can still be useful for studying.
Artificial Intelligence Policy. There are no written assignments in this course aside from your final exam; accordingly, there is no need for a formal AI policy other than the policy that will apply to the exam.
I will provide the final exam AI policy at least [N weeks] before the exam. To set expectations now, I currently expect to [ prohibit ] [ permit only specifically designated ] [ permit ] use of AI tools on the exam.
You may find that the use of AI tools, as with any other supplement or guide, can in some cases improve your learning — for example, using AI to test your understanding of the concepts in the course, or to help prepare for class discussions. I strongly encourage you to carefully and critically evaluate the output of any AI, recognizing that the current generation of these tools is often misleading, incomplete, or wholly inaccurate. Further, like any unofficial supplement or other outside material, AI-generated content may or may not accurately reflect the content covered in this course.
2. Course with written assignments — limited AI use
For a course with written work where you want to permit AI for support tasks (brainstorming, research, editing) but not for first-draft generation, this block is the starting point. It includes a citation format students can follow and language requiring a footnote when AI shaped the final product.
Artificial Intelligence Policy. You may use AI tools to assist in the creation of your written work in this course subject to the general principle that all work submitted by you must be your work product alone. Specifically:
- You may use AI tools to help you brainstorm about topics or refine your topic ideas;
- You may use AI tools to assist you in your research;
- You may use AI tools to help you edit a draft that you have written for clarity or length;
- You may not use AI tools to create “first drafts” or other blocks of textual material.
You must cite any use of a generative AI tool, and include clear and specific details about the tool, its version, the developers or the organization responsible for it, and any pertinent parameters or settings that influenced the generation. Here’s a sample format and an example:
Format:
[AI tool name] ([Version, if available]). Developed by [Developers/Organization Name]. Retrieved [Date], from [URL or source if applicable]. [Specific parameters/settings if necessary].
Example:
ChatGPT (GPT-4o, 2024). Developed by OpenAI. Retrieved [Date], from https://chatgpt.com. Model prompt: “[exact or paraphrased prompt given to the AI]”.
In addition to a formal citation, if the AI tool’s output has influenced the final product of your paper, it is appropriate to describe its role in a footnote of your paper. For instance: “I used ChatGPT (GPT-4o) to generate preliminary research questions for this paper. I prompted the model with ‘[exact or paraphrased prompt],’ then reviewed and verified the outputs against the relevant primary sources before incorporating them into my analysis.”
I strongly encourage you to carefully and critically evaluate the output of any AI, recognizing that the current generation of these tools is often misleading, incomplete, or wholly inaccurate. Further, like any unofficial supplement or other outside material, AI-generated content may or may not accurately reflect the content covered in this course.
It is the responsibility of each student to be aware of this policy and to seek clarification if they are uncertain about what constitutes a violation. If you are unsure about the permissibility of a particular tool or method, you are strongly encouraged to consult with me prior to its use.
3. Limited AI use — Professor Struve’s Fall 2025 seminar version
A second take on the limited-use stance, focused on attribution and a written AI-use statement rather than on a citation format. Created by Professor Cathie Struve for a Fall 2025 writing-intensive seminar.
Generative AI. This paragraph contains our policy on your use of generative AI in connection with this seminar. You need not use generative AI in connection with your work for this seminar. If you do use generative AI, you must provide proper attribution (under the standards discussed in the preceding paragraph) for any language that generative AI supplies to you (whether your use of that language consists of a quote or merely a close paraphrase). And, of course, you are responsible for scrutinizing (and ensuring the accuracy of) any ideas or information that generative AI provides to you. Also, if you use generative AI in connection with your assigned work in this seminar (whether that is the two-page paper, your class presentation, the outline of your final research paper, or your final research paper itself), when you hand in (or present) the work in connection with which you used generative AI you must include an “AI use statement” that explains how and when you used the generative AI.
Professor Struve has also produced a companion document that works through specific examples of AI use in her seminar with sample AI-use statements. Highly recommended. The Lab’s AI Use Policy Templates document reproduces and expands on this framework.
4. Complete prohibition
For a course where AI tools are off-limits, this block states the prohibition and gives concrete examples of what it covers.
A note on enforceability: Faculty adopting this stance should consider whether truly comprehensive prohibition is achievable. Microsoft Word, Westlaw, and Lexis all incorporate AI features that students cannot easily disable. Most faculty adopting a “no AI” stance limit it to drafting and content generation in submitted work, with a separate carve-out for AI-assisted research tools where appropriate. Adapt the block below accordingly, especially the word “preparation.”
Artificial Intelligence Policy. Students are strictly prohibited from using generative artificial intelligence (AI) tools, software, platforms, or any related technology in the preparation, drafting, or completion of any assignments, papers, projects, or examinations submitted for this course.
This prohibition includes but is not limited to:
- Using AI to draft, edit, or review any written assignments or papers;
- Employing AI-generated content as part of any submitted work; and
- Collaborating with or seeking input from AI platforms to solve or address any course-related questions or problems.
It is the responsibility of each student to be aware of this policy and to seek clarification if they are uncertain about what constitutes a violation. If you are unsure about the permissibility of a particular tool or method, you are strongly encouraged to consult with me prior to its use.
Contact
This guide is maintained by:
- Ambar Larancuent ’26
- Hailey Parikh ’27
- Polk Wagner (pwagner@law.upenn.edu)
With thanks to AI Law Lab alumni who contributed to the original guide:
- Meghana Bhimarao ’25 — AI Law Lab & CTIC Fellow
- Lakshmi Prakash ’25 — AI Law Lab & CTIC Fellow
Status
Maintained for the Penn Carey Law community. The sample syllabus language above is intended for direct adaptation by faculty.