Professor Struve’s attribution-and-disclosure model for generative AI in a writing-intensive seminar, with sample AI-use disclosures students can adapt to specific use cases. The framework treats AI as a tool that requires attribution and process transparency.

Scope. These templates reflect individual-faculty syllabus language drafted by Professor Struve and reproduced here as a reference. They are not Penn Carey Law’s institutional AI policy and have not been adopted by the Office of Academic Affairs, the Curriculum Committee, or the Office of Student Conduct. Each faculty member is responsible for the AI policy in their own course; students should rely on the syllabus they receive, not on this page.

The syllabus language is reproduced from Professor Struve’s draft seminar syllabus. The sample AI-use disclosures were originally generated in a dialogue with ChatGPT and have been edited for accuracy and pedagogical clarity. Faculty using these templates should copy the blocks verbatim where the precise wording matters, then adapt citations, Canvas references, and use cases to their own courses. Note on illustrative citations: the sample disclosures below contain bracketed placeholders for case names, articles, and authors (e.g., [Author, Title, Vol. Journal Page (Year)]). Students should replace each placeholder with a real source they have read and verified before submitting their work.

The framework

Three ideas drive the policy:

  • Attribution applies to AI the same way it applies to any other source. If the student uses generative AI’s words — quoted or closely paraphrased — they cite it. The standards mirror those for human-authored sources under Penn Carey Law’s Plagiarism Policy.
  • Disclosure is broader than attribution. Even when no AI-generated language ends up in the submitted work, if AI shaped the assignment (selected readings, surfaced counterarguments, suggested topics), the student includes an AI-use statement explaining how and why.
  • Verification stays with the student. AI output is treated as a lead, not a citation. Students confirm sources, check facts, and remain responsible for accuracy.

Sample syllabus language

Attribution and AI use (drop-in syllabus block)

Attribution. You must make sure to provide proper attribution of all sources that you use in your paper. You must read and comply with Section I.B.3 of the Law School’s Code of Student Conduct and Responsibility1 and the Law School Policy on Plagiarism.2 As the Policy explains, plagiarism is the use of “another’s words or ideas without attribution.” It is important to have a good understanding of how this translates into actual practice. For this purpose, we ask that you read the University of Wisconsin–Madison Writing Center’s handout titled “Acknowledging, Paraphrasing, and Quoting Sources,” available at https://dept.writing.wisc.edu/wac/acknowledging-paraphrasing-and-quoting-sources/.

Generative AI. This paragraph contains our policy on your use of generative AI in connection with this seminar. You need not use generative AI in connection with your work for this seminar. If you do use generative AI, you must provide proper attribution (under the standards discussed in the preceding paragraph) for any language that generative AI supplies to you (whether your use of that language consists of a quote or merely a close paraphrase). And, of course, you are responsible for scrutinizing (and ensuring the accuracy of) any ideas or information that generative AI provides to you. Also, if you use generative AI in connection with your assigned work in this seminar (whether that is the two-page paper, your class presentation, the outline of your final research paper, or your final research paper itself), when you hand in (or present) the work in connection with which you used generative AI you must include an “AI use statement” that explains how and when you used the generative AI.3

Sample AI-use disclosures

The five disclosures below cover common student use cases — triaging optional readings, checking the clarity of a draft, reviewing slide design, brainstorming a paper topic, and stress-testing an outline. Each is preceded by a short rationale explaining why the use case is acceptable and what guardrails apply.

These were originally generated through a dialogue with ChatGPT (model o3) about specific stages of a writing seminar. Professor Struve includes both the disclosures and the underlying dialogue so students can see the considerations behind each disclosure. Students should adapt the date, model, and specifics to their own work.

1. Screening optional readings

Use case. A student wants to use AI to summarize the optional readings (not the required ones) so they can decide which optional pieces to engage with most deeply for a presentation.

Why it’s acceptable. This is triage, not substitution. The student must still personally read every required reading, and must skim or read the optional pieces they select to verify the AI summary’s accuracy. Because the AI output shaped the scope of the assignment, the student discloses the use even if no AI-generated wording ends up in the submitted work.

Sample disclosure.

AI use statement for [date] presentation on Smith (1998) and related readings

  • Tool & access date: OpenAI ChatGPT (model o3), accessed 11 July 2025 via chat.openai.com.
  • Purpose: To screen five optional articles listed for Week 4 and decide which to emphasize in my two-page paper and class presentation.
  • Prompts provided (representative): “Please give me a concise 200-word summary of ‘Jones, Law and Markets (Harvard L. Rev. 2001).’ Highlight its thesis, methods, and main findings.”
  • Outputs used: I read the resulting summaries, verified key points by skimming each article’s introduction and conclusion, and then selected Jones (2001) and Lee (2015) as the optional works I would integrate.
  • Extent of incorporation: No AI-generated language appears verbatim or in paraphrase in my submitted paper or slides; the summaries served only to guide my selection of optional readings.
  • Verification steps: Checked page numbers, quotations, and statistics directly against the original PDFs before final submission.
  • Responsibility: I remain responsible for the accuracy of all information presented.

2. Clarity check on a self-written draft

Use case. The student drafts a two-page paper, uploads it to a generative AI tool, and asks the model to summarize the key points. If the AI’s summary doesn’t reflect what the student intended to convey, they revise the paper to make the intended points clearer.

Why it’s acceptable. The model serves as a clarity-check mirror, not a ghost-writer. Authorship stays with the student. Two conditions: (1) the draft is student-written; nothing from the model flows back into the text except in the student’s own words after reflection. (2) Verification and judgment remain human; the summary is diagnostic, and the student decides what to revise.

Sample disclosure.

AI use statement for two-page paper submitted 14 Oct 2025

  • Tool & date: Anthropic Claude 3.5 Sonnet (web interface), 10 Oct 2025.
  • Purpose: To test whether the draft paper’s main arguments were clearly expressed.
  • Prompt (abridged): “Here is my complete draft (≈650 words). Please list the key points you think the author is making in no more than six bullet points.”
  • How I used the output:
    • Compared the model’s six bullets with the three core claims I intended to convey.
    • Noticed that bullet #4 emphasized historical context I meant to downplay; rewrote ¶2 to tighten focus.
    • Confirmed that bullets #1, #2, #5 matched my intended thesis and supporting evidence; no wording from the AI appears in the final paper.
  • Verification: Re-read the entire paper after revisions to ensure accuracy and coherence.
  • Responsibility: All ideas and language in the submitted version are my own; the AI served only as a comprehension check.

3. PowerPoint design review

Use case. The student uploads draft slides and asks the AI to flag basic design and accessibility problems — overcrowded slides, illegible fonts, low-contrast color schemes.

Why it’s acceptable. The AI critiques layout and readability; the student doesn’t accept any new slide content from the model. The review is comparable to running PowerPoint’s own Accessibility Checker, but broader. Student judgment prevails on every suggestion.

Sample disclosure.

AI use statement for presentation slides submitted 3 Nov 2025

  • Tool & access date: Microsoft Copilot in PowerPoint, 1 Nov 2025.
  • Purpose: To identify basic design and accessibility issues (over-full slides, small fonts, low-contrast text/background combinations).
  • Prompt (excerpt): “Review the 12-slide deck I’ve uploaded. For each slide list any problems with information density, font size (<18 pt), or color contrast below WCAG AA. Do not rewrite content; give only diagnostic comments.”
  • Output received: Copilot produced a slide-by-slide checklist (e.g., “Slide 4: 9 bullet points — consider splitting slide; Slide 7: contrast ratio 3.8:1 between text and background”).
  • How I used the output:
    • Reduced bullet points on Slides 4 and 6; increased font size on Slide 9; changed background color on Slide 7 to meet contrast guidelines.
    • No AI-generated phrasing or graphics were inserted into the deck.
  • Verification: Manually ran PowerPoint’s built-in Accessibility Checker and rehearsed the slideshow to ensure legibility from the back of the classroom.
  • Responsibility: All slide content and design choices in the final version are my own; AI feedback served only as a formatting audit.

4. Brainstorming a paper topic

Use case. The student asks the AI to generate a list of doctrinally focused, scope-appropriate paper topics within a chosen subject area. The student selects from the menu and does all subsequent research and writing.

Why it’s acceptable. Letting a model brainstorm potential research questions is analogous to talking with a librarian or mentor. The student owns the final topic choice, checks feasibility before committing, and discloses the assistance because the AI shaped the assignment.

Sample disclosure.

AI use statement for final paper topic selection (submitted 18 Sept 2025)

  • Tool & date used: OpenAI ChatGPT (model o3), 16 Sept 2025.
  • Purpose: To brainstorm doctrinally focused, 25-page-scale topics within multidistrict litigation (MDL).
  • Representative prompt: “I want to write about multidistrict litigation but need a legal-doctrinal topic suitable for ~25 pages. My strengths: case law analysis and statutory interpretation. I lack experience in empirical methods. Generate 8–10 potential topics that (a) matter to current MDL practice, (b) turn on legal doctrine, and (c) can be handled without empirical or interview work.”
  • Output received: A list of nine topic ideas with 2–3-sentence descriptions (e.g., “The constitutionality of ‘rocket-docket’ scheduling orders in MDLs”; “Revisiting the Lexecon waiver doctrine post-Bristol-Myers”).
  • How I used the output:
    • Screened each idea for novelty and source availability.
    • Selected “Lexecon waivers after Bristol-Myers” as my preliminary topic.
    • Conducted my own case-law survey to confirm depth of material.
    • No wording, phrasing, or citations from the AI list appear in my prospectus or future drafts.
  • Responsibility: Topic selection was informed by the AI brainstorm, but all framing, research, and writing going forward are exclusively my own.

5. Outline diagnostic review

Use case. The student writes their own outline first. Then they upload it to a generative AI tool and ask the model, with an explicit instruction not to rewrite anything, to flag logical gaps, unsupported inferences, missing counterarguments, or organizational problems.

Why it’s acceptable. The outline is the moment when a student’s own thinking has to crystallize. AI as a ghost-planner defeats the pedagogical purpose. AI as a diagnostic reviewer — pointing at weaknesses in a student-written structure — preserves the goal while giving the student a mirror for clarity and completeness.

The permitted patterns are review patterns: stress-testing logic, gap-spotting, organization audits, citation checklists. The off-limits patterns are generation patterns: drafting an outline, copy-pasting AI-suggested headings, letting the model insert citations.

Sample disclosure.

AI use statement for outline submitted 21 Oct 2025

  • Tool & date used: Anthropic Claude Sonnet 4 (web interface), 18 Oct 2025.
  • Purpose: To audit my self-written outline for logical gaps and redundant sections.
  • Prompt (abridged): “Below is my 1,100-word outline for a 25-page paper on Lexecon waivers post-Bristol-Myers. Please do not rewrite any part of it. Instead, point out (1) arguments that lack supporting authority, (2) potential counterarguments not yet addressed, and (3) sections whose order might impede reader flow.”
  • Output received: A numbered critique highlighting three undeveloped counterarguments and noting that Sections III.A and III.B overlapped.
  • How I used the output:
    • Added a new sub-heading to engage with sovereign-immunity objections.
    • Merged the overlapping Sections III.A and III.B into a single Part III to streamline the progression.
    • No AI-generated wording, headings, or citations appear in the revised outline.
  • Verification: Cross-checked each flagged gap against case law and manually updated the outline.
  • Responsibility: The outline’s structure and content remain entirely my own; the AI served solely as a diagnostic consultant.

Source-tracing for AI-generated ideas

When AI produces a substantive idea — a new argument, a doctrinal counterpoint, a quotation, a historical claim — the student’s obligation goes beyond AI-use disclosure. They also owe attribution to the human author whose work the AI’s output may reflect. Disclosure documents process; citation documents provenance. Both are required.

The following addition to the syllabus operationalizes that obligation.

Syllabus addition (drop-in block)

Source-tracing for AI-generated ideas. If an AI system supplies you with a substantive idea (e.g., a new argument, doctrinal counterpoint, quotation, or historical claim) that you intend to incorporate in your work, you must make a good-faith effort to trace that idea back to a verifiable human source.

  1. Treat the AI output as a lead, not a citation. Prompt the model for clues: “Where in the academic or judicial literature has this argument appeared? Please list specific cases, articles, or books.”
  2. Independently confirm any sources named. Look up the case, article, or book yourself. Read enough of the source to be sure it actually contains the idea.
  3. Cite the human source in the ordinary way. Example: See [Author], [Article Title], [Vol.] [Journal] [Page] ([Year]).
  4. If, after reasonable search, no prior human source emerges, state in your AI use statement: “The model surfaced this argument during my brainstorming. I searched the legal literature (databases A, B; search terms: …) and did not locate a prior published articulation. I have refined the argument and developed the supporting analysis independently.”
  5. Never cite the AI model itself as the intellectual originator of a legal or scholarly claim. Models perform computations over existing text; citation credit belongs to the human author(s) on whose text the model drew, or to you if no prior source exists.

Rationale to share with students

  • Credit where credit is due. Citing the underlying human author satisfies the scholarly norm that readers should be able to trace an idea to its first articulation.
  • Academic integrity. The AI-use statement discloses process; formal citations document provenance. Both are required for full transparency.
  • Error control. Verifying the source guards against hallucinated citations and misattributed ideas.

Quick-reference checklist for students

  1. Did the model give me a substantive idea I hadn’t already formed?
  2. Can the model (or a database search) point me to a human author?
  3. Have I read and confirmed that source?
  4. Have I cited that source in the paper or footnotes?
  5. Have I described the AI’s role in my AI-use statement?

Combined disclosure with scholarly citation

When the student successfully traces an AI-suggested idea to a human author, the disclosure documents both — the AI’s role as idea-lead, and the human author’s role as intellectual originator.

AI use statement for final research paper (submitted 15 Dec 2025)

  • Tool & date: OpenAI ChatGPT (model o3), accessed 8 Dec 2025.
  • Purpose: To stress-test Section III of my draft, which argues that post-Bristol-Myers personal-jurisdiction limits should not prevent transferee courts from approving Lexecon waivers.
  • Prompts (abridged):
    1. “Here is Section III of my draft (≈1,400 words). List any plausible doctrinal counter-arguments a court might raise.”
    2. Follow-up: “For Counter-Argument #2 you suggested — that Lexecon waivers could violate defendants’ Seventh Amendment rights — where (if anywhere) has this claim appeared in published scholarship or case law? Please name specific sources.”
  • Output received:
    • Five counter-arguments identified.
    • Cited [Author, Article Title, Vol. Journal Page (Year)] as a prior articulation of Counter-Argument #2.
  • How I used the output:
    1. Read the cited article in full to verify the argument’s scope.
    2. Added a new subsection, § III.B.2, responding to that author’s Seventh Amendment critique.
    3. All language in the paper is my own; no AI-generated wording appears verbatim or in paraphrase.
  • Verification: Confirmed page numbers and doctrinal analysis; double-checked Bluebook citation format.
  • Responsibility: AI served solely as a diagnostic aid to locate existing scholarship. The counter-argument’s substantive discussion and all responses are my original work.

Example in-text treatment of the human source.

Some scholars contend that allowing parties to waive § 1407’s remand right could impermissibly burden defendants’ Seventh Amendment jury guarantee.¹ I argue below that this concern misreads both Beacon Theatres and modern MDL practice.

Corresponding footnote.

¹ [Author], [Article Title], [Vol.] [Journal] [Page], [Pin Cite] ([Year]).

Taken together, the footnote and the AI-use statement separate the AI’s role (idea lead) from the human author’s role (intellectual originator): the footnote credits the source in standard scholarly form, while the AI-use statement documents the trace-and-verify step and provides process transparency.

Adapting these templates

A few notes for faculty importing this language into their own syllabi:

  • The Penn Carey Law citations in footnotes 1 and 2 link to PCL-specific policies. Adjust if you teach elsewhere.
  • The Wisconsin Writing Center handout is a useful neutral reference for what counts as paraphrase versus quotation. Substitute your own preferred reference if you have one.
  • The sample disclosures are written for a writing-intensive seminar with intermediate deliverables (two-page paper, class presentation, outline, final paper). For a different course structure — exam-only, problem sets, simulations — keep the framework and adapt the use cases.
  • AI tools and model names move fast. The pattern in each disclosure (tool, date, purpose, prompt, output, verification, responsibility) holds even as students substitute whatever model they actually use.

Acknowledgment

These templates and the underlying framework are reproduced from a draft seminar syllabus by Professor Struve. The Lab thanks her for permitting their inclusion in the Toolkit.

Status

Maintained for the Penn Carey Law community. Comments and suggestions: pwagner@law.upenn.edu.


  1. This provision bars “intentional[], knowing[], or reckless[]” plagiarism “in an academic pursuit[,] including any use of another’s work without attribution, whether such use be verbatim or merely conceptual or structural.” See https://www.law.upenn.edu/students/policies/conduct-and-responsibility.php

  2. The Policy is available at https://www.law.upenn.edu/students/policies/conduct-and-responsibility.php

  3. Sample AI use statements are posted separately on the Canvas page.