Talking points for orientation sessions and 1L AI-introduction events. Designed as a presenter’s guide for in-person delivery rather than independent reading. Pairs with the longer 1L Guidance on ChatGPT Edu Use.

The headline caveat

Approach AI with careful attention and healthy skepticism. That framing should run through the whole session. And remember: course-specific syllabus rules govern — always check your syllabus and ask the professor when in doubt.

How generative AI works (the short version)

LLMs like ChatGPT and Claude are predictive software. They predict what comes next based on your input and patterns in their training data. They are not retrieving verified information.

Implications worth flagging:

  • LLMs shine when the job hinges on open-ended language generation or stylistic variation. Deterministic software usually beats them when the task is pure calculation, rule application, or fact retrieval.
  • Stand-alone LLMs risk hallucinating case names or mischaracterizing holdings. The model is producing plausible text, not retrieving verified information.
  • AI is good at organizing information you provide — class notes, outlines, study materials. The model still draws on its training even when summarizing your input, so verify before relying on the output.
  • You can use that property to quiz yourself on your own notes — but verify that the questions and answers map onto what you actually need to learn.
  • Models can draft clear, coherent, often-accurate summaries because their training lets them capture main points and rephrase them smoothly.

The overreliance problem

Overreliance on generative AI short-circuits the iterative practice that builds sound legal judgment. You need to develop legal reasoning skills first — otherwise you can’t tell when the AI is wrong, and you’ll keep reinforcing incorrect reasoning patterns.

Three rules of thumb:

  • Read and make sense of cases yourself first. Learn how to think creatively about complex questions before bringing AI in.
  • Use AI for tasks where you can judge whether the output is good. If you don’t know what good looks like, you can’t evaluate the output.
  • Treat human supervision as essential. Use the human–AI–human sandwich: your reasoning first, AI to refine, you to verify.

Bias

Generative AI is trained on internet-scale data, and the biases in that data show up in outputs — including stereotyped framings on race, gender, immigration status, and class. That matters when you’re working in doctrinal areas where bias affects outcomes (criminal procedure, employment, family law, immigration). Check outputs for stereotyped framings before relying on them.

Confidentiality

Don’t paste another person’s work, anything from a confidential source, or anything you wouldn’t want on a public web page. ChatGPT Edu has institutional protections (see the linked guidance). Consumer tools may not. If you’re not sure whether a tool stores your inputs or trains on them, assume it does.

Academic integrity

Misrepresenting AI use violates academic integrity rules. Failing to verify or critically assess AI output is your own quality problem — and depending on the assignment, it can become an integrity issue if the work is presented as your independent analysis. Always check the syllabus, and when in doubt, ask the professor.

Suggested uses

Brainstorming. Generating ideas, framings, alternative arguments.

Copyediting. Grammar, style, conciseness, punctuation, tone. Ask the model to proofread without changing content. Be warned: even with explicit instructions, AI can quietly alter substance. Independently verify everything before submitting.

Summarizing notes into outlines. Convert handwritten or audio notes (your own) into structured study material. Generate flashcards and self-quiz questions from those outlines.

Prompting matters

Because AI is predictive software, how you prompt changes the output. Good prompting can be the difference between a generic response and exactly what you need. Think of the model as an assistant that needs to be told what you want:

  • Be specific and detailed. Vague requests get vague results.
  • Provide context. Help the AI understand your situation and goal.
  • Use examples. Show what good output looks like.
  • Structure your request. Break complex tasks into clear steps.
  • Assign a role. “Acting as a senior partner at a law firm…”
  • Specify format. Bullets, paragraphs, tables, memos.

When the chat goes off the rails

In long chat histories, the model may struggle to remember earlier instructions. (Classic example: you say “no em-dashes” and the em-dashes start showing up again three responses later.) If responses become less helpful, start a new chat.

Status

Maintained for the Penn Carey Law community. Talking points for orientation sessions; pairs with the longer 1L Guidance on ChatGPT Edu Use.