1L Guidance on ChatGPT Edu Use
Written for incoming 1Ls. Covers how LLMs work, what they do well, what they fail at, the “human-AI-human sandwich” workflow that keeps the student in the loop, and concrete prompting tips.
First and most important: every course has its own rules about AI. Always check your syllabus and follow your professor’s guidance — that governs. This document is general orientation for cases not covered by a specific course rule.
The frame
Your main task as a 1L is to develop legal reasoning. Generative AI can support that learning, but it cannot replace the work of thinking, analyzing, and writing as a lawyer. Approach AI with attention and skepticism. Use it thoughtfully, not as a shortcut.
How generative AI works
ChatGPT, Claude, and other Large Language Models are predictive tools. They generate likely next text based on patterns in their training data and the prompts you provide — a process that often produces fluent, useful output but is fundamentally different from how a lawyer reasons through a problem. They can produce confident mistakes, called “hallucinations,” because nothing in the underlying mechanism checks output against truth.
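The mechanics matter less than the shape of the process, which the toy sketch below illustrates. This is not how any real model is built (real LLMs use neural networks trained on vast amounts of text, not lookup tables), and the vocabulary and probabilities here are invented for illustration. Notice that no step consults a source of truth: the loop only picks statistically likely continuations, which is why fluent but false output is possible.

```python
import random

# Toy illustration only. The model's entire "knowledge" is a table of
# likely continuations; generation is: pick a likely next word, append
# it, repeat. Nothing in the loop checks whether the result is true.
NEXT_TOKEN_PROBS = {
    "The":      [("court", 0.6), ("statute", 0.4)],
    "court":    [("held", 0.7), ("reasoned", 0.3)],
    "statute":  [("provides", 0.8), ("states", 0.2)],
    "held":     [("that", 1.0)],
    "reasoned": [("that", 1.0)],
    "provides": [("that", 1.0)],
    "states":   [("that", 1.0)],
}

def generate(first_token, max_steps=3):
    tokens = [first_token]
    for _ in range(max_steps):
        options = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not options:
            break
        words, weights = zip(*options)
        # Sample the next word in proportion to its probability.
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("The"))  # e.g. "The court held that"
```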
Where AI excels
AI is most useful as a study assistant. It can:
- Generate text in different styles, summarize material, and rephrase passages for clarity.
- Help organize your notes and outlines.
- Turn your materials into flashcards, practice questions, and self-quizzes.
- Brainstorm ideas, suggest alternative framings, and refine your writing for grammar and tone.
Where AI falls short
AI should not be trusted for precise fact retrieval — case names, holdings, statutory language. It is unreliable for citation accuracy and Bluebook formatting. Most importantly, it cannot provide genuine legal reasoning or analysis. Those are the core skills you are here to master.
Critical risks
- Hallucination. AI may invent authority or distort holdings. The output sounds confident either way.
- Overreliance. Relying on AI before you’ve done your own reasoning short-circuits the practice you came to law school to get. The skills you’re building — issue-spotting, analogical reasoning, working through ambiguity — develop through struggle, not through reading a polished answer.
- Bias. AI reflects biases in its training data, including stereotyped framings on race, gender, immigration status, and class. That matters when you’re working in doctrinal areas where bias affects outcomes (criminal procedure, employment, family law, immigration). Check outputs for stereotyped framings before relying on them.
- Integrity. Misrepresenting AI output as your own work violates academic integrity rules. Failing to verify AI output is your own quality problem — and depending on the assignment, it can become an integrity issue if the work is presented as your independent analysis.
- Confidentiality. Don’t paste another person’s work, anything from a confidential source, or anything you wouldn’t want on a public web page. ChatGPT Edu has institutional protections; consumer AI tools may not. If you’re not sure where a tool stores or trains on your inputs, assume it does.
Best practices: the human-AI-human sandwich
Use AI within clear limits. Follow a “human-AI-human” process:
- Begin with your own reasoning.
- Use AI to refine, extend, or test it.
- Critically review the output before relying on any of it.
Only use AI for tasks where you have an idea of what a correct answer should look like. Always brief cases yourself before consulting AI. And if responses start drifting or losing track of earlier instructions, start a fresh session; it’s cheap and often fixes the problem.
Suggested uses
- Summarize your notes into outlines.
- Convert handwritten or audio notes into structured study materials.
- Create quizzes and flashcards from your own outlines.
- Brainstorm writing strategies and arguments.
- Polish prose for clarity, grammar, and tone.
In all cases: verify accuracy, and check that the meaning of your work hasn’t shifted in the process.
Prompting as a skill
Good results depend on how you ask; a worked example follows the list.
- Be specific and detailed. Vague prompts yield vague answers.
- Provide context and goals. Tell the model what you’re trying to accomplish.
- Use examples. Showing the model what good output looks like beats describing it.
- Break complex requests into steps. Tackle one thing at a time.
- Assign a role. “Acting as a senior associate…” or “Acting as my contracts professor…” tightens the output.
- Specify a format. Outline, table, bullet list, memo — say which.
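Here is one hypothetical prompt that combines these tips. The course, cases, and counts are placeholders to swap for your own materials, not a required template; Python is used only to label each piece of the prompt.

```python
# A hypothetical study prompt assembled from the tips above.
prompt = (
    "Act as my Contracts professor. "                        # assign a role
    "I have briefed this week's offer-and-acceptance cases "
    "and written my own outline, pasted below. "             # provide context
    "Quiz me with five short hypotheticals testing whether "
    "a valid offer was made. "                               # specific, single task
    "Ask one at a time, and after each answer tell me what "
    "I missed before moving on. "                            # break into steps
    "Number each hypothetical."                              # specify a format
)
print(prompt)
```

Note how the prompt states the work already done (briefing and outlining first, in keeping with the human-AI-human process) before asking the model to test it.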
The big takeaway
Generative AI can be a valuable tool, but it is not a substitute for legal reasoning. Its value depends on your ability to supervise it critically. Use it to enhance, not avoid, your own work — and remember that course-specific syllabus rules govern.
Status
Maintained for the Penn Carey Law community.