Policy Development
Building policies that work

What you will learn
- Design AI policies grounded in educational values
- Balance prohibition with productive use
- Create enforceable and consistent guidelines
- Address academic integrity in the AI era
- Build policies that adapt to change
Topics covered
Effective AI policy requires balancing multiple goals: protecting academic integrity, enabling productive learning, ensuring equity, and preparing students for an AI-infused world. This week provides frameworks for developing policies that work in practice.
Policy philosophy
The fundamental question
Schools must answer a core question: What is the purpose of the work we assign?
If the purpose is demonstrating mastery: Students must complete work independently to show what they know.
If the purpose is learning through practice: AI assistance during practice may be acceptable if learning occurs.
If the purpose is producing quality output: AI collaboration may reflect real-world professional practice.
Different assignments may have different purposes, requiring different AI policies.
Values-based policy design
Effective policies start with values:
- Academic integrity and honesty
- Skill development and genuine learning
- Equity and access
- Preparation for future roles
- Safety and well-being
Policies should flow from these values, not from fear of technology.
Acceptable use frameworks
The spectrum of AI use
Rather than blanket bans or permissions, consider a spectrum:
Prohibited use:
- Submitting AI output as original work without disclosure
- Using AI on explicitly prohibited assessments
- Using AI to deceive or misrepresent capability
Permitted with disclosure:
- Using AI for brainstorming and outlining
- Getting feedback on drafts from AI
- Using AI for research and information gathering
Encouraged use:
- Learning to prompt effectively
- Critically evaluating AI output
- Using AI to explore concepts more deeply
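The three-tier spectrum above can be sketched as a simple lookup table. This is a hypothetical illustration only (tier names and the `classify_use` helper are invented for this sketch, not part of any real school's system):

```python
# Hypothetical sketch: the AI-use spectrum as a lookup table.
# Tier names and example uses mirror the policy text above.

POLICY_TIERS = {
    "prohibited": [
        "submitting AI output as original work without disclosure",
        "using AI on explicitly prohibited assessments",
        "using AI to deceive or misrepresent capability",
    ],
    "permitted_with_disclosure": [
        "brainstorming and outlining",
        "feedback on drafts",
        "research and information gathering",
    ],
    "encouraged": [
        "learning to prompt effectively",
        "critically evaluating AI output",
        "exploring concepts more deeply",
    ],
}

def classify_use(description: str) -> str:
    """Return the policy tier for a described AI use, or 'unlisted'."""
    for tier, uses in POLICY_TIERS.items():
        if description in uses:
            return tier
    # Unlisted uses should prompt a question to the teacher, not a guess.
    return "unlisted"

print(classify_use("feedback on drafts"))  # permitted_with_disclosure
```

The point of the sketch is the shape, not the code: a policy that can be written down this plainly is one that students and teachers can actually apply.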
Assignment-level policies
Teachers should specify AI expectations for each assignment:
- What AI use, if any, is permitted
- What must be disclosed if AI is used
- What the learning objectives require
This requires clear communication and consistent application.
Disclosure requirements
If AI use is permitted, require disclosure:
- What AI tools were used
- How AI was used (brainstorming, drafting, editing, etc.)
- What portions of the work were AI-assisted
- How the student contributed beyond AI assistance
Disclosure normalizes honest AI use and keeps each student's own contribution visible.
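The four disclosure elements above amount to a structured record. A minimal sketch, assuming a school wanted to collect disclosures in a consistent form (the `AIDisclosure` class and its field names are illustrative, not a real standard):

```python
# Hypothetical sketch of a structured AI-use disclosure record,
# mirroring the four required elements listed above.
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    tools_used: list[str]          # which AI tools were used
    how_used: list[str]            # brainstorming, drafting, editing, ...
    assisted_portions: str         # which parts of the work were AI-assisted
    student_contribution: str      # what the student did beyond AI assistance

    def is_complete(self) -> bool:
        """A disclosure is complete when every element is filled in."""
        return bool(
            self.tools_used
            and self.how_used
            and self.assisted_portions.strip()
            and self.student_contribution.strip()
        )

d = AIDisclosure(
    tools_used=["(example chatbot)"],
    how_used=["brainstorming"],
    assisted_portions="outline only",
    student_contribution="all drafting and revision",
)
print(d.is_complete())  # True
```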
Academic integrity standards
Redefining cheating
Traditional definitions of cheating may not apply cleanly to AI:
- Using a calculator was once considered cheating in math
- Spell-check was once prohibited in writing
- Research databases were once considered an unfair advantage
The question is not whether AI is cheating, but what learning we require.
Clear boundaries
Establish clear boundaries that focus on:
- Deception: Misrepresenting the nature of work
- Learning: Whether required skills were developed
- Fairness: Whether all students had equal access
Consequence frameworks
Consequences should be:
- Proportionate to the violation
- Educational rather than purely punitive
- Consistent across similar violations
- Clear in advance
First violations might involve resubmission with reflection. Repeated or egregious violations warrant stronger consequences.
Age-appropriate guidelines
Elementary school
Young children need:
- Supervised AI interaction only
- Focus on understanding AI is a tool, not a person
- Protection from inappropriate content
- Emphasis on foundational skills without AI
Middle school
Emerging independence requires:
- Graduated introduction to AI tools
- Explicit instruction on appropriate use
- Discussion of AI limitations and biases
- Beginning critical evaluation skills
High school
Preparation for higher education and careers requires:
- Sophisticated understanding of AI capabilities
- Experience with professional AI use cases
- Strong critical evaluation skills
- Understanding of discipline-specific AI ethics
Enforcement strategies
The detection problem
Accept that detection is imperfect:
- AI detection tools are unreliable
- False accusations harm innocent students
- Cat-and-mouse games are unwinnable
Alternative approaches
Instead of detection-focused enforcement:
- Design assignments that are harder to complete with AI alone
- Require process documentation alongside final products
- Include in-class components that verify understanding
- Focus on patterns rather than individual instances
Building a culture of integrity
Long-term success requires:
- Clear expectations communicated consistently
- Consequences that feel fair
- Recognition that most students want to do right
- Support for students who struggle with temptation
Policy communication
Student communication
Students need to understand:
- Why policies exist
- What specifically is permitted and prohibited
- How to ask questions when uncertain
- What happens if they violate policy
Parent communication
Parents need to understand:
- The school’s approach to AI
- Why policies differ from complete prohibition
- How to support appropriate use at home
- What expectations apply to homework
Staff communication
Teachers need:
- Clear guidance on implementing policy
- Authority to set assignment-specific rules
- Support for addressing violations
- Resources for AI-resistant assignment design
Keeping policies current
Built-in review processes
AI capabilities change rapidly. Policies should include:
- Scheduled annual review
- Process for addressing emerging issues
- Flexibility for teacher judgment
- Mechanisms for student and parent input
Principles over specifics
Design policies around principles that will endure:
- Focus on learning objectives
- Emphasize transparency and honesty
- Prioritize equity and access
- Prepare students for their future
Specific tool restrictions become outdated quickly; principles do not.
Common policy mistakes
Mistake 1: Blanket prohibition
Complete bans are unenforceable and counterproductive. Students will use AI regardless; prohibition just drives it underground.
Mistake 2: No policy at all
Absence of policy creates confusion and inconsistency. Teachers make conflicting decisions. Students lack clear guidance.
Mistake 3: Detection-dependent enforcement
Policies that rely on detecting AI use set up failure. Detection is unreliable and creates adversarial relationships.
Mistake 4: Ignoring equity
Policies that assume equal access to AI tools disadvantage students without resources.
Key takeaway
Effective AI policy is grounded in educational values, not fear of technology. Focus on learning objectives, transparency, and fairness. Build policies that can adapt as technology evolves. Communicate clearly to all stakeholders and enforce consistently but proportionately.
Workshop: Policy Framework Development
Draft an AI policy framework for your institution that addresses key stakeholder concerns while remaining practical and enforceable.
Deliverables:
- Draft AI acceptable use policy
- Academic integrity guidelines
- Age-level differentiation guide
- Enforcement and consequence framework