
Building AI literacy across campus

Most universities train students first, then watch everything stall. Start with administrative staff who run the infrastructure, move to faculty with practical support, then scale to students. This sequence makes campus-wide AI literacy actually possible instead of just another abandoned initiative.

Key takeaways

  • Start with staff, not students - Administrative teams need AI skills first to support faculty and handle operational implementation
  • Only 37% of institutions support upskilling - Most campuses talk about AI transformation while leaving their people unprepared
  • Measurement drives adoption - Track specific productivity gains and workflow improvements, not just completion rates
  • Pilots rarely scale without planning - Design for campus-wide rollout from day one, not as an afterthought

Everyone’s scrambling to get AI into classrooms.

Faculty workshops. Student training programs. Pilot projects testing the latest tools. But UNESCO found two-thirds of higher education institutions are still just developing guidance - not actually building campus AI literacy at scale.

Here’s what nobody mentions: most universities start in the wrong place.

They rush to train students or faculty first. Then they discover their administrative systems can’t support the rollout. Registration can’t handle the new course structures. IT can’t provision the tools. Finance can’t process the contracts. The whole thing stalls.

Start with staff. Then faculty. Then students.

Why staff literacy unlocks everything else

Your administrative teams run the infrastructure that makes campus AI literacy programs possible.

They manage student information systems. Process course enrollments. Provision technology accounts. Handle compliance documentation. Support faculty with technical issues. When EDUCAUSE surveyed institutions, they found only 37% were actually supporting upskilling for faculty or staff.

That gap kills AI initiatives before they start.

I’ve watched universities launch ambitious AI programs where faculty can’t get their tools provisioned for weeks. Students submit AI-assisted work that registrar systems aren’t configured to track. Compliance teams panic because nobody trained them on AI policy enforcement.

Training staff first means your operations team understands the tools before faculty need support. Your IT group can troubleshoot issues because they’ve already worked with the systems. Your registrar knows how to handle the new workflows because they helped design them.

This isn’t revolutionary. It’s basic change management that higher education keeps ignoring.

The faculty readiness gap is wider than you think

Faculty face a specific problem: 45% globally haven’t received any AI training, yet over half feel uncertain about pedagogical applications.

But here’s what makes this harder. Faculty aren’t just users - they’re curriculum designers, assessment creators, and academic integrity enforcers. They need deeper understanding than basic tool training.

Deloitte’s research emphasizes gauging readiness through internal reviews and peer learning from institutions that succeeded. Not generic workshops. Actual implementation knowledge.

What works: start with early adopters who want to experiment. Give them real support - not just access, but technical backing from your now-trained staff. Let them develop examples and document what works. Then scale through peer learning sessions where faculty share actual classroom experiences.

Queen Mary University runs four sessions yearly where faculty demonstrate their AI integrations. Not theory. Working examples. Stanford created their AIMES initiative with a library of real faculty examples and self-paced courses.

The pattern: successful campus AI literacy programs for faculty focus on practical application within their disciplines, supported by staff who already understand the systems.

Students need context, not just tools

Here’s the uncomfortable reality: 90% of students already use AI tools. But only 42% feel their faculty provide adequate guidance.

The problem isn’t access. It’s context.

Students need to understand when AI helps learning versus when it replaces thinking. How to verify AI outputs. What constitutes appropriate academic use versus misconduct. The difference between AI as research assistant versus AI as ghostwriter.

But you can’t teach this effectively if your faculty aren’t confident with the tools. And faculty can’t be confident without staff support for the systems.

This is why the sequence matters.

The University of Florida built its AI Across the Curriculum initiative by integrating training throughout programs, not as standalone workshops. Students learn AI literacy within their major coursework, taught by faculty who’ve experimented and found what works, supported by systems that staff configured properly.

The alternative looks like what happens at most institutions: students use AI anyway, faculty guess at appropriate policies, and staff scramble to retrofit systems that were never designed for this.

Measurement that actually drives improvement

Most institutions track the wrong metrics. Course completion rates. Tool adoption percentages. Number of trained faculty.

These numbers look good in reports. They don’t tell you if campus AI literacy is actually improving outcomes.

What to measure instead: specific productivity gains, workflow efficiency, adoption in actual practice, documented improvements in work quality. Research shows organizations should focus on labor output versus cost, error reduction, faster decision-making.

For staff: time saved on routine tasks, accuracy improvements in data processing, reduction in support tickets after faculty become self-sufficient.

For faculty: improved student engagement metrics, reduced time spent on administrative tasks, better assessment design efficiency, increased ability to provide personalized feedback.

For students: measurable improvements in research quality, better critical analysis of sources, increased engagement with learning materials, documented understanding of AI limitations.

Track these monthly. Not annually. Not at semester end. Monthly progress updates show whether your campus AI literacy initiative actually changes how people work and learn.

The data helps you spot problems early. That registrar workflow that still takes too long? The faculty group that isn’t engaging? The student cohort struggling with academic integrity? You can’t fix what you don’t measure.

Scaling and sustaining campus-wide programs

Everyone wants to move from pilot to campus-wide rollout. Harvard Business Review found most pilots don’t scale because they’re set up in controlled environments that don’t match reality.

Design for scale from the start.

When you train that first staff cohort, document everything. Not just what you taught, but what questions came up, what confused people, what tools worked, what explanations resonated. Turn this into a repeatable program.

When faculty early adopters experiment, capture their lessons. What worked in Biology won’t transfer directly to English, but the implementation patterns might. Share frameworks, not prescriptions.

When student programs launch in one department, plan for others. Your English department doesn’t need the same AI literacy content as Computer Science, but the structure - embedded in coursework, taught by prepared faculty, supported by trained staff - does scale.

Singapore’s National Digital Literacy Programme maintains over 80% student usage and 70% teacher usage by thinking about scaling from day one, not retrofitting successful pilots.

The key: don’t just share what worked. Share why it worked. Help new groups adapt your approach to their context rather than copying your exact implementation.

But initial momentum is easy. Maintaining campus AI literacy as a permanent institutional capability is hard.

Most initiatives lose steam after the first year. Early adopters move on to new projects. Leadership priorities shift. Budget pressures mount. Without deliberate sustainability planning, your AI literacy program becomes just another abandoned initiative.

What sustains programs: embedded in job descriptions and tenure requirements, connected to existing professional development structures, included in onboarding for new hires, measured as part of regular performance reviews, funded through operational budgets not pilot grants.

For staff: AI literacy becomes an expected competency, like email proficiency was 20 years ago. New hires receive training during onboarding. Annual development plans include AI skill building. Promotions recognize those who develop expertise and help others.

For faculty: departments include AI integration in teaching expectations. Tenure review processes acknowledge pedagogical innovation with AI. Research support includes AI tool training. Course development time accounts for AI integration planning.

For students: every program includes foundational AI literacy, delivered within major coursework. No separate required course - that becomes outdated immediately - but integrated skills throughout the curriculum that evolve as tools change.

Budget for ongoing training, not one-time rollouts. Technology changes. Tools evolve. Your campus AI literacy program needs sustainable funding to keep pace.

The institutions succeeding long-term treat AI literacy like information literacy or digital literacy - a fundamental capability that requires continuous attention, not a project with an end date.

If you’re launching campus AI literacy initiatives, get your sequence right.

Train administrative staff first. They need to understand the tools and support the infrastructure before faculty depend on them. Give them real projects where they use AI for their actual work - not theoretical exercises.

Then move to faculty with practical, discipline-specific support. Don’t just show them tools. Help them integrate AI into their actual teaching and research workflows. Let early adopters experiment and share lessons.

Only then scale to students, with training embedded in their coursework and taught by faculty who’ve figured out what works for their discipline.

Measure what matters - productivity, quality, adoption in practice - not completion rates and attendance numbers.

Design for campus-wide scale from day one. Document everything. Share frameworks that others can adapt, not rigid procedures they must follow exactly.

Budget for sustainability. AI literacy isn’t a project. It’s a capability your institution needs permanently.

The universities that get this right won’t be the ones with the flashiest AI tools or the most ambitious announcements. They’ll be the ones where staff confidently support systems, faculty effectively integrate tools, and students thoughtfully apply AI to their learning.

Start there.

About the Author

Amit Kothari is an experienced consultant, advisor, and educator specializing in AI and operations. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.