OPERATIONALIZING THE AI-PACE FRAMEWORK

Teach medical students to practice with AI, not just use it

The first platform that embeds AI competency training into the daily clinical workflow. No new courses. No faculty workshops. Just structured, assessed, longitudinal AI education woven into the curriculum your school already runs.

500–900   AI evaluation cycles per student per year
3         Curriculum models supported
12        AI-PACE milestones tracked

Your students are already using AI.
Nobody is teaching them to do it well.

Every morning, M3s pull up ChatGPT during pre-rounding. They generate differentials, draft assessments, look up treatment protocols. It's invisible, unstructured, and unassessed. Six pressures are converging.

01

The Preparedness Gap

230 million people per week consult ChatGPT about health. Patients arrive at appointments having already queried AI. Physicians are expected to evaluate AI recommendations with zero formal training.

02

The Crowded Curriculum

Adding a standalone AI course is politically and logistically impossible. AI competency must embed into existing courses, but no tooling exists to make that work.

03

The Faculty Crisis

Only 12% of faculty feel "very familiar" with AI. Students are more AI-competent than their instructors. Faculty won't attend workshops. They need tools that build literacy as a side effect of saving time.

04

The Assessment Vacuum

No standardized instrument measures AI competency in medical students. Schools that want to demonstrate AI education to LCME have no structured data to show.

05

The De-skilling Risk

Studies show trainees reliant on AI perform significantly worse when it's removed. Medical educators need deliberate frameworks to build competence, not dependence.

06

The Invisible AI Use

Students use AI daily during pre-rounding. No one sees it. No one assesses it. No one documents whether they can evaluate what the AI gives them. The gap is urgent and unmeasured.

Three applications. One competency layer.

EdAI MedSchool operates at three timescales: daily practice, periodic assessment, and longitudinal tracking. Each is a standalone application, and all three share a unified competency database.
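
As a rough illustration of that shared layer (the field names below are assumptions for this sketch, not the product's actual schema), each application could write the same kind of record to the competency database:

  from dataclasses import dataclass, field
  from datetime import datetime, timezone

  @dataclass
  class CompetencyEvent:
      student_id: str
      source: str            # "PreRounds", "OSPREY", or "Suite"
      ai_pace_domain: str    # "Cognitive", "Psychomotor", "Affective", or "Embedded"
      milestone: str         # one of the 12 tracked AI-PACE milestones
      score: float           # performance on this evaluation cycle, 0.0 to 1.0
      timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

In this picture, PreRounds, OSPREY, and the Suite would each append records of this shape, while the Competency Portfolio and Accreditation Dashboard only read from them.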

DAILY PRACTICE

PreRounds

For M3–M4 students on clinical rotations

Structures the daily pre-rounding workflow into an interpret-then-compare training loop. Students commit their own differential before AI access, then reconcile it against the AI's output.

  • Pre-AI assessment commit (timestamped)
  • Structured AI consultation with prompt tracking
  • Side-by-side reconciliation with rationale
  • Patient communication preparation
  • Clerkship director visibility dashboard
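
A minimal sketch of that loop, under assumed names (nothing here is the actual PreRounds API):

  from datetime import datetime, timezone

  def run_prerounds_cycle(student_differential, consult_ai, prompt):
      """One interpret-then-compare cycle: commit, consult, reconcile."""
      # 1. Timestamped commit of the student's own differential, before any AI access.
      pre_ai_commit = {
          "differential": list(student_differential),
          "committed_at": datetime.now(timezone.utc).isoformat(),
      }
      # 2. Structured AI consultation; the prompt itself is recorded for prompt tracking.
      ai_differential = consult_ai(prompt)
      # 3. Side-by-side reconciliation: overlaps and AI-only diagnoses the student
      #    must accept or reject with a written rationale.
      return {
          "pre_ai_commit": pre_ai_commit,
          "prompt": prompt,
          "ai_differential": ai_differential,
          "agreed": [dx for dx in student_differential if dx in ai_differential],
          "ai_only": [dx for dx in ai_differential if dx not in student_differential],
          "rationale": None,  # written by the student, visible to the clerkship director
      }
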
PERIODIC ASSESSMENT

OSPREY

For high-fidelity competency evaluation

AI-driven patient avatar simulation where the AI clinical tool is a variable the student must evaluate. The only product where AI itself is under assessment.

  • Interpret-then-Compare scenarios
  • AI Override detection (AI gives wrong diagnosis)
  • AI-Assisted Communication evaluation
  • AI Failure Recognition (SDOH, bias, hallucination)
  • Faculty scenario authoring

INSTITUTIONAL BACKBONE

MedSchool Suite

For faculty, administration, and curriculum

Content Engine, LMS, Question Bank, Failure Lab, Competency Portfolio, and Accreditation Dashboard. Adapts to Traditional, Systems-Based, or PBL curriculum models.

  • AI-assisted MCQ and case generation for faculty
  • Failure Lab case conferences
  • Competency Portfolio with EPA heatmaps
  • LCME-ready Accreditation Dashboard
  • Faculty AI Literacy credentialing

How data flows through the platform:

  • Daily: PreRounds (500–900 events/year)
  • Periodic: OSPREY (2–5 sessions/year)
  • Continuous: Suite (MCQ, Failure Lab, LMS)
  • Aggregated: Portfolio (longitudinal heatmap)
  • Reported: LCME (accreditation data)
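
To make the "Aggregated" step concrete, here is one possible roll-up of daily events into the Portfolio's longitudinal heatmap, purely illustrative and assuming the CompetencyEvent shape sketched earlier:

  from collections import defaultdict
  from statistics import mean

  def longitudinal_heatmap(events):
      """Average score per (milestone, year-month) cell, given CompetencyEvent records."""
      cells = defaultdict(list)
      for event in events:
          month = event.timestamp.strftime("%Y-%m")
          cells[(event.milestone, month)].append(event.score)
      return {cell: mean(scores) for cell, scores in cells.items()}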

Built on published science, not marketing.

EdAI MedSchool is the first implementation of the AI-PACE framework (McGrath et al., UC Berkeley / UC Davis, 2026) — the only peer-reviewed model for longitudinal AI competency in medical education.

Cognitive

AI fundamentals, health data science, strengths and limitations, ethics and legal frameworks.

LMS · Content Engine · Question Bank

Psychomotor

Algorithm appraisal, prompt engineering, clinical workflow integration, AI-patient communication.

PreRounds · OSPREY · OSCE Prep

Affective

Trust calibration, bias identification, patient-centered values, AI failure recognition.

PreRounds · Failure Lab · Discussions

Embedded

Longitudinal integration across pre-clinical, clinical, residency, and practice. No standalone courses needed.

Competency Portfolio · Accreditation Dashboard

Different problems. Same platform.

For the Dean's Office

"How do we meet accreditation standards without overhauling the curriculum?"

The Accreditation Dashboard generates LCME-ready data: curriculum maps, EPA heatmaps, assessment coverage, and faculty development metrics. No new courses needed — AI competencies embed into existing courses through the Content Engine and PreRounds.

For Faculty

"I don't have time to learn another platform."

The Content Engine saves 60% of exam-writing time. Faculty learn AI literacy as a side effect of generating questions. A pharmacology course director enters 8 learning objectives and gets 40 USMLE-style items in an afternoon. No workshops. No mandatory training.

For AI Champions

"I've been trying to get this school to take AI education seriously."

The 90-day pilot playbook: recruit one overwhelmed course director, demonstrate time savings, expand to three departments, then make the institutional case with accreditation data. Lead with efficiency, not enthusiasm. Research co-authorship with AI-PACE framework authors is on the table.

For Students

"I'm already using ChatGPT. What does this add?"

A Competency Portfolio that documents your AI skills with performance data, not just a line on your CV. When program directors ask about AI competency, you have 500+ documented evaluation cycles showing your clinical reasoning, trust calibration, and prompt engineering progression.

Start with one department. Expand when they ask for more.

No institutional procurement required for the Faculty Starter tier, which can be funded from departmental course development budgets.

Faculty Starter
$2,400
per department / year
  • Content Engine (MCQ + vignette generation)
  • 5 faculty seats
  • Question bank for 1 course
  • Faculty AI Literacy tracking
  • Curriculum-aware prompt templates

Institutional
$25–50
per student / year
  • Full platform — all modules
  • OSPREY simulation
  • Competency Portfolio
  • Accreditation Dashboard
  • Curriculum template seeding

See what your students are already doing with AI.

We'll show you PreRounds with your rotation schedule, the Content Engine with your learning objectives, and the Accreditation Dashboard with your curriculum structure.