The first platform that embeds AI competency training into the daily clinical workflow. No new courses. No faculty workshops. Just structured, assessed, longitudinal AI education woven into the curriculum your school already runs.
Every morning, M3s pull up ChatGPT during pre-rounding. They generate differentials, draft assessments, and look up treatment protocols. This daily use is invisible, unstructured, and unassessed. Five pressures are converging.
230 million people per week consult ChatGPT about health. Patients arrive at appointments having already queried AI. Physicians are expected to evaluate AI recommendations with zero formal training.
Adding a standalone AI course is politically and logistically impossible. AI competency must embed into existing courses, but no tooling exists to make that work.
Only 12% of faculty feel "very familiar" with AI. Students are more AI-competent than their instructors. Faculty won't attend workshops. They need tools that build literacy as a side effect of saving time.
No standardized instrument measures AI competency in medical students. Schools that want to demonstrate AI education to LCME have no structured data to show.
Studies show trainees reliant on AI perform significantly worse when it's removed. Medical educators need deliberate frameworks to build competence, not dependence.
Students use AI daily during pre-rounding. No one sees it. No one assesses it. No one documents whether they can evaluate what the AI gives them. The gap is urgent and unmeasured.
EdAI MedSchool operates at three timescales: daily practice, periodic assessment, and longitudinal tracking. Each is delivered as a standalone application, and all three share a unified competency database.
PreRounds structures the daily pre-rounding workflow into an interpret-then-compare training loop: students commit their own differential before gaining AI access, then reconcile it against the AI's output.
An AI-driven patient avatar simulation in which the AI clinical tool is a variable the student must evaluate. It is the only product where the AI itself is under assessment.
Content Engine, LMS, Question Bank, Failure Lab, Competency Portfolio, and Accreditation Dashboard. Adapts to Traditional, Systems-Based, or PBL curriculum models.
EdAI MedSchool is the first implementation of the AI-PACE framework (McGrath et al., UC Berkeley / UC Davis, 2026) — the only peer-reviewed model for longitudinal AI competency in medical education.
AI fundamentals, health data science, strengths and limitations, ethics and legal frameworks.
Algorithm appraisal, prompt engineering, clinical workflow integration, AI-patient communication.
Trust calibration, bias identification, patient-centered values, AI failure recognition.
The Accreditation Dashboard generates LCME-ready data: curriculum maps, EPA heatmaps, assessment coverage, and faculty development metrics. No new courses needed — AI competencies embed into existing courses through the Content Engine and PreRounds.
The Content Engine saves 60% of exam-writing time. Faculty learn AI literacy as a side effect of generating questions. A pharmacology course director enters 8 learning objectives and gets 40 USMLE-style items in an afternoon. No workshops. No mandatory training.
The 90-day pilot playbook: recruit one overwhelmed course director, demonstrate time savings, expand to three departments, then make the institutional case with accreditation data. Lead with efficiency, not enthusiasm. Research co-authorship with AI-PACE framework authors is on the table.
A Competency Portfolio that documents your AI skills with performance data — not a line on your CV. When program directors ask about AI competency, you have 500+ documented evaluation cycles showing your clinical reasoning, trust calibration, and prompt engineering progression.
No institutional procurement required for the first two tiers. Fund from departmental course development budgets.
We'll show you PreRounds with your rotation schedule, the Content Engine with your learning objectives, and the Accreditation Dashboard with your curriculum structure.