Last updated: March 2026


Dimension 9 of 9

Safeguarding & Risk

Safeguarding and risk encompasses AI-specific risks identified in safeguarding policy, risks from AI-generated content such as deepfakes and misinformation, student data privacy risks arising from AI tool use, and reporting processes for AI-related safeguarding concerns. This is a non-negotiable foundation dimension.

Why this matters

Regardless of a school's maturity in other dimensions, safeguarding must be addressed. A school that is leading in curriculum integration but exploring in safeguarding has a serious problem. AI introduces genuinely novel risk categories — deepfakes, social engineering, AI-generated harmful content, data privacy — that existing online safety policies were not designed to address. The most effective long-term safeguard is a student who can recognise, evaluate, and respond to AI-related risks independently.

The 5 maturity levels

Schools progress through five maturity levels, from initial exploration to sector leadership. Each level builds on the previous one.


Level 1: Exploring

No AI risk awareness

No consideration of AI-specific safeguarding risks. Safeguarding policies make no reference to deepfakes, AI-related data privacy, or AI-generated harmful content.

Key indicators

  • No AI risk mentioned in safeguarding policy
  • No deepfake awareness or training
  • No AI-specific data privacy assessment
  • Safeguarding training does not cover AI scenarios

Level 2: Developing

Team briefed

General online safety policy with a brief AI mention. AI appears only in passing, with no specific risk controls or response procedures.

Key indicators

  • Online safety policy exists with brief AI reference
  • No dedicated AI risk assessment
  • Safeguarding team aware but not specifically trained
  • No AI-specific incident reporting pathway

Level 3: Established

Formal risk assessment

AI-specific risks are identified in the safeguarding policy, with controls in place. Staff are trained and students are educated on AI safety.

Key indicators

  • Safeguarding policy includes dedicated AI risk sections
  • AI risk assessment conducted and documented
  • Staff trained on AI safeguarding concerns
  • AI-specific incident reporting process exists

Level 4: Advanced

Living risk register

Comprehensive AI risk register with monitoring and incident response. The school has a sophisticated, actively managed approach to AI safeguarding.

Key indicators

  • Comprehensive AI risk register maintained
  • Incident response procedures tested through drills
  • Student digital resilience programme operational
  • Regular risk review cycle (at least termly)
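A risk register can live in a spreadsheet or a governance tool; the format matters less than the fields and the review discipline. As an illustration only, here is a minimal sketch in Python of what one register entry and a termly review check might look like. The field names, the 1-to-5 scoring scale, and the 13-week review interval are our assumptions, not requirements of any particular framework.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Assumed review interval: one school term (~13 weeks). Not mandated
# by any framework; adjust to your school's calendar.
REVIEW_INTERVAL = timedelta(weeks=13)

@dataclass
class RiskEntry:
    """One row of a hypothetical AI risk register."""
    risk: str                  # e.g. "Deepfake imagery targeting students"
    category: str              # e.g. "AI-generated content"
    likelihood: int            # 1 (rare) to 5 (almost certain)
    impact: int                # 1 (minor) to 5 (severe)
    controls: list[str] = field(default_factory=list)
    owner: str = "DSL"         # designated safeguarding lead by default
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common convention.
        return self.likelihood * self.impact

    def review_overdue(self, today: date | None = None) -> bool:
        # Encodes "regular risk review cycle (at least termly)" as a check.
        today = today or date.today()
        return today - self.last_reviewed > REVIEW_INTERVAL

# Example usage: flag entries whose termly review has lapsed.
register = [
    RiskEntry(
        risk="Deepfake imagery targeting students",
        category="AI-generated content",
        likelihood=3,
        impact=5,
        controls=["Staff training", "Reporting pathway", "Content filtering"],
        last_reviewed=date(2025, 9, 1),
    ),
]
overdue = [r.risk for r in register if r.review_overdue()]
```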

Level 5: Leading

Proactive anticipation

Proactive safeguarding with student-led digital resilience. The school takes an educative approach that builds autonomous safety skills.

Key indicators

  • Proactive anticipation of risks from emerging AI capabilities
  • Student-led safety initiatives active
  • Contributing to sector safeguarding practice
  • Published safeguarding approach shared with others

What we look for

When auditing this dimension, we examine your school’s documents for evidence across these key areas:

  • Safeguarding policy that addresses AI-specific risks
  • Risks from AI-generated content (deepfakes, misinformation) addressed
  • Student data privacy risks from AI tools identified and mitigated
  • A process for reporting AI-related safeguarding concerns (sketched below)
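What counts as a "process" varies by school: a form, an email alias, or a module in an existing safeguarding system all qualify. As an illustration only, here is a minimal Python sketch of the triage step, routing a logged concern to a named responder. The categories, roles, and routing table are hypothetical placeholders to be mapped onto your own safeguarding structure.

```python
from dataclasses import dataclass
from datetime import datetime

# Assumed category-to-responder routing. Roles are placeholders;
# map them to your own safeguarding structure.
ROUTING = {
    "deepfake": "DSL",              # designated safeguarding lead
    "ai_generated_harm": "DSL",
    "data_privacy": "DPO",          # data protection officer
    "misinformation": "Head of Digital Learning",
}

@dataclass
class Concern:
    """A minimal AI-related safeguarding concern record."""
    reporter: str
    category: str         # one of ROUTING's keys
    description: str
    reported_at: datetime

def triage(concern: Concern) -> str:
    """Return who should handle the concern; default to the DSL."""
    # Unknown categories still reach a human: fail safe, not silent.
    return ROUTING.get(concern.category, "DSL")

# Example: a student reports a suspected deepfake.
c = Concern("Year 10 student", "deepfake",
            "Manipulated image circulating on a group chat",
            datetime.now())
assert triage(c) == "DSL"
```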

Framework alignment

This dimension is benchmarked against leading international frameworks to ensure your audit reflects global best practice.

CIS Safeguarding and GenAI Guidance

Council of International Schools guidance on safeguarding considerations specific to generative AI in school contexts.

EDSAFE SAFE Framework

Framework for evaluating and managing AI safety in educational environments across multiple risk domains.

Singapore MOE AIEd Ethics Framework

Ministry of Education framework addressing ethical and safety considerations for AI in education.

UK DfE AI Guidance

Department for Education guidance on managing AI risks in schools, including safeguarding considerations.

Common gaps

These are the most frequent gaps we see when auditing schools in this dimension:

  • Assuming existing online safety policies cover AI risks; they almost certainly do not
  • Focusing only on student AI use while ignoring the safeguarding implications of staff AI use
  • Not training the safeguarding team specifically on AI-related incidents
  • Ignoring the data privacy implications of AI tools
  • Taking a reactive rather than proactive approach, waiting for an incident before addressing AI safeguarding
  • Not educating students about AI-specific risks, including deepfakes and social engineering

How this connects to other dimensions

No dimension exists in isolation. Understanding these connections helps schools prioritise their improvement journey.

  • Foundation for all dimensions: safeguarding must be addressed before expanding AI use
  • Critical for Student AI Literacy: student safety must be ensured during AI education
  • Critical for Technical Infrastructure: technical controls are a key safeguarding layer
  • Depends on Institutional Readiness: resources and leadership commitment are required

Find out your school’s safeguarding & risk score

Upload your school’s policy documents and receive a detailed assessment across all 9 dimensions, with evidence-based scores and actionable improvement plans.

Run your free audit