Article 11 Compliance Document

AI System Technical Documentation

This document outlines the architecture, risk management, and operational procedures of the We Are Over The Moon (WAOTM) platform, prepared in accordance with the technical documentation requirements of the EU Artificial Intelligence Act (Article 11) for High-Risk AI Systems.

1. System Description & Intended Purpose

The We Are Over The Moon (WAOTM) platform is an AI-powered recruitment and assessment suite designed to facilitate skills-based hiring. The system evaluates candidate profiles, assesses competencies, and provides structured insights to employing organizations to support human-driven recruitment decisions.

Core AI Components

  • CV Analysis: Extracts professional experience, structures unstructured data, and matches identified skills against vacancy requirements.
  • Cognitive Assessment: Evaluates problem-solving logic and structured thinking via adaptive interactive challenges.
  • Cultural Fit Analysis: Analyzes responses against established organizational culture models (e.g., Competing Values Framework).
  • Voice Interview & Video Pitch Analysis: Transcribes audio/video inputs and performs semantic analysis on candidate answers to assess communication clarity and core competencies.
  • Case Study Evaluation: Assesses candidate choices and open-text reflections in simulated scenario environments.
  • Candidate Ranking: Aggregates individual assessment scores into a weighted overall score to suggest candidate suitability.
  • Conversational Chatbot: Provides automated assistance to candidates and companies navigating the platform.

Intended Use Limitation: The AI system is explicitly designed as an advisory tool. It is strictly prohibited from making autonomous hiring, rejection, or compensation decisions without human review.

2. Risk Classification

Under the EU Artificial Intelligence Act (EU AI Act), the WAOTM platform is classified as a High-Risk AI System under Annex III, point 4 (Employment, workers' management and access to self-employment).

This classification applies because the system is intended to be used for the recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, and evaluating candidates during interviews or tests. As the provider of a high-risk system, WAOTM is required to implement strict risk management, data governance, and transparency measures.

3. Risk Management Measures

A comprehensive risk management system is integrated throughout the lifecycle of the WAOTM platform to minimize risks to fundamental rights, privacy, and fair opportunity.

  • Human Oversight Design: The interface forces human validation at critical decision points. Automated rejection pipelines are structurally disabled.
  • Confidence Scoring: Every generative AI output is evaluated internally for certainty. Low-confidence outputs are automatically flagged for manual review before being presented to a client.
  • Immutable Audit Logging: Every AI-generated assessment, summary, or ranking creates an immutable log detailing the inputs, processing context, and outputs.
  • Explicit Candidate Consent: Candidates must provide active, informed consent specifically for AI processing before any evaluation occurs, accompanied by clear opt-out pathways.
  • Algorithmic Bias Monitoring: Regular batch testing of prompts against diverse synthetic applicant profiles ensures the model does not disproportionately penalize demographic markers.
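The confidence-gating step above can be sketched as a simple routing function. The threshold value and field names here are illustrative assumptions, not the platform's actual parameters.

```python
# Assumed threshold; in practice this would be tuned per assessment type.
CONFIDENCE_THRESHOLD = 0.75

def route_output(output: dict) -> dict:
    """Flag low-confidence AI outputs for manual review before client delivery."""
    if output.get("confidence", 0.0) < CONFIDENCE_THRESHOLD:
        return {**output, "status": "manual_review_required"}
    return {**output, "status": "ready_for_client"}
```

A missing confidence value defaults to 0.0, so outputs that fail to report certainty are routed to manual review rather than passed through.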

4. Data Governance

Data handling within the WAOTM ecosystem complies entirely with the General Data Protection Regulation (GDPR) and the specific data quality requirements of the EU AI Act.

  • Data Collection: Limited strictly to professional qualifications, assessment responses, and explicitly requested cultural/cognitive markers.
  • Data Minimization: Personally Identifiable Information (PII) such as names, ages, genders, and contact details is abstracted and excluded from the AI processing context payload wherever technically feasible.
  • Storage & Residency: All data is encrypted at rest (AES-256) and in transit (TLS 1.3), hosted on secure servers located exclusively within the European Economic Area (EEA).
  • Retention Policies: Candidate data is retained only as long as necessary for the specific recruitment cycle, with a default maximum retention of 12 months unless extended by explicit candidate request or legal obligation.
  • Model Training Limitation: WAOTM does not use customer or candidate data to train base foundational models. Interactions are processed ephemerally via commercial APIs with strict zero-retention agreements.
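The data-minimization measure above can be sketched as a payload filter applied before any AI call. The field names are hypothetical; a production system would rely on a vetted redaction pipeline rather than a fixed deny-list.

```python
# Hypothetical set of direct identifiers excluded from AI request payloads.
PII_FIELDS = {"name", "age", "gender", "email", "phone", "address"}

def build_ai_payload(candidate_record: dict) -> dict:
    """Strip direct identifiers, keeping only assessment-relevant fields."""
    return {k: v for k, v in candidate_record.items() if k not in PII_FIELDS}
```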

5. Human Oversight Mechanisms

Human oversight (Article 14) is guaranteed through technical constraints and organizational protocols:

  • Advisory Nature: Platform UI labels clearly designate AI-generated insights as "Advisory." The system technically cannot execute a state change (e.g., "Hired", "Rejected") without a human user's direct click action.
  • Company Dashboard Audit Log: Employers have access to an AI Audit Log viewer, allowing human recruiters to trace how an AI evaluation reached its conclusion, including the raw text analyzed.
  • Candidate Review Requests: Candidates have a built-in mechanism to contest an AI evaluation. Triggering this flag suspends the AI score and places the candidate in a "Human Review Required" queue for the employer.
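The technical constraint that terminal state changes require a human actor can be sketched as a guard at the state-transition boundary. The state and actor names are illustrative assumptions.

```python
# Terminal states that must never be reached by automated processing alone.
ALLOWED_TERMINAL_STATES = {"hired", "rejected"}

def apply_state_change(candidate_id: str, new_state: str, actor: str) -> str:
    """Reject terminal state changes unless initiated by a human user."""
    if new_state in ALLOWED_TERMINAL_STATES and actor == "system":
        raise PermissionError(
            f"state '{new_state}' requires a human decision for {candidate_id}"
        )
    return new_state
```

Enforcing this at the transition layer, rather than only in the UI, means no automated pipeline can reach a hiring outcome even if a front-end check is bypassed.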

6. Accuracy & Robustness

The platform relies on state-of-the-art foundational Large Language Models (LLMs), specifically integrating enterprise tiers of Anthropic Claude and OpenAI GPT architectures, optimized for logical reasoning and structured data extraction.

  • Input Validation: Submitted documents and text are sanitized and checked for malicious injection attempts or formatting anomalies prior to processing.
  • Structured Output Enforcement: The system forces the generative models to reply in strict JSON schemas. If a model hallucinates a non-compliant structure, the system safely aborts and retries or falls back to an error state.
  • Error Handling: Network timeouts, API limits, or content-filter triggers are caught gracefully, displaying a user-friendly message rather than failing silently or assuming a negative candidate outcome.
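The structured-output enforcement described above can be sketched as schema validation with bounded retries and an explicit error fallback. The required keys and the model-call function are stand-ins, not the platform's actual schema or provider integration.

```python
import json

# Assumed minimal schema for illustration only.
REQUIRED_KEYS = {"score", "summary"}

def parse_strict(raw: str) -> dict:
    """Parse model output, raising if it is not the expected JSON shape."""
    data = json.loads(raw)
    if not isinstance(data, dict) or not REQUIRED_KEYS.issubset(data):
        raise ValueError("schema violation")
    return data

def evaluate_with_retries(call_model, max_attempts: int = 3):
    """Retry non-compliant outputs; fall back to an explicit error state."""
    for _ in range(max_attempts):
        try:
            return parse_strict(call_model())
        except (ValueError, json.JSONDecodeError):
            continue
    return {"status": "error", "reason": "non_compliant_model_output"}
```

The fallback returns an explicit error object instead of a default score, so a malformed model response can never be mistaken for a negative candidate outcome.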

7. Transparency & Logging

WAOTM ensures that both candidates and employers are fully aware of when and how AI is utilized:

  • Disclosure Banners: Persistent UI markers indicate when a user is interacting with an AI (e.g., the chatbot) or viewing an AI-generated report.
  • Comprehensive Logging: The `ai_audit_logs` database table records every consequential AI decision, capturing the model used, decision type, input summary, output summary, and a calculated confidence score.
  • Public AI Transparency: The platform maintains a public-facing AI Transparency page written in plain language, explaining candidates' rights and the limits of the technology used.
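The shape of one `ai_audit_logs` record can be sketched from the fields named above. The column set mirrors the text; the storage and construction details are assumptions.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: records are append-only and never mutated
class AiAuditLog:
    model: str
    decision_type: str
    input_summary: str
    output_summary: str
    confidence: float
    created_at: str

def make_log(model: str, decision_type: str, input_summary: str,
             output_summary: str, confidence: float) -> AiAuditLog:
    """Build an immutable audit record with a UTC timestamp."""
    return AiAuditLog(
        model=model,
        decision_type=decision_type,
        input_summary=input_summary,
        output_summary=output_summary,
        confidence=confidence,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
```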

8. Monitoring & Incident Response

Post-market monitoring is performed continuously to ensure the ongoing safety and compliance of the AI system.

  • Real-time Anomaly Detection: The engineering team monitors API failure rates, structured output compliance drops, and sudden shifts in average candidate scores that may indicate a model degradation.
  • Monthly Review: System performance, user feedback, and candidate "Human Review Requests" are aggregated and reviewed monthly by the compliance and product teams.
  • Incident Escalation: In the event of a suspected severe bias incident or data breach, an automated kill-switch allows administrators to halt all AI processing while preserving system access for human operators.
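The score-shift anomaly check above can be sketched as a comparison of recent scores against a baseline window. The threshold and window sizes are illustrative; a real deployment would tune both and combine this with the other monitored signals.

```python
from statistics import mean

# Assumed threshold: alert if the mean candidate score drifts > 10 points.
SHIFT_THRESHOLD = 10.0

def score_shift_alert(baseline: list[float], recent: list[float]) -> bool:
    """Alert when the average candidate score drifts past the threshold."""
    return abs(mean(recent) - mean(baseline)) > SHIFT_THRESHOLD
```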

9. Version History

Version | Date         | Author          | Description of Changes
1.0     | January 2026 | Compliance Team | Initial technical documentation publication for EU AI Act compliance.