Director of AI Quality & Safety, Legal & Regulatory

Wolters Kluwer

Software Engineering, Legal, Data Science, Quality Assurance, Compliance / Regulatory

Posted on May 11, 2026

As Wolters Kluwer Legal & Regulatory executes its North Star strategy to become the Intelligent Orchestration Platform for legal and regulatory work, the quality of AI-generated outputs, particularly for research, analysis, and reasoning, becomes mission-critical. As AI systems increasingly surface legal answers, interpretations, and recommendations at the point of work, trust depends on the correctness, grounding, traceability, and consistency of those outputs. The Director of AI Quality & Safety ensures that AI-driven research and decision support remain reliable, auditable, and compliant as agentic automation scales, reinforcing governed execution and making quality and trust durable differentiators at the orchestration layer.

The Director of AI Quality & Safety is accountable for establishing and operationalizing a comprehensive quality and safety framework across AI-enabled products, content systems, and agentic workflows within WK Legal & Regulatory. The role ensures that AI systems are reliable, auditable, compliant, and aligned with defined quality standards, while reducing production defects and AI-related risks.

This position sits at the intersection of product, engineering, data science, and regulatory compliance, with a mandate to define measurable standards and enforce quality across the AI lifecycle.

Key Responsibilities

1. AI Quality & Evaluation Frameworks

  • Design and implement standardized evaluation frameworks for AI models and agentic systems (e.g., LLMs, RAG pipelines, autonomous agents)

  • Define, build, and improve evaluation frameworks with subject-matter experts (SMEs), covering output correctness and factuality, task-completion accuracy, and robustness and edge-case handling

  • Establish benchmark datasets and continuous evaluation pipelines (offline + online)

  • Drive adoption of evaluation tooling and methodologies across product teams

2. Software Quality & Reliability (AI Systems)

  • Extend the rigor of traditional software QA practices to the outputs of AI-driven systems (probabilistic outputs, non-determinism)

  • Define SLAs/SLOs specific to AI performance (e.g., hallucination rate, response reliability, latency under load)

  • Partner with Engineering to integrate quality gates into CI/CD pipelines

  • Coordinate root-cause analysis for AI-related production issues and track implementation of systemic fixes

3. Content Correctness & Validation

  • Establish frameworks for validating legal and regulatory content generated or transformed by AI systems

  • Collaborate with editorial and domain experts to define “ground truth” and validation protocols

  • Implement human-in-the-loop and automated validation mechanisms where appropriate

  • Ensure traceability between AI outputs and authoritative sources

4. Safety, Compliance & Governance

  • Define and enforce AI safety standards aligned with regulatory requirements (e.g., EU AI Act, data protection laws) and internal WK risk and compliance policies

  • Implement formal standards and controls alongside internal teams for bias detection and mitigation, harmful or unsafe output prevention, and data privacy and secure handling

  • Ensure auditability of AI systems (logging, explainability, decision traceability)

  • Act as primary liaison with Risk, Legal, and Compliance on AI-related matters

5. Measurement, Benchmarking & Reporting

  • Define quality and safety standards and KPIs that apply across the AI lifecycle (including model and data selection, prompt and workflow design, and deployment), working in partnership with Product and Engineering

  • Build dashboards and reporting mechanisms for executive visibility

  • Track and benchmark performance over time and across product lines

  • Track development of, and evaluate products against, external industry benchmarks and work with recognized benchmarking bodies to represent WK interests

  • Drive continuous improvement loops based on measurable outcomes

  • Drive awareness of QA and KPIs across Legal & Regulatory businesses, and support communications with key findings and insights that can be used for external communications and thought leadership

  • Partner with Sales, Marketing, and Customer Support on external benchmark communication and AI‑related incident response messaging

6. Governance & Operating Model

  • Define the operating model for AI quality and safety across CPO and DXG

  • Introduce review boards, approval processes, and escalation mechanisms

  • Provide guidance and enablement to product teams on quality and safety best practices

  • Build and lead a small, high-impact team as the function scales

Success Criteria

  • Clear, standardized quality and safety metrics are defined and consistently enforced across all AI-enabled products

  • AI system behavior is measurable, benchmarked, and governed through repeatable frameworks

  • Significant reduction in production defects and AI-related incidents

  • High confidence in content correctness and traceability for legal/regulatory use cases

  • Full auditability and compliance alignment for AI systems across jurisdictions

Required Qualifications

  • 10+ years in product quality, AI/ML systems, or related domains, with at least 3–5 years in AI-focused roles

  • Demonstrated experience designing evaluation frameworks for AI/ML systems (e.g., LLM evaluation, model validation)

  • Strong understanding of modern AI architectures (LLMs, RAG, agents), software quality engineering principles, and data and content validation workflows

  • Experience with regulatory or compliance-heavy environments (preferred: legal, financial, healthcare)

  • Proven ability to operate cross-functionally at senior levels

Preferred Qualifications

  • Familiarity with emerging AI governance standards and regulations (e.g., EU AI Act)

  • Experience implementing human-in-the-loop systems at scale

  • Background in experimentation platforms, benchmarking, or observability for AI systems

  • Advanced degree in Computer Science, Data Science, Law, or related field

Key Competencies

  • Systems thinking (ability to unify software, AI, and content quality domains)

  • Analytical rigor and metric-driven decision making

  • Risk awareness and regulatory sensitivity

  • Influence without authority in a matrixed organization

  • Pragmatic execution with high standards for quality

Questions?

Reach out to Silvie Roelans (Talent Acquisition Consultant) at silvie.roelans@wolterskluwer.com

Our Interview Practices

To maintain a fair and genuine hiring process, we kindly ask that all candidates participate in interviews without the assistance of AI tools or external prompts. Our interview process is designed to assess your individual skills, experiences, and communication style. We value authenticity and want to ensure we’re getting to know you—not a digital assistant. To help maintain this integrity, we ask that candidates remove virtual backgrounds, and we include in-person interviews in our hiring process. Please note that use of AI-generated responses or third-party support during interviews will be grounds for disqualification from the recruitment process.

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.