Company AI Usage and Compliance Policy

Effective Date: 17th April 2023
Last Updated: 30th April 2025
Next Scheduled Review: April 2026
Aligned with: Ofqual’s 2024 Policy on Regulating the Use of AI in the Qualifications Sector

Summary:
This policy outlines how Complete Training responsibly uses Artificial Intelligence (AI) to support its operations, uphold academic integrity, and protect learners. It details our approach to fairness, transparency, regulatory compliance, staff training, and learner rights in a rapidly evolving technological landscape.

Introduction

At Complete Training, we are committed to the responsible, ethical, and transparent use of Artificial Intelligence (AI) to enhance the delivery and assessment of all our training courses and accredited qualifications, while upholding fairness and protecting our learners.

This policy outlines our approach to the use of AI in qualification delivery, assessment, internal operations, and regulatory compliance. Our practices are designed to maintain fairness, protect learners, uphold the validity of qualifications, meet the expectations of awarding organisations and regulatory bodies, and align with educational best practices.

A Personal Statement from Us at Complete Training

We recognise that Artificial Intelligence technologies are rapidly evolving, and their implications for the qualifications and assessment sector are still being understood. At Complete Training, we are actively learning and adapting our practices as AI continues to develop and as further guidance is issued by regulatory bodies and awarding organisations. We share the inherent uncertainties and a degree of wariness that come with adopting new technologies in such a critical area.

Our commitment remains steadfast in prioritising fairness, protecting our learners, upholding the validity of our qualifications, meeting the expectations of our partners, and aligning with educational best practices. This policy reflects our current understanding and approach, which will be continuously reviewed and updated as our knowledge evolves and the landscape of AI in education matures.

Sincerely,
Karon Goodman & Jo Oldfield
Directors
Complete Training


1. Fairness and Validity in AI Use

1.1 Human Oversight
We do not rely solely on AI for any critical decision-making activities, including grading, marking, assessment, or feedback production. Human oversight is consistently maintained across all processes involving AI to ensure accuracy and fairness.

1.2 Bias Prevention
We regularly review and audit our AI tools to identify and mitigate any risks of bias, ensuring that no group of learners is disproportionately affected by their use.

1.3 Specific AI Tools
Currently, we utilise AI-powered plagiarism detection software to uphold academic integrity.


2. Transparency of AI Use

2.1 Clear Communication
Learners, staff, and stakeholders are clearly informed if AI technologies are used during assessment or qualification processes, such as plagiarism detection. We will communicate the purpose and function of these tools in the learner welcome pack.

2.2 Plain Language
We explain AI processes and decisions in simple, accessible language suitable for regulators, learners, and other stakeholders, avoiding technical jargon where possible.


3. Strengthened Malpractice Safeguards

3.1 Policy Development
Our malpractice policies have been updated to specifically address AI-related risks, particularly in the context of online and remote assessments, including the submission of work that has been generated or significantly altered using AI tools without disclosure. See our AI and Academic Integrity Policy for Learners.

3.2 Human Review in Monitoring
Where AI is used in plagiarism detection, it always supplements human review by qualified assessors; AI is never relied upon as the sole assessment mechanism. AI is not used to generate personalised feedback or evaluation of learner performance.

3.3 Staff Training
Our staff receive comprehensive training to recognise and manage AI-related malpractice, including methods for identifying AI-generated learner responses and understanding the limitations of AI detection tools.


4. Collaboration with Awarding Organisations

4.1 Regulatory Compliance
We ensure that all applications of AI are fully aligned with the guidance and requirements of our awarding organisations and Ofqual, maintaining strict adherence to regulatory standards.


5. Documentation and Risk Management

5.1 Record Keeping
We maintain detailed records of:

  • What specific AI tools are used in our operations.
  • The specific purpose and function of each AI tool.
  • How each AI tool aligns with relevant regulatory requirements and awarding body guidelines.
  • The safeguards and quality controls implemented for each AI tool.

5.2 Risk Assessments
We conduct thorough risk assessments before implementing any new AI-based solutions to identify and mitigate potential negative impacts on learners or the integrity of our qualifications. Any third-party AI tools are vetted for GDPR compliance and data security standards before use.


6. Regulatory Engagement and Monitoring

6.1 Ongoing Monitoring
We actively monitor updates and guidance issued by Ofqual and other awarding bodies regarding AI use in education to ensure our practices remain compliant and reflect the latest best practices.


7. Building Staff Capability and AI Awareness

7.1 Staff Development
All staff members, including administration, tutors, and invigilators, receive ongoing training on:

  • Understanding the fundamentals of AI and its current limitations in an educational context.
  • How AI is specifically used within our training delivery and assessment processes.
  • Ethical and compliant practices for utilising and identifying AI.

7.2 Culture of Accountability
We maintain a culture that encourages the careful, responsible exploration of AI technologies, always prioritising learner protection, the validity of our qualifications, and regulatory compliance. We emphasise that AI tools are intended to complement, not replace, human expertise.


8. Data Privacy and Security

8.1 Our Commitment
We are committed to protecting the privacy and security of learner data handled by AI tools. Our data handling practices comply with relevant data protection regulations. We ensure that any AI tools used adhere to appropriate data security standards and that learner data is processed ethically and transparently.


9. Accessibility

9.1 Our Approach
We strive to ensure that our use of AI promotes inclusivity and accessibility for all learners. We will evaluate the potential impact of AI tools on learners with disabilities and aim to implement solutions that are compatible with accessibility standards and provide reasonable adjustments where necessary.


10. Learner Rights and Recourse Regarding AI Use

10.1 Right to Information
Learners have the right to be informed about when and how AI is used in our assessment processes. This information will be provided clearly in the learner welcome pack and in feedback communications.

10.2 Right to Review
In critical assessment decisions, learners have the right to have the AI’s output reviewed and considered by a senior qualified assessor. AI will not be the sole determinant of a learner’s outcome in assessments.

10.3 Right to Raise Concerns
Learners have the right to raise concerns or ask questions about the use of AI in their learning or assessment without fear of reprisal. Details on how to submit such queries are provided in the Learner Welcome Pack.

10.4 Recourse Process
If a learner believes that an AI-driven process has unfairly disadvantaged them, they have the right to follow a clear and accessible appeals process. This process will involve human review of both the AI's application and the learner's case. Details of the specific procedures for initiating an appeal are outlined in the Learner Welcome Pack.


11. Continuous Improvement and Evaluation

11.1 Our Ongoing Efforts
We are committed to continuously evaluating the effectiveness and impact of our AI implementations. This includes monitoring the accuracy, fairness, and user experience associated with AI tools. Feedback from learners and staff will be used to inform ongoing improvements and adjustments to our AI usage.


12. Examples of AI Use in Our Services

12.1 Current and Future Applications
Currently, we utilise AI-powered plagiarism detection software to aid in our assessment of the integrity of submitted work. We are exploring the potential of AI to provide more personalised learning resources in the future, always with human oversight. We have also used AI to produce slide decks for training presentations, content within resource packs, and promotional material for our company, including job advertisement copy.


Policy Governance

This policy is reviewed annually or sooner in response to technological developments, regulatory updates, or changes to awarding organisation requirements.


Contact Us

If you have any questions regarding this Company AI Usage and Compliance Policy, please contact us here.
