Course Category: Software Testing
Course Duration: 4 Days
Hours: 28 Contact Hours

About the Course

The ISTQB Certified Tester AI Testing certification is for individuals who want to expand their skills in testing AI-based systems and/or in using AI for testing. This includes people in roles such as testers, test analysts, data analysts, test engineers, test consultants, test managers, user acceptance testers and software developers.

This certification is aimed at improving and attesting to a basic understanding of testing AI-based systems and/or AI for testing.

Course Benefit

With the ISTQB Certified Tester AI Testing you will:

  • Understand the current state and expected trends of AI.
  • Experience the implementation and testing of an ML model and recognize where testers can best influence its quality.
  • Understand the challenges associated with testing AI-based systems, such as their self-learning capabilities, bias, ethics, complexity, non-determinism, transparency and explainability.
  • Contribute to the test strategy for an AI-based system.
  • Design and execute test cases for AI-based systems.
  • Recognize the special requirements for the test infrastructure to support the testing of AI-based systems.
  • Understand how AI can be used to support software testing.

Pre-requisites

A candidate aspiring to take the CT-AI certification exam is required to hold the ISTQB Certified Tester Foundation Level (CTFL) or the ISEB Foundation Certificate in Software Testing.

There are no pre-requisites for attending the course purely for education and knowledge purposes, without sitting the certification exam.

Who should attend?

The Certified Tester AI Testing is aimed at anyone involved in testing AI-based systems and/or AI for testing. This includes people in roles such as testers, test analysts, data analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. This certification is also appropriate for anyone who wants a basic understanding of testing AI-based systems and/or AI for testing, such as project managers, quality managers, software development managers, business analysts, operations team members, IT directors and management consultants.

Training and Exam Duration

Training: 4 Days
The course material shall be issued before the course starts.

Exam: 60 minutes
The exam is held separately from the training course.
Most participants take the exam within two weeks of completing the course.

Exam Pattern

The specialist stream AI Testing exam consists of 40 multiple-choice questions, with a pass mark of 65%, to be completed within 60 minutes.

Course Content

1. Introduction to AI

  • Definition of AI and AI Effect
  • Narrow, General and Super AI
  • AI-Based and Conventional Systems
  • AI Technologies
  • AI Development Frameworks
  • Hardware for AI-Based Systems
  • AI as a Service (AIaaS)
  • Pre-Trained Models

2. Quality Characteristics for AI-Based Systems

  • Flexibility and Adaptability
  • Autonomy
  • Evolution
  • Bias
  • Ethics
  • Side Effects and Reward Hacking
  • Transparency, Interpretability and Explainability
  • Safety and AI

3. Machine Learning (ML) – Overview

  • Forms of ML
    • Supervised Learning
    • Unsupervised Learning
    • Reinforcement Learning
  • ML Workflow
  • Selecting a Form of ML
  • Factors Involved in ML Algorithm Selection
  • Overfitting and Underfitting
    • Overfitting
    • Underfitting
  • Hands-On Exercise: Demonstrate Overfitting and Underfitting
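
The hands-on exercise above is delivered in class. Purely for orientation, a minimal Python sketch of the idea (assuming NumPy and scikit-learn, which are not prescribed by the syllabus) fits polynomials of increasing degree to noisy data: the degree-1 model underfits (high training and test error), while the degree-15 model overfits (low training error, high test error).

  # Minimal sketch: underfitting vs. overfitting on noisy data (assumed libraries: numpy, scikit-learn)
  import numpy as np
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import PolynomialFeatures
  from sklearn.linear_model import LinearRegression
  from sklearn.metrics import mean_squared_error

  rng = np.random.default_rng(0)
  X = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
  y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 30)   # noisy sine wave
  X_test = np.linspace(0, 1, 100).reshape(-1, 1)
  y_test = np.sin(2 * np.pi * X_test).ravel()

  for degree in (1, 4, 15):                                    # 1 underfits, 4 fits well, 15 overfits
      model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
      model.fit(X, y)
      train_err = mean_squared_error(y, model.predict(X))
      test_err = mean_squared_error(y_test, model.predict(X_test))
      print(f"degree={degree:2d}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")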

4. ML – Data

  • Data Preparation as Part of the ML Workflow
  • Training, Validation and Test Datasets in the ML Workflow
  • Dataset Quality Issues
  • Data Quality and its Effect on the ML Model
  • Data Labelling for Supervised Learning
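
As a small illustration of the training, validation and test datasets covered in this chapter, the sketch below (assumptions: scikit-learn, the iris example dataset and a 60/20/20 ratio, none of which are prescribed by the course) holds out a test set first and then carves a validation set out of the remaining data.

  # Minimal sketch: splitting data into training, validation and test sets
  from sklearn.datasets import load_iris
  from sklearn.model_selection import train_test_split

  X, y = load_iris(return_X_y=True)

  # Hold out 20% as the final test set, then take 25% of the remainder as a
  # validation set (0.25 * 0.8 = 0.2 of the original data).
  X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
  X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25, random_state=0)

  print(len(X_train), len(X_val), len(X_test))   # 90 30 30 for the 150-sample iris dataset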

5. ML Functional Performance Metrics

  • Confusion Matrix
  • Additional ML Functional Performance Metrics for Classification, Regression and Clustering
  • Limitations of ML Functional Performance Metrics
  • Selecting ML Functional Performance Metrics
  • Benchmark Suites for ML
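
For orientation only, the sketch below (assuming scikit-learn, which is not prescribed by the course) shows how the four confusion-matrix counts relate to the most common classification metrics listed above.

  # Minimal sketch: confusion matrix and derived classification metrics
  from sklearn.metrics import (confusion_matrix, accuracy_score,
                               precision_score, recall_score, f1_score)

  y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # ground-truth labels
  y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # model predictions

  tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
  print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
  print("accuracy :", accuracy_score(y_true, y_pred))    # (TP+TN)/(TP+TN+FP+FN)
  print("precision:", precision_score(y_true, y_pred))   # TP/(TP+FP)
  print("recall   :", recall_score(y_true, y_pred))      # TP/(TP+FN)
  print("F1 score :", f1_score(y_true, y_pred))          # harmonic mean of precision and recall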

6. ML – Neural Networks and Testing

  • Neural Networks
  • Coverage Measures for Neural Networks
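
Coverage measures for neural networks are normally computed by specialised tools. The hand-rolled sketch below (NumPy only; the two-layer network and its random weights are arbitrary examples) merely illustrates one such measure, neuron coverage: the fraction of neurons whose activation exceeds a threshold for at least one input in the test suite.

  # Minimal sketch: neuron coverage of a toy two-layer ReLU network
  import numpy as np

  rng = np.random.default_rng(1)
  W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # layer 1: 4 inputs -> 8 neurons
  W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # layer 2: 8 neurons -> 3 neurons

  def activations(x):
      h1 = np.maximum(0, x @ W1 + b1)             # ReLU activations, layer 1
      h2 = np.maximum(0, h1 @ W2 + b2)            # ReLU activations, layer 2
      return np.concatenate([h1, h2])

  def neuron_coverage(test_inputs, threshold=0.5):
      covered = np.zeros(8 + 3, dtype=bool)
      for x in test_inputs:
          covered |= activations(x) > threshold   # a neuron counts as covered once
      return covered.mean()

  test_suite = rng.normal(size=(20, 4))           # 20 example test inputs
  print(f"neuron coverage: {neuron_coverage(test_suite):.0%}")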

7. Testing AI-Based Systems Overview

  • Specification of AI-Based Systems
  • Test Levels for AI-Based Systems
    • Input Data Testing
    • ML Model Testing
    • Component Testing
    • Component Integration Testing
    • System Testing
    • Acceptance Testing
  • Test Data for Testing AI-based Systems
  • Testing for Automation Bias in AI-Based Systems
  • Documenting an AI Component
  • Testing for Concept Drift
  • Selecting a Test Approach for an ML System

8. Testing AI-Specific Quality Characteristics

  • Challenges Testing Self-Learning Systems
  • Testing Autonomous AI-Based Systems
  • Testing for Algorithmic, Sample and Inappropriate Bias
  • Challenges Testing Probabilistic and Non-Deterministic AI-Based Systems
  • Challenges Testing Complex AI-Based Systems
  • Testing the Transparency, Interpretability and Explainability of AI-Based Systems

9. Methods and Techniques for the Testing of AI-Based Systems

  • Adversarial Attacks and Data Poisoning
    • Adversarial Attacks
    • Data Poisoning
  • Pairwise Testing
  • Back-to-Back Testing
  • A/B Testing
  • Metamorphic Testing (MT)
  • Experience-Based Testing of AI-Based Systems
  • Selecting Test Techniques for AI-Based Systems
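
As a small illustration of metamorphic testing, where follow-up test cases are checked against a metamorphic relation rather than against expected outputs, the sketch below uses an assumed relation: the predicted class for an individual input must not depend on the order of the other inputs in the same batch. The scikit-learn model and the iris dataset are examples, not course material.

  # Minimal sketch: a metamorphic test for a trained classifier
  import numpy as np
  from sklearn.datasets import load_iris
  from sklearn.tree import DecisionTreeClassifier

  X, y = load_iris(return_X_y=True)
  model = DecisionTreeClassifier(random_state=0).fit(X, y)

  source_outputs = model.predict(X)                # source test cases: original order

  rng = np.random.default_rng(0)
  perm = rng.permutation(len(X))
  follow_up_outputs = model.predict(X[perm])       # follow-up test cases: shuffled order

  # The metamorphic relation holds if every input keeps its predicted class.
  assert np.array_equal(source_outputs[perm], follow_up_outputs), "metamorphic relation violated"
  print("metamorphic relation held for all", len(X), "inputs")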

10. Test Environments for AI-Based Systems

  • Test Environments for AI-Based Systems
  • Virtual Test Environments for Testing AI-Based Systems

11. Using AI for Testing

  • AI Technologies for Testing
  • Using AI to Analyze Reported Defects
  • Using AI for Test Case Generation
  • Using AI for the Optimization of Regression Test Suites
  • Using AI for Defect Prediction
  • Using AI for Testing User Interfaces
    • Using AI to Test Through the Graphical User Interface (GUI)
    • Using AI to Test the GUI