Course Category: Software Testing
Course Duration: 2 Days
Contact Hours: 14
Processworks Sdn. Bhd. is a registered HRDF Training Provider
There are many training courses offering a solid grounding in the theory and practice of software testing but software testers attending these courses often struggle to apply what they have learnt when they return to the workplace. This gap between theory and practice is known as the “testing gap”.
This two-day course is designed to bridge the “testing gap”. It is aimed at software testers who need a practical approach to User Acceptance Testing (UAT) and System Integration Testing (SIT) that can be applied to software projects in the real world.
The course is presented by an experienced software tester who shares a wealth of practical tips and recent project experience with participants during the course.
- The problem with software testing
- A framework for better understanding software testing
- Planning User Acceptance Testing (UAT) and System Integration Testing (SIT)
- Feature testing
- End-to-end testing
- Date-based testing
- Exploratory testing
- Managing the UAT and SIT effort
Classification of Knowledge Levels
- Level 1: Remember (K1) – the student will recognise, remember and recall a term or concept.
- Level 2: Understand (K2) – the student can select the reasons or explanations for statements related to the topic, and can summarise, compare, classify, categorise and give examples for the testing concept.
- Level 3: Apply (K3) – the student can select the correct application of a concept or technique and apply it to a given context.
Who Should Attend?
- Test Managers, Test Engineers, Testers, Quality Assurance Staff
- Business Analysts, Business Systems Analysts, Systems Analysts, Functional Analysts
- User Representatives, Project managers, Program Managers
- Software Engineers, Developers, Requirements Engineers, Requirements Analysts, Human Factors Specialists
- Process Engineers, Software Engineering Process Group (SEPG) Staff, Methodologists, Process Improvement Staff
- Confusing terminology
- Software testing popular myths and incorrect beliefs
- The testing “gap”
- Product vs. project life cycles
- Views of software quality
- Measure of “excellence”
- Fit for intended purpose
- Conform to specification
- Absence of defects and other quality goals
- Provides value
- Summarising views of quality in the quality triangle
- Aligning software testing objectives with views of quality
- Validating that software is fit for its intended purpose
- Verifying that software conforms to its specification
- Identifying defects
- Measuring product attributes
- Building confidence
- The need for a multidimensional view of software testing
- Levels of software testing
- Unit and component testing level
- Integration testing level
- Understanding multiple levels of integration
- Vertical vs. horizontal integration
- System testing level
- The “recursive” nature of test levels
- Identifying test items and their relationship to test levels
- Selecting a test basis
- Requirements as a test basis
- Program code as a test basis
- Variations on requirements and code
- Models as a test basis
- Experience as a test basis
- Designing test cases
- What is a test case?
- Test to pass or “positive” test cases
- Test to fail or “negative” test cases
- Overview of test case design techniques
- Who executes the tests?
- Automating test execution
- Applying the software testing framework
- The standard User Acceptance Testing (UAT) strategy
- The standard System Integration Testing (SIT) strategy
- Customising UAT and SIT strategies
- Risk-based testing
- Product risks
- Project risks
- Developing UAT and SIT Test Plans
- Identifying test activities
- Estimating test effort
- Assigning resources
- Developing a test schedule
- Test environment and tools
- What are features?
- Features vs. components
- What is feature testing?
- Testing features in isolation
- Testing vertical integration
- When is feature testing performed?
- The role of feature testing during SIT
- The role of feature testing during UAT
- Modelling features
- Why model features?
- How to model features
- Developing a feature test specification
- Feature testing tools and test automation
- Planning and managing feature testing
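The distinction between testing a feature in isolation and testing-to-pass versus testing-to-fail can be sketched as follows. This is a minimal, illustrative example; the discount feature and its names are hypothetical stand-ins, not material from the course.

```python
# Minimal sketch: a "feature test" exercising one feature in isolation
# through its public interface. The discount feature below is an
# illustrative stand-in, not an example from the course.

def apply_discount(price: float, code: str) -> float:
    """Hypothetical feature under test: apply a promotional discount code."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(price * (1 - rates.get(code, 0.0)), 2)

# Test to pass ("positive"): a valid code reduces the price.
assert apply_discount(100.0, "SAVE10") == 90.0
# Test to fail ("negative"): an unknown code leaves the price unchanged.
assert apply_discount(100.0, "BOGUS") == 100.0
```

Because the feature is exercised on its own, a failure here points directly at the discount logic rather than at any integrated component.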
- Business object life cycles and scenarios
- What is end-to-end testing?
- Testing features in a specific sequence
- Testing horizontal integration
- When is end-to-end testing performed?
- The role of end-to-end testing during SIT
- The role of end-to-end testing during UAT
- Modelling business object life cycles
- Business object states
- External and internal events that trigger changes of state
- Grouping states
- Conditions and actions
- Modelling decisions
- Generating test scenarios
- Basing test scenarios on the life cycle of a business object
- Prioritising test scenarios based on risk
- Developing an end-to-end test specification
- End-to-end testing tools and test automation
- Planning and managing end-to-end testing
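The idea of generating end-to-end test scenarios from a business object's life cycle can be sketched as a small state machine whose paths become event sequences to test. The "Order" object, its states and its events below are illustrative assumptions, not examples taken from the course.

```python
# Minimal sketch: model a hypothetical "Order" business object's life
# cycle as a state machine, then derive end-to-end test scenarios as
# event sequences from the initial state to a terminal state.
# All state and event names are illustrative.

TRANSITIONS = {
    ("Created", "submit"): "Submitted",
    ("Submitted", "approve"): "Approved",
    ("Submitted", "reject"): "Rejected",
    ("Approved", "ship"): "Shipped",
    ("Shipped", "deliver"): "Delivered",
}
TERMINAL = {"Rejected", "Delivered"}

def scenarios(state="Created", path=()):
    """Enumerate event sequences that drive the object to a terminal state."""
    if state in TERMINAL:
        yield path
        return
    for (src, event), dst in TRANSITIONS.items():
        if src == state:
            yield from scenarios(dst, path + (event,))

for s in scenarios():
    print(" -> ".join(s))
```

Each printed sequence is a candidate end-to-end scenario; risk-based prioritisation would then decide which paths to execute first.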
- What is date-based testing?
- When is date-based testing performed?
- Date-based testing and feature testing
- Date-based testing and end-to-end testing
- Date-based events
- Relative and absolute dates
- Anniversaries and time periods
- Adding date-based events to the life cycle models
- System clocks
- Real time clocks
- Proxy clocks
- Date-based testing combined with feature and end-to-end testing
- Developing a date-based test specification
- Date-based testing tools and test automation
- Planning and managing date-based testing
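One common way to realise a proxy clock is to inject a controllable clock object into the code under test, so tests can advance time without touching the real system clock. The sketch below illustrates that pattern with a hypothetical anniversary event; the class and function names are assumptions, not course material.

```python
# Minimal sketch of a "proxy clock": production code asks an injected
# clock for today's date, so tests can advance time without touching
# the real system clock. Names are illustrative.

from datetime import date, timedelta

class ProxyClock:
    """A controllable stand-in for the system clock."""
    def __init__(self, start: date):
        self._today = start

    def today(self) -> date:
        return self._today

    def advance(self, days: int) -> None:
        self._today += timedelta(days=days)

def is_anniversary(clock: ProxyClock, opened: date) -> bool:
    """Date-based event: fires on each anniversary of an account opening."""
    t = clock.today()
    return (t.month, t.day) == (opened.month, opened.day) and t > opened

clock = ProxyClock(date(2024, 3, 14))
opened = date(2023, 3, 15)
print(is_anniversary(clock, opened))  # False: one day early
clock.advance(1)
print(is_anniversary(clock, opened))  # True: exactly one year on
```

Advancing the proxy clock lets a single test run cover relative dates, absolute dates and anniversaries that would otherwise take months of elapsed time to reach.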
- What is exploratory testing?
- When is exploratory testing performed?
- Exploratory testing and feature testing
- Exploratory testing and end-to-end testing
- Exploratory testing and date-based testing
- Skills required for exploratory testing
- Test specifications vs. checklists
- Exploratory testing tools and test automation
- Planning and managing exploratory testing
- Managing issues
- The importance of bug “triage”
- Bug tracking
- Requirements and change management
- Tracking progress
- Using “burn down” charts to track test execution progress
- Using the rate of finding bugs to track product quality
- Using the gap between finding and fixing bugs to track product readiness
- Finding a voice for product confidence
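The gap between bugs found and bugs fixed can be tracked with a simple cumulative count of open bugs per day. The sketch below uses made-up illustration data, not real project figures.

```python
# Minimal sketch: track the gap between bugs found and bugs fixed as
# a crude product-readiness signal. Input data is made up for
# illustration only.

def open_bug_trend(found_per_day, fixed_per_day):
    """Cumulative count of bugs still open at the end of each day."""
    open_bugs, trend = 0, []
    for found, fixed in zip(found_per_day, fixed_per_day):
        open_bugs += found - fixed
        trend.append(open_bugs)
    return trend

print(open_bug_trend([5, 8, 6, 3, 1], [2, 4, 7, 5, 4]))  # [3, 7, 6, 4, 1]
```

A gap trending toward zero, alongside a falling rate of newly found bugs, is one signal that the product is approaching readiness.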