Standards for Test Scripts
Purpose
This document sets out standards to guide the Test Team in writing Test Scripts.
Entry & Exit Criteria for Test Scripts
Entry Criteria / Inputs:
· Approved FDR
· Approved TDR
· Remedy Tickets related to BDEs
· Approved EID

Exit Criteria / Outputs:
· Approved Test Cases
· Approved Test Scripts
Steps for Creating Effective Test Scripts
Application Knowledge
· Obtain a comprehensive understanding of the functionality from functional and technical documentation.
· For BDEs, review the description provided in the corresponding Remedy Tickets along with the FDR and TDR.
· After discussing within the testing team, list the questions about the functions under test that need to be answered by the development team.
Test Scripts
- Test scripts should cover functionality, boundary conditions, and performance.
- When writing test scripts, bear in mind that team members may not be familiar with the function under test. Instructions should therefore be clear and step-by-step, covering all basic procedures.
- Write test scripts in plain, simple language.
- Use acronyms, technical terms, and jargon consistently, and only where the intended reader will understand them.
- Test scripts should target scenarios likely to expose errors, omissions, and unexpected results.
- Incorporate re-usable scripts to keep Test Scripts from becoming lengthy.
- Walk through each re-usable script to verify that it covers all the steps necessary for the test scenario.
- Re-usable scripts and Test Scripts may be modified during test execution, provided the team members are notified of the modifications.
- Provide the appropriate input for the re-usable script within the Test Script.
Example:
| Step Name | Description | Expected Result |
| --- | --- | --- |
| Step 1 | Execute the reusable script 'Log into Siebel'. Note: User login – sadmin, Password – xm0716 | User is logged in with administrative rights. |
- Provide distinct input to each Test Script, so that a variety of input combinations is covered.
Example:
Test Scenario 1:
| Step Name | Description | Expected Result |
| --- | --- | --- |
| Step 1 | Execute the re-usable script "Create a Consumer Account" Note: Create an invoice monthly billing profile | Consumer account is created. |
Test Scenario 2:
| Step Name | Description | Expected Result |
| --- | --- | --- |
| Step 1 | Execute the re-usable script "Create a Consumer Account" Note: Create a credit card billing profile | Consumer account is created. |
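The two scenarios above can be sketched as one parameterized helper. A minimal Python sketch (the function name, profile names, and return value are illustrative only, not part of any real test tool or Siebel API):

```python
# Hypothetical sketch: a re-usable script as a parameterized helper,
# so each Test Script supplies its own input (here, the billing profile).

def create_consumer_account(billing_profile: str) -> dict:
    """Re-usable script: create a consumer account with the given billing profile."""
    allowed = {"invoice_monthly", "credit_card"}
    if billing_profile not in allowed:
        raise ValueError(f"Unknown billing profile: {billing_profile}")
    # A real run would drive the application; here we return the
    # expected result so each scenario can assert against it.
    return {"account_created": True, "billing_profile": billing_profile}

# Test Scenario 1: invoice monthly billing profile
result1 = create_consumer_account("invoice_monthly")
# Test Scenario 2: credit card billing profile
result2 = create_consumer_account("credit_card")
```

Keeping the profile as a parameter means one re-usable script serves both scenarios, rather than duplicating the account-creation steps in each Test Script.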
Communication
- Review the created Test Cases within the team (Peer Review).
- Review the Test Cases with the development team, and incorporate the review comments they provide.
- Schedule a meeting with the development team to clarify the questions raised while going through the FDRs and TDRs.
Avoid Common Mistakes
- Making cases too long
- Incomplete, incorrect, or jumbled setup
- Leaving out a step
- Naming fields that changed or no longer exist
- Unclear whether tester or system does action
- Unclear what is a pass or fail result
- Failure to clean up
Guidelines
A test script is a set of actions, with expected results, based on the requirements for the system. Good test scripts have the following attributes:
Quality Attributes
- Accurate - tests what the description says it will test.
- Economical - have only the steps needed for its purpose.
- Repeatable, self-standing - gives the same results no matter who runs it. If only the writer can run the test and get the expected result, or if different testers get different results, the setup or actions need more work.
- Appropriate - for both immediate and future testers.
- Traceable - to a requirement.
- Self-cleaning - returns the test environment to its pre-test state. For example, it should not leave the test system set at the wrong date.
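The self-cleaning attribute follows a record/change/restore pattern. A minimal Python sketch, where the `TestEnvironment` class and its `system_date` field are hypothetical stand-ins for whatever state the test touches:

```python
# Hypothetical sketch of a self-cleaning test: whatever the test changes
# (here, the system date) is restored afterwards, even if a step fails.

class TestEnvironment:
    def __init__(self):
        self.system_date = "2024-01-01"  # pre-test state

env = TestEnvironment()
original_date = env.system_date      # record the state before the test
try:
    env.system_date = "2099-12-31"   # test action: set a future date
    # ... perform checks against the future-dated system here ...
finally:
    env.system_date = original_date  # clean-up: restore the pre-test state
```

The `finally` block guarantees the restore runs whether the test passes or fails, which is exactly what "not leaving the test system set at the wrong date" requires.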
Structure and testability
- Should have a name and number.
- Should state the purpose that includes what requirement is being tested.
- Should specify actions and expected results for every step.
- Should state whether any proof, such as reports or screen grabs, needs to be saved.
- Should be written in the active voice.
- Should not exceed 15 steps.
Testability means a script is easy to test accurately. "Easy" can be measured by how long the test takes to execute and whether the tester has to seek clarification during testing. "Accurately" means that if the tester follows the directions, the resulting pass or fail verdict will be correct.