Friday, September 26, 2008

Software Testing Process

1 The Approach

This section will not go into great detail on the testing process but will throw some light on a standard one. Any testing process starts with planning the test (Test Plan), building a strategy (How to Test), preparing test cases (What to Test) and executing them (Testing), and ends with reporting the results (Defects).

The process as a whole need not be iterative, but executing test cases and reporting results can be considered iterative in most cases. So how different is a web testing process from any other testing process? Practically, it is not the process that differs but the priority areas that need to be set for web testing. Key focus areas such as compatibility, navigation, user interaction, usability, performance, scalability, reliability and availability should be considered during the testing phase.

1.1 The Do’s

This section lists the areas and tasks one has to follow in a web testing process. Though they may be common to other testing processes, it is suggested that these areas are given enough attention.

2 Plan & Strategy

Neatly document the test plan and test strategy for the application under test. The test plan serves as the basis for all testing activities throughout the testing life cycle. Being an umbrella activity, it should reflect the customer's needs in terms of milestones to be met, the test approach (test strategy), the resources required and so on. The plan and strategy should give the customer clear visibility of the testing process at any point of time.

Functional and performance test plans, if developed separately, bring a lot more clarity to functional and performance testing. A performance test plan is optional if the application does not have any performance requirements.

2.1 The Do’s

Ø Develop the test plan based on an approved project plan

Ø Document test plan with major testing milestones

Ø Identify and document all deliverables at the end of these milestones

Ø Identify the resources (Both Hardware/Software and Human) required

Ø Identify all other external systems that are going to interact with the application. For example, the application might get its data from a mainframe server. Identifying such systems will help one plan for integration testing as well

Ø If performance testing is within the scope of testing, then clearly identify the application performance requirements, such as number of hits per second, response time, number of concurrent users etc. Details about the different methodologies used during the performance testing phase (spike testing, endurance testing, stress testing, capacity testing) can also be documented (see the load-test sketch after this list).

Ø Get the test plan approved.

Ø Include a Features to Be Tested section to communicate to the customer everything that will be tested during the testing life cycle

Ø Include a Features Not Tested section to communicate to the customer what will not be tested during the testing life cycle (as part of risk management)
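
To make such performance requirements concrete, it helps to express them as a small measurement script. The sketch below is only an illustration in Python; TARGET_URL and the numbers are assumptions, and a real engagement would normally use a dedicated load-testing tool rather than hand-rolled threads.

# A minimal load-test sketch, assuming the application exposes an HTTP
# endpoint at TARGET_URL (hypothetical) and that "N concurrent users"
# can be approximated with N worker threads.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"   # assumption: application under test
CONCURRENT_USERS = 10                   # taken from the performance requirements
REQUESTS_PER_USER = 20

def one_request(_):
    start = time.time()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
        resp.read()
    return time.time() - start          # response time in seconds

started = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    times = list(pool.map(one_request, range(CONCURRENT_USERS * REQUESTS_PER_USER)))
elapsed = time.time() - started

print("hits/second      :", round(len(times) / elapsed, 2))
print("avg response (s) :", round(sum(times) / len(times), 3))
print("max response (s) :", round(max(times), 3))

Run against a test environment, this gives rough hits-per-second and response-time figures that can be compared with the numbers documented in the plan.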

2.2 The Don’ts

Ø Do not use draft (unapproved) test plans for reference

Ø Do not ignore the test strategies identified in the test plan during testing.

Ø Do not make changes to any approved test plan without an official change request

Ø Do not mix the stages of testing (Unit testing, Integration testing, System testing, Functional testing etc) with the types of testing (Regression testing, Sanity testing, User Interface testing, Smoke testing etc) in the test plan. Identify them uniquely with their respective input and exit criteria.

3 Test Case Design

Any testing effort is only as good as its test cases, since the test cases reflect the test engineer's understanding of the application requirements. A good test case is one that identifies errors not yet discovered.

3.1 The Do’s

Ø Identify test cases for each module

Ø Write test cases so that each step is executable.

Ø Design more functional test cases.

Ø Clearly identify the expected results for each test case

Ø Design the test cases around workflows so that they follow the sequence of the web application during testing. For example, for a mail application such as Yahoo, the flow has to start with registration for new users, then signing in, composing a mail, sending it and so on.

Ø Security is a high priority in web testing, so document enough test cases related to application security.

Ø Develop a traceability matrix to understand how the test cases cover the requirements (a small sketch follows this list)
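
As an illustration only, a traceability matrix can be as simple as a mapping from requirement IDs to the test cases that exercise them. The IDs below are hypothetical, and a spreadsheet works just as well; the point is that uncovered requirements become visible.

# A minimal traceability-matrix sketch with made-up requirement and
# test-case IDs; any requirement without a test case is flagged.
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

matrix = {                      # requirement -> test cases covering it
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-4": ["TC-04", "TC-05"],
}

for req in requirements:
    cases = matrix.get(req, [])
    print(req, ":", ", ".join(cases) if cases else "NOT COVERED")

covered = sum(1 for r in requirements if matrix.get(r))
print("Coverage:", covered, "of", len(requirements), "requirements")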

3.2 The Don’ts

Ø Do not write repetitive UI test cases. They lead to high maintenance, since the UI will evolve over the course of the project.

Ø Do not write more than one execution step in each test case.

Ø Do not concentrate on negative paths for user acceptance test cases if the business requirements clearly indicate how the application will behave and be used by the business users.

Ø Do not fail to get the test cases reviewed by the individual module owners of the development team. This keeps the entire team on the same page.

Ø Do not leave any functionality uncovered by the test cases unless it is listed in the test plan under features not tested.

Ø Try not to write test cases for error messages based on assumptions. Document error-message validation test cases only if the exact error message to be displayed is given in the requirements.
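
A minimal sketch of such an error-message validation test; the validation function and the message text below are hypothetical, and the key point is that the expected string is taken verbatim from the requirements rather than assumed.

# Assumed requirement text: "Password must be at least 8 characters"
import unittest

REQUIRED_MESSAGE = "Password must be at least 8 characters"   # exact text from requirements

def validate_password(password: str) -> str:
    # stand-in for the application's validation logic
    return "" if len(password) >= 8 else REQUIRED_MESSAGE

class ErrorMessageTest(unittest.TestCase):
    def test_short_password_message(self):
        # compare against the exact text given in the requirements
        self.assertEqual(validate_password("abc"), REQUIRED_MESSAGE)

if __name__ == "__main__":
    unittest.main()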

4 Testing

This phase is crucial from the customer's standpoint, be the customer internal or external. All the effort put into the earlier phases of testing reaps results only in this phase.

A good test engineer should always work towards breaking the product, right from the first release to the final release of the application (the killer attitude). This section covers not just testing but all the activities related to it, be it defect tracking, configuration management or the testing itself.

4.1 The Do’s

Ø Ensure that the testing activities are in sync with the test plan

Ø Identify the areas where the team is not technically strong and might need assistance or training during testing. Plan and arrange for this technical training in advance.

Ø Strictly follow the test strategies as identified in the test plan

Ø Try to get release notes from the development team containing the details of the release made to QA for testing. These should normally contain the following details:

o The version label of code under configuration management

o Features part of this release

o Features not part of this release

o New functionalities added/Changes in existing functionalities

o Known Problems

o Fixed defects etc.

Ø Stick to the input and exit criteria for all testing activities. For example, if the input criterion for a QA release is sanity-tested code from the development team, ask for the sanity test results.

Ø Update the test results for the test cases as and when you run them

Ø Report the defects found during testing in the tool identified for defect tracking

Ø Take the code from configuration management (as identified in the plan) for build and installation.

Ø Ensure that the code is version controlled for each release.

Ø Classify defects (P1/P2/P3/P4, or Critical/High/Medium/Low, or any other scheme) by mutual agreement with the development team so as to help developers prioritize defect fixes (a small classification sketch follows this list)

Ø Do a sanity test as and when a release is made by the development team.
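
As one possible illustration of such an agreement (the P1 to P4 scheme and the fields below are assumptions, not the scheme of any particular defect-tracking tool), a defect record and the order in which developers might pick up fixes could look like this:

# A minimal defect-classification sketch with a made-up priority scheme.
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    P1 = 1   # critical: blocks testing or a main business flow
    P2 = 2   # high: major function broken, workaround exists
    P3 = 3   # medium: minor function or content issue
    P4 = 4   # low: cosmetic

@dataclass
class Defect:
    defect_id: str
    summary: str
    priority: Priority
    status: str = "Open"

defects = [
    Defect("D-101", "Login fails for new users", Priority.P1),
    Defect("D-102", "Typo on the help page", Priority.P4),
]

# developers pick up the highest-priority open defects first
for d in sorted(defects, key=lambda d: d.priority.value):
    print(d.defect_id, d.priority.name, d.summary)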

4.2 The Don’ts

Ø Do not update the test cases while executing them. Track the changes and update them based on a written reference (SRS, functional specification etc.). People normally tend to update test cases based on the look and feel of the application.

Ø Do not track defects in many places, i.e. in Excel sheets as well as in a defect tracking tool. This increases the time needed to track all the defects. Use one centralized repository for defect tracking

Ø Do not take the code from a developer's sandbox for testing if it is an official release from the development team

Ø Do not spend time in testing the features that are not part of this release

Ø Do not focus your testing on non-critical areas (from the customer's perspective)

Ø Even if the defect identified is of low priority, do not fail to document it.

Ø Do not leave room for assumptions while verifying the fixed defects. Clarify and then close!

Ø Do not hastily update the test results without actually running the test cases, assuming they worked in earlier releases. Such preconceived notions become big trouble when that functionality suddenly stops working and the customer finds it later.

Ø Do not focus on negative paths that will consume a lot of time but will be least used by the customer. Though these need to be tested at some point, the idea really is to prioritize tests.

5 Test Results

What comes next after the testing is complete? Can testing be considered complete at that point?

The answer is no. Any testing activity should always end with the test results. The results cover both the defects and the outcomes of the test cases executed during testing.

5.1 The Do’s

Ø Ensure that a defect summary report is sent to the Project Lead after each release is tested. At a high level this can cover the number of open/reopened/closed/fixed defects. Drilling down, the report can also contain the priorities of the open and reopened defects.

Ø Ensure that a test summary report is sent to the Project Lead after each release is tested. This can contain the total number of test cases, the number executed, the number passed, the number failed, the number not run and the number not applicable. "Not run" here means test cases that could not be run, whether due to non-availability of the production environment, non-availability of real-time data or some other dependency; looking at the not-run test cases should therefore give a clear picture of which areas were not tested. It should not include test cases that were skipped merely for lack of time. (A small summary sketch follows this list.)

Ø At a high level, if the above details are tracked across all releases, they should give a clear picture of the growing stability of the application.

Ø Track the metrics identified during the planning stage
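
A minimal sketch of how such a summary might be computed; the status values and test-case IDs are illustrative and not tied to any particular test-management tool.

# A minimal test-summary sketch over made-up per-test-case results.
from collections import Counter

results = {                     # test case -> outcome for this release
    "TC-01": "Passed",
    "TC-02": "Passed",
    "TC-03": "Failed",
    "TC-04": "Not Run",         # e.g. production environment unavailable
    "TC-05": "Not Applicable",
}

counts = Counter(results.values())
print("Total test cases:", len(results))
for status in ("Passed", "Failed", "Not Run", "Not Applicable"):
    print(status + ":", counts.get(status, 0))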

5.2 The Don’ts

Ø Do not attempt to send anyone a huge amount of information on the test results; keep it precise. You need not include the individual execution steps that failed during testing, as sitting down and going through those test cases would be a tedious process for the reader.

Ø Finally, what counts is how easily the test result information can be interpreted, so do not leave room for assumptions when presenting the test metrics. Make it simple!

6 Conclusion

In conclusion, testing should focus on 100% test coverage rather than 100% test case coverage, because 100% test case coverage alone cannot guarantee that the application is thoroughly tested; it is the test coverage that really matters. Testing is no longer a monotonous job, as it poses many more challenges than it was once thought to. The quality of the product is ultimately the outcome of how well the testing was done, and in the end it is the effectiveness of the testing that counts when it comes to customer satisfaction. Testing also gives one avenues to learn new tools that can help in the testing process. But testing can only show the presence of defects; it cannot certify their absence. Hence what really counts is delivering a defect-free product.

Thursday, September 4, 2008

Product Testing

As the name itself suggests, this category deals with testing the complete product inside out. The product is tested at different stages and with different methods.

During the initial testing of a new Product, the Product is given to the QA department as Modules. A Module is basically a major piece of functionality that the Product has to perform. Consider an example where a Package comprises more than one Product; in this case, a Module will be either a Product or one of the major functionalities that one of the Products has to perform.

The test cycle for a Module or Product is based on the features that need to be tested and the number of resources on the team looking into it. When a Package consists of more than one Product, it is natural for the testing team to be divided into multiple teams, each looking into a particular Product. Each team is led by a Team Lead who coordinates with the Manager or Developers throughout the life of the Product.

When testing a Product, its features are verified thoroughly and confirmed to be as per the specifications made by the Client, or as per the Requirement spec given at the time the Product was developed. To accomplish this, Test Cases are created and run. Issues noticed are reported to the developers and to the concerned higher officials, and they are kept track of for further verification.

In the next cycle of testing, fixes made for the issues found in the previous cycle are verified, and the complete Product is also verified again to make sure that what was working earlier is still working.

After a couple of cycles, a DRM (Defect Resolution Meeting) is conducted to reach a conclusion on certain defects that have not been fixed.

In between, the Modules are integrated and the complete Product is developed. Now all the Modules have to be rechecked to make sure that the issues fixed are still fixed and that the features which were working properly are still working properly. This check is known as Regression Testing. When Modules are integrated, there is communication happening between them: the output of one Module could be the input of another. So here the testing is done with the actual input and not with a dummy input, which tests the integrated Modules a bit more thoroughly. This is known as Integration Testing, as sketched below.
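
A minimal integration-test sketch under stated assumptions: the two "modules" below are hypothetical stand-in functions, used only to show the actual output of one module being fed as the input of the next and verified, rather than a dummy value.

# Integration test: module B is exercised with the real output of module A.
import unittest

def parse_order(raw: str) -> dict:          # "module A": parses an order line
    item, qty = raw.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def price_order(order: dict) -> float:      # "module B": prices the parsed order
    unit_prices = {"pen": 2.0, "book": 10.0}
    return unit_prices[order["item"]] * order["qty"]

class IntegrationTest(unittest.TestCase):
    def test_parse_then_price(self):
        order = parse_order("book, 3")      # actual output of module A
        self.assertEqual(price_order(order), 30.0)

if __name__ == "__main__":
    unittest.main()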

A Build (installation media) might be made at this level and released to the QA department for testing. From then on, once a significant number of fixes have been made, fixes are no longer received as individual files; files are given only for fix verification, and then another Build is made and released for testing.

This cycle is repeated until the Product reaches a stable state in which it can be released to the market. The QA Manager, Product Manager, Development Leads and QA Leads decide on the stability of the Product.

Once a release has been made to the market, Product development does not stop there. Requirements can come in from the Clients, and issues that were not fixed for this release will be looked into. This results in another development cycle of the Product, which means that a QA cycle also begins.

Based on the requirements from the Clients and the issues that are going to be fixed for the next release, QA starts updating its present Checklists and Test Cases, or creating new ones, before the first deliverable from the Development team. The test cycle then runs until the next release.

Now this testing has to be performed on all the operating systems (OS) that the company proposes to support, because some of the functions used in creating the Product might not be available, or might not behave the expected way, on another OS. It has been noticed that if the developer works on Windows NT, there could be issues that are found only on OSs other than Windows NT. Moreover, there could be dependency files required for the Product to work that the developer missed out.

It is mostly in this category that the different kinds of testing are done, such as automation testing, white-box testing, code coverage and memory testing.