Software Development Life Cycle

Introduction to Testing

This section is exclusively about testing. It covers services, concerns, and solutions, and much of the information is also useful for novice test engineers. Testing a software module or product is essentially about ensuring that the implemented features fulfil their requirements and that the product is robust enough against load, stress, and user interaction in practically possible scenarios.

Important test report attributes

The following attributes are used to make a good Test Case document and report.

Test Case ID, Title, Steps to Execute, Expected Result, Actual Result, Result Snap, Priority, Severity, Probability, Status, and Remarks

In addition to the above, very important attributes from a Quality Assurance point of view are "Prepared by", "Reviewed by", and "Executed by".
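
For teams that keep test cases in a tool or script rather than a spreadsheet, the attributes above map naturally onto a simple record. Below is a minimal Python sketch of my own (not a mandated template); the field names simply mirror the attributes listed.

    # Illustrative only: a test case record carrying the report attributes above,
    # including the QA sign-off fields "Prepared by", "Reviewed by", "Executed by".
    from dataclasses import dataclass

    @dataclass
    class TestCaseRecord:
        test_case_id: str
        title: str
        steps_to_execute: list[str]
        expected_result: str
        actual_result: str = ""
        result_snap: str = ""        # path or reference to a result screenshot
        priority: str = "Medium"     # High / Medium / Low
        severity: str = "Major"      # Critical / Major / Minor
        probability: str = "Medium"  # likelihood of hitting the failure in the field
        status: str = "Not Run"      # Not Run / Pass / Fail / Blocked
        remarks: str = ""
        prepared_by: str = ""
        reviewed_by: str = ""
        executed_by: str = ""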

Concern-1: Exhaustive Test Cases vs. Insufficient Time

Management demands and encourages exhaustive test cases to prevent defects from escaping the testing effort. At the same time, management gives insufficient time for testing and expects a tested product on hand for delivery.

Here I encourage test engineers to write Objective Test Cases instead of a conventional test case for each step. The advantage is fewer test cases to take up for testing and more effective testing with better results compared to conventional test cases.

What is an Objective Test Case?

Let us say a new file is to be saved after editing. Instead of writing individual test cases for creating the file, editing the file, and saving the file, write a single test case covering the steps of creating, editing, and saving the file. All the features are covered and tested while achieving the objective of saving the edited new file. One can write multiple Objective Test Cases for different scenarios.

In my experience, about 1000 test cases can be reduced to about 100 Objective Test Cases. Objective test cases are also highly helpful for test automation: because the number of cases is reduced, automation can be organized easily.
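
As a concrete illustration, the single Objective Test Case above can also be automated as one test function. The sketch below assumes Python with pytest (my choice of tooling, not prescribed here); the file name and content are hypothetical.

    # One objective test covering create -> edit -> save, instead of three
    # separate step-level test cases (pytest's tmp_path fixture gives a temp dir).
    from pathlib import Path

    def test_save_edited_new_file(tmp_path: Path) -> None:
        # Step 1: create a new file
        target = tmp_path / "report.txt"
        target.write_text("initial draft")
        assert target.exists()

        # Step 2: edit the content
        edited = target.read_text().replace("draft", "final copy")

        # Step 3: save and verify the overall objective
        target.write_text(edited)
        assert target.read_text() == "initial final copy"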

 

Concern-2: Confusion over Priority and Severity

I use the words severity and priority in the following way for embedded software products. I hope that, with the help of the information below, one can imagine several combinations of Severity and Priority.
 
Priority values can be High, Medium, and Low. Usually the priority is set or chosen by the development team based on client requirements or on an approach that demands implementing solutions one after another to avoid side effects. Sometimes management uses the term "Top priority"; I have seen that ultimately every remaining defect becomes Top priority to fix.
 
Severity values can be Critical, Major, and Minor. The severity field shall be set by a trained test engineer based on the definitions below.
 
Default priorities can be set by the testing department against each defect severity as follows.
 
Severity   Priority
--------   --------
Critical   High
Major      Medium
Minor      Low
 
Note: Usually the priority keeps changing as decided by the development team, while the severity always remains the same throughout the life cycle of a defect.
 
The following definitions can be given to the severity values.
Critical: A show stopper; for example, the system has crashed, hung, deadlocked, or become dead slow, the user is unable to proceed further, or a requirement is not implemented.
 
Major: The system is running and the requirement is implemented, but the functionality is not as expected or performance is very slow. "Wrong version number is printed" can be a major defect.
 
Minor: A GUI-related issue such as a text size, font, graphic placement, or colour problem.
 
However, the above definitions may sometimes change according to end-user reactions. For instance, if a product has a small scratch on its screen, a user may prefer not to buy it; in this case the severity is critical, not minor.
 
In addition to Severity, I encourage using a Probability field as well; it enables us to negotiate on-time product delivery when certain known defects are still open.
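
To make the combination concrete, the sketch below (entirely my own illustration, not a prescribed schema) shows a defect record where the default priority is derived from severity and a probability field is carried alongside for delivery negotiations.

    from dataclasses import dataclass

    # Default priority per severity, as in the table above; the development team
    # may still change the priority later, while severity stays fixed.
    DEFAULT_PRIORITY = {"Critical": "High", "Major": "Medium", "Minor": "Low"}

    @dataclass
    class Defect:
        defect_id: str
        title: str
        severity: str                # Critical / Major / Minor, set by the test engineer
        probability: str = "Medium"  # how likely the defect is to occur in practice
        priority: str = ""           # left empty to pick up the default below

        def __post_init__(self) -> None:
            if not self.priority:
                self.priority = DEFAULT_PRIORITY[self.severity]

    # Example: a minor, low-probability GUI defect may be acceptable at delivery.
    d = Defect("DEF-101", "Button label misaligned", severity="Minor", probability="Low")
    print(d.priority)  # -> Low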

Concern-3: When to stop testing?

The following are helpful in making a decision on when to stop testing.

1. The execution of all identified types of test cases is over.
2. There is no more time to test.
3. There is no more effort sanctioned for testing (as there is no end to testing).
4. There is no more change required in the tested code.
 
Ultimately, once the regression test and smart regression test are over, the tested version can be accepted for delivery by negotiating open defects based on their severity and probability, provided the client is ready to accept the tested software product with any known open defects.

Concern-4: Why is the smoke test useless?

The problem with the smoke test is free-hand testing: it has no time limit, direction, or consistency. Instead, I suggest making Entry Level Objective Test Cases. These are surely helpful in deciding whether the product can be handed over for a demo or accepted for regression testing. The duration shall not exceed two hours, covering all critical and major features of the product with hardly 15 to 20 Objective Test Cases.

Please note that only a test engineer who is experienced and familiar with the product/application shall be allowed to do Entry Level testing, so that he/she can quickly observe unexpected behaviour of the product/application functionality.

Concern-5: Last-minute code is modified! No time for a regression test!

The most challenging situation is last-minute changes after completion of a thorough regression test. Practically, such situations do arise, and there is no need to blame any actor in the SDLC phases. We introduced the concept of smart regression for when some code is modified after the regression test cycle. The systematic approach is that a well-experienced developer identifies the features/functionality and modules affected by the code change. The developer has to freeze the code after unit and integration tests. The test engineer then has to execute the test cases of the respective affected features/modules. To assure the overall system/application, Entry Level Objective Testing is mandatory in order to give quality assurance along with the regression test report and smart regression test reports.
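
A minimal sketch of the test selection step is given below; the module names and the module-to-test-case mapping are invented for illustration and are not taken from any particular tool.

    # Smart regression: execute only the test cases of features/modules affected
    # by the last-minute code change reported by the developer.
    TEST_CASES_BY_MODULE = {
        "file_io":   ["TC-001", "TC-002", "TC-003"],
        "reporting": ["TC-010", "TC-011"],
        "gui":       ["TC-020", "TC-021", "TC-022"],
    }

    def select_smart_regression(changed_modules: list[str]) -> list[str]:
        """Return the test cases to execute for the given changed modules."""
        selected: list[str] = []
        for module in changed_modules:
            selected.extend(TEST_CASES_BY_MODULE.get(module, []))
        return selected

    # Developer reports that only file_io and reporting were touched after the freeze.
    print(select_smart_regression(["file_io", "reporting"]))
    # -> ['TC-001', 'TC-002', 'TC-003', 'TC-010', 'TC-011']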

Concern-6: Multiple releases! Phobia of defects being found

Some project leaders with a phobia of defects make multiple releases because they are nervous about statistics showing many defects found in their project or product. Typically, these project leaders fix the defects found, make a release instantly, and beg the testing team to take up the latest version for fresh testing. The result is chaotic.

The Software Test Life Cycle shall be well defined, and the process shall be shared with all the actors of the SDLC.

Concern-7: When to accept or reject a software product for regression testing?

The Software Test Life Cycle expects certain things from the development team before a product or work product is submitted for regression testing: release notes with a version number covering the requirements change note, unit test report, integration test report, technical change note, requirements implemented, defect fixes, and affected features or functionality.

PROCESS TO TAKE UP A SOFTWARE PRODUCT FOR REGRESSION TESTING

  • Step-1: Entry Level Objective Testing. If the result contains critical defects, the product shall be rejected.
  • Step-2: Regression Test or Smart Regression Test. If the product is submitted for the first time, try to execute all the applicable test cases for the respective implemented modules and features of the software product or work product.
  • Step-3: Initially track defects on a spreadsheet; after approval by the development team, the accepted defects shall be posted on a central repository to make them available online to all the actors of the SDLC.
  • Step-4: Generate the statistics, do the analysis, and conduct a meeting to communicate the status of defects after the regression cycle.

RULES THAT SHALL BE STRICTLY FOLLOWED FOR CLARITY AND ASSURANCE OF TESTING

  • Rule-1: A regression test shall be done on a single version only.
  • Rule-2: Never accept an intermediate version during a regression cycle by stopping the version already under test.
  • Rule-3: Testing two versions in parallel is possible.
  • Rule-4: Smart regression cycle time shall reduce as the product improves with each cycle of testing.
  • Rule-5: Each new defect found shall be converted into a test case and added to the test case document, so that it is covered in subsequent test cycles.

Concern-8: Which model is best suited for testing?

There are a variety of models that an organization can try for testing. I feel testing shall be highly flexible, able to mould into any of the models demanded by software projects, products, or services.

Global Testing Team: An independent testing team with organization-wide testing goals defined and practiced. The team has the authority to accept or reject the submitted software deliverables. This model helps when an organization establishes units in different geographical locations for business expansion and continuity. Usually UAT (User Acceptance Testing) is conducted with the help of the Global Testing Team, in which highly experienced professionals are involved. The members of the global team shall be able to replicate the real-time environment, or at least approximate those conditions, to test the software product for acceptance.

Local Testing Team: An independent testing team with local business-unit testing goals defined and practiced. The team has the authority to accept or reject the submitted work products/products for regression testing. In this model, a team of test engineers is assigned to the projects undertaken by the local business unit. That means, regardless of project, a test engineer moves across whichever projects demand his/her services and expertise. In this model, testing resources can be utilized effectively and efficiently.

Distributed Testing Team: In this model, test engineers are distributed across all the development teams so as to work closely with software engineers. However, the distributed team shall report to the test manager to ensure quality control.

Dedicated Testing Team: A long-term, large project with numerous components to develop and at least 25 software engineers involved requires a dedicated testing team. In this model, project managers and test managers work closely together to achieve the goals.

Integrated Testing Team: In this model, the project manager becomes wholly responsible for quality deliverables, including testing. The test engineers are tightly integrated with the development team. This model does not give management confidence in the quality of the deliverables, as there is a possibility of hiding defects or skipping test cases due to delivery pressure.

 

Concern-9: How to close a potential non-repeatable bug?

A process shall be defined for closing a non-repeatable (NR) bug found in software. In fact, the so-called NR defect is actually a potentially non-repeatable (PNR) defect; its final state is either non-repeatable or repeatable.

The common mistake made while trying to make a PNR defect repeatable is that the defect is always attempted with free-hand testing in the same or similar ways by different people, instead of with different, well-thought-out test cases.

  • Step-1: A dedicated file shall be opened to track the history of the PNR bug, with the available information as input.
  • Step-2: Collect as much information as possible from the originator who reported the defect.
  • Step-3: Speak to the feature owner or developer who may be able to reproduce it, based on an understanding of how it could occur in the given context.
  • Step-4: Make different possible test cases based on the behaviour and context in which the defect might have occurred.
  • Step-5: Execute all the possible test cases and rule out potential occurrences of the PNR defect.
  • Step-6: Convert the PNR to NR and then close it, based on the test report covering all the possible test cases tried out.
  • Step-7: Re-open the PNR bug file if the defect occurs again accidentally, and then repeat from Step-1.

Note: A defect that is not repeatable is not a defect, and hence there is no need to try to fix it.

I have come across some exceptional software engineers who can visualize how such defects might occur; they used to reproduce and fix so-called NR defects, or at least refine the code to rule out such defects occurring in the future.

Usual Mistakes while closing defects:

Some test engineers test for a defect on the latest version and, if it is not found, prefer to close the defect as fixed. Interestingly, if you ask the developer, he/she may reply that no fix was even attempted. Here the test engineer is supposed to investigate from which version onwards the defect exists.
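
A small sketch of that investigation is shown below; the version labels and the reproduction check are placeholders of my own, not part of any specific tool.

    # Check every build in release order and record where the defect appears,
    # instead of closing it just because the latest build does not show it.
    def defect_reproduces(version: str) -> bool:
        """Placeholder: execute the defect's test case against the given build."""
        raise NotImplementedError("run the defect's test case on this version")

    def reproduction_history(versions: list[str]) -> dict[str, bool]:
        return {version: defect_reproduces(version) for version in versions}

    # Example usage with hypothetical build labels, oldest first:
    # reproduction_history(["v1.0", "v1.1", "v1.2", "v2.0"])
    # Present in v1.1 but absent from v1.2 onwards => actually fixed in v1.2;
    # absent everywhere => the defect may never have existed in these builds.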

 

Concern-10: How to make a static test plan?

A test plan transforms from a static state to a dynamic one and then back to static. However, the initial static stage is based on initial visibility, while the final static stage is as per the final requirements. In other words, requirements, experience, and exposure cause the state of the test plan to change.

Practically, a test plan becomes dynamic so as to meet changing requirements and to cover unexpected practical scenarios.

The basic approach for any plan is to start with a baselined or frozen plan to implement. Upon demand, the plan should be changed dynamically; otherwise QC and QA may go out of control. In other words, people with more domain experience and expertise can make a better static test plan as early as possible.

Concern-11: What software testing metrics are required by management?

Numerous metrics can be named for the STLC. However, management is usually not interested in all the textbook material; they want quick decision making. As far as software testing is concerned, metrics can be categorized as rates of test case drafting, test case execution, defects found, and defects fixed, along with the number of applicable test cases identified, executed, and remaining to execute.

Also useful are the number of High, Medium, and Low priority test cases applicable for execution, and the number of known Critical, Major, and Minor defects found, fixed, and open.

Several other comparisons may be required, such as the average number of test cases against the number of requirements or against the components of the product/application.

Other metrics include defects found against each version and component. Pareto analysis is highly helpful in identifying which components have more defects, so that defect fixing can be directed to stabilize the product/application as quickly as possible.

Test management metrics include Earned Value, Actual Cost, Cost Performance, Schedule Performance, Budget at Completion, Estimate to Complete, Estimate at Completion, Schedule Variance, and Cost Variance.
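
For reference, these earned-value figures follow the standard formulas; the sketch below computes them from sample numbers that are purely illustrative.

    # Standard earned-value relationships: CV = EV - AC, SV = EV - PV,
    # CPI = EV / AC, SPI = EV / PV, EAC = BAC / CPI, ETC = EAC - AC.
    def evm_summary(ev: float, pv: float, ac: float, bac: float) -> dict[str, float]:
        cpi = ev / ac
        spi = ev / pv
        eac = bac / cpi
        return {
            "Cost Variance": ev - ac,
            "Schedule Variance": ev - pv,
            "Cost Performance Index": cpi,
            "Schedule Performance Index": spi,
            "Estimate at Completion": eac,
            "Estimate to Complete": eac - ac,
        }

    # Example: 80 units of testing work earned, 100 planned, 90 spent, 200 budgeted.
    print(evm_summary(ev=80, pv=100, ac=90, bac=200))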

Concern-12: How to weigh the product in terms of faultiness?

"An important metric is the number of defects found in internal testing compared to the defects found in customer tests which indicate the effectiveness of the test process itself." - Chinjju Elizabeth Koshy

The above statement prompted me to share my personal experience with you.

This is one metric that helps us assess the efficiency of the internal testing team by comparing internal testing, UAT, and customer-reported defects.

There is another kind of metric required for a product/application. I was personally involved in a top-management assessment of a product released to the market. Giving appropriate weights to reported defects gives better visibility for such an assessment.

Suppose there are 9 defects reported in total: from the internal testing team (3 defects), the UAT team (3), and end users (3). The impression will be of just 9 defects, assuming the weight of each defect is the same regardless of test level and defect severity.

If weights are assigned rationally for each level of testing, that creates a real urgency to fix the open defects. For instance, a critical defect shall have a weight of 10 while a minor defect shall have a weight of 1, which means that one critical defect is equal to ten minor defects.

For example: a product is released to the market with some known open defects and then receives some new defects from the field. How do we weigh the product in terms of its faultiness or open known defects?

Weights can be given as follows:

Internal Team Finding Weights: Critical - 10, Major - 5, Minor - 1
UAT Finding Weights:           Critical - 20, Major - 10, Minor - 2
End User Finding Weights:      Critical - 40, Major - 20, Minor - 4

Suppose the internal team finds three defects, one each of critical, major, and minor severity; the cumulative weight is 10 + 5 + 1 = 16.
Similarly, three defects reported by the UAT team (also critical, major, and minor) give a cumulative weight of 20 + 10 + 2 = 32.
Similarly, three defects reported from the field (also critical, major, and minor) give a cumulative weight of 40 + 20 + 4 = 64.

So the net weight is 16 + 32 + 64 = 112

The impression now is that although nine defects exist in this specific version of the product, in terms of product faultiness it is rated at 112, which is equivalent to 112 internally found minor defects or roughly 11 internally found critical defects.
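
The same arithmetic can be scripted so the weighting is repeatable from release to release; the sketch below simply reuses the illustrative weights and defect counts from the text.

    WEIGHTS = {
        "internal": {"Critical": 10, "Major": 5, "Minor": 1},
        "uat":      {"Critical": 20, "Major": 10, "Minor": 2},
        "field":    {"Critical": 40, "Major": 20, "Minor": 4},
    }

    def faultiness(defects: dict[str, dict[str, int]]) -> int:
        """Sum defect counts per severity, weighted by where they were found."""
        return sum(
            WEIGHTS[level][severity] * count
            for level, by_severity in defects.items()
            for severity, count in by_severity.items()
        )

    reported = {
        "internal": {"Critical": 1, "Major": 1, "Minor": 1},  # weight 16
        "uat":      {"Critical": 1, "Major": 1, "Minor": 1},  # weight 32
        "field":    {"Critical": 1, "Major": 1, "Minor": 1},  # weight 64
    }
    print(faultiness(reported))  # -> 112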

Concern-13: Which is better, automation testing or manual testing?

A combination of both manual and automation testing, each with a percentage of the test cases, can give the best results. At times automation testing is mandatory to cover all the permutations and combinations of scenarios where manual testing is nearly impossible due to sheer volume. However, manual testing cannot be ruled out where human judgment is mandatory.

Logically speaking, if a component of a product is used by a system, then automation is mandatory; if it is used by a human, then manual testing is mandatory.

For instance, for sending SMS to millions of mobile numbers by clicking one soft key, automation testing is mandatory. For sending SMS to selected mobile numbers, both manual and automation testing can be preferred. Product look-and-feel and use-and-feel testing should be manual. Source code profiling should be done using automation, while application response time to user interaction would be tested using both manual and automation testing.

Sometimes cost drives management's decision between automation and manual testing. If automation costs more and takes longer to develop, then manual testing is preferred.

So the choice between automation and manual testing is subjective, driven by cost-effectiveness, and each has its own importance.

 

Concern-14: Process to estimate the time for testing an application

The following process can be used to estimate the time required to test an application; a rough sketch of the calculation follows the list.

  1. Identify the number of applicable test cases
  2. Estimate the effort (time) required to execute the applicable test cases
  3. Prioritize the applicable test cases to execute
  4. Identify the set of test cases that can be executed in parallel
  5. Identify the input dependencies and their lead times
  6. Build the critical path based on input lead times, the priority of sequential test cases, and the parallel test cases
  7. Identify the availability of skills and resources (people, tools, and test equipment) to support the critical path; otherwise, change the critical path as per the available skills and resources.
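
The rough sketch below illustrates the core of the calculation, assuming (my assumption, for illustration only) that sequential test cases add up while a group of parallel test cases contributes only its longest member.

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        case_id: str
        duration_hours: float
        lead_time_hours: float = 0.0  # waiting time for input dependencies

    def estimate_hours(sequential: list[TestCase],
                       parallel_groups: list[list[TestCase]]) -> float:
        total = sum(tc.lead_time_hours + tc.duration_hours for tc in sequential)
        for group in parallel_groups:
            # A parallel group finishes when its slowest member finishes.
            total += max(tc.lead_time_hours + tc.duration_hours for tc in group)
        return total

    seq = [TestCase("TC-001", 2.0), TestCase("TC-002", 1.5, lead_time_hours=0.5)]
    par = [[TestCase("TC-010", 3.0), TestCase("TC-011", 2.0)]]
    print(estimate_hours(seq, par))  # -> 7.0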

 

Concern-15: Difference between test strategy and test methodology

Test strategy is at the macro level, while test methodology is at the micro level.

Test strategy is defined and implemented by management, with alternative plans, whereas selecting or defining a suitable methodology for testing is done by the test lead.

Test strategy is always defined based on assumptions, before the required inputs are realized; the strategy becomes more precise once those inputs are realized. As far as strategy is concerned, resource planning is crucial, and the main challenge is the utilization of planned resources when delays happen on interdependencies.

When to start and which plan to initiate are part of the test strategy. How to test and which method gives the best result are part of the test methodology.

Sometimes implementation of a plan can be suspended by management as part of their strategy.

 

Concern-16: What checks are required before accepting a module from a developer?

The following can be part of the checklist before accepting a module from a developer; a small sketch of applying such a checklist appears after the list:

1. Is the unit test done and report produced?
2. Is the integration test done and report produced?
3. Are the open defects posted on the centralized defect tracking database?
4. Are there any defects fixed in this release?
5. Is the version number updated?
6. Are technical notes provided?
7. Which functionality of the complete product/application is affected by the changes made in this release?
8. Is the latest source code reviewed and updated in the configuration management tool?
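
The sketch below (field names are my own, purely for illustration) shows how such a checklist can be applied mechanically to a release submission before regression testing begins.

    CHECKLIST = [
        "unit_test_report",
        "integration_test_report",
        "open_defects_posted",
        "version_number_updated",
        "technical_notes",
        "affected_functionality_listed",
        "source_code_reviewed_and_checked_in",
    ]

    def accept_module(submission: dict[str, bool]) -> tuple[bool, list[str]]:
        """Return whether the module can be accepted and which checks are missing."""
        missing = [item for item in CHECKLIST if not submission.get(item, False)]
        return (not missing, missing)

    ok, missing = accept_module({
        "unit_test_report": True,
        "integration_test_report": True,
        "open_defects_posted": True,
        "version_number_updated": True,
        "technical_notes": False,
        "affected_functionality_listed": True,
        "source_code_reviewed_and_checked_in": True,
    })
    print(ok, missing)  # -> False ['technical_notes']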

Concern-17: Approach to completing testing within 40% of the estimated time

I would like to share my experience, as this situation arises in most cases.

The testing activity takes its own time regardless of pressure and prioritization. Usually an application developed under more pressure and with less time is prone to more defects. Obviously, the testing team always gets time for iterations as the application gets rejected. So, keeping this in view, one should prioritize test cases that can conclude and reject a defective application quickly; the lower-priority test cases are then executed in the remaining time, in parallel with development.
A defective application signals to management that it requires more time to stabilize before a release can be made.

Pressure does not work in this case: if an embedded device has to undergo a 24-hour thermal test, one cannot complete it within one hour by putting 24 devices into 24 furnaces; that is merely a one-hour test of 24 devices.

Suggestion: we should make clear to management what can realistically be achieved within the stipulated time, and then attempt the assigned challenging task. If we cannot do it, other people exist in the world who will convince the client and attempt it.

 

 

Concern-18: Is retesting part of the regression test?

No. Logically speaking, a regression test has a definitive timeline, whereas retesting cannot be estimated because no one knows initially how many defects will be found and fixed for retest. Hence the retest should be a smart regression test, which is another cycle that follows the regression test.

In addition to the previous comments, a regression cycle is a one-go test on a single version (i.e. during the regression cycle, the version should not change). Retesting after fixes means the version will change; obviously that new version needs another cycle of testing, which I call the smart regression test.

Sometimes it is tricky: the software version does not change, but the low-level software or hardware may change, and that demands a complete regression test again.

 

Concern-19: Which is more important, positive testing or negative testing?

I can say both have their own weight in completing testing and certifying that the application/product is robust enough under the identified, practically possible conditions. However, priority is always given to positive test cases because, as a matter of fact, almost all requirements come under positive cases. The negative cases are identified mostly by the testing people.

Note: if you start with negative testing, then obviously you are encouraging developers to fix those bugs instead of focusing on implementing the actual requirements, in which both management and the customer are highly interested. At the same time, if your application crashes or hangs, you will be answerable.

Take the example of an application running under test: in the very first cycle, if you turn off the power, remove the hard disk connection, or remove a critical input connection, then raising defects against these test cases means nothing but disturbing the development team. Hence, positive cases are the first priority.

 

Concern-20: How to prepare/execute test cases when requirements change frequently?

A stabilizer concept has to be introduced when the requirements are prone to dynamic changes. A version called Dynver (dynamic version) shall be introduced, which absorbs all the changes under an agile model. It is useful for R&D to absorb all the changes until they stabilize.

By tracking the changes, one can identify the stabilized requirements, which must then be incorporated and aligned into the mainline version of the scripts by updating the corresponding documents, so that they become eligible to execute on the production version.
 
The potential gap that can make your scripts obsolete is information from the development team not reaching the testing team in time. Close interaction helps reduce this gap before attempting the latest changes.

Concern-21: Is there any difference between a user/customer and a business user?

We may differentiate a user/customer from a business user technically and logically as intended users.
A user can be anyone who starts using an application or a product for his/her own purpose (e.g. testing, evaluation, as a tool, for comparison, or as a by-product). We can find users within the company, at the client's place, and in the specific market. A customer, on the other hand, is one who pays for an application or a product and uses it in real time; therefore a customer can be an end user. Coming to the business user, I would say s/he plays a client role: s/he uses an application or a product as part of UAT before it is released to market. Alternatively, a business user could be one who uses an application or product for their business purposes, or uses it as a base and adopts its features into their own product.