Different Types of Software Testing

Software Testing is a process of executing a program or application with the intent of finding software bugs.

It can also be stated as the process of validating and verifying that a software program, application, or product:

  • Meets the business and technical requirements that guided its design and development.
  • Works as expected.
  • Can be implemented with the same characteristics.

Different Testing Types

  1. Unit Testing (White Box Testing – Developer)
  2. Integration Testing
  3. Shakeout Testing
  4. Smoke Testing
  5. Functional Testing
    (i) Positive Testing
    (ii) Negative Testing
  6. Regression Testing
    (i) Regression (Fix)
    (ii) Regression (Risk)
  7. System Testing
  8. Load Testing
  9. Stress Testing
  10. Performance Testing
  11. End to End Testing
  12. User Acceptance Testing
  13. Black Box Testing
  14. White Box Testing
  15. Alpha Testing
  16. Beta Testing
  17. 508 Compliance Testing (JAWS, Window-Eyes screen readers)
  18. Database Testing

1. Unit Testing

It is a test to check whether the code works properly as per the requirement.

This is a white-box testing technique in which the developer looks into the code to find out which unit/statement/chunk of the code is malfunctioning, and therefore needs knowledge of the coding and logic, i.e., the internal working of the code.

Test Objective: The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. The test is accurate only if the tester knows what the program is supposed to do.

Technique: Execute each function, using valid and invalid data, to verify the following:

  • The expected results occur when valid data is used.
  • The appropriate error / warning messages are displayed when invalid data is used.
  • Each business rule is properly applied.
  • Each requirement is properly implemented.
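
A minimal sketch of such a test, using Python's unittest and a hypothetical calculate_discount function standing in for the unit under test:

```python
import unittest

def calculate_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or percent")
    return price * (1 - percent / 100)

class CalculateDiscountTest(unittest.TestCase):
    def test_valid_data_gives_expected_result(self):
        # Valid data: the expected result occurs.
        self.assertAlmostEqual(calculate_discount(100.0, 25), 75.0)

    def test_invalid_data_raises_error(self):
        # Invalid data: the appropriate error is raised
        # (the business rule caps the discount at 100 percent).
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```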

Completion Criteria: All planned tests have been executed. All identified defects have been addressed and reconciled.

2. Integration Testing

It is a test to check whether all the modules, once combined, work together successfully as specified in the requirement document or user stories. (Done by Developer)

Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested.

Integration testing takes as its input modules that have been unit tested, groups them into larger aggregates, applies tests to those aggregates, and delivers as its output the integrated system ready for system testing.

Test Objective: Integration testing identifies problems that occur when units are combined. Because the viability of each unit has been ensured before the units are combined, developers know that any errors discovered when combining units are likely related to the interface between units.

Technique:

  • Testing in which software components are combined and tested progressively until the entire system has been integrated.
  • The expected results occur when valid data is used.
  • The appropriate error / warning messages are displayed when invalid data is used.
  • Each business rule is properly applied.
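
As a minimal sketch, two units that have each passed unit testing are combined and the interface between them is exercised (both functions are hypothetical):

```python
import unittest

def parse_order(raw):
    """Unit A (already unit tested): parse 'sku,qty' into a dict."""
    sku, qty = raw.split(",")
    return {"sku": sku.strip(), "qty": int(qty)}

def price_order(order, price_table):
    """Unit B (already unit tested): price a parsed order."""
    return price_table[order["sku"]] * order["qty"]

class OrderIntegrationTest(unittest.TestCase):
    def test_parse_then_price(self):
        # The interface under test: the dict produced by A must feed B.
        total = price_order(parse_order("ABC, 3"), {"ABC": 9.99})
        self.assertAlmostEqual(total, 29.97)

if __name__ == "__main__":
    unittest.main()
```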

Completion Criteria: All planned tests have been executed. All identified defects have been addressed and reconciled.

3. Shakeout Testing

This test is basically carried out to check the networking facility, database connectivity and the integration of modules.

  • This test is normally done by the Configuration Management Team (CMT), who prepare builds for the test environments.
  • They also verify that the major components of the software are not broken.
  • This test is done BEFORE the build is deployed in the test environment.
  • After Shakeout Testing, the next step is Smoke Testing (which is done by the testers after the build is deployed in the test environment).
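
A minimal sketch of a shakeout check that probes network reachability and the database port before deployment, assuming hypothetical host names for the test environment:

```python
import socket
import sys

# Hypothetical endpoints for the test environment; adjust as needed.
CHECKS = [
    ("app-server.test.example.com", 443),   # application server reachable
    ("db-server.test.example.com", 5432),   # database port open
]

def shakeout():
    failures = []
    for host, port in CHECKS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"OK   {host}:{port}")
        except OSError as exc:
            failures.append((host, port))
            print(f"FAIL {host}:{port} ({exc})")
    return failures

if __name__ == "__main__":
    sys.exit(1 if shakeout() else 0)
```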

4. Smoke Testing

Smoke Testing is done when the build has just been prepared (a fresh build) and deployed to the test environment, to check whether the major functionality of the application is working or not.

  • This is basically an ad hoc test, a rough check to make sure the major functionalities are not broken.
  • It is the preliminary test carried out by the QA tester.
  • After the Smoke Test, the testers perform Functional Testing.

Smoke Testing is the initial level of testing effort to determine whether the AUT (Application Under Test) or new software version is performing well enough for the major testing effort.

Test Objective: The Test Team will navigate through the main application functionality to ensure build stability and to accept the build for additional system testing efforts.

Technique:

  • It verifies the major functionality at a high level in order to determine if further testing is possible.
  • The Smoke test scenarios should emphasize breadth more than depth.
  • All components should be touched, and every major feature should be tested briefly.
  • If the smoke test fails, the build is returned to the developers untested.
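
A minimal sketch of a breadth-over-depth smoke pass, using hypothetical URLs for the freshly deployed build:

```python
import urllib.request

# Breadth over depth: touch each major feature briefly.
# Hypothetical pages of the application under test.
SMOKE_PAGES = [
    "https://test.example.com/login",
    "https://test.example.com/search",
    "https://test.example.com/account",
]

def smoke_test():
    for url in SMOKE_PAGES:
        with urllib.request.urlopen(url, timeout=10) as resp:
            assert resp.status == 200, f"{url} returned {resp.status}"
        print(f"OK {url}")

if __name__ == "__main__":
    smoke_test()  # any failure means the build goes back to the developers
```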

Completion Criteria: All planned tests have been executed. All identified defects have been addressed and reconciled.

5. Functional Testing

Functional testing verifies that the system accepts the proper data, and processes and retrieves that data based on the appropriate business rules.

  • It is a test to check whether each and every function of that application is working as per the requirement document.
  • It is a major test, where about 80% of the testing is done.
  • In this test, the Test Cases are executed (or run).

Functional Testing will also include retesting of the defects identified in prior cycles, using similar test scripts and processes.

Test Objective: Ensure proper functionality, including navigation, data entry, processing, and retrieval.

Technique: Execute each function, using valid and invalid data, to verify the following:

  • The expected results occur when valid data is used.
  • The appropriate error/warning messages are displayed when invalid data is used.
  • Each business rule is properly applied.
  • Data integrity rules are followed.
  • Data access rules are followed.
  • There is a logical progression from one module to the next.
  • Standardization of screen layouts, buttons, and data fields is present.
  • Each requirement is properly implemented.
  • Whenever defects are fixed and deployed to system test, the test scripts directly related to the corresponding defects will be run.

Completion Criteria: All allocated requirements listed in the RTM (Requirements Traceability Matrix) are properly executed. All identified defects have been addressed and reconciled.

Functional Testing can be done in two ways:

(i) Positive Testing: Positive Testing is the testing process where the system is validated against valid input data. In this testing, the tester checks only a valid set of values and verifies that the application behaves as expected with its expected inputs and outputs.

(ii) Negative Testing: Negative Testing is the testing process where the system is validated against invalid input data. A negative test checks whether the application behaves as expected with invalid inputs, producing the appropriate errors.
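
A minimal sketch of both approaches, assuming a hypothetical register_user function as the feature under test:

```python
import unittest

def register_user(username, age):
    """Hypothetical function under test: validate registration input."""
    if not username or not username.isalnum():
        raise ValueError("invalid username")
    if not (18 <= age <= 120):
        raise ValueError("age out of range")
    return {"username": username, "age": age}

class RegistrationFunctionalTest(unittest.TestCase):
    def test_positive_valid_input_is_accepted(self):
        # Positive test: valid data produces the expected result.
        self.assertEqual(register_user("alice1", 30)["username"], "alice1")

    def test_negative_invalid_input_is_rejected(self):
        # Negative test: invalid data triggers the error path.
        with self.assertRaises(ValueError):
            register_user("", 30)
        with self.assertRaises(ValueError):
            register_user("bob", 5)

if __name__ == "__main__":
    unittest.main()
```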

6. Regression Testing

When new functionality is added to the software, we need to make sure that it does not break the other parts of the application. Likewise, when defects (bugs) are fixed, we need to make sure that the fix has not broken other parts of the application. To test this, we perform a repetitive test, which is called a regression test.

Regression testing ensures that a change or fix has not caused faults to appear in unchanged parts of the system.

There are two types of regression testing that need to be performed for each build of the code deployed to the system test environment.

  1. Regression (Fix): Whenever defects are fixed and deployed to system test, the test scripts directly related to those defects will be run. This effort will only cover selected data paths within the application. The effort and resources will be identified by calculating the time required to run the test scripts.
  2. Regression (Risk): During defect fixes the development team may unintentionally introduce new errors into the code. The Test team will conduct an impact and risk analysis for the defects and identify additional test scripts that should be run based on code complexity, defect density, and code priority. The impact analysis provides the number of test scripts and the level of effort to run the test scripts for the current build as well as for the previous builds.

The listing of files and versions of design artifacts and code included in each build will be designated in the Version Description Document (VDD).

For both of the above regression tests, current test cases may be re-used and new regression test cases will be developed and added to the regression deck, if needed. Wherever possible, automated test scripts will be used to run regression testing.
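
A minimal sketch of how such an automated regression deck could be organized with pytest markers (all test names and units below are hypothetical):

```python
import pytest

# Hypothetical units under test; stand-ins so the sketch runs.
def login_redirect_target(user):
    return "/dashboard"

def checkout_total(skus):
    return 9.99 * len(skus)

# Markers let the team run only the scripts tied to a defect fix
# (Regression Fix) or the wider risk-based selection (Regression Risk):
#   pytest -m regression_fix
#   pytest -m "regression_fix or regression_risk"
# (Custom markers should be registered in pytest.ini to avoid warnings.)

@pytest.mark.regression_fix
def test_defect_1234_login_redirect():
    # Script directly related to a fixed defect (hypothetical).
    assert login_redirect_target("alice") == "/dashboard"

@pytest.mark.regression_risk
def test_checkout_total_unaffected_by_login_fix():
    # Risk-based script chosen via impact analysis (hypothetical).
    assert checkout_total(["ABC"]) == pytest.approx(9.99)
```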

The above-mentioned regression tests will be performed for each cycle within a build as well as for previous builds. The final cycle of regression testing before production will cover the test scripts that were not touched in any of the regression cycles, to ensure all the test scripts are run at least twice before production deployment.

Test Objective: Regression testing verifies that changes applied to the software during development do not have a negative impact on existing functionality. Regression test cases will be developed in such a way that the tests cover the overall functional scenarios of the application.

Technique: During defect fixes the development team may unintentionally introduce new errors into the code. The Test team will conduct an impact and risk analysis for the defects and identify additional test scripts that should be run based on code complexity, defect density, and code priority.

The right candidates for regression testing can be selected by considering:

  • Which functionality is most important to the project's intended purpose?
  • Which functionality is most visible to the user?
  • Which functionality has the largest safety impact?
  • Which functionality has the largest financial impact on users?
  • Which aspects of the application are most important to the customer?
  • Which parts of the code are most complex, and thus most subject to errors?
  • Which parts of the application were developed in rush and panic mode?
  • Which parts of the requirements and design are unclear or poorly thought out?
  • What do the developers think are the highest-risk aspects of the application?
  • What kinds of problems would cause the most customer service complaints?
  • What kinds of tests could easily cover multiple functionalities?

Wherever possible, automated test scripts will be used to run regression testing. The regression test scripts will be performed for each cycle within a build as well as for previous builds.

The final cycle of regression testing before production will cover the test scripts that are not touched in any of the regression cycles to ensure all the test scripts are run at least twice before production deployment.

Completion Criteria: All planned tests have been executed. All identified defects have been addressed and reconciled.

7. System Testing

In system testing, we test the complete system as a whole to check whether it works properly as per the requirements.

Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved.

For example, my current application has several features, such as Register, Student Loans, Banking, Credit Card, Insurance, Upromise Rewards, and Plan for College. Verifying all these modules, completely integrated as a whole system, against the specified requirements is System Testing.

When testers complete testing, the application (software) has to be tested in the real environment. Since the testers test it in the test environment with test data, we have to make sure that the application also works well in the real environment with real data.

In the test environment, some things cannot be simulated or tested. Although the test environment is very similar to the production (real) environment, we need to make sure that we get a smooth delivery in the real system as well (as the servers and the database are different, things may not work as expected when the application is moved from the test environment to the production environment).

8. Load Testing

Load testing checks the response time when a number of users use any one scenario (a single business process) of the same application at the same time.

Load testing tools provide flexible user scenarios and realistic load types to support application load testing. Load tests can simulate real-time user activities and exercise hundreds or even thousands of concurrent users under dynamic load conditions.

LoadRunner is able to simulate this load by using “virtual users” (vUsers). LoadRunner only sends and receives the calls between the server and the client, never actually displaying them in a browser.
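
A rough stand-in for that idea, as a minimal sketch: concurrent "virtual users" each run the same single scenario against a hypothetical URL, exchanging only the request/response traffic:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://test.example.com/login"   # hypothetical single scenario
VUSERS = 50                              # concurrent virtual users

def one_user(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()                      # consume the response, never render it
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VUSERS) as pool:
        times = list(pool.map(one_user, range(VUSERS)))
    print(f"avg response: {sum(times) / len(times):.3f}s, "
          f"worst: {max(times):.3f}s")
```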

9. Stress Testing

The application is tested against heavy load, such as complex numerical values, a large number of inputs, a large number of queries, etc., which checks how much stress/load the application can withstand.

Stress testing is seen as part of the process of performance testing which tries to identify the breaking point in a system under test by overwhelming its resources or by taking resources away from it. This helps to ensure that the system fails and recovers gracefully.

Stress Testing is done to uncover memory leaks, bandwidth limits, transactional problems, resource locking, hardware limitations and synchronization problems that occur when an application is loaded beyond the limits determined by the performance statistics.

Stress testing tools generate extremely high load on the Web server by simulating multiple client connections.
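
A minimal sketch of that ramp-up idea, assuming a hypothetical endpoint: concurrency is increased step by step until the failure rate crosses a threshold, approximating the breaking point:

```python
import urllib.error
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://test.example.com/search?q=x"  # hypothetical endpoint

def hit(_):
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    # Ramp the load until more than 5% of requests fail.
    for users in (10, 50, 100, 200, 400):
        with ThreadPoolExecutor(max_workers=users) as pool:
            results = list(pool.map(hit, range(users)))
        failure_rate = 1 - sum(results) / len(results)
        print(f"{users} users -> {failure_rate:.0%} failures")
        if failure_rate > 0.05:
            print(f"breaking point near {users} concurrent users")
            break
```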

10. Performance Testing

Performance testing checks the response time when a number of users use multiple scenarios (multiple business processes) of the same application at the same time.

Performance tests are conducted in a production-like test environment with full-volume data. These tests validate the application, hardware, and network behavior under a user load that is based on production estimates.

In order to create a well-rounded performance testing effort, the test team will identify highly utilized critical business paths a user would take in the system. The list of paths will be captured in the Performance Test Scenario.

The performance testing tool LoadRunner allows load testing and stress testing of web applications/web sites to be performed with accuracy and ease. Its comprehensive reports help identify critical performance issues and optimize the user experience of the web applications/web sites before they go into deployment.

After each phase (Load, Stress, and Performance) is completed, the test team will generate a Performance Test Report based on the data collected during the test.
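
As a minimal sketch of that reporting step, response-time percentiles can be computed from collected samples with Python's statistics module (the sample data below is hypothetical):

```python
import statistics

# Response times (in seconds) collected during a performance run;
# hypothetical sample data standing in for real measurements.
samples = [0.21, 0.25, 0.19, 0.32, 0.28, 0.45, 0.22, 0.61, 0.27, 0.30]

# quantiles(n=100) yields the 1st..99th percentile cut points.
cuts = statistics.quantiles(samples, n=100)
print(f"median: {statistics.median(samples):.3f}s")
print(f"p90:    {cuts[89]:.3f}s")   # 90th percentile
print(f"p95:    {cuts[94]:.3f}s")   # 95th percentile
```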

11. End to End Testing

End-to-end testing tests a complete application in a situation that mimics real-world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems, to confirm that the application behaves as expected.

For example, a simplified end-to-end test of an email feature that I performed on one of my last projects involved:

  • Logging in to the application.
  • Accessing the inbox.
  • Opening and closing the mailbox.
  • Composing, forwarding or replying to email.
  • Checking the sent items.
  • Logging out of the application.
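
A minimal sketch of scripting such a flow at the HTTP level, assuming the third-party requests library and hypothetical endpoints of a webmail application:

```python
import requests  # third-party HTTP client

BASE = "https://mail.test.example.com"   # hypothetical webmail application

def end_to_end_email_flow():
    s = requests.Session()               # keeps the login cookie across steps
    # 1. Log in to the application.
    r = s.post(f"{BASE}/login", data={"user": "qa1", "password": "secret"})
    r.raise_for_status()
    # 2. Access the inbox.
    assert s.get(f"{BASE}/inbox").status_code == 200
    # 3. Compose and send an email.
    r = s.post(f"{BASE}/compose", data={"to": "qa2@example.com",
                                        "subject": "e2e", "body": "hello"})
    r.raise_for_status()
    # 4. Check sent items for the new message.
    assert "e2e" in s.get(f"{BASE}/sent").text
    # 5. Log out of the application.
    s.post(f"{BASE}/logout")

if __name__ == "__main__":
    end_to_end_email_flow()
```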

12. User Acceptance Testing

User Acceptance Testing or UAT means testing software by the user/client to determine whether it can be accepted or not. During UAT, actual software users test the software to make sure it can handle required tasks in real-world scenarios, according to specifications.

UAT is also known as Beta Testing, Application Testing or End User Testing.

In this type of testing, the software is handed over to the user in order to find out if the software meets the user expectations and works as it is expected to. In this testing, the tester may do the testing or the clients may have their own testers (For example, banks may have their own teller employees who can test the application).

13. Black Box Testing

It is a test where a tester performs testing without looking into the code. (Or: it is a testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored.

Testing occurs based upon the external specifications. It is also known as behavioral testing, since only the external behavior of the program is evaluated and analyzed.)

14. White Box Testing

It is a test where a tester looks into the code and performs the testing. It is testing based on an analysis of the internal structure of the component or system.

The white-box method is also used during unit testing of the code, but the two differ: unit testing is done by the developer, while White Box Testing is done by the testers.

In this method, the tester learns the relevant part of the code and finds out the weaknesses in the software program under test.
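
A minimal sketch of the white-box approach: the tests below are derived by reading the branches inside a hypothetical shipping_fee function, with one test per internal path:

```python
import unittest

def shipping_fee(weight_kg, express):
    """Hypothetical unit with three internal branches to cover."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    fee = 5.0 if weight_kg < 2 else 5.0 + 2.0 * (weight_kg - 2)
    return fee * 2 if express else fee

class ShippingFeeWhiteBoxTest(unittest.TestCase):
    # One test per branch, chosen by reading the code, not the spec.
    def test_invalid_weight_branch(self):
        with self.assertRaises(ValueError):
            shipping_fee(0, express=False)

    def test_light_parcel_branch(self):
        self.assertEqual(shipping_fee(1, express=False), 5.0)

    def test_heavy_parcel_branch(self):
        self.assertEqual(shipping_fee(4, express=False), 9.0)

    def test_express_branch(self):
        self.assertEqual(shipping_fee(1, express=True), 10.0)

if __name__ == "__main__":
    unittest.main()
```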

15. Alpha Testing

In this type of testing, the users are invited to the development center, where they use the application and the developers note every particular input or action carried out by the users.

Any type of abnormal behavior of the system is noted and rectified by the developers.

Alpha Testing is performed by the in-house developers. After alpha testing, the software is handed over to the software QA team for additional testing in an environment that is similar to the client environment.

16. Beta Testing

In this type of testing, the software is distributed as a beta version to users, and the users test the application at their own sites.

As the users explore the software, any exception/defect that occurs is reported to the developers.

Beta Testing is performed by end users, so that they can make sure the product is bug-free and working as per the requirement. In-house developers and the software QA team perform alpha testing; a few select prospective customers or the general public perform beta testing.

17. 508 Compliance Testing

Section 508 Compliance Testing ensures that the software meets 508 Compliance requirements.

Technique: The Section 508 Compliance requirements will be verified using a two-step approach to assessing compliance of web pages.

Step 1 consists of a manual assessment, using prepared 508 test scripts and Section 508 Web Standards Compliant Evaluation Forms.

  • Test scripts and evaluation forms will prompt the tester to review the web page for each of the 16 Section 508 Website standards.
  • 508 test scripts will have a combination of user and code checkpoints.
  • The user checkpoints assist in the assessment of a site on the web from a user’s perspective.

  • The code checkpoints assist in the assessment of a web page’s source code from a developer’s perspective.

After running each 508 test script, results will be entered in the Section 508 Web Standards Compliant Evaluation Forms (Appendix C). Results will be reported as Fully Compliant (FC), Partially Compliant (PC), Non-Compliant (NC), or Not Applicable (N/A).

  • Fully Compliant indicates that every instance of the check is compliant. For example, if every image on the page has alt tags associated with it, check 1 would receive a Fully Compliant.
  • Partially Compliant indicates that only some of the instances of the check are compliant. Using the example from above, if only some of the images have alt tags, check 1 would receive a Partially Compliant.
  • Non-Compliant indicates that none of the instances of the check are compliant. Again using the example from above, if none of the images have alt tags, check 1 would receive a Non-Compliant. If no instances of the check exist on the page, the check would receive a Not Applicable.
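
As a minimal sketch of one automated code checkpoint, the alt-text check used in the example above could be approximated with Python's standard html.parser (the sample markup is hypothetical):

```python
from html.parser import HTMLParser

class AltTagCheck(HTMLParser):
    """Code checkpoint: does every <img> carry an alt attribute?"""
    def __init__(self):
        super().__init__()
        self.total = 0
        self.with_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total += 1
            if any(name == "alt" for name, _ in attrs):
                self.with_alt += 1

    def verdict(self):
        if self.total == 0:
            return "N/A"   # no instances of the check on the page
        if self.with_alt == self.total:
            return "FC"    # every instance compliant
        return "PC" if self.with_alt else "NC"

page = '<img src="a.png" alt="logo"><img src="b.png">'  # sample markup
checker = AltTagCheck()
checker.feed(page)
print(checker.verdict())  # PC: only some images have alt text
```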

Step 2 involves evaluating the web page with assistive technology, such as a screen reader (JAWS 4.5), and noting any issues and comments related to compatibility on the evaluation form and within TestDirector.

Completion Criteria: All planned tests have been executed. All identified defects have been addressed and reconciled.

18. Database Testing

Database Testing includes performing data validity testing, data integrity testing, performance checks related to the database, and testing of procedures, triggers, and functions in the database.

It’s basically testing the data while it travels from front-end to back-end, from back-end to front-end, or within the back-end only.

What to Test in a Database?

  • All the functionality related to the database for every action performed in the application, including delete, add, and save operations.
  • Inserted records are added to the database with the correct values.
  • Updated records get updated in the database.
  • Deleted records get removed from the database.
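
A minimal sketch of these checks against an in-memory SQLite database standing in for the application's back end:

```python
import sqlite3

# In-memory SQLite stands in for the application database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT)")

# Insert: the record is added with the correct value.
conn.execute("INSERT INTO students (id, name) VALUES (1, 'Alice')")
assert conn.execute("SELECT name FROM students WHERE id = 1").fetchone() == ("Alice",)

# Update: the change is reflected in the database.
conn.execute("UPDATE students SET name = 'Alicia' WHERE id = 1")
assert conn.execute("SELECT name FROM students WHERE id = 1").fetchone() == ("Alicia",)

# Delete: the record is removed.
conn.execute("DELETE FROM students WHERE id = 1")
assert conn.execute("SELECT COUNT(*) FROM students").fetchone() == (0,)
print("database checks passed")
```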
