Software Testing


Presentation Description

Basic Concepts of Testing

Presentation Transcript

Software Testing:

By Ravi Kumar (03-Jul-2012)

Contents:

Testing Overview
Testing Types
Testing Methods
Levels of Testing
Testing Phases
Testing Life Cycle
Test Case Design Strategies
SDLC Block Diagram
Defect Life Cycle

Testing overview:

What is Testing? Testing is the process of evaluating a system or its component(s) with the intent to find whether it satisfies the specified requirements or not. This activity records the actual results, the expected results, and the difference between them. In simple words, testing is executing a system in order to identify any gaps, errors or missing requirements contrary to the actual requirements. According to the ANSI/IEEE 1059 standard, testing can be defined as “A process of analyzing a software item to detect the differences between existing and required conditions (i.e., defects/errors/bugs) and to evaluate the features of the software item”.

Testing overview:

When to Start Testing? An early start to testing reduces cost and rework time and leads to error-free software being delivered to the client. In the Software Development Life Cycle (SDLC), testing can be started from the Requirements Gathering phase and continues until the deployment of the software. It also depends on the development model that is being used.

Testing overview:

When to Start Testing? Testing is done in different forms at every phase of the SDLC: during the Requirements Gathering phase, the analysis and verification of requirements is also considered testing. Reviewing the design in the design phase with the intent to improve it is also considered testing. Testing performed by a developer on completion of the code is categorized as Unit Testing.

Testing overview:

When to Stop Testing? Unlike when to start, it is difficult to determine when to stop testing, as testing is a never-ending process and no one can say that any software is 100% tested. The following aspects should be considered when deciding to stop testing:
Testing deadlines.
Completion of test case execution.
Completion of functional and code coverage to a certain point.
The bug rate falls below a certain level and no high-priority bugs are identified.
Management decision.

Testing overview:

Testing helps to verify and validate that the software is working as intended. Verification and Validation are the basic elements of Software Quality Assurance (SQA) activities.

Testing overview:

Difference between Testing, Quality Assurance and Quality Control Most people are confused about the concepts of, and differences between, Quality Assurance, Quality Control and Testing. Although they are interrelated and at some level can be considered the same activities, there is indeed a difference between them.

Testing overview:

Difference between Testing and Debugging Testing: it involves the identification of bugs/errors/defects in the software without correcting them. Normally, professionals with a Quality Assurance background are involved in the identification of bugs. Testing is performed in the testing phase. Debugging: it involves identifying, isolating and fixing the problems/bugs. Developers who code the software conduct debugging upon encountering an error in the code. Debugging is a part of White Box or Unit Testing. Debugging can be performed in the development phase while conducting Unit Testing, or in later phases while fixing reported bugs.

Testing types:

Manual Testing This type includes testing the software manually, i.e. without using any automated tool or script. In this type the tester takes over the role of an end user and tests the software to identify any unexpected behavior or bugs. There are different stages of manual testing: Unit Testing, Integration Testing, System Testing and User Acceptance Testing. Testers use test plans, test cases or test scenarios to test the software and ensure the completeness of testing. Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.

Testing types:

Automation Testing Automation Testing, also known as “Test Automation”, is where the tester writes scripts and uses other software to test the software. This process involves the automation of a manual process. Automation Testing is used to re-run, quickly and repeatedly, the test scenarios that were performed manually. Apart from Regression Testing, Automation Testing is also used to test the application from a load, performance and stress point of view. It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing.
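As a minimal sketch of what test automation looks like in practice, the scenarios that were once executed by hand can be recorded as data and replayed automatically after every change. The `login` function and the scenario values here are hypothetical stand-ins for a real system under test:

```python
def login(username, password):
    """Toy system under test: accepts one known credential pair."""
    return username == "alice" and password == "s3cret"

# Each manual scenario becomes an (inputs, expected-result) record,
# so the whole suite can be re-run quickly and repeatedly.
SCENARIOS = [
    (("alice", "s3cret"), True),   # valid credentials
    (("alice", "wrong"), False),   # bad password
    (("", ""), False),             # empty fields
]

def run_suite():
    """Replay every recorded scenario; True means the scenario passed."""
    return [login(*args) == expected for args, expected in SCENARIOS]
```

Running `run_suite()` re-executes every scenario in milliseconds, which is exactly the repeatability advantage over manual testing described above.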

Testing types:

What to Automate It is not possible to automate everything in the software. However, areas where the user makes transactions, such as login or registration forms, and any area where a large number of users can access the software simultaneously, should be automated. Furthermore, all GUI items, connections with databases, field validations, etc. can be efficiently tested by automating the manual process.

Testing types:

When to Automate Test Automation should be used after considering the following for the software:
Large and critical projects.
Projects that require testing the same areas frequently.
Requirements that are not changing frequently.
Accessing the application for load and performance with many virtual users.
Software that is stable with respect to manual testing.
Availability of time.

Testing methods:

Black Box Testing The technique of testing without any knowledge of the internal structure of the application is Black Box Testing. The tester is ignorant of the system architecture and does not have access to the source code. Typically, when performing a black box test, a tester interacts with the system’s user interface by providing inputs and examining outputs, without knowing how and where the inputs are worked upon.
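A small illustration of the black box idea, assuming a hypothetical `is_leap_year` function: the tester derives cases purely from the specification ("divisible by 4, except century years not divisible by 400") and checks inputs against outputs, never reading the body of the function:

```python
def is_leap_year(year):
    # In a real black-box setting this implementation is hidden
    # from the tester; only the interface and spec are known.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def black_box_tests():
    """Cases derived from the specification, not from the code."""
    cases = {2000: True, 1900: False, 2012: True, 2013: False}
    return all(is_leap_year(y) == want for y, want in cases.items())
```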

Testing methods:

Advantages of Black Box Testing Well suited and efficient for large code segments. Code access is not required. Clearly separates the user’s perspective from the developer’s perspective through visibly defined roles. Large numbers of moderately skilled testers can test the application with no knowledge of the implementation, programming language or operating system.

Testing methods:

Disadvantages of Black Box Testing Less coverage compared to White Box Testing, since only a selected number of test scenarios are actually performed. Inefficient testing, due to the fact that the tester has only limited knowledge about the application. Blind coverage, since the tester cannot target specific code segments or error-prone areas. Test cases are difficult to design.

Testing methods:

White Box Testing White Box Testing is the detailed investigation of the internal logic and structure of the code. White box testing is also called Glass Box Testing or Open Box Testing. In order to perform white box testing on an application, the tester needs to possess knowledge of the internal working of the code. The tester needs to look inside the source code and find out which unit of the code is behaving inappropriately.
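A sketch of the white box approach, using a hypothetical `classify` function: because the tester can read the source, one input is chosen per branch so that every path through the code is exercised:

```python
def classify(n):
    """Unit under test with three branches visible to the tester."""
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

def white_box_tests():
    # Inputs chosen by reading the code: one per branch,
    # giving full branch coverage of classify().
    return (classify(-5), classify(0), classify(7))
```

Contrast this with the black box case: here the three test values come straight from the `if`/`elif`/`else` structure, not from an external specification.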

Testing methods:

Advantages of White Box Testing As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively. It helps in optimizing the code. Extra lines of code, which could bring in hidden defects, can be removed. Due to the tester’s knowledge of the code, maximum coverage is attained during test scenario writing.

Testing methods:

Disadvantages of White Box Testing Because a skilled tester is needed to perform white box testing, costs are increased. It is sometimes impossible to look into every corner to find the hidden errors that may create problems, as many paths will go untested. White box testing is difficult to maintain, as specialized tools like code analyzers and debugging tools are required.

Testing methods:

Grey Box Testing Grey Box Testing is a technique to test the application with limited knowledge of its internal workings. Unlike black box testing, where the tester only tests the application’s user interface, in grey box testing the tester has access to design documents and the database. With this knowledge, the tester is able to better prepare test data and test scenarios while making the test plan.

Testing methods:

Advantages of Grey Box Testing Offers the combined benefits of black box and white box testing wherever possible. Grey box testers don’t rely on the source code; instead they rely on interface definitions and functional specifications. Based on the limited information available, a grey box tester can design excellent test scenarios, especially around communication protocols and data type handling. The test is done from the point of view of the user, not the designer.

Testing methods:

Disadvantages of Grey Box Testing Since access to the source code is not available, the ability to go over the code is limited and so is test coverage. The tests can be redundant if the software designer has already run a test case. Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore many program paths will go untested.

Testing methods:

[Diagram: visual difference between the three testing methods.]

Testing methods:

[Table: comparison between the three testing methods.]

Levels of Testing:

Levels of testing include the different methodologies that can be used while conducting software testing. The main levels of software testing are Functional Testing and Non-functional Testing. Functional Testing This is a type of black box testing that is based on the specifications of the software to be tested. The application is tested by providing input, and then the results are examined to confirm that they conform to the functionality it was intended for. Functional Testing is conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements.

Levels of Testing:

There are five steps involved when testing an application for functionality.
Step I – The determination of the functionality that the intended application is meant to perform.
Step II – The creation of test data based on the specifications of the application.
Step III – The determination of the expected output based on the test data and the specifications of the application.
Step IV – The writing of test scenarios and the execution of test cases.
Step V – The comparison of actual and expected results based on the executed test cases.

Levels of Testing:

Types of Functional Testing: Unit Testing This type of testing is performed by the developers before the build is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team. The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality.
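A minimal unit test sketch, using a hypothetical `word_count` helper as the individual unit a developer would verify in isolation with their own test data:

```python
def word_count(text):
    """Unit under test: count whitespace-separated words."""
    return len(text.split())

def test_word_count():
    # The unit is exercised in isolation with developer-chosen data,
    # including the empty-input edge case.
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("two  words") == 2
    return True
```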

Levels of Testing:

Limitations of Unit Testing Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in any software, and the same is the case with unit testing: there is a limit to the number of scenarios and the amount of test data that a developer can use to verify the source code. Integration Testing The testing of combined parts of an application to determine whether they function correctly together is Integration Testing.

Levels of Testing:

There are two methods of doing Integration Testing. In Bottom-Up Integration Testing, testing begins with unit testing, followed by tests of progressively higher-level combinations of units, called modules or builds. In Top-Down Integration Testing, the highest-level modules are tested first and progressively lower-level modules are tested after that. In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing.
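A bottom-up integration sketch under stated assumptions: `parse_price` and `apply_discount` are hypothetical low-level units, and `discounted_total` is the higher-level module that combines them. Each unit is verified first, then the combination:

```python
def parse_price(text):
    """Low-level unit: convert a price string to a number."""
    return float(text)

def apply_discount(price, percent):
    """Low-level unit: apply a percentage discount."""
    return price * (1 - percent / 100)

def discounted_total(texts, percent):
    """Higher-level module integrating both units."""
    return sum(apply_discount(parse_price(t), percent) for t in texts)

def integration_test():
    # Bottom-up: units first...
    assert parse_price("10.0") == 10.0
    assert apply_discount(100, 10) == 90.0
    # ...then the combined module.
    return discounted_total(["100", "50"], 10) == 135.0
```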

Levels of Testing:

System Testing This is the next level of testing and tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets the quality standards. Why is System Testing so Important? System Testing is the first level in the SDLC where the application is tested as a whole. The application is tested thoroughly to verify that it meets the functional and technical specifications. The application is tested in an environment that is very close to the production environment where it will be deployed. System Testing enables us to test, verify and validate both the business requirements and the application's architecture.

Levels of Testing:

Regression Testing Whenever a change is made in a software application, it is quite possible that other areas within the application have been affected by this change. Verifying that a fixed bug hasn’t broken other functionality is Regression Testing. The intent of Regression Testing is to ensure that a change, such as a bug fix, does not result in another fault being uncovered in the application.
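As a sketch of the regression idea, suppose a hypothetical `average` function used to raise `ZeroDivisionError` on an empty list and was fixed to return 0.0. The regression suite keeps both the original tests and a test pinning the fix, so neither the old behavior nor the fix can silently break:

```python
def average(values):
    """Mean of a list; the empty-list guard is the bug fix."""
    if not values:        # fix for the old ZeroDivisionError bug
        return 0.0
    return sum(values) / len(values)

def regression_suite():
    ok = average([2, 4]) == 3.0   # existing behavior still works
    ok = ok and average([]) == 0.0  # the fixed bug stays fixed
    return ok
```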

Levels of Testing:

Why is Regression Testing so Important? It minimizes the gaps in testing when an application with changes has to be tested. It verifies that the changes made did not affect any other area of the application. It mitigates risk when performed on the application. Test coverage is increased without compromising timelines. It increases the speed of developing the product.

Levels of Testing:

Acceptance Testing This is arguably the most important type of testing, as it is conducted by the Quality Assurance team, who will test whether the application meets the intended specifications and satisfies the client’s requirements. The QA team will have a set of pre-written scenarios and test cases to test the application. Acceptance tests are intended not only to point out simple spelling mistakes, cosmetic errors or interface gaps, but also any bugs in the application that would result in system crashes or major errors. By performing acceptance tests, the testing team can gauge how the application will perform in production. There may also be legal and contractual requirements for acceptance of the system.

Levels of Testing:

Alpha Testing Testing of a software application or product conducted at the developer’s site by the customer. This is the first stage of testing and is performed between the teams (development & QA). Unit Testing, Integration Testing and System Testing, when combined, are known as Alpha Testing. Beta Testing Testing conducted at one or more customer sites by the end users of a delivered software product. This test is performed after Alpha Testing has been successfully completed.

Levels of Testing:

Non-Functional Testing Non-functional testing involves testing the software against requirements that are non-functional in nature but important as well, such as performance, security and user interface. Some of the important and commonly used non-functional testing types are as follows. Performance Testing It is mostly used to identify performance issues rather than to find bugs in the software.

Levels of Testing:

Performance Testing There are different causes that contribute to lowering the performance of software:
Network delay.
Client-side processing.
Database transaction processing.
Load balancing between servers.
Data rendering.
Performance testing is considered one of the important and mandatory testing types in terms of the following aspects:
Speed (i.e. response time, data rendering and accessing).
Capacity.
Stability.
Scalability.

Levels of Testing:

It can be either a qualitative or a quantitative testing activity and can be divided into different sub-types such as Load Testing and Stress Testing. Load Testing Load testing is the process of testing the behavior of the software by applying maximum load, in terms of the software accessing and manipulating large input data. It can be done at both normal and peak load conditions. This type of testing identifies the maximum capacity of the software and its behavior at peak time. Most of the time, load testing is performed with the help of automation tools such as LoadRunner, AppLoader, etc.
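As a toy sketch of the "many virtual users" idea (a real load test would use a dedicated tool such as LoadRunner), several threads here stand in for concurrent users hammering the same shared operation; the function under load and the user counts are hypothetical:

```python
import threading

counter = 0
lock = threading.Lock()

def user_session(requests=1000):
    """One simulated user issuing many requests against shared state."""
    global counter
    for _ in range(requests):
        with lock:          # the system's critical section under load
            counter += 1

def run_load(users=10):
    """Launch all virtual users concurrently and wait for completion."""
    threads = [threading.Thread(target=user_session) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

In a real load test one would measure response times and resource usage while the virtual users run, rather than just a final count.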

Levels of Testing:

Stress Testing This testing type includes testing the software's behavior under abnormal conditions. Taking away resources and applying load beyond the actual load limit is stress testing. The main intent is to test the software by applying load to the system and taking over the resources used by the software, in order to identify the breaking point. This testing can be performed with different scenarios such as:
Shutting down or restarting network ports randomly.
Turning the database on or off.
Running different processes that consume resources such as CPU, memory, server capacity, etc.

Levels of Testing:

Usability Testing It is a black box technique and is used to identify errors and improvements in the software by observing the users' usage and operation of it. Difference between UI and Usability Testing UI testing involves testing the Graphical User Interface of the software. This testing ensures that the GUI conforms to the requirements in terms of color, alignment, size and other properties. On the other hand, usability testing ensures that a good, user-friendly GUI is designed and is easy to use for the end user. UI testing can be considered a sub-part of usability testing.

Levels of Testing:

Security Testing Security testing involves testing the software in order to identify any flaws and gaps from a security and vulnerability point of view. The following are the main aspects that security testing should ensure:
Confidentiality.
Integrity.
Authentication.
Availability.
Authorization.
Non-repudiation.
The software is secure against known and unknown vulnerabilities.
The software complies with all security regulations.
No injection flaws.
No buffer overflow vulnerabilities.

Levels of Testing:

Portability Testing Portability testing includes testing the software with the intent that it should be reusable and movable from one environment to another. The following strategies can be used for portability testing: transferring installed software from one computer to another, and building an executable (.exe) to run the software on different platforms. Portability testing can be considered one of the sub-parts of System Testing, as this testing type includes the overall testing of software with respect to its usage over different environments. Computer hardware, operating systems and browsers are the major focus of portability testing.

Levels of Testing:

Portability Testing The following are some preconditions for portability testing:
The software has been designed and coded keeping the portability requirements in mind.
Unit testing has been performed on the associated components.
Integration testing has been performed.
The test environment has been established.

Testing Phases:

Testing Phases
Requirements Review
Design Review
Code Walkthrough
Code Inspection
Integration Testing
Unit Testing
System Testing
Performance Testing
User Acceptance Testing
Alpha Testing
Beta Testing
Installation Testing

Testing Phases:

[Diagram: testing phases.]

Testing Life Cycle:

Software Testing Life Cycle The Software Testing Life Cycle (STLC) identifies what test activities to carry out and when to accomplish them. Even though testing differs between organizations, there is a testing life cycle. The Software Testing Life Cycle consists of six (generic) phases:
Test Planning
Test Analysis
Test Design
Construction and Verification
Testing (execution and defect reporting at the functional level), Final Testing and Implementation
Post Implementation

Testing Life Cycle:

Testing Life Cycle Plan: devise a plan. Do: execute the plan. Check: check the results. Act: take the necessary action.

Testing Life Cycle:

Software testing has its own life cycle that intersects with every stage of the Software Development Life Cycle (SDLC). The basic requirement in the software testing life cycle is to control/deal with software testing - manual, automated and performance. Test Planning This is the phase where the Project Manager decides what things need to be tested, whether the budget is appropriate, etc. Naturally, proper planning at this stage greatly reduces the risk of low-quality software. This planning is an ongoing process with no end point.

Testing Life Cycle:

Test Planning Activities at this stage include the preparation of a high-level test plan. According to the IEEE test plan template, the Software Test Plan (STP) is designed to prescribe the scope, approach, resources and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan. In test planning, the major tasks are:
Defining the scope of testing.
Identification of approaches.
Defining risk.
Identifying resources.
Defining the time schedule.

Testing Life Cycle:

Test Analysis Once the test plan is made and decided upon, the next step is to delve a little deeper into the project and decide what types of testing should be carried out at the different stages of the SDLC, and whether we need or plan to automate. Proper and regular meetings should be held between the testing and development teams, project managers and business analysts to check the progress of all the activities in the project and ensure the completeness of the test plan. In this stage we need to develop a functional validation matrix, based on the business requirements, to ensure that all system requirements are covered by one or more test cases. We identify which test cases to automate and begin review of documentation, i.e. functional design, business requirements, product specifications, product externals, etc. We also have to define the areas for stress and performance testing.

Testing Life Cycle:

Test Design The test plans and cases which were developed in the analysis phase are revised. The functional validation matrix is also revised and finalized. In this stage the risk assessment criteria are developed. If we have decided on automation, we have to select which test cases to automate and begin writing scripts for them. Test data is prepared. Standards for unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.

Testing Life Cycle:

Construction and Verification In this phase we have to complete all the test plans and test cases, complete the scripting of the automated test cases, and complete the stress and performance testing plans. We have to support the development team in their unit testing phase, and obviously bug reporting is done as and when bugs are found. Integration tests are performed and errors (if any) are reported.

Testing Life Cycle:

Testing In this phase we have to complete testing cycles until the test cases execute without errors or a predefined condition is reached: run test cases -> report bugs -> revise test cases (if needed) -> add new test cases (if needed) -> bug fixing -> retesting (test cycle 2, test cycle 3, ...). Final Testing and Implementation In this phase we have to execute the remaining stress and performance test cases, complete/update the documentation for testing, and provide and complete the different matrices for testing. Acceptance, load and recovery testing will also be conducted, and the application needs to be verified under production conditions.

Testing Life Cycle:

Post Implementation In this phase, the testing process is evaluated and lessons learnt from it are documented. A line of attack to prevent similar problems in future projects is identified, and plans to improve the processes are created. The recording of new errors and enhancements is an ongoing process. Cleaning up the test environment and restoring the test machines to their baselines is done in this stage.

Test Case Design Strategies:

Boundary Value Analysis and Equivalence Partitioning are both test case design strategies in black box testing. Equivalence Partitioning In this method the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases, while still covering maximum requirements. In short, it is the process of taking all possible test cases and placing them into classes. One test value is picked from each class while testing.

Test Case Design Strategies:

Equivalence Partitioning Equivalence Partitioning uses the fewest test cases to cover the maximum requirements. E.g. if you are testing an input box accepting numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for invalid data. Using the Equivalence Partitioning method, the test cases above can be divided into sets of input data called classes. Each test case is a representative of its respective class. So, in the above example, we can divide our test cases into three equivalence classes of valid and invalid inputs.

Test Case Design Strategies:

Equivalence Partitioning Test cases for an input box accepting numbers between 1 and 1000, using Equivalence Partitioning:
One input data class with all valid inputs: pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000, the result is going to be the same, so one test case for valid input data should be sufficient.
An input data class with all values below the lower limit, i.e. any value below 1, as an invalid data test case.
Input data with any value greater than 1000, to represent the third, invalid, input class.
So, using equivalence partitioning, you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result. We have selected one representative from every input class to design our test cases. Test case values are selected in such a way that the largest number of attributes of the equivalence class can be exercised.
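The three classes above can be sketched directly as code; `accepts_number` is a hypothetical validator standing in for the 1-to-1000 input box, and each class is covered by a single representative value:

```python
def accepts_number(n):
    """Stand-in for the input box: valid range is 1..1000."""
    return 1 <= n <= 1000

# One representative per equivalence class instead of 1000+ cases.
PARTITIONS = [
    (500, True),    # valid class: any value in 1..1000
    (-3, False),    # invalid class: values below 1
    (1500, False),  # invalid class: values above 1000
]

def partition_tests():
    return all(accepts_number(n) == want for n, want in PARTITIONS)
```

Swapping 500 for any other in-range value (or -3 for any other negative) should not change the outcome, which is exactly the claim equivalence partitioning rests on.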

Test Case Design Strategies:

Boundary Value Analysis It is widely recognized that input values at the extreme ends of the input domain cause more errors in a system: more application errors occur at the boundaries of the input domain. The boundary value analysis technique is used to identify errors at the boundaries, rather than those that exist in the center of the input domain. Boundary value analysis is the next step after equivalence partitioning for designing test cases, where test cases are selected at the edges of the equivalence classes.

Test Case Design Strategies:

Boundary Value Analysis Test cases for an input box accepting numbers between 1 and 1000, using Boundary Value Analysis:
Test cases with test data exactly on the input boundaries of the input domain, i.e. values 1 and 1000 in our case.
Test data with values just below the extreme edges of the input domain, i.e. values 0 and 999.
Test data with values just above the extreme edges of the input domain, i.e. values 2 and 1001.
Boundary value analysis is often considered a part of stress and negative testing.
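The boundary cases above can be written out against the same hypothetical 1-to-1000 validator: the on-boundary values, plus the values one step below and one step above each edge (an off-by-one in the validator, such as using `<` instead of `<=`, would be caught by these cases):

```python
def accepts_number(n):
    """Stand-in for the input box: valid range is 1..1000."""
    return 1 <= n <= 1000

BOUNDARY_CASES = [
    (1, True), (1000, True),    # exactly on the boundaries
    (0, False), (999, True),    # just below each boundary
    (2, True), (1001, False),   # just above each boundary
]

def boundary_tests():
    return all(accepts_number(n) == want for n, want in BOUNDARY_CASES)
```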

Test Case Design Strategies:

There is no hard-and-fast rule to test only one value from each equivalence class you created for the input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and prior judgment. E.g. if you put the input values 1 to 1000 in the valid data equivalence class, you can select test case values like 1, 11, 100, 950, etc. The same applies to the other test cases having invalid data classes.

SDLC Block Diagram:

[Diagram: SDLC block diagram.]

Defect Life Cycle:

[Diagram: defect life cycle.]

Thank You
