Testing Fundamentals

Presentation Transcript

Software Testing: 

A Crash Course in SW Testing Techniques and Concepts

Overview: 

- Introduction to testing concepts
- Levels of testing
- General test strategies
- Testing concurrent systems
- Testing of real-time (RT) systems

Testing is necessary: 

- To gain a sufficient level of confidence in the system
  - Risk information
  - Bug information
  - Process information
- A perfect development process is infeasible
  - Building without faults implies early testing
- Formal methods are not sufficient
  - They can only prove conformance to a model
- Perfect requirements are cognitively infeasible
  - Error prone

Testing for quality assurance: 

- Traditionally, testing focuses on functional attributes
  - E.g. correct calculations
- Non-functional attributes are equally important
  - E.g. reliability, availability, timeliness

How much shall we test?: 

- Testing usually takes about half of the development resources
- Stopping testing is a business decision
  - There is always something more to test
  - It is a risk-based decision; the tester provides the risk estimate

When do we test?: 

- The earlier a fault is found, the less expensive it is to correct
- Testing does not only concern code: documents and models should also be subject to testing
- As soon as a document is produced, testing can start

Levels of Testing: 

- Component/unit testing
- Integration testing
- System testing
- Acceptance testing
- Regression testing

Component Testing (1/2): 

- Requires knowledge of the code
- High level of detail
- Delivers thoroughly tested components to integration
- Stopping criteria
  - Code coverage
  - Quality

Component Testing (2/2): 

- Test case
  - Input, expected outcome, purpose
  - Selected according to a strategy, e.g., branch coverage
- Outcome
  - Pass/fail result
  - Log, i.e., a chronological list of events from the execution
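
To make the parts of a test case concrete, here is a minimal sketch in Python (the function under test, classify_triangle, is hypothetical and not from the slides):

    import unittest

    def classify_triangle(a, b, c):
        # Hypothetical component under test: classify a triangle by side lengths.
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    class TriangleTest(unittest.TestCase):
        def test_equilateral(self):
            # Purpose: exercise the all-sides-equal branch (branch-coverage strategy).
            self.assertEqual(classify_triangle(3, 3, 3), "equilateral")  # input, expected outcome

    if __name__ == "__main__":
        unittest.main()  # yields a pass/fail result and a chronological log of the run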

Integration Testing (1/2): 

- Test assembled components
  - These must have been tested and accepted previously
- Focus on interfaces
  - There may be interface problems even though the components work when tested in isolation
  - It might be possible to perform new tests

Integration Testing (2/2): 

- Strategies
  - Bottom-up: start from the bottom and add one component at a time
  - Top-down: start from the top and add one component at a time
  - Big-bang: everything at once
  - Functional: order based on execution
- Simulation of other components (see the sketch below)
  - Stubs receive output from the test objects
  - Drivers generate input to the test objects
  - Note that these are also SW, i.e., they need testing etc.
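
A small sketch of a stub and a driver in Python (the sensor/controller names are hypothetical, chosen only to illustrate the two roles):

    class SensorStub:
        # Stub: stands in for a component that is not yet integrated and
        # answers the calls coming out of the test object.
        def read_temperature(self):
            return 21.5  # canned value instead of real hardware

    def controller(sensor):
        # Test object: its behavior depends on the sensor interface.
        return "heat_on" if sensor.read_temperature() < 20.0 else "heat_off"

    def driver():
        # Driver: generates input to the test object and checks the outcome.
        assert controller(SensorStub()) == "heat_off"

    driver()  # stubs and drivers are software too, so they need testing as well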

System Testing (1/2): 

- Functional testing
  - Tests end-to-end functionality
- Requirement focus
  - Test cases derived from the specification
- Use-case focus
  - Test selection based on a user profile

System Testing (2/2): 

- Non-functional testing targets quality attributes
  - Performance: can the system handle the required throughput?
  - Reliability: obtain confidence that the system is reliable
  - Timeliness: test whether the individual tasks meet their specified deadlines, etc.

Acceptance Testing: 

- User (or customer) involved
- Environment as close to field use as possible
- Focus on:
  - Building confidence
  - Compliance with the acceptance criteria defined in the contract

Re-Test and Regression Testing: 

- Conducted after a change
- Re-testing aims to verify that a fault has been removed
  - Re-run the test that revealed the fault
- Regression testing aims to verify that no new faults have been introduced
  - Re-run all tests
  - Should preferably be automated

Strategies: 

- Code coverage strategies, e.g.
  - Decision coverage
  - Data-flow testing (defines -> uses)
- Specification-based testing, e.g.
  - Equivalence partitioning
  - Boundary-value analysis
  - Combination strategies
- State-based testing

Code Coverage (1/2): 

- Statement coverage
  - Each statement should be executed by at least one test case
  - Minimum requirement

Code Coverage (2/2): 

- Branch/decision coverage
  - Every decision is exercised with both its true and its false outcome
  - Subsumes statement coverage
- MC/DC: for each variable x in a boolean condition B, let x alone decide the value of B and test with both the true and the false value
  - Example: if (x1 and x2) { S }
  - Used for safety-critical applications
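
Spelled out for the slide's example (a minimal sketch; the decision wrapper is only for illustration):

    def decision(x1, x2):
        return x1 and x2

    # MC/DC for (x1 and x2): three test cases suffice.
    assert decision(True, True) is True    # baseline: decision is true
    assert decision(False, True) is False  # only x1 flipped, so x1 alone decides B
    assert decision(True, False) is False  # only x2 flipped, so x2 alone decides B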

Mutation testing: 

- Create a number of mutants, i.e., faulty versions of the program
  - Each mutant contains one fault
  - Faults are created using mutation operators
- Run the tests on the mutants (random or selected)
- When a test case reveals a fault, save the test case and remove the mutant from the set, i.e., it is killed
- Continue until all mutants are killed
- Results in a set of test cases of high quality
- Needs automation
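
A toy sketch of this loop in Python, with two hand-seeded mutants of a hypothetical max2 function (real mutation tools derive the mutants automatically via mutation operators):

    def max2(a, b):                # original program
        return a if a >= b else b

    mutants = {                    # each mutant contains exactly one seeded fault
        "relational op >= to <=": lambda a, b: a if a <= b else b,
        "constant +1 in else":    lambda a, b: a if a >= b else b + 1,
    }

    tests, kept = [(2, 1), (1, 2), (2, 2)], []
    for name, mutant in list(mutants.items()):
        for t in tests:
            if mutant(*t) != max2(*t):   # the test case reveals the seeded fault
                kept.append(t)           # save the killing test case
                del mutants[name]        # the mutant is killed
                break

    print("surviving mutants:", list(mutants))  # [] here: all mutants killed
    print("high-quality test cases:", kept)     # [(2, 1), (1, 2)]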

Specification-based testing (1/2): 

- Test cases are derived from the specification
- Equivalence partitioning
  - Identify sets of inputs from the specification
  - Assumption: if one input from set S leads to a failure, then all inputs from S will lead to the same failure
  - Choose a representative value from each set
  - Form test cases with the chosen values
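
A sketch under an assumed specification (not from the slides): accept(age) is valid exactly for ages 18 to 65, giving three equivalence classes:

    def accept(age):
        # Assumed spec: True exactly for 18 <= age <= 65.
        return 18 <= age <= 65

    # One representative value per input set: below, inside, above the valid range.
    for representative, expected in [(10, False), (40, True), (70, False)]:
        assert accept(representative) == expected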

Specification-based testing (2/2): 

- Boundary value analysis
  - Identify boundaries in input and output
  - For each boundary: select one value from each side of the boundary (as close as possible)
  - Form test cases with the chosen values
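
Continuing the assumed 18-to-65 specification, boundary value analysis picks the closest value on each side of both boundaries:

    def accept(age):              # as sketched above
        return 18 <= age <= 65

    # Boundaries at 18 and 65; one test value on each side, as close as possible.
    for value, expected in [(17, False), (18, True), (65, True), (66, False)]:
        assert accept(value) == expected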

Combination Strategies (1/5): 

- Equivalence partitioning and boundary value analysis give representative parameter values
- A test case often contains more than one input parameter
- How do we form efficient test suites, i.e., how do we combine parameter values?

Combination Strategies (2/5): 

- Each choice
  - Each chosen parameter value occurs in at least one test case
  - Assume 3 parameters A, B, and C with 2, 2, and 3 values, respectively (see the worked example below)
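
Spelled out for that example (with values labeled A-1, A-2, B-1, B-2, C-1, C-2, C-3, the naming used on the later slides): each choice is satisfied by max(2, 2, 3) = 3 test cases, e.g. (A-1, B-1, C-1), (A-2, B-2, C-2) and (A-1, B-1, C-3).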

Combination Strategies (3/5): 

- Pair-wise combinations
  - Each pair of chosen values occurs in at least one test case
  - Efficiently generated from Latin squares or by a heuristic algorithm
  - Covers failures caused by pairs of input values
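
One simple way to generate such a suite is a greedy heuristic; a sketch for the running A/B/C example (this is the heuristic route, not the Latin-square construction):

    from itertools import combinations, product

    params = [["A-1", "A-2"], ["B-1", "B-2"], ["C-1", "C-2", "C-3"]]

    cases = list(product(*params))                    # all 12 combinations
    all_pairs = {p for c in cases for p in combinations(c, 2)}

    suite, covered = [], set()
    while covered < all_pairs:
        # Greedy: pick the case that covers the most still-uncovered pairs.
        best = max(cases, key=lambda c: len(set(combinations(c, 2)) - covered))
        suite.append(best)
        covered |= set(combinations(best, 2))

    print(len(suite), "of", len(cases), "cases cover every pair")  # 6 of 12 here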

Combination Strategies (4/5): 

- All combinations, i.e., n-wise
  - Each combination of chosen values occurs in at least one test case
  - Very expensive: already 2 x 2 x 3 = 12 test cases in the small example above

Combination Strategies (5/5): 

- Base choice
  - For each parameter, define a base choice, i.e., the most likely value
  - Let this be the base test case and vary one value at a time
  - In the example, let A-1, B-2, and C-2 be the base choices
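
Spelled out: the base test case is (A-1, B-2, C-2); varying one parameter at a time adds (A-2, B-2, C-2), (A-1, B-1, C-2), (A-1, B-2, C-1) and (A-1, B-2, C-3), i.e. five test cases in total.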

State-Based Testing: 

- Model the functional behavior in a state machine
- Select test cases so as to cover the graph
  - Each node
  - Each transition
  - Each pair of transitions
  - Each chain of transitions of length n
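
A minimal sketch of transition coverage in Python (the door state machine is hypothetical, not from the slides):

    # Hypothetical state machine: (state, event) -> next state.
    transitions = {
        ("closed", "open_cmd"):   "open",
        ("open",   "close_cmd"):  "closed",
        ("closed", "lock_cmd"):   "locked",
        ("locked", "unlock_cmd"): "closed",
    }

    def run(state, events):
        for e in events:
            state = transitions[(state, e)]
        return state

    # One test sequence that takes every transition at least once.
    assert run("closed", ["open_cmd", "close_cmd", "lock_cmd", "unlock_cmd"]) == "closed"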

Concurrency Problems: 

- (Logical) parallelism often leads to non-determinism in the actual execution
  - E.g. synchronization errors may occur only in some execution orders
  - The order may influence the arithmetic results or the timing
- Explosion in the number of required tests
  - We need confidence for all execution orders that might occur

Silly example of Race Conditions with shared data: 

Two tasks share the variables A and B (both initially 0): T1 executes W(A,1) and then reads B, while T2 executes W(B,1) and then reads A. Here W(X,v) writes v to X, and R(X,v) denotes a read of X that observes v. Depending on the interleaving, three different outcomes are possible:

- Execution 1: T1: W(A,1) R(B,1) | T2: W(B,1) R(A,1)
- Execution 2: T1: W(A,1) R(B,1) | T2: W(B,1) R(A,0)
- Execution 3: T1: W(A,1) R(B,0) | T2: W(B,1) R(A,1)
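
The same race can be provoked in code; a sketch with two Python threads (which of the outcomes appears depends on the scheduler):

    import threading

    A = B = 0
    observed = {}

    def t1():
        global A
        A = 1                        # W(A,1)
        observed["t1 read B"] = B    # R(B,?) -- 0 or 1 depending on the order

    def t2():
        global B
        B = 1                        # W(B,1)
        observed["t2 read A"] = A    # R(A,?) -- 0 or 1 depending on the order

    threads = [threading.Thread(target=t1), threading.Thread(target=t2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(observed)   # e.g. {'t1 read B': 1, 't2 read A': 1}; other runs may differ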

Observability Issues: 

- Probe effect (Gait, 1985)
  - "Heisenberg's principle" for computer systems
- Common "solutions"
  - Compensate
  - Leave the probes in the system
  - Ignore
- Must observe execution orders
  - To gain coverage

Controllability Issues: 

- To be able to test the correctness of a particular execution order we need control over
  - The input data to all tasks
  - The initial state of shared data/buffers
  - Scheduling decisions
  - The order of synchronization/communication between tasks

Few testing criteria exist for concurrent systems: 

- The number of execution orders grows exponentially with the number of synchronization primitives in the tasks
- Testing criteria are needed to bound and select the subset of execution orders used for testing
- E.g. branch/statement coverage is not sufficient for concurrent software
  - Still useful on serializations
  - Execution paths may require specific behavior from other tasks
- Data-flow based testing criteria have been adapted
  - E.g. define-use pairs

Summary: Determinism vs. Non-Determinism: 

- Deterministic systems
  - Controllability is high: the input (sequence) suffices
  - Coverage can be claimed after a single test execution with the inputs
  - E.g. filters, pure "table-driven" real-time systems
- Non-deterministic systems
  - Controllability is generally low
  - Statistical methods are needed in combination with input coverage
  - E.g. systems that use random heuristics, or whose behavior depends on execution times / race conditions

Test execution in concurrent systems: 

- Non-deterministic testing
  - "Run, run, run and pray"
- Deterministic testing
  - Select a particular execution order and force it
  - E.g. instrument with extra synchronization primitives (the absence of timing constraints makes this possible); see the sketch below
- Prefix-based testing (and replay)
  - Deterministically run the system to a specific (prefix) point
  - Start non-deterministic testing at that specific point
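
A sketch of forcing one particular execution order of the earlier racy example by instrumenting with an extra synchronization primitive (a Python Event):

    import threading

    A = B = 0
    b_written = threading.Event()   # extra synchronization inserted for the test

    def t1():
        global A
        A = 1
        b_written.wait()            # force W(B,1) to happen before R(B)
        print("t1 reads B =", B)    # now deterministically 1

    def t2():
        global B
        B = 1
        b_written.set()

    threads = [threading.Thread(target=t1), threading.Thread(target=t2)]
    for t in threads: t.start()
    for t in threads: t.join()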

Real-time systems testing: 

- Inherits the issues of concurrent systems
- The problems become harder due to time constraints
  - More sensitive to probe effects
  - The timing/order of inputs becomes more significant
- Adds new potential problems
  - New failure types, e.g. missed deadlines, too early responses...
  - Test inputs influence execution times
  - Faults in real-time scheduling
    - Algorithm implementation errors
    - Wrong assumptions about the system

Real-time systems testing: 

- Pure time-triggered systems
  - Deterministic
  - Test methods for sequential software usually apply
- Fixed-priority scheduling
  - Non-deterministic
  - Limited set of possible execution orders
  - The worst case w.r.t. timeliness can be found by analysis
- Dynamically (online) scheduled systems
  - Non-deterministic
  - Large set of possible execution orders
  - Timeliness needs to be tested

Testing Timeliness: 

- Aim: verification of the specified deadlines of the individual tasks
- Test whether the assumptions about the system hold
  - E.g. worst-case execution time estimates, overheads, context-switch times, hardware acceleration efficiency, I/O latency, blocking times, dependency assumptions
- Test the system's temporal behavior under stress
  - E.g. unexpected job requests, overload management, component failure, the admission control scheme
- Identification of potential worst-case execution orders
  - Controllability is needed to test worst-case situations efficiently
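
As a much-simplified illustration of checking a single deadline (the 10 ms figure is assumed; a real test would run on the target platform and be repeated under stress):

    import time

    DEADLINE_S = 0.010          # assumed specified deadline: 10 ms

    def task():
        sum(range(1000))        # stand-in for the task under test

    start = time.perf_counter()
    task()
    response = time.perf_counter() - start
    assert response <= DEADLINE_S, f"missed deadline: {response * 1000:.2f} ms"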

Testing Embedded Systems: 

- System-level testing differs
  - Performed on the target platform to preserve the timing
- Closed-loop testing
  - Test cases consist of parameters sent to an environment simulator
- Open-loop testing
  - Test cases contain sequences of events that the system should be able to handle

[Diagram: closed-loop setup, where test parameters drive an environment simulator connected to the real-time (control) system, versus open-loop setup, where test cases are fed directly to the real-time (control) system]

Approach of TETReS: 

- Real-time database background
  - Dynamic scheduling, flexibility requirements, mixed load...
- Two general approaches
  - Investigate how architectural constraints impact testability
    - Keep the flexibility but limit/avoid testing problems
    - E.g. the database environment provides some nice properties
  - Investigate methods for supporting automated testing of timeliness
    - Testing criteria for timeliness
    - Automatic generation of tests
    - Automated, prefix-based test execution

Approach of TETReS test generation and execution: 

- Open-loop system-level testing of timeliness
- Model-based test-case generation from
  - Architecture information, e.g. locking policies, scheduling, etc.
  - Timing assumptions/constraints of the tasks
  - Assumptions about normal environment behavior
- Mutation-based testing criteria
- Prefix-based test-case execution

Summary: 

- Motivation
- Test methods
  - Examples of different test strategies have been presented
- Concurrency and real-time issues
- The TETReS approach

Slide42: 

[Diagram: the TETReS tool chain. Components: Mutant Generator, Model Checker, Test Specs. Generator, Test Coordinator, State Enforcer, Test Driver. Artifacts: testing criteria, formal modeling of the RTS spec., mutated specs., counter-example traces, test specs. (input data to tasks, parameterized event sequences, pre-state description), task/unit testing]
