
www.realdolmen.com CONCEPTS, METHODOLOGY AND MANAGEMENT OF SOFTWARE TESTING 07/07/2016 3/20/23 SLIDE 1

EXPECTATIONS 3/20/23 SLIDE 2

AGENDA Structured testing basics Testing within the common development lifecycles RealDolmen Vision on testing Exploratory testing To find a bug (or not) 3/20/23 SLIDE 3

AGENDA Structured testing basics Defining ‘Testing’ History Testing terms Why When 3/20/23 SLIDE 4

STRUCTURED TESTING – DEFINING ‘TESTING’ Terms part of testing: Finding defects, gaining insight into the quality of software; Preventing defects; Gaining confidence in the product; Verifying whether the pre-determined requirements and specifications are met; Checking for completeness and correctness; Advising about quality and risks. Terms not part of testing: A phase after development; Something we will do when we have the time; Defining the requirements; Debugging of defects; Release / Acceptance; Deployment; Capturing all defects; Operational support; ‘Expensive’. 3/20/23 SLIDE 5

STRUCTURED TESTING – HISTORY 3/20/23 SLIDE 6

Historical (1) In the original IT projects, testing was considered: a necessary evil, providing negative feedback; breaking down the developers' work. Testing was asked to join in as late as possible. [Diagram: Inception → Analysis → Design → Develop → Test] 03/20/2023

Historical (2) During the late ’80s, testing came to be considered more as: a proper means to validate, providing Go/No-Go feedback; evaluating the developers' work, but based upon a shared knowledge and process. [Diagram: Analysis → Design → Develop → Review → Test] 03/20/2023

Historical (3) More recently, the way of thinking has not been renewed as such, but best practices were implemented in a better way: testers were involved from the start of the project; for each activity in development, a testing activity occurs; bi-directional communication takes place, with mutual respect. 03/20/2023

Modern Testfare Now, the latest methodologies bring us the opportunity to start with modern Testfare: Development Lifecycle and Test Lifecycle entwined, forming an Application Lifecycle; mutual activities with the same process and the same goal; mixed teams from different origins, working together as one, within the same toolset and within the same environment. 03/20/2023

STRUCTURED TESTING – HISTORY Learning Oriented Primary goal is gaining insight into the software context (the learning paradigm). Evolutions: Introduction of context-driven testing by Cem Kaner, James Bach, Brian Marick and Bret Pettichord; The introduction of Session-Based Testing by Jonathan Bach and James Bach; All of the above resulting in the principles of Rapid Software Testing (RST) by James Bach (expanded in collaboration with Michael Bolton); The rise of Agile development and its manifesto; Automated regression testing. 3/20/23 SLIDE 16

Need for clear and agreed definitions !!! Testing: The process consisting of all application lifecycle activities, both static and dynamic, concerned with planning, preparation, verification, validation and evaluation of software and related products, to determine that they satisfy the specified requirements and to demonstrate that they are fit for purpose. (A variation on the definition given within Software Testing Based on Experience.) 03/20/2023

Need for clear and agreed definitions !!! QA: High-level activities to ensure that processes and procedures are used to uphold the quality of the services/software/documentation used or delivered within the project. (Examples: document review process, contractual deliverable acceptance process, standard naming conventions, implementation of a communication plan.) 03/20/2023

Need for clear and agreed definitions !!! QC: Verification activities to control that the (modules of the) services/software/documentation have a sufficient level of quality. (Examples: unit code validation, GUI W3 validation, AnySurfer control.) 03/20/2023

Need for clear and agreed definitions !!! Audit: An inspection, correction and verification of all project activities, conducted by an independent, qualified (external) consultant. 03/20/2023

TESTING TERMS White box testing versus black box testing. White box: based on knowledge of the internal specifications; code based. Black box: without taking into account any knowledge of the internal specifications; requirement based. [Diagram: input → system under test → output, for both approaches] 3/20/23 SLIDE 21
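The distinction can be made concrete with a minimal sketch (Python; the `discount` function and its rule are invented for illustration): a black-box test is derived only from the stated requirement, while a white-box test is derived from the code's branches and boundaries.

```python
# Hypothetical function under test (name and rule are illustrative).
def discount(amount):
    if amount >= 100:
        return 0.10
    return 0.0

# Black-box test: derived from the requirement "orders of 100 or more
# get 10% off", without looking at the code.
def test_black_box():
    assert discount(150) == 0.10
    assert discount(50) == 0.0

# White-box test: derived from the code structure, exercising both
# branches and the boundary the implementation actually uses.
def test_white_box():
    assert discount(100) == 0.10   # boundary of the >= comparison
    assert discount(99.99) == 0.0

test_black_box()
test_white_box()
print("all tests passed")
```

The two complement each other: black-box tests survive a reimplementation, while white-box tests catch boundary mistakes that a reader of the specification alone would miss.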

TESTING TERMS Retesting versus Regression testing Retesting When a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed Also called ‘confirmation testing’ Regression testing Testing of a previously tested program performed when the software or its environment is changed To ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made 3/20/23 SLIDE 22

TESTING TERMS Retesting versus Regression testing 3/20/23 SLIDE 23

TESTING TERMS Verification versus validation. Verification: ‘Are we building the product right?’ Process oriented – meets a set of design specifications; used to identify defects in the product early in the life cycle. Validation: ‘Are we building the right product?’ Product oriented – meets the operational needs of the user; involves actual testing and is done after verification. 3/20/23 SLIDE 24

TEST LEVELS AND TEST TYPES Test levels: basically to identify missing areas and to prevent overlap and repetition between the development life cycle phases. Each test level is characterized by a different purpose, test basis, responsible role, etc. There are 4 main test levels. Test types: focused on a particular test objective, which could be the testing of: a function to be performed by the component or system; a non-functional quality characteristic, such as reliability or usability; the structure or architecture of the component or system; or something related to changes – confirming defects have been fixed (retesting) and looking for unintended changes (regression testing). The extensiveness of all these tests is just another reason why it is important to bring testers in as early as possible. When a program is more thoroughly tested, a greater number of bugs will be detected, which ultimately results in higher-quality software. 3/20/23 SLIDE 25

TEST DESIGN TECHNIQUES: WHAT? Test Case: description of input, process and predicted outcome. Test Script: automated test procedure. Test Procedure: logical order of test cases, start situation, concrete actions and controls. Test Execution Schedule: logical and chronological order of test procedures, preconditions (including initial data setup and test environment setup) and post conditions, for test execution purposes. Test Data: data required to execute test cases, procedures and scenarios. 3/20/23 SLIDE 26
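The relation between these terms can be sketched as data plus an executor (Python; the cases and the uppercase action are invented for illustration): each tuple is a test case (input and predicted outcome), the function that runs them in order is a test procedure, and its automated form is a test script.

```python
# Test cases: description, input and predicted outcome (per the slide's
# definition). All names and cases here are illustrative.
test_cases = [
    ("uppercase simple word", "abc", "ABC"),
    ("already uppercase",     "XYZ", "XYZ"),
]

def run_test_procedure(cases):
    """Execute the cases in logical order and report pass/fail per case."""
    results = []
    for description, given, expected in cases:
        actual = given.upper()          # the concrete action under test
        results.append((description, actual == expected))
    return results

for name, passed in run_test_procedure(test_cases):
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```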

OVERVIEW [Diagram: test design techniques mapped onto the object to be tested (user interface, administrative organization, interfaces to System X).] Legend: AGT: Algorithm Testing; DTT: Decision Table Testing; ECT: Elementary Comparison Testing; SYN: Syntactic Testing; SEM: Semantic Testing; RLT: Real-Life Testing; PCT: Process Cycle Testing; PIT: Program Interface Testing; DCT: Data Cycle Testing. 3/20/23 SLIDE 27

STRUCTURED TESTING: WHY? Software is developed by humans, who make mistakes. Error: human action producing an incorrect result, e.g. in program code (synonym: mistake). Defect: manifestation of an error in software (synonyms: fault, bug). Failure: deviation of the software from the expected delivery of service. A human inserts an error, the error produces a defect, and the defect may cause a failure. 3/20/23 SLIDE 28
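The error → defect → failure chain can be illustrated in a few lines (Python; the age check and its boundary mistake are invented for illustration). Note that the defect only manifests as a failure for certain inputs, which is why a defect can stay hidden through many tests.

```python
def is_valid_age(age, maximum=120):
    # Defect: the programmer's error was typing `<` where the
    # requirement ("up to and including 120") demands `<=`.
    return 0 <= age < maximum

# Most inputs never reach the defect, so no failure is observed:
print(is_valid_age(30))    # True, as the requirement expects
# The boundary input makes the defect manifest as a failure:
print(is_valid_age(120))   # False, although 120 should be valid
```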

AWARENESS 3/20/23 SLIDE 29

STRUCTURED TESTING: WHY? Successful testing? [Chart: software quality (low → high) against testing quality (low → high). Only the quadrant with high testing quality and low software quality yields a lot of defects; the other three quadrants all yield a low number of defects. People think “we are here” (high testing quality), but maybe one is only here (low testing quality).] 3/20/23 SLIDE 30

STRUCTURED TESTING: WHY? Optimizing total cost of Quality Distribution of bugs within project phases Source: Bender RBT Inc. Important to include testing as early as possible! 3/20/23 SLIDE 31

STRUCTURED TESTING: WHY? Cost of correcting a defect in each stage of software development (source: Compuware): determining requirements: 139; design: 455; coding: 977; testing: 7,136 (×50!); maintenance: 14,102 (×100!). 3/20/23 SLIDE 32

STRUCTURED TESTING: WHY? – DEVOPS PARADOX [Diagram: users/stakeholders define requirements feeding a product backlog; the team develops (from idea to working software) and operates (working software in production, value realized), while operations monitor and feed an ops backlog. Pain points around the loop: misunderstood requirements; can’t get actionable feedback; new learnings are not captured; churn in requirements/priorities; quality is an afterthought; no traceability; loss of focus; unmet user expectations; operations/readiness requirements not met; production incidents are hard to debug and resolve; disparate management tools.] 3/20/23 SLIDE 33

STRUCTURED TESTING: WHY? ADDED VALUE? Testing: “I ran some tests, but I can’t remember what happened or how I could repeat those tests.” Was this testing valuable? “I ran 100 tests and the system failed 23 times; I can’t remember how I did it or where the system failed.” Was this test valuable? “I found 56 defects.” So what? Structured Testing: the process of testing only has value if it generates INFORMATION – project intelligence, risk mitigation. Critical for management: to understand what has been tested in the past, in order to improve what will be tested in the future. 3/20/23 SLIDE 34

STRUCTURED TESTING: WHY? – WHEN ARE YOU DONE TESTING Difficult to determine Many modern software applications are so complex and run in such an interdependent environment, that complete testing can never be done Common factors in deciding when to stop are Deadlines, e.g. release deadlines, testing deadlines Test cases completed with certain percentage passed Test budget has been depleted Coverage of code, functionality, or requirements reaches a specified point Bug rate falls below a certain level Beta or alpha testing period ends 3/20/23 SLIDE 35
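Several of these stop criteria can themselves be automated as an exit-criteria check. A minimal sketch (Python; the threshold values and names are illustrative assumptions, not a recommendation):

```python
# Hypothetical exit-criteria check combining three of the factors above:
# coverage reached, pass rate reached, and no blocking defects open.
def done_testing(coverage, pass_rate, open_blocking_defects,
                 min_coverage=0.80, min_pass_rate=0.95):
    """Return True when all configured stop criteria are met."""
    return (coverage >= min_coverage
            and pass_rate >= min_pass_rate
            and open_blocking_defects == 0)

print(done_testing(0.85, 0.97, 0))   # True: all criteria met
print(done_testing(0.85, 0.90, 0))   # False: pass rate below threshold
```

In practice such a check only supports the decision; deadlines and budget, also listed above, remain management judgments that no formula captures.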

STRUCTURED TESTING: WHY? – WHEN ARE YOU DONE TESTING 3/20/23 SLIDE 36

QUALITY ATTRIBUTES – PRODUCT QUALITY MODEL 3/20/23 SLIDE 37

QUALITY ATTRIBUTES – QUALITY IN USE MODEL 3/20/23 SLIDE 38

STRUCTURED TESTING: WHEN? 3/20/23 SLIDE 39

STRUCTURED TESTING: WHEN? [Diagram: planned Planning, Analysis/Design, Development, Test and Deployment phases against a fixed deadline; each earlier phase overruns its plan, squeezing the time left for testing.] 3/20/23 SLIDE 40

STRUCTURED TESTING: WHEN? This is what it should be like: Planning → Analysis/Design → Development → Test → Deployment, with testing starting as early as possible, not as a phase after development. 3/20/23 SLIDE 41

AGENDA Structured testing Defining ‘Testing’ History Testing terms Why When Overview of the most commonly used development lifecycles Sequential models (V-model) Iterative and incremental models (Agile) RealDolmen Vision (Scripted Testing vs) Exploratory Testing To find a bug (or not) 3/20/23 SLIDE 42

DEVELOPMENT LIFECYCLES Overview of the most commonly used development lifecycles. Development life cycle models describe the phases of the development cycle and the order in which those phases can be managed. There are different models, and many companies adopt their own, but all have very similar patterns. The general phases within a basic model: requirement gathering and analysis; design; implementation or coding; testing; deployment; maintenance. 3/20/23 SLIDE 43

DEVELOPMENT LIFECYCLES – SEQUENTIAL Sequential models Variations Waterfall V-model Each phase must be completed before the next phase begins Common disadvantages: Adjusting scope during life cycle can kill a project No working software is produced until late during the lifecycle High amounts of risk and uncertainty 3/20/23 SLIDE 44

V-MODEL – SEQUENTIAL MODEL [Diagram: the V-model. Left leg (specification): User Requirements → System Requirements → Global Design → Detailed Design → Code/Build. Right leg (testing, from white box at the bottom to black box at the top): Component Testing → Component Integration Testing → System Testing → System Integration Testing → Acceptance Testing. Acceptance Testing is a business responsibility; the levels below it are the IT project team's responsibility. Annotated concepts: test levels, test types, regression testing, retesting, white/black box testing, entry/exit criteria.] 3/20/23 SLIDE 45

DEVELOPMENT LIFECYCLES – ITERATIVE / INCREMENTAL Iterative and incremental models. Variations: RUP, Agile (XP, Scrum), RAD. The development lifecycle is divided into smaller, more easily managed iterations / increments, each passing through the same phases; regression testing is very important. Generates working software quickly and earlier during the lifecycle. More flexible; less costly to change scope and requirements. 3/20/23 SLIDE 53

AGILE - THE AGILE MANIFESTO The Agile Manifesto: written in February 2001, at a summit of seventeen independent-minded practitioners of several programming methodologies. A consensus around four main values: individuals and interactions over processes and tools; delivering working software over comprehensive documentation; customer collaboration over contract negotiation; responding to change over following a plan. (The items on the left are considered more important; those on the right less important.) 3/20/23 SLIDE 54

AGILE - THE AGILE MANIFESTO Principles to support core values 1. Customer satisfaction by rapid delivery of useful software 2. Welcome changing requirements, even late in development 3. Working software is delivered frequently (weeks rather than months) 4. Working software is the principal measure of progress 5. Sustainable development, able to maintain a constant pace 6. Close, daily cooperation between business people and developers 7. Face-to-face conversation is the best form of communication (co-location) 8. Projects are built around motivated individuals, who should be trusted 9. Continuous attention to technical excellence and good design 10. Simplicity—the art of maximizing the amount of work not done—is essential 11. Self-organizing teams 12. Regular adaptation to changing circumstances 3/20/23 SLIDE 55

AGILE – ITERATIVE / INCREMENTAL MODEL Agile software development: a collection of frameworks, methodologies, models, values and practices that are used in software development to plan, build and operate software in a way that better suits the current environment and business requirements. The most used frameworks in this methodology are Scrum, Kanban and Lean. The Agile philosophy aims at continuously delivering business value through quality software, with a high ability to respond to changing environments. The underlying principles can be categorized as: open communication among all stakeholders and the customer; putting the team in control of the work in a fail-safe environment; early and successive development, testing and release cycles; visibility on processes, policies and work in progress; focus on the complete flow of value. 3/20/23 SLIDE 56

AGILE – ITERATIVE / INCREMENTAL MODEL 3/20/23 SLIDE 57

AGILE - THE AGILE TESTING QUADRANTS [Diagram: the four Agile testing quadrants. Q2 (business-facing, supporting the team): functional tests, examples, scenarios, simulations, prototypes – automated & manual. Q3 (business-facing, critique product): exploratory testing, usability testing, UAT (User Acceptance Testing), alpha/beta – manual. Q1 (technology-facing, supporting the team): unit tests, component tests – automated. Q4 (technology-facing, critique product): performance & load testing, security testing, “ility” testing – tools.] 3/20/23 SLIDE 58

AGILE - THE AGILE TESTING QUADRANTS Advantages: testing needs to be integrated much more tightly within the team responsible for developing the software product; as a result, this will improve communication with the other stakeholders. Using the Agile testing quadrants will help make sure that all necessary testing is taken into account and done at the right time. It is perfectly possible to apply them in other development models as well. 3/20/23 SLIDE 59

ITERATIVE METHODS OF DEVELOPMENT & TESTING Short iterations: the team delivers working code (or a working subset of code) after each iteration. An iterative approach means we can trade features for time instead of sacrificing quality. As projects evolve and change over time, the team must respond to that change. Plan only for the current iteration; planning ahead means rework. Note that short iterations also mean increased regression risk. 3/20/23 SLIDE 60

COMMITMENT AS A TEAM The whole team, including the testers, commits to the milestones. The role of the tester changes from last line of defence to supporter of the team, as the tester becomes part of the team. Testers get involved as early in the development process as possible, to help with planning and estimation, to define exit criteria, to review stories, … Testers and developers are co-located. Track the testing status together with the development status. Make sure testing is not a hidden/forgotten cost! Weigh test tasks just as you would development tasks. Quality of the software is the TEAM’s responsibility. 3/20/23 SLIDE 61

COMMUNICATION IS KEY Daily stand up meetings help testers to Prioritize high risk items: what needs to be tested first? Provide early feedback from testers to developers about the quality of the code Reduce the feedback loop Tester must be willing and able to talk to all parties involved: Customer, Developers, Product owner This requires good communication skills! 3/20/23 SLIDE 62

STRUCTURED TESTING IN AN AGILE PROJECT Master test plan Communicate to the customer what will be tested and how Regularly communicate on test progress Risk analysis Practice risk based testing Test often and early Test techniques User acceptance tests and exploratory testing Decision table testing and State transition testing for test automation Some tests can occur outside of the sprint Regression tests and test automation As each sprint can impact the code done in the previous sprint, regression tests are VERY important Automate them to decrease the load and give faster feedback 3/20/23 SLIDE 63
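Decision table testing, named above as a good fit for test automation, can be sketched as one automated check per combination of conditions (Python; the login rule and its table are invented for illustration):

```python
# Hypothetical rule under test: grant access only when the password is
# correct AND the account is active (names are illustrative).
def grant_access(password_ok, account_active):
    return password_ok and account_active

# Decision table: every combination of conditions with its expected
# outcome, so no combination is forgotten.
decision_table = {
    # (password_ok, account_active): expected outcome
    (True,  True):  True,
    (True,  False): False,
    (False, True):  False,
    (False, False): False,
}

for conditions, expected in decision_table.items():
    assert grant_access(*conditions) == expected
print("all decision-table combinations verified")
```

Because the table is data, it doubles as documentation and makes a natural regression suite: rerunning it after every sprint costs nothing once automated.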

THE AGILE TESTER The agile tester is a team player. Becomes the “headlights” of the team: Where are we in relation to our goal? (Progress measurement through user acceptance tests.) What problems might loom in the future? Focus is on the team, not on the errors: no “gotcha!” attitude; the tester is the supporter of the team; tests are co-written together with developers. An agile tester is considered to be “a pig” (in Scrum terms): the tester makes a total contribution to the team – no other projects! (This is usually rather difficult.) The tester is seen as accountable for the outcome of the project and as an equal member of the team, involved in the decision making of the team. 3/20/23 SLIDE 64

Focus within Testing: Flexibility; Risk reducing; Component delivery; Quality raising; Document intake; Automation. 03/20/2023

DEVELOPMENT LIFECYCLES The unpleasant truth: each of these models is just a theory, a simplification to explain what actually happens, or to suggest what should happen. They are, at very best, only approximations of reality. They are based on assumptions about: the types of problems that are usually solved; the expectations of users; the resources, tools and time-frames available; task complexity. None of those assumptions may be valid in a particular case. Hence there is no ‘silver bullet’ which is always guaranteed to make the development of large systems easy. (See the Context-Driven Testing principles.) 3/20/23 SLIDE 66

AGENDA Structured testing Defining ‘Testing’ History Testing terms Why When Overview of the most commonly used development lifecycles Sequential models (V-model) Iterative and incremental models (Agile) RealDolmen Vision (Scripted Testing vs) Exploratory Testing To find a bug (or not) 3/20/23 SLIDE 67

STRUCTURED TESTING – REALDOLMEN VISION RealDolmen Vision: the primary goal is gaining insight into the software context by learning about it through experimentation, including questioning, studying, modeling, observation and inference. Evolutions – RealDolmen Vision: embracing the learning paradigm: Context-Driven Testing; testing and automating ‘what matters’, in a ‘way that matters’; managed / structured exploratory testing; exploratory testing vs. scripted testing; integration within Application Lifecycle Management (ALM). 3/20/23 SLIDE 68

REALDOLMEN VISION – CONTEXT DRIVEN TESTING Introduced in 1999 by Cem Kaner, James Bach, Brian Marick and Bret Pettichord. It is about doing the best we can with what we get, rather than trying to apply “best practices”, and accepting that very different practices will work best under different circumstances. Main goal: to test the software, not to see that we have applied the methodology that is in place to the letter. A context-driven test approach makes testing more efficient and effective; efficiency and effectiveness complement each other. Example: suppose one is very efficient at finding defects, but only finds trivial, low-risk defects early on, while the important defects are discovered late in the project. This causes a lot of development rework and retesting, and might have an impact on the release date. Testing might have been efficient (one found a lot of defects), but it sure was not effective. Seven Basic Principles of the Context-Driven School (see next slide). 3/20/23 SLIDE 69

CONTEXT DRIVEN TESTING – SEVEN BASIC PRINCIPLES The Seven Basic Principles of the Context-Driven School: 1. The value of any practice depends on its context. 2. There are good practices in context, but there are no best practices. 3. People, working together, are the most important part of any project’s context. 4. Projects unfold over time in ways that are often not predictable. 5. The product is a solution. If the problem isn’t solved, the product doesn’t work. 6. Good software testing is a challenging intellectual process. 7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products. 3/20/23 SLIDE 70

CONTEXT DRIVEN TESTING – SEVEN BASIC PRINCIPLES Principle 1 - The value of any practice depends on its context. A best practice from one project does not always result in a good practice in another project. Ask yourself the following questions: Would you test an off-the-shelf ERP (Enterprise Resource Planning) tool the same way you would test a web application developed from scratch? Would you test a government portal the same way you would test a commercial web shop? Would you take the same approach to testing medical software as you would to testing software that guarantees safety in tunnels or on trains, or online banking software? To gain a better understanding of the context needed to create the right testing practice for a project, learn more about the context of these projects: Why are we doing them? Which problems do we want to solve with them? What are the constraints? What is the organizational culture in which the software will be created/supported? What is the relation between the IT department and its internal or external customers? No single “best practice” would be the one correct approach to efficiently and effectively help you test all of the above projects. This seamlessly brings us to the next principle. 3/20/23 SLIDE 71

CONTEXT DRIVEN TESTING – SEVEN BASIC PRINCIPLES Principle 2 - There are good practices in context, but there are no best practices A lot of test methodologies are positioned as ‘best practices’ on testing e.g. TMapTM, STBoXTM, Test FrameTM and many more Each methodology will have merits for some projects, but one methodology can never be efficiently applied to all test projects (even within one enterprise) Risks when relying on one certain methodology Inefficient and ineffective testing practices Missing critical goals and focus points when not taking into account (parts of) the right context People will stop questioning this methodology and its applicability The context of a project will point out which testing practices are relevant and which ones are not Do not create “one-size-fits-all” approaches towards testing Do create more high level guidelines or templates to be used by the projects when applicable 3/20/23 SLIDE 72

CONTEXT DRIVEN TESTING – SEVEN BASIC PRINCIPLES Principle 3 - People, working together, are the most important part of any project’s context Software is made by people for people Communication is crucial when the goal is to succeed in successfully delivering software Important to gain insight in the project context as soon as possible and advice on risks and issues relevant to the project Good communication skills are a very important asset for a tester – Communicate correctly – Ask the right questions Test professionals need to (be allowed to) improve communication with all the other stakeholders in the application lifecycle Test teams serve the project and are equal members of the software delivery team 3/20/23 SLIDE 73

CONTEXT DRIVEN TESTING – SEVEN BASIC PRINCIPLES Principle 4 - Projects unfold over time in ways that are often not predictable Almost everyone involved in software development projects is aware of this fact Examples of changes Change in deadlines/milestones Change in workload (incorrect estimations, resource issues, unforeseen regulatory constraints) Changing business requirements Change in scope Change in funding And many more When the context of the project slightly (or largely) changes, a tester should take these changes into account and adapt the strategy to cope with these changes Incorrect estimations might require more budget to go to development, possibly decreasing the available amount of time and/or budget for testing. If we can no longer test everything we planned, how can we make sure we still remain as efficient and effective in our testing as possible? This does not mean one should throw all practices and agreements aside 3/20/23 SLIDE 74

CONTEXT DRIVEN TESTING – SEVEN BASIC PRINCIPLES Principle 5 - The product is a solution. If the problem isn’t solved, the product doesn’t work 3/20/23 SLIDE 75

CONTEXT DRIVEN TESTING – SEVEN BASIC PRINCIPLES Principle 5 - The product is a solution. If the problem isn’t solved, the product doesn’t work. A principle which is often forgotten in the later stages of a project: one can test as much as one wants, but if, in the end, the software product being delivered does not provide a solution to the ‘problems’, all that work has been irrelevant. In order to adhere to this principle, and provide effective software testing, the following key aspects need to be available to testers: Being able, as a tester, to communicate with business stakeholders – to gain a better insight into the context of the project and the problems it tries to solve; this helps to improve the focus on the original problem(s) and the way they are solved by the provided solution. Being able, as a tester, to be involved in the project early enough – to have a better insight into the project and what it tries to achieve. The possibility to review design documents – a tester should have a full understanding of what is analyzed and how this contributes to the solution of the problem. Test oracles – but be aware that all oracles are fallible. 3/20/23 SLIDE 76

CONTEXT DRIVEN TESTING – SEVEN BASIC PRINCIPLES Principle 6 - Good software testing is a challenging intellectual process. Using the right practices at the right time is the way to go; the essence of context-driven testing is the project-appropriate application of skill and judgment. Impact of this way of thinking: The essential value of any test case lies in its ability to provide information (i.e. to reduce uncertainty) – a tester needs to learn about the risks and other parameters having an impact on the software being produced (risk mitigation, the right testing effort). Not everything needs to be automated – a tester should consider deeply what to automate (and when to do so) and what not to automate, whether to avoid repetitive work or to test things that are more difficult to test without automation support. Different types of defects will be revealed by different types of tests – tests should become more challenging or should focus on different risks as the software becomes more stable. Testing is an evolving process: as a tester learns more about the software, he or she should adjust the test strategy and focus on the things that matter. RealDolmen Vision: test and automate ‘what matters’, in a ‘way that matters’. 3/20/23 SLIDE 77

CONTEXT DRIVEN TESTING – SEVEN BASIC PRINCIPLES Principle 7 - Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right time to effectively test our products. Good testing requires skills: understanding of the business goals and business case; knowledge of the test (automation) tools fit for the project; insight into the database structure and its flaws; query knowledge; insight into the program architecture and business architecture; general communication and negotiation skills; the ability and drive to report in a concise and correct manner; the ability to focus and defocus where required. Efficient and effective testing requires: thinking about the things we are learning in the project and using that information to adapt our practices; frequent communication with the right people at the right time. 3/20/23 SLIDE 78

REALDOLMEN VISION – EXPLORATORY TESTING Goals: the primary goal is to get a very good insight into the system; being able to test with or without knowing what the system needs to do; to work the system and to learn from it in the same way that the end users will. It helps to go beyond the obvious variations that have already been tested, and combines learning, test design and test execution into one test approach, which helps to design more and better tests. Manageable: during exploratory testing sessions, the tester simultaneously designs and executes tests, using critical thinking to analyze the results; this offers a much better opportunity to learn about the application than scripted tests. Structured: meaning that qualitative test deliverables / documentation can and will be created. Exploratory testing and/or scripted testing: both complement each other; again, it depends on the context of the project. 3/20/23 SLIDE 79

AGENDA Structured testing Defining ‘Testing’ History Testing terms Why When Overview of the most commonly used development lifecycles Sequential models (V-model) Iterative and incremental models (Agile) RealDolmen Vision (Scripted Testing vs) Exploratory testing To find a bug (or not) 3/20/23 SLIDE 80

DEFINITION OF EXPLORATORY TESTING Exploratory testing is testing without the presumption that we already know what the system needs to do (e.g. through requirements), as our primary goal is to get a very good insight into the system. We gain this insight through the software (or earlier, by testing the requirements) by means of asking the software (or the authors of the requirements documents) questions during test execution. This will help us to: 1. Estimate the quality of the final product, so that the decision makers can be adequately informed. 2. Learn about the software, which in turn will allow us to ask better and more accurate questions. 3/20/23 SLIDE 81

DEFINITION OF EXPLORATORY TESTING Exploratory testing is not solely used to test the system the same way the end users will use it. It can also be used to: Simulate “passive” users (e.g. Other systems that interact with the application through file dumps) Simulate users that are not defined as end users of your application (e.g. people who should not have access to the information (or parts of it) in the application) Passively detect possible issues (e.g. computer memory is filling up when you remain in one screen for too long without doing anything) 3/20/23 SLIDE 82

COMMON MISCONCEPTIONS 3/20/23 SLIDE 83

1. EXPLORATORY TESTING ≠ RANDOM TESTING 3/20/23 SLIDE 84

2. EXPLORATORY TESTING ≠ NO REQUIREMENTS 3/20/23 SLIDE 85

3. Exploratory testing ≠ unstructured 3/20/23 SLIDE 86

4. Exploratory testing ≠ no traceability 3/20/23 SLIDE 87

5. Exploratory testing ≠ unmanageable (session-based test management tools: http://www.satisfice.com/sbtm/ http://sessiontester.openqa.org/ http://testing.gershon.info/reporter/) 3/20/23 SLIDE 88

6. EXPLORATORY TESTING ≠ EXPERIENCED TESTERS ONLY 3/20/23 SLIDE 89

TEST DESIGN - EXPLORATORY TESTING Where does exploratory testing fit? Rapid feedback on a new product or feature; little or no specification or knowledge of the product; investigating and isolating a particular defect; finding the most important bugs in the shortest time; seeking diversity from scripted testing. It can be applied in every software development lifecycle (sequential, Agile, iterative/incremental, …). 3/20/23 SLIDE 91

EXPLORATORY TESTING IN ACTION Train your observational skills: look for what is not expected Focus & defocus http://www.youtube.com/watch?v=vJG698U2Mvo http://www.youtube.com/watch?v=IGQmdoK ZfY https://www.youtube.com/watch?v=v3iPrBrGSJM Memorize & take notes http://www.youtube.com/watch?v=ubNF9QNEQLA 3/20/23 SLIDE 94

BE LIKE SHERLOCK HOLMES! 3/20/23 SLIDE 95

CHEATING IS OK! 3/20/23 SLIDE 96

AGENDA Structured testing Defining ‘Testing’ History Testing terms Why When Overview of the most commonly used development lifecycles Sequential models (V-model) Iterative and incremental models (Agile) RealDolmen Vision Scripted testing vs. exploratory testing To find a bug (or not) 3/20/23 SLIDE 99

www.realdolmen.com TO FIND A BUG OR NOT ? 3/20/23 SLIDE 100

TESTING IS EASY They give you the part that should be tested (software delivery, web page, document, …) They provide you the description of how it should work (analysis, documentation, operation manual, …) They tell you what the result should be (expected results, requirements, use cases, …) You just need to check if what was given can be used as described to provide the outcome wanted 3/20/23 SLIDE 102

JUST CHECK FOR THE “PASS” So verifying if a test passes is quite simple: We check if the “system under test” is available. If not? Test Fails. We execute the task that it was dedicated to do. If not? Test Fails. We check if the outcome is as expected. If not? Test Fails. If it is? Test Passes. So if we do not find any fails during all our tests, we consider that the system Passes and is fit for purpose. 3/20/23 SLIDE 104
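The naive pass/fail logic above can be sketched in a few lines. This is a minimal illustration, not a real test-framework API; all names are made up for the example:

```python
def naive_verdict(system_available, execute_task, expected):
    """Naive binary test verdict, following the three checks above.

    `execute_task` is a callable returning the actual outcome.
    All names here are illustrative, not a real framework API.
    """
    if not system_available:
        return "FAIL"  # system under test not reachable
    try:
        actual = execute_task()
    except Exception:
        return "FAIL"  # the dedicated task could not be executed
    if actual != expected:
        return "FAIL"  # outcome differs from the expectation
    return "PASS"


# Example: a trivial "system" that adds two numbers
print(naive_verdict(True, lambda: 2 + 2, 4))  # PASS
print(naive_verdict(True, lambda: 2 + 2, 5))  # FAIL
```

As the following slides argue, this binary verdict is exactly what turns out to be too simple.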

JUST CHECK FOR THE “PASS” Is testing really that easy? 3/20/23 SLIDE 105

IS SOFTWARE TESTING STRICTLY BINARY? 3/20/23 SLIDE 106

THERE IS MORE TO BE TESTED THAN ONE USUALLY THINKS A tester needs to question everything surrounding the actual test process: Is the test documentation valid? Is the analysis valid? Is the expected outcome valid? (Static testing & test preparation) A tester also needs to question the actual test process itself: Are there other factors impacting our test process? Is our test process impacting the outcome? Are we controlling all possible factors and circumstances? 3/20/23 SLIDE 107

TESTING IN 4D 3/20/23 SLIDE 108

ARE WE CONTROLLING THE RIGHT STUFF? There is an infinite number of possible influencers at input level, and we do not control all of them. A positive outcome is not conclusive: the system behaviour can provide more than just the expected outcome, so we need to evaluate as much as we can. A negative outcome does not necessarily mean a failure of the system under test: a detailed analysis of a negative outcome is needed, with support of the system experts. Every test run is different, as not all factors are known: we need to rethink the repeatability of (automated) tests. A “not reproducible error” just means that there are unknown factors. 3/20/23 SLIDE 109
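The repeatability point above can be made concrete with a small, deliberately deterministic sketch: a hypothetical system whose output depends on a factor the test neither sets nor observes (standing in for timing, shared state, environment, and so on), so repeating the "same" test yields different verdicts:

```python
from itertools import cycle

# Hypothetical uncontrolled factor: a value the environment injects
# and the test neither sets nor observes (e.g. timing, shared state).
environment = cycle([0, 0, 0, 1])


def system_under_test(x):
    # Output depends on x AND on the uncontrolled factor.
    return x * 2 + next(environment)


def run_test(x):
    # The test only controls x, and expects x * 2.
    return system_under_test(x) == x * 2


# Eight runs of the "same" test: the verdict is not stable.
verdicts = [run_test(21) for _ in range(8)]
print(verdicts)  # [True, True, True, False, True, True, True, False]
```

In a real system the hidden factor is not a neat cycle, which is exactly why a "not reproducible error" signals unknown factors rather than a healthy system.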

WHAT ARE METRICS WORTH? The exact number of tests passed or failed is no longer a valid indicator: some are invalid passes, others are false alarms. The exact number of bugs found is no longer a valid number: some are bugs due to different circumstances, and some bugs remain undetected. Skepticism is needed: validation of every test outcome is required, results are to be seen as indicators or estimates, and tendencies are worth more than exact amounts. 3/20/23 SLIDE 110

INTRODUCING THE TEST ORACLES (1) A test oracle is a mechanism or principle that helps us decide whether or not a system under test is behaving reasonably. This can be done in a deterministic way (a mismatch means a failure occurred) or in a probabilistic way (a mismatch means the system probably fails). A test oracle can be used as a reference function (comparison of outcomes), an evaluation function (indicator for further investigation or not), or a combination of both. 3/20/23 SLIDE 111
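The deterministic/probabilistic distinction can be sketched like this (hypothetical helper names; the 5% tolerance is an arbitrary illustration, not a recommended value):

```python
def deterministic_oracle(actual, expected):
    # Deterministic: a mismatch IS a failure.
    return actual == expected


def probabilistic_oracle(actual, reference, tolerance=0.05):
    # Probabilistic: a mismatch only suggests the system PROBABLY
    # fails; values within the tolerance band need no further
    # investigation, values outside it trigger one.
    return abs(actual - reference) <= tolerance * abs(reference)


# Deterministic reference function: exact comparison of outcomes.
print(deterministic_oracle(4, 2 + 2))        # True

# Probabilistic evaluation function: e.g. a measured value checked
# against a rough model of the expected result.
print(probabilistic_oracle(101.2, 100.0))    # True  (within 5%)
print(probabilistic_oracle(120.0, 100.0))    # False (investigate)
```

The probabilistic variant is what the next slides call an evaluation function: it does not prove failure, it only flags outcomes worth a closer look.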

INTRODUCING THE TEST ORACLES (2) At least one oracle is needed in every test; more are possible and even preferable. Oracles may be independent of the test (outside view), but can also be integrated within the test (inside view). Several oracle challenges need to be taken into account: Completeness (input and output coverage, function and environment coverage, types of errors possible) Accuracy (logical or statistical, independence of the SUT and environment) Usability & complexity (information format, location & size, maintainability) Cost 3/20/23 SLIDE 112

TYPES OF ORACLES (1)
Complete or Independent: separate program, independent generation of expected results
Verify some characteristics: Sanity Check (approximation), Consistency, Self-Verifying (Data)
Model-Based: using a (digital) model to structure the test and compare behaviour
Diagnostic: code assertions or logging
Property-Based: verify variables or characteristics, verify data consistency
Constraint-Based: check partially valid or invalid individual values or value combinations
Computational: using an inverse function to revert to the input; embedded key or tagged message data
Statistical: compare statistical characteristics of input and output values; compare results between different runs or with an “original” master
Heuristic or Probabilistic
Human: personal observation
Hand-crafted: one-off
No Oracle
3/20/23 SLIDE 113
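Two of the oracle types above, computational (inverse function) and statistical, can be sketched in a few lines. Helper names and tolerances are illustrative:

```python
import math
import statistics


def sqrt_oracle(x, result, eps=1e-9):
    # Computational oracle: apply the inverse function (squaring)
    # to revert to the input, instead of knowing sqrt(x) upfront.
    return abs(result * result - x) <= eps * max(1.0, x)


def mean_oracle(samples, expected_mean, tolerance):
    # Statistical oracle: compare a statistical characteristic of
    # the output against what the input distribution predicts.
    return abs(statistics.mean(samples) - expected_mean) <= tolerance


# Computational: verify a square-root result by squaring it back.
print(sqrt_oracle(2.0, math.sqrt(2.0)))   # True

# Statistical: a "doubling" system fed 0..100 should produce
# outputs whose mean is twice the input mean (2 * 50 = 100).
doubled = [x * 2 for x in range(101)]
print(mean_oracle(doubled, expected_mean=100.0, tolerance=1e-9))  # True
```

Note how neither oracle needs a full table of expected results, which is what makes these types cheaper than a complete or independent oracle.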

CONCLUSIONS Testing is not easy. Testing is not binary: a pass is not necessarily a pass, a fail is not necessarily a fail. Testing is 4-dimensional: take into account variable inputs and variable outputs. Include at least one test oracle within your test, preferably more. Build variations into your test validations. Do not rely on exact metrics, but search for tendencies. Be adaptive. 3/20/23 SLIDE 116

REFLECTION 3/20/23 SLIDE 117

Q&A 3/20/23 SLIDE 118

Thank you! All feedback and suggestions are welcome: [email protected] 03/20/2023