What is Software Testing?

What is Software Testing? There are many published definitions of software testing; however, all of these definitions boil down to essentially the same thing: software testing is the process of executing software in a controlled manner, in order to answer the question "Does the software behave as specified?". Software testing is often used in association with the terms verification and validation. Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification; verification also uses techniques such as reviews, analysis, inspections and walkthroughs. Validation is the process of checking that what has been specified is what the user actually wanted.
• Validation: Are we doing the right job?
• Verification: Are we doing the job right?
The term bug is often used to refer to a problem or fault in a computer program or system; there are software bugs and hardware bugs. The term originated in the United States, at the time when pioneering computers were built out of valves: a series of previously inexplicable faults were eventually traced to moths flying about inside the computer.
Software testing should not be confused with debugging. Debugging is the process of analyzing and locating bugs when software does not behave as expected. Although the identification of some bugs will be obvious from playing with the software, a methodical approach to software testing is a much more thorough means of identifying bugs. Debugging is therefore an activity which supports testing, but cannot replace it. However, no amount of testing can be guaranteed to discover all bugs.
Other activities which are often associated with software testing are static analysis and dynamic analysis. Static analysis investigates the source code of software, looking for problems and gathering metrics without actually executing the code. Dynamic analysis looks at the behavior of software while it is executing, to provide information such as execution traces, timing profiles, and test coverage information.
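As a minimal illustration of the static-analysis idea (and nothing more than a toy sketch), the following Python code uses the standard-library ast module to gather a simple metric from source code without ever executing it; the file name and the 50-line threshold are assumptions chosen for the example.

```python
import ast

def long_functions(path, max_lines=50):
    """Report functions longer than max_lines - a toy static-analysis check.

    The source is parsed, never executed.
    """
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            if length > max_lines:
                findings.append((node.name, node.lineno, length))
    return findings

if __name__ == "__main__":
    # "example.py" is a hypothetical file to analyze.
    for name, line, length in long_functions("example.py"):
        print(f"{name} (line {line}) is {length} lines long")
```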
2. Software Specifications and Testing

The key component of the above definitions is the word specified. Validation and verification activities, such as software testing, cannot be meaningful unless there is a specification for the software. Software could be a single module or unit of code, or an entire system. Depending on the size of the development and the development methods, specification of software can range from a single document to a complex hierarchy of documents. A hierarchy of software specifications will typically contain three or more levels of software specification documents:
• The Requirements Specification, which specifies what the software is required to do and may also specify constraints on how this may be achieved.
• The Architectural Design Specification, which describes the architecture of a design which implements the requirements. Components within the software and the relationships between them will be described in this document.
• Detailed Design Specifications, which describe how each component in the software, down to individual units, is to be implemented.
(Specification hierarchy: Requirements Specification → Architectural Design Specification → Detailed Design Specifications.)
With such a hierarchy of specifications, it is possible to test software at various stages of the development for conformance with each specification. The levels of testing which correspond to the hierarchy of software specifications listed above are:
• Unit Testing, in which each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented.
• Software Integration Testing, in which progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a whole.
• System Testing, in which the software is integrated to the overall product and tested to show that all requirements are met.
A further level of testing is also concerned with requirements:
• Acceptance Testing, upon which acceptance of the completed software is based. This will often use a subset of the system tests, witnessed by the customers for the software or system.
Once each level of software specification has been written, the next step is to design the tests. An important point here is that the tests should be designed before the software is implemented, because if the software were implemented first it would be too tempting to test the software against what it is observed to do (which is not really testing at all), rather than against what it is specified to do. Within each level of testing, once the tests have been applied, test results are evaluated. If a problem is encountered, then either the tests are revised and applied again, or the software is fixed and the tests applied again. This is repeated until no problems are encountered, at which point development can proceed to the next level of testing.
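To make the unit-testing level concrete, here is a minimal sketch using Python's built-in unittest framework. The add function and its one-line specification are hypothetical; the point is that the test cases are written against what the unit is specified to do, not what it happens to do.

```python
import unittest

def add(a, b):
    """Unit under test: specified to return the arithmetic sum of a and b."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        # Verifies conformance with the detailed design, not observed behavior.
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```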
Testing does not end following the conclusion of acceptance testing. Software has to be maintained to fix problems which show up during use and to accommodate new requirements. Software tests have to be repeated, modified and extended. The effort to revise and repeat tests consequently forms a major part of the overall cost of developing and maintaining software. The term regression testing is used to refer to the repetition of earlier successful tests in order to make sure that changes to the software have not introduced side effects.
3. Test Design Documentation

The design of tests is subject to the same basic engineering principles as the design of software. Good design consists of a number of stages which progressively elaborate the design of tests, from an initial high-level strategy to detailed test procedures. These stages are: test strategy, test planning, test case design, and test procedure design. The design of tests has to be driven by the specification of the software. At the highest level this means that tests will be designed to verify that the software faithfully implements the requirements of the Requirements Specification. At lower levels tests will be designed to verify that items of software implement all design decisions made in the Architectural Design Specification and Detailed Design Specifications. As with any design process, each stage of the test design process should be subject to informal and formal review. The ease with which tests can be designed is highly dependent on the design of the software. It is important to consider testability as a key (but usually undocumented) requirement for any software development.
3.1. Test Strategy

The first stage is the formulation of a test strategy. A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used. A test strategy should ideally be organization-wide, being applicable to all of an organization's software developments. Developing a test strategy which efficiently meets the needs of an organization is critical to the success of software development within the organization. The application of a test strategy to a software development project should be detailed in the project's software quality plan.

3.2. Test Plans

The next stage of test design, which is the first stage within a software development project, is the development of a test plan. A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment. A test plan may be project-wide, or may in fact be a hierarchy of plans relating to the various levels of specification and testing:
• An Acceptance Test Plan, describing the plan for acceptance testing of the software. This would usually be published as a separate document, but might be published with the system test plan as a single document.
• A System Test Plan, describing the plan for system integration and testing. This would also usually be published as a separate document, but might be published with the acceptance test plan.
• A Software Integration Test Plan, describing the plan for integration of tested software components. This may form part of the Architectural Design Specification.
• Unit Test Plan(s), describing the plans for testing of individual units of software. These may form part of the Detailed Design Specifications.
The objective of each test plan is to provide a plan for verification, by testing the software, that the software produced fulfils the requirements or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this means the Requirements Specification.

3.3. Test Case Design

Once the test plan for a level of testing has been written, the next stage of test design is to specify a set of test cases or test paths for each item to be tested at that level. A number of test cases will be identified for each item to be tested at each level of testing. Each test case will specify how the implementation of a particular requirement or design decision is to be tested and the criteria for success of the test. The test cases may be documented with the test plan, as a section of the software specification, or in a separate document called a test specification or test description:
• An Acceptance Test Specification, specifying the test cases for acceptance testing of the software. This would usually be published as a separate document, but might be published with the acceptance test plan.
• A System Test Specification, specifying the test cases for system integration and testing. This would also usually be published as a separate document, but might be published with the system test plan.
• Software Integration Test Specifications, specifying the test cases for each stage of integration of tested software components. These may form sections of the Architectural Design Specification.
• Unit Test Specifications, specifying the test cases for testing of individual units of software. These may form sections of the Detailed Design Specifications.
System testing and acceptance testing involve an enormous number of individual test cases. In order to keep track of which requirements are tested by which test cases, an index which cross-references between requirements and test cases is often constructed. This is usually referred to as a Verification Cross Reference Index (VCRI) and is attached to the test specification. Cross-reference indexes may also be used with unit testing and software integration testing.
It is important to design test cases for both positive testing and negative testing. Positive testing checks that the software does what it should. Negative testing checks that the software doesn't do what it shouldn't. The process of designing test cases, including executing them as thought experiments, will often identify bugs before the software has even been built. It is not uncommon to find more bugs when designing tests than when executing tests.
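As a sketch of positive versus negative test cases, assume a hypothetical parse_age function specified to accept whole numbers from 0 to 150 and to reject everything else:

```python
import unittest

def parse_age(text):
    """Specified to accept whole numbers 0-150 and reject anything else."""
    age = int(text)          # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

class TestParseAge(unittest.TestCase):
    def test_positive_valid_age(self):
        # Positive test: the software does what it should.
        self.assertEqual(parse_age("42"), 42)

    def test_negative_rejects_garbage(self):
        # Negative test: the software doesn't do what it shouldn't.
        with self.assertRaises(ValueError):
            parse_age("forty-two")

    def test_negative_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_age("200")

if __name__ == "__main__":
    unittest.main()
```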
3.4. Test Procedures

The final stage of test design is to implement a set of test cases as a test procedure, specifying the exact process to be followed to conduct each of the test cases. This is a fairly straightforward process, which can be likened to designing units of code from higher-level functional descriptions. For each item to be tested, at each level of testing, a test procedure will specify the process to be followed in conducting the appropriate test cases. A test procedure cannot leave out steps or make assumptions. The level of detail must be such that the test procedure is deterministic and repeatable. Test procedures should always be separate items, because they contain a great deal of detail which is irrelevant to software specifications. If AdaTEST or Cantata are used, test procedures may be coded directly as AdaTEST or Cantata test scripts.
4. Test Results Documentation

When tests are executed, the outputs of each test execution should be recorded in a test results file. These results are then assessed against criteria in the test specification to determine the overall outcome of a test. If AdaTEST or Cantata is used, this file will be created and the results assessed automatically according to criteria specified in the test script. Each test execution should also be noted in a test log. The test log will contain records of when each test has been executed and the outcome of each test execution, and may also include key observations made during test execution. Often a test log is not maintained for lower levels of testing (unit test and software integration test). Test reports may be produced at various points during the testing process. A test report will summarize the results of testing and document any analysis. An acceptance test report often forms a contractual document within which acceptance of software is agreed.
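A minimal sketch of appending test executions to a results file that doubles as a test log; the file name, CSV format and record fields are assumptions for the example, not a standard.

```python
import csv
from datetime import datetime

def log_result(test_id, outcome, observation="", results_path="test_results.csv"):
    """Append one execution record: when it ran, the outcome, key observations."""
    with open(results_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(timespec="seconds"), test_id, outcome, observation]
        )

# Example usage after executing each test case:
log_result("TC-017", "PASS")
log_result("TC-018", "FAIL", "timeout after 30 s on login screen")
```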
5. Further Results and Conclusion

Software can be tested at various stages of the development and with various degrees of rigor. Like any development activity, testing consumes effort and effort costs money. Developers should plan for between 30% and 70% of a project's effort to be expended on verification and validation activities, including software testing.
From an economic point of view, the level of testing appropriate to a particular organization and software application will depend on the potential consequences of undetected bugs. Such consequences can range from the minor inconvenience of having to find a workaround for a bug, to multiple deaths. Often overlooked by software developers (but not by customers) is the long-term damage to the credibility of an organization which delivers software to users with bugs in it, and the resulting negative impact on future business. Conversely, a reputation for reliable software will help an organization to obtain future business.
Efficiency and quality are best served by testing software as early in the life cycle as practical, with full regression testing whenever changes are made. The later a bug is found, the higher the cost of fixing it, so it is sound economics to identify and fix bugs as early as possible. Designing tests will help to identify bugs, even before the tests are executed, so designing tests as early as practical in software development is a useful means of reducing the cost of identifying and correcting bugs. In practice the design of each level of software testing will be developed through a number of layers, each adding more detail to the tests. Each level of tests should be designed before the implementation reaches a point which could influence the design of tests in such a way as to be detrimental to the objectivity of the tests. Remember: software should be tested against what it is specified to do, not against what it is actually observed to do.
The effectiveness of testing effort can be maximized by selection of an appropriate testing strategy, good management of the testing process, and appropriate use of tools such as AdaTEST or Cantata to support the testing process. The net result will be an increase in quality and a decrease in costs, both of which can only be beneficial to a software developer's business.
The following list provides some rules to follow as an aid to effective and beneficial software testing:
• Always test against a specification. If tests are not developed from a specification, then it is not testing. Hence, testing is totally reliant upon adequate specification of the software.
• Document the testing process: specify tests and record test results.
• Test hierarchically against each level of specification. Finding more errors earlier will ultimately reduce costs.
• Plan verification and validation activities, particularly testing.
• Complement testing with techniques such as static analysis and dynamic analysis.
• Always test positively (that the software does what it should) but also negatively (that it doesn't do what it shouldn't).
• Have the right attitude to testing - it should be a challenge, not the chore it so often becomes.

BUG LIFE CYCLE

BUG: A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the Application under Test (AUT).
Bug Life Cycle: In the software development process, a bug has a life cycle, and it should go through that life cycle to be closed. A defined life cycle ensures that the process is standardized. The bug attains different states in the life cycle, as described below.

The different states of a bug can be summarized as follows: 1. New 2. Open 3. Assign 4. Test 5. Deferred 6. Rejected 7. Duplicate 8. Verified 9. Reopened 10. Closed

Description of Various Stages:

  • 1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.
  • 2. Open: After a tester has posted a bug, the lead of the tester approves that the bug is genuine and he changes the state as “OPEN”.
  • 3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.
  • 4. Test: Once the developer fixes the bug, he assigns it back to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”. This specifies that the bug has been fixed and released to the testing team.
  • 5. Deferred: A bug changed to the deferred state is expected to be fixed in a later release. There are many reasons for moving a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
  • 6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.
  • 7. Duplicate: If the bug is reported twice, or two bug reports describe the same problem, then one bug's status is changed to “DUPLICATE”.
  • 8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.
  • 9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.
  • 10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
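The life cycle above is essentially a state machine, which can be sketched in code. The transition table below is an illustrative reading of the stage descriptions, not the schema of any particular tracking tool.

```python
from enum import Enum

class BugState(Enum):
    NEW = "New"
    OPEN = "Open"
    ASSIGN = "Assign"
    TEST = "Test"
    DEFERRED = "Deferred"
    REJECTED = "Rejected"
    DUPLICATE = "Duplicate"
    VERIFIED = "Verified"
    REOPENED = "Reopened"
    CLOSED = "Closed"

# Allowed transitions, read off the stage descriptions above (illustrative).
TRANSITIONS = {
    BugState.NEW: {BugState.OPEN, BugState.REJECTED, BugState.DUPLICATE},
    BugState.OPEN: {BugState.ASSIGN},
    BugState.ASSIGN: {BugState.TEST, BugState.DEFERRED},
    BugState.TEST: {BugState.VERIFIED, BugState.REOPENED},
    BugState.VERIFIED: {BugState.CLOSED},
    BugState.REOPENED: {BugState.ASSIGN},
    BugState.DEFERRED: {BugState.ASSIGN},
    BugState.REJECTED: set(),
    BugState.DUPLICATE: set(),
    BugState.CLOSED: set(),
}

def move(state, new_state):
    """Change a bug's state, enforcing the life cycle."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state.value} -> {new_state.value}")
    return new_state

state = move(BugState.NEW, BugState.OPEN)   # lead approves the bug
state = move(state, BugState.ASSIGN)        # assigned to a developer
```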

Guidelines on deciding the Severity of a Bug: Indicate the impact each defect has on testing efforts or on users and administrators of the application under test. This information is used by developers and management as the basis for assigning the priority of work on defects. A sample guideline for assignment of priority levels during the product test phase:
• Critical / Show Stopper: An item that prevents further testing of the product or function under test can be classified as a critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.
• Major / High: A defect that does not function as expected/designed, or causes other functionality to fail to meet requirements, can be classified as a major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations or the wrong field being updated.
• Average / Medium: Defects which do not conform to standards and conventions can be classified as medium bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links which lead to different end points.
• Minor / Low: Cosmetic defects which do not affect the functionality of the system can be classified as minor bugs.

What should be done after a bug is found? The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. The following are items to consider in the tracking process (a sketch of a bug-report record follows the list):

  • Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
  • Bug identifier (number, ID, etc.)
  • Current bug status (e.g., 'Released for Retest', 'New', etc.)
  • The application name or identifier and version
  • The function, module, feature, object, screen, etc. where the bug occurred
  • Environment specifics: system, platform, relevant hardware specifics
  • Test case name/number/identifier
  • One-line bug description
  • Full bug description
  • Description of steps needed to reproduce the bug, if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
  • Names and/or descriptions of files/data/messages/etc. used in the test
  • File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
  • Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
  • Was the bug reproducible?
  • Tester name
  • Test date
  • Bug reporting date
  • Name of developer/group/organization the problem is assigned to
  • Description of problem cause
  • Description of fix
  • Code section/file/module/class/method that was fixed
  • Date of fix
  • Application version that contains the fix
  • Tester responsible for retest
  • Retest date
  • Retest results
  • Regression testing requirements
  • Tester responsible for regression tests
  • Regression testing results
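As promised above, here is a sketch of how such a bug-tracking record might be modeled in code. The field names and sample values are illustrative; real tracking tools define their own schemas.

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    """Sketch of a bug-tracking record covering the fields listed above."""
    bug_id: str
    status: str                 # e.g. "New", "Released for Retest"
    application: str
    version: str
    summary: str                # one-line description
    description: str            # full description
    steps_to_reproduce: str
    environment: str
    severity: int               # e.g. 1 (critical) to 5 (low)
    reproducible: bool
    tester: str
    reported_on: str
    assigned_to: str = ""
    fix_description: str = ""
    fixed_in_version: str = ""
    retest_results: str = ""
    regression_results: str = ""

# Hypothetical example record:
bug = BugReport(
    bug_id="BUG-1042", status="New", application="OrderEntry", version="2.3.1",
    summary="Total not recalculated after discount applied",
    description="Applying a discount leaves the order total unchanged.",
    steps_to_reproduce="1. Create order 2. Apply 10% discount 3. Observe total",
    environment="Windows 10, Chrome 120", severity=2, reproducible=True,
    tester="A. Tester", reported_on="2024-01-15",
)
```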

More Testing Interview Questions

  • 51. What is the difference between stress and load testing?
  • 52. What is the difference between two-tier and three-tier architecture?
  • 53. What is the difference between client/server and web-based testing?
  • 54. What is the difference between integration and system testing?
  • 55. What is the difference between a code walkthrough and a code review?
  • 56. What is the difference between a walkthrough and an inspection?
  • 57. What is the difference between SIT and IST?
  • 58. What is the difference between static and dynamic testing?
  • 59. What is the difference between alpha testing and beta testing?
  • 60. What are the minimum requirements to start testing?
  • 61. What is smoke testing, and when is it done?
  • 62. What is ad hoc testing? When can it be done?
  • 63. What is cookie testing?
  • 64. What is security testing?
  • 65. What is database testing?
  • 66. What is the relationship between quality and testing?
  • 67. How do you determine what is to be tested?
  • 68. How do you go about testing a project?
  • 69. What is the initial stage of testing?
  • 70. What is web-based application testing?
  • 71. What is client/server application testing?
  • 72. What is two-tier and three-tier architecture?
  • 73. What is the use of a functional specification?
  • 74. Why do we prepare test conditions, test cases, and test scripts before starting testing?
  • 75. Is it not a waste of time to prepare test conditions, test cases, and test scripts?
  • 76. How do you go about testing a web application?
  • 77. How do you go about testing a client/server application?
  • 78. What is meant by static testing?
  • 79. Can static testing be done for both web and client/server applications?
  • 80. In static testing, what all can be tested?
  • 81. Can test conditions, test cases, and test scripts help you in performing static testing?
  • 82. What is meant by dynamic testing?
  • 83. Is dynamic testing a functional testing?
  • 84. Is static testing a functional testing?
  • 85. What functional testing do you perform?
  • 86. What is meant by alpha testing?
  • 87. What kind of documents do you need for functional testing?
  • 88. What is meant by beta testing?
  • 89. At what stage does unit testing have to be done?
  • 90. Who can perform unit testing?
  • 91. When will verification and validation be done?
  • 92. What is meant by a code walkthrough?
  • 93. What is meant by a code review?
  • 94. What testing does a tester perform at the end of unit testing?
  • 95. What do you prefer and prepare before starting testing?
  • 96. What is integration testing?
  • 97. What is incremental integration testing?
  • 98. What is meant by system testing?
  • 99. What is meant by SIT?
  • 100. When do you go for integration testing?

Automation Testing - WinRunner

Testing Automation: Software testing can be very costly. Automation is a good way to cut down time and cost. Software testing tools and techniques usually suffer from a lack of generic applicability and scalability. The reason is straight-forward. In order to automate the process, we have to have some ways to generate oracles from the specification, and generate test cases to test the target software against the oracles to decide their correctness. Today we still don't have a full-scale system that has achieved this goal. In general, significant amount of human intervention is still needed in testing. The degree of automation remains at the automated test script level.There are Many automation Testiing Tool For Functional & Regression Testing,Performance Testing, Bug Tracking and Test Management Tool.
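The role of a test oracle can be sketched as follows: a trusted (if slow) reference implementation decides whether the code under test is correct on automatically generated inputs. Both functions below are hypothetical stand-ins chosen for the example.

```python
import random

def target_sort(xs):
    """Code under test (hypothetical)."""
    return sorted(xs)

def oracle_sort(xs):
    """Trusted reference implementation acting as the oracle (bubble sort)."""
    result = list(xs)
    for i in range(len(result)):
        for j in range(len(result) - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

# Generate test cases automatically and judge them against the oracle.
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert target_sort(data) == oracle_sort(data), f"mismatch on {data}"
print("all generated cases agree with the oracle")
```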
What are the Advantages of Automation in testing?
  • 1. Fast
  • 2. Reliable
  • 3. Repeatable
  • 4. Programmable
  • 5. Comprehensive
  • 6. Reusable
WinRunner: WinRunner is a functional and regression testing tool from Mercury.
  • WinRunner:
  • Need for Automation
  • WinRunner Introduction
  • WinRunner Basic Session / Examples
  • WinRunner Advanced Session / Examples

Few Reasons

  • Running tests manually is boring and frustrating
  • Eliminates human error
  • Write once, run as many times as needed
  • Provides increased testing coverage
  • Allows testers to focus on verifying new rather than existing functionality
  • Creates tests that can be maintained and reused throughout the application life cycle

WinRunner is functional testing tool

  • Specifically a regression test tool
  • Helps in creating reusable and adaptable scripts
  • Used for automating the testing process
  • Scripts for it are written in TSL (Test Script Language)
  • Helps in detecting defects early, before regression testing

Test Plan Documenting System

  • Test Plan Design
  • Test Case Design
  • Test Script Creation - Manual & Automated

Test Execution Management

  • Scenario Creation
  • Test Runs

Analysis of Results

  • Reports & Graphs

Defect Tracking System

WinRunner Testing Process

  • Create GUI map
  • Create tests
  • Debug tests
  • Run tests
  • Examine results
  • Report defects

Testing Process of WinRunner in Detail: The WinRunner testing process involves six main stages.

  • Create GUI Map File: so that WinRunner can recognize the GUI objects in the application being tested.
  • Create test scripts: by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
  • Debug tests: run tests in Debug mode to make sure they run smoothly.
  • Run tests: run tests in Verify mode to test your application.
  • View results: determine the success or failure of the tests.
  • Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

WinRunner Testing Modes

Context Sensitive

  • Records the actions on the AUT in terms of GUI objects.
  • Ignores the physical location of the object on the screen

Analog

  • Records mouse clicks, keyboard input, and the exact x- and y-coordinates traveled by the mouse

Types of GUI Map files

GUI Map File per Test mode

  • Separate GUI Map File for each test

Global GUI Map File mode

  • Single GUI Map File for a group of tests

Different modes for running the tests

  • 1. Verify
  • 2. Debug
  • 3. Update

Checkpoints

  • GUI Checkpoint
  • Bitmap Checkpoint
  • Database Checkpoint
  • Synchronization Point

GUI Checkpoint

  • A GUI checkpoint examines the behavior of an object’s properties
  • During execution, the current state of the GUI objects is compared to the expected results

Bitmap Checkpoint

  • Compares captured bitmap images pixel by pixel
  • When running a test that includes bitmap checkpoints, make sure that the screen display settings are the same as when the test script was created. If the screen settings are different, WinRunner will report a bitmap mismatch.

Database Checkpoint

  • A query is defined on the database, and the database checkpoint checks the values contained in the result set
  • The result set is the set of values retrieved from the results of the query
  • Ways to define the query:
  • (a) Microsoft Query
  • (b) ODBC query
  • (c) Data Junction

Synchronization point

  • When you run tests, your application may not always respond to input with the same speed
  • Insert a synchronization point into the test script at the exact point where the problem occurs
  • A synchronization point tells WinRunner to pause the test run in order to wait for a specified response in the application
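Outside WinRunner, the same idea is commonly implemented as a poll-and-wait helper. This generic Python sketch (names and timings are illustrative, not WinRunner's API) pauses the test until the application responds or a timeout expires.

```python
import time

def wait_for(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Plays the role of a synchronization point: the test pauses here
    instead of failing because the application was momentarily slow.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("application did not respond within %.1f s" % timeout)

# Example: wait until a (hypothetical) results window becomes visible.
# wait_for(lambda: app.window("Results").is_visible())
```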

Using Regular Expressions

  • Enables WinRunner to identify objects with varying names or titles
  • Can be used in:
  • An object’s physical description in the GUI map
  • GUI checkpoints
  • Text checkpoints
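For example, a document window whose title varies ("Report - Draft 1", "Report - Final") can be matched with a single expression. The pattern below is illustrative and uses Python's re module rather than WinRunner's own regular-expression syntax.

```python
import re

# One pattern matches every variant of the window title.
title_pattern = re.compile(r"^Report - .+$")

for title in ["Report - Draft 1", "Report - Final", "Settings"]:
    verdict = "matches" if title_pattern.match(title) else "does not match"
    print(f"{title!r} {verdict}")
```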

Virtual Objects:

  • Can teach WinRunner to recognize any bitmap in a window as a GUI object
  • Makes test scripts easier to read and understand

Creating Data-Driven Tests:

  • To test how the AUT performs with multiple sets of data
  • Can be done using:
  • The Data Driver Wizard
  • Adding commands manually in the script

Advantage of Data-Driven Tests:

  • Run the same test with different data
  • Test the AUT for both positive and negative results
  • Expandable
  • Easy to maintain

Manual Data-Driven Test
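A manual data-driven test can be sketched without any wizard: the test logic is written once and the data is looped over from an external table. The CSV file name, its columns, and the login function below are hypothetical stand-ins for the operation under test.

```python
import csv

def login(username, password):
    """Stand-in for the operation under test (hypothetical)."""
    return username == "admin" and password == "secret"

# login_data.csv columns: username,password,expected  (expected is "pass"/"fail")
with open("login_data.csv", newline="") as f:
    for row in csv.DictReader(f):
        expected = row["expected"] == "pass"
        actual = login(row["username"], row["password"])
        status = "OK" if actual == expected else "MISMATCH"
        print(f"{row['username']}: expected={expected} actual={actual} -> {status}")
```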

Web Application Testing

Web Application Testing: Testing a web-based application covers many kinds of testing and examines each and every aspect of the application. Particular challenges include:
  • Short release cycle
  • Constantly Changing Technology
  • Possibility Of huge number of users
  • Inability to control user’s running environment
  • 24hour availability of the website

Functionality Testing: It involves making sure that all the features that most affect user interactions work properly.

  • These features may include:
  • Forms in the page
  • Searches present in the page
  • Pop-up windows (client side and server side)
  • Any online transactions available

Usability Testing:

  • Identify the website's purpose.
  • Identify the intended users.
  • Whether the user completes the task successfully.
  • How much time the user needs to complete the task.
  • Number of pages accessed to complete the task.
  • At which places in the application the user can possibly make mistakes.
  • How the user seeks assistance when lost.
  • Whether the online information provides enough help.
  • How a user might react to the download time of a specific page.
  • Points at which the user gets confused or even fails to complete a task.
  • Number of clicks between tasks, number of seconds between clicks, and number of pages browsed.
  • Whether the user felt successful in using the site.
  • Feedback on navigation and other features of the site.
  • Whether the user would recommend this product to friends.
  • Whether the user understood the terminology.
  • Ideas for improvement.
  • What the user liked or disliked about the website, and why.

Navigation Testing :

  • Easy and quick access to the information.
  • Logical hierarchy of the pages.
  • Confirmation on the page to tell users where they are at any point.
  • Facility to return to a previous state or the home page.
  • Consistent look and layout of every page.
  • Moving to and from pages.
  • Scrolling through the pages.
  • Clicking on images and thumbnails to make sure they work.
  • Testing all the links (both internal and external).
  • Ensuring no broken links exist.
  • Proper layout under different browsers.
  • Measuring the load time of every page.
  • Compatibility and usage of buttons, keyboard shortcuts and mouse actions.

Form Testing:

  • Using the Tab key, the form traverses fields in the proper order, both forward and backward.
  • Testing boundary values for the data input.
  • Checking that the form is capable of trapping invalid data correctly (especially date and numeric formats).
  • The form is capable of updating the information correctly.
  • Tool-tip text messages displayed for the form are proper and not misleading.
  • Objects are aligned in the form properly, and the labels displayed are not misleading.

Page Content Testing :

  • All the images and graphics are displayed correctly.
  • All contents are present as per the requirements.
  • Page structures are consistent across browsers.
  • Critical pages contain the same contents across browsers.
  • All parts of a table or form are present in the right place and in the right order.
  • Links to the relevant contents inside or outside of the page are appropriate.
  • Web pages are visually appealing.
  • Checking that the vital information present in the page does not change across different builds.

Configuration and Compatibility Testing:

  • Is the user behind a firewall?
  • Does the user connect to the application server through a load balancer?
  • Does the browser accept cookies?
  • Are high security settings enabled?
  • What technologies are the developers of the web pages using?
  • Are secure blocking tools being used?

Performance Testing:

  • High activity volume at launch.
  • Time of day.
  • Activity spikes due to marketing promotions.
  • Bottlenecks due to hundreds of users on a network.
  • Download time.
  • Usage patterns.
  • Think time.
  • User arrival rates.
  • Client platforms.
  • Internet access speed.
  • Abandonment rates.

Load Testing:

  • Strategy:
  • Understand the load requirements.
  • Number of user hits per day/week/month.
  • Total concurrent users (worst-case scenario).
  • Peak request rates (number of pages served per second).
  • Identify the tools and their limitations.
  • Generate enough users and transactions to assess capacity.
  • Create a baseline criterion.
  • Check the system behavior in multiple sessions.
  • Identify any other applications possibly running on the client system or server.
  • Execute the test multiple times.
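A bare-bones load generator along these lines can be sketched with the standard library alone. The URL, user count and per-user request count are assumptions to be adjusted to the load requirements identified above; real load tests would use a dedicated tool.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"   # hypothetical system under test
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 50

def one_user(user_id):
    """Simulate one user issuing a series of requests, recording latencies."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.monotonic()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        latencies.append(time.monotonic() - start)
    return latencies

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_user, range(CONCURRENT_USERS)))

all_latencies = [t for user in results for t in user]
print(f"{len(all_latencies)} requests, "
      f"avg {sum(all_latencies) / len(all_latencies):.3f} s, "
      f"max {max(all_latencies):.3f} s")
```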

Security Testing:

  • What precautions exist to prevent or limit attacks from users?
  • Are the browser settings set to ensure maximum security protection?
  • How does the website handle access rights?
  • Does the application have a virus detection mechanism?
  • Does the application handle transaction tampering?
  • Does the e-commerce vendor provide a mechanism to prevent credit card fraud?
  • How does the website encrypt the data transfer mechanism?
  • How does the website authenticate users?
  • Does viewing the source code disclose any vital information?
  • How safe is the credit card or user information?
  • Does the application allow for a file to be digitally signed?

Testing Written Test Questions

  • 1 : With thorough testing it is possible to remove all defects from a program prior to delivery to the customer.
  • a. True b. False
  • The correct answer is b
  • 2 : Which of the following are characteristics of testable software?
  • a. observability b. simplicity c. stability d. all of the above
  • The correct answer is d
  • 3 : The testing technique that requires devising test cases to demonstrate that each program function is operational is called
  • a. black-box testing
  • b. glass-box testing
  • c. grey-box testing
  • d. white-box testing
  • The correct answer is a
  • 4 : The testing technique that requires devising test cases to exercise the internal logic of a software module is called
  • a. behavioral testing
  • b. black-box testing
  • c. grey-box testing d. white-box testing
  • The correct answer is d
  • 5 : What types of errors are missed by black-box testing and can be uncovered by white-box testing?
  • a. behavioral errors
  • b. logic errors
  • c. performance errors
  • d. typographical errors
  • e. both b and d
  • The correct answer is e
  • 6 : Program flow graphs are identical to program flowcharts.
  • a. True b. False
  • The correct answer is b
  • 7 : The cyclomatic complexity metric provides the designer with information regarding
  • the number of
  • a. cycles in the program
  • b. errors in the program
  • c. independent logic paths in the program
  • d. statements in the program
  • The correct answer is c
  • 8 : The cyclomatic complexity of a program can be computed directly from a PDL
  • representation of an algorithm without drawing a program flow graph.
  • a. True b. False
  • The correct answer is a
  • 9 : Condition testing is a control structure testing technique where the criteria used to
  • design test cases is that they
  • a. rely on basis path testing
  • b. exercise the logical conditions in a program module
  • c. select test paths based on the locations and uses of variables
  • d. focus on testing the validity of loop constructs
  • The correct answer is b
  • 10 : Data flow testing is a control structure testing technique where the criteria used to
  • design test cases is that they
  • a. rely on basis path testing
  • b. exercise the logical conditions in a program module
  • c. select test paths based on the locations and uses of variables
  • d. focus on testing the validity of loop constructs
  • The correct answer is c
  • 11 : Loop testing is a control structure testing technique where the criteria used to design
  • test cases is that they
  • a. rely on basis path testing
  • b. exercise the logical conditions in a program module
  • c. select test paths based on the locations and uses of variables
  • d. focus on testing the validity of loop constructs
  • The correct answer is d
  • 12 : Black-box testing attempts to find errors in which of the following categories
  • a. incorrect or missing functions
  • b. interface errors
  • c. performance errors
  • d. all of the above
  • e. none of the above
  • The correct answer is d
  • 13 : Graph-based testing methods can only be used for object-oriented systems
  • a. True b. False
  • The correct answer is b
  • 14 : Equivalence testing divides the input domain into classes of data from which test cases can be derived to reduce the total number of test cases that must be developed.
  • a. True b. False
  • The correct answer is a
  • 15 : Boundary value analysis can only be used to do white-box testing.
  • a. True b. False
  • The correct answer is b
  • 16 : Comparison testing is typically done to test two competing products as part of customer market analysis prior to product release.
  • a. True b. False
  • The correct answer is b
  • 17 : Orthogonal array testing enables the test designer to maximize the coverage of the test cases devised for relatively small input domains.
  • a. True b. False
  • The correct answer is a
  • 18 : Test case design "in the small" for OO software is driven by the algorithmic detail of
  • the individual operations.
  • a. True b. False
  • The correct answer is a
  • 19 : Encapsulation of attributes and operations inside objects makes it easy to obtain object state information during testing.
  • a. True b. False
  • The correct answer is b
  • 20 : Use-cases can provide useful input into the design of black-box and state-based tests
  • of OO software.
  • a. True b. False
  • The correct answer is a
  • 21 : Fault-based testing is best reserved for
  • a. conventional software testing
  • b. operations and classes that are critical or suspect
  • c. use-case validation
  • d. white-box testing of operator algorithms
  • The correct answer is b
  • 22 : Testing OO class operations is made more difficult by
  • a. encapsulation b. inheritance c. polymorphism d. both b and c
  • The correct answer is d
  • 23 : Scenario-based testing
  • a. concentrates on actor and software interaction
  • b. misses errors in specifications
  • c. misses errors in subsystem interactions
  • d. both a and b
  • The correct answer is a
  • 24 : Deep structure testing is not designed to
  • a. examine object behaviors
  • b. exercise communication mechanisms
  • c. exercise object dependencies
  • d. exercise structure observable by the user
  • The correct answer is d
  • 25 : Random order tests are conducted to exercise different class instance life histories.
  • a. True b. False
  • The correct answer is a
  • 26 : Which of these techniques is not useful for partition testing at the class level
  • a. attribute-based partitioning
  • b. category-based partitioning
  • c. equivalence class partitioning
  • d. state-based partitioning
  • The correct answer is c
  • 27 : Multiple class testing is too complex to be tested using random test cases.
  • a. True b. False
  • The correct answer is b
  • 28 : Tests derived from behavioral class models should be based on the
  • a. data flow diagram
  • b. object-relation diagram
  • c. state diagram
  • d. use-case diagram
  • The correct answer is c
  • 29 : Client/server architectures cannot be properly tested because network load is highly
  • variable.
  • a. True b. False
  • The correct answer is b
  • 30 : Real-time applications add a new and potentially difficult element to the testing mix
  • a. performance b. reliability c. security d. time
  • The correct answer is d

Questions and Answers

  • 1. What is Software Testing?
  • Ans- To validate the software against the requirements.
  • 2. What is the Purpose of Testing?
  • Ans- To check whether the system meets the requirements.
  • 3. What types of testing do testers perform?
  • Ans- Black Box
  • 4. What is the Outcome of Testing?
  • Ans- A system which is bug-free and meets the system requirements.
  • 5. What kind of testing have you done?
  • Ans- Black Box
  • 6. What is the need for testing?
  • Ans- To make an error-free product and reduce development cost.
  • 7. What are the entry criteria for Functionality and Performance testing?
  • Ans- Functional: should have stable, functional code. Performance: after system testing.
  • 8. What is a test metric?
  • Ans- Records how many test cases we have executed, and of those, how many passed, failed, or could not be executed.
  • 9. Why do you go for White box testing, when Black box testing is available?
  • Ans- To check code, branches and loops in code
  • 10. What are the entry criteria for Automation testing?
  • Ans- Should have stable code.
  • 11. When to start and Stop Testing?
  • Ans- When the system meets the requirements and there is no change in functionality.
  • 12. What is Quality?
  • Ans- Quality consists of two parts: QA and QC. From the customer's point of view, fit for use and meeting user requirements is quality.
  • 13. What is a Baseline document? Can you name any two?
  • Ans- A document which is a standard, like a test plan format or a checklist for system testing.
  • 14. What is verification?
  • Ans- To review the document.
  • 15. What is validation?
  • Ans- To validate the system against the requirements.
  • 16. What is quality assurance?
  • Ans- Bug prevention activity is called QA.
  • 17. What is quality control?
  • Ans- Bug detection activity is called QC.
  • 18. What is SDLC and TDLC?
  • Ans- SDLC- Software Development Life Cycle and Testing is a part of it. TDLC-Test Development Life Cycle.
  • 19. What are the Qualities of a Tester?
  • Ans- Should have the ability to find hidden bugs as early as possible in the SDLC.
  • 20. When to start and Stop Testing?
  • Ans- Start: at the time of requirement gathering. Stop: when the system meets the requirements.
  • 21. What are the various levels of testing?
  • Ans- Unit, Integration, System and Acceptance testing.
  • 22. What are the types of testing you know and you experienced?
  • Ans- Black box, functional testing, system testing, GUI testing, etc.
  • 23. After completing testing, what would you deliver to the client?
  • Ans- Testware
  • 24. What is a Test Bed?
  • Ans- Test data is called the test bed.
  • 25. What are Data Guidelines?
  • Ans- Guidelines which are to be followed for the preparation of test data.
  • 26. Why do you go for Test Bed?
  • Ans- To validate the system against the required input.
  • 27. What is Severity and Priority and who will decide what?
  • Ans- Severity: how severe the bug is for the application (e.g., critical). Priority: how urgent the functionality in which the bug occurs is.
  • 28. Can Automation testing replace manual testing? If so, how?
  • Ans- No. When there are many modifications in functionality, it is near impossible to keep the automated scripts updated, so manual testing is still needed.
  • 29. What is a test condition?
  • Ans: logical input data against which we validate the system.
  • 30. What is the test data?
  • Ans- input data against which we validate the system.
  • 31. What is an Inconsistent bug?
  • Ans- A bug which is not reproducible.
  • 32. What is the difference between Re-testing and Regression testing?
  • Ans- Regression: checking that changes in the code have not affected the working functionality.
  • Retesting: testing the functionality of the application again.

Test Case

  • Test Case: A test case is a document that describes an input, action or event and an expected response, to determine if a feature of an application is working correctly and meets the user requirements. A good test case is one that has a high probability of finding an as-yet undiscovered error.
  • The test case document contains the following fields:
  • Test case id: This is an id given to every test case; it should be unique.
  • Test case description: This field contains a brief description of what we are going to test in the application.
  • Test input: This contains the input that we are going to give to test the application or system.
  • Test action: This field explains the action to be performed to execute the test case.
  • Expected results: This field contains a description of what the tester should see after all test steps have been completed.
  • Actual results: This field contains a brief description of what the tester saw after the test steps had been completed.
  • Result: This is often recorded as either pass or fail. If the expected result and the actual result are the same, then the result is pass; else the result is fail.
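As an illustration, here is one hypothetical test case filled in with the fields above; every value is invented for the example.

```python
test_case = {
    "test_case_id": "TC-LOGIN-001",
    "description": "Verify that a registered user can log in with valid credentials",
    "test_input": "username = 'jsmith', password = 'Passw0rd!'",
    "test_action": "Open the login page, enter the credentials, click 'Sign in'",
    "expected_results": "The home page loads; a welcome banner shows 'jsmith'",
    "actual_results": "Home page displayed with welcome banner 'jsmith'",
    "result": "Pass",  # Pass because expected and actual results match
}
```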

Test Plan Contents

Contents of test plan:
Introduction: This section contains the purpose and objectives of the test plan.
Scope:
(A) Items to be tested: Refer to the functional requirements that specify the features and functions to be tested.
(B) Items not to be tested: List the features and functions that will not be covered in this test plan. Identify briefly the reasons for leaving them out.

Test Strategy: Testing is the process of analyzing a software item to detect the differences between existing and required conditions and to evaluate the features of the software item. This may appear as a specific document (such as a Test Specification), or it may be part of the organization's standard test approach. For each level of testing, there should be a test plan and an appropriate set of deliverables. The test strategy should be clearly defined, and the Software Test Plan acts as the high-level test plan.
Environment Requirements:
(A) System Requirements: This section should be filled out in detail for new projects. For existing maintenance tasks, a simple cross-reference to the document describing existing system requirements is fine.
(B) Hardware/software requirements: This section contains the details of the system/server required to install the application or perform the testing, specific software that needs to be installed on the system to get the application running or to connect to the database, connectivity-related issues, etc.
Test Schedule: Identify the estimated effort required to execute the test plan. Include both a range and a confidence level. Identify the resources available to carry out the test plan. Identify time or resource constraints that will lead to a risk of the test project falling behind schedule, below expected scope, or below expected quality.
Resources and Responsibilities: This section explains the roles and responsibilities of the management team, testing team, business team, testing support team and external support team.
Deliverables: This section contains the various deliverables that are due to the client at various points of time, i.e. daily, weekly, start of project, end of project, etc. These could include test plans, test procedures, test matrices, status reports, test scripts, etc. Templates for all of these may also be attached.
Suspension / Exit Criteria: This is a particular risk clause defining under what circumstances testing would stop and restart. If any defects are found which seriously impact test progress, the QA manager may choose to suspend testing.
Criteria that will justify test suspension are:
(I) Hardware/software is not available at the times indicated in the project schedule.
(II) Source code contains one or more critical defects, which seriously prevents or limits testing progress.
(III)Assigned test resources are not available when needed by the test team.
Risks: Define in advance what could go wrong with a plan and the measures that will be taken to deal with these problems.
(A) The event causing the risk.
(B) The likelihood of the event happening.
(C) The impact on the plan if the event occurs.
Tools: Apart from manual testing, list the tools used for automating the unit testing, functional testing, performance testing and user interface testing.
Documentation: This section contains the embedded documents, or links to documents, which have been or will be used in the course of testing, e.g. templates used for reports, test cases, etc. Reference documents can also be attached here.
Approvals: This section contains the mutual agreement between the client and QA team with both leads/ managers signing off their agreement on the test plan.

Testing Process

The steps in the testing process are as follows.
1. Requirement Analysis: Testing should begin in the requirements phase of the software development life cycle (SDLC). The actual requirements should be understood clearly with the help of Requirement Specification documents, Functional Specification documents, Design Specification documents, Use Case documents, etc.
During the requirement analysis the following should be considered:
- Are the definitions and descriptions of the required capabilities precise?
- Is there a clear delineation between the system and its environment?
- Can the requirements be realized in practice?
- Can the requirements be tested effectively?
2. Test Planning: During this phase the Test Strategy, Test Plan and Test Bed will be created. A test plan is a systematic approach to testing a system or software. The plan should identify:
- Which aspects of the system should be tested?
- Criteria for success.
- The methods and techniques to be used.
- Personnel responsible for the testing.
- The different test phases and test methodologies.
- Manual and automation testing.
- Defect management, configuration management, risk management, etc.
- Evaluation and identification of test and defect-tracking tools.
3. Test Environment Setup: During this phase the required environment setup will be done. The following should also be taken into account:
- Network connectivity.
- All software/tool installation and configuration.
- Coordination with vendors and others.
4. Test Design: During this phase:
- Test scenarios will be identified.
- Test cases will be prepared.
- Test data and test scripts will be prepared.
- Test case reviews will be conducted.
5. Test Automation: In this phase the requirements for automation will be identified, along with the tools to be used. Designing the framework, scripting, script integration, review and approval are undertaken in this phase.
6. Test Execution and Defect Tracking: Testers execute the software based on the plans and tests, and report any errors found to the development team. In this phase:
- Test cases will be executed.
- Test scripts will be run.
- Test results will be analyzed.
- Defects will be raised and tracked to closure.
7. Test Reports: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
- Test summary reports will be prepared.
- Test metrics and process improvements will be made.
- Build release.
- Receiving acceptance.

Software Development Life Cycle

SDLC: The software development life cycle (SDLC) is the entire process of formal, logical steps taken to develop a software product. Software development is the process of developing software through successive phases in an orderly way. This process includes not only the actual writing of code but also the preparation of requirements and objectives, the design of what is to be coded, and confirmation that what is developed has met objectives.
The software development life cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application. Typical phases of software development:
1) Requirement Analysis 2) Software design 3) Development or Coding 4) Testing 5) Maintenance
Requirement Analysis: The new system requirements are defined and the requirements of the desired software product are extracted. Based on the business scenario, the SRS (Software Requirement Specification) document is prepared in this phase. The purpose of this document is to specify the functional requirements of the software that will be produced by the project group. The specifications are intended to guide the group through the development process.
Design: Plans are laid out concerning the physical construction, hardware, operating systems, programming, communications, and security issues for the software. The design phase is concerned with making sure the software system will meet the requirements of the product. It should also ensure that future requirements can be addressed. Development (or) Coding: Programs are developed by the developers with the help of the design; the design is reduced to code by the software engineers. Testing: Testing is evaluating the software to check for the user requirements. Here the software is evaluated with the intent of finding defects. Maintenance: Once the new system is up and running for a while, it should be exhaustively evaluated. Maintenance must be kept up rigorously at all times. Users of the system should be kept up to date concerning the latest modifications and procedures.

Testing Techniques


  • Black Box Testing -Testing of a function without knowing internal structure of the program.
  • White Box Testing -Testing of a function with knowing internal structure of the program.
  • Regression Testing - To ensure that code changes have not had an adverse effect on the other modules or on existing functions.
  • Functional Testing: Study the SRS and identify the unit functions. For each unit function, take each input function, identify its equivalence classes, form test cases, form test cases for boundary values, and form test cases for error guessing. Then form a unit-function vs. test-cases cross-reference matrix and find the coverage. (An equivalence-class/boundary-value sketch follows this list.)
  • Unit Testing:
  • The most 'micro' scale of testing, used to test particular functions or code modules.
  • Typically done by the programmer and not by testers.
  • Unit - the smallest testable piece of software.
  • A unit can be compiled/assembled/linked/loaded and put under a test harness.
  • Unit testing is done to try to show that the unit does not satisfy the functional specification and/or that its implemented structure does not match the intended design structure; that is, to expose defects in the unit.
  • Integration Testing: Integration is a systematic approach to building the complete software structure specified in the design from unit-tested modules. Integration is performed in two ways, called Pre-test and Pro-test.
  • Pre-test: the testing performed in the module development area is called Pre-test. The Pre-test is required only if development is done in the module development area.
  • System Testing:
  • A system is the complete, integrated product.
  • System testing is aimed at revealing bugs that cannot be attributed to individual components as such, but rather to inconsistencies between components or to the planned interactions between components.
  • Concern: issues and behaviors that can only be exposed by testing the entire integrated system (e.g., performance, security, recovery).
  • Volume Testing: The purpose of Volume Testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries such as across servers and across disk partitions on one server.
  • Stress Testing: This refers to testing system functionality while the system is under unusually heavy or peak load; it is similar to the validation testing mentioned previously but is carried out in a "high-stress" environment. This requires making some predictions about the expected load levels of your Web site.
  • Usability Testing: Usability means that systems are easy and fast to learn, efficient to use, easy to remember, cause no operating errors, and offer a high degree of satisfaction for the user. Usability testing brings the usage perspective, the side towards the user, into focus.
  • Security testing: If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these and also test your site's overall protection against unauthorized internal or external access.
  • Alpha Testing: Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
  • Beta Testing: Testing when development and testing are essentially completed and final bugs and problems need to be found before the final release. Typically done by end-users or others, not by programmers.
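To make the Functional Testing and Unit Testing items above concrete, here is a minimal sketch using Python's unittest module. The validate_age function and its valid range of 0-120 are illustrative assumptions; the test cases are formed from equivalence classes (valid versus invalid input) plus boundary values at the edges of the valid range, exactly as the functional-testing steps describe:

    import unittest

    def validate_age(age):
        """Return True if age falls in the assumed valid range 0-120."""
        return 0 <= age <= 120

    class ValidateAgeTest(unittest.TestCase):
        def test_valid_equivalence_class(self):
            # One representative value from the valid class.
            self.assertTrue(validate_age(35))

        def test_invalid_equivalence_classes(self):
            # Representative values from the invalid classes.
            self.assertFalse(validate_age(-5))
            self.assertFalse(validate_age(200))

        def test_boundary_values(self):
            # Values just inside and just outside each limit.
            self.assertTrue(validate_age(0))
            self.assertTrue(validate_age(120))
            self.assertFalse(validate_age(-1))
            self.assertFalse(validate_age(121))

    if __name__ == "__main__":
        unittest.main()

In line with the definition of unit testing above, each test tries to show that the unit does not satisfy its specification; a failing test has therefore succeeded in exposing a defect.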

Why Testing?

  • To unearth and correct defects.
  • To detect defects early and to reduce the cost of defect fixing.
  • To ensure that the product works as users expect it to.
  • To avoid users detecting problems.

What is Testing?

  • An examination of the behavior of a program by executing it on sample data sets.
  • Testing comprises a set of activities to detect defects in the produced material.

Testing Definitions

  • Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.
  • Alpha Testing: Early testing of a software product conducted by selected customers.
  • Ad Hoc Testing: Interactive testing in which developers invoke application units explicitly and individually compare execution results to expected results.
  • Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
  • Beta Testing: Testing of a pre-release of a software product conducted by customers.
  • Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
  • Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
  • Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
  • Branch Testing: Testing in which all branches in the program source code are tested at least once.
  • Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
  • Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
  • Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
  • Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
  • Depth Testing: A test that exercises a feature of a product in full detail.
  • Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
  • End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
  • Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
  • Gorilla Testing: Testing one particular module or functionality heavily.
  • Loop Testing: A white box testing technique that exercises program loops.
  • Path Testing: Testing in which all paths in the program source code are tested at least once.
  • Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.
  • Structural Testing: Testing based on an analysis of the internal workings and structure of a piece of software.
  • Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
  • Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. (A small sketch of a stub appears after this list.)
  • Volume Testing: Testing which confirms that any values that may become large over time can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
  • Workflow Testing: Scripted end-to-end testing which duplicates the specific workflows expected to be used by the end-user.
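As a concrete illustration of the stubs mentioned under Top Down Testing, here is a minimal sketch in Python. The OrderService and PaymentGatewayStub names and the flat 5% discount rule are illustrative assumptions; the point is that the high-level component is tested first while its lower-level dependency is simulated by a stub:

    # Top-level component under test; it depends on a lower-level payment component.
    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway

        def place_order(self, amount):
            # Assumed business rule for this sketch: a 5% discount on every order.
            return self.gateway.charge(round(amount * 0.95, 2))

    # Stub simulating the lower-level component, which is not yet integrated.
    class PaymentGatewayStub:
        def __init__(self):
            self.charged = []

        def charge(self, amount):
            self.charged.append(amount)   # record the call for later assertions
            return "OK"                   # canned response instead of a real charge

    # Top-down test: exercise OrderService against the stub.
    stub = PaymentGatewayStub()
    service = OrderService(stub)
    assert service.place_order(100.0) == "OK"
    assert stub.charged == [95.0]
    print("top-down test with stub passed")

Once the real payment component has itself been tested, it replaces the stub and the same test is repeated against the integrated pair, which is exactly the progression Top Down Testing describes.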