BUG LIFE CYCLE
The different states of a bug can be summarized as follows: 1. New 2. Open 3. Assign 4. Test 5. Deferred 6. Rejected 7. Duplicate 8. Verified 9. Reopened 10. Closed
Description of Various Stages:
- 1. New: When the bug is posted for the first time, its state is "NEW". This means that the bug has not yet been approved.
- 2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to "OPEN".
- 3. Assign: Once the lead changes the state to "OPEN", he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to "ASSIGN".
- 4. Test: Once the developer fixes the bug, he has to assign it to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to "TEST". It indicates that the bug has been fixed and released to the testing team.
- 5. Deferred: A bug changed to the deferred state is expected to be fixed in a future release. Several factors can lead to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
- 6. Rejected: If the developer feels that the bug is not genuine, he rejects it. The state of the bug is then changed to "REJECTED".
- 7. Duplicate: If the bug is reported twice, or two bugs describe the same underlying problem, the status of one of them is changed to "DUPLICATE".
- 8. Verified: Once the bug is fixed and the status is changed to "TEST", the tester retests it. If the bug is no longer present in the software, he confirms that the bug is fixed and changes the status to "VERIFIED".
- 9. Reopened: If the bug still exists even after the developer has fixed it, the tester changes the status to "REOPENED". The bug then traverses the life cycle once again.
- 10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to "CLOSED". This state means that the bug is fixed, tested, and approved.
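The life cycle above is essentially a small state machine. As a minimal sketch (the states follow the list above, but the exact transition table and helper function are our own illustration; real trackers and teams vary):

```python
# Hypothetical model of the bug life cycle as a state machine.
# States and transitions follow the stages described above.
ALLOWED_TRANSITIONS = {
    "NEW":       {"OPEN", "REJECTED", "DUPLICATE"},
    "OPEN":      {"ASSIGN"},
    "ASSIGN":    {"TEST", "DEFERRED"},
    "TEST":      {"VERIFIED", "REOPENED"},
    "VERIFIED":  {"CLOSED"},
    "REOPENED":  {"ASSIGN"},        # the bug traverses the cycle again
    "DEFERRED":  {"ASSIGN"},        # picked up again in a later release
    "REJECTED":  set(),
    "DUPLICATE": set(),
    "CLOSED":    set(),
}

def change_state(current: str, new: str) -> str:
    """Move a bug to a new state, enforcing the life cycle above."""
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new

state = change_state("NEW", "OPEN")     # ok
state = change_state(state, "ASSIGN")   # ok
# change_state("NEW", "CLOSED") would raise ValueError
```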
Guidelines on Deciding the Severity of a Bug: Severity indicates the impact each defect has on testing efforts or on users and administrators of the application under test. This information is used by developers and management as the basis for assigning priority of work on defects. A sample guideline for assigning severity levels during the product test phase:
- Critical / Show Stopper: An item that prevents further testing of the product or function under test. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.
- Major / High: A defect that does not function as expected/designed, or causes other functionality to fail to meet requirements. A workaround can be provided for such bugs. Examples include inaccurate calculations or the wrong field being updated.
- Average / Medium: A defect that does not conform to standards and conventions. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links that lead to different end points.
- Minor / Low: A cosmetic defect that does not affect the functionality of the system.
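As a rough illustration of these guidelines (the level names mirror the paragraph above; the numeric scale is an assumption, since numbering conventions vary between teams):

```python
# Hypothetical severity scale based on the guidelines above.
SEVERITY_GUIDE = {
    1: ("Critical / Show Stopper", "Blocks further testing; no workaround possible."),
    2: ("Major / High", "Does not work as designed or breaks requirements; workaround exists."),
    3: ("Average / Medium", "Violates standards or conventions; easy workaround exists."),
    4: ("Minor / Low", "Cosmetic; does not affect functionality."),
}

level, meaning = SEVERITY_GUIDE[2]
print(f"{level}: {meaning}")
```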
What should be done after a bug is found? The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. The following are items to consider in the tracking process:
- Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
- Bug identifier (number, ID, etc.)
- Current bug status (e.g., 'Released for Retest', 'New', etc.)
- The application name or identifier and version
- The function, module, feature, object, screen, etc. where the bug occurred
- Environment specifics, system, platform, relevant hardware specifics
- Test case name/number/identifier
- One-line bug description
- Full bug description
- Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
- Names and/or descriptions of file/data/messages/etc. used in test
- File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
- Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
- Was the bug reproducible?
- Tester name
- Test date
- Bug reporting date
- Name of developer/group/organization the problem is assigned to
- Description of problem cause
- Description of fix
- Code section/file/module/class/method that was fixed
- Date of fix
- Application version that contains the fix
- Tester responsible for retest
- Retest date
- Retest results
- Regression testing requirements
- Tester responsible for regression tests
- Regression testing results
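Collected into a record, the checklist above might look like the following sketch (the field names are our own; real trackers such as Bugzilla or JIRA define their own schemas):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BugReport:
    """A defect record covering the tracking items listed above."""
    bug_id: str                        # bug identifier (number, ID, etc.)
    status: str                        # e.g. 'New', 'Released for Retest'
    application: str                   # application name/identifier and version
    location: str                      # function, module, screen where found
    environment: str                   # system, platform, hardware specifics
    summary: str                       # one-line bug description
    description: str                   # full description and steps to reproduce
    severity: int                      # e.g. 1 (critical) to 5 (low)
    reproducible: bool
    tester: str
    report_date: str
    assigned_to: Optional[str] = None  # developer/group/organization assigned
    cause: Optional[str] = None        # description of problem cause
    fix_description: Optional[str] = None
    fixed_in_version: Optional[str] = None
    retest_results: Optional[str] = None
    regression_results: Optional[str] = None
```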
More Testing Interview Questions
- 51. What is the difference between Stress & Load Testing?
- 52. What is the difference between Two Tier & Three Tier Architecture?
- 53. What is the difference between Client Server & Web Based Testing?
- 54. What is the difference between Integration & System Testing?
- 55. What is the difference between Code Walkthrough & Code Review?
- 56. What is the difference between walkthrough and inspection?
- 57. What is the difference between SIT & IST?
- 58. What is the difference between static and dynamic testing?
- 59. What is the difference between alpha testing and beta testing?
- 60. What are the minimum requirements to start testing?
- 61. What is Smoke Testing & when is it done?
- 62. What is Adhoc Testing? When can it be done?
- 63. What is cookie testing?
- 64. What is security testing?
- 65. What is database testing?
- 66. What is the relationship between Quality & Testing?
- 67. How do you determine what is to be tested?
- 68. How do you go about testing a project?
- 69. What is the Initial Stage of testing?
- 70. What is Web Based Application Testing?
- 71. What is Client Server Application Testing?
- 72. What is Two Tier & Three tier Architecture?
- 73. What is the use of Functional Specification?
- 74. Why do we prepare test conditions, test cases, and test scripts before starting testing?
- 75. Is it not a waste of time to prepare the test condition, test case & test script?
- 76. How do you go about testing of Web Application?
- 77. How do you go about testing of Client Server Application?
- 78. What is meant by Static Testing?
- 79. Can the static testing be done for both Web & Client Server Application?
- 80. In the Static Testing, what all can be tested?
- 81. Can test condition, test case & test script help you in performing the static testing?
- 82. What is meant by dynamic testing?
- 83. Is the dynamic testing a functional testing?
- 84. Is the Static testing a functional testing?
- 85. What are the functional testing you perform?
- 86. What is meant by Alpha Testing?
- 87. What kind of document do you need for functional testing?
- 88. What is meant by Beta Testing?
- 89. At what stage the unit testing has to be done?
- 90. Who can perform Unit Testing?
- 91. When will the Verification & Validation be done?
- 92. What is meant by Code Walkthrough?
- 93. What is meant by Code Review?
- 94. What is the testing that a tester performs at the end of Unit Testing?
- 95. What are the things, you prefer & Prepare before starting Testing?
- 96. What is Integration Testing?
- 97. What is Incremental Integration Testing?
- 98. What is meant by System Testing?
- 99. What is meant by SIT?
- 100. When do you go for Integration Testing?
Automation Testing - WinRunner
Automated testing is:
- 1. Fast
- 2. Reliable
- 3. Repeatable
- 4. Programmable
- 5. Comprehensive
- 6. Reusable
- WinRunner:
- Need for Automation
- WinRunner Introduction
- WinRunner Basic Session / Examples
- WinRunner Advanced Session / Examples
A Few Reasons
- Running tests manually is boring and frustrating
- Eliminates human error
- Write once, run as many times as needed
- Provides increased testing coverage
- Allows testers to focus on verifying new rather than existing functionality
- Creates tests that can be maintained and reused throughout the application life cycle
WinRunner is a functional testing tool
- Specifically a regression test tool
- Helps in creating reusable and adaptable scripts
- Used for automating the testing process
- Scripts are written in TSL (Test Script Language)
- Helps in detecting defects early, before regression testing
Test Plan Documenting System
- Test Plan Design
- Test Case Design
- Test Script Creation - Manual & Automated
Test Execution Management
- Scenario Creation
- Test Runs
Analysis of Results
- Reports & Graphs
Defect Tracking System
WinRunner Testing Process
- Create GUI map
- Create tests
- Debug tests
- Run tests
- Examine results
- Report defects
Testing Process of WinRunner in Detail
The WinRunner testing process involves six main stages.
- Create GUI Map File: so that WinRunner can recognize the GUI objects in the application being tested.
- Create test scripts: by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
- Debug tests: run tests in Debug mode to make sure they run smoothly.
- Run tests: run tests in Verify mode to test your application.
- View results: determine the success or failure of the tests.
- Report defects: if a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.
WinRunner Testing Modes
Context Sensitive
- Records the actions on the AUT in terms of GUI objects.
- Ignores the physical location of the object on the screen
Analog
- Records mouse clicks, keyboard input, and the exact x- and y-coordinates traveled by the mouse
Types of GUI Map files
GUI Map File per Test mode
- Separate GUI Map File for each test
Global GUI Map File mode
- Single GUI Map File for a group of tests
Different modes for running the tests
- 1. Verify
- 2. Debug
- 3. Update
Checkpoints
- GUI Checkpoint
- Bitmap Checkpoint
- Database Checkpoint
- Synchronization Point
GUI Checkpoint
- A GUI checkpoint examines the behavior of an object's properties
- During execution, the current state of the GUI objects is compared to the expected results
Bitmap Checkpoint
- Compares captured bitmap images pixel by pixel
- When running a test that includes bitmap checkpoints, make sure that the screen display settings are the same as when the test script was created. If the screen settings are different, WinRunner will report a bitmap mismatch.
Database Checkpoint
- A query is defined on the database, and the database checkpoint checks the values contained in the result set
- The result set is a set of values retrieved from the results of the query
- Ways to define the query:
- (a) Microsoft Query
- (b) ODBC query
- (c) Data Junction
Synchronization point
- When you run tests, your application may not always respond to input with the same speed
- Insert a synchronization point into the test script at the exact point where the problem occurs
- A synchronization point tells WinRunner to pause the test run in order to wait for a specified response in the application
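The underlying idea is generic: poll for an expected condition instead of assuming a fixed response time. WinRunner expresses this with TSL synchronization functions; the Python helper below is only a language-neutral sketch of the concept, and read_status() is a hypothetical stand-in:

```python
import time

def wait_for(condition, timeout: float = 10.0, interval: float = 0.5) -> bool:
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Mirrors what a synchronization point does: pause the test run until
    the application reaches the expected state.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example with a hypothetical read_status() helper:
# ok = wait_for(lambda: read_status() == "READY", timeout=30)
```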
Using Regular Expressions
- Enables WinRunner to identify objects with varying names or titles
- Can be used in:
- An object's physical description in the GUI map
- GUI checkpoints
- Text checkpoints
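WinRunner has its own regular-expression syntax for physical descriptions, but the concept is the same as in any language. A small Python illustration of one pattern matching a window title that varies from run to run (the title format is invented for the example):

```python
import re

# A window titled e.g. "Flight Reservation - Order 42" varies by order
# number; a single pattern recognizes every variant.
title_pattern = re.compile(r"Flight Reservation - Order \d+")

for title in ("Flight Reservation - Order 42",
              "Flight Reservation - Order 107"):
    assert title_pattern.fullmatch(title)   # both variants match
```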
Virtual Objects:
- Can teach WinRunner to recognize any bitmap in a window as a GUI object
- Make test scripts easier to read and understand
Creating Data-Driven Tests:
- To test how the AUT performs with multiple sets of data
- Can be done using either:
- the Data Driver Wizard, or
- commands added manually in the script
Advantage of Data-Driven Tests:
- Run the same test with different data
- Test the AUT for both positive and negative results
- Expandable
- Easy to maintain
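WinRunner drives data-driven tests through the DataDriver Wizard or TSL commands added by hand; the sketch below shows the same idea in a tool-neutral way with Python's unittest (the login function and its data values are invented for illustration):

```python
import unittest

def is_valid_login(user: str, password: str) -> bool:
    """Stand-in for the application under test (hypothetical)."""
    return user == "admin" and password == "secret"

# One test, many data rows: the expected outcome travels with the inputs.
LOGIN_DATA = [
    ("admin", "secret", True),    # positive case
    ("admin", "wrong",  False),   # negative case
    ("",      "",       False),   # empty/boundary case
]

class TestLoginDataDriven(unittest.TestCase):
    def test_login(self):
        for user, password, expected in LOGIN_DATA:
            with self.subTest(user=user, password=password):
                self.assertEqual(is_valid_login(user, password), expected)

if __name__ == "__main__":
    unittest.main()
```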
Web Application Testing
Challenges in testing web applications include:
- Short release cycles
- Constantly changing technology
- Possibility of a huge number of users
- Inability to control the user's running environment
- 24-hour availability of the website
Functionality Testing: This involves making sure that all the features that most affect user interactions work properly. These features may include:
- Forms in the page
- Searches present in the page
- Pop-up windows (client side and server side)
- Any online transactions available
Usability Testing:
- Identify the website's purpose.
- Identify the intended users.
- Whether the user completes the task successfully.
- How much time the user needs to complete the task.
- Number of pages accessed to complete the task.
- At which places in the application the user can possibly make mistakes.
- How the user seeks assistance when lost.
- Whether the online information provides enough help.
- How the user reacts to the download time of a page.
- Points at which the user gets confused or even fails to complete a task.
- Number of clicks between tasks, number of seconds between clicks, and number of pages browsed.
- Whether the user felt successful in using the site.
- Feedback on navigation and other features of the site.
- Whether the user would recommend this product to friends.
- Whether the user understood the terminology.
- Ideas for improvement.
- What the user liked or disliked about the website, and why.
Navigation Testing:
- Easy and quick access to the information.
- Logical hierarchy of the pages.
- Confirmation on each page telling the user where they are at any point.
- Facility to return to the previous state or the home page.
- Consistent look and layout of every page.
- Moving to and from pages.
- Scrolling through the pages.
- Clicking on images and thumbnails to make sure they work.
- Testing all the links (both internal and external).
- Ensuring no broken links exist.
- Proper layout under different browsers.
- Measuring the load time of every page.
- Compatibility and usage of buttons, keyboard shortcuts, and mouse actions.
Form Testing:
- Using the Tab key, the form traverses fields in the proper order, both forward and backward.
- Testing boundary values for the data input.
- Checking that the form is capable of trapping invalid data correctly (especially date and numeric formats).
- The form is capable of updating the information correctly.
- Tool tip text messages displayed for the form are proper and not misleading.
- Objects are aligned in the form properly, and the labels displayed are not misleading.
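For the boundary-value and invalid-data checks above, a minimal sketch (the age field and its 18-99 range are assumptions made for the example):

```python
def validate_age(raw: str) -> bool:
    """Hypothetical form field: accepts whole numbers from 18 to 99."""
    if not raw.strip().isdigit():
        return False                  # traps non-numeric or empty input
    return 18 <= int(raw.strip()) <= 99

# Boundary values: just below, on, and just above each limit,
# plus invalid-format cases.
cases = {"17": False, "18": True, "99": True, "100": False,
         "abc": False, "": False}
for value, expected in cases.items():
    assert validate_age(value) == expected, value
```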
Page Content Testing:
- All the images and graphics are displayed correctly.
- All contents are present as per the requirements.
- Page structures are consistent across browsers.
- Critical pages contain the same contents across browsers.
- All parts of a table or form are present in the right place and in the right order.
- Links to the relevant contents inside or outside the page are appropriate.
- Web pages are visually appealing.
- Checking that the vital information presented in the page does not change across different builds.
Configuration and Compatibility Testing:
- Is the user behind a firewall?
- Does the user connect to the application server through a load balancer?
- Does the browser accept cookies?
- Are high security settings enabled?
- What technologies are the developers of the web pages using?
- Are secure blocking tools being used?
Performance Testing:
- High activity volume at launch.
- Time of day.
- Activity spikes due to marketing promotions.
- Bottlenecks due to hundreds of users on a network.
- Download time.
- Usage patterns.
- Think time.
- User arrival rates.
- Client platforms.
- Internet access speed.
- Abandonment rates.
Load Testing:
Strategy:
- Understand the load requirements:
- Number of user hits per day/week/month
- Total concurrent users (worst-case scenario)
- Peak request rates (number of pages served per second)
- Identify the tools and their limitations.
- Generate enough users and transactions to assess capacity.
- Create a baseline criterion.
- Check the system behavior in multiple sessions.
- Identify any other applications possibly running on the client system or server.
- Execute the test multiple times.
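A minimal way to generate concurrent users against a page, sketched with Python's standard library (the URL and user count are placeholders; real load tests usually rely on dedicated tools such as LoadRunner or JMeter):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"    # placeholder: page under test
CONCURRENT_USERS = 50

def one_request(_):
    """Fetch the page once and return the elapsed time in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    timings = list(pool.map(one_request, range(CONCURRENT_USERS)))

print(f"avg {sum(timings) / len(timings):.3f}s, worst {max(timings):.3f}s")
```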
Security Testing:
- What precautions exist to prevent or limit attacks from users?
- Are the browser settings set to ensure maximum security protection?
- How does the website handle access rights?
- Does the application have a virus detection mechanism?
- Does the application handle transaction tampering?
- Does the e-commerce vendor provide a mechanism to prevent credit card fraud?
- How does the website encrypt data transfers?
- How does the website authenticate users?
- Does viewing the source code disclose any vital information?
- How safe is the credit card or user information?
- Does the application allow for a file to be digitally signed?
Testing Written Test Questions
- 1 : With thorough testing it is possible to remove all defects from a program prior to delivery to the customer.
- a. True b. False
- The correct answer is b
- 2 : Which of the following are characteristics of testable software?
- a. observability b. simplicity c. stability d. all of the above
- The correct answer is d
- 3 : The testing technique that requires devising test cases to demonstrate that each program function is operational is called
- a. black-box testing
- b. glass-box testing
- c. grey-box testing
- d. white-box testing
- The correct answer is a
- 4 : The testing technique that requires devising test cases to exercise the internal logic of a software module is called
- a. behavioral testing
- b. black-box testing
- c. grey-box testing d. white-box testing
- The correct answer is d
- 5 : What types of errors are missed by black-box testing and can be uncovered by white-box testing?
- a. behavioral errors
- b. logic errors
- c. performance errors
- d. typographical errors
- e. both b and d
- The correct answer is e
- 6 : Program flow graphs are identical to program flowcharts.
- a. True b. False
- The correct answer is b
- 7 : The cyclomatic complexity metric provides the designer with information regarding the number of
- a. cycles in the program
- b. errors in the program
- c. independent logic paths in the program
- d. statements in the program
- The correct answer is c
- 8 : The cyclomatic complexity of a program can be computed directly from a PDL representation of an algorithm without drawing a program flow graph.
- a. True b. False
- The correct answer is a
- 9 : Condition testing is a control structure testing technique where the criteria used to design test cases is that they
- a. rely on basis path testing
- b. exercise the logical conditions in a program module
- c. select test paths based on the locations and uses of variables
- d. focus on testing the validity of loop constructs
- The correct answer is b
- 10 : Data flow testing is a control structure testing technique where the criteria used to design test cases is that they
- a. rely on basis path testing
- b. exercise the logical conditions in a program module
- c. select test paths based on the locations and uses of variables
- d. focus on testing the validity of loop constructs
- The correct answer is c
- 11 : Loop testing is a control structure testing technique where the criteria used to design test cases is that they
- a. rely on basis path testing
- b. exercise the logical conditions in a program module
- c. select test paths based on the locations and uses of variables
- d. focus on testing the validity of loop constructs
- The correct answer is d
- 12 : Black-box testing attempts to find errors in which of the following categories
- a. incorrect or missing functions
- b. interface errors
- c. performance errors
- d. all of the above
- e. none of the above
- The correct answer is d
- 13 : Graph-based testing methods can only be used for object-oriented systems
- a. True b. False
- The correct answer is b
- 14 : Equivalence testing divides the input domain into classes of data from which test cases can be derived to reduce the total number of test cases that must be developed.
- a. True b. False
- The correct answer is a
- 15 : Boundary value analysis can only be used to do white-box testing.
- a. True b. False
- The correct answer is b
- 16 : Comparison testing is typically done to test two competing products as part of customer market analysis prior to product release.
- a. True b. False
- The correct answer is b
- 17 : Orthogonal array testing enables the test designer to maximize the coverage of the test cases devised for relatively small input domains.
- a. True b. False
- The correct answer is a
- 18 : Test case design "in the small" for OO software is driven by the algorithmic detail of the individual operations.
- a. True b. False
- The correct answer is a
- 19 : Encapsulation of attributes and operations inside objects makes it easy to obtain object state information during testing.
- a. True b. False
- The correct answer is b
- 20 : Use-cases can provide useful input into the design of black-box and state-based tests of OO software.
- a. True b. False
- The correct answer is a
- 21 : Fault-based testing is best reserved for
- a. conventional software testing
- b. operations and classes that are critical or suspect
- c. use-case validation
- d. white-box testing of operator algorithms
- The correct answer is b
- 22 : Testing OO class operations is made more difficult by
- a. encapsulation b. inheritance c. polymorphism d. both b and c
- The correct answer is d
- 23 : Scenario-based testing
- a. concentrates on actor and software interaction
- b. misses errors in specifications
- c. misses errors in subsystem interactions
- d. both a and b
- The correct answer is a
- 24 : Deep structure testing is not designed to
- a. examine object behaviors
- b. exercise communication mechanisms
- c. exercise object dependencies
- d. exercise structure observable by the user
- The correct answer is d
- 25 : Random order tests are conducted to exercise different class instance life histories.
- a. True b. False
- The correct answer is a
- 26 : Which of these techniques is not useful for partition testing at the class level
- a. attribute-based partitioning
- b. category-based partitioning
- c. equivalence class partitioning
- d. state-based partitioning
- The correct answer is c
- 27 : Multiple class testing is too complex to be tested using random test cases.
- a. True b. False
- The correct answer is b
- 28 : Tests derived from behavioral class models should be based on the
- a. data flow diagram
- b. object-relation diagram
- c. state diagram
- d. use-case diagram
- The correct answer is c
- 29 : Client/server architectures cannot be properly tested because network load is highly variable.
- a. True b. False
- The correct answer is b
- 30 : Real-time applications add a new and potentially difficult element to the testing mix
- a. performance b. reliability c. security d. time
- The correct answer is d
Question Answers
- 1. What is Software Testing?
- Ans- To validate the software against the requirements.
- 2. What is the Purpose of Testing?
- Ans- To check whether the system meets the requirements.
- 3. What types of testing do testers perform?
- Ans- Black Box
- 4. What is the Outcome of Testing?
- Ans- A system which is bug free and meets the system requirements.
- 5. What kind of testing have you done?
- Ans- Black Box
- 6. What is the need for testing?
- Ans- To make an error-free product and reduce development cost.
- 7. What are the entry criteria for Functionality and Performance testing?
- Ans- Functional: should have stable functionality code. Performance: after system testing.
- 8. What is test metrics?
- Ans- Contains how many test cases we have executed, which in turn contains how many passed, failed, and could not be executed.
- 9. Why do you go for White box testing, when Black box testing is available?
- Ans- To check the code, and the branches and loops in the code.
- 10. What are the entry criteria for Automation testing?
- Ans- Should have stable code.
- 11. When to start and stop Testing?
- Ans- When the system meets the requirements and there is no change in functionality.
- 12. What is Quality?
- Ans- Quality consists of two parts: QA and QC. From the customer's point of view, fit for use and meeting user requirements is quality.
- 13. What is a Baseline document? Can you name any two?
- Ans- A document which is a standard, like a test plan format or a checklist for system testing.
- 14. What is verification?
- Ans- To review the documents.
- 15. What is validation?
- Ans- To validate the system against the requirements.
- 16. What is quality assurance?
- Ans- Bug prevention activity is called QA.
- 17. What is quality control?
- Ans- Bug detection activity is called QC.
- 18. What is SDLC and TDLC?
- Ans- SDLC- Software Development Life Cycle and Testing is a part of it. TDLC-Test Development Life Cycle.
- 19. What are the Qualities of a Tester?
- Ans- Should have the ability to find hidden bugs as early as possible in the SDLC.
- 20. When to start and stop Testing?
- Ans- Start: at the time of requirement gathering. Stop: when the system meets the requirements.
- 21. What are the various levels of testing?
- Ans- Unit, Integration, System and Acceptance testing.
- 22. What are the types of testing you know and have experienced?
- Ans- Black Box, Functional Testing, System Testing, GUI Testing, etc.
- 23. After completing testing, what would you deliver to the client?
- Ans- Testware
- 24. What is a Test Bed?
- Ans- The test data is called the test bed.
- 25. What are Data Guidelines?
- Ans- Guidelines which are to be followed for the preparation of test data.
- 26. Why do you go for Test Bed?
- Ans- To validate the system against the required input.
- 27. What is Severity and Priority, and who decides what?
- Ans- Severity: how severe the bug is for the application (e.g., critical); usually assessed by the tester. Priority: how urgently the functionality in which the bug occurs needs fixing; usually set by the lead.
- 28. Can Automation testing replace manual testing? If so, how?
- Ans- No, not entirely: when there are many modifications in functionality, it becomes near impossible to keep the automated scripts updated.
- 29. What is a test condition?
- Ans- Logical input data against which we validate the system.
- 30. What is test data?
- Ans- Input data against which we validate the system.
- 31. What is an Inconsistent bug?
- Ans- A bug which is not reproducible.
- 32. What is the difference between Re-testing and Regression testing?
- Ans- Regression: check that changes in the code have not affected the working functionality.
- Retesting: testing the functionality of the application again.
Test Case
- Test Case: A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly and meets the user requirements. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
- The test case document contains the following fields:
- Test case id: This is an id given to every test case, and it should be unique.
- Test case description: This field contains a brief description of what we are going to test in the application.
- Test input: This contains the input that we are going to give to test the application or system.
- Test action: This field explains the action to be performed to execute the test case.
- Expected results: This field contains a description of what the tester should see after all test steps have been completed.
- Actual results: This field contains a brief description of what the tester saw after the test steps were completed.
- Result: This is often recorded as either pass or fail. If the expected result and the actual result are the same, the result is pass; otherwise the result is fail.
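Put together, the fields above map naturally onto a record like this sketch (the names are our own, not a standard template):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str         # unique id
    description: str          # what is being tested
    test_input: str           # input given to the system
    action: str               # steps performed to execute the test
    expected_result: str      # what the tester should see
    actual_result: str = ""   # what the tester actually saw

    @property
    def result(self) -> str:
        """Pass if expected and actual results match, else Fail."""
        return "Pass" if self.actual_result == self.expected_result else "Fail"

tc = TestCase("TC_001", "Login with valid credentials",
              "user=admin, password=secret", "Submit the login form",
              expected_result="Home page is displayed")
tc.actual_result = "Home page is displayed"
print(tc.result)   # Pass
```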
Test Plan Contents
Contents of a test plan:
Introduction: This section contains the purpose and objectives of the test plan.
Scope:
(A) Items to be tested: Refer to the functional requirements that specify the features and functions to be tested.
(B) Items not to be tested: List the features and functions that will not be covered in this test plan, and briefly identify the reasons for leaving them out.
Testing Process
1. Requirement Analysis
Testing should begin in the requirements phase of the software development life cycle (SDLC). The actual requirements should be understood clearly with the help of Requirement Specification documents, Functional Specification documents, Design Specification documents, Use Case documents, etc.
During requirement analysis the following should be considered:
- Are the definitions and descriptions of the required capabilities precise?
- Is there a clear delineation between the system and its environment?
- Can the requirements be realized in practice?
- Can the requirements be tested effectively?
2. Test Planning
During this phase the Test Strategy, Test Plan, and Test Bed will be created. A test plan is a systematic approach to testing a system or software. The plan should identify:
- Which aspects of the system should be tested?
- Criteria for success.
- The methods and techniques to be used.
- Personnel responsible for the testing.
- Different test phases and test methodologies.
- Manual and automation testing.
- Defect management, configuration management, risk management, etc.
- Evaluation and identification of test and defect tracking tools.
3. Test Environment Setup
During this phase the required environment setup will be done. The following should also be taken into account:
- Network connectivity.
- Installation and configuration of all software/tools.
- Coordination with vendors and others.
4. Test Design
During this phase:
- Test scenarios will be identified.
- Test cases will be prepared.
- Test data and test scripts will be prepared.
- Test case reviews will be conducted.
5. Test Automation
In this phase the requirements for automation will be identified, along with the tools to be used. Designing the framework, scripting, script integration, and review and approval are undertaken in this phase.
6. Test Execution and Defect Tracking
Testers execute the software based on the plans and tests, and report any errors found to the development team. In this phase:
- Test cases will be executed.
- Test scripts will be run.
- Test results will be analyzed.
- Defects will be raised and tracked to closure.
7. Test Reports
Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release. In this phase:
- Test summary reports will be prepared.
- Test metrics and process improvements will be made.
- The build is released.
- Acceptance is received.
Software Development Life Cycle
Testing Techniques
- Black Box Testing: Testing of a function without knowing the internal structure of the program.
- White Box Testing: Testing of a function with knowledge of the internal structure of the program.
- Regression Testing: To ensure that code changes have not had an adverse effect on other modules or on existing functions.
- Functional Testing: Study the SRS and identify unit functions. For each unit function: take each input function, identify equivalence classes, form test cases, form test cases for boundary values, and form test cases for error guessing. Then form a unit function vs. test cases cross-reference matrix and find the coverage. (A worked sketch of equivalence classes and boundary values appears after this list.)
- Unit Testing:
- The most 'micro' scale of testing, used to test particular functions or code modules.
- Typically done by the programmer and not by testers.
- Unit: the smallest testable piece of software.
- A unit can be compiled/assembled/linked/loaded and put under a test harness.
- Unit testing is done to show that the unit does not satisfy the functional specification and/or that its implemented structure does not match the intended design structure.
- Integration Testing: Integration is a systematic approach to build the complete software structure specified in the design from unit-tested modules. Integration is performed in two ways, called Pre-test and Pro-test.
- Pre-test: the testing performed in the module development area is called Pre-test. The Pre-test is required only if development is done in a module development area.
- System Testing:
- A system is the biggest component.
- System testing is aimed at revealing bugs that cannot be attributed to a component as such, but to inconsistencies between components or to planned interactions between components.
- Concern: issues and behaviors that can only be exposed by testing the entire integrated system (e.g., performance, security, recovery).
- Volume Testing: The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries, such as across servers and across disk partitions on one server.
- Stress Testing: This refers to testing system functionality while the system is under unusually heavy or peak load; it is similar to the validation testing mentioned previously but is carried out in a "high-stress" environment. This requires that you make some predictions about the expected load levels of your web site.
- Usability testing: Usability means that systems are easy and fast to learn, efficient to use, easy to remember, cause no operating errors and offer a high degree of satisfaction for the user. Usability means bringing the usage perspective into focus, the side towards the user.
- Security testing: If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these and also test your site's overall protection against unauthorized internal or external access.
- Alpha testing: Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
- Beta testing: Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers.
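As promised in the Functional Testing item above, here is a worked sketch of equivalence classes, boundary values, and error guessing; the discount rule is invented for illustration:

```python
def discount(quantity: int) -> float:
    """Hypothetical unit function: 0% below 10 items, 10% for 10-99,
    20% for 100 or more. Quantity must be at least 1."""
    if quantity < 1:
        raise ValueError("quantity must be >= 1")
    if quantity < 10:
        return 0.0
    if quantity < 100:
        return 0.10
    return 0.20

# Equivalence classes: one representative per class is enough.
assert discount(5) == 0.0      # class 1..9
assert discount(50) == 0.10    # class 10..99
assert discount(500) == 0.20   # class 100+

# Boundary values: test on and around every class edge.
for qty, expected in [(1, 0.0), (9, 0.0), (10, 0.10), (99, 0.10), (100, 0.20)]:
    assert discount(qty) == expected

# Error guessing: invalid input should be rejected.
try:
    discount(0)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```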
Why Testing?
- To unearth and correct defects.
- To detect defects early and to reduce the cost of defect fixing.
- To ensure that the product works as the user expects it to.
- To avoid users detecting problems.
What is Testing?
- An examination of the behavior of a program by executing it on sample data sets.
- Testing comprises a set of activities to detect defects in a produced material.
- To unearth and correct defects.
- To detect defects early and to reduce the cost of defect fixing.
- To avoid users detecting problems.
- To ensure that the product works as users expect it to.
Testing Definitions
- Agile Testing : Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.
- Alpha Testing: Early testing of a software product conducted by selected customers.
- Ad Hoc Testing : Ad-hoc testing is the interactive testing process where developers invoke application units explicitly, and individually compare execution results to expected results.
- Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
- Beta Testing: Testing of a pre-release version of a software product, conducted by customers.
- Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
- Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
- Boundary Testing: Test which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).
- Branch Testing: Testing in which all branches in the program source code are tested at least once.
- Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
- Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single thread code and locking semaphores.
- Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
- Dependency Testing : Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
- Depth Testing : A test that exercises a feature of a product in full detail.
- Endurance Testing : Checks for memory leaks or other problems that may occur with prolonged execution.
- End-to-End testing : Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
- Exhaustive Testing : Testing which covers all combinations of input values and preconditions for an element of the software under test.
- Gorilla Testing: Testing one particular module or functionality heavily.
- Loop Testing : A white box testing technique that exercises program loops.
- Path Testing : Testing in which all paths in the program source code are tested at least once.
- Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.
- Structural Testing : Testing based on an analysis of internal workings and structure of a piece of software.
- Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
- Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
- Volume Testing: Testing which confirms that any values that may become large over time can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
- Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.