Quality Related Questions

  • 1. Quality means
  • a) conformance to requirements
  • b) meeting customer needs
  • c) both
  • d) none
  • 2. AQL stands for
  • a) Allowable Quality Level
  • b) Allocated Quality Level
  • c) Acceptable Quality Level
  • d) Allowed Quality Level
  • 3. For quality to happen, there must be well-defined standards and procedures which are followed. (True/False)
  • 4. Many technical personnel believe that standards inhibit their creativity. (True/False)
  • 5. Achieving quality (defect-free products and services) is easy. (True/False)
  • 6. The challenge of making quality happen is
  • a) a minuscule challenge
  • b) a monumental challenge
  • 7. Accomplishing quality requires "a thought revolution by management." Who stated this?
  • a) Dr. Malcolm Baldrige
  • b) Dr. Kaoru Ishikawa
  • c) Bill Gates
  • d) William J Clinton
  • 8. Taylor approach refers to
  • a) Engineers create work standards and specifications. Workers merely follow.
  • b) Engineers create work standards and specifications. Both engineers and workers follow.
  • c) Engineers create work standards and specifications and follow them.
  • 9. In Zero defect movement all responsibilities and defects are borne by the workers.
  • (True/False)
  • 10. The word "kick-off" is used in
  • a) Zero Defect Movement
  • b) Deming's Circle
  • c) QAI's Quality Improvement Model
  • 11. Dr. W. Edwards Deming has stated that it takes ____________ years to change a culture from an emphasis on productivity to an emphasis on quality.
  • (10 years/20 years/2 to 5 years/none of these)
  • 12. Quality means meeting requirements. This is ____________view.
  • (producer's /customer's )
  • 13. Quality means fit for use. This is ________________view.
  • (producer's /customer's )
  • 14. Which one of the following definitions of quality is more important?
  • a) Quality means meeting requirements.
  • b) Quality means fit for use.
  • 15. Quality control is a ______________ function. (line/staff)
  • 16. Quality assurance is a ____________ function. (line/staff)
  • 17. By what other two names is the PDCA process referred to?
  • 18. What does "Going Around the PDCA Circle" connote?
  • 19. The number of quality principles stated by Dr. W. Edwards Deming is ____________. (8/10/12/14/16)
  • 20. For quality to happen it is not necessary that all of Deming's principles are used concurrently. (True/False)
  • Hi friends, try to find the answers; the answers will be given in the next post.

Levels of testing

We divide testing into four levels: Unit Testing, Integration Testing, System Testing, and Acceptance Testing.
Unit testing:-
Generally, unit testing begins once the generated code has been compiled. The unit test is white-box oriented, and the steps below can be conducted in parallel for multiple components.
1. The module Interface is tested to ensure that information properly flows into and out of the program unit under test.
2. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution.
3. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit and restrict processing.
4. All statements are executed at least once, and error-handling paths are tested.
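As a small illustration of steps 3 and 4 above, here is a minimal unit-test sketch in Python; the seats_available function and its limits are invented purely for the example.

import unittest

def seats_available(booked, capacity):
    """Hypothetical unit under test: returns remaining seats, rejecting bad input."""
    if capacity <= 0:
        raise ValueError("capacity must be positive")
    if booked < 0 or booked > capacity:
        raise ValueError("booked must be between 0 and capacity")
    return capacity - booked

class SeatsAvailableTest(unittest.TestCase):
    def test_boundary_conditions(self):
        # Boundary conditions: an empty unit and a completely full unit.
        self.assertEqual(seats_available(0, 100), 100)
        self.assertEqual(seats_available(100, 100), 0)

    def test_error_handling_paths(self):
        # Error-handling paths: invalid capacity and out-of-range bookings.
        with self.assertRaises(ValueError):
            seats_available(5, 0)
        with self.assertRaises(ValueError):
            seats_available(101, 100)

if __name__ == "__main__":
    unittest.main()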
Integration testing:-
Integration testing is a systematic technique for constructing the program structure. It is performed after unit testing. Common strategies include:
Top down:- Top-down integration begins with the main routine and one or two immediate subordinate routines in the system structure. Modules are integrated as they are developed, and top-level interfaces are tested first.
Bottom up:- Bottom-up integration is the traditional strategy used to integrate the components of a software system into a functioning whole, starting from the lowest-level modules.
Regression testing:- Retesting the already-tested modules when new modules are added. Regression testing is an important strategy for reducing side effects.
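To make the top-down idea concrete, here is a minimal Python sketch in which a lower-level module that has not yet been integrated is replaced by a stub; all names are invented for the example. Bottom-up integration would instead exercise the low-level modules first through small driver programs.

def get_fare_stub(route):
    # Stub standing in for the real fare-lookup module, which is not yet integrated.
    return 100.0

def book_ticket(route, passengers, get_fare=get_fare_stub):
    """Top-level routine under test; the lower-level fare lookup is injected."""
    if passengers <= 0:
        raise ValueError("passengers must be positive")
    return get_fare(route) * passengers

# Integration test of the top-level routine while the real module is stubbed out.
assert book_ticket("DEL-BOM", 3) == 300.0

# Later, when the real fare module is ready, it replaces the stub:
# book_ticket("DEL-BOM", 3, get_fare=real_fare_module.get_fare)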
System Level Testing :
System Testing is the third level of testing. At this level we check the functionality of the complete application.
Performance testing: - Performance testing is designed to test the run time performance of software or hardware.
Recovery testing:- A system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, re-initialization, checkpointing, data recovery, and restart are evaluated for correctness.
Security Testing:- Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.
Acceptance Testing:- When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements. Conducted by the end user rather than software engineers, an acceptance test can range from an informal test drive to a planned and systematically executed series of tests. If software is developed as a product to be used by many customers, it is impractical to perform acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.

Web Testing Challenges

  • Understanding the Web test process is essential for deciding how to proceed with the selection of a Web test process, automated test tools, and methodologies.

  • Following are several challenges that need to be considered when deciding on the Web process that is most applicable for your business:
  • The Web is in a state of constant change. The developer and tester need to understand how changes will affect their development and the Web site test process. As technology changes, testers will need to understand how this will affect them and how they will handle their testing responsibilities.

  • When setting up the test scenarios, the tester needs to understand how to implement different scenarios that will meet different types of business requirements. For example, is a tester testing a site with graphic user interface (GUI) buttons and text boxes, or testing HyperText Markup Language (HTML) code? Simulating user actions such as pressing buttons and inputting different values will verify whether the resulting calculations are correct.
  • The test environment can be a difficult part of the setup for the tester.
  • You need to be aware of all of the different components that make up the environment; the networking piece can be especially difficult to simulate.
  • The following considerations need to be addressed:
  • Multiple server tiers
  • Firewalls
  • Databases
  • Database servers
  • In the test environment, it is important to know how the different components will interact with each other.
  • When setting up the Web testing environment, special consideration should be given to how credit card transactions are handled, carried out, and verified. Because testers are responsible for setting up the test scenarios, they will need to be able to simulate the quantity of transactions that are going to be processed on the Web site (a small simulation sketch follows at the end of this section).
  • Security is a constant concern for businesses on the Internet as well as for developers and testers. There are hackers who enjoy breaking the security of a Web site.
  • Web-based applications present new challenges, both for developers and testers. These challenges include:
  • 1. Short release cycles
  • 2. Constantly changing technology
  • 3. Possibly huge numbers of users during the initial Web site launch
  • 4. Inability to control the user's running environment
  • 5. Twenty-four-hour availability of the Web site
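  To make the transaction-volume challenge above concrete, here is a minimal Python sketch that fires a batch of concurrent requests at a site and summarises the success rate and response times. The URL and the use of a plain GET are assumptions for illustration; a real test would drive the site's actual transaction flow (forms, payments, and so on).

import concurrent.futures
import time
import urllib.request

URL = "http://localhost:8080/checkout"  # hypothetical endpoint of the site under test

def one_transaction(_):
    start = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=10) as response:
            ok = response.status == 200
    except Exception:
        ok = False  # count any failure against the success rate
    return ok, time.time() - start

# Simulate 50 transactions with 10 of them in flight at any one time.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(one_transaction, range(50)))

successes = sum(1 for ok, _ in results if ok)
timings = [t for _, t in results]
print(f"{successes}/50 succeeded, avg {sum(timings)/len(timings):.2f}s, worst {max(timings):.2f}s")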

WinRunner Related Questions


  • Q: What is contained in the GUI map?
  • Ans: WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file has a logical name and a physical description. There are 2 types of GUI Map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
  • Q: How Win Runner identifies the GUI objects?
  • Ans: Win Runner identifies the objects based on their Logical name & Physical properties.
  • Q: What browsers are supported by Win Runner 7.x?
  • Ans:Win Runner 7.x supports Internet Explorer 4.x-5.5, Netscape Navigator 4.0-6.1 (excluding versions 6.0 and 6.01) and AOL 5 and 6.
  • Q: What is GUI Spy?
  • Ans: GUI Spy is an integrated tool for spying on standard, ActiveX and Java controls. It displays the properties of standard controls and the properties and methods of ActiveX and Java controls. You can copy and paste functions for activating Java methods from the GUI Spy into your test script.
  • Q: What is the use of GUI Map File per Test mode?
  • Ans: This mode automatically manages GUI map files, so we do not have to load or save GUI map files in our tests. GUI map files per test can be combined into the Global GUI map file if needed.
  • Q:What add-ins are available for Win Runner 7.x?
  • Ans:Add-ins are available for Java, ActiveX, WebTest, Siebel, Terminal Emulator, Forte, Oracle and PowerBuilder.
  • Q:Can WR automatically back up test scripts?
  • Ans:Yes, Win Runner 7.x can automatically create a backup copy of your test script at intervals you specify.
  • Q: What are the different run modes?
  • Ans: Three modes for running test:
  • Verify (default): To check your application against expected results. Win Runner compares the current response of the application to its expected response. Any discrepancies between the current and expected response are captured and saved as verification results.
  • Debug: The debug mode helps you identify bugs in test scripts. Running a test in the debug mode is the same as running a test in the Verify mode, except that debug results are always saved in the debug directory.
  • Update: The update mode is used to update the expected results of a test.
  • Q: What is GUI checkpoint?
  • Ans: GUI checkpoints allow us to verify the current state or attributes of GUI objects. When we insert a GUI checkpoint in a script, WinRunner captures the current values of the object properties and saves them in the expected results directory (exp) of the test. When we run the test, WinRunner compares the current state of the object in the application to the expected state and detects and reports any mismatches.
  • Q: When we need to update the GUI map?
  • Ans: We need to update the GUI map when objects in the application have changed. This usually happens when a new major version of the application is released.
  • Q:What is the need for Data Driven Tests?
  • Ans: ‘Parameterizing’ the test allows us to run the same test with different data each time. In addition, the test is expandable and easier to maintain.
  • Q:What is the purpose of the set_window function?
  • Ans:The set_window function sets the focus of the window in the application as well as sets the scope of the window in the GUI map.
  • Q: What is the difference between call() and load() function?
  • Ans: The call statement invokes one test from within another test script, whereas load loads a compiled module into memory so that its functions can be used.
  • Q:What is compiled module? Why do you create a complied module?
  • Ans: 1. A compiled module is a library of frequently used functions. We can save user-defined functions in a compiled module and then call them from test scripts.
  • 2. Compiled modules improve the organization and performance of tests.
  • 3. Compiled modules are debugged before use, so they require less error checking.
  • 4. Calling a function that is already compiled is significantly faster than interpreting a function in a test script.
  • 5. Compiled modules do not support analog recording or checkpoints.
  • Q: How do you create user-defined functions?
  • Ans: User-defined functions enhance the efficiency and reliability of test scripts. An easy way to create a function is:
  • 1. Create the process by recording the TSL functions
  • 2. Enclose it into the function header
  • 3. Replace values with parameters
  • 4. Declare local variable
  • 5. Handle errors
  • Q: What is database checkpoint?
  • Ans:Database checkpoint is used to check the contents of database in different versions of the application.
  • Q:What do Runtime Database Record Checkpoints do?
  • Ans: Runtime Database Record Checkpoints enable you to check that your application inserts, deletes, updates or retrieves data in a database correctly. By mapping application controls to database fields, you can check that the values in your application are correctly read from or written to the matching database fields when you run your test.
  • Q:What is Startup script? What is role of Startup script?
  • Ans: A startup script is a test script that is automatically run each time we start WinRunner. We can create startup tests that load the GUI map and compiled modules, configure recording options, and start the AUT.
  • Q: What is Function Generator and how it is used?
  • Ans: In Function Generator functions are grouped in categories according to the object class (list, button) or the type of function (input/output, system, file, etc).
  • In Function Generator we choose a function, then expand the dialog box (by pressing the Args>> button) to fill in the argument values and paste it to script.
  • Q: If you want to run the same script 100 times, what is the syntax?
  • Ans: for (i = 1; i <= 100; i++) { TSL statements }
  • Q:What is a virtual object?How do you handle Virtual Objects in WinRunner?
  • Ans: Our applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these objects using win_mouse_click statements. We can define these objects as virtual objects and instruct WinRunner to treat them as GUI objects when we record or run the tests. Using the Virtual Object wizard we can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.
  • Q:What are the different Checkpoints that you can insert in a WinRunner Script?
  • Ans: Four types of checkpoints can be added to any WinRunner script.
  • 1. GUI Checkpoint
  • 2. Bitmap Checkpoint
  • 3. Database Checkpoint
  • 4. Text Checkpoint (only for Web scripts)
  • Q:How do you check for the database table contents?
  • Ans:Using ‘Database checkpoint’
  • Q:How do you handle Web Exceptions?
  • Ans:We can instruct WinRunner to handle the appearance of specific dialog box in the web site during the test run. WinRunner contains a list of exceptions that it supports in the Web Exception Editor. We can modify the list and configure additional exceptions that we would like WinRunner to support.
  • Q:What is the difference between Main Test and a Compile Module file?
  • Ans: The main test contains the TSL script to test the AUT. A compiled module is a library of frequently used functions. We can save user-defined functions in a compiled module and then call them from the main test scripts.
  • Q:How do you start client/server applications from the script?
  • Ans: Using the following TSL function: invoke_application ( file, command_option, working_dir, show );
  • Q:When do you run a test in batch mode?
  • Ans:Batch testing is execution of a suite of test scripts towards an overall testing goal. We need to run a batch test when we want to test the overall AUT.

Win Runner Navigation


    Using Rapid Test Script wizard

    • Start->Program Files->WinRunner->WinRunner
    • Select the Rapid Test Script Wizard (or) Create->Rapid Test Script Wizard
    • Click the Next button on the Welcome to Script Wizard screen
    • Select the hand icon, click on the application window, and click the Next button
    • Select the tests and click Next button
    • Select Navigation controls and Click Next button
    • Set the Learning Flow (Express or Comprehensive) and click the Learn button
    • Select start application YES or NO, then click Next button
    • Save the Startup script and GUI map files, click Next button
    • Save the selected tests, click Next button
    • Click Ok button
    • The script will be generated; then run the scripts.
    • Run->Run from Top. To find the results of each script, select Tools->Text Report in the WinRunner test results.

    Using GUI-Map Configuration Tool:

    • Open an application.
    • Select Tools->GUI Map Configuration; a window pops up.
    • Click ADD button; Click on hand icon.
    • Click on the object, which is to be configured. A user-defined class for that object is added to list.
    • Select User-defined class you added and press ‘Configure’ button.
    • Mapped to Class: select a corresponding standard class from the combo box.
    • You can move properties from Available Properties to Learned Properties by selecting the Insert button.
    • Select the Selector and recording methods.
    • Click Ok button
    • Now you will observe WinRunner identifying the configured objects.

    Using Record-Context Sensitive mode:

    • Create->Record context Sensitive
    • Select start->program files->Accessories->Calculator
    • Do some action on the application.
    • Stop recording
    • Run from Top; Press ‘OK’.

    Using Record-Analog Mode:

    • Create->Insert Function->from function generator
    • Function name:(select ‘invoke_application’ from combo box).
    • Click Args button; File: mspaint.
    • Click on ‘paste’ button; Click on ‘Execute’ button to open the application; Finally click on ‘Close’.
    • Create->Record-Analog .
    • Draw some picture in the paintbrush file.
    • Stop Recording
    • Run->Run from Top; Press ‘OK’.

    GUI CHECK POINTS-Single Property Check:

    • Create->Insert function->Function Generator-> (Function name: invoke_application; File: Flight 1a)
    • Click on 'paste' and click on 'execute' & close the window.
    • Create->Record Context sensitive.
    • Do some operations & stop recording.
    • Create->GUI Check Point->For single Property.
    • Click on some button whose property to be checked.
    • Click on paste.
    • Now close the Flight1a application; Run->Run from top.
    • Press ‘OK’ it displays results window.
    • Double click on the result statement. It shows the expected value & actual value window.

    GUI CHECK POINTS-For Object/Window Property:

    • Create->Insert function->Function Generator-> (Function name: invoke_application; File: Flight 1a)
    • Click on 'paste' and click on 'execute' & close the window.
    • Create->Record Context sensitive.
    • Do some operations & stop recording.
    • Create->GUI Check Point->Object/Window Property.
    • Click on some button whose property to be checked.
    • Click on paste.
    • Now close the Flight 1a application; Run->Run from top.
    • Press ‘OK’ it displays results window.
    • Double click on the result statement. It shows the expected value & actual value window.

    GUI CHECK POINTS-For Multiple Objects:

    • Create->Insert function->Function Generator-> (Function name: invoke_application; File: Flight 1a)
    • Click on 'paste' and click on 'execute' & close the window.
    • Create->Record Context sensitive.
    • Do some operations & stop recording.
    • Create->GUI Check Point->For Multiple Object.
    • Click on some button whose property to be checked.
    • Click on Add button.
    • Click on few objects & Right click to quit.
    • Select each object & select corresponding properties to be checked for that object: click ‘OK’.
    • Run->Run from Top. It displays the results.

    BITMAP CHECK POINT: For Object/Window.

    • Create->Insert function->Function Generator-> (Function name: invoke_application; File: Flight 1a)
    • Click on 'paste' and click on 'execute' & close the window.
    • Create->Record Context sensitive.
    • Enter the Username, Password & click ‘OK’ button
    • Open the Order in Flight Reservation Application
    • Select File->Fax Order& enter Fax Number, Signature
    • Press ‘Cancel’ button.
    • Create->Stop Recording.
    • Then open Fax Order in Flight Reservation Application
    • Create->Bitmap Check->For obj.window;
    • Run->run from top.
    • The test fails and you can see the difference.

    For Screen Area:

    • Open new Paint Brush file;
    • Create->Bitmap Check Point->From Screen Area.
    • Paint file pops up; select an image with cross hair pointer.
    • Do slight modification in the paint file(you can also run on the same paint file);
    • Run->Run from Top.
    • The test fails and you can see the difference of images.

    DATABASE CHECK POINTS - Using Default Check (for MS-Access only)

    • Create->Database Check Point->Default check
    • Select the Specify SQL Statement check box
    • Click Next button
    • Click Create button
    • Type New DSN name and Click New button
    • Then select a driver for which you want to set up a database & double click that driver
    • Then select the Browse button, retype the same DSN name, and click the Save button.
    • Click Next button & click Finish button
    • Select the Database button & set the path of your database
    • Click the 'OK' button & then click the 'OK' button of your DSN window
    • Type the SQL query in the SQL box
    • Then click the Finish button. Note: the same process applies to a Custom Check Point.

    Runtime Record Check Point.

    • Repeat the above 10 steps.
    • Type a query over two related tables in the SQL box. Ex: select Orders.Order_Number, Flights.Flight_Number from Orders, Flights where Flights.Flight_Number=Orders.Flight_Number.
    • Select Finish Button
    • Select hand Icon button& select Order No in your Application
    • Click Next button.
    • Select the hand icon button & select Flight No in your application
    • Click Next button
    • Select any one of the following check boxes: 1. One match record, 2. One or more match records, 3. No match record
    • Select the Finish button; the script will be generated.

    Synchronization Point - For Obj/Win Properties:

    • Open start->Programs->Win Runner->Sample applications->Flight1A.
    • Open winrunner window
    • Create->Record-Context Sensitive
    • Insert information for a new order & click on the "Insert Order" button
    • After inserting, click on the "Delete" button
    • Stop recording & save the file.
    • Run->Run from top: Gives your results.

    Without Synchronization:

    • Settings->General Options->click on the "Run" tab. Set the "Timeout for checkpoints & CS statements" value to 10000. Follow steps 1 to 7 above; the test displays an error message that the "Delete" button is disabled.

    With Synchronization:

    • Keep the timeout value at 1000 only.
    • Go to the test script file and place the insertion point after the "Insert Order" button press statement.
    • Create->Synchronization->For Obj/Window Property
    • Click on the "Delete Order" button & select the enabled property; click on "paste".
    • It inserts the Synch statement.

    For Obj/Win Bitmap:

    • Create-> Record Context Sensitive.
    • Insert information for new order & click on "Insert order" button
    • Stop recording & save the file.
    • Go to the TSL script and place the insertion point just before the data is inserted into "date of flight".
    • Create->Synchronization->For Obj/Win Bitmap is selected.
    • (Make sure the Flight Reservation window is empty) click on the "date of flight" text box
    • Run->Run from Top; results are displayed. Note: (keep the "Timeout value" at 1000)

    Get Text: From Screen Area: (Note: checking whether the Order No increases whenever an order is created)

    • Open Flight1A; Analysis->graphs(Keep it open)
    • Create->get text->from screen area
    • Capture the No of tickets sold; right click & close the graph
    • Now insert a new order and open the graph (Analysis->graphs)
    • Go to Winrunner window, create->get text->from screen area
    • Capture the No of tickets sold and right click; close the graph
    • Save the script file
    • Add the following script: if (text2 == text1) tl_step("text comparison", 0, "updated"); else tl_step("text comparison", 1, "not updated");
    • Run->Run from top to see the results.

    Get Text: For Object/Window:

    • Open a "Calc" application in two windows (Assuming two are two versions)
    • Create->get text->for Obj/Window
    • Click on some button in one window
    • Stop recording
    • Repeat steps 1 to 4 to capture the text of the same object from the other "Calc" application.
    • Add the following TSL (note: change "text" to text1 & text2 in the recorded statements): if (text1 == text2) report_msg("correct " & text1); else report_msg("incorrect " & text2);
    • Run & see the results

    Using GUI-Spy:

    Using the GUI Spy, you can view and verify the properties of any GUI object in the selected application

    • Tools->Gui Spy…
    • Select Spy On ( select Object or Window)
    • Select Hand icon Button
    • Point the Object or window & Press Ctrl_L + F3.
    • You can view and verify the properties.

    Using Virtual Object Wizard:

    Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name

    • Tools->Virtual Object Wizard.
    • Click Next Button
    • Select standard class object for the virtual object Ex: class:Push_button
    • Click Next button
    • Click Mark Object button
    • Drag the cursor to mark the area of the virtual object.
    • Click Next button
    • Assign the logical name; this name will appear in the test script when you record operations on the object.
    • Select Yes or No check box
    • Click Finish button
    • Go to winrunner window & Create->Start Recording.
    • Do some operations
    • Stop Recording

    Using Gui Map Editor:

    Using the GUI Map Editor, you can view and modify the properties of any GUI object in the selected application. To modify an object’s logical name in a GUI map file:

    • Tools->GUI Map Editor
    • Select Learn button
    • Select the application. A WinRunner message box asks "do you want to learn all objects within the window"; select the 'Yes' button.
    • Select a particular object and select the Modify button
    • Change the Logical Name& click ‘OK’ Button
    • Save the File

    To find an object in a GUI map file:

    • Choose Tools > GUI Map Editor.
    • Choose View > GUI Files.
    • Choose File > Open to load the GUI map file.
    • Click Find. The mouse pointer turns into a pointing hand.
    • Click the object in the application being tested. The object is highlighted in the GUI map file.

    To highlight an object in a Application:

    • Choose Tools > GUI Map Editor.
    • Choose View > GUI Files.
    • Choose File > Open to load the GUI map file.
    • Select the object in the GUI map file
    • Click Show. The object is highlighted in the Application.

    Data Driver Wizard

    • Start->Programs->WinRunner->Sample applications->Flight 1A
    • Open Flight Reservation Application
    • Go to Winrunner window
    • Create->Start recording
    • Select File->New Order, fill in the fields, and click the Insert Order button
    • Tools->Data Table; enter different customer names in one column and numbers of tickets in another column.
    • By default the two column names are Noname1 and Noname2.
    • Tools->Data Driver Wizard
    • Click Next button &select the Data Table
    • Select Parameterize the test; select Line by Line check box
    • Click Next Button
    • Parameterize each specific value with the column names of the table; repeat for all values
    • Finally click the Finish button.
    • Run->Run from top;
    • View the results.
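    The Data Driver Wizard above is WinRunner-specific, but the underlying idea is simply to repeat the same steps once per row of a data table. As a generic illustration of that parameterization idea (not WinRunner output), here is a minimal sketch in Python using pytest, assuming pytest is available; the place_order function and the data rows are invented for the example.

import pytest

def place_order(customer, tickets):
    # Hypothetical stand-in for the "insert order" flow driven by the data table.
    if tickets < 1:
        raise ValueError("at least one ticket is required")
    return {"customer": customer, "tickets": tickets}

# Each tuple plays the role of one row in the data table.
@pytest.mark.parametrize("customer,tickets", [
    ("Smith", 1),
    ("Jones", 4),
    ("Patel", 2),
])
def test_place_order_per_row(customer, tickets):
    order = place_order(customer, tickets)
    assert order["customer"] == customer
    assert order["tickets"] == tickets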

    Merge the GUI Files:

    Manual Merge

    • Tools->Merge GUI Map Files. A WinRunner message box informs you that all open GUI maps will be closed and all unsaved changes will be discarded; click the 'OK' button.
    • Select the Manual Merge. Manual Merge enables you to manually add GUI objects from the source to target files.
    • To specify the Target GUI map file, click the browse button & select the GUI map file
    • To specify the Source GUI map file, click the Add button & select the source GUI map file.
    • Click the 'OK' button
    • The GUI Map File Manual Merge Tool opens; select objects and move them from the source file to the target file
    • Close the GUI Map File Manual Merge Tool

    Auto Merge

    • Tools->Merge GUI Map Files. A WinRunner message box informs you that all open GUI maps will be closed and all unsaved changes will be discarded; click the 'OK' button.
    • Select Auto Merge as the Merge Type. With Auto Merge, the source GUI map files are merged into the target automatically when there are no conflicts.
    • To specify the Target GUI map file, click the browse button & select the GUI map file
    • To specify the Source GUI map file, click the Add button & select the source GUI map file.
    • Click the 'OK' button. A message confirms the merge.

    Manually Retrieve the Records from the Database

    • db_connect("query1", "DSN=Flight32");
    • db_execute_query("query1", "select * from Orders", rec);
    • db_get_field_value("query1", "#0", "#0");
    • db_get_headers("query1", field_num, headers);
    • db_get_row("query1", 5, row_con);
    • db_write_records("query1", "c:\\str.txt", TRUE, 10);

    Web Test Plan Development

    The objective of a test plan is to provide a roadmap so that the Web site can be evaluated through requirements or design statements. A test plan is a document that describes objectives and the scope of a Web site project. When you prepare a test plan, you should think through the process of the Web site test. The plan should be written so that it can successfully give the reader a complete picture of the Web site project and should be thorough enough to be useful. Following are some of the items that might be included in a test plan. Keep in mind that the items may vary depending on the Web site project.
    The Web testing process spans the Web browser, the Internet, and the Web server.
    PROJECT
    • Title of the project:
    • Date:
    • Prepared by:
    PURPOSE OF DOCUMENT
    • Objective of testing: Why are you testing the application? Who, what, when, where, why, and how should be some of the questions you ask in this section of the test plan.
    • Overview of the application: What is the purpose of the application? What are the specifications of the project?
    TEST TEAM
    • Responsible parties: Who is responsible and in charge of the testing?
    • List of test team: What are the names and titles of the people on the test team?
    RISK ASSUMPTIONS
    • Anticipated risks: What types of risks are involved that could cause the test to fail?
    • Similar risks from previous releases: Have there been documented risks from previous tests that may be helpful in setting up the current test?
    SCOPE OF TESTING
    • Possible limitations of testing: Are there any factors that may inhibit the test, such as resources and budget?
    • Impossible testing: What are the considerations involved that could prevent the tests that are planned?
    • Anticipated output: What are the anticipated outcomes of the test and have they been documented for comparison?
    • Anticipated input: What inputs are needed to produce the outcomes that will be compared to the test documentation?
    TEST ENVIRONMENT: Hardware:
    • What are the operating systems that will be used?
    • What is the compatibility of all the hardware being used?
    Software:
    • What data configurations are needed to run the software?
    • Have all the considerations of the required interfaces to other systems been used?
    • Are the software and hardware compatible?
    TEST DATA
    • Database setup requirements: Does test data need to be generated, or will specific data from production be captured and used for testing?
    • Setup requirements: Who will be responsible for setting up the environment and maintaining it throughout the testing process?
    TEST TOOLS
    • Automated:Will automated tools be used?
    • Manual:Will manual testing be done?
    DOCUMENTATION
    • Test cases: Are there test cases already prepared or will they need to be prepared?
    • Test scripts: Are there test scripts already prepared or will they need to be prepared?
    PROBLEM TRACKING
    • Tools: What type of tools will be selected?
    • Processes: Who will be involved in the problem tracking process?
    REPORTING REQUIREMENTS
    • Testing deliverables: What are the deliverables for the test?
    • Retests: How will the retesting reporting be documented?
    PERSONNEL RESOURCES
    • Training:Will training be provided?
    • Implementation: How will training be implemented?
    ADDITIONAL DOCUMENTATION
    • Appendix:Will samples be included?
    • Reference materials:Will there be a glossary, acronyms, and/or data dictionary?
    Once you have written your test plan, you should address some of the following issues and questions:
    • Verify plan. Make sure the plan is workable, the dates are realistic, and that the plan is published. How will the test plan be implemented and what are the deliverables provided to verify the test?
    • Validate changes. Changes should be recorded by a problem tracking system and assigned to a developer to make revisions, retest, and sign off on changes that have been made.
    • Acceptance testing. Acceptance testing allows the end users to verify that the system works according to their expectation and the documentation. Certification of the Web site should be recorded and signed off by the end users, testers, and management.
    • Test reports. Reports should be generated and the data should be checked and validated by the test team and users.

    Understanding Software Defects


    The 13 major categories of software defects are described below:
    • User interface errors – the system presents an interface that differs from what was specified or expected.
    • Error handling – the way errors are recognized and treated may be in error.
    • Boundary-related errors – the treatment of values at the edges of their ranges may be incorrect (see the sketch after this list).
    • Calculation errors – arithmetic and logic calculations may be incorrect.
    • Initial and later states – the function fails the first time it is used but not later, or vice versa.
    • Control flow errors – the choice of what is done next is not appropriate for the current state.
    • Errors in handling or interpreting data – passing and converting data between systems (and even separate components of the system) may introduce errors.
    • Race conditions – when two events could be processed, one is always accepted prior to the other and things work fine, however eventually the other event may be processed first and unexpected or incorrect results are produced.
    • Load conditions – as the system is pushed to maximum limits problems start to occur, e.g. arrays overflow, disks full .
    • Hardware – interfacing with devices may not operate correctly under certain conditions, e.g. device unavailable .
    • Source and version control - out-of-date programs may be used where correct revisions are available.
    • Documentation – the user does not observe operation described in manuals .
    • Testing errors – the tester makes mistakes during testing and thinks the system is behaving incorrectly.
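    As a small illustration of the boundary-related and calculation categories above, the Python function below contains an off-by-one defect: a test that uses only a typical value misses it, while a test at the edge of the range exposes it (the discount rule is invented for the example).

def discount(amount):
    # Intended rule: orders of 100 or more get 10% off.
    # Boundary-related defect: '>' should be '>=', so exactly 100 gets no discount.
    if amount > 100:
        return round(amount * 0.9, 2)
    return amount

# A test case using only a typical value misses the defect...
print(discount(150))   # 135.0, as expected
# ...while a test case at the boundary exposes it.
print(discount(100))   # 100 instead of the intended 90.0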

    Testing Life Cycle – Roles and Responsibilities


    • A clear communication protocol should be defined within the testing team to ensure proper understanding of roles and responsibilities.
    • The roles chart should contain both on-site and off-shore team members.
    Test Manager
    • Single point contact between on site and offshore team
    • prepare the project plan.
    • Test management
    • Test planning
    • Interact with onsite lead, Client QA manager.
    • Team Management.
    • Work Allocation to the team.
    • Test coverage analysis.
    • Co-ordination with onsite for issue resolution.
    • Monitoring the deliverables.
    • Verify readiness of the product for release through release review.
    • Obtain customer acceptance on the deliverables
    • Performing risk analysis when required
    • Reviews and status reporting.
    • Authorize intermediate deliverables and patch release to customer.
    Test Lead
    • Resolves technical issues for the product group
    • Provide direction to the team member.
    • Perform activities for the respective product group.
    • Review and approve the test plan.
    • Review test script/code.
    • Approve completion of the integration testing.
    • Conduct system/regression test.
    • Ensure tests are conducted as per plan.
    • Reports status to the offshore test manager.
    Test Engineer
    • Development of the test cases and scripts
    • Test execution
    • Result capturing and analysis
    • Defects reporting and status reporting.

    Testing Life Cycle – Team Structure


    • An effective testing team includes a mixture of members who have Testing expertise/Tools expertise.
    • Database expertise/Domain/Technology expertise.
    • Consultants/End users.
    • The testing team must be properly structured, with defined roles and responsibilities that allow the testers to perform their function with minimal overlap.
    • There should not be any uncertainty regarding which team member should perform which duties.
    • The test manager will facilitate any resources required by the testing team.

    Iteration Model

    Spiral Model

    Prototype Model


    Waterfall Model

    The Waterfall Model is an engineering model designed to be applied to the development of software. The idea is the following: there are different stages of development, the outputs of the first stage "flow" into the second stage, these outputs "flow" into the third stage, and so on. There are usually five stages in this model of software development.
    Stages of the Waterfall Model
    Requirement analysis and planning:- In this stage the requirements of the software to be developed are established. These are usually the services it will provide, its constraints, and the goals of the software. Once these are established, they have to be defined in such a way that they are usable in the next stage. This stage is often preceded by a feasibility study, or a feasibility study is included in this stage. The feasibility study asks questions like: should we develop the software at all, and what are the alternatives? It could be called the conception of a software project and might be seen as the very beginning of the life cycle.

    V-model of SDLC

    V & V PROCESS MODEL :

    The V&V Model is the Verification & Validation Model. In this model, development and testing work proceed simultaneously. One 'V' stands for Verification and the other for Validation: along the first 'V' we follow the SDLC (Software Development Life Cycle), and along the second 'V' we follow the STLC (Software Testing Life Cycle).
    • Testing of a large system is normally done in two parts: functional verification and validation against the requirement specification, and performance evaluation against the stated requirements.
    • Testing activity is involved right from the beginning of the project.
    • Use of the V&V process model increases a development organization's rate of success in delivering the application on time and increases cost effectiveness.
    Testing Related Activities During Requirement Phase
    • Creation and finalization of testing template.
    • Creation of test plan and test strategy .
    • Capturing Acceptance criteria and preparation of acceptance test plan.
    • Capturing Performance Criteria of the software requirements.
    Testing activities in Design Phase
    • Develop test cases to ensure that product is on par with Requirement Specification document.
    • Verify Test Cases & Test Scripts by peer reviews.
    • Preparation of traceability matrix from system requirements.
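    A traceability matrix can be kept as simply as a mapping from each requirement to the test cases that cover it. A minimal sketch in Python (the requirement and test-case IDs are hypothetical):

# Traceability matrix: requirement -> covering test cases.
traceability = {
    "REQ-001 login":       ["TC-01", "TC-02"],
    "REQ-002 search":      ["TC-03"],
    "REQ-003 place order": [],  # gap: no test case covers this requirement yet
}

# Coverage check: list requirements that no test case covers.
uncovered = [req for req, cases in traceability.items() if not cases]
print("Requirements without coverage:", uncovered)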
    Testing activities in Unit Testing Phase
    • Unit test is done for validating the product with respect to client requirements.
    • Testing can be in multiple rounds.
    • Defects found during system test should be logged in to defect tracking system for the purpose of resolving and tracking.
    • Test logs and defects are captured and maintained.
    • Review of all test documents.
    Testing activities in Integration Testing Phase
    • This testing is done in parallel with integration of various applications or components.
    • Testing the product with its external and internal interfaces without using drivers and stubs.
    • Incremental approach while integrating the interfaces.
    Performance testing
    • This is done to validate the performance criteria of the product/ application. This is non-functional testing.

    Business Cycle testing

    • This refers to end-to-end testing of real-life business scenarios.

    Testing activities during Release phase

    • Acceptance testing is conducted at the customer location.
    • Resolves all defects reported by the customer during Acceptance testing.
    • Conduct Root Cause Analysis (RCA) for those defects reported by customer during acceptance testing.

    Web Testing Processes In Detail

    Web Testing
    Testing a Web site is a relatively new concept in the information technology (IT)field. Many businesses will test one part of a Web site, failing to see the importance of testing all the major components. Many businesses have not been as successful as others have because of this lack of testing; therefore, the need to test different aspects of the Web site has increased. The presence of online businesses on the World Wide Web (WWW) has become almost overwhelming. Because of this, you must test your site if you want to succeed, and to do so you need to identify the testing processes and methodologies that are most applicable to your business. Individuals can purchase just about anything on the Internet such as books, medicine, flowers, and paper supplies. To compete in this market, a Web business must be able to handle the volume, secure purchases, and deliver goods to customers. For this to happen, businesses should take Web testing seriously.

    Software Testing Life Cycle Models


    The various activities which are undertaken when developing software are commonly modeled as a software development lifecycle. The software development lifecycle begins with the identification of a requirement for software and ends with the formal verification of the developed software against that requirement. The software development lifecycle does not exist by itself; it is in fact part of an overall product lifecycle. Within the product lifecycle, software will undergo maintenance to correct errors and to comply with changes to requirements. The simplest overall form is where the product is just software, but it can become much more complicated, with multiple software developments each forming part of an overall system to comprise a product. There are a number of different models for software development lifecycles. One thing which all models have in common is that at some point in the lifecycle, software has to be tested. This paper outlines some of the more commonly used software development lifecycles, with particular emphasis on the testing activities in each model. A software life cycle model depicts the significant phases or activities of a software project from conception until the product is retired. It specifies the relationship between project phases, including transition criteria, feedback mechanisms, milestones, baselines, reviews, and deliverables. Typically, a life cycle model addresses the following phases of a software project: requirements phase, design phase, implementation, integration, testing, operations, and maintenance. Much of the motivation behind utilizing a life cycle model is to provide structure. Life cycle models describe the interrelationship between software development phases. The common life cycle models are the Waterfall, Prototype, Iteration/Spiral, and V models described in the sections above.


    Life cycle model

    Life cycle model: A framework containing the processes, activities, and tasks involved in the development, operation, and maintenance of a product, spanning the life of the system from the definition of its requirements to the termination of its use.

    What is meant by Software? Software is the collection of computer programs together with the associated procedures, rules, and documentation.
    Phases in Project Development
    • Preliminary Investigation
    • Feasibility Analysis
    • Requirement Analysis
    • SRS Preparation
    • Planning
    • Design
    • Coding
    • Testing
    • Implementation & Maintenance
    What is meant by Software Engineering?
    Software engineering is the systematic approach to the development, operation, maintenance, and retirement of software at a reliable cost.
    Test: An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.
    Testing: The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specifications.
    Testing (IEEE): The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
    Testing is a process designed to:
    • Prove that the program is error free
    • Establish that the software performs its functions correctly
    • Establish with confidence that the software does its job fully
    GOALS OF TESTING
    1. Find cases where the program does not do what it is supposed to do.
    2. Find cases where the program does things it is not supposed to do.
    THE EIGHT BASIC PRINCIPLES OF TESTING
    1. Define the expected output or result.
    2. Don't test your own programs.
    3. Inspect the results of each test completely.
    4. Include test cases for invalid or unexpected conditions.
    5. Test the program to see if it does what it is not supposed to do as well as what it is supposed to do.
    6. Avoid disposable test cases unless the program itself is disposable.
    7. Do not plan tests assuming that no errors will be found.
    8. The probability of locating more errors in any one module is directly proportional to the number of errors already found in that module.
    Let's look at each of these points.
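    As a minimal sketch of principles 1 and 4 in Python, each test below states its expected result up front, and invalid input is exercised deliberately (the safe_sqrt wrapper is just an invented example subject).

import math
import unittest

def safe_sqrt(x):
    """Example subject: a square-root wrapper that rejects negative input."""
    if x < 0:
        raise ValueError("x must be non-negative")
    return math.sqrt(x)

class SafeSqrtTest(unittest.TestCase):
    def test_expected_output_defined_in_advance(self):
        # Principle 1: the expected result (3.0) is written down before the test runs.
        self.assertEqual(safe_sqrt(9), 3.0)

    def test_invalid_condition(self):
        # Principle 4: include test cases for invalid or unexpected conditions.
        with self.assertRaises(ValueError):
            safe_sqrt(-1)

if __name__ == "__main__":
    unittest.main()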
    Why is Testing Required?
    Technical reasons, business reasons, and professional reasons.
    Technical Reason: *Developers not infallible. *Requirement Implications are not always fully seen. *Behavior of system not necessarily predictable from components. *Dynamic testing can only reveal some bugs.
    Business Reasons *Don’t need customer/user to find bugs. *Post release debugging is very difficult and expensive.
    Professional Reasons *Test case design is challenging and rewarding. *Good testing gives confidence in work. *Systematic testing is effective.
    PDCA METHOD: Plan (P) Devise a plan; Do (D) Execute the plan; Check (C) Check the results; Act (A) Take the necessary action.
    Attitude of a Tester:
    * I perform at least as well as another expert would * I deliver useful results in a usable form * I choose methods that fit the situation * I make appropriate use of available tools and resources * I collaborate effectively with the project team
    * I can explain and defend my work * I can advise clients about the risks and limitations of my work * I can advise clients about how my work could be even better * I faithfully and ethically serve my clients * I become more expert over time
    The Economics of Testing
    * Testing involves a trade-off between COST and RISK.
    * Is the level of acceptable risk the same for all programs?
    * When is it not cost effective to continue testing?
    * Under what circumstances could testing guarantee that a program is correct?
    Costs of Errors Over the Life Cycle
    * The sooner an error can be found and corrected, the lower the cost.
    * Costs can increase exponentially with the time between injection and discovery.
    * An industry survey showed that it is 75 times more expensive to correct errors discovered during 'installation' than during 'analysis'.
    * One organization reported an average cost of $91 per defect found during 'inspections' versus $25,000 per defect found after product delivery.
    What is 'Software Quality Assurance'?
    Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

    Verification and Validation Concepts

    Verification Concepts “A Verification concept is the understanding of principles, rationale, rules, participant roles and the psychologies of various techniques used to evaluate systems during development.”
    Validation Concepts “Validation typically involves actual testing and takes place after verifications are completed.”
    Verification Techniques
    Audits
    • An independent assessment of a project
    • To verify whether or not the project is in compliance with appropriate policies, procedures, standards, and contractual specifications
    • An audit may include operational aspects of the project
    Reviews and Inspections
    • To determine whether or not to continue development
    • To identify defects in the product early in the life cycle
    Types of Reviews
    1. In-Process Reviews
    2. Milestone Reviews, also called Decision-point/Phase-end Reviews: (a) Test Readiness Review, (b) Test Completion Review
    3. Post Implementation Reviews, also called Post Mortems
    1. In-Process Reviews
    • Assess progress towards requirements
    • Held during a specific period of the development cycle, such as the design period
    • Limited to a segment of the product
    • Used to find defects in the work product and the work process
    • Catch defects early, where they are less costly to correct
    2. Decision-point & Phase-end Reviews
    • Review of products and processes near the completion of each phase of development
    • Decisions for proceeding with development are based on cost, schedule, risk, progress, and readiness for the next phase
    • Also referred to as Milestone Reviews
    • Include Requirements, Critical Design, Test Readiness and Phase-end Reviews
    Software Requirements Review (a decision-point/phase-end review)
    • Requirements documented and baseline established
    • Analysis areas identified
    • Software development plan, test plan, and configuration management plan derived
    Critical Design Review
    • Baselines the detailed design specification
    • Test cases are reviewed and approved
    • Usually, coding will begin at the close of this phase
    Test Readiness Reviews
    • Performed when the appropriate modules are near completion
    • Determine whether or not testing should progress based on a review of entrance and exit criteria
    • Determine the readiness of the application/project for system and acceptance testing
    Test Completion Reviews
    • Determine the state of the software product
    3. Post Implementation Reviews
    • Also known as "Postmortems"
    • Review/evaluation of the product that includes planned vs. actual development results and compliance with requirements
    • Used for process improvement of software development
    • Can be held up to three to six months after implementation
    • Conducted in a formal format
    Classes of Reviews
    1. Informal Review
    2. Semiformal Review
    3. Formal Review
    Informal
    • Also called peer reviews
    • Generally a one-on-one meeting between the author of a work product and a peer
    • Initiated as a request for input
    • No agenda
    • Results are not formally reported
    • Occur as needed throughout each phase
    Semiformal
    • Facilitated by the author
    • Presentation is made with comment at the end, or with comments made throughout
    • Issues raised are captured and published in a report distributed to participants
    • Possible solutions for defects are not discussed
    • Occur one or more times during a phase
    Formal
    • Facilitated by a moderator (not the author)
    • Moderator is assisted by a recorder
    • Defects are recorded and assigned
    • Meeting is planned
    • Materials are distributed beforehand
    • Participants are prepared; their preparedness dictates the effectiveness of the review
    • Full participation by all members of the reviewing team is required
    • A formal report captures issues raised and is distributed to participants and management
    • Defects found are tracked through the defect tracking system and followed through to resolution
    • Formal reviews may be held at any time
    Review Rules
    1. The product is reviewed, not the producer
    2. Defects and issues are identified, not corrected
    3. All members of the reviewing team are responsible for the results of the review
    Review Notes
    # "Stage containment" means identifying defects in the stage in which they were created, rather than in later testing stages.
    # Reviews are generally greater than 65% effective; testing is often less than 30% effective.
    # The earlier defects are found, the less expensive they are to correct.
    # In addition to learning about a specific product/project, team members are exposed to a variety of approaches to technical issues, and reviews provide training in and enforce the use of standards.
    Participant Roles - Management of V & V
    1. Prepare the plans for execution of the process
    2. Initiate the implementation of the plan
    3. Monitor the execution of the plan
    4. Analyze problems discovered during the execution of the plan
    5. Report progress of the processes
    6. Ensure products satisfy requirements
    7. Assess evaluation results
    8. Determine whether a task is complete
    9. Check the results for completeness

    What is the use of the different kinds of testing, and when do we use them?

    What kinds of testing should be considered, and when do we use them?
    • Black Box Testing -Not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
    • White Box Testing -Based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
    • Unit Testing -The most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
    • Incremental Integration Testing -Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
    • Integration Testing -Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
    • Functional Testing -Black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
    • System Testing -Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
    • End-to-End Testing -Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
• Sanity Testing or Smoke Testing - Typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging systems down to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
    • Regression Testing - Re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
    • Acceptance Testing - Final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
    • Load Testing - Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
    • Stress Testing -Term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
    • Performance Testing - Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
    • Usability Testing - Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
    • Install/Uninstall Testing -Testing of full, partial, or upgrade install/uninstall processes.
    • Recovery Testing - Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
• Failover Testing - Often used interchangeably with 'recovery testing'.
    • Security Testing - Testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
• Compatibility Testing - Testing how well software performs in a particular hardware/software/operating system/network environment.
    • Exploratory Testing - Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
    • Ad-hoc Testing - Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
    • Context-driven Testing - Testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
    • User Acceptance Testing - Determining if software is satisfactory to an end-user or customer.
    • Comparison Testing - Comparing software weaknesses and strengths to competing products.
    • Alpha Testing - Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
    • Beta Testing - Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
    • Mutation Testing - A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
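To make the 'micro' end of this list concrete, here is a minimal unit-test sketch in pytest style for a hypothetical discount() function, using a small stub (FakeRateService) in place of a real dependency. The function, its rules, and all names are invented for illustration; real unit tests would target the project's own modules.

# Unit under test: a hypothetical discount calculation that depends on a rate lookup.
def discount(price, customer_type, rate_service):
    """Apply a percentage discount looked up from a rate service."""
    if price < 0:
        raise ValueError("price must be non-negative")
    rate = rate_service.rate_for(customer_type)
    return round(price * (1 - rate), 2)

class FakeRateService:
    """Test stub standing in for a real rate-lookup dependency."""
    def rate_for(self, customer_type):
        return {"regular": 0.0, "gold": 0.10}.get(customer_type, 0.0)

def test_gold_customer_gets_ten_percent_off():
    # Positive test: the software does what it should.
    assert discount(100.0, "gold", FakeRateService()) == 90.0

def test_negative_price_is_rejected():
    # Negative/boundary test: the software refuses invalid input.
    import pytest
    with pytest.raises(ValueError):
        discount(-1.0, "regular", FakeRateService())

Running "pytest" over a file containing these tests exercises the unit in isolation, exactly because the stub removes the need for any external system.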

    Testing Written Test Question -2

    1 : In software quality assurance work there is no difference between software verification and software validation.
    a. True
    b. False
    The correct answer is b
    2 : The best reason for using Independent software test teams is that.
    a. software developers do not need to do any testing
    b. a test team will test the software more thoroughly
    c. testers do not get involved with the project until testing begins
    d. arguments between developers and testers are reduced
    The correct answer is b
    3 : What is the normal order of activities in which traditional software testing is organized?
a. integration testing  b. system testing  c. unit testing  d. validation testing
    a. a, d, c, b
    b. b, d, a, c
    c. c, a, d, b
    d. d, b, c, a
    The correct answer is c
    4 : Class testing of object-oriented software is equivalent to unit testing for traditional software.
    a. True
    b. False
    The correct answer is a
    5 : By collecting software metrics and making use of existing software reliability models it is possible to develop meaningful guidelines for determining when software testing is finished.
    a. True
    b. False
    The correct answer is a
    6 : Which of the following strategic issues needs to be addressed in a successful software testing process?
    a. conduct formal technical reviews prior to testing
    b. specify requirements in a quantifiable manner
    c. use independent test teams
    d. wait till code is written prior to writing the test plan
e. both a and b
The correct answer is e
    7 : Which of the following need to be assessed during unit testing?
    a. algorithmic performance
    b. code stability
    c. error handling
    d. execution paths
    e. both c and d
    The correct answer is e
    8 : Drivers and stubs are not needed for unit testing because the modules are tested independently of one another.
    a. True
    b. False
    The correct answer is b
9 : Top-down integration testing has as its major advantage(s) that
    a. low level modules never need testing
    b. major decision points are tested early
    c. no drivers need to be written
    d. no stubs need to be written
    e. both b and c
    The correct answer is e
10 : Bottom-up integration testing has as its major advantage(s) that
    a. major decision points are tested early
    b. no drivers need to be written
    c. no stubs need to be written
    d. regression testing is not required
    The correct answer is c
    11 : Regression testing should be a normal part of integration testing because as a new module is added to the system new
    a. control logic is invoked
    b. data flow paths are established
    c. drivers require testing
    d. all of the above
    e. both a and b
    The correct answer is e
    12 : Smoke testing might best be described as
    a. bulletproofing shrink-wrapped software
    b. rolling integration testing
    c. testing that hides implementation errors
    d. unit testing for small programs
    The correct answer is b
    13 : When testing object-oriented software it is important to test each class operation separately as part of the unit testing process.
    a. True
    b. False
    The correct answer is b
    14 : The OO testing integration strategy involves testing
    a. groups of classes that collaborate or communicate in some way
    b. single operations as they are added to the evolving class implementation
    c. operator programs derived from use-case scenarios
    d. none of the above
    The correct answer is a
    15 : The focus of validation testing is to uncover places that a user will be able to observe failure of the software to conform to its requirements.
    a. True
    b. False
    The correct answer is a
    16 : Software validation is achieved through a series of tests performed by the user once the software is deployed in his or her work environment.
    a. True
    b. False
    The correct answer is b
    17 : Configuration reviews are not needed if regression testing has been rigorously applied during software integration.
    a. True
    b. False
    The correct answer is b
    18 : Acceptance tests are normally conducted by the
    a. developer
    b. end users
    c. test team
    d. systems engineers
    The correct answer is b
    19 : Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that software is able to continue execution without interruption.
    a. True
    b. False
    The correct answer is b
    20 : Security testing attempts to verify that protection mechanisms built into a system protect it from improper penetration.
    a. True
b. False
The correct answer is a
    21 : Stress testing examines the pressures placed on the user during system use in extreme environments.
    a. True
    b. False
    The correct answer is b
    22 : Performance testing is only important for real-time or embedded systems.
    a. True
    b. False
    The correct answer is b
    23 : Debugging is not testing, but always occurs as a consequence of testing.
    a. True
    b. False
    The correct answer is a
    24 : Which of the following is an approach to debugging?
    a. backtracking
    b. brute force
    c. cause elimination
    d. code restructuring
    e. a, b, and c
The correct answer is e

    What is Software Testing?

There are many published definitions of software testing, but they all boil down to essentially the same thing: software testing is the process of executing software in a controlled manner in order to answer the question "Does the software behave as specified?"

Software testing is often used in association with the terms verification and validation. Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification; others include reviews, analysis, inspections and walkthroughs. Validation is the process of checking that what has been specified is what the user actually wanted.
• Validation: Are we doing the right job?
• Verification: Are we doing the job right?

The term bug is often used to refer to a problem or fault in a computer. There are software bugs and hardware bugs. The term originated in the United States, at the time when pioneering computers were built out of valves, when a series of previously inexplicable faults was eventually traced to moths flying about inside the computer.

Software testing should not be confused with debugging. Debugging is the process of analyzing and locating bugs when software does not behave as expected. Although the identification of some bugs will be obvious from playing with the software, a methodical approach to software testing is a much more thorough means of identifying bugs. Debugging is therefore an activity which supports testing, but it cannot replace testing. However, no amount of testing can be guaranteed to discover all bugs.
    Other activities which are often associated with software testing are static analysis and dynamic analysis. Static analysis investigates the source code of software, looking for problems and gathering metrics without actually executing the code. Dynamic analysis looks at the behavior of software while it is executing, to provide information such as execution traces, timing profiles, and test coverage information.
2. Software Specifications and Testing

The key word in the above definition is "specified". Verification and validation activities, such as software testing, cannot be meaningful unless there is a specification for the software. Software could be a single module or unit of code, or an entire system. Depending on the size of the development and the development methods, the specification of software can range from a single document to a complex hierarchy of documents.

A hierarchy of software specifications will typically contain three or more levels of software specification documents:
• The Requirements Specification, which specifies what the software is required to do and may also specify constraints on how this may be achieved.
• The Architectural Design Specification, which describes the architecture of a design which implements the requirements. Components within the software and the relationships between them are described in this document.
• Detailed Design Specifications, which describe how each component in the software, down to individual units, is to be implemented.
(Hierarchy: Requirements Specification -> Architectural Design Specification -> Detailed Design Specifications.)

With such a hierarchy of specifications, it is possible to test software at various stages of the development for conformance with each specification. The levels of testing which correspond to the hierarchy of software specifications listed above are:
• Unit Testing, in which each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented.
• Software Integration Testing, in which progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a whole.
• System Testing, in which the software is integrated into the overall product and tested to show that all requirements are met.
A further level of testing is also concerned with requirements:
• Acceptance Testing, upon which acceptance of the completed software is based. This will often use a subset of the system tests, witnessed by the customers for the software or system.

Once each level of software specification has been written, the next step is to design the tests. An important point here is that the tests should be designed before the software is implemented, because if the software were implemented first it would be too tempting to test the software against what it is observed to do (which is not really testing at all), rather than against what it is specified to do. Within each level of testing, once the tests have been applied, the test results are evaluated. If a problem is encountered, then either the tests are revised and applied again, or the software is fixed and the tests are applied again. This is repeated until no problems are encountered, at which point development can proceed to the next level of testing.
    Testing does not end following the conclusion of acceptance testing. Software has to be maintained to fix problems which show up during use and to accommodate new requirements. Software tests have to be repeated, modified and extended. The effort to revise and repeat tests consequently forms a major part of the overall cost of developing and maintaining software. The term regression testing is used to refer to the repetition of earlier successful tests in order to make sure that changes to the software have not introduced side effects.
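As an illustration of the regression idea, the sketch below keeps an earlier, already-passing test in a unittest suite so it is re-run after every change. The normalise_name() function and the test name are hypothetical; the point is only that the whole file is re-executed whenever the software is modified.

# Regression sketch: previously passing tests are retained and re-run after changes.
import unittest

def normalise_name(name):
    """Hypothetical function under maintenance."""
    return " ".join(name.split()).title()

class NameRegressionTests(unittest.TestCase):
    def test_extra_whitespace_collapsed(self):
        # Added when this behaviour was first delivered; kept to catch side effects
        # introduced by later fixes or new requirements.
        self.assertEqual(normalise_name("  ada   lovelace "), "Ada Lovelace")

if __name__ == "__main__":
    # Re-running the whole suite after each modification is the regression step.
    unittest.main(verbosity=2)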
3. Test Design Documentation

The design of tests is subject to the same basic engineering principles as the design of software. Good design consists of a number of stages which progressively elaborate the design of tests, from an initial high-level strategy to detailed test procedures. These stages are: test strategy, test planning, test case design, and test procedure design.

The design of tests has to be driven by the specification of the software. At the highest level this means that tests will be designed to verify that the software faithfully implements the requirements of the Requirements Specification. At lower levels, tests will be designed to verify that items of software implement all design decisions made in the Architectural Design Specification and Detailed Design Specifications. As with any design process, each stage of the test design process should be subject to informal and formal review.

The ease with which tests can be designed is highly dependent on the design of the software. It is important to consider testability as a key (but usually undocumented) requirement for any software development.
3.1. Test Strategy

The first stage is the formulation of a test strategy. A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used. A test strategy should ideally be organization-wide, being applicable to all of the organization's software developments. Developing a test strategy which efficiently meets the needs of an organization is critical to the success of software development within the organization. The application of a test strategy to a software development project should be detailed in the project's software quality plan.

3.2. Test Plans

The next stage of test design, which is the first stage within a software development project, is the development of a test plan. A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment. A test plan may be project-wide, or may in fact be a hierarchy of plans relating to the various levels of specification and testing:
• An Acceptance Test Plan, describing the plan for acceptance testing of the software. This would usually be published as a separate document, but might be published with the system test plan as a single document.
• A System Test Plan, describing the plan for system integration and testing. This would also usually be published as a separate document, but might be published with the acceptance test plan.
• A Software Integration Test Plan, describing the plan for integration of tested software components. This may form part of the Architectural Design Specification.
• Unit Test Plan(s), describing the plans for testing of individual units of software. These may form part of the Detailed Design Specifications.
The objective of each test plan is to provide a plan for verification, by testing the software, that the software produced fulfils the requirements or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this means the Requirements Specification.

3.3. Test Case Design

Once the test plan for a level of testing has been written, the next stage of test design is to specify a set of test cases or test paths for each item to be tested at that level. A number of test cases will be identified for each item to be tested at each level of testing. Each test case will specify how the implementation of a particular requirement or design decision is to be tested and the criteria for success of the test. The test cases may be documented with the test plan, as a section of the software specification, or in a separate document called a test specification or test description:
• An Acceptance Test Specification, specifying the test cases for acceptance testing of the software. This would usually be published as a separate document, but might be published with the acceptance test plan.
• A System Test Specification, specifying the test cases for system integration and testing. This would also usually be published as a separate document, but might be published with the system test plan.
• Software Integration Test Specifications, specifying the test cases for each stage of integration of tested software components. These may form sections of the Architectural Design Specification.
• Unit Test Specifications, specifying the test cases for testing of individual units of software. These may form sections of the Detailed Design Specifications.
System testing and acceptance testing involve an enormous number of individual test cases.
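As one possible shape for a documented test case, the sketch below records the fields discussed above (the requirement being verified, the steps, and the pass/fail criterion) as a simple Python record. The field names and identifiers are invented for illustration, not a prescribed format.

# Minimal sketch of one documented test case.
test_case = {
    "id": "TC-012",                                   # hypothetical test case identifier
    "requirement": "REQ-002",                         # hypothetical requirement it verifies
    "objective": "Reject login after three failed password attempts",
    "preconditions": ["user account 'demo' exists", "account is not locked"],
    "steps": ["enter wrong password three times", "attempt a fourth login"],
    "expected_result": "account is locked and a warning is shown",
    "type": "negative",                               # checks the software does not do what it shouldn't
}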
In order to keep track of which requirements are tested by which test cases, an index which cross-references requirements against test cases is often constructed. This is usually referred to as a Verification Cross Reference Index (VCRI) and is attached to the test specification. Cross-reference indexes may also be used with unit testing and software integration testing. It is important to design test cases for both positive testing and negative testing. Positive testing checks that the software does what it should. Negative testing checks that the software doesn't do what it shouldn't. The process of designing test cases, including executing them as thought experiments, will often identify bugs before the software has even been built. It is not uncommon to find more bugs when designing tests than when executing tests.
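A VCRI can be as simple as a table mapping requirement identifiers to the test cases that verify them; the sketch below uses a plain dictionary and flags any requirement with no coverage. All identifiers are hypothetical.

# Minimal Verification Cross Reference Index sketch.
vcri = {
    "REQ-001": ["TC-010", "TC-011"],
    "REQ-002": ["TC-012"],
    "REQ-003": [],                      # no test case yet; flagged below
}

untested = [req for req, cases in vcri.items() if not cases]
print("Requirements with no test case:", untested)   # -> ['REQ-003']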
3.4. Test Procedures

The final stage of test design is to implement a set of test cases as a test procedure, specifying the exact process to be followed to conduct each of the test cases. This is a fairly straightforward process, which can be likened to designing units of code from higher-level functional descriptions. For each item to be tested, at each level of testing, a test procedure will specify the process to be followed in conducting the appropriate test cases. A test procedure cannot leave out steps or make assumptions. The level of detail must be such that the test procedure is deterministic and repeatable. Test procedures should always be separate items, because they contain a great deal of detail which is irrelevant to software specifications. If AdaTEST or Cantata are used, test procedures may be coded directly as AdaTEST or Cantata test scripts.
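The sketch below shows the general idea of a procedure coded as a deterministic, repeatable script: every step states its action and its check, and nothing is left implicit. It is a generic Python illustration, not AdaTEST or Cantata syntax, and the steps and values are invented.

# Generic sketch of a test procedure as an executable, repeatable script.
def run_procedure():
    results = []

    # Step 1: establish a known starting state (no assumptions left implicit).
    basket = []
    results.append(("step 1: empty basket created", len(basket) == 0))

    # Step 2: apply the stimulus defined by the test case.
    basket.append({"item": "book", "price": 12.50})
    results.append(("step 2: one item added", len(basket) == 1))

    # Step 3: compare the outcome against the pass/fail criterion.
    total = sum(line["price"] for line in basket)
    results.append(("step 3: total equals 12.50", total == 12.50))

    return results

if __name__ == "__main__":
    for step, passed in run_procedure():
        print(step, "PASS" if passed else "FAIL")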
4. Test Results Documentation

When tests are executed, the outputs of each test execution should be recorded in a test results file. These results are then assessed against criteria in the test specification to determine the overall outcome of a test. If AdaTEST or Cantata is used, this file will be created and the results assessed automatically according to criteria specified in the test script. Each test execution should also be noted in a test log. The test log will contain records of when each test has been executed, the outcome of each test execution, and may also include key observations made during test execution. Often a test log is not maintained for lower levels of testing (unit test and software integration test). Test reports may be produced at various points during the testing process. A test report will summarize the results of testing and document any analysis. An acceptance test report often forms a contractual document within which acceptance of software is agreed.
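A test log need not be elaborate; one timestamped record per test execution is enough to reconstruct when each test ran and what happened. The sketch below appends such records to a CSV file; the file name and record fields are illustrative only.

# Sketch of a simple test log: one timestamped record per test execution.
import csv
import datetime

def log_result(test_id, outcome, notes="", path="test_log.csv"):
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(timespec="seconds"),
             test_id, outcome, notes])

log_result("TC-011", "PASS")
log_result("TC-012", "FAIL", "response time exceeded 2 s")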
5. Further Results and Conclusion

Software can be tested at various stages of the development and with various degrees of rigor. Like any development activity, testing consumes effort and effort costs money. Developers should plan for between 30% and 70% of a project's effort to be expended on verification and validation activities, including software testing.

From an economic point of view, the level of testing appropriate to a particular organization and software application will depend on the potential consequences of undetected bugs. Such consequences can range from the minor inconvenience of having to find a workaround for a bug, to multiple deaths. Often overlooked by software developers (but not by customers) is the long-term damage to the credibility of an organization which delivers software to users with bugs in it, and the resulting negative impact on future business. Conversely, a reputation for reliable software will help an organization to obtain future business.

Efficiency and quality are best served by testing software as early in the life cycle as practical, with full regression testing whenever changes are made. The later a bug is found, the higher the cost of fixing it, so it is sound economics to identify and fix bugs as early as possible. Designing tests will help to identify bugs, even before the tests are executed, so designing tests as early as practical in software development is a useful means of reducing the cost of identifying and correcting bugs. In practice the design of each level of software testing will be developed through a number of layers, each adding more detail to the tests. Each level of tests should be designed before the implementation reaches a point which could influence the design of the tests in such a way as to be detrimental to their objectivity. Remember: software should be tested against what it is specified to do, not against what it is actually observed to do.

The effectiveness of the testing effort can be maximized by selection of an appropriate testing strategy, good management of the testing process, and appropriate use of tools such as AdaTEST or Cantata to support the testing process. The net result will be an increase in quality and a decrease in costs, both of which can only be beneficial to a software developer's business.

The following list provides some rules to follow as an aid to effective and beneficial software testing:
• Always test against a specification. If tests are not developed from a specification, then it is not testing. Hence, testing is totally reliant upon adequate specification of the software.
• Document the testing process: specify tests and record test results.
• Test hierarchically against each level of specification. Finding more errors earlier will ultimately reduce costs.
• Plan verification and validation activities, particularly testing.
• Complement testing with techniques such as static analysis and dynamic analysis.
• Always test positively (that the software does what it should), but also negatively (that it doesn't do what it shouldn't).
• Have the right attitude to testing: it should be a challenge, not the chore it so often becomes.