Life cycle model

Life cycle model: A framework containing the processes, activities, and tasks involved in the development, operation, and maintenance of a product, spanning the life of the system from the definition of its requirements to the termination of its use.

What is meant by Software? Software is the collection of computer programs, together with the procedures, rules, and associated requirements documentation.
Phases in Project Development
• Preliminary Investigation
• Feasibility Analysis
• Requirement Analysis
• SRS Preparation
• Planning
• Design
• Coding
• Testing
• Implementation & Maintenance
What is meant by Software Engineering?
Software engineering is the systematic approach to the development, operation, maintenance, and retirement of software at a reliable cost.
Test: An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.
Testing: The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.
Testing (IEEE): The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Testing is a process designed to:
• Prove that the program is error free
• Establish that the software performs its functions correctly
• Establish with confidence that the software does its job fully
GOALS OF TESTING
1. Find cases where the program does not do what it is supposed to do.
2. Find cases where the program does things it is not supposed to do.
THE EIGHT BASIC PRINCIPLES OF TESTING
1. Define the expected output or result.
2. Don't test your own programs.
3. Inspect the results of each test completely.
4. Include test cases for invalid or unexpected conditions.
5. Test the program to see if it does what it is not supposed to do, as well as what it is supposed to do.
6. Avoid disposable test cases unless the program itself is disposable.
7. Do not plan tests assuming that no errors will be found.
8. The probability of locating more errors in any one module is directly proportional to the number of errors already found in that module.
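As a minimal illustration of principles 1, 4, and 5 (define the expected output, include invalid or unexpected conditions, and check that the program does not do what it should not do), here is a hedged Python sketch; the divide function and the test names are hypothetical and used only for illustration.

```python
import unittest

def divide(numerator, denominator):
    """Hypothetical unit under test: returns numerator / denominator."""
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    return numerator / denominator

class DivideTests(unittest.TestCase):
    # Principle 1: each test states its expected output or result up front.
    def test_valid_input(self):
        self.assertEqual(divide(10, 4), 2.5)

    # Principle 4: invalid or unexpected conditions get their own test cases.
    def test_zero_denominator_is_rejected(self):
        with self.assertRaises(ValueError):
            divide(10, 0)

    # Principle 5: check the program does NOT do what it is not supposed to do,
    # e.g. it must not silently truncate the result to an integer.
    def test_result_is_not_truncated(self):
        self.assertNotEqual(divide(7, 2), 3)

if __name__ == "__main__":
    unittest.main()
```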
Why Testing is Required?
• Technical Reasons
• Business Reasons
• Professional Reasons

Technical Reasons
• Developers are not infallible.
• Requirement implications are not always fully seen.
• The behavior of a system is not necessarily predictable from its components.
• Dynamic testing can reveal only some of the bugs.

Business Reasons
• The customer/user should not be the one to find bugs.
• Post-release debugging is very difficult and expensive.

Professional Reasons
• Test case design is challenging and rewarding.
• Good testing gives confidence in the work.
• Systematic testing is effective.
PDCA METHOD
• Plan (P): Devise a plan
• Do (D): Execute the plan
• Check (C): Check the results
• Act (A): Take the necessary action
Attitude of a Tester:
* I perform at least as well as another expert would * I deliver useful results in a usable form * I choose methods that fit the situation * I make appropriate use of available tools and resources * I collaborate effectively with the project team
* I can explain and defend my work * I can advise clients about the risks and limitations of my work * I can advise clients about how my work could be even better * I faithfully and ethically serve my clients * I become more expert over time
The Economics of Testing
* Testing involves a trade-off between COST and RISK.
* Is the level of acceptable risk the same for all programs?
* When is it not cost effective to continue testing?
* Under what circumstances could testing guarantee that a program is correct?

Costs of Errors Over the Life Cycle
* The sooner an error can be found and corrected, the lower the cost.
* Costs can increase exponentially with the time between injection and discovery.
* An industry survey showed that it is 75 times more expensive to correct errors discovered during "installation" than during "analysis".
* One organization reported an average cost of $91 per defect found during "inspections" versus $25,000 per defect found after product delivery.

What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

Verification and Validation Concepts

Verification Concepts
"A verification concept is an understanding of the principles, rationale, rules, participant roles, and psychology of the various techniques used to evaluate systems during development."
Validation Concepts
"Validation typically involves actual testing and takes place after verification activities are completed."
Verification Techniques
Audits
• An independent assessment of a project
• To verify whether or not the project is in compliance with appropriate policies, procedures, standards, and contractual specifications
• An audit may include operational aspects of the project
Reviews and Inspections
• To determine whether or not to continue development
• To identify defects in the product early in the life cycle
Types of Reviews
1. In-Process Reviews
2. Milestone Reviews (also called Decision-Point/Phase-End Reviews)
   (a) Test Readiness Review
   (b) Test Completion Review
3. Post-Implementation Reviews (also called Postmortems)
1. In-Process Reviews
• Assess progress towards requirements
• Conducted during a specific period of the development cycle, such as the design period
• Limited to a segment of the product
• Used to find defects in the work product and the work process
• Catch defects early, where they are less costly to correct
2. Decision-Point & Phase-End Reviews
• Review of products and processes near the completion of each phase of development
• Decisions for proceeding with development are based on cost, schedule, risk, progress, and readiness for the next phase
• Also referred to as Milestone Reviews
• Include Requirements, Critical Design, Test Readiness, and Phase-End Reviews
Software Requirements Review
• Requirements documented
• Baseline established
• Analysis areas identified
• Software development plan, test plan, and configuration management plan derived
Critical Design Review
• Baselines the detailed design specification
• Test cases are reviewed and approved
• Usually, coding will begin at the close of this phase
Test Readiness Reviews
• Performed when the appropriate modules are near completion
• Determine whether or not testing should progress, based on a review of entrance and exit criteria
• Determine the readiness of the application/project for system and acceptance testing
Test Completion Reviews
• Determine the state of the software product
3. Post-Implementation Reviews
• Also known as "postmortems"
• Review/evaluation of the product that includes planned vs. actual development results and compliance with requirements
• Used for process improvement of software development
• Can be held up to three to six months after implementation
• Conducted in a formal format
Classes of Reviews
1. Informal Review
2. Semiformal Review
3. Formal Review
1. Informal Review
• Also called a peer review
• Generally a one-on-one meeting between the author of a work product and a peer
• Initiated as a request for input
• No agenda
• Results are not formally reported
• Occur as needed throughout each phase
2. Semiformal Review
• Facilitated by the author
• A presentation is made, with comments taken either at the end or throughout
• Issues raised are captured and published in a report distributed to participants
• Possible solutions for defects are not discussed
• Occur one or more times during a phase
3. Formal Review
• Facilitated by a moderator (not the author); the moderator is assisted by a recorder
• Defects are recorded and assigned
• The meeting is planned and materials are distributed beforehand
• Participants are prepared; their preparedness dictates the effectiveness of the review
• Full participation by all members of the reviewing team is required
• A formal report captures issues raised and is distributed to participants and management
• Defects found are tracked through the defect tracking system and followed through to resolution
• Formal reviews may be held at any time
Review Rules
1. The product is reviewed, not the producer.
2. Defects and issues are identified, not corrected.
3. All members of the reviewing team are responsible for the results of the review.
Review Notes
• "Stage containment" means identifying defects in the stage in which they were created, rather than in later testing stages.
• Reviews are generally greater than 65% effective; testing is often less than 30% effective.
• The earlier defects are found, the less expensive they are to correct.
• In addition to learning about a specific product/project, team members are exposed to a variety of approaches to technical issues.
• Reviews provide training in, and enforce the use of, standards.
Participant Roles
Management of V&V:
1. Prepare the plans for execution of the process
2. Initiate the implementation of the plan
3. Monitor the execution of the plan
4. Analyze problems discovered during the execution of the plan
5. Report progress of the processes
6. Ensure products satisfy requirements
7. Assess evaluation results
8. Determine whether a task is complete
9. Check the results for completeness

What kinds of testing should be considered, and when is each used?
  • Black Box Testing -Not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
  • White Box Testing -Based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
  • Unit Testing - The most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses (see the stub-and-driver sketch after this list).
  • Incremental Integration Testing -Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
  • Integration Testing -Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
  • Functional Testing -Black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
  • System Testing -Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
  • End-to-End Testing -Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
  • Sanity Testing or Smoke Testing - Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
  • Regression Testing - Re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
  • Acceptance Testing - Final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
  • Load Testing - Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
  • Stress Testing -Term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
  • Performance Testing - Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
  • Usability Testing - Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
  • Install/Uninstall Testing -Testing of full, partial, or upgrade install/uninstall processes.
  • Recovery Testing - Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
  • Failover Testing - Typically used interchangeably with 'recovery testing'.
  • Security Testing - Testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
  • Compatibility Testing - Testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
  • Exploratory Testing - Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
  • Ad-hoc Testing - Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
  • Context-driven Testing - Testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
  • User Acceptance Testing - Determining if software is satisfactory to an end-user or customer.
  • Comparison Testing - Comparing software weaknesses and strengths to competing products.
  • Alpha Testing - Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
  • Beta Testing - Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
  • Mutation Testing - A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
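The unit testing and integration testing entries above mention test drivers and stubs. The following is a minimal sketch, assuming a hypothetical order_total unit that depends on an external price service; it shows how a hand-written stub and a simple driver let the unit be tested in isolation before integration.

```python
class PriceServiceStub:
    """Stub standing in for a collaborator that is not integrated yet."""
    def price_of(self, item):
        # Fixed, predictable answers instead of a real lookup.
        return {"apple": 2.0, "bread": 3.5}[item]

def order_total(items, price_service):
    """Hypothetical unit under test: sums the price of each ordered item."""
    return sum(price_service.price_of(item) for item in items)

def test_order_total_with_stub():
    # The stub isolates the unit (unit testing); the real price service is
    # only exercised later, during integration testing.
    assert order_total(["apple", "bread"], PriceServiceStub()) == 5.5

if __name__ == "__main__":
    # Simple test driver so the check can run without a test framework.
    test_order_total_with_stub()
    print("unit test passed")
```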
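The mutation testing entry above can be sketched roughly as follows: a defect (a 'mutant') is deliberately introduced and the existing test cases are re-run to see whether they detect it. The functions below are hypothetical illustrations, not a real mutation testing framework.

```python
def is_even(n):
    """Original, presumed-correct unit."""
    return n % 2 == 0

def is_even_mutant(n):
    # Deliberately injected defect: the comparison is flipped from == to !=.
    return n % 2 != 0

def run_existing_tests(candidate):
    """Re-run the original test data/cases against a candidate implementation."""
    cases = [(2, True), (3, False), (0, True)]
    return all(candidate(n) == expected for n, expected in cases)

if __name__ == "__main__":
    assert run_existing_tests(is_even)                   # original passes
    mutant_detected = not run_existing_tests(is_even_mutant)
    # If the mutant is detected ('killed'), the test set is judged useful.
    print("mutant detected:", mutant_detected)
```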

Testing Written Test Questions - 2

1 : In software quality assurance work there is no difference between software verification and software validation.
a. True
b. False
The correct answer is b
2 : The best reason for using independent software test teams is that
a. software developers do not need to do any testing
b. a test team will test the software more thoroughly
c. testers do not get involved with the project until testing begins
d. arguments between developers and testers are reduced
The correct answer is b
3 : What is the normal order of activities in which traditional software testing is organized?
a. integration testing  b. system testing  c. unit testing  d. validation testing
a. a, d, c, b
b. b, d, a, c
c. c, a, d, b
d. d, b, c, a
The correct answer is c
4 : Class testing of object-oriented software is equivalent to unit testing for traditional software.
a. True
b. False
The correct answer is a
5 : By collecting software metrics and making use of existing software reliability models it is possible to develop meaningful guidelines for determining when software testing is finished.
a. True
b. False
The correct answer is a
6 : Which of the following strategic issues needs to be addressed in a successful software testing process?
a. conduct formal technical reviews prior to testing
b. specify requirements in a quantifiable manner
c. use independent test teams
d. wait till code is written prior to writing the test plan
e. both a and b
The correct answer is e
7 : Which of the following need to be assessed during unit testing?
a. algorithmic performance
b. code stability
c. error handling
d. execution paths
e. both c and d
The correct answer is e
8 : Drivers and stubs are not needed for unit testing because the modules are tested independently of one another.
a. True
b. False
The correct answer is b
9 : Top-down integration testing has as its major advantage(s) that
a. low level modules never need testing
b. major decision points are tested early
c. no drivers need to be written
d. no stubs need to be written
e. both b and c
The correct answer is e
10 : Bottom-up integration testing has as its major advantage(s) that
a. major decision points are tested early
b. no drivers need to be written
c. no stubs need to be written
d. regression testing is not required
The correct answer is c
11 : Regression testing should be a normal part of integration testing because as a new module is added to the system new
a. control logic is invoked
b. data flow paths are established
c. drivers require testing
d. all of the above
e. both a and b
The correct answer is e
12 : Smoke testing might best be described as
a. bulletproofing shrink-wrapped software
b. rolling integration testing
c. testing that hides implementation errors
d. unit testing for small programs
The correct answer is b
13 : When testing object-oriented software it is important to test each class operation separately as part of the unit testing process.
a. True
b. False
The correct answer is b
14 : The OO testing integration strategy involves testing
a. groups of classes that collaborate or communicate in some way
b. single operations as they are added to the evolving class implementation
c. operator programs derived from use-case scenarios
d. none of the above
The correct answer is a
15 : The focus of validation testing is to uncover places that a user will be able to observe failure of the software to conform to its requirements.
a. True
b. False
The correct answer is a
16 : Software validation is achieved through a series of tests performed by the user once the software is deployed in his or her work environment.
a. True
b. False
The correct answer is b
17 : Configuration reviews are not needed if regression testing has been rigorously applied during software integration.
a. True
b. False
The correct answer is b
18 : Acceptance tests are normally conducted by the
a. developer
b. end users
c. test team
d. systems engineers
The correct answer is b
19 : Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that software is able to continue execution without interruption.
a. True
b. False
The correct answer is b
20 : Security testing attempts to verify that protection mechanisms built into a system protect it from improper penetration.
a. True
b. False
The correct answer is a
21 : Stress testing examines the pressures placed on the user during system use in extreme environments.
a. True
b. False
The correct answer is b
22 : Performance testing is only important for real-time or embedded systems.
a. True
b. False
The correct answer is b
23 : Debugging is not testing, but always occurs as a consequence of testing.
a. True
b. False
The correct answer is a
24 : Which of the following is an approach to debugging?
a. backtracking
b. brute force
c. cause elimination
d. code restructuring
e. a, b, and c
The correct answer is e