What Is WSDL?

WSDL: Web Services Description Language

This new breed of dot-com business needs a way to describe the web services it offers. Specifically, this means you need a format or grammar with which you can describe the answers to the following questions:
  • What services does your online business offer?
  • How can those business services be invoked?
  • What information do your business services need from the user when he or she invokes a service?
  • How will the user provide the required information?
  • In which format will the services send information back to the user?

Testing Definitions in Simple Terms

  • What's Ad Hoc Testing ?
  • A testing approach in which the tester tries to break the software by randomly exercising its functionality.
  • What's the Accessibility Testing ?
  • Testing that determines if software will be usable by people with disabilities.
  • What's the Alpha Testing ?
  • Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of the software.
  • What's the Beta Testing ?
  • Testing the application after installation at the client's site.
  • What is Component Testing ?
  • Testing of individual software components (Unit Testing).
  • What's Compatibility Testing ?
  • Compatibility testing verifies that the software is compatible with the other elements of the system.
  • What is Concurrency Testing ?
  • Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
  • What is Conformance Testing ?
  • The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
  • What is Context Driven Testing ?
  • The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
  • What is Data Driven Testing ?
  • Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
  • What is Conversion Testing ?
  • Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
  • What is Dependency Testing ?
  • Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
  • What is Depth Testing ?
  • A test that exercises a feature of a product in full detail.
  • What is Dynamic Testing ?
  • Testing software through executing it. See also Static Testing.
  • What is Endurance Testing ?
  • Checks for memory leaks or other problems that may occur with prolonged execution.
  • What is End-to-End testing ?
  • Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
  • What is Exhaustive Testing ?
  • Testing which covers all combinations of input values and preconditions for an element of the software under test.
  • What is Gorilla Testing ?
  • Testing one particular module, functionality heavily.
  • What is Installation Testing ?
  • Testing that verifies the software installs correctly on the supported platforms and configurations, and that it upgrades and uninstalls cleanly.
  • What is Localization Testing ?
  • Testing that verifies software has been correctly adapted for a specific locale (language, currency, date formats, and other regional conventions).
  • What is Loop Testing ?
  • A white box testing technique that exercises program loops.
  • What is Mutation Testing ?
  • Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
  • What is Monkey Testing ?
  • Testing a system or an application on the fly, i.e., just a few tests here and there to ensure the system or application does not crash.
  • What is Positive Testing ?
  • Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
  • What is Negative Testing ?
  • Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.
  • What is Path Testing ?
  • Testing in which all paths in the program source code are tested at least once.
  • What is Performance Testing ?
  • Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
  • What is Ramp Testing ?
  • Continuously raising an input signal until the system breaks down.
  • What is Recovery Testing ?
  • Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
  • What is the Re-testing testing ?
  • Retesting: testing the functionality of the application again, typically to confirm that a defect fix works.
  • What is the Regression testing ?
  • Regression: checking that changes in code have not affected existing working functionality.
  • What is Sanity Testing ?
  • Brief test of the major functional elements of a piece of software to determine if it is basically operational.
  • What is Scalability Testing ?
  • Performance testing focused on ensuring the application under test gracefully handles increases in workload under normal conditions.
  • What is Security Testing ?
  • Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
  • What is Stress Testing ?
  • Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results under abnormal conditions.
  • What is Smoke Testing ?
  • A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
  • What is Soak Testing ?
  • Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed. In short, soak testing measures reliability under load.
  • What's the Usability testing ?
  • Usability testing evaluates how user-friendly the software is.
  • What's the User acceptance testing ?
  • User acceptance testing is determining if software is satisfactory to an end-user or customer.
  • What's the Volume Testing ?
  • Testing in which the system is subjected to large volumes of data.
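Several of the definitions above, data-driven testing in particular, are easier to see in code. Below is a minimal sketch in Python; the `add` function and the inline data table are hypothetical stand-ins (a real data-driven suite would usually keep the rows in a file or spreadsheet, as the definition notes):

```python
def add(a, b):
    """Hypothetical unit under test."""
    return a + b

# Externally defined data values; in a real data-driven suite these
# would usually be maintained in a CSV file or spreadsheet, not inline.
TEST_DATA = [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
]

def run_data_driven_tests():
    # The action of the test is parameterized by the data rows:
    # the same test logic runs once per row.
    failures = []
    for a, b, expected in TEST_DATA:
        actual = add(a, b)
        if actual != expected:
            failures.append((a, b, expected, actual))
    return failures
```

An empty failure list means every data row passed; adding a new case is a data change, not a code change, which is the point of the technique.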

Test Plan

Test Plan:- A Test Plan answers two questions: what to test, and how to test it. The Test Plan describes the overall approach to development, integration, qualification, and acceptance testing. It describes the plans for testing software systems and the test environment to be used, identifies the tests to be performed, and provides schedules for test activities. The software Test Plan is used by the IT Manager, Test Manager, Documentation Manager, System Administrator, Technical Architect, Development Manager, and Project Manager.
The Test Plan is used to perform system testing, subsystem testing, assembly testing, subassembly testing, module testing, user acceptance testing. The Test Plan includes step by step instructions for the various types of testing that will occur during the project life cycle; it also defines the required test equipment, calibration requirements, test facility requirements, and other key factors.
This is a sample of an outline for a test plan. It has been designed for medium to small test projects, and thus is fairly lightweight. It is by necessity general, because each enterprise, each development group, each testing group, and each development project is different. This outline should be used as a set of guidelines for creating your own standard template; add to it or subtract from it as you find appropriate.

Differences in Testing

Difference Between White box and Black Box Testing?
Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
Black-box and white-box are test design methods. Black-box test design treats the system as a “black-box”, so it doesn’t explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the “box”, and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether.

It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design methods. Unit testing is usually associated with structural test design, but this is because testers usually don't have well-defined requirements at the unit level to validate.
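The black-box/white-box distinction can be illustrated with a small sketch. Assuming a hypothetical requirement that orders of 100 or more get a discount, black-box cases come from that requirement alone, while white-box cases come from reading the code and exercising both branches and the boundary of the condition:

```python
def classify_discount(amount):
    """Hypothetical unit: the requirement says orders of 100 or more get a discount."""
    if amount >= 100:
        return "discount"
    return "full price"

# Black-box cases: derived from the requirement, with no look at the code.
black_box_cases = [(150, "discount"), (50, "full price")]

# White-box cases: derived from the code itself, hitting both branches
# and the boundary of the `amount >= 100` condition.
white_box_cases = [(99, "full price"), (100, "discount")]

def run(cases):
    # Returns True only if every (input, expected) pair passes.
    return all(classify_discount(amount) == expected
               for amount, expected in cases)
```

In practice, as the text says, the two sets complement each other: the black-box cases check the stated behavior, the white-box cases make sure no branch goes unexercised.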
What are unit, component and integration testing?
Note that the definitions of unit, component, integration, and integration testing are recursive:
Unit. The smallest compilable component. A unit typically is the work of one programmer (At least in principle). As defined, it does not include any called sub-components (for procedural languages) or communicating components in general.
Unit Testing: in unit testing, called components (or communicating components) are replaced with stubs, simulators, or trusted components. Calling components are replaced with drivers or trusted super-components. The unit is tested in isolation.
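A minimal sketch of this idea in Python, with all names hypothetical: the called component is injected into the unit under test so a stub can replace the real thing, and the unit is then tested in isolation:

```python
class PriceServiceStub:
    """Stub standing in for the real, not-yet-integrated price service."""
    def get_price(self, item):
        # Canned answers instead of a real lookup.
        return {"apple": 10, "pear": 20}.get(item, 0)

def order_total(items, price_service):
    """The unit under test. Its called component is passed in, so during
    unit testing a stub can stand in for the real service."""
    return sum(price_service.get_price(item) for item in items)

# The test driver exercises the unit in isolation against the stub:
total = order_total(["apple", "pear"], PriceServiceStub())
```

In component testing, per the definition below, the stub would be swapped out for the real price service and the same test rerun.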
Component: a unit is a component. The integration of one or more components is a component.
Note: The reason for "one or more" as contrasted to "Two or more" is to allow for components that call themselves recursively.
Component testing: same as unit testing except that all stubs and simulators are replaced with the real thing.
Two components (actually one or more) are said to be integrated when:
a. They have been compiled, linked, and loaded together.
b. They have successfully passed the integration tests at the interface between them. Thus, components A and B are integrated to create a new, larger, component (A,B).
Note that this does not conflict with the idea of incremental integration—it just means that A is a big component and B, the component added, is a small one.
Integration testing: carrying out integration tests.
Integration tests (After Leung and White) for procedural languages. This is easily generalized for OO(Object Oriented) languages by using the equivalent constructs for message passing. In the following, the word "call" is to be understood in the most general sense of a data flow and is not restricted to just formal subroutine calls and returns – for example, passage of data through global data structures and/or the use of pointers.
As to the difference between integration testing and system testing. System testing specifically goes after behaviors and bugs that are properties of the entire system as distinct from properties attributable to components (unless, of course, the component in question is the entire system). Examples of system testing issues: Resource loss bugs, throughput bugs, performance, security, recovery, Transaction synchronization bugs (often misnamed "timing bugs").
What's the difference between load and stress testing ?
One of the most common, but unfortunate misuse of terminology is treating “load testing” and “stress testing” as synonymous. The consequence of this ignorant semantic abuse is usually that the system is neither properly “load tested” nor subjected to a meaningful stress test.
Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, MIPS, interrupts, etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.
Load testing is subjecting a system to a statistically representative (usually) load. The two main reasons for using such loads are in support of software reliability testing and in performance testing. The term "load testing" by itself is too vague and imprecise to warrant use; for example, do you mean "representative load", "overload", or "high load"? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.
A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, "load testing" is merely testing at the highest transaction arrival rate in performance testing.
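A toy illustration of that third usage in Python, assuming a hypothetical `transaction()` standing in for the system under test; a real load test would drive the actual system at a representative arrival rate and observe resource usage and latency rather than a sleep:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Hypothetical transaction; a real load test would exercise the
    system under test, e.g. over the network."""
    time.sleep(0.001)
    return True

def run_load(num_transactions, concurrency):
    """Submit a fixed load at a given concurrency and measure throughput."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: transaction(),
                                range(num_transactions)))
    elapsed = time.perf_counter() - start
    throughput = num_transactions / elapsed  # transactions per second
    return sum(results), throughput

completed, tps = run_load(num_transactions=100, concurrency=10)
```

Raising `concurrency` step by step and watching where throughput stops improving is a crude form of finding the maximum sustainable load; a stress test, by contrast, would deliberately push past that point.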
What's the difference between QA and testing?
QA is more of a preventive activity: ensuring quality in the company, and therefore the product, rather than just testing the product for software bugs.
TESTING means "quality control"
QUALITY CONTROL measures the quality of a product
QUALITY ASSURANCE measures the quality of processes used to create a quality product.

Testing Interview Questions

  • 1. What is Software Testing?
  • 2. What is the Purpose of Testing?
  • 3. What types of testing do testers perform?
  • 4. What is the Outcome of Testing?
  • 5. What kind of testing have you done?
  • 6. What is the need for testing?
  • 7. What are the entry criteria for Functionality and Performance testing?
  • 8. What is test metrics?
  • 9. Why do you go for White box testing, when Black box testing is available?
  • 10. What are the entry criteria for Automation testing?
  • 11. When to start and Stop Testing?
  • 12. What is Quality?
  • 13. What is Baseline document, Can you say any two?
  • 14. What is verification?
  • 15. What is validation?
  • 16. What is quality assurance?
  • 17. What is quality control?
  • 18. What is SDLC and TDLC?
  • 19. What are the Qualities of a Tester?
  • 20. When to start and Stop Testing?
  • 21. What are the various levels of testing?
  • 22. What are the types of testing you know and you experienced?
  • 23. What exactly is Heuristic checklist approach for unit testing?
  • 24. After completing testing, what would you deliver to the client?
  • 25. What is a Test Bed?
  • 26. What is a Data Guidelines?
  • 27. Why do you go for Test Bed?
  • 28. What is Severity and Priority and who will decide what?
  • 29. Can Automation testing replace manual testing? If it so, how?
  • 30. What is a test case?
  • 31. What is a test condition?
  • 32. What is the test script?
  • 33. What is the test data?
  • 34. What is an Inconsistent bug?
  • 35. What is the difference between Re-testing and Regression testing?
  • 36. What are the different types of testing techniques?
  • 37. What are the different types of test case techniques?
  • 38. What are the risks involved in testing?
  • 39. Differentiate Test bed and Test Environment?
  • 40. What is the difference between defect, error, bug, failure, fault?
  • 41. What is the difference between quality and testing?
  • 42. What is the difference between White & Black Box Testing?
  • 43. What is the difference between Quality Assurance and Quality Control?
  • 44. What is the difference between Testing and debugging?
  • 45. What is the difference between bug and defect?
  • 46. What is the difference between verification and validation?
  • 47. What is the difference between functional spec. and Business requirement specification?
  • 48. What is the difference between unit testing and integration testing?
  • 49. What is the difference between Volume & Load?
  • 50. What is the difference between Volume & Stress?

Do Testers Need Programming Skills ?

Do Testers Need Programming Skills ? A good tester friend's view.
Many freshers (especially people with a Computer Science background) do not take up their first job as a QA engineer, and the reason they give is that many organizations give preference to people with development skills even when hiring for testing positions. This raises the question: do testers need to be coders? Some blogs report that organizations like Microsoft hire coders for test positions because they want to automate everything and eliminate manual testing. Automation is just one part of testing, but testers who understand programming and CS concepts have better analysis skills for testing. Testers with development skills can find bugs earlier in the development cycle and can also find the cause of a bug; this also helps them find other places in the code where similar errors may exist. I personally feel that people with programming skills make better testers. Does this mean all testers without programming knowledge should be fired? Of course not. It is always better to have creative people as testers than bad programmers, and an organization cannot expect all of its testers to have programming knowledge, with the industry facing a shortage of quality programmers. Microsoft seems to have found a way out of this by creating two different job profiles, SDET and STE. SDETs need good programming skills, as they are used for automation and debugging. But even this doesn't seem to have solved the problem completely.

SOA- Service Oriented Architecture

Software vendors have been creating development and infrastructure products for the latest IT architecture style, Service Oriented Architecture (SOA). Recognizing the immense value SOA can bring to IT, companies like BEA, IBM, and Microsoft have delivered products to help customers design and build SOA-based applications. These vendor-driven initiatives are beginning to sprout actual customer-developed applications that are built on the promise of SOA, such as better flexibility, agility, and reuse.
A challenge facing many organizations is how to quickly and effectively react to frequent changes in business requirements, whilst improving productivity and reducing costs. To achieve this, you need a flexible infrastructure that can meet the demands of a changing marketplace and seize emerging opportunities. To address this challenge, Service Oriented Architecture (SOA) promotes an architectural approach that replaces rigid proprietary systems with heterogeneous, "loosely-coupled" services. The Service Component Architecture (SCA), along with Service Data Objects (SDO), makes this architectural concept a reality and provides the programming model to build SOA solutions for agile businesses.

Some QTP Related Question Answers

  • Full form of QTP ?
  • Quick Test Professional
  • What's the QTP ?
  • QTP is a functional testing tool from Mercury Interactive.
  • Which scripting language used by QTP ?
  • QTP uses VBScript.
  • What's the basic concept of QTP ?
  • QTP is based on two concepts-
  • * Recording
  • * Playback
  • How many types of recording facility are available in QTP ?
  • QTP provides three types of recording methods-
  • * Context Recording (Normal)
  • * Analog Recording
  • * Low Level Recording
  • How many types of Parameters are available in QTP ?
  • QTP provides three types of Parameter-
  • * Method Argument
  • * Data Driven
  • * Dynamic
  • What's the QTP testing process ?
  • The QTP testing process consists of seven steps-
  • * Preparing to record
  • * Recording
  • * Enhancing your script
  • * Debugging
  • * Run
  • * Analyze
  • * Report Defects
  • What's the Active Screen ?
  • It provides snapshots of your application as it appeared when you performed a certain step during the recording session.
  • What's the Test Pane ?
  • Test Pane contains Tree View and Expert View tabs.
  • What's Data Table ?
  • It assists you in parameterizing the test.
  • What's the Test Tree ?
  • It provides a graphical representation of the operations you have performed on your application.
  • Which environments does QTP support ?
  • ERP/CRM, Java/J2EE, VB, .NET, Multimedia, XML, Web Objects, ActiveX controls, SAP, Oracle, Siebel, PeopleSoft, Web Services, Terminal Emulator, IE, NN, AOL.
  • How can you view the Test Tree ?
  • The Test Tree is displayed through Tree View tab.
  • What's the Expert View ?
  • Expert View displays the test script.
  • Which shortcut key is used for normal recording ?
  • F3
  • Which shortcut key is used to run the test script ?
  • F5
  • Which shortcut key is used to stop recording ?
  • F4
  • Which shortcut key is used for Analog Recording ?
  • Ctrl+Shift+F4
  • Which shortcut key is used for Low Level Recording ?
  • Ctrl+Shift+F3
  • Which shortcut key is used to switch between Tree View and Expert View ?
  • Ctrl+Tab
  • What's the Transaction ?
  • You can measure how long it takes to run a section of your test by defining transactions.
  • Where you can view the results of the checkpoint ?
  • You can view the results of the checkpoints in the Test Result Window.
  • What's the Standard Checkpoint ?
  • Standard Checkpoints check the property value of an object in your application or web page.
  • Which environments are supported by Standard Checkpoints ?
  • Standard Checkpoints are supported in all add-in environments.
  • What's the Image Checkpoint ?
  • Image Checkpoints check the value of an image in your application or web page.
  • Which environments are supported by Image Checkpoint ?
  • Image Checkpoints are supported only in the Web environment.
  • What's the Bitmap Checkpoint ?
  • Bitmap Checkpoint checks the bitmap images in your web page or application.
  • Which environments are supported by Bitmap Checkpoints ?
  • Bitmap Checkpoints are supported in all add-in environments.
  • What are Table Checkpoints ?
  • Table Checkpoints check the information within a table.
  • Which environments are supported by Table Checkpoint ?
  • Table Checkpoints are supported only in the ActiveX environment.
  • What's the Text Checkpoint ?
  • Text Checkpoints check that a text string is displayed in the appropriate place in your application or web page.
  • Which environments are supported by Text Checkpoints ?
  • Text Checkpoints are supported in all add-in environments.
  • Note:
  • * QTP records each step you perform and generates a test tree and test script.
  • * QTP records in normal recording mode.
  • * If you are creating a test on web object, you can record your test on one browser and run it on another browser.
  • * Analog Recording and Low Level Recording require more disk space than normal recording mode.

Capability Maturity Model (CMM) Maturity Levels

The five maturity levels described by the Capability Maturity Model can be characterized by the primary process changes made at each level, as follows:

1) Initial The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort and heroics.
2) Repeatable Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
3) Defined The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.
4) Managed Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.
5) Optimizing Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

CMM-Capability Maturity Model

Capability Maturity Model:The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's software development process. The model describes a five-level evolutionary path of increasingly organized and systematically more mature processes. CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD). SEI was founded in 1984 to address software engineering issues and, in a broad sense, to advance software engineering methodologies. More specifically, SEI was established to optimize the process of developing, acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes involved are equally applicable to the software industry as a whole, SEI advocates industry-wide adoption of the CMM.
The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the International Organization for Standardization (ISO). The ISO 9000 standards specify an effective quality system for manufacturing and service industries; ISO 9001 deals specifically with software development and maintenance. The main difference between the two systems lies in their respective purposes: ISO 9001 specifies a minimal acceptable quality level for software processes, while the CMM establishes a framework for continuous process improvement and is more explicit than the ISO standard in defining the means to be employed to that end.

What is CMM (SEI Capability Maturity Model)? The Capability Maturity Model for Software (CMM) is a framework that describes the key elements of an effective software process. There are CMMs for non-software processes as well, such as Business Process Management (BPM). The CMM describes an evolutionary improvement path from an ad hoc, immature process to a mature, disciplined process. The CMM covers practices for planning, engineering, and managing software development and maintenance. When followed, these key practices improve the ability of organizations to meet goals for cost, schedule, functionality, and product quality. The CMM establishes a yardstick against which it is possible to judge, in a repeatable way, the maturity of an organization's software process and compare it to the state of the practice of the industry. The CMM can also be used by an organization to plan improvements to its software process. It also reflects the needs of people performing software process improvement, software process assessments, or software capability evaluations; it is documented; and it is publicly available.

Unit & Integration Testing

Unit Testing

In computer programming, a unit test is a method of testing the correctness of a particular module of source code.
The idea is to write test cases for every non-trivial function or method in the module so that each test case is separate from the others if possible. This type of testing is mostly done by the developers.
Benefits
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. It provides a written contract that the piece must satisfy. This isolated testing provides four main benefits:
Encourages change
Unit testing allows the programmer to refactor code at a later date, and make sure the module still works correctly (regression testing). This provides the benefit of encouraging programmers to make changes to the code since it is easy for the programmer to check if the piece is still working properly.
Simplifies Integration
Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing style approach. Testing the parts of a program first and then testing the sum of its parts makes integration testing easier.
Documents the code
Unit testing provides a sort of "living document" for the class being tested. Clients looking to learn how to use the class can look at the unit tests to determine how to use the class to fit their needs.
Separation of Interface from Implementation
Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example of this is classes that depend on a database; in order to test the class, the tester finds herself writing code that interacts with the database. This is a mistake, because a unit test should never go outside of its own class boundary. As a result, the software developer abstracts an interface around the database connection, and then implements that interface with their own Mock Object. This results in loosely coupled code, thus minimizing dependencies in the system.
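A minimal sketch of that abstraction in Python, with all class names hypothetical: the production class depends only on an interface, and the test supplies an in-memory mock object, so the unit test never crosses the class boundary into a real database:

```python
class UserStore:
    """Interface the production code depends on; the real implementation
    would talk to a database."""
    def find_name(self, user_id):
        raise NotImplementedError

class MockUserStore(UserStore):
    """Mock object implementing the interface in memory for tests."""
    def __init__(self, data):
        self.data = data

    def find_name(self, user_id):
        return self.data[user_id]

class Greeter:
    """Unit under test. It depends only on the UserStore interface,
    not on a concrete database, so it stays loosely coupled."""
    def __init__(self, store):
        self.store = store

    def greet(self, user_id):
        return "Hello, " + self.store.find_name(user_id)

# The unit test exercises Greeter entirely against the mock:
greeting = Greeter(MockUserStore({1: "Ada"})).greet(1)
```

Because `Greeter` never constructs its own store, the same class runs unchanged in production against a database-backed `UserStore` implementation.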
Limitations
It is important to realize that unit-testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors, performance problems and any other system-wide issues. In addition, it may not be trivial to anticipate all special cases of input the program unit under study may receive in reality. Unit testing is only effective if it is used in conjunction with other software testing activities.
Integration Testing
Integration testing is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing. It takes as its input modules that have been checked out by unit testing, groups them into larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system, ready for system testing.
Purpose
The purpose of Integration testing is to verify functional, performance and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using Black box testing, success and error cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, individual subsystems are exercised through their input interface. All test cases are constructed to test that all components within assemblages interact correctly, for example, across procedure calls or process activations.
The overall idea is a "building block" approach, in which verified assemblages are added to a verified base, which is then used to support the integration testing of further assemblages.
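The building-block approach can be sketched with a toy example: two units that have each passed unit testing are aggregated, and the aggregate is exercised through its external interface only. All function names and data here are hypothetical:

```python
# Unit 1 (assumed already unit-tested): parse one CSV line into fields.
def parse_csv_line(line):
    return [field.strip() for field in line.split(",")]

# Unit 2 (assumed already unit-tested): sum the quantity column.
def total_quantity(rows):
    return sum(int(row[1]) for row in rows)

# The aggregate under integration test: combines both units.
def report_total(lines):
    return total_quantity(parse_csv_line(line) for line in lines)

# Black-box style integration test: feed inputs across the aggregate's
# interface and check the output, without poking at the units directly.
assert report_total(["apples, 3", "pears, 4"]) == 7
```

If this aggregate is verified, it becomes part of the "verified base" onto which further assemblages (say, a file reader feeding `report_total`) are integrated and tested in turn.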

White Box and Black Box Testing

White Box Testing
Testing of a function with knowledge of the internal structure of the program. Also known as glass box, structural, clear box, and open box testing. It is a software testing technique in which explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black box testing, white box testing uses specific knowledge of the programming code to examine outputs. The test is accurate only if the tester knows what the program is supposed to do; he or she can then see whether the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible code must also be readable.
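A minimal white-box sketch: because the tester can see the branch structure of the code, test data is chosen so that every branch is exercised. The function and values below are illustrative only:

```python
# Function under test; its branches are visible to the tester.
def shipping_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")   # error path
    if weight_kg < 5:
        return 4.0                                    # branch 1: flat rate
    return 4.0 + (weight_kg - 5) * 0.5                # branch 2: surcharge

# White-box test selection: one input per visible branch.
assert shipping_cost(1) == 4.0      # exercises branch 1
assert shipping_cost(9) == 6.0      # exercises branch 2
try:
    shipping_cost(0)
    raise AssertionError("expected ValueError")
except ValueError:
    pass                            # exercises the error path
```

Note how the inputs 1, 9, and 0 were picked by reading the code, not the specification; a purely black-box tester would have no way of knowing that 5 kg is the boundary between the two pricing branches.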
Black Box Testing
Testing of a function without knowledge of the internal structure of the program. Black-box and white-box are test design methods. Black-box test design treats the system as a "black box", so it does not explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box, and clear-box.
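By contrast, a black-box sketch derives its test data from the stated requirement alone. Assume a hypothetical spec that says "`is_valid_pin` accepts exactly four digits"; the tests below come from that sentence, not from reading the implementation:

```python
# Implementation is treated as opaque; only the spec is known.
def is_valid_pin(pin):
    return len(pin) == 4 and pin.isdigit()

# Equivalence classes and boundary values taken from the spec alone:
assert is_valid_pin("1234") is True     # valid class
assert is_valid_pin("123") is False     # too short (boundary)
assert is_valid_pin("12345") is False   # too long (boundary)
assert is_valid_pin("12a4") is False    # non-digit class
```

The same tests would apply unchanged to any other implementation of the spec, which is the defining property of black-box design.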

While black-box and white-box are terms still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, though it is still discouraged. In practice, it hasn't proven useful to rely on a single test design method; one has to use a mixture of methods so that testing isn't hindered by the limitations of any particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether.

It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design methods. Unit testing is usually associated with structural test design, but this is because testers usually don't have well-defined requirements at the unit level to validate.

Interview Questions

Interview Questions Answered by Testing Expert Dr. K.V.V.K. Prasad
1. Testing Scenarios
Q: How do you know that all the scenarios for testing are covered?
A: By using the Requirement Traceability Matrix (RTM) we can ensure that we have covered all the functionality in our test coverage. The RTM is a document that traces user requirements from analysis through implementation. It can be used as a completeness check to verify that all requirements are present and that there are no unnecessary or extra features, and as a maintenance guide for new personnel. A simple format is an Excel sheet that maps each functionality to its test case ID.
2. Completing Testing Under Time Constraints
Q: How do you complete the testing when you have a time constraint?
A: If I am doing regression testing and do not have sufficient time, I have to decide which sort of regression testing to run: (1) unit regression testing, (2) regional regression testing, or (3) full regression testing.
3. Given a Yahoo application, how many test cases can you write?
A: First we need the requirements of the Yahoo application. Test cases are written against given requirements, so for any working web application or new application, requirements are needed to prepare the test cases. The number of test cases depends on the requirements of the application.
Note to learners: a test engineer must have knowledge of the SDLC. I suggest learners take any one existing application and start practicing by writing requirements for it.
4. Say we have a GUI map and scripts, and five new pages are added to the application. How do we handle that?
A: By integration testing.
5. A GUI contains two fields: Field 1 accepts the value of x, and Field 2 displays the result of the formula a + b/c - d, where a = 0.4*x, b = 1.5*a, c = x, d = 2.5*b. How many system test cases would you write?
A: Since Field 1 accepts a single value of x and Field 2 simply displays the computed result, only one test case is needed to check the formula (though x = 0 must be excluded, since c = x appears as a divisor).
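The arithmetic in question 5 can be checked with a short sketch. Substituting the definitions, a = 0.4x, b = 1.5a = 0.6x, c = x, and d = 2.5b = 1.5x, the formula collapses to 0.4x + 0.6x/x - 1.5x = 0.6 - 1.1x for any x ≠ 0, which is why a single positive test case largely suffices. The function name `field2` is hypothetical:

```python
# Field 2's formula as stated in the question.
def field2(x):
    a = 0.4 * x
    b = 1.5 * a
    c = x
    d = 2.5 * b
    return a + b / c - d   # undefined when x == 0, since c = x

# Algebraic simplification: a + b/c - d == 0.6 - 1.1*x for x != 0.
assert abs(field2(2) - (0.6 - 1.1 * 2)) < 1e-9
```

The x = 0 case still deserves its own negative test at the GUI level, since the formula divides by c = x.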