
QTP 11: Complete List of New Features

Here is the complete list of new features available in QTP 11:

XPath and CSS based object identification

Identify objects not only through normal object identification but also with XPath and CSS identifier properties. A much-awaited, killer feature.

Good Looking and Enhanced Results Viewer

The new improved results viewer provides an executive summary page with summary data, pie charts and statistics for both the current and previous runs and a quick link to the previous run results.

Easy Regular Expressions

You can now create regular expressions with the help of syntax hints. A Regular Expression Evaluator is available to test the regular expressions you have created. A good one.

Identify Objects in Relation to Neighboring Objects

With this feature, QTP 11 moves beyond the unreliability of ordinal identifiers. Objects identified with ordinal identifiers are good only as long as they maintain their relative positions with respect to each other in a new build of the application. If these positions change or get interchanged, ordinal identifiers may go for a toss.
HP has therefore introduced the Visual Relation Identifier.

A visual relation identifier is a set of definitions that enable you to identify an object in the application according to its neighboring objects. You can select neighboring objects that will maintain the same relative location to your object even if the user interface design changes. You define visual relations in the Visual Relation Identifier dialog box, which is accessible from the local or shared object repository and from the Object Properties dialog box.

Load Function Libraries at Run Time

Using the new LoadFunctionLibrary statement, you can now load a function library when a step runs instead of at the beginning of a run session.
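A minimal sketch of the idea (the library path and the Login function are hypothetical examples, and the statement runs only inside a QTP session):

```vbscript
' Load a shared function library at the moment this step runs,
' instead of associating it with the test up front.
LoadFunctionLibrary "C:\Automation\Libraries\CommonUtils.qfl"  ' hypothetical path

' Functions defined in CommonUtils.qfl are available from this point on
Login "admin", "secret"  ' hypothetical function from the library
```

This is handy when a library is needed only on some branches of a test, so the cost of loading it is paid only when that branch actually executes.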

Test Your GUI and UI-Less Application Functionality in One Test

Since QTP is integrated with Service Test, you can now test your GUI and non-GUI based apps in a single run.

Record Support for Firefox

Recording on Firefox is now available.

Much Awaited Log Tracking is available now

QTP 11 is capable of receiving Java or .NET log framework messages from your application, which can then be embedded in the run results.

Embed/Run JavaScript in Web Pages

You can use the new EmbedScript/EmbedScriptFromFile and RunScript/RunScriptFromFile functions to embed JavaScript in all loaded browser pages. You can use these scripts to perform operations on, or retrieve data from, the browser pages in your application.
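For example, RunScript can evaluate a JavaScript expression against a loaded page and return its result (the "Browser"/"Page" repository names are placeholders, and the statement runs only inside a QTP session):

```vbscript
' Retrieve the page title by running JavaScript in the loaded page
pageTitle = Browser("Browser").Page("Page").RunScript("document.title;")
MsgBox pageTitle
```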

Manage Test Data

Improved test data management when integrated with Quality Center

Web 2.0 Toolkit Applications Support

QTP 11 now supports Web 2.0 Toolkit Applications out-of-the-box similar to any other add-ins.

Automatically Parameterize Steps

You can instruct QTP 11 to automatically parameterize test steps at the end of a recording session.

Silverlight Add-in

To test objects in Silverlight 2 and Silverlight 3 applications. [After installation, Silverlight Add-in is displayed in the Add-in Manager as a child add-in under the WPF Add-in]

Extend WPF and Silverlight Support

You can use the WPF and Silverlight Add-in Extensibility SDK to develop support for testing third-party and custom WPF and Silverlight controls that are not supported out-of-the-box.

Use Extensibility Accelerator for Web Add-in Extensibility Development

Avoid Downtime Due to License Server Failures

Useful for concurrent license users. With redundant license servers you can create a failover setup, so that if your main license server fails, your remaining servers maintain availability of your licenses without causing any downtime or loss of licenses for users.

QTP Descriptive Programming

In this post we will discuss:

Introduction to Descriptive Programming.
How to write Descriptive Programming?
When and where to use Descriptive Programming?
Some points to note with Descriptive Programming.

Introduction to Descriptive Programming:
Descriptive programming is used when we want to perform an operation on an object that is not present in the object repository. There can be various valid reasons to do so. We will discuss them later in this article.

How to write Descriptive Programming? 

There are two ways in which descriptive programming can be used

1. By giving the description in form of the string arguments.
2. By creating properties collection object for the description.

1. By giving the description in form of the string arguments.

This is a more commonly used method for Descriptive Programming.
You can describe an object directly in a statement by specifying property:=value pairs describing the object instead of specifying an object's
name. The general syntax is:

TestObject("PropertyName1:=PropertyValue1", "..." , "PropertyNameX:=PropertyValueX")

TestObject—the test object class could be WebEdit, WebRadioGroup etc….

PropertyName:=PropertyValue—the test object property and its value. Each property:=value pair should be separated by commas and quotation
marks. Note that you can enter a variable name as the property value if you want to find an object based on property values you retrieve during a run session.

Consider the HTML Code given below:

<input type="textbox" name="txt_Name">
<input type="radio" name="txt_Name">

Now to refer to the textbox the statement would be as given below

Browser("Browser").Page("Page").WebEdit("Name:=txt_Name","html tag:=INPUT").set "Test"

And to refer to the radio button the statement would be as given below (note that a WebRadioGroup uses the Select method rather than Set):

Browser("Browser").Page("Page").WebRadioGroup("Name:=txt_Name","html tag:=INPUT").Select "Test"

If we refer to them as WebElements, then we will have to distinguish between the two using the index property:

Browser("Browser").Page("Page").WebElement("Name:=txt_Name","html tag:=INPUT","Index:=0").Click ' Refers to the textbox
Browser("Browser").Page("Page").WebElement("Name:=txt_Name","html tag:=INPUT","Index:=1").Click ' Refers to the radio button

To determine which property and value pairs to use, you can use the Object Spy:
1. Go to Tools -> Object Spy.
2. Select the "Test Object Properties" radio button.
3. Spy on the desired object.
4. In the Properties list, find and write down the properties and values that can be used to identify the object.

2. By creating properties collection object for the description.

A properties collection does the same thing as string arguments. The only difference is that it "collects" all the properties of a particular object in an instance of that object. That object can then be referenced easily through the instance, instead of writing the "string arguments" again and again. It is my observation that people find the "string arguments" [1] method much easier and more intuitive to work with.

To use this method you first need to create an empty description:
Dim obj_Desc 'Not necessary to declare
Set obj_Desc = Description.Create

Now we have a blank description in "obj_Desc". Each description has 3 properties "Name", "Value" and "Regular Expression".

obj_Desc("html tag").value= "INPUT"

When you use a property name for the first time, the property is added to the collection; when you use it again, the property is modified. By default, each property that is defined is treated as a regular expression. Suppose we have the following description:

obj_Desc("html tag").value= "INPUT"
obj_Desc("name").value= "txt.*"

This would mean an object with html tag INPUT and a name starting with "txt", because the ".*" is treated as a regular expression. So, if you want the "name" property not to be recognized as a regular expression, you need to set its "regularexpression" property to False:

obj_Desc("html tag").value= "INPUT"
obj_Desc("name").value= "txt.*"
obj_Desc("name").regularexpression= False

This is how we create a description. Now below is the way we can use it

Browser("Browser").Page("Page").WebEdit(obj_Desc).set "Test"

When we say .WebEdit(obj_Desc), we add one more property to our description that was not defined earlier: that the object is a text box (because QTP's WebEdit objects map to text boxes in a web page).

If we know that we have more than one element with the same description on the page, then we must define the "index" property for that description.

Consider the HTML code given below

<input type="textbox" name="txt_Name">
<input type="textbox" name="txt_Name">

Now the HTML code has two objects with the same description. So to distinguish between these two objects we will use the "index" property. Here is the description for both the objects:

For 1st textbox:
obj_Desc("html tag").value= "INPUT"
obj_Desc("name").value= "txt_Name"
obj_Desc("index").value= "0"

For 2nd textbox:
obj_Desc("html tag").value= "INPUT"
obj_Desc("name").value= "txt_Name"
obj_Desc("index").value= "1"

Consider the HTML Code given below:

<input type="textbox" name="txt_Name">
<input type="radio" name="txt_Name">

We can use the same description for both the objects and still distinguish between them:
obj_Desc("html tag").value= "INPUT"
obj_Desc("name").value= "txt_Name"

When I want to refer to the textbox I will use the description object with a WebEdit object, and to refer to the radio button I will use the description object with a WebRadioGroup object.

Browser("Browser").Page("Page").WebEdit(obj_Desc).Set "Test" 'Refers to the text box
Browser("Browser").Page("Page").WebRadioGroup(obj_Desc).Select "Test" 'Refers to the radio button

But if we use the WebElement object with this description, then we must define the "index" property, because for a WebElement the current description would return two objects.
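Sketched out, the WebElement case adds "index" to the same description object so each statement resolves to a single object; Click is used here because a generic WebElement does not support Set (runs only inside a QTP session):

```vbscript
obj_Desc("index").value = "0"
Browser("Browser").Page("Page").WebElement(obj_Desc).Click  ' first match: the textbox

obj_Desc("index").value = "1"
Browser("Browser").Page("Page").WebElement(obj_Desc).Click  ' second match: the radio button
```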

Getting Child Object:

We can use a description object to get all the objects on the page that match that specific description. Suppose we have to check all the checkboxes present on a web page. We will first create an object description for a checkbox and then get all the checkboxes from the page:

Dim obj_ChkDesc

Set obj_ChkDesc=Description.Create
obj_ChkDesc("html tag").value = "INPUT"
obj_ChkDesc("type").value = "checkbox"

Dim allCheckboxes, singleCheckBox

Set allCheckboxes = Browser("Browser").Page("Page").ChildObjects(obj_ChkDesc)

For Each singleCheckBox In allCheckboxes
    singleCheckBox.Set "ON"
Next

The above code will check all the check boxes present on the page. To get all the child objects we need to specify an object description.

If you wish to use string arguments [1], the same thing can be accomplished with simple scripting.

Code for that would be:

i = 0
Do While Browser("Browser").Page("Page").WebCheckBox("html tag:=INPUT", "type:=checkbox", "index:=" & i).Exist
    Browser("Browser").Page("Page").WebCheckBox("html tag:=INPUT", "type:=checkbox", "index:=" & i).Set "ON"
    i = i + 1
Loop

Possible Operations on Description Objects

Consider the below code for all the solutions
Dim obj_ChkDesc

Set obj_ChkDesc=Description.Create
obj_ChkDesc("html tag").value = "INPUT"
obj_ChkDesc("type").value = "checkbox"

Q: How do I get the number of properties defined in a collection?
A: obj_ChkDesc.Count 'Will return 2 in our case

Q: How do I remove a property from the collection?
A: obj_ChkDesc.Remove "html tag" 'Deletes the "html tag" property from the collection

Q: How do I check if property exists or not in the collection?
A: It is not possible directly, because whenever we try to access a property that is not defined, it is automatically added to the collection. The only way is to check its value, i.e. use an If statement such as: If obj_ChkDesc("html tag").Value = Empty Then.
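The value-based check described above can be sketched as follows (keep in mind that even reading a property this way adds it to the collection if it was not already defined; runs only inside a QTP session):

```vbscript
If obj_ChkDesc("html tag").Value = Empty Then
    MsgBox "The ""html tag"" property has not been given a value"
End If
```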

Q: How to browse through all the properties of a properties collection?
A: There are two ways:
'1. Using For Each
For Each desc In obj_ChkDesc
    RE = desc.regularexpression
Next
'2. Using the index
For i = 0 To obj_ChkDesc.Count - 1
    Name = obj_ChkDesc(i).Name
    Value = obj_ChkDesc(i).Value
    RE = obj_ChkDesc(i).regularexpression
Next

Hierarchy of test description:

When using programmatic descriptions from a specific point within a test object hierarchy, you must continue to use programmatic descriptions
from that point onward within the same statement. If you specify a test object by its object repository name after other objects in the hierarchy have
been described using programmatic descriptions, QuickTest cannot identify the object.

For example, you can use Browser(Desc1).Page(Desc1).Link(desc3), since it uses programmatic descriptions throughout the entire test object hierarchy.
You can also use Browser("Index").Page(Desc1).Link(desc3), since it uses programmatic descriptions from a certain point in the description (starting
from the Page object description).

However, you cannot use Browser(Desc1).Page(Desc1).Link("Example1"), since it uses programmatic descriptions for the Browser and Page objects but
then attempts to use an object repository name for the Link test object (QuickTest tries to locate the Link object based on its name, but cannot
locate it in the repository because the parent objects were specified using programmatic descriptions).

When and Where to use Descriptive programming?

Below are some of the situations when Descriptive Programming can be considered useful:

1. One place where DP can be of significant importance is when you are creating functions in an external file. You can use these functions in various actions directly, eliminating the need to add object(s) to the object repository for each action [if you are using a per-action object repository].

2. The objects in the application are dynamic in nature and need special handling to be identified. The best example would be clicking a link which changes according to the user of the application, e.g. "Logout <>".

3. When the object repository is getting huge due to the number of objects being added. If the size of the object repository increases too much, it decreases QTP's performance while recognizing an object. [For QTP 8.2 and below, Mercury recommends that the OR size should not be greater than 1.5 MB.]

4. When you don't want to use the object repository at all. Well, the first question would be: why not the object repository? Consider the following scenarios, which help explain why not:
Scenario 1: Suppose we have a web application that has not been developed yet. QTP needs the application to be up for recording the script and adding the objects to the repository, which would mean waiting for the application to be deployed before we can start making QTP scripts. But if we know the descriptions of the objects that will be created, we can still start off with writing the test scripts.
Scenario 2: Suppose an application has 3 navigation buttons on each and every page. Let the buttons be "Cancel", "Back" and "Next". Recording actions on these buttons would add 3 objects per page to the repository. For a 10-page flow this would mean 30 objects, which could have been represented by just 3 objects. So instead of adding these 30 objects to the repository, we can just write 3 descriptions for the objects and use them on any page.

5. Modification to a test case is needed but the object repository for it is read-only or in shared mode, i.e. changes may affect other scripts as well.
6. When you want to take action on similar types of objects, i.e. suppose we have 20 textboxes on the page and their names are of the form txt_1, txt_2, txt_3 and so on. Adding all 20 to the object repository would not be a good approach.
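Scenario 2 above can be sketched with three reusable string-argument descriptions (the object and property values are hypothetical); the same three statements work on any of the 10 pages (runs only inside a QTP session):

```vbscript
' One description per navigation button, reusable on every page
Browser("App").Page("Page").WebButton("name:=Next").Click
Browser("App").Page("Page").WebButton("name:=Back").Click
Browser("App").Page("Page").WebButton("name:=Cancel").Click
```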

References:
1) QuickTest Professional Documentation
2) Tarun Lalwani's DP Document
3) Ankur's LearnQTP blog

Are you sure you want to use QTP?

More often than not, I have seen people going for test automation just because they want to chase this exciting buzzword. It is generally realized only at the very end that automation has not given the intended returns. I would suggest that before even thinking of automation, a thorough study of your application/system/module should be performed.

The following types of test cases are preferred (but not limited to) choices for automation:

  • Tests that need to run for every build.
  • Tests that use multiple data values for the same action.
  • Identical tests that need to be executed using different browsers.

The following types are not considered to be good candidates for automation:

  • Test cases that will only be executed once.
  • Test cases used for ad-hoc/random testing.
  • Test cases that are infrequently selected for execution.
  • Test cases that require manual intervention, i.e. a task not possible to automate.
  • The most important one: those ruled out by intuition and knowledge of the application, e.g. when you find that you cannot escape manual intervention.

Above all, the effort required to develop and maintain the automated scripts should be weighed against the resources (number of hours, budget) the task would take with manual execution. The amount of change the application undergoes from one build to another should also be taken into account when choosing test automation.

If an application undergoes drastic changes, most of the time will be exhausted in maintaining the scripts; such applications are then considered a bad choice for automation.
Considering the above points, Return On Investment (ROI) should be calculated, and it should be the single most important criterion when going for test automation.

What is Mobile Device Testing?

Mobile Device Testing is the process of assuring the quality of mobile devices, like mobile phones, PDAs, etc. The testing is conducted on both hardware and software. Viewed in terms of procedures, the testing comprises R&D Testing, Factory Testing and Certification Testing.

R&D Testing

R&D testing is the main test phase for a mobile device, and it happens during the development phase of the device. It contains hardware testing, software testing, and mechanical testing.

Factory Testing

Factory Testing is a kind of sanity check on mobile devices. It is conducted automatically to verify that there are no defects introduced by manufacturing or assembling.

Mobile testing includes:

  • Mobile application testing
  • Hardware testing
  • Battery (charging) testing
  • Signal receiving
  • Network testing
  • Protocol testing
  • Mobile games testing
  • Mobile software compatibility testing

Certification Testing

Certification Testing is the check before a mobile device goes to market. Many institutes or governments require mobile devices to conform with their stated specifications and protocols, to make sure the mobile device will not harm users' health and is compatible with devices from other manufacturers. Once the mobile device passes all checks, a certification is issued for it.

Source: Wikipedia

Are You a Fresher or Freelancer? Try Beta Testing!


What is beta testing?
A beta test is testing performed on a product prior to its commercial release in the market. Beta testing can be considered the last stage of testing, and normally involves sending the product to beta test sites outside the company for real-world exposure. Typically, a free trial version of the product is made available for download over the Internet. Beta versions of software are distributed to a wide range of users to get the software tested in different combinations of hardware and associated software.
What is in it for a fresher or freelancer?
So what should we do as a fresher or a freelance software tester? Beta testing provides a platform for us to test some real-world software. There are many companies which release beta versions of their software before they ship it to the market for commercial sale.
By signing up as a beta tester and downloading a test version of the software, you get a first-hand preview of the software. It is an exciting experience to get the software free of cost before the world gets to see the product. If you can find a few bugs or write some reviews (positive or negative) about the beta version, most companies give you a free licensed version of the product when they release it in the market.
Benefits of beta testing to a fresher
Also, as a fresher, you can show this experience of testing a beta product. You can list the defects or bugs you have submitted to the company. There are a few companies I know of which pay for finding bugs in their products. Your resume will shine if you can prove that you earned a few dollars by finding bugs for this or that company.

Sign up with such companies and get paid for finding bugs. I am sure there are lots of other companies which pay for pointing out their defects.
Beta products from Microsoft

Microsoft also provides a wide range of beta software for testing. Sign up with them and start testing a free beta product today.
Here are some tips for running a beta test:
  • Take a backup of your PC before you install beta software. You never know when the software may crash.
  • Read the terms and conditions carefully. Find out which bugs have already been reported; you don't want to waste time finding bugs someone has already reported.
  • Get the specification or requirements document first. How do you test a product if you don't know what it is supposed to do?
  • Create a network of beta testers. This will help you stay informed of the latest happenings around the world.

***Happy Testing***


Testing Glossary - 100 Most Popular Software Testing Terms

Are you confused by testing terms and acronyms? Here are the 100 most popular testing terms, compiled from the International Software Testing Qualifications Board's (ISTQB) website. A complete and exhaustive list of terms is available for download at that site.

100 Most Popular Software Testing Terms

Acceptance testing

Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

Ad hoc testing

Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

Agile testing

Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. 

Alpha testing

Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

Back-to-back testing

Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.

Beta testing

Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

Big-bang testing

A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.

Black-box testing

Testing, either functional or non-functional, without reference to the internal structure of the component or system.

Black-box test design technique

Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

Blocked test case

A test case that cannot be executed because the preconditions for its execution are not fulfilled.

Bottom-up testing

An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested.

Boundary value

An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary value analysis

A black box test design technique in which test cases are designed based on boundary values.

Branch testing

A white box test design technique in which test cases are designed to execute branches.

Business process-based testing

An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

Capture/playback tool

A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.


Certification

The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.

Code coverage

An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

Compliance testing

The process of testing to determine the compliance of the component or system.

Component integration testing

Testing performed to expose defects in the interfaces and interaction between integrated components.

Condition testing

A white box test design technique in which test cases are designed to execute condition outcomes.

Conversion testing

Testing of software used to convert data from existing systems for use in replacement systems.

Data driven testing

A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools.

Database integrity testing

Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.


Defect

A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defect masking

An occurrence in which one defect prevents the detection of another.

Defect report

A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.

Development testing

Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers.


Driver

A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.

Equivalence partitioning

A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.


Error

A human action that produces an incorrect result.

Error guessing

A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

Exhaustive testing

A test approach in which the test suite comprises all combinations of input values and preconditions.

Exploratory testing

An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.


Failure

Deviation of the component or system from its expected delivery, service or result.

Functional test design technique

Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure.

Functional testing

Testing based on an analysis of the specification of the functionality of a component or system.

Functionality testing

The process of testing to determine the functionality of a software product.

Heuristic evaluation

A static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called "heuristics").

High level test case

A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available.


ISTQB

International Software Testing Qualifications Board.

Incident management tool

A tool that facilitates the recording and status tracking of incidents. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities.

Installability testing

The process of testing the installability of a software product.

Integration testing

Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

Isolation testing

Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.

Keyword driven testing

A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.

Load testing

A test type concerned with measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or numbers of transactions to determine what load can be handled by the component or system.

Low level test case

A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators.

Maintenance testing

Testing the changes to an operational system or the impact of a changed environment to an operational system.

Monkey testing

Testing by means of a random selection from a large range of inputs and by randomly pushing buttons, ignorant of how the product is being used.

Negative testing

Tests aimed at showing that a component or system does not work. Negative testing is related to the testers' attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions.

Non-functional testing

Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

Operational testing

Testing conducted to evaluate a component or system in its operational environment.

Pair testing

Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.

Peer review

A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

Performance testing

The process of testing to determine the performance of a software product.

Portability testing

The process of testing to determine the portability of a software product.

Post-execution comparison

Comparison of actual and expected results, performed after the software has finished running.

Priority

The level of (business) importance assigned to an item, e.g. defect.

Quality assurance

Part of quality management focused on providing confidence that quality requirements will be fulfilled.

Random testing

A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.
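As a rough sketch of the idea, the Python fragment below draws pseudo-random inputs weighted toward a crude operational profile and checks an invariant of the output. The 80/20 profile and the `normalize` function are invented for illustration.

```python
import random

# Random testing sketch: inputs are drawn pseudo-randomly to match an
# assumed operational profile (80% short strings, 20% long ones), then
# fed to the function under test, whose output must satisfy an invariant.

def normalize(s):
    return s.strip().lower()

def random_input(rng):
    length = rng.choice([5] * 8 + [500] * 2)  # 80/20 operational profile
    return "".join(rng.choice(" aAbB") for _ in range(length))

rng = random.Random(42)   # fixed seed keeps the run reproducible
failures = []
for _ in range(200):
    s = random_input(rng)
    out = normalize(s)
    if out != out.strip().lower():  # invariant: output is stripped and lowercased
        failures.append(s)
```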

Recoverability testing

The process of testing to determine the recoverability of a software product.

Regression testing

Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

Requirements-based testing

An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

Re-testing

Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Risk-based testing

An approach to testing to reduce the level of product risks and inform stakeholders on their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.

Severity

The degree of impact that a defect has on the development or operation of a component or system.

Site acceptance testing

Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.

Smoke test

A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.

Statistical testing

A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases.

Stress testing

Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

Stub

A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
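A stub might look like the Python sketch below. The `PaymentGatewayStub` and `checkout` names are hypothetical; the point is that the caller can be tested without the real dependency it calls.

```python
# Stub sketch: a special-purpose stand-in for a called component, so the
# calling component can be exercised without a real (e.g. networked) gateway.

class PaymentGatewayStub:
    """Replaces the real gateway: always approves and records calls."""
    def __init__(self):
        self.charges = []
    def charge(self, amount):
        self.charges.append(amount)
        return "approved"        # canned answer instead of a real network call

def checkout(cart_total, gateway):
    # Component under test: depends on a gateway it calls.
    if cart_total <= 0:
        return "empty cart"
    return gateway.charge(cart_total)

stub = PaymentGatewayStub()
result = checkout(25.0, stub)
```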

Syntax testing

A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.

System integration testing

Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

System testing

The process of testing an integrated system to verify that it meets specified requirements.

Test automation

The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

Test case specification

A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.

Test design specification

A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.

Test environment

An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Test harness

A test environment comprised of stubs and drivers needed to execute a test.
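A minimal harness can be sketched as a driver plus a stub around the component under test. All names below (`ClockStub`, `is_expired`, `driver`) are invented for illustration.

```python
# Test harness sketch: a stub replaces a real dependency (the clock), and a
# driver executes the test cases and collects pass/fail results.

class ClockStub:
    def now(self):
        return 1000              # fixed time instead of the real system clock

def is_expired(token_issued_at, clock, ttl=60):
    # Component under test: depends on a clock it calls.
    return clock.now() - token_issued_at > ttl

def driver():
    """Driver: invokes the component under test and records each outcome."""
    clock = ClockStub()
    results = []
    results.append(("fresh token", is_expired(990, clock) is False))
    results.append(("stale token", is_expired(100, clock) is True))
    return results

report = driver()
```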

Test log

A chronological record of relevant details about the execution of tests.

Test management tool

A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, and the logging of results, progress tracking, incident management and test reporting.

Test oracle

A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code.
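As an illustration, the sketch below uses a trusted reference implementation (Python's built-in `sorted`, standing in for "the existing system") as the oracle for a hand-rolled sort. The sort itself is invented for the example.

```python
# Test oracle sketch: the built-in sorted() plays the role of the oracle,
# supplying expected results to compare against the implementation under test.

def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

cases = [[3, 1, 2], [], [5, 5, 1], [9]]
# Any case where the implementation disagrees with the oracle is a failure:
mismatches = [c for c in cases if insertion_sort(c) != sorted(c)]
```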

Test plan

A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

Test strategy

A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).

Test suite

A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

Testware

Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.

Thread testing

A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

Top-down testing

An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Traceability

The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

Usability testing

Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

Use case

A sequence of transactions in a dialogue between a user and the system with a tangible result.

Use case testing

A black box test design technique in which test cases are designed to execute user scenarios.


Unit test framework

A tool that provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities.
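Python's standard `unittest` module is one such framework; the sketch below tests a small (invented) function in isolation and runs the tests programmatically so the result can be inspected.

```python
import unittest

# Unit test framework sketch: the component is tested in isolation using
# Python's built-in unittest framework.

def parse_version(s):
    # Function under test, invented for illustration.
    return tuple(int(p) for p in s.split("."))

class ParseVersionTest(unittest.TestCase):
    def test_three_parts(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))
    def test_single_part(self):
        self.assertEqual(parse_version("7"), (7,))

# Run programmatically rather than via the CLI so the result is inspectable:
runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTest))
```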

Validation

Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

Verification

Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

Vertical traceability

The tracing of requirements through the layers of development documentation to components.

Volume testing

Testing where the system is subjected to large volumes of data.

Walkthrough

A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.

White-box testing

Testing based on an analysis of the internal structure of the component or system.