Does Quality Assurance Remove Need for Quality Control?


"If QA (Quality Assurance) is done then why do we need to perform QC (Quality Control)?", this thought may come to our mind some times and looks a valid point too.  This means if we have followed all the pre-defined processes, policies and standards correctly and completely then why do we need to perform a round of QC?

In my opinion QC is required after QA is done. While in 'QA' we define the processes, policies, strategies, establish standards, developing checklists etc. to be used and followed through out the life cycle of a project. And while in QC we follow all those defined processes, standards and policies to make sure that the project has been developed with high quality and at least meets customer's expectations.

QA does not assure quality, rather it creates and ensures the processes are being followed to assure quality. QC does not control quality, rather it measures quality. QC measurement results can be utilized to correct/modify QA processes which can be successfully implemented in new projects as well.

Quality control activities are focused on the deliverable itself. Quality assurance activities are focused on the processes used to create the deliverable. QA and QC are both powerful techniques which can be used to ensure that the deliverables meet high quality expectations of customers.

For example, suppose we use an issue-tracking system to log the bugs found while testing a web application. QA would include defining the standard for adding a bug and what details a bug report should contain, such as a summary of the issue, where it was observed, steps to reproduce it, screenshots, etc. This is a process for creating the deliverable 'bug report'. When a bug is actually added to the issue-tracking system based on these standards, that bug report is our deliverable.

Now suppose at a later stage of the project we realize that adding a 'probable root cause' to the bug, based on the tester's analysis, would give the development team more insight. We would then update our pre-defined process, and eventually the change would be reflected in our bug reports as well. This is how QC gives inputs to QA to further improve QA.

The following is an example of a real-life scenario for QA/QC:

QA Example:


Suppose our team has to work with a completely new technology for an upcoming project, and the team members are new to it. We therefore need to create a plan for training the team members in that technology. Based on our knowledge, we collect prerequisites such as overview documents and the product design documents and share them with the team; these are helpful while working on the new technology and are also useful for any newcomer to the team. This is QA.

 

QC Example:


Once the training is done, how can we make sure it was successful for all team members? For this purpose we collect statistics, e.g. the marks each trainee scored in each subject and the minimum marks expected after completing the training. We can also make sure everybody attended the full training by verifying the attendance records of the candidates. If the candidates' marks meet the expectations of the trainer/evaluators, we can say the training was successful; otherwise we will have to improve our process in order to deliver high-quality training.

Hope this explains the difference between QA and QC.

Database Testing – Practical Tips and Insight on How to Test Database

A database is an inevitable part of almost every software application these days. It does not matter whether the application is web or desktop, client-server or peer-to-peer, enterprise or small business: a database is always working at the back end. Similarly, whether it is healthcare or finance, leasing or retail, a mailing application or spacecraft control, a database is always in action behind the scenes.

Moreover, as the complexity of an application increases, so does the need for a stronger and more secure database. Likewise, applications with a high frequency of transactions (e.g. banking or finance applications) require a fully featured database tool.

Several database tools are currently available in the market, e.g. MS Access 2010, MS SQL Server 2008 R2, Oracle 10g, Oracle Financials, MySQL, PostgreSQL, DB2, etc. They vary in cost, robustness, features and security, and each has its own benefits and drawbacks. One thing is certain: a business application will be built on one of these or a similar database tool.

Before digging into the topic, let me set the context. When the application is in use, the end user mainly performs the 'CRUD' operations facilitated by the database:

C: Create – When the user saves a new transaction, the 'Create' operation is performed.
R: Retrieve – When the user searches for or views a saved transaction, the 'Retrieve' operation is performed.
U: Update – When the user edits or modifies an existing record, the 'Update' operation is performed.
D: Delete – When the user removes a record from the system, the 'Delete' operation is performed.

It does not matter which database is used or how the operation is performed. The end user does not care whether a join or sub-query, a trigger or stored procedure, a plain query or a function was used to do what he wanted. The interesting thing is that every DB operation a user performs from the UI of any application is one of the above four, abbreviated as CRUD.
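To make this concrete, here is a minimal sketch of the kind of SQL each UI action typically maps to behind the scenes (the table and column names are purely illustrative assumptions):

' Illustrative only - the SQL each CRUD action from the UI commonly translates to
createSql   = "INSERT INTO Orders (OrderID, Amount) VALUES (1001, 250)"   ' Save   -> Create
retrieveSql = "SELECT * FROM Orders WHERE OrderID = 1001"                 ' Search -> Retrieve
updateSql   = "UPDATE Orders SET Amount = 300 WHERE OrderID = 1001"       ' Edit   -> Update
deleteSql   = "DELETE FROM Orders WHERE OrderID = 1001"                   ' Remove -> Delete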

Database Testing

As a database tester, one should focus on the following DB testing activities:

What to test in database testing:

1) Ensure data mapping:

Make sure that the mapping between the different forms or screens of the AUT (application under test) and the relations (tables) of its DB is not only accurate but also in line with the design documents. For all CRUD operations, verify that the respective tables and records are updated when the user clicks 'Save', 'Update', 'Search' or 'Delete' from the GUI of the application.

2) Ensure ACID Properties of Transactions:

ACID properties of DB Transactions refer to the 'Atomicity', 'Consistency', 'Isolation' and 'Durability'. Proper testing of these four properties must be done during the DB testing activity. This area demands more rigorous, thorough and keen testing when the database is distributed.

3) Ensure Data Integrity:

Consider that different modules (i.e. screens or forms) of the application use the same data in different ways and perform all the CRUD operations on it. In that case, make sure that the latest state of the data is reflected everywhere: the system must show the updated and most recent value or status of such shared data on all forms and screens. This is called data integrity.

4) Ensure Accuracy of implemented Business Rules:

Today, databases are not meant only to store records. They have evolved into extremely powerful tools that give developers ample support for implementing business logic at the DB level. Simple examples of these powerful features are referential integrity, relational constraints, triggers and stored procedures. Using these and many other features, developers implement business logic in the database, and the tester must ensure that this implemented business logic is correct and works accurately.
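As an illustration, here is a hedged sketch of one such business-rule check: a negative test that tries to insert an orphan record and expects the foreign-key constraint to reject it (the connection string, table and column names are assumptions made for the example):

On Error Resume Next
Dim conn
Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=DBServer;Initial Catalog=AppDB;Integrated Security=SSPI"
' CustomerID -1 does not exist, so referential integrity should reject this insert
conn.Execute "INSERT INTO Orders (OrderID, CustomerID, Amount) VALUES (9999, -1, 100)"
If Err.Number <> 0 Then
    MsgBox "Insert rejected as expected: " & Err.Description
Else
    MsgBox "Defect: orphan record was accepted"
End If
conn.Close
On Error GoTo 0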

The points above describe the four most important 'what-tos' of database testing. Now I will shed some light on the 'how-tos'. But first, one important point must be stated explicitly: DB testing is a business-critical task, and it should never be assigned to a fresh or inexperienced resource without proper training.

How To Test Database:

1. Create your own Queries

In order to test the DB properly and accurately, the tester should first of all have very good knowledge of SQL, especially DML (Data Manipulation Language) statements. Secondly, the tester should acquire a good understanding of the internal DB structure of the AUT. If these two prerequisites are fulfilled, the tester is ready to test the DB with complete confidence: perform any CRUD operation from the UI of the application, then verify the result with an SQL query.

This is the best and most robust way of DB testing, especially for applications of small to medium complexity. The two prerequisites described above are essential, though; without them this approach cannot be adopted.

If the application is very complex, it may be hard or impossible for the tester to write all the needed SQL queries alone; for some complex queries the tester may get help from the developer. I always recommend this method because it not only gives testers confidence in the testing they have performed, but also enhances their SQL skills.
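A minimal sketch of this approach follows: a record is saved from the UI, and the tester then fires his or her own query against the database to confirm it landed correctly (the connection string, table and column names are assumptions):

' After saving a new customer "John Doe" from the application's UI...
Dim conn, rs
Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=DBServer;Initial Catalog=AppDB;Integrated Security=SSPI"
Set rs = conn.Execute("SELECT COUNT(*) FROM Customers WHERE CustomerName = 'John Doe'")
If rs.Fields(0).Value = 1 Then
    MsgBox "Create operation verified - record found in the Customers table"
Else
    MsgBox "Possible defect - record missing or duplicated"
End If
rs.Close
conn.Close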

2. Observe data table by table

If the tester is not good at SQL, he or she may verify the result of a CRUD operation, performed through the GUI of the application, by viewing the tables (relations) of the DB directly. This approach can be tedious and cumbersome, especially when the DB and its tables hold a large amount of data.

Similarly, this way of DB testing can be extremely difficult if the data to be verified is spread across multiple tables. It also still requires at least a good knowledge of the table structure of the AUT.

3. Get query from developer

This is the simplest way for the tester to test the DB: perform any CRUD operation from the GUI and verify its impact by executing the respective SQL query obtained from the developer. It requires neither good knowledge of SQL nor good knowledge of the application's DB structure.

This method seems an easy and good choice for testing the DB, but its drawback can be severe: what if the query given by the developer is semantically wrong or does not fulfill the user's requirement correctly? In that situation, the best case is that the client reports the issue and demands a fix; the worst case is that the client refuses to accept the application.

 

Conclusion:

The database is a core and critical part of almost every software application, so DB testing demands keen attention, good SQL skills, proper knowledge of the DB structure of the AUT, and proper training.

To have a confident test report for this activity, the task should be assigned to a resource with all four of the qualities stated above. Otherwise, surprises at shipment time, bugs reported by the client, improper or unintended application behavior, or even wrong outputs from business-critical tasks are much more likely. Get this task done by the most suitable resources and give it the attention it deserves.

 



Risk Analysis in Software Testing

Risk Analysis

In this tutorial you will learn about risk analysis and its technical definitions, risk assessment, business impact analysis, and the main risk categories: product size risks, business impact risks, customer-related risks, process risks, technical issues, technology risks, development environment risks, and risks associated with staff size and experience.

Risk analysis is one of the important concepts in the software product/project life cycle. It is broadly defined to include risk assessment, risk characterization, risk communication, risk management, and policy relating to risk. Risk assessment is also called security risk analysis.


Technical Definitions:

Risk Analysis: A risk analysis involves identifying the most probable threats to an organization and analyzing the related vulnerabilities of the organization to these threats.


Risk Assessment: A risk assessment involves evaluating existing physical and environmental security and controls, and assessing their adequacy relative to the potential threats of the organization.


Business Impact Analysis: A business impact analysis involves identifying the critical business functions within the organization and determining the impact of not performing the business function beyond the maximum acceptable outage. Types of criteria that can be used to evaluate the impact include: customer service, internal operations, legal/statutory and financial.


Risks for a software product can be categorized into various types. Some of them are:


Product Size Risks:

The following risk item issues identify some generic risks associated with product size:


  • Estimated size of the product and confidence in estimated size? 
  • Estimated size of product? 
  • Size of database created or used by the product? 
  • Number of users of the product? 
  • Number of projected changes to the requirements for the product?

Risk will be high, when a large deviation occurs between expected values and the previous experience. All the expected information must be compared to previous experience for analysis of risk.


Business Impact Risks:

The following risk item issues identify some generic risks associated with business impact:


  • Effect of this product on company revenue? 
  • Reasonableness of delivery deadline? 
  • Number of customers who will use this product and the consistency of their needs relative to the product? 
  • Number of other products/systems with which this product must be interoperable? 
  • Amount and quality of product documentation that must be produced and delivered to the customer? 
  • Costs associated with late delivery or a defective product?

Customer-Related Risks:

Different customers have different needs and different personalities. Some customers accept what is delivered, while others complain about the quality of the product. Some customers have a very good association with the product and its producer; others barely know them. A difficult customer represents a significant threat to the project plan and a substantial risk for the project manager.


The following risk item checklist identifies generic risks associated with different customers:


  • Have you worked with the customer in the past? 
  • Does the customer have a solid idea of what is required? 
  • Will the customer agree to spend time in formal requirements gathering meetings to identify project scope? 
  • Is the customer willing to participate in reviews? 
  • Is the customer technically sophisticated in the product area? 
  • Does the customer understand the software engineering process?

Process Risks:

If the software engineering process is ill-defined or if analysis, design and testing are not conducted in a planned fashion, then risks are high for the product.


  • Has your organization developed a written description of the software process to be used on this project? 
  • Are the team members following the software process as it is documented? 
  • Are the third-party coders following a specific software process, and is there any procedure for tracking their performance? 
  • Are formal technical reviews done regularly by both the development and testing teams? 
  • Are the results of each formal technical review documented, including defects found and resources used? 
  • Is configuration management used to maintain consistency among system/software requirements, design, code, and test cases? 
  • Is a mechanism used for controlling changes to customer requirements that impact the software?

Technical Issues:

  • Are specific methods used for software analysis? 
  • Are specific conventions for code documentation defined and used? 
  • Are any specific methods used for test case design? 
  • Are software tools used to support planning and tracking activities? 
  • Are configuration management software tools used to control and track change activity throughout the software process? 
  • Are tools used to create software prototypes? 
  • Are software tools used to support the testing process? 
  • Are software tools used to support the production and management of documentation? 
  • Are quality metrics collected for all software projects? 
  • Are productivity metrics collected for all software projects?

Technology Risk:

  • Is the technology to be built new to your organization? 
  • Does the software interface with new hardware configurations? 
  • Does the software to be built interface with a database system whose function and performance have not been proven in this application area? 
  • Is a specialized user interface demanded by product requirements? 
  • Do requirements demand the use of new analysis, design or testing methods? 
  • Do requirements put excessive performance constraints on the product?

Development Environment Risks:

  • Is a software project and process management tool available? 
  • Are tools for analysis and design available? 
  • Do analysis and design tools deliver methods that are appropriate for the product to be built? 
  • Are compilers or code generators available and appropriate for the product to be built? 
  • Are testing tools available and appropriate for the product to be built? 
  • Are software configuration management tools available? 
  • Does the environment make use of a database or repository? 
  • Are all software tools integrated with one another? 
  • Have members of the project team received training in each of the tools?

Risks Associated with Staff Size and Experience:

  • Are the best people available, and are there enough of them for the project? 
  • Do the people have the right combination of skills? 
  • Is the staff committed for the entire duration of the project? 

QTP 11: Complete List of New Features

Here is the complete list of new features that are available in QTP 11

XPath and CSS based object identification

Objects can now be identified not only through normal object identification but also with XPath and CSS identifier properties. A much-awaited, killer feature.
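For illustration, here is a hedged sketch of how the new identifier properties can be used in a programmatic description (the object hierarchy and property values are assumptions, not taken from a real application):

' Identify a field by XPath and a button by CSS instead of the usual description properties
Browser("MyApp").Page("Login").WebEdit("xpath:=//input[@id='username']").Set "tester1"
Browser("MyApp").Page("Login").WebButton("css:=input.login-button").Click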

Good Looking and Enhanced Results Viewer

The new improved results viewer provides an executive summary page with summary data, pie charts and statistics for both the current and previous runs and a quick link to the previous run results.

Easy Regular Expressions

You can now create regular expressions with the help of syntax hints, and a Regular Expression Evaluator is available to test the expressions you have created.

Now identify objects not only in relation to each other but in relation to neighboring objects.

With this feature, QTP 11 moves beyond the unreliability of ordinal identifiers. Objects identified with ordinal identifiers are reliable only as long as they maintain their relative positions in a new build of the application; if those positions change or get interchanged, identification can break. HP has now introduced the Visual Relation Identifier.

A visual relation identifier is a set of definitions that enables you to identify an object in the application according to its neighboring objects. You can select neighboring objects that will maintain the same relative location to your object even if the user interface design changes. You define visual relations in the Visual Relation Identifier dialog box, which is accessible from the local or shared object repository and from the Object Properties dialog box.

Load Function Libraries at Run Time

With the help of the LoadFunctionLibrary statement, you can now load a function library when a step runs instead of at the beginning of a run session.
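A minimal usage sketch (the library path is an assumption):

' Load an external function library only when this step executes, not at session start
LoadFunctionLibrary "C:\Automation\Libraries\CommonUtils.qfl"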

Test Your GUI and UI-Less Application Functionality in One Test

Since QTP is integrated with Service Test, you can now test your GUI and non-GUI based apps in a single run.

Record Support for Firefox

Recording support is now available for Firefox.

Much Awaited Log Tracking is available now

QTP 11 is capable of receiving Java or .NET log framework messages from your application which can then be embedded in the run results.

Embed/Run Javascript in web pages

You can use the new EmbedScript/EmbedScriptFromFile and RunScript/RunScriptFromFile functions to embed JavaScript in all loaded browser pages. You can use these scripts to perform operations on, or retrieve data from, the browser pages in your application.
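A hedged sketch of the idea (the object names and the script itself are assumptions, and the exact syntax may vary between QTP versions):

' Run a small piece of JavaScript in the context of the loaded page
Browser("MyApp").Page("Home").RunScript "document.getElementById('status').innerHTML = 'Checked by QTP';"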

Manage Test Data

Improved test data management when integrated with Quality Center

Web 2.0 Toolkit Applications Support

QTP 11 now supports Web 2.0 Toolkit Applications out-of-the-box similar to any other add-ins.

Automatically Parameterize Steps

You can instruct QTP 11 to automatically parameterize test steps at the end of a recording session.

Silverlight Add-in

A new Silverlight add-in lets you test objects in Silverlight 2 and Silverlight 3 applications. [After installation, the Silverlight Add-in is displayed in the Add-in Manager as a child add-in under the WPF Add-in.]

Extend WPF and Silverlight Support

You can use the WPF and Silverlight Add-in Extensibility SDK to develop support for testing third-party and custom WPF and Silverlight controls that are not supported out of the box.

Use Extensibility Accelerator for Web Add-in Extensibility Development

Avoid Downtime Due to License Server Failures

Useful for concurrent-license users. With redundant license servers you can create failover, so that if your main license server fails, the remaining servers maintain availability of your licenses without downtime or loss of licenses for users.


QTP Descriptive Programming

In this post we will discuss:

  • Introduction to Descriptive Programming
  • How to write Descriptive Programming
  • When and where to use Descriptive Programming
  • Some points to note with Descriptive Programming

Introduction to Descriptive Programming:
Descriptive programming is used when we want to perform an operation on an object that is not present in the object repository. There can be various valid reasons for doing so; we will discuss them later in this article.

How to write Descriptive Programming? 


There are two ways in which descriptive programming can be used

1. By giving the description in form of the string arguments.
2. By creating properties collection object for the description.

1. By giving the description in form of the string arguments.

This is the more commonly used method of Descriptive Programming. You can describe an object directly in a statement by specifying property:=value pairs that describe it, instead of specifying the object's name from the repository. The general syntax is:

TestObject("PropertyName1:=PropertyValue1", "..." , "PropertyNameX:=PropertyValueX")

TestObject — the test object class, e.g. WebEdit, WebRadioGroup, etc.

PropertyName:=PropertyValue — the test object property and its value. Each property:=value pair should be separated by commas and quotation marks. Note that you can enter a variable name as the property value if you want to find an object based on property values you retrieve during a run session.

Consider the HTML Code given below:

<input type="text" name="txt_Name">
<input type="radio" name="txt_Name">

Now, to refer to the textbox, the statement would be:

Browser("Browser").Page("Page").WebEdit("name:=txt_Name","html tag:=INPUT").Set "Test"

And to refer to the radio button:

Browser("Browser").Page("Page").WebRadioGroup("name:=txt_Name","html tag:=INPUT").Select "Test"

If we refer to them both as WebElement objects, we have to distinguish between the two using the index property:

Browser("Browser").Page("Page").WebElement("name:=txt_Name","html tag:=INPUT","index:=0").Click ' Refers to the textbox
Browser("Browser").Page("Page").WebElement("name:=txt_Name","html tag:=INPUT","index:=1").Click ' Refers to the radio button

To determine which property and value pairs to use, you can use the Object Spy:
1. Go to Tools -> Object Spy.
2. Select the "Test Object Properties" radio button.
3. Spy on the desired object.
4. In the Properties list, find and write down the properties and values that can be used to identify the object.


2. By creating properties collection object for the description.

A Properties collection does the same thing as string arguments; the only difference is that it collects all the properties of a particular object in a Description object instance. That instance can then be referenced easily, instead of writing the string arguments again and again. In my observation, people find the string-arguments method [1] easier and more intuitive to work with.

To use this method, you first need to create an empty description:
Dim obj_Desc 'Not necessary to declare
Set obj_Desc = Description.Create

Now we have a blank description in obj_Desc. Each property in a description has three attributes: "Name", "Value" and "RegularExpression".

obj_Desc("html tag").value= "INPUT"

When you use a property name for the first time, the property is added to the collection; when you use it again, the property value is modified. By default, each property value you define is treated as a regular expression. Suppose we have the following description:

obj_Desc("html tag").value= "INPUT"
obj_Desc("name").value= "txt.*"

This would match an object whose html tag is INPUT and whose name starts with "txt", because the ".*" is treated as a regular expression. If you do not want the "name" value to be interpreted as a regular expression, you need to set its "regularexpression" attribute to False:

obj_Desc("html tag").value = "INPUT"
obj_Desc("name").value = "txt.*"
obj_Desc("name").regularexpression = False

This is how we create a description. Below is how we can use it:

Browser("Browser").Page("Page").WebEdit(obj_Desc).set "Test"

When we say .WebEdit(obj_Desc), we implicitly add one more property that was not defined earlier: the object is a text box (because QTP's WebEdit objects map to text boxes in a web page).

If we know that there is more than one element with the same description on the page, we must also define the "index" property for that description.

Consider the HTML code given below:

<input type="text" name="txt_Name">
<input type="text" name="txt_Name">

The HTML code above has two objects with the same description. To distinguish between these two objects we use the "index" property. Here are the descriptions for both objects:

For 1st textbox:
obj_Desc("html tag").value= "INPUT"
obj_Desc("name").value= "txt_Name"
obj_Desc("index").value= "0"

For 2nd textbox:
obj_Desc("html tag").value= "INPUT"
obj_Desc("name").value= "txt_Name"
obj_Desc("index").value= "1"

Consider the HTML Code given below:

<input type="text" name="txt_Name">
<input type="radio" name="txt_Name">

We can use the same description for both objects and still distinguish between them:
obj_Desc("html tag").value = "INPUT"
obj_Desc("name").value = "txt_Name"

To refer to the textbox, I use the description object inside a WebEdit test object; to refer to the radio button, I use the same description object with a WebRadioGroup test object.

Browser("Browser").Page("Page").WebEdit(obj_Desc).set "Test" 'Refers to the text box
Browser("Browser").Page("Page").WebRadioGroup(obj_Desc).set "Test" 'Refers to the radio button

But if we use WebElement object for the description then we must define the "index" property because for a webelement the current description would return two objects.

Getting Child Objects:

We can use a description object to get all the objects on the page that match a specific description. Suppose we have to check all the checkboxes present on a web page. We first create an object description for a checkbox and then get all the checkboxes from the page:

Dim obj_ChkDesc

Set obj_ChkDesc=Description.Create
obj_ChkDesc("html tag").value = "INPUT"
obj_ChkDesc("type").value = "checkbox"

Dim allCheckboxes, singleCheckBox

Set allCheckboxes = Browser("Browser").Page("Page").ChildObjects(obj_ChkDesc)

For each singleCheckBox in allCheckboxes
    singleCheckBox.Set "ON"
Next

The above code will check all the checkboxes present on the page. To get all the child objects, we need to specify an object description.

If you wish to use string arguments [1], the same thing can be accomplished with simple scripting. The code for that would be:

i = 0
Do While Browser("Browser").Page("Page").WebCheckBox("html tag:=INPUT", "type:=checkbox", "index:=" & i).Exist
    Browser("Browser").Page("Page").WebCheckBox("html tag:=INPUT", "type:=checkbox", "index:=" & i).Set "ON"
    i = i + 1
Loop

Possible Operations on Description Objects

Consider the code below for all the answers that follow:
Dim obj_ChkDesc

Set obj_ChkDesc=Description.Create
obj_ChkDesc("html tag").value = "INPUT"
obj_ChkDesc("type").value = "checkbox"

Q: How do we get the number of properties defined in a collection?
A: obj_ChkDesc.Count ' will return 2 in our case

Q: How do we remove a property from the collection?
A: obj_ChkDesc.Remove "html tag" ' deletes the "html tag" property from the collection

Q: How do we check whether a property exists in the collection?
A: It is not directly possible, because whenever we try to access a property that is not defined, it is automatically added to the collection. The only workaround is to check its value, e.g. If obj_ChkDesc("html tag").Value = empty Then.

Q: How do we iterate over all the properties of a Properties collection?
A: There are two ways.

1st:
For each desc in obj_ChkDesc
    Name = desc.Name
    Value = desc.Value
    RE = desc.RegularExpression
Next

2nd:
For i = 0 to obj_ChkDesc.Count - 1
    Name = obj_ChkDesc(i).Name
    Value = obj_ChkDesc(i).Value
    RE = obj_ChkDesc(i).RegularExpression
Next

Hierarchy of test description:

When using programmatic descriptions from a specific point within a test object hierarchy, you must continue to use programmatic descriptions from that point onward within the same statement. If you specify a test object by its object repository name after other objects in the hierarchy have been described using programmatic descriptions, QuickTest cannot identify the object.

For example, you can use Browser(Desc1).Page(Desc1).Link(desc3), since it uses programmatic descriptions throughout the entire test object hierarchy. You can also use Browser("Index").Page(Desc1).Link(desc3), since it uses programmatic descriptions from a certain point in the description (starting from the Page object description).

However, you cannot use Browser(Desc1).Page(Desc1).Link("Example1"), since it uses programmatic descriptions for the Browser and Page objects but then attempts to use an object repository name for the Link test object (QuickTest tries to locate the Link object based on its name, but cannot locate it in the repository because the parent objects were specified using programmatic descriptions).


When and Where to use Descriptive programming?

Below are some of the situations when Descriptive Programming can be considered useful:

1. One place where DP can be of significant importance is when you are creating functions in an external file. You can use these functions in various actions directly, eliminating the need to add the object(s) to the object repository for each action [if you are using per-action object repositories].

2. When objects in the application are dynamic in nature and need special handling to identify them. The best example is clicking a link whose text changes according to the logged-in user, e.g. "Logout <username>" (see the sketch after this list).

3. When the object repository is getting huge due to the number of objects being added. If the size of the object repository grows too large, it degrades QTP's performance while recognizing objects. [For QTP 8.2 and below, Mercury recommended that the OR size not exceed 1.5 MB.]

4. When you don't want to use an object repository at all. The first question would be: why not use the object repository? The following scenarios help explain why.
Scenario 1: Suppose we have a web application that has not been developed yet. QTP needs the application to be up in order to record the script and add objects to the repository, which would mean waiting for the application to be deployed before we could start building QTP scripts. But if we know the descriptions of the objects that will be created, we can still start writing the test scripts.
Scenario 2: Suppose an application has three navigation buttons on each and every page: "Cancel", "Back" and "Next". Recording actions on these buttons would add three objects per page to the repository; for a 10-page flow this would mean 30 objects that could have been represented by just 3. Instead of adding those 30 objects to the repository, we can write three descriptions for the objects and use them on any page.

5. When a modification to a test case is needed but the object repository for it is read-only or shared, i.e. changes might affect other scripts as well.
6. When you want to act on many similar objects, e.g. 20 textboxes on a page whose names are of the form txt_1, txt_2, txt_3 and so on. Adding all 20 to the object repository would not be a good approach.
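Here is a short, hedged sketch illustrating points 2 and 6 above (the browser/page names, the link text pattern and the field-naming convention are assumptions):

' Point 2: a link whose text changes per user ("Logout John", "Logout Mary", ...) matched with a regular expression
Browser("MyApp").Page("Home").Link("text:=Logout .*").Click

' Point 6: acting on 20 similarly named textboxes without adding any of them to the object repository
For i = 1 To 20
    Browser("MyApp").Page("Form").WebEdit("name:=txt_" & i).Set "Value " & i
Next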


References:
1) QuickTest Professional Documentation
2) Tarun Lalwani's DP Document
3) Ankur's LearnQTP blog

Are you sure you want to use QTP?

More often than not, I have seen people go for test automation just because they want to chase this exciting buzzword, only to realize at the very end that automation has not given the intended returns. I would suggest that before even thinking of automation, a thorough study of your application/system/module should be performed.

The following types of test cases are preferred (but not the only) candidates for automation:

  • Tests that need to run for every build.
  • Tests that use multiple data values for same action.
  • Identical tests that need to be executed using different browsers.

The following types are not considered good candidates for automation:

  • Test Cases that will only be executed once.
  • Test Cases used for Ad-hoc/random Testing.
  • Test Cases that are infrequently selected for execution.
  • Test Cases that require manual intervention, i.e. a task not possible to automate.
  • The most important one: based on intuition and knowledge of the application, e.g. when you find that you cannot escape manual intervention.


Above all, the effort required to develop and maintain the automated scripts should be weighed against the resources (number of hours, budget) the same task would take in manual execution. The amount of change the application undergoes from one build to another should also be taken into account when choosing test automation.

If an app undergoes drastic changes, most of the time will be spent maintaining the scripts; such apps are considered a bad choice for automation.
Considering the above points, the Return On Investment (ROI) should be calculated, and it should be the single most important criterion when deciding on test automation.

What is Mobile Device Testing?


Mobile Device Testing is the process of assuring the quality of mobile devices such as mobile phones, PDAs, etc. Testing is conducted on both hardware and software, and in terms of procedure it comprises R&D Testing, Factory Testing and Certification Testing.

R&D Testing

R&D testing is the main test phase for a mobile device and takes place during the development of the device. It covers hardware testing, software testing and mechanical testing.

Factory Testing

Factory Testing is a kind of sanity check on mobile devices. It is conducted automatically to verify that no defects have been introduced by manufacturing or assembly.

Mobile testing typically covers:

  • Mobile application testing
  • Hardware testing
  • Battery (charging) testing
  • Signal reception testing
  • Network testing
  • Protocol testing
  • Mobile games testing
  • Mobile software compatibility testing

Certification Testing

Certification Testing is the check performed before a mobile device goes to market. Many institutes or governments require mobile devices to conform to their stated specifications and protocols, to ensure the devices will not harm users' health and are compatible with devices from other manufacturers. Once a mobile device passes all the checks, a certificate is issued for it.


Source: Wikipedia

Are You a Fresher or Freelancer? Try Beta Testing!

What is beta testing?
A beta test is testing performed on a software product prior to its commercial release. It can be considered the last stage of testing and normally involves sending the product to beta test sites outside the company for real-world exposure. Often a free trial version of the product can be downloaded over the Internet. Beta versions of software are distributed to a wide range of users so the software gets tested on many different combinations of hardware and associated software.
 
What is in it for a fresher or freelancer?
So what can we do as freshers or freelance software testers? Beta testing provides a platform for us to test some real-world software. Many companies release beta versions of their software before shipping it to the market for commercial sale.

By signing up as a beta tester and downloading a test version of the software, you can get a first-hand preview of the software. It is exciting to get the software free of cost before the rest of the world sees the product. If you find a few bugs or write some reviews (positive or negative) about the beta version, many companies will give you a free licensed version of the product when it is released in the market.
 
Benefits of beta testing for a fresher
As a fresher, you can cite this experience of testing a beta product and list the defects or bugs you submitted to the company. There are a few companies I know of that pay for finding bugs in their products. Your resume will shine if you can show that you earned a few dollars by finding bugs for various companies.

 
uTest.com

Sign Up with uTest.com and get paid for finding bugs. I am sure there are lots of other companies which pay for pointing out their defects.
 
 
Beta products from Microsoft

Microsoft also provides a wide range of beta software for testing. Sign up with them and start testing a free beta product today.
 
 
 
Here are some tips for running a Beta Test  
  • Take a backup of your PC before you install beta software; you never know when it might crash.
  • Read the terms and conditions carefully. Find out which bugs have already been reported. You don't want to waste time finding bugs which someone has already reported.
  • Get the specification or requirement document first. How do you test a product if you don't know what it is supposed to do?
  • Create a network of beta testers. This will help you stay informed of the latest happenings around the world.


***Happy Testing***

 

Testing Glossary - 100 Most Popular Software Testing Terms

Are you confused by testing terms and acronyms? Here is a glossary that might help: 100 of the most popular testing terms, compiled from the International Software Testing Qualifications Board's website. A complete and exhaustive list of terms is available for download at that site.


100 Most Popular Software Testing Terms

Acceptance testing

Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

Ad hoc testing

Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

Agile testing

Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. 

Alpha testing

Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

Back-to-back testing

Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.

Beta testing

Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

Big-bang testing

A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.

Black-box testing

Testing, either functional or non-functional, without reference to the internal structure of the component or system.

Black-box test design technique

Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

Blocked test case

A test case that cannot be executed because the preconditions for its execution are not fulfilled.

Bottom-up testing

An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested.

Boundary value

An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary value analysis

A black box test design technique in which test cases are designed based on boundary values.

Branch testing

A white box test design technique in which test cases are designed to execute branches.

Business process-based testing

An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

Capture/playback tool

A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.

Certification

The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.

Code coverage

An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

Compliance testing

The process of testing to determine the compliance of the component or system.

Component integration testing

Testing performed to expose defects in the interfaces and interaction between integrated components.

Condition testing

A white box test design technique in which test cases are designed to execute condition outcomes.

Conversion testing

Testing of software used to convert data from existing systems for use in replacement systems.

Data driven testing

A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools.

Database integrity testing

Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.

Defect

A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defect masking

An occurrence in which one defect prevents the detection of another.

Defect report

A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.

Development testing

Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers.

Driver

A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.

Equivalence partitioning

A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.

Error

A human action that produces an incorrect result.

Error guessing

A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

Exhaustive testing

A test approach in which the test suite comprises all combinations of input values and preconditions.

Exploratory testing

An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

Failure

Deviation of the component or system from its expected delivery, service or result.

Functional test design technique

Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure.

Functional testing

Testing based on an analysis of the specification of the functionality of a component or system.

Functionality testing

The process of testing to determine the functionality of a software product.

Heuristic evaluation

A static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called "heuristics").

High level test case

A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available.

ISTQB

International Software Testing Qualifications Board.

Incident management tool

A tool that facilitates the recording and status tracking of incidents. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities.

Installability testing

The process of testing the installability of a software product.

Integration testing

Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

Isolation testing

Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.

Keyword driven testing

A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.

Load testing

A test type concerned with measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or numbers of transactions to determine what load can be handled by the component or system.

Low level test case

A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators.

Maintenance testing

Testing the changes to an operational system or the impact of a changed environment to an operational system.

Monkey testing

Testing by means of a random selection from a large range of inputs and by randomly pushing buttons, ignorant on how the product is being used.

Negative testing

Tests aimed at showing that a component or system does not work. Negative testing is related to the testers' attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions.

Non-functional testing

Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

Operational testing

Testing conducted to evaluate a component or system in its operational environment.

Pair testing

Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.

Peer review

A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

Performance testing

The process of testing to determine the performance of a software product.

Portability testing

The process of testing to determine the portability of a software product.

Post-execution comparison

Comparison of actual and expected results, performed after the software has finished running.

Priority

The level of (business) importance assigned to an item, e.g. defect.

Quality assurance

Part of quality management focused on providing confidence that quality requirements will be fulfilled.

Random testing

A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.

Recoverability testing

The process of testing to determine the recoverability of a software product.

Regression testing

Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

Requirements-based testing

An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

Re-testing

Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Risk-based testing

An approach to testing to reduce the level of product risks and inform stakeholders on their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.

Severity

The degree of impact that a defect has on the development or operation of a component or system.

Site acceptance testing

Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.

Smoke test

A subset of all defined/planned test cases that cover the main functionality of a component or system, ascertaining that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices.

Statistical testing

A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases.

Stress testing

Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

Stub

A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.

Syntax testing

A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.

System integration testing

Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

System testing

The process of testing an integrated system to verify that it meets specified requirements.

Test automation

The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

Test case specification

A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.

Test design specification

A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.

Test environment

An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Test harness

A test environment comprised of stubs and drivers needed to execute a test.

Test log

A chronological record of relevant details about the execution of tests.

Test management tool

A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, and the logging of results, progress tracking, incident management and test reporting.

Test oracle

A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code.

Test plan

A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

Test strategy

A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).

Test suite

A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

Testware

Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.

Thread testing

A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

Top-down testing

An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Traceability

The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

Usability testing

Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

Use case

A sequence of transactions in a dialogue between a user and the system with a tangible result.

Use case testing

A black box test design technique in which test cases are designed to execute user scenarios.

Unit test framework

A tool that provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities.

Validation

Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

Verification

Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

Vertical traceability

The tracing of requirements through the layers of development documentation to components.

Volume testing

Testing where the system is subjected to large volumes of data.

Walkthrough

A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.

White-box testing

Testing based on an analysis of the internal structure of the component or system.