
QTP is a functional testing tool developed by Mercury Interactive, which later merged with HP.

QTP has been released in the following versions: 5.5, 6.5, 8.0, 8.2, 9.0, 9.1, 9.2, 9.5, and 10.0 (the latest at the time of writing).

There are two types of licenses: (1) Seat License (single user) and (2) Concurrent License (multiple users).


QTP 9.2 supports the following technologies:

.NET, VB, Java, ActiveX controls, web servers, PeopleSoft, SAP, Oracle, terminal emulators, HTML, DHTML, XML, etc.

QTP 9.5 additionally supports the following technologies:

PowerBuilder, Oracle Forms 10, Oracle Apps 12, newer terminal emulator versions (mainframe application technologies), and .NET 3.5.


QTP 9.2 is supported on the following environments:

Windows 2000 Server, Windows 2000 Professional, Windows Server 2003, Windows XP, etc.

QTP 9.5 additionally supports the following environments:

Windows Vista (64-bit), Eclipse 3.2 and 3.3, Netscape 9.0, and Firefox 3.0.


QTP does not run on Linux/Unix operating systems; X-Runner is the tool that supports Linux/Unix.

QTP supports client/server and web applications.

QTP records business operations in VBScript (it can also work with JavaScript).

QTP supports multimedia applications such as Flash, Windows Media Player, RealVideo, etc.



Manual Testing

Testing an application through human interaction is called manual testing.

Drawbacks of Manual Testing

(i) Time consuming

(ii) More resources required

(iii) Human errors

(iv) Repeating the same task many times is impractical

(v) Tester fatigue

(vi) Simultaneous (parallel) actions are not possible

Automation Testing

Testing an application with the help of third-party software, i.e. with an automation tool, is called automation testing.

Benefits of Automation Testing:

a) Fast

b) Reliable

c) Repeatable

d) Reusable

e) Comprehensive

f) Programmable.

a) Fast

An automation tool runs tests significantly faster than a human user can.

b) Reliable

An automation tool performs the same operations in exactly the same way each time the test is run, eliminating human error.

c) Repeatable

We can check how the application or website reacts after the same operations are repeated many times.

d) Reusable

Automation scripts are reusable across different versions of an application or website, even if the user interface changes.

e) Comprehensive

In automation testing we can build a suite of tests that covers every feature of the application or website.

f) Programmable

We can program sophisticated tests that bring out hidden information from the application.

Drawbacks of Automation Testing

1) It is expensive.

2) We cannot automate all areas.

3) It requires scripting expertise.

4) It has limitations (it cannot test everything).

Which software tests should be automated?

Ø Tests that need to be executed for every build of the application (sanity testing)

Ø Tests that use multiple data values (retesting / data-driven testing)

Ø Tests that require data from the application's internals (GUI attributes)

Ø Load and stress testing

Which software tests should not be automated?

Ø Usability testing

Ø One-time testing

Ø Quick-look or ASAP (as soon as possible) testing

Ø Ad-hoc / random testing

Ø Tests for customer requirements that change frequently

Type of Tools

Generally, four types of tools are available in the market. They are:

1) Functional Tools:

QTP, WinRunner, SilkTest, Rational Robot, TestPartner, etc.

2) Performance Tools:

LoadRunner, JMeter, etc.

3) Test Management Tools:

Quality Center (QC), Test Director.

4) Version Control Tools:

VSS (Visual SourceSafe), PVCS (Polytron Version Control System), etc.

Automation Frameworks

There are several test automation frameworks available; the selection among them is based on factors such as reusability of both the scripts and the test assets. The available frameworks are as follows:
Ø Test Script Modularity
Ø Test Library Architecture
Ø Data-Driven Testing
Ø Keyword-Driven or Table-Driven Testing
Ø Hybrid Test Automation

Framework 1: Test Script Modularity

The test script modularity framework is the most basic of the frameworks. It's a well-known programming strategy to build an abstraction layer in front of a component to hide the component from the rest of the application.
This insulates the application from modifications in the component and provides modularity in the application design. When working with test scripts (in any language or proprietary environment) this can be achieved by creating small, independent scripts that represent modules, sections, and functions of the application-under-test.
These small scripts are then combined in a hierarchical fashion to construct larger tests. The use of this framework yields a high degree of modularization and adds to the overall maintainability of the test scripts.
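As a sketch (in Python for brevity; QTP itself records VBScript, and all application and function names here are hypothetical), small independent scripts for sections of an application-under-test can be combined hierarchically into larger tests:

```python
# Small, independent "scripts", one per section of a hypothetical
# calculator application (all names are illustrative).
def open_app(state):
    state["open"] = True            # lowest level: launch the application

def enter_values(state, a, b):
    state["a"], state["b"] = a, b   # section script: data entry

def press_add(state):
    state["result"] = state["a"] + state["b"]   # section script: action

def test_addition():
    # mid-level test built by combining the small scripts above
    state = {}
    open_app(state)
    enter_values(state, 2, 3)
    press_add(state)
    assert state["result"] == 5

def run_suite():
    # top level: a larger test constructed hierarchically from tests
    test_addition()
    return "PASS"

print(run_suite())
```

Because each small script knows only its own section of the application, a change to one screen touches only one script, which is the maintainability gain the framework promises.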

Framework 2 : Test Library Architecture

The test library architecture framework is very similar to the test script modularity framework and offers the same advantages, but it divides the application-under-test into procedures and functions (or objects and methods depending on the implementation language) instead of scripts.
This framework requires the creation of library files (SQABasic libraries, APIs, DLLs, and such) that represent modules, sections, and functions of the application-under-test. These library files are then called directly from the test case script.

Much like script modularization this framework also yields a high degree of modularization and adds to the overall maintainability of the tests.
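A minimal sketch of the same idea as a library (Python used for illustration; the login/logout functions and the "secret" check are stand-ins, not a real QTP API): the application's actions live in reusable library functions, and the test case script only calls them:

```python
# --- "library file": reusable functions representing application
# modules (would be an SQABasic library, DLL, or API in practice) ---
def login(session, user, password):
    # hypothetical stand-in for driving the real login screen
    session["user"] = user if password == "secret" else None
    return session["user"] is not None

def logout(session):
    session["user"] = None

# --- test case script: calls the library directly ---
def test_login_logout():
    session = {}
    assert login(session, "alice", "secret"), "login should succeed"
    logout(session)
    assert session["user"] is None, "logout should clear the user"
    return "PASS"

print(test_login_logout())
```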

Framework 3: Data-Driven Testing

A data-driven framework is one where test input and output values are read from data files (ODBC sources, CSV files, Excel files, DAO objects, ADO objects, and such) and loaded into variables in captured or manually coded scripts. In this framework, variables are used both for input values and for output verification values.

Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script. This is similar to table-driven testing (which is discussed shortly) in that the test case is contained in the data file and not in the script; the script is just a "driver," or delivery mechanism, for the data. In data-driven testing, only test data is contained in the data files.
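A minimal data-driven sketch (illustrative Python; the inline CSV string stands in for an external data file, and `app_under_test` is a hypothetical business function): the test cases live in the data, and the script is only the "driver":

```python
import csv
import io

# Stand-in for an external CSV data file (Excel/ODBC sources work
# the same way): each row is one test case with an expected value.
DATA = "a,b,expected\n2,3,5\n10,-4,6\n0,0,0\n"

def app_under_test(a, b):
    # hypothetical business function being verified
    return a + b

passed = []
for row in csv.DictReader(io.StringIO(DATA)):
    actual = app_under_test(int(row["a"]), int(row["b"]))
    passed.append(actual == int(row["expected"]))  # output verification

print(all(passed))  # True when every data row passes
```

Adding a new test case is now a one-line change to the data file, with no script change at all.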

Merits of data-driven testing

The merits of the Data-Driven test automation framework are as follows,

Ø Scripts may be developed while application development is still in progress

Ø Utilizing a modular design, and using files or records to both input and verify data, reduces redundancy and duplication of effort in creating automated test scripts

Ø If functionality changes, only the specific "Business Function" script needs to be updated

Ø Data input/output and expected results are stored as easily maintainable text records.

Ø Functions return "TRUE" or "FALSE" values to the calling script, rather than aborting, allowing for more effective error handling, and increasing the robustness of the test scripts. This, along with a well-designed "recovery" routine, enables "unattended" execution of test scripts.

Demerits of data-driven testing

The demerits of the Data-Driven test automation framework are as follows,

Ø Requires proficiency in the Scripting language used by the tool (technical personnel)

Ø Multiple data-files are required for each Test Case. There may be any number of data-inputs and verifications required, depending on how many different screens are accessed. This usually requires data-files to be kept in separate directories by Test Case

Ø Tester must not only maintain the Detail Test Plan with specific data, but must also re-enter this data in the various required data-files

Ø If a simple "text editor" such as Notepad is used to create and maintain the data-files, careful attention must be paid to the format required by the scripts/functions that process the files, or script-processing errors will occur due to data-file format and/or content being incorrect

Framework 5: Hybrid Test Automation Framework

The most commonly implemented framework is a combination of all of the above techniques, pulling from their strengths and trying to mitigate their weaknesses. This hybrid test automation framework is what most frameworks evolve into over time and multiple projects. The most successful automation frameworks generally accommodate both Keyword-Driven testing as well as Data-Driven scripts.

This allows data driven scripts to take advantage of the powerful libraries and utilities that usually accompany a keyword driven architecture. The framework utilities can make the data driven scripts more compact and less prone to failure than they otherwise would have been.
The utilities can also facilitate the gradual and manageable conversion of existing scripts to keyword driven equivalents when and where that appears desirable. On the other hand, the framework can use scripts to perform some tasks that might be too difficult to re-implement in a pure keyword driven approach, or where the keyword driven capabilities are not yet in place.
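As a tiny sketch of that combination (hypothetical Python, not a real framework): a keyword-driven driver whose step arguments come from a data table, so the keyword utilities and the data-driven rows work together:

```python
# Keyword "library" of utilities shared by all tests (illustrative).
ACTIONS = {
    "set": lambda state, field, value: state.update({field: value}),
    "add": lambda state, field, value: state.update(
        {field: state.get(field, 0) + value}),
}

# Data table: each row is (keyword, field, data value).
TABLE = [("set", "total", 1), ("add", "total", 4), ("add", "total", 5)]

state = {}
for keyword, field, value in TABLE:
    ACTIONS[keyword](state, field, value)  # driver dispatches on keyword

print(state["total"])  # 10
```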



Framework 4: Keyword-Driven Testing

This requires the development of data tables and keywords, independent of the test automation tool used to execute them and the test script code that "drives" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table as well as in step-by-step instructions for each test. In this method, the entire process is data-driven, including functionality.


To open a window, the following table is devised; it can be reused for any other application by changing only the window name.

Test Table for Opening a Window

Each row of the table gives the Window Name, the action to perform, and its arguments; for example, one row selects File, Open in the named window, and another supplies the Folder Name to open.

Once the test tables are created, a driver script (or set of scripts) is written that reads each step, executes it based on the keyword contained in the Action field, performs error checking, and logs any relevant information.
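A driver of that shape might look like this (illustrative Python; the window, control, and keyword names are invented for the example):

```python
log = []

def do_click(target, args):
    log.append(f"Click {target}: {args}")

def do_type(target, args):
    log.append(f"Type '{args}' into {target}")

# Keyword -> handler map (the driver's vocabulary).
KEYWORDS = {"Click": do_click, "Type": do_type}

# Each step: Window, Control, Action, Arguments (as in the test table).
STEPS = [
    ("Main Window", "Menu", "Click", "File, Open"),
    ("Open Dialog", "Edit", "Type", "Folder Name"),
    ("Open Dialog", "Button", "Press", ""),      # unknown keyword
]

for window, control, action, args in STEPS:
    handler = KEYWORDS.get(action)
    if handler is None:                          # error checking
        log.append(f"ERROR: unknown keyword '{action}'")
        continue
    handler(f"{window}/{control}", args)         # execute and log the step

print(len(log))  # 3
```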

Merits of keyword driven testing

The merits of the Keyword Driven Testing are as follows,

Ø The Detail Test Plan can be written in Spreadsheet format containing all input and verification data.

Ø If "utility" scripts can be created by someone proficient in the automated tool’s Scripting language prior to the Detail Test Plan being written, then the tester can use the Automated Test Tool immediately via the "spreadsheet-input" method, without needing to learn the Scripting language.

Ø The tester need only learn the "Key Words" required, and the specific format to use within the Test Plan. This allows the tester to be productive with the test tool very quickly, and allows more extensive training in the test tool to be scheduled at a more convenient time.

Demerits of keyword driven testing

The demerits of the Keyword Driven Testing are as follows,

Ø Development of "customized" (Application-Specific) Functions and Utilities requires proficiency in the tool’s Scripting language. (Note that this is also true for any method)

Ø If the application requires more than a few "customized" utilities, the tester will need to learn a number of "Key Words" and special formats. This can be time-consuming and may have an initial impact on test plan development. Once testers get used to it, however, the time required to produce a test case improves greatly.

Web Terminologies: Useful for web application testers

This article basically covers following terminologies:

What is: Internet, www, TCP/IP, HTTP protocol, SSL (Secure socket layer), HTTPS, HTML, Web servers, Web client, Proxy server, Caching, Cookies, Application server, Thin client, Thick client, Daemon, Client side scripting, Server side scripting, CGI, Dynamic web pages, Digital certificates and list of HTTP status codes


Internet

A global network connecting millions of computers.

World Wide Web (the Web)

An information-sharing model built on top of the Internet. It uses the HTTP protocol and browsers (such as Internet Explorer) to access Web pages formatted in HTML and linked via hyperlinks. The Web is only a subset of the Internet; other uses of the Internet include email (via SMTP), Usenet, instant messaging, and file transfer (via FTP).

URL (Uniform Resource Locator)

The address of documents and other content on the Web. It consists of a protocol, a domain, and a file. The protocol can be HTTP, FTP, Telnet, News, etc.; the domain name is the DNS name of the server; and the file can be static HTML, DOC, JPEG, etc. In other words, URLs are strings that uniquely identify resources on the Internet.
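The three parts are easy to see with Python's standard-library URL parser (the example URL is, of course, illustrative):

```python
from urllib.parse import urlparse

# protocol://domain/file
parts = urlparse("http://www.example.com/docs/index.html")
print(parts.scheme)  # http               (the protocol)
print(parts.netloc)  # www.example.com    (the domain)
print(parts.path)    # /docs/index.html   (the file)
```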


TCP/IP

The protocol suite used to send data over the Internet. TCP/IP consists of four layers: the Application layer, Transport layer, Network layer, and Link layer.

Internet Protocols:

Application Layer - DNS, TLS/SSL, TFTP, FTP, HTTP, IMAP, IRC, ...

Transport Layer - TCP, UDP, DCCP, SCTP, IL, RUDP, ...

Network Layer - IP (IPv4, IPv6), ICMP, IGMP, ARP, RARP, ...

Link Layer - Ethernet, Wi-Fi, Token Ring, PPP, SLIP, FDDI, ATM, DTM, Frame Relay, SMDS, ...

TCP (Transmission Control Protocol)

Enables two devices to establish a connection and exchange data.

In the Internet protocol suite, TCP is the intermediate layer

between the Internet Protocol below it, and an application above it.

Applications often need reliable pipe-like connections to each other,

whereas the Internet Protocol does not provide such streams, but

rather only unreliable packets. TCP does the task of the transport

layer in the simplified OSI model of computer networks.

It is one of the core protocols of the Internet protocol suite. Using

TCP, applications on networked hosts can create connections to one

another, over which they can exchange data or packets. The

protocol guarantees reliable and in-order delivery of sender to

receiver data. TCP also distinguishes data for multiple, concurrent

applications (e.g. Web server and e-mail server) running on the

same host.


IP (Internet Protocol)

Specifies the format of data packets and the addressing protocol. The Internet Protocol (IP) is a data-oriented protocol used for communicating data across a packet-switched internetwork. IP is a network-layer protocol in the Internet protocol suite. Two aspects of IP are addressing and routing. Addressing refers to how end hosts are assigned IP addresses. IP routing is performed by all hosts, but most importantly by internetwork routers.

IP Address

A unique number assigned to each connected device. It is often assigned dynamically to users by an ISP on a session-by-session basis (a dynamic IP address), but is increasingly becoming dedicated, particularly with always-on broadband connections (a static IP address).


Packet

A portion of a message sent over a TCP/IP network. It contains content and destination information.

HTTP (Hypertext Transfer Protocol)

The underlying protocol of the World Wide Web. It defines how messages are formatted and transmitted over a TCP/IP network for Web sites, and what actions Web servers and Web browsers take in response to various commands.

HTTP is stateless. The advantage of a stateless protocol is that hosts

don't need to retain information about users between requests, but

this forces the use of alternative methods for maintaining users'

state, for example, when a host would like to customize content for

a user who has visited before. The common method for solving this

problem involves the use of sending and requesting cookies. Other

methods are session control, hidden variables, etc

Example: when you enter a URL in your browser, an HTTP command is sent to the Web server telling it to fetch and transmit the requested Web page.


HEAD: Asks for a response identical to the one that would correspond to a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.


GET : Requests a representation of the specified

resource. By far the most common method used on

the Web today.


POST: Submits user data (e.g. from an HTML form) to

the identified resource. The data is included in the

body of the request.


PUT: Uploads a representation of the specified resource.



DELETE: Deletes the specified resource (rarely implemented).



TRACE: Echoes back the received request, so that a

client can see what intermediate servers are adding or

changing in the request.




OPTIONS: Returns the HTTP methods that the server supports.

This can be used to check the functionality of a web server.



CONNECT: For use with a proxy that can change to

being an SSL tunnel.
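To make the methods concrete, here is what GET and POST requests look like on the wire (a hand-built sketch in Python; the host, path, and form data are invented, and real browsers or HTTP libraries construct these for you):

```python
host = "www.example.com"

# A GET request: method line, headers, then a blank line.
get_request = (
    "GET /index.html HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# A POST request additionally carries the user data in the body.
body = "name=alice&city=rome"
post_request = (
    "POST /form HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    + body
)

print(get_request.splitlines()[0])    # GET /index.html HTTP/1.1
print(post_request.splitlines()[-1])  # name=alice&city=rome
```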

SSL (Secure Sockets Layer)

A protocol for establishing a secure connection for transmission; it uses the HTTPS convention.

SSL provides endpoint authentication and communications privacy

over the Internet using cryptography. In typical use, only the server

is authenticated (i.e. its identity is ensured) while the client remains

unauthenticated; mutual authentication requires public key

infrastructure (PKI) deployment to clients. The protocols allow

client/server applications to communicate in a way designed to

prevent eavesdropping, tampering, and message forgery.

SSL involves a number of basic phases:


Peer negotiation for algorithm support


Public key encryption-based key exchange and

certificate-based authentication


Symmetric cipher-based traffic encryption


During the first phase, the client and server negotiate

which cryptographic algorithms will be used. Current

implementations support the following choices:


for public-key cryptography: RSA, Diffie-Hellman,

DSA or Fortezza;


for symmetric ciphers: RC2, RC4, IDEA, DES, or Triple DES;


For one-way hash functions: MD5 or SHA.
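The one-way hash functions mentioned (MD5, the SHA family) are available in Python's standard library; the same input always produces the same fixed-length digest, which cannot feasibly be reversed:

```python
import hashlib

message = b"hello"
print(hashlib.md5(message).hexdigest())
print(hashlib.sha1(message).hexdigest())

# Hashing is deterministic: recomputing gives the identical digest.
assert hashlib.sha1(message).hexdigest() == hashlib.sha1(b"hello").hexdigest()
```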


HTTPS

A URI scheme which is syntactically identical to the http: scheme

normally used for accessing resources using HTTP. Using an https:

URL indicates that HTTP is to be used, but with a different default

port and an additional encryption/authentication layer between

HTTP and TCP. This system was invented by Netscape

Communications Corporation to provide authentication and

encrypted communication and is widely used on the Web for

security-sensitive communication, such as payment transactions.

HTML (Hypertext Markup Language)

The authoring language used to create documents on the World

Wide Web

Hundreds of tags can be used to format and layout a Web page's

content and to hyperlink to other Web content.


Hyperlink

Used to connect a user to other parts of a web site and to other web

sites and web-enabled services.

Web server

A computer that is connected to the Internet. Hosts Web content

and is configured to share that content.

A Web server is responsible for accepting HTTP requests from clients, which are known as Web browsers, and serving them Web pages, which are usually HTML documents and linked objects (images, etc.).



Common Web servers include:

Apache HTTP Server from the Apache Software Foundation



Internet Information Services (IIS) from Microsoft.


Sun Java System Web Server from Sun Microsystems,

formerly Sun ONE Web Server, iPlanet Web Server,

and Netscape Enterprise Server.


Zeus Web Server from Zeus Technology

Web client

Most commonly in the form of Web browser software such as

Internet Explorer or Netscape

Used to navigate the Web and retrieve Web content from Web

servers for viewing.

Proxy server

An intermediary server that provides a gateway to the Web (e.g., employee access to the Web most often goes through a proxy). It improves performance through caching and filters Web content. The proxy server will also log each user interaction.


Caching

Web browsers and proxy servers save a local copy of downloaded content; pages that display personal information should be set to prohibit caching.

Web form

A portion of a Web page containing blank fields that users can fill in with data (including personal information) and submit for the Web server to process.

Web server log

Every time a Web page is requested, the Web server may automatically log the following information:


the IP address of the visitor


date and time of the request


the URL of the requested file


the URL the visitor came from immediately before

(referrer URL)


the visitor's Web browser type and operating system
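A quick sketch of pulling those fields out of one access-log line (the sample line and regex follow the common "combined" log format; both are illustrative):

```python
import re

# One sample entry in combined log format (illustrative values).
line = ('203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] '
        '"GET /index.html HTTP/1.1" 200 2326 '
        '"http://www.example.com/start.html" "Mozilla/5.0"')

# ip, date/time, request, status, size, referrer, user agent
pattern = (r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
           r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
           r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

m = re.match(pattern, line)
print(m.group("ip"))        # 203.0.113.7
print(m.group("referrer"))  # http://www.example.com/start.html
```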


Cookies

A small text file provided by a Web server and stored on a user's PC. The text can be sent back to the server every time the browser requests a page from the server. Cookies are used to identify a user as they navigate through a Web site and/or return at a later time, and they enable a range of functions, including personalization of content.

Session vs. persistent cookies

A session is a unique ID assigned to the client browser by a web server to identify the state of the client, because web servers are stateless.

A session cookie is stored only while the user is connected to the particular Web server; the cookie is deleted when the user closes the browser.

Persistent cookies are set to expire at some point in the future; many are set to expire a number of years ahead.
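Python's standard library shows the difference compactly (cookie names and values are illustrative): a cookie becomes persistent only when it carries an expiry attribute:

```python
from http.cookies import SimpleCookie

# Parsing a Cookie header the browser sent back to the server.
incoming = SimpleCookie("session_id=abc123; prefs=dark")
print(incoming["session_id"].value)  # abc123

# Building a Set-Cookie header: adding "expires" makes it persistent.
outgoing = SimpleCookie()
outgoing["prefs"] = "dark"
outgoing["prefs"]["expires"] = "Sat, 01 Jan 2028 00:00:00 GMT"
print("expires" in outgoing["prefs"].output())  # True
```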


Socket

A socket is a network communications endpoint.
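A pair of connected endpoints can be created locally (no network needed) to see the idea:

```python
import socket

# Two connected endpoints: bytes written to one arrive at the other.
a, b = socket.socketpair()
a.sendall(b"ping")
print(b.recv(4).decode())  # ping
a.close()
b.close()
```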

Application Server

An application server is a server computer in a computer network

dedicated to running certain software applications. The term also

refers to the software installed on such a computer to facilitate the

serving of other applications. Application server products typically

bundle middleware to enable applications to intercommunicate

with various qualities of service — reliability, security, nonrepudiation,

and so on. Application servers also provide an API to

programmers, so that they don't have to be concerned with the

operating system or the huge array of interfaces required of a

modern web-based application. Communication occurs through the

web in the form of HTML and XML, as a link to various databases,

and, quite often, as a link to systems and devices ranging from huge

legacy applications to small information devices, such as an atomic

clock or a home appliance.

An application server exposes business logic to client applications

through various protocols, possibly including HTTP. The server

exposes this business logic through a component API, such as the

EJB (Enterprise JavaBean) component model found on J2EE (Java

2 Platform, Enterprise Edition) application servers. Moreover, the

application server manages its own resources. Such gate-keeping

duties include security, transaction processing, resource pooling,

and messaging

Ex: JBoss (Red Hat), WebSphere (IBM), Oracle Application Server

10g (Oracle Corporation) and WebLogic (BEA)

Thin Client

A thin client is a computer (client) in client-server architecture

networks which has little or no application logic, so it has to depend

primarily on the central server for processing activities. It is

designed to be especially small so that the bulk of the data

processing occurs on the server.

Thick client

It is a client that performs the bulk of any data processing

operations itself, and relies on the server it is associated with

primarily for data storage.


Daemon

It is a computer program that runs in the background, rather than

under the direct control of a user; they are usually instantiated as

processes. Typically daemons have names that end with the letter

"d"; for example, syslogd is the daemon which handles the system

log. Daemons typically do not have any existing parent process, but

reside directly under init in the process hierarchy. Daemons usually

become daemons by forking a child process and then making the

parent process kill itself, thus making init adopt the child. This

practice is commonly known as "fork off and die." Systems often

start (or "launch") daemons at boot time: they often serve the

function of responding to network requests, hardware activity, or

other programs by performing some task. Daemons can also

configure hardware (like devfsd on some Linux systems), run

scheduled tasks (like cron), and perform a variety of other tasks.

Client-side scripting

Generally refers to the class of computer programs on the web that

are executed client-side, by the user's web browser, instead of

server-side (on the web server). This type of computer

programming is an important part of the Dynamic HTML

(DHTML) concept, enabling web pages to be scripted; that is, to

have different and changing content depending on user input,

environmental conditions (such as the time of day), or other variables.


Web authors write client-side scripts in languages such as

JavaScript (Client-side JavaScript) or VBScript, which are based on

several standards:


HTML scripting



Document Object Model

Client-side scripts are often embedded within

an HTML document, but they may also be

contained in a separate file, which is referenced

by the document (or documents) that use it.

Upon request, the necessary files are sent to

the user's computer by the web server (or

servers) on which they reside. The user's web

browser executes the script, then displays the

document, including any visible output from

the script. Client-side scripts may also contain

instructions for the browser to follow if the

user interacts with the document in a certain

way, e.g., clicks a certain button. These

instructions can be followed without further

communication with the server, though they

may require such communication.

Server-side Scripting

It is a web server technology in which a user's request is fulfilled by

running a script directly on the web server to generate dynamic

HTML pages. It is usually used to provide interactive web sites that

interface to databases or other data stores. This is different from

client-side scripting where scripts are run by the viewing web

browser, usually in JavaScript. The primary advantage of server-side scripting is the ability to highly customize the response based on the user's requirements, access rights, or queries into data stores. Common server-side scripting technologies include:



ASP: Microsoft designed solution allowing various

languages (though generally VBscript is used) inside a

HTML-like outer page, mainly used on Windows but

with limited support on other platforms.


ColdFusion: Cross platform tag based commercial

server side scripting system.


JSP: A Java-based system for embedding code in

HTML pages.


Lasso: A Datasource neutral interpreted programming

language and cross platform server.


SSI: A fairly basic system which is part of the common

apache web server. Not a full programming

environment by far but still handy for simple things

like including a common menu.

PHP: Common open-source solution based on including code in its own language into an HTML page.



Server-side JavaScript: A language generally used on

the client side but also occasionally on the server side.


SMX : Lisplike opensource language designed to be

embedded into an HTML page.

Common Gateway Interface (CGI)

is a standard protocol for interfacing external application software

with an information server, commonly a web server. This allows the

server to pass requests from a client web browser to the external

application. The web server can then return the output from the

application to the web browser.
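The contract is simple enough to sketch: a CGI program writes an HTTP header block, a blank line, and then the body to standard output, and the web server relays that to the browser (a minimal illustration, not tied to any particular server):

```python
# Minimal CGI-style response: header block, blank line, then the body.
header = "Content-Type: text/html"
body = "<html><body><h1>Hello from CGI</h1></body></html>"
response = header + "\n\n" + body   # the blank line ends the headers
print(response)
```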

Dynamic Web pages:

can be defined as: (1) Web pages containing dynamic content (e.g.,

images, text, form fields, etc.) that can change/move without the

Web page being reloaded, or (2) Web pages that are produced on-the-fly by server-side programs, frequently based on parameters in

the URL or from an HTML form. Web pages that adhere to the first

definition are often called Dynamic HTML or DHTML pages.

Client-side languages like JavaScript are frequently used to produce

these types of dynamic web pages. Web pages that adhere to the

second definition are often created with the help of server-side

languages such as PHP, Perl, ASP/.NET, JSP, and others. These

server-side languages typically use the Common Gateway Interface

(CGI) to produce dynamic web pages.

Digital Certificates

In cryptography, a public key certificate (or identity certificate) is a certificate

which uses a digital signature to bind together a public key with an identity —

information such as the name of a person or an organization, their address, and

so forth. The certificate can be used to verify that a public key belongs to an individual.


In a typical public key infrastructure (PKI) scheme, the signature will be of a certificate authority (CA). In a web of trust scheme, the signature is of either the user (a self-signed certificate) or other users ("endorsements"). In either case, the signatures on a certificate are attestations by the certificate signer that the identity information and the public key belong together.


Certificates enable the large-scale use of public-key cryptography.

Securely exchanging secret keys amongst users becomes impractical to the point

of effective impossibility for anything other than quite small networks. Public key

cryptography provides a way to avoid this problem. In principle, if Alice wants

others to be able to send her secret messages, she need only publish her public

key. Anyone possessing it can then send her secure information. Unfortunately,

David could publish a different public key (for which he knows the related private

key) claiming that it is Alice's public key. In so doing, David could intercept and

read at least some of the messages meant for Alice. But if Alice builds her public

key into a certificate and has it digitally signed by a trusted third party (Trent),

anyone who trusts Trent can merely check the certificate to see whether Trent

thinks the embedded public key is Alice's. In typical Public-key Infrastructures

(PKIs), Trent will be a CA, who is trusted by all participants. In a web of trust,

Trent can be any user, and whether to trust that user's attestation that a

particular public key belongs to Alice will be up to the person wishing to send a

message to Alice.

In large-scale deployments, Alice may not be familiar with Bob's certificate

authority (perhaps they each have a different CA — if both use employer CAs,

different employers would produce this result), so Bob's certificate may also

include his CA's public key signed by a "higher level" CA2, which might be

recognized by Alice. This process leads in general to a hierarchy of certificates,

and to even more complex trust relationships. Public key infrastructure refers,

mostly, to the software that manages certificates in a large-scale setting. In X.509

PKI systems, the hierarchy of certificates is always a top-down tree, with a root

certificate at the top, representing a CA that is 'so central' to the scheme that it

does not need to be authenticated by some trusted third party.

A certificate may be revoked if it is discovered that its related private key has

been compromised, or if the relationship (between an entity and a public key)

embedded in the certificate is discovered to be incorrect or has changed; this

might occur, for example, if a person changes jobs or names. A revocation will

likely be a rare occurrence, but the possibility means that when a certificate is

trusted, the user should always check its validity. This can be done by comparing

it against a certificate revocation list (CRL) — a list of revoked or cancelled

certificates. Ensuring that such a list is up-to-date and accurate is a core function

in a centralized PKI, one which requires both staff and budget and one which is

therefore sometimes not properly done. To be effective, it must be readily

available to any who needs it whenever it is needed and must be updated

frequently. The other way to check a certificate validity is to query the certificate

authority using the Online Certificate Status Protocol (OCSP) to know the status

of a specific certificate.

Both of these methods appear to be on the verge of being supplanted by XKMS.

This new standard, however, is yet to see widespread implementation.

A certificate typically includes:

The public key being signed.

A name, which can refer to a person, a computer or an organization.

A validity period.

The location (URL) of a revocation center.

The most common certificate standard is the ITU-T X.509. X.509 is being

adapted to the Internet by the IETF PKIX working group.


Verisign introduced the concept of three classes of digital certificates:

Class 1 for individuals, intended for email;

Class 2 for organizations, for which proof of identity is required; and

Class 3 for servers and software signing, for which independent verification and

checking of identity and authority is done by the issuing certificate authority (CA)

List of HTTP status codes

1xx Informational

Request received, continuing process.

100: Continue

101: Switching Protocols

2xx Success

The action was successfully received, understood, and accepted.

200: OK

201: Created

202: Accepted

203: Non-Authoritative Information

204: No Content

205: Reset Content

206: Partial Content

3xx Redirection

The client must take additional action to complete the request.

300: Multiple Choices

301: Moved Permanently

302: Moved Temporarily (HTTP/1.0)

302: Found (HTTP/1.1)


303: See Other (HTTP/1.1)

304: Not Modified

305: Use Proxy

Many HTTP clients (such as Mozilla and Internet Explorer) don't correctly

handle responses with this status code.

306: (no longer used, but reserved)

307: Temporary Redirect

4xx Client Error

The request contains bad syntax or cannot be fulfilled.

400: Bad Request

401: Unauthorized

Similar to 403/Forbidden, but specifically for use when authentication is possible

but has failed or not yet been provided. See basic authentication scheme and

digest access authentication.

402: Payment Required

403: Forbidden

404: Not Found

405: Method Not Allowed

406: Not Acceptable

407: Proxy Authentication Required

408: Request Timeout

409: Conflict

410: Gone

411: Length Required

412: Precondition Failed

413: Request Entity Too Large

414: Request-URI Too Long

415: Unsupported Media Type

416: Requested Range Not Satisfiable

417: Expectation Failed

5xx Server Error

The server failed to fulfill an apparently valid request.

500: Internal Server Error

501: Not Implemented

502: Bad Gateway

503: Service Unavailable

504: Gateway Timeout

505: HTTP Version Not Supported

509: Bandwidth Limit Exceeded
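These codes map directly onto Python's standard `HTTPStatus` enum, which is handy when asserting on responses in automated test scripts:

```python
from http import HTTPStatus

print(HTTPStatus.NOT_FOUND.value, HTTPStatus.NOT_FOUND.phrase)  # 404 Not Found
print(HTTPStatus.OK.value, HTTPStatus.OK.phrase)                # 200 OK

# The status class (1xx-5xx) is just the leading digit.
print(HTTPStatus.BAD_GATEWAY.value // 100)  # 5 -> server error
```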