Thursday, August 25, 2011

Web Application Load, Stress and Performance Testing Using WAPT

Why do most manual testers fail when testing websites for performance? There are a couple of reasons:
- They don’t have the proper tools to test website performance, and
- They don’t have the skills required for performance testing.

Does that mean you should wait until your stakeholders report performance glitches in the web application you developed? Definitely not. Many testers are good at testing websites manually, and they report almost every defect under standard tests. BUT when the same testers perform load or stress tests, they get stuck at either the resource level (required tools) or the skill level.

I suggest not taking any risk if you are committed to defect-free service. Ask for the required tools and train your staff in the necessary skills. Today, I’m going to review a load, stress and performance testing tool for websites. The tool is called WAPT – Web Application Load, Stress and Performance Testing.


WAPT allows you to perform website load and performance testing by creating heavy load from a single workstation or multiple workstations. When you set up and run your tests with this tool, you can get a performance report for your website or web application within a matter of minutes. WAPT uses powerful virtual users that behave like real-world users, with full control over how these virtual users are customized.

Measuring website performance:

Have you ever wondered:
- How many users can work simultaneously on your website with an acceptable quality of service?
- How many visitors your website can handle per day or per hour?
- What is your website’s response time under load?

All of these questions are nothing but measures of the website’s “performance characteristics”.

WAPT performs tests by emulating the activity of many virtual users. Each virtual user can have its own profile settings. You can have thousands of virtual users acting on your website simultaneously, performing any activity, like reading from or writing to your web server. Once you set the number of virtual users to act on your website, you have the option to run your tests for a specified time or a specified number of user sessions.

Analyzing the test report:
Test results consist of charts updated in real time, which you can monitor while your tests are running. A final comprehensive report is provided at the end of the tests.

Here are the important parameters to be monitored on the test report:
Error Rate: The failure rate against the total number of tests run. Errors may be due to high load on the server or to network problems and timeouts.

Response Time: Obviously a key parameter to check when you run tests for website performance. Response time indicates the time required by the server to provide a correct reply to a request.

Number of Pages per Second: The number of page requests successfully completed by the server per second.

How to conclude performance tests?
These performance criteria change during each test pass under different load conditions. You need to decide what your acceptable load limit is and whether your server is able to serve that load.

E.g. if you expect your server to handle 100 requests successfully per second, then anything below this is a failure of your server that needs to be tackled.
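
To make these three report figures concrete, here is a minimal hand-rolled sketch in Python that measures them from a single workstation (the URL and request count are placeholders; a tool like WAPT produces the same figures at far larger scale, with configurable virtual users):

    import time
    import urllib.request

    URL = "http://example.com/"   # placeholder target site
    REQUESTS = 50

    errors = 0
    times = []
    start = time.time()
    for _ in range(REQUESTS):
        t0 = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            times.append(time.time() - t0)
        except Exception:
            errors += 1           # server error, network problem or timeout
    elapsed = time.time() - start

    print("Error rate: %.1f%%" % (100.0 * errors / REQUESTS))
    if times:
        print("Avg response time: %.3f s" % (sum(times) / len(times)))
    print("Pages per second: %.2f" % (len(times) / elapsed))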

How to Record tests:

WAPT works like any other record-and-playback tool, but its real strength lies in its parameterization, where you can configure any parameter, from the website URL to the user session, to act like a real user.

Testing with WAPT in 5 simple steps:

Record->Configure->Verify->Execute->Analyze

WAPT uses an embedded Microsoft Internet Explorer to record your interaction with the website. When you record your test, all dynamic parameters are recorded as static values, which can be configured later for test execution. You then need to configure each user with different settings: unique sessions, number of virtual users, values for dynamic parameters, etc. Once you are done with recording and configuration, verify that your test is ready to run, and then execute the performance tests if everything looks OK. Finally, analyze the reports to decide whether the website performance test passed or failed against your set of defined standards. That’s it.

WAPT is available in two versions:
- Standard version (latest: WAPT 7.1)
- Professional version of this stress and performance testing tool (latest: WAPT Pro 2.1)

What can WAPT Pro do for you?
- Use several computers to generate load on the website
- Measure web server performance in terms of CPU, RAM or network usage
- Include the execution of JavaScript code in virtual user profiles

If you don’t want to specify every parameter manually, you can use technology-specific modules to significantly improve your test experience.

The following additional modules can be downloaded and installed along with the standard or professional version of WAPT:
- Module for ASP.NET testing
- Module for Adobe Flash testing
- Module for JSON format

Finally, no review is complete without a list of pros and cons.

WAPT Pros:

- Easy to install – takes only 5 minutes
- Easy to use, with a very short learning curve
- You get run-time reports, so you can decide whether to continue the test or not, saving you a lot of time.
- Detailed test report with graphical representation.
- Supports secure HTTPS protocol.
- 30 days free trial available!

WAPT Cons:

- Only the Windows platform is supported for installing the tool (but you can test a website running on any OS and technology)
- No scripting ability
- It’s not free ;-)

How to try this tool?
You can download a 30-day trial version of WAPT from here.

What is the difference between Performance Testing, Load Testing and Stress Testing?

1) Performance Testing:

Performance testing is performed to ascertain how the components of a system are performing in a given situation. Resource usage, scalability and reliability of the product are also validated under this testing. Performance testing is a subset of performance engineering, which focuses on addressing performance issues in the design and architecture of a software product.

Performance Testing Goal:

The primary goal of performance testing is to establish the benchmark behaviour of the system. There are a number of industry-defined benchmarks which should be met during performance testing.

Performance testing does not aim to find defects in the application; it addresses the more critical task of testing against the benchmarks and standards set for the application. Accuracy and close monitoring of the performance and the results of the test are the primary characteristics of performance testing.

Example:

For instance, you can test the application’s network performance on a Connection Speed vs. Latency chart. Latency is the time it takes data to travel from source to destination. Thus, a 70kb page should take no more than 15 seconds to load on the worst connection, a 28.8kbps modem (latency = 1000 milliseconds), while the same page should appear within 5 seconds on an average 256kbps DSL connection (latency = 100 milliseconds). A 1.5mbps T1 connection (latency = 50 milliseconds) would have its performance benchmark set at 1 second.
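
The benchmarks above follow from a simplified model: load time ≈ latency + page size / connection speed. A rough sketch of that arithmetic in Python (taking “70kb” literally as 70 kilobits; if the page is 70 kilobytes, multiply the size by 8, and note that real page loads also involve rendering and multiple round trips):

    def load_time_s(page_kbits, speed_kbps, latency_ms):
        # Simplified model: latency plus raw transfer time.
        return latency_ms / 1000.0 + page_kbits / speed_kbps

    for name, speed, latency, benchmark in [
        ("28.8kbps modem", 28.8, 1000, 15),
        ("256kbps DSL",    256,  100,  5),
        ("1.5mbps T1",     1500, 50,   1),
    ]:
        t = load_time_s(70, speed, latency)
        print("%-15s %.2f s (benchmark: %d s)" % (name, t, benchmark))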

For example, the time difference between the generation of a request and the acknowledgement of the response should be in the range of x ms to y ms, where x and y are standard numbers. A successful performance test should surface most of the performance issues, which could be related to the database, network, software, hardware, etc.

2) Load Testing:

Load testing is meant to test the system by constantly and steadily increasing the load on it until it reaches its threshold limit. It is the simplest form of testing and employs automation tools such as LoadRunner or any other good tool available. Load testing is also known by names like volume testing and endurance testing.

The sole purpose of load testing is to assign the system the largest job it could possibly handle, to test its endurance and monitor the results. An interesting fact is that sometimes the system is fed an empty task to determine its behaviour in a zero-load situation.

Load Testing Goal:

The goals of load testing are to expose defects in the application related to buffer overflows, memory leaks and memory mismanagement. Another target of load testing is to determine the upper limit of all the components of the application, like the database, hardware and network, so that the application can manage the anticipated load in future. The issues that eventually come out of load testing may include load balancing problems, bandwidth issues, capacity of the existing system, etc.

Example:

For example, to check the email functionality of an application, it could be flooded with 1000 users at a time. Now, 1000 users can fire email transactions (read, send, delete, forward, reply) in many different ways. If we take one transaction per user per hour, that is 1000 transactions per hour. By simulating 10 transactions per user, we could load test the email server by occupying it with 10,000 transactions per hour.
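
A sketch of that arithmetic, together with one simple way such a load could be fired from a script (send_email_transaction is a hypothetical stand-in for whatever drives the mail server; a real load test would use a dedicated tool):

    from concurrent.futures import ThreadPoolExecutor

    USERS = 1000
    TRANSACTIONS_PER_USER = 10    # per hour
    print("Target load: %d transactions/hour" % (USERS * TRANSACTIONS_PER_USER))

    def send_email_transaction(user_id):
        # Hypothetical placeholder: in a real test this would read, send,
        # delete, forward or reply to a mail via the server's interface.
        pass

    # Fire the transactions for all users concurrently.
    with ThreadPoolExecutor(max_workers=100) as pool:
        for user in range(USERS):
            for _ in range(TRANSACTIONS_PER_USER):
                pool.submit(send_email_transaction, user)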

3) Stress testing

Under stress testing, various activities are carried out to overload the existing resources with excess jobs in an attempt to break the system down. Negative testing, which includes removing components from the system, is also done as part of stress testing. Also known as fatigue testing, this testing should capture the stability of the application by testing it beyond its bandwidth capacity.

The purpose behind stress testing is to ascertain how the system fails and to monitor how gracefully it recovers. The challenge is to set up a controlled environment before launching the test, so that you can precisely and repeatedly capture the behaviour of the system under the most unpredictable scenarios.

Stress Testing Goal:

The goal of stress testing is to analyse post-crash reports to define the behaviour of the application after failure. The biggest issue is to ensure that the system does not compromise the security of sensitive data after a failure. In successful stress testing, the system comes back to normality along with all its components even after the most terrible breakdown.

Example:

As an example, a word processor like Writer 1.1.0 by OpenOffice.org is used to create letters, presentations, spreadsheets, etc. The purpose of our stress testing is to load it with an excess of characters.

To do this, we repeatedly paste a line of data until the application reaches its threshold limit for handling a large volume of text. As soon as the character count reaches 65,535 characters, it simply refuses to accept more data. The result of stress testing on Writer 1.1.0 is that it does not crash under stress: it handles the situation gracefully, which shows that the application behaves correctly even under rigorous stress conditions.
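
The same idea can be scripted against any component: keep feeding it more input until it refuses or fails, and record how it behaves at the limit. A generic sketch, assuming a hypothetical handle_text function with the 65,535-character limit from the Writer example:

    LIMIT = 65535   # threshold from the Writer 1.1.0 example

    def handle_text(text):
        # Hypothetical component under stress; refuses input past its limit.
        if len(text) > LIMIT:
            raise ValueError("too much data")

    text = ""
    line = "some line of data " * 10
    while True:
        try:
            handle_text(text + line)
            text += line          # accepted: keep growing the input
        except ValueError:
            # A graceful refusal, not a crash: the stress test passes.
            print("Refused gracefully at %d characters" % len(text))
            break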

Database Testing – Practical Tips and Insight on How to Test Database

A database is one of the inevitable parts of a software application these days. It does not matter at all whether it is web or desktop, client-server or peer-to-peer, enterprise or individual business: a database is working at the backend. Similarly, whether it is healthcare or finance, leasing or retail, a mailing application or spaceship control, behind the scenes a database is always in action.

Moreover, as the complexity of an application increases, the need for a stronger and more secure database emerges. In the same way, for applications with a high frequency of transactions (e.g. banking or finance applications), the necessity of a fully featured DB tool is coupled with it.

Currently, several database tools are available in the market, e.g. MS Access 2010, MS SQL Server 2008 R2, Oracle 10g, Oracle Financials, MySQL, PostgreSQL, DB2, etc. All of these vary in cost, robustness, features and security. Each of these DBs possesses its own benefits and drawbacks. One thing is certain: a business application must be built using one of these or another DB tool.

Before I start digging into the topic, let me lay out some groundwork. When the application is under execution, the end user mainly utilizes the ‘CRUD’ operations facilitated by the DB tool.

C: Create – when the user ‘Saves’ any new transaction, the ‘Create’ operation is performed.
R: Retrieve – when the user ‘Searches’ for or ‘Views’ any saved transaction, the ‘Retrieve’ operation is performed.
U: Update – when the user ‘Edits’ or ‘Modifies’ an existing record, the ‘Update’ operation is performed.
D: Delete – when the user ‘Removes’ any record from the system, the ‘Delete’ operation is performed.

It does not matter at all which DB is used or how the operation is performed. The end user has no concern whether a join or sub-query, trigger or stored procedure, query or function was used to do what he wanted. The interesting thing is that every DB operation performed by a user from the UI of any application is one of the above four, abbreviated as CRUD.

As a database tester, one should focus on the following DB testing activities:

What to test in database testing:

1) Ensure data mapping:

Make sure that the mapping between the different forms or screens of the AUT and the relations of its DB is not only accurate but also in line with the design documents. For all CRUD operations, verify that the respective tables and records are updated when the user clicks ‘Save’, ‘Update’, ‘Search’ or ‘Delete’ from the GUI of the application.

2) Ensure ACID Properties of Transactions:

The ACID properties of DB transactions refer to ‘Atomicity’, ‘Consistency’, ‘Isolation’ and ‘Durability’. Proper testing of these four properties must be done during the DB testing activity. This area demands more rigorous, thorough and keen testing when the database is distributed.
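
As an illustration of one of the four properties: atomicity requires that a transaction completes fully or not at all. A minimal sketch of such a check using Python’s built-in sqlite3 module (the accounts table is hypothetical; a real test would run against the AUT’s own database):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    con.execute("INSERT INTO accounts VALUES (1, 100), (2, 100)")
    con.commit()

    try:
        with con:  # one transaction: debit and credit must both happen, or neither
            con.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
            # Simulate a failure before the matching credit is applied.
            raise RuntimeError("simulated mid-transaction failure")
    except RuntimeError:
        pass

    # Atomicity holds only if the debit was rolled back.
    balance = con.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
    assert balance == 100, "partial transaction persisted: atomicity violated"
    print("Atomicity check passed")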

3) Ensure Data Integrity:

Consider that different modules (i.e. screens or forms) of the application use the same data in different ways and perform all the CRUD operations on it. In that case, make sure that the latest state of the data is reflected everywhere. The system must show the updated and most recent values, or the status of such shared data, on all forms and screens. This is called data integrity.

4) Ensure Accuracy of implemented Business Rules:

Today, databases are not meant only to store records. In fact, DBs have evolved into extremely powerful tools that provide ample support to developers in implementing business logic at the DB level. Some simple examples of powerful DB features are referential integrity, relational constraints, triggers and stored procedures. So, using these and many other features offered by DBs, developers implement business logic at the DB level. The tester must ensure that the implemented business logic is correct and works accurately.

The points above describe the four most important ‘what tos’ of database testing. Now I will shed some light on the ‘how tos’ of DB testing. But first of all, I should explicitly mention an important point: DB testing is a business-critical task, and it should never be assigned to a fresh or inexperienced resource without proper training.

How To Test Database:

1. Create your own Queries

In order to test the DB properly and accurately, a tester should first of all have very good knowledge of SQL, and especially of DML (Data Manipulation Language) statements. Secondly, the tester should acquire a good understanding of the internal DB structure of the AUT. If these two prerequisites are fulfilled, the tester is ready to test the DB with complete confidence: (s)he will perform any CRUD operation from the UI of the application, and then verify the result using an SQL query.

This is the best and most robust way of DB testing, especially for applications of small to medium complexity. Yet the two prerequisites described are necessary; otherwise this way of DB testing cannot be adopted by the tester.

Moreover, if the application is very complex, it may be hard or impossible for the tester to write all of the needed SQL queries himself or herself; for some complex queries, the tester may get help from the developer. I always recommend this method for testers, because it not only gives them confidence in the testing they have performed but also enhances their SQL skills.
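
A tiny sketch of the method, with sqlite3 standing in for the AUT’s database (the file, table, columns and values are all hypothetical): after performing ‘Save’ for a new customer from the application UI, the tester fires his or her own query and checks that the record landed correctly:

    import sqlite3

    con = sqlite3.connect("aut.db")   # hypothetical AUT database file

    # 'Save' was just performed from the UI for customer 'John Doe', city 'Pune'.
    row = con.execute(
        "SELECT name, city FROM customers WHERE name = ?", ("John Doe",)
    ).fetchone()

    assert row is not None, "Create failed: record not found in the DB"
    assert row[1] == "Pune", "Create failed: wrong city stored"
    print("CRUD 'Create' verified at the DB level")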

2. Observe data table by table

If the tester is not strong in SQL, he or she may instead verify the result of a CRUD operation, performed through the GUI of the application, by viewing the tables (relations) of the DB. This way may be a bit tedious and cumbersome, though, especially when the DB and its tables hold a large amount of data.

Similarly, this way of DB testing may be extremely difficult for the tester if the data to be verified belongs to multiple tables. It also requires at least a good knowledge of the table structure of the AUT.

3. Get query from developer

This is the simplest way for the tester to test the DB: perform any CRUD operation from the GUI and verify its impact by executing the respective SQL query obtained from the developer. It requires neither good knowledge of SQL nor good knowledge of the application’s DB structure.

So this method seems an easy and good choice for testing the DB. But its drawback can be havoc: what if the query given by the developer is semantically wrong, or does not fulfill the user’s requirement correctly? In that situation the client will report the issue and demand a fix – and that is the best case. The worst case is that the client may refuse to accept the application.

Conclusion:

The database is the core and critical part of almost every software application, so DB testing of an application demands keen attention, good SQL skills, proper knowledge of the DB structure of the AUT, and proper training.

In order to have a confident test report of this activity, the task should be assigned to a resource with all four qualities stated above. Otherwise, shipment-time surprises, bug identification by the client, improper or unintended application behavior, or even wrong outputs of business-critical tasks are likely to be observed. Get this task done by the most suitable resources and pay it the attention it deserves.

Monday, August 22, 2011

How to write a good bug report? Tips and Tricks

Why a good bug report?
If your bug report is effective, chances are higher that it will get fixed. Fixing a bug thus depends on how effectively you report it. Reporting a bug is nothing but a skill, and I will tell you how to achieve it.

“The point of writing a problem report (bug report) is to get bugs fixed” – Cem Kaner. If the tester does not report a bug correctly, the programmer will most likely reject it as irreproducible. This can hurt the tester’s morale, and sometimes the ego too. (I suggest not keeping any kind of ego – egos like “I have reported the bug correctly”, “I can reproduce it”, “Why has he/she rejected the bug?”, “It’s not my fault”, etc.)

What are the qualities of a good software bug report?
Anyone can write a bug report. But not everyone can write an effective bug report. You should be able to distinguish between an average bug report and a good one. How do you distinguish a good bug report from a bad one? It’s simple: apply the following characteristics and techniques when you report a bug.

1) Having a clearly specified bug number:
Always assign a unique number to each bug report. This will help to identify the bug record. If you are using an automated bug-reporting tool, this unique number will be generated automatically each time you report a bug. Note the number and a brief description of each bug you report.

2) Reproducible:
If your bug is not reproducible, it will never get fixed. Clearly mention the steps to reproduce the bug, and do not assume or skip any reproduction step. A bug described step by step is easy to reproduce and fix.

3) Be specific:
Do not write an essay about the problem. Be specific and to the point. Try to summarize the problem in minimum words yet in an effective way. Do not combine multiple problems even if they seem similar; write a separate report for each problem.

How to Report a Bug?

Use the following simple bug report template:
This is a simple bug report format. It may vary depending on the bug reporting tool you are using. If you are writing the bug report manually, then some fields, like the bug number, need to be mentioned and assigned manually.

Reporter: Your name and email address.

Product: The product in which you found this bug.

Version: The product version, if any.

Component: The major sub-module of the product in which you found the bug.

Platform: Mention the hardware platform where you found this bug, e.g. ‘PC’, ‘MAC’, ‘HP’, ‘Sun’, etc.

Operating system: Mention all operating systems where you found the bug: Windows, Linux, Unix, SunOS, Mac OS, etc. Mention the different OS versions as well, if applicable, like Windows NT, Windows 2000, Windows XP, etc.

Priority:
When should the bug be fixed? Priority is generally set from P1 to P5, with P1 meaning ‘fix the bug with the highest priority’ and P5 meaning ‘fix when time permits’.

Severity:
This describes the impact of the bug.
Types of severity:

  • Blocker: No further testing work can be done.
  • Critical: Application crash, loss of data.
  • Major: Major loss of function.
  • Minor: Minor loss of function.
  • Trivial: Some UI enhancements.
  • Enhancement: Request for a new feature or some enhancement to an existing one.

Status:
When you log a bug in any bug tracking system, its status is ‘New’ by default.
Later on, the bug goes through various stages like Fixed, Verified, Reopen, Won’t Fix, etc.
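
Putting the fields above together, a filled-in report might look like this (all details are hypothetical):

    Bug ID    : 1023
    Reporter  : John Doe (john.doe@example.com)
    Product   : Online Shop
    Version   : 2.4
    Component : Checkout
    Platform  : PC
    OS        : Windows XP
    Priority  : P2
    Severity  : Major
    Status    : New
    Summary   : Order total not updated after removing an item from the cart
    Steps to Reproduce:
      1. Add two items to the cart.
      2. Open the cart page and remove one item.
      3. Observe the order total.
    Expected  : Total reflects only the remaining item.
    Actual    : Total still includes the removed item.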



Bug life cycle


What is Bug/Defect?

Simple Wikipedia definition of Bug is: “A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program’s source code or its design.”

Other definitions can be:
An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.

or
A fault in a program, which causes the program to perform in an unintended or unanticipated manner.

Lastly, the general definition of a bug is: “failure to conform to specifications”.

If you want to detect and resolve defects in the early development stages, defect tracking should start simultaneously with the software development phases.

We will discuss more on writing effective bug reports in another article. Let’s concentrate here on the bug/defect life cycle.

Life cycle of Bug:

1) Log new defect
When a tester logs any new bug, the mandatory fields are:
Build version, Submit On, Product, Module, Severity, Synopsis, and Description with steps to reproduce.

To the above list you can add some optional fields if you are using a manual bug submission template.
These optional fields are: Customer name, Browser, Operating system, File attachments or screenshots.

The following fields remain either specified or blank:
If you have the authority to set the bug Status, Priority and ‘Assigned to’ fields, then you can specify them. Otherwise the test manager will set the status and bug priority and assign the bug to the respective module owner.

The figure above may look complicated, but when you consider the significant steps in the bug life cycle, you will quickly get an idea of a bug’s life.

Once successfully logged, the bug is reviewed by the development or test manager. The test manager can set the bug status to Open, assign the bug to a developer, or defer the bug until the next release.

The bug then gets assigned to a developer, who can start working on it. The developer can set the bug status to Won’t fix, Couldn’t reproduce, Need more information or Fixed.

If the bug status set by the developer is either ‘Need more info’ or ‘Fixed’, QA responds with a specific action. If the bug is fixed, QA verifies it and can set the bug status to Verified closed or Reopen.

Bug status description:
These are the various stages of the bug life cycle. The status captions may vary depending on the bug tracking system you are using.

1) New: When QA files a new bug.

2) Deferred: If the bug does not relate to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set its status to Deferred.

3) Assigned: The ‘Assigned to’ field is set by the project lead or manager, who assigns the bug to a developer.

4) Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he or she can set the bug status to ‘Fixed’, and the bug is passed to the testing team.

5) Could not reproduce: If the developer is not able to reproduce the bug by the steps given in the bug report, the developer can mark the bug as ‘CNR’. QA then needs to check whether the bug still reproduces and can reassign it to the developer with detailed reproduction steps.

6) Need more information: If the developer is not clear about the reproduction steps provided by QA, he or she can mark the bug as ‘Need more information’. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.

7) Reopen: If QA is not satisfied with the fix, and the bug is still reproducible even after the fix, QA can mark it as ‘Reopen’ so that the developer can take appropriate action.

8) Closed: If the bug is verified by the QA team, the fix is OK and the problem is solved, QA can mark the bug as ‘Closed’.

9) Rejected/Invalid: Sometimes the developer or team lead can mark a bug as Rejected or Invalid if the system is working according to the specifications and the bug is simply due to some misinterpretation.
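
The life cycle described above is essentially a state machine. A small Python sketch that encodes the allowed transitions (the exact set of statuses and transitions varies from one bug tracking system to another):

    # Allowed status transitions, following the stages described above.
    TRANSITIONS = {
        "New": {"Assigned", "Deferred", "Rejected"},
        "Assigned": {"Fixed", "Could not reproduce",
                     "Need more information", "Won't fix"},
        "Fixed": {"Closed", "Reopen"},
        "Could not reproduce": {"Assigned"},
        "Need more information": {"Assigned"},
        "Reopen": {"Assigned"},
    }

    def move(current, new):
        if new not in TRANSITIONS.get(current, set()):
            raise ValueError("illegal transition: %s -> %s" % (current, new))
        return new

    status = "New"
    for step in ("Assigned", "Fixed", "Closed"):
        status = move(status, step)
    print("Final status:", status)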



Software Testing Advice for Novices

Novice testers have many questions about software testing and the actual work they are going to perform. As a novice tester, you should be aware of certain facts of the software testing profession. The tips below will certainly help you advance in your software testing career. These ‘testing truths’ are applicable to, and helpful for, experienced testing professionals as well. Apply each testing truth mentioned below in your career and you will never regret what you do.


Know Your Application
Don’t start testing without understanding the requirements. If you test without knowledge of the requirements, you will not be able to determine if a program is functioning as designed and you will not be able to tell if required functionality is missing. Clear knowledge of requirements, before starting testing, is a must for any tester.


Know Your Domain
As I have said many times, you should acquire a thorough knowledge of the domain on which you are working. Knowing the domain will help you suggest good bug solutions. Your test manager will appreciate your suggestions, if you have valid points to make. Don’t stop by only logging the bug. Provide solutions as well. Good domain knowledge will also help you to design better test cases with maximum test coverage.


No Assumptions In Testing
Don’t start testing with the assumption that there will be no errors. As a tester, you should always be looking for errors.


Learn New Technologies
No doubt, old testing techniques still play a vital role in day-to-day testing, but try to introduce new testing procedures that work for you. Don’t rely on book knowledge. Be practical. Your new testing ideas may work amazingly for you.


You Can’t Guarantee a Bug Free Application
No matter how much testing you perform, you can’t guarantee a 100% bug-free application. There are constraints that may force your team to advance a product to the next level while knowing some common or low-priority issues remain. Try to uncover as many bugs as you can, but prioritize your efforts on basic and crucial functions. Put your best effort into doing good work.


Think Like An End User
This is my top piece of advice. Don’t think only like a technical person; think like customers or end users, and always think beyond your end users. Test your application as an end user would. Think about how an end user will be using your application. Technical plus end-user thinking will ensure that your application is user friendly and passes acceptance tests easily. This was the first advice my test manager gave me when I was a novice tester.


100% Test Coverage Is Not Possible
Don’t obsess about 100% test coverage. There are millions of inputs and test combinations that are simply impossible to cover. Use techniques like boundary value analysis and equivalence partitioning to limit your test cases to a manageable size.
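
For example, for an input field that accepts values from 1 to 100, boundary value analysis shrinks all the possible inputs down to a handful of cases. A quick sketch:

    def boundary_values(lo, hi):
        # Classic boundary picks for an inclusive [lo, hi] range:
        # just below, on, and just above each boundary, plus a middle value.
        return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

    # An input field accepting 1..100 needs only seven representative cases:
    print(boundary_values(1, 100))   # [0, 1, 2, 50, 99, 100, 101]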


Build Good Relations With Developers
As a tester, you communicate with many other team members, especially developers. There are many situations where the tester and developer may not agree on certain points. It will take skill to handle such situations without harming your good relationship with the developer. If you are wrong, admit it. If you are right, be diplomatic. Don’t take it personally; after all, it is a profession, and you both want a good product.


Learn From Mistakes
As a novice, you will make mistakes. If you don’t make mistakes, you are not testing hard enough! You will learn things as you gain experience. Use these mistakes as learning experiences, and try not to repeat them. It hurts when the client files a bug in an application you tested. It is definitely an embarrassing situation, and one that cannot always be avoided. However, don’t beat yourself up. Find the root cause of the failure. Try to find out why you didn’t find that bug, and avoid the same mistake in the future. If required, change some of the testing procedures you follow.


Don’t Underestimate Yourself if Some of Your bugs Are Not Fixed
Some testers assume that all the bugs they log should get fixed. That is a fair point up to a certain level, but you must be flexible according to the situation. All bugs may or may not be fixed: management can defer bugs to be fixed later, as some have low priority or low severity, or there is no time to fix them. Over time you will also learn which bugs can be deferred until the next release.


Over To You:
If you are an experienced tester, what advice would you like to give to novice testers?

What if there isn’t enough time for thorough testing?

Sometimes testers need common sense to test an application!

I am saying this because most of the time it is not possible to test the whole application within the specified time. In such situations it’s better to find out the risk factors in the project and concentrate on them.

Here are some points to consider when you are in such a situation:
1) Which functionality is most important to the project’s intended purpose?
2) Which is the highest-risk module of the project?
3) Which functionality is most visible to the user?
4) Which functionality has the largest safety impact?
5) Which functionality has the largest financial impact on users?
6) Which aspects of the application are most important to the customer?
7) Which parts of the code are most complex, and thus most subject to errors?
8) Which parts of the application were developed in rush or panic mode?
9) What do the developers think are the highest-risk aspects of the application?
10) What kinds of problems would cause the worst publicity?
11) What kinds of problems would cause the most customer service complaints?
12) What kinds of tests could easily cover multiple functionalities?

Considering these points, you can greatly reduce the risk of releasing a project under tight time constraints.
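
One common way to turn the answers to these questions into a test order is a simple risk score: probability of failure times impact. A small sketch with hypothetical modules and 1–5 ratings:

    # Hypothetical modules rated 1-5 for failure probability and impact.
    modules = [
        ("Payment processing", 4, 5),
        ("Report export",      2, 2),
        ("User login",         3, 5),
        ("Help pages",         1, 1),
    ]

    # Risk score = probability x impact; test the riskiest areas first.
    for name, prob, impact in sorted(modules, key=lambda m: m[1] * m[2], reverse=True):
        print("%-20s risk = %2d" % (name, prob * impact))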

Thursday, August 18, 2011

Regression Testing with Regression Testing Tools and methods

What is Regression Software Testing?
Regression testing means retesting the unchanged parts of the application. Test cases are re-executed to check whether the previous functionality of the application is working fine and whether the new changes have introduced any new bugs.

This is a method of verification: verifying that the bugs are fixed and that the newly added features have not created any problems in the previously working version of the software.

Why regression Testing?
Regression testing is initiated when the programmer fixes any bug or adds new code for new functionality to the system. It is a quality measure to check that new code complies with old code and that unmodified code is not affected.
Most of the time the testing team is asked to check last-minute changes in the system. In such situations, testing only the affected application area is necessary to complete the testing process in time while covering all the major system aspects.

How much regression testing?
This depends on the scope of the newly added feature. If the scope of the fix or feature is large, then the affected application area is also quite large, and the testing should be thorough, including all the application test cases. But this can be decided effectively when the tester gets input from the developer about the scope, nature and amount of the change.

What do we do in regression testing?

  • Rerunning the previously conducted tests
  • Comparing current results with previously executed test results.

Regression Testing Tools:
Automated regression testing is the testing area where we can automate most of the testing effort. We re-run all the previously executed test cases, which means we have a test case set available, and running these test cases manually is time consuming. We know the expected results, so automating these test cases is a time-saving and efficient regression testing method. The extent of automation depends on the number of test cases that will remain applicable over time. If test cases keep changing as the application scope goes on increasing, then automation of the regression procedure will be a waste of time.

Most regression testing tools are of the record-and-playback type, meaning you record the test cases by navigating through the AUT and then verify whether the expected results appear or not.
Example regression testing tools are:

    * WinRunner
    * QTP
    * AdventNet QEngine
    * Regression Tester
    * vTest
    * Watir
    * Selenium
    * actiWate
    * Rational Functional Tester
    * SilkTest

Most of these tools are both functional and regression testing tools.
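
As a taste of what a scripted regression check looks like, here is a minimal sketch using Selenium’s Python bindings (the URL and expected title are placeholders, and a working ChromeDriver setup is assumed; a real regression suite re-runs many such recorded cases after every change):

    from selenium import webdriver

    driver = webdriver.Chrome()   # assumes ChromeDriver is available locally
    try:
        driver.get("http://example.com/login")   # placeholder AUT page
        # Compare current behaviour against the previously known result.
        assert "Login" in driver.title, "Regression: page title changed"
        print("Regression check passed")
    finally:
        driver.quit()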

Regression Testing Of GUI application:
It is difficult to perform GUI (Graphical User Interface) regression testing when the GUI structure is modified. The test cases written for the old GUI either become obsolete or need to be reused. Reusing regression test cases means modifying the GUI test cases according to the new GUI. But this task becomes cumbersome if you have a large set of GUI test cases.

What you need to know about Build Verification Testing

What is BVT?

A Build Verification Test is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases cover core functionality and ensure that the application is stable enough to be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to a developer for a fix.

BVT is also called smoke testing or build acceptance testing (BAT).

A new build is checked mainly for two things:

  • Build validation
  • Build acceptance

Some BVT basics:

  • It is a subset of tests that verify the main functionalities.
  • BVTs are typically run on daily builds, and if the BVT fails, the build is rejected and a new build is released after the fixes are done.
  • The advantage of BVT is that it saves the test team the effort of setting up and testing a build whose major functionality is broken.
  • Design BVTs carefully enough to cover basic functionality.
  • Typically a BVT should not run for more than 30 minutes.
  • BVT is a type of regression testing, run on each and every new build.

BVT primarily checks project integrity, i.e. whether all the modules are integrated properly or not. Module integration testing is very important when different teams develop the project modules. I have heard of many cases of application failure due to improper module integration. In the worst cases, the complete project gets scrapped due to failures in module integration.

What is the main task in build release?

Obviously, file ‘check-in’, i.e. including all the new and modified project files associated with the respective build. BVT was primarily introduced to check initial build health, i.e. to check whether all the new and modified files are included in the release, all file formats are correct, and every file’s version, language and flags are right.
These basic checks are worthwhile before the build is released to the test team for testing. You will save time and money by discovering build flaws at the very beginning using BVT.

Which test cases should be included in BVT?

This is a very tricky decision to make before automating the BVT task. Keep in mind that the success of the BVT depends on which test cases you include in it.

Here are some simple tips to include test cases in your BVT automation suite:

  • Include only critical test cases in BVT.
  • All test cases included in BVT should be stable.
  • All the test cases should have known expected result.
  • Make sure all included critical functionality test cases are sufficient for application test coverage.

Also, do not include modules in the BVT which are not yet stable. For some under-development features you can’t predict the expected behavior, as these modules are unstable and you might already know of some failures before testing these incomplete modules. There is no point using such modules or test cases in a BVT.

You can make this critical-functionality test case inclusion task simple by communicating with everyone involved in the project development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing the major project features and scenarios.

Example: test cases to be included in a BVT for a text editor application (some sample tests only):
1) Test case for creating a text file.
2) Test case for writing something into the text editor.
3) Test case for the copy, cut and paste functionality of the text editor.
4) Test case for opening, saving and deleting a text file.

These are some sample test cases which can be marked as ‘critical’; for every minor or major change in the application, these basic critical test cases should be executed. This task can easily be accomplished by the BVT.

BVT automation suites need to be maintained and modified from time to time, e.g. include new test cases in the BVT when new stable project modules become available.

What happens when the BVT suite runs:
Say the build verification automation test suite is executed after any new build.
1) The result of the BVT execution is sent to all the email IDs associated with that project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result.
3) If the BVT fails, the BVT owner diagnoses the cause of the failure.
4) If the failure cause is a defect in the build, all the relevant information with failure logs is sent to the respective developers.
5) The developer, from an initial diagnosis, replies to the team about the failure cause: whether it is really a bug, and if so, what the bug-fixing scenario will be.
6) After the bug fix, the BVT test suite is executed once again, and if the build passes the BVT, it is passed on to the test team for further detailed functionality, performance and other tests.

This process gets repeated for every new build.
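
A minimal sketch of what such an automated BVT runner can look like: each critical check is a function, the suite runs them all, and a non-zero exit code signals a rejected build (the three checks are hypothetical placeholders):

    import sys

    def check_app_starts():
        return True   # placeholder: launch the build and assert it comes up

    def check_login_works():
        return True   # placeholder: log in with a known test account

    def check_main_page_loads():
        return True   # placeholder: request the landing page and check it

    BVT_SUITE = [check_app_starts, check_login_works, check_main_page_loads]

    failures = [test.__name__ for test in BVT_SUITE if not test()]
    if failures:
        # In a real setup this result would be mailed to the project list.
        print("BVT FAILED:", ", ".join(failures))
        sys.exit(1)
    print("BVT passed: build accepted for detailed testing")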

Why does a BVT or build fail?
The BVT breaks sometimes. This doesn’t mean that there is always a bug in the build. There are some other reasons for a build to fail, like a test case coding error, an automation suite error, an infrastructure error, hardware failures, etc.
You need to troubleshoot the cause of the BVT break and take proper action after diagnosis.

Tips for BVT success:
1) Spend considerable time writing the BVT test case scripts.
2) Log as much detailed info as possible to diagnose the BVT pass or fail result. This will help the developer team to debug and quickly understand the failure cause.
3) Select stable test cases to include in the BVT. For new features, if a new critical test case passes consistently on different configurations, promote it into your BVT suite. This will reduce the probability of frequent build failures due to new unstable modules and test cases.
4) Automate the BVT process as much as possible: right from the build release process to the BVT result, automate everything.
5) Have some penalties for breaking the build. Some chocolates or a team coffee party from the developer who broke the build will do.

Conclusion:
BVT is nothing but a set of regression test cases that are executed for each new build. This is also called a smoke test. The build is not assigned to the test team unless and until the BVT passes. The BVT can be run by a developer or tester, and the BVT result is communicated throughout the team; if the BVT fails, immediate action is taken to fix the bug. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included in the BVT, and these test cases should ensure application test coverage. BVT is very effective for daily as well as long-term builds. This saves significant time, cost and resources, and, after all, spares the test team the frustration of an incomplete build.

Wednesday, August 17, 2011

Types of Risks in Software Projects

Are you developing a test plan or test strategy for your project? Have you addressed all the risks properly in it?

As testing is the last part of the project, it’s always under pressure and time constraints. To save time and money, you should be able to prioritize your testing work. How will you prioritize testing work? For this you should be able to judge which testing work is more important and which is less. How will you decide which work is more or less important? Here comes the need for risk-based testing.

What is Risk?
“Risks are future uncertain events with a probability of occurrence and a potential for loss.”

Risk identification and management are the main concerns in every software project. Effective analysis of software risks will help in effective planning and assignment of work.

In this article I will cover the types of risks. In the next articles I will try to focus on risk identification, risk management and mitigation.

Risks are identified, classified and managed before the actual execution of the program. These risks are classified into different categories.

Categories of risks:

Schedule Risk:
The project schedule slips when project tasks and schedule release risks are not addressed properly.
Schedule risks mainly affect the project, and finally the company’s economy, and may lead to project failure.
Schedules often slip due to the following reasons:

  • Wrong time estimation
  • Resources are not tracked properly. All resources like staff, systems, skills of individuals etc.
  • Failure to identify complex functionalities and time required to develop those functionalities.
  • Unexpected project scope expansions.

Budget Risk:

  • Wrong budget estimation.
  • Cost overruns
  • Project scope expansion

Operational Risks:
Risks of loss due to improper process implementation, failed systems or some external events.
Causes of operational risks:

  • Failure to address priority conflicts
  • Failure to resolve the responsibilities
  • Insufficient resources
  • No proper subject training
  • No resource planning
  • No communication within the team.

Technical risks:
Technical risks generally lead to failure of functionality and performance.
The causes of technical risks are:

  • Continuous changing requirements
  • No advanced technology is available, or the existing technology is in its initial stages.
  • Product is complex to implement.
  • Difficult project modules integration.

Programmatic Risks:
These are external risks beyond the operational limits. They are all uncertain risks that are outside the control of the program.
These external events can be:

  • Running out of funds.
  • Market development
  • Changing customer product strategy and priority
  • Government rule changes.

Tuesday, August 16, 2011

Requirement Traceability Matrix (RTM)

Requirements tracing is the process of documenting the links between the user requirements for the system you're building and the work products developed to implement and verify those requirements. These work products include Software requirements, design specifications, Software code, test plans and other artifacts of the systems development process. Requirements tracing helps the project team to understand which parts of the design and code implement the user's requirements, and which tests are necessary to verify that the user's requirements have been implemented correctly.

Requirements Traceability Matrix Document is the output of Requirements Management phase of SDLC.

The Requirements Traceability Matrix (RTM) captures the complete user and system requirements for the system, or a portion of the system. The RTM captures all requirements and their traceability in a single document, and is a mandatory deliverable at the conclusion of the lifecycle.

The RTM is used to record the relationship of the requirements to the design, development, testing and release of the software as the requirements are allocated to a specific release of the software. Changes to the requirements are also recorded and tracked in the RTM. The RTM is maintained throughout the lifecycle of the release, and is reviewed and baselined at the end of the release.
It is a very useful document for tracking time, change management and risk management in software development.
Here I am providing a sample template of the Requirement Traceability Matrix, which gives a detailed idea of the importance of the RTM in the SDLC.

The RTM template shows the mapping between the actual requirement and the user requirement/system requirement.
For any change that happens after the system has been built, we can trace its impact on the application through the RTM. The matrix also maps the actual requirement to the design specification, which helps us trace changes that may happen to the design document during the development of the application. Here we give each document a unique ID, associated with that particular requirement, so that the document can be traced easily.

In any case, if you want to change a requirement in the future, you can use the RTM to make the respective changes and easily judge how many associated test scripts will change.

Requirements Traceability Matrix Template Instructions:


Introduction
This document presents the requirements traceability matrix (RTM) for the Project Name [workspace/workgroup] and provides traceability between the [workspace/workgroup] approved requirements, design specifications, and test scripts.
The table below displays the RTM for the requirements that were approved for inclusion in [Application Name/Version]. The following information is provided for each requirement:
1. Requirement ID
2. Risks
3. Requirement Type (User or System)
4. Requirement Description
5. Trace to User Requirement/Trace From System Requirement
6. Trace to Design Specification
7. UT – Unit Test Cases
8. IT – Integration Test Cases
9. ST – System Test Cases
10. UAT – User Acceptance Test Cases
11. Trace to Test Script

The following is a sample template of the Requirements Traceability Matrix.

[Image: Requirements Traceability Matrix template]

Disadvantages of not using a Traceability Matrix
What happens if the traceability factor is not considered while developing the software?
a) The system that is built may not have the necessary functionality to meet the customers’ and users’ needs and expectations.
b) If there are modifications in the design specifications, there is no means of tracking the changes.
c) If there is no mapping of test cases to the requirements, it may result in missing a major defect in the system.
d) The completed system may have ‘extra’ functionality that was not specified in the design specification, resulting in wasted manpower, time and effort.
e) If the code components that constitute the customer’s high-priority requirements are not known, then the areas that need to be worked on first may not be known either, decreasing the chances of shipping a useful product on schedule.
f) A seemingly simple request might involve changes to several parts of the system, and if a proper traceability process is not followed, the work needed to satisfy the request may not be correctly evaluated.


Where can a Traceability Matrix be used?
Is the Traceability Matrix applicable only for big projects?
The Traceability Matrix is an essential part of any software development process; hence, irrespective of the size of the project, whenever there is a requirement to build software, this concept comes into focus.
The biggest advantage of the Traceability Matrix is backward and forward traceability, i.e. at any point of time in the development life cycle, the status of the project and the modules that have been tested can be easily determined, thereby reducing the possibility of speculation about the status of the project.


Developing a Traceability Matrix
How is the Traceability Matrix developed?

[Image: Requirements Traceability Matrix 1]

In the diagram, the design is carried out based on the requirements, and the code is developed based on the design. Finally, the tests are created based on these. At any point of time there is always the provision for checking which test case was developed for which design, and for which requirement that design was carried out. Such traceability, in the form of a matrix, is the Traceability Matrix.

In the design document, if there is a design description A which can be traced back to Requirement Specification A, it implies that design A takes care of Requirement A. Similarly, in the test plan, Test Case A takes care of testing Design A, which in turn takes care of Requirement A, and so on.

[Image: Requirements Traceability Matrix 2]

There have to be references from the design document back to the requirements document, from the test plan back to the design document, and so on.

Usually, unit test cases will have traceability to the design specification, and system test cases/acceptance test cases will have traceability to the requirement specification. This helps ensure that no requirement is left uncovered (either un-designed or un-tested).
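
That uncovered-requirement check is easy to script once the matrix exists. A tiny sketch with the matrix held as a mapping from requirement IDs to the test cases that trace to them (all IDs are hypothetical):

    # Requirement -> test cases tracing to it (hypothetical IDs).
    rtm = {
        "REQ-001": ["TC-101", "TC-102"],
        "REQ-002": ["TC-103"],
        "REQ-003": [],               # nothing traces here: a coverage gap
    }

    uncovered = [req for req, tests in rtm.items() if not tests]
    coverage = 100.0 * (len(rtm) - len(uncovered)) / len(rtm)
    print("Coverage: %.0f%%, uncovered requirements: %s" % (coverage, uncovered))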

Requirements traceability enhances project control and quality. It is a process of documenting the links between the user requirements for a system and the work products developed to implement and verify those requirements. It is a technique that supports an objective of requirements management: to make certain that the application will meet end users’ needs.


Traceability Matrix in testing
Where exactly does the Traceability Matrix get involved in the broader picture of testing?
The Traceability Matrix is created even before any test cases are written, because it is a complete list indicating what has to be tested. Sometimes there is one test case for each requirement, or several requirements can be validated by one test scenario. This purely depends on the kind of application that is available for testing.
Creating a Traceability Matrix
The Traceability Matrix gives a cross-reference between a test case document and the functional/design specification documents. It helps to identify whether the test case document contains tests for all the identified unit functions from the design specification. From this matrix we can calculate the percentage of test coverage, taking into account the functionalities tested versus those not tested.

[Image: Requirements Traceability Matrix 1]


According to the above table, the requirement specifications are clearly spelled out in the requirement column. The functional specification noted as BRD section 6.5.8 refers to the requirement that has been specified in the requirements document (i.e. it tells which requirement a test case is designed for). With the help of the design specification, it is possible to drill down to the low-level design for the specified requirement.
Based on the requirements, code is developed. At any point of time, the program corresponding to a particular requirement can easily be traced back. The test cases corresponding to the requirements are available in the Test Case column.


Bidirectional traceability is the ability to trace both forward and backward (i.e., from requirements to end products and from end product back to requirements).

It means tracing the code from the requirements and vice versa throughout the Software Development Life Cycle (SDLC). When you are in the course of your project, your fundamental aim is to ensure that each and every requirement is translated into code. Tracing right from the requirements through the architecture and design to the code, to ensure that the objectives (or, more precisely, the requirements) have been met, is referred to as establishing Forward Traceability.

When you think about maintenance, invariably, due to economic and technical considerations, your client will come back to you. At this juncture you will debug, or rather hunt for, various types of bugs. You should be in a position to go back from the corresponding code or architectural components to their requirements. To achieve this you ought to trace your code and design back to their corresponding requirements. This process is called Backward Traceability.


Wednesday, August 10, 2011

How to Test Banking Applications

Banking applications are considered to be among the most complex applications in today’s software development and testing industry. What makes a banking application so complex? What approach should be followed to test the complex workflows involved? In this article we will highlight the different stages and techniques involved in testing banking applications.

The characteristics of a Banking application are as follows:

  • Multi-tier functionality to support thousands of concurrent user sessions
  • Large-scale integration: typically a banking application integrates with numerous other applications, such as a bill pay utility and trading accounts
  • Complex business workflows
  • Real-time and batch processing
  • A high rate of transactions per second
  • Secure transactions
  • A robust reporting section to keep track of day-to-day transactions
  • Strong auditing to troubleshoot customer issues
  • A massive storage system
  • Disaster management

The ten points listed above are the most important characteristics of a banking application.

Banking applications have multiple tiers involved in performing an operation. For example, a banking application may have:

  1. Web Server to interact with end users via Browser
  2. Middle Tier to validate the input and output for web server
  3. Data Base to store data and procedures
  4. A Transaction Processor, which could be a large-capacity mainframe or another legacy system, carrying out trillions of transactions per second.

Testing banking applications requires an end-to-end testing methodology involving multiple software testing techniques to ensure:

  • Total coverage of all banking workflows and Business Requirements
  • Functional aspect of the application
  • Security aspect of the application
  • Data Integrity
  • Concurrency
  • User Experience

Typical stages involved in testing banking applications are shown in the workflow below, and we will discuss each stage individually: Requirement Gathering → Requirement Review → Business Scenario Preparation → Functional Testing → Database Testing → Security Testing.

1) Requirement Gathering:

The requirement gathering phase involves documentation of requirements, either as functional specifications or as use cases. Requirements are gathered as per customer needs and documented by banking experts or business analysts. More than one subject expert is involved in writing the requirements, as banking itself has multiple sub-domains, and a full-fledged banking application is the integration of all of them. For example, a banking application may have separate modules for transfers, credit cards, reports, loan accounts, bill payments, trading, etc.

2) Requirement Review:

The deliverable of requirement gathering is reviewed by all the stakeholders, such as QA engineers, development leads and peer business analysts. They cross-check that neither existing business workflows nor new workflows are violated.

3) Business Scenario Preparations:

In this stage, QA engineers derive business scenarios from the requirement documents (functional specs or use cases). Business scenarios are derived in such a way that all business requirements are covered. They are high-level scenarios without detailed steps. These business scenarios are then reviewed by a business analyst to ensure that all business requirements are met; it is easier for BAs to review high-level scenarios than low-level detailed test cases.

4) Functional Testing:

In this stage functional testing is performed using the usual software testing activities, such as:

Test Case Preparation:
In this stage Test Cases are derived from Business Scenarios; one Business Scenario leads to several positive and negative test cases. Tools generally used during this stage are Microsoft Excel, Test Director or Quality Center.
Test Case Review:
Reviews by peer QA Engineers.
Test Case Execution:
Test Case Execution can be either manual or automated, involving tools like QC, QTP or others.
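
To make the automated option concrete, here is a minimal sketch of one automated test case written with Python's unittest; the transfer() function and its in-memory balances are hypothetical stand-ins for the banking application under test, not part of any real tool.

    import unittest

    # Hypothetical stand-in for the banking application's transfer logic.
    def transfer(balances, src, dst, amount):
        if amount <= 0 or balances[src] < amount:
            raise ValueError("invalid transfer")
        balances[src] -= amount
        balances[dst] += amount
        return balances

    class TransferTests(unittest.TestCase):
        def test_valid_transfer(self):        # positive test case
            b = transfer({"A": 100, "B": 0}, "A", "B", 40)
            self.assertEqual((b["A"], b["B"]), (60, 40))

        def test_insufficient_funds(self):    # negative test case
            with self.assertRaises(ValueError):
                transfer({"A": 10, "B": 0}, "A", "B", 40)

    if __name__ == "__main__":
        unittest.main()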

5) Database Testing

Banking applications involve complex transactions that are performed both at the UI level and at the database level; therefore database testing is as important as functional testing. The database is an entirely separate layer in itself, so database testing is usually carried out by database specialists and uses techniques like the following (a small illustrative check appears after this list):

  • Data loading
  • Database Migration
  • Testing DB Schema and Data types
  • Rules Testing
  • Testing Stored Procedures and Functions
  • Testing Triggers
  • Data Integrity
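
As a small illustration of one of these techniques, the sketch below runs a data-integrity check with Python's built-in sqlite3 module; the accounts/transactions schema and its rows are invented for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE accounts (id INTEGER PRIMARY KEY);
        CREATE TABLE transactions (
            id INTEGER PRIMARY KEY,
            account_id INTEGER,   -- should reference accounts(id)
            amount REAL
        );
        INSERT INTO accounts VALUES (1);
        INSERT INTO transactions VALUES (1, 1, 500.0);  -- valid row
        INSERT INTO transactions VALUES (2, 99, 75.0);  -- orphan row
    """)

    # Integrity check: every transaction must point at a real account.
    orphans = conn.execute("""
        SELECT t.id FROM transactions t
        LEFT JOIN accounts a ON t.account_id = a.id
        WHERE a.id IS NULL
    """).fetchall()

    print("Orphaned transactions:", orphans)  # -> [(2,)], caught by the check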

6) Security Testing

Security Testing is usually the last stage in the testing cycle, as completing functional and non-functional testing is the entry criterion for commencing it. Security testing is one of the major stages in the entire application testing cycle, as it ensures that the application complies with federal and industry standards. The security testing cycle makes sure the application does not have web vulnerabilities that may expose sensitive data to an intruder or attacker, and that it complies with standards like OWASP.

In this stage the major task is a scan of the whole application, carried out using tools like IBM AppScan or HP WebInspect (the two most popular tools).

Once the scan is complete, the scan report is published; false positives are filtered out, and the remaining vulnerabilities are reported to the development team for fixing, depending on their severity.

Other manual tools used for security testing include Paros Proxy, HttpWatch, Burp Suite, Fortify tools, etc.
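
Scanners like the ones above do far more, but even a short script can flag obvious gaps. Here is a minimal sketch that checks a response for a few common security headers using only Python's standard library; the URL is a placeholder and the header list is illustrative, not a compliance checklist.

    from urllib.request import urlopen

    # Placeholder target; point this at a test environment you own.
    resp = urlopen("https://example.com/")

    # Illustrative subset of headers a security review often looks for.
    expected = [
        "Strict-Transport-Security",
        "X-Frame-Options",
        "X-Content-Type-Options",
        "Content-Security-Policy",
    ]

    for header in expected:
        status = "present" if resp.headers.get(header) else "MISSING"
        print(header + ": " + status)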

Apart from the above stages, other stages such as Integration Testing and Performance Testing may also be involved.

Today the majority of banking projects use Agile/Scrum, RUP and Continuous Integration methodologies, and tool packages like Microsoft’s VSTS and Rational Tools.

As mentioned above, RUP stands for Rational Unified Process, an iterative software development methodology introduced by IBM that comprises four phases in which development and testing activities are carried out.

Four phases are:
i) Inception
ii) Elaboration
iii) Construction and
iv) Transition
RUP widely involves IBM Rational tools.


Priority and Severity

“Priority” is associated with scheduling, and “severity” is associated with standards.
“Priority” means something is afforded or deserves prior attention; a precedence established by order of importance or urgency. “Severity” is the state or quality of being severe; severe implies strict adherence to rigorous standards or high principles and often suggests harshness, e.g. a severe code of behavior.
The words priority and severity do come up in bug tracking. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its ‘severity’, reproduce it and fix it. The fixes are based on project ‘priorities’ and the ‘severity’ of bugs.
The ‘severity’ of a problem is defined in accordance with the customer’s risk assessment and recorded in the selected tracking tool. Buggy software can ‘severely’ affect schedules, which in turn can lead to a reassessment and renegotiation of ‘priorities’.

Web terminologies

Web terminologies covered in this article: the Internet, the World Wide Web, TCP/IP, the HTTP protocol, SSL (Secure Sockets Layer), HTTPS, HTML, web servers, web clients, proxy servers, caching, cookies, application servers, thin clients, daemons, client-side scripting, server-side scripting, CGI, dynamic web pages, digital certificates and a list of HTTP status codes.
• Internet
– A global network connecting millions of computers.

• World Wide Web (the Web)
– An information-sharing model built on top of the Internet that uses
the HTTP protocol and browsers (such as Internet Explorer) to
access Web pages formatted in HTML and linked via hyperlinks.
The Web is only a subset of the Internet; other uses of the Internet
include email (via SMTP), Usenet, instant messaging and file
transfer (via FTP).

• URL (Uniform Resource Locator)
– The address of documents and other content on the Web. It
consists of a protocol, a domain and a file: the protocol can be
HTTP, FTP, Telnet, News, etc.; the domain name is the DNS name
of the server; and the file can be static HTML, DOC, JPEG, etc. In
other words, URLs are strings that uniquely identify resources on
the Internet.

• TCP/IP
– The protocol suite used to send data over the Internet. TCP/IP
consists of only 4 layers: Application layer, Transport layer,
Network layer and Link layer.
Internet protocols by layer:
Application layer – DNS, TLS/SSL, TFTP, FTP, HTTP, IMAP, IRC,
NNTP, POP3, SIP, SMTP, SNMP, SSH, TELNET, BitTorrent, RTP,
rlogin
Transport layer – TCP, UDP, DCCP, SCTP, IL, RUDP
Network layer – IP (IPv4, IPv6), ICMP, IGMP, ARP, RARP
Link layer – Ethernet, Wi-Fi, Token Ring, PPP, SLIP, FDDI, ATM,
DTM, Frame Relay, SMDS

• TCP (Transmission Control Protocol)
– Enables two devices to establish a connection and exchange data.
– In the Internet protocol suite, TCP is the intermediate layer
between the Internet Protocol below it, and an application above it.
Applications often need reliable pipe-like connections to each other,
whereas the Internet Protocol does not provide such streams, but
rather only unreliable packets. TCP does the task of the transport
layer in the simplified OSI model of computer networks.
– It is one of the core protocols of the Internet protocol suite. Using
TCP, applications on networked hosts can create connections to one
another, over which they can exchange data or packets. The
protocol guarantees reliable, in-order delivery of data from sender
to receiver. TCP also distinguishes data for multiple, concurrent
applications (e.g. a Web server and an e-mail server) running on
the same host.

• IP
– Specifies the format of data packets and the addressing scheme.
The Internet Protocol (IP) is a data-oriented protocol used for
communicating data across a packet-switched internetwork. IP is a
network-layer protocol in the Internet protocol suite. The two
aspects of IP are addressing and routing: addressing refers to how
end hosts are assigned IP addresses, while IP routing is performed
by all hosts, but most importantly by internetwork routers.

• IP Address
– A unique number assigned to each connected device. IP addresses
are often assigned dynamically to users by an ISP on a session-by-
session basis (a dynamic IP address), but are increasingly becoming
dedicated, particularly with always-on broadband connections (a
static IP address).

• Packet
– A portion of a message sent over a TCP/IP network. It contains
content and a destination.

• HTTP (Hypertext Transfer Protocol)
– Underlying protocol of the World Wide Web. Defines how messages
are formatted and transmitted over a TCP/IP network for Web
sites. Defines what actions Web servers and Web browsers take in
response to various commands.
– HTTP is stateless. The advantage of a stateless protocol is that hosts
don't need to retain information about users between requests, but
this forces the use of alternative methods for maintaining users'
state, for example when a host would like to customize content for
a user who has visited before. The common method for solving this
problem involves sending and requesting cookies. Other methods
are session control, hidden variables, etc.
– Example: when you enter a URL in your browser, an HTTP
command is sent to the Web server telling it to fetch and transmit
the requested Web page. The main HTTP methods are:
o HEAD: Asks for a response identical to the one that would
correspond to a GET request, but without the response body.
This is useful for retrieving meta-information written in
response headers without having to transport the entire
content.
o GET: Requests a representation of the specified resource. By
far the most common method used on the Web today.
o POST: Submits user data (e.g. from an HTML form) to the
identified resource. The data is included in the body of the
request.
o PUT: Uploads a representation of the specified resource.
o DELETE: Deletes the specified resource (rarely implemented).
o TRACE: Echoes back the received request, so that a client can
see what intermediate servers are adding or changing in the
request.
o OPTIONS: Returns the HTTP methods that the server
supports. This can be used to check the functionality of a web
server.
o CONNECT: For use with a proxy that can change into an SSL
tunnel.
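
To see two of these methods on the wire, here is a minimal sketch using Python's standard http.client module; the host example.com is just an illustrative placeholder.

    import http.client

    # Open a plain-HTTP connection to an illustrative host.
    conn = http.client.HTTPConnection("example.com", 80, timeout=10)

    # HEAD: fetch only the response headers, no body.
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    print(resp.getheaders())  # meta-information only
    resp.read()               # drain the (empty) body so the connection can be reused

    # GET: fetch the full representation of the resource.
    conn.request("GET", "/")
    resp = conn.getresponse()
    body = resp.read()
    print(resp.status, len(body), "bytes")

    conn.close()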

• HTTP pipelining
– Appeared in HTTP/1.1. It allows clients to send multiple requests at
once, without waiting for an answer. Servers can also send multiple
answers without closing their socket. This results in fewer
roundtrips and faster load times. This is particularly useful for
satellite Internet connections and other connections with high
latency as separate requests need not be made for each file. Since it
is possible to fit several HTTP requests in the same TCP packet,
HTTP pipelining allows fewer TCP packets to be sent over the
network, reducing network load. HTTP pipelining requires both the
client and the server to support it. Servers are required to support it
in order to be HTTP/1.1 compliant, although they are not required
to pipeline responses, just to accept pipelined requests.

• HTTP-Tunnel
– A technology that allows users to perform various Internet tasks
despite the restrictions imposed by firewalls. This is made possible
by sending data through HTTP (port 80). HTTP-Tunnel technology
is also very secure, making it useful for both personal and business
communications. The HTTP-Tunnel client is an application that
runs in your system tray acting as a SOCKS server, managing all
data transmissions between the computer and the network.

• HTTP streaming
– It is a mechanism for sending data from a Web server to a Web
browser in response to an event. HTTP Streaming is achieved
through several common mechanisms. In one such mechanism the
web server does not terminate the response to the client after data
has been served. This differs from the typical HTTP cycle in which
the response is closed immediately following data transmission.
The web server leaves the response open such that if an event is
received, it can immediately be sent to the client. Otherwise the
data would have to be queued until the client's next request is made
to the web server. The act of repeatedly queueing and re-requesting
information is known as a polling mechanism. Typical uses for
HTTP Streaming include market data distribution (stock tickers),
live chat/messaging systems, online betting and gaming, sport
results, monitoring consoles and Sensor network monitoring.
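
As a rough sketch of the "response left open" mechanism, the handler below pushes data to the client in chunks as events occur, using only Python's standard library; the event source (a counter on a one-second timer) and the port are invented for the example.

    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StreamHandler(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"   # chunked encoding needs HTTP/1.1

        def do_GET(self):
            # Keep the response open and push data as "events" occur,
            # instead of closing it after one payload.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Transfer-Encoding", "chunked")
            self.end_headers()
            for i in range(5):          # pretend five events arrive
                chunk = ("event %d\n" % i).encode()
                self.wfile.write(b"%x\r\n%s\r\n" % (len(chunk), chunk))
                self.wfile.flush()
                time.sleep(1)           # wait for the next "event"
            self.wfile.write(b"0\r\n\r\n")  # end of the chunked stream

    HTTPServer(("localhost", 8080), StreamHandler).serve_forever()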

• HTTP referrer
– It signifies the webpage which linked to a new page on the Internet.
By checking the referer, the new page can see where the request
came from. Referer logging is used to allow websites and web
servers to identify where people are visiting them from, for
promotional or security purposes. Since the referer can easily be
spoofed (faked), however, it is of limited use in this regard except
on a casual basis. A dereferer is a means to strip the details of the
referring website from a link request so that the target website
cannot identify the page which was clicked on to originate a request.
Referer is a common misspelling of the word referrer. It is so
common, in fact that it made it into the official specification of
HTTP – the communication protocol of the World Wide Web – and
has therefore become the standard industry spelling when
discussing HTTP referers.

• SSL (Secure Sockets Layer)
– Protocol for establishing a secure connection for transmission, it
uses the HTTPS convention
– SSL provides endpoint authentication and communications privacy
over the Internet using cryptography. In typical use, only the server
is authenticated (i.e. its identity is ensured) while the client remains
unauthenticated; mutual authentication requires public key
infrastructure (PKI) deployment to clients. The protocols allow
client/server applications to communicate in a way designed to
prevent eavesdropping, tampering, and message forgery.
– SSL involves a number of basic phases:
o Peer negotiation for algorithm support
o Public key encryption-based key exchange and
certificate-based authentication
o Symmetric cipher-based traffic encryption
– During the first phase, the client and server negotiate which
cryptographic algorithms will be used. Current implementations
support the following choices:
o For public-key cryptography: RSA, Diffie-Hellman,
DSA or Fortezza
o For symmetric ciphers: RC2, RC4, IDEA, DES, Triple
DES or AES
o For one-way hash functions: MD5 or SHA
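
Here is a minimal sketch of those phases from the client's side, using Python's standard ssl module; the handshake inside wrap_socket performs the negotiation, key exchange and certificate check described above. The host example.com is an illustrative placeholder.

    import socket
    import ssl

    # The default context verifies the server certificate against trusted
    # CAs and negotiates the cipher suite during the handshake.
    context = ssl.create_default_context()

    with socket.create_connection(("example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
            print("Negotiated protocol:", tls.version())  # e.g. TLSv1.3
            print("Negotiated cipher:", tls.cipher())
            # Note: only the server is authenticated here; the client
            # remains anonymous, as in typical SSL use.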

• HTTPS
– A URI scheme that is syntactically identical to the http: scheme
normally used for accessing resources using HTTP. Using an https:
URL indicates that HTTP is to be used, but with a different default
port and an additional encryption/authentication layer between
HTTP and TCP. This system was invented by Netscape
Communications Corporation to provide authentication and
encrypted communication and is widely used on the Web for
security-sensitive communication, such as payment transactions.

• HTML (Hypertext Markup Language)
– The authoring language used to create documents on the World
Wide Web
– Hundreds of tags can be used to format and layout a Web page’s
content and to hyperlink to other Web content.

• Hyperlink
– Used to connect a user to other parts of a web site and to other web
sites and web-enabled services.

• Web server
– A computer that is connected to the Internet. Hosts Web content
and is configured to share that content.
– A Web server is responsible for accepting HTTP requests from clients,
which are known as Web browsers, and serving them Web pages,
which are usually HTML documents and linked objects (images,
etc.).
• Examples:
o Apache HTTP Server from the Apache Software
Foundation.
o Internet Information Services (IIS) from Microsoft.
o Sun Java System Web Server from Sun Microsystems,
formerly Sun ONE Web Server, iPlanet Web Server,
and Netscape Enterprise Server.
o Zeus Web Server from Zeus Technology
• Web client
– Most commonly in the form of Web browser software such as
Internet Explorer or Netscape
– Used to navigate the Web and retrieve Web content from Web
servers for viewing.

• Proxy server
– An intermediary server that provides a gateway to the Web (e.g.,
employee access to the Web most often goes through a proxy)
– Improves performance through caching and filters the Web
– The proxy server will also log each user interaction.

• Caching
– Web browsers and proxy servers save a local copy of the
downloaded content – pages that display personal information
should be set to prohibit caching.

• Web form
– A portion of a Web page containing blank fields that users can fill
in with data (including personal info) and submit for the Web
server to process.

• Web server log
– Every time a Web page is requested, the Web server may
automatically log the following information:
o the IP address of the visitor
o date and time of the request
o the URL of the requested file
o the URL the visitor came from immediately before
(referrer URL)
o the visitor’s Web browser type and operating system
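
Those fields can be pulled out of a typical "combined" format access-log line with a few lines of Python; the sample line below is invented for illustration.

    import re

    # One invented line in the Apache "combined" log format.
    line = ('203.0.113.7 - - [10/Aug/2011:13:55:36 +0000] '
            '"GET /index.html HTTP/1.1" 200 2326 '
            '"http://example.com/start.html" "Mozilla/5.0"')

    pattern = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
        r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

    m = pattern.match(line)
    if m:
        print(m.group("ip"), m.group("status"), m.group("referrer"))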

• Cookies
– A small text file provided by a Web server and stored on a user's
PC; the text can be sent back to the server every time the browser
requests a page from the server. Cookies are used to identify a user
as they navigate through a Web site and/or return at a later time.
Cookies enable a range of functions including personalization of
content.

• Session vs. persistent cookies
– A Session is a unique ID assigned to the client browser by a web
server to identify the state of the client because web servers are
stateless.
– A session cookie is stored only while the user is connected to the
particular Web server – the cookie is deleted when the user
disconnects
– Persistent cookies are set to expire at some point in the future;
many are set to expire years ahead
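
The only mechanical difference between the two kinds is the expiry attribute, as this small sketch with Python's standard http.cookies module shows; the cookie names and values are invented.

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()

    # Session cookie: no expiry, so the browser discards it when the
    # user disconnects.
    cookie["session_id"] = "abc123"

    # Persistent cookie: an explicit lifetime keeps it on disk.
    cookie["prefs"] = "dark_mode"
    cookie["prefs"]["max-age"] = 60 * 60 * 24 * 365  # one year, in seconds

    # What the server would send in its HTTP response headers:
    print(cookie.output())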

• Socket
– A socket is a network communications endpoint.
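
An endpoint is easiest to see in code. Here is a minimal sketch of a TCP echo exchange with Python's socket module, run in a single process for brevity; localhost and port 9000 are chosen arbitrarily.

    import socket

    # Server side: bind an endpoint and listen.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("localhost", 9000))  # port chosen arbitrarily
    server.listen(1)

    # Client side (normally a separate process): connect and send.
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("localhost", 9000))
    client.sendall(b"hello, socket")

    conn, addr = server.accept()
    conn.sendall(conn.recv(1024))     # echo the data back
    print("client received:", client.recv(1024))

    conn.close(); client.close(); server.close()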

• Application Server
– An application server is a server computer in a computer network
dedicated to running certain software applications. The term also
refers to the software installed on such a computer to facilitate the
serving of other applications. Application server products typically
bundle middleware to enable applications to intercommunicate
with various qualities of service (reliability, security, non-repudiation
and so on). Application servers also provide an API to
programmers, so that they don't have to be concerned with the
operating system or the huge array of interfaces required of a
modern web-based application. Communication occurs through the
web in the form of HTML and XML, as a link to various databases,
and, quite often, as a link to systems and devices ranging from huge
legacy applications to small information devices, such as an atomic
clock or a home appliance.
– An application server exposes business logic to client applications
through various protocols, possibly including HTTP. The server
exposes this business logic through a component API, such as the
EJB (Enterprise JavaBean) component model found on J2EE (Java
2 Platform, Enterprise Edition) application servers. Moreover, the
application server manages its own resources. Such gate-keeping
duties include security, transaction processing, resource pooling,
and messaging
– Ex: JBoss (Red Hat), WebSphere (IBM), Oracle Application Server
10g (Oracle Corporation) and WebLogic (BEA)

• Thin Client
– A thin client is a computer (client) in client-server architecture
networks which has little or no application logic, so it has to depend
primarily on the central server for processing activities. It is
designed to be especially small so that the bulk of the data
processing occurs on the server.

• Thick client
– It is a client that performs the bulk of any data processing
operations itself, and relies on the server it is associated with
primarily for data storage.

• Daemon
– It is a computer program that runs in the background, rather than
under the direct control of a user; they are usually instantiated as
processes. Typically daemons have names that end with the letter
"d"; for example, syslogd is the daemon which handles the system
log. Daemons typically do not have any existing parent process, but
reside directly under init in the process hierarchy. Daemons usually
become daemons by forking a child process and then making the
parent process kill itself, thus making init adopt the child. This
practice is commonly known as "fork off and die." Systems often
start (or "launch") daemons at boot time: they often serve the
function of responding to network requests, hardware activity, or
other programs by performing some task. Daemons can also
configure hardware (like devfsd on some Linux systems), run
scheduled tasks (like cron), and perform a variety of other tasks.
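
The "fork off and die" sequence can be sketched in a few lines of Python on a Unix-like system; the daemon's task here (appending a timestamp to /tmp/demo_daemon.log) is invented purely for illustration.

    import os, sys, time

    # Fork a child and let the parent exit, so init adopts the child.
    if os.fork() > 0:
        sys.exit(0)   # the parent "dies"

    os.setsid()       # the child becomes leader of a new session

    # Optional second fork: ensure the daemon can never re-acquire
    # a controlling terminal.
    if os.fork() > 0:
        sys.exit(0)

    # The daemon's task is invented for this example.
    with open("/tmp/demo_daemon.log", "a") as log:
        while True:
            log.write(time.ctime() + "\n")
            log.flush()
            time.sleep(5)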

• Client-side scripting
– Generally refers to the class of computer programs on the web that
are executed client-side, by the user's web browser, instead of
server-side (on the web server). This type of computer
programming is an important part of the Dynamic HTML
(DHTML) concept, enabling web pages to be scripted; that is, to
have different and changing content depending on user input,
environmental conditions (such as the time of day), or other
variables.
– Web authors write client-side scripts in languages such as
JavaScript (Client-side JavaScript) or VBScript, which are based on
several standards:
o HTML scripting
o HTTP
o Document Object Model

– Client-side scripts are often embedded within an HTML document,
but they may also be contained in a separate file referenced by the
document (or documents) that use it. Upon request, the necessary
files are sent to the user's computer by the web server (or servers)
on which they reside. The user's web browser executes the script,
then displays the document, including any visible output from the
script. Client-side scripts may also contain instructions for the
browser to follow if the user interacts with the document in a
certain way, e.g., clicks a certain button. These instructions can be
followed without further communication with the server, though
they may require such communication.

• Server-side Scripting
– It is a web server technology in which a user's request is fulfilled by
running a script directly on the web server to generate dynamic
HTML pages. It is usually used to provide interactive web sites that
interface to databases or other data stores. This is different from
client-side scripting where scripts are run by the viewing web
browser, usually in JavaScript. The primary advantage of server-side
scripting is the ability to highly customize the response based on the
user's requirements, access rights, or queries into data stores.
Common server-side technologies include:
o ASP: A Microsoft-designed solution allowing various
languages (though generally VBScript is used) inside an
HTML-like outer page; mainly used on Windows, with
limited support on other platforms.
o ColdFusion: A cross-platform, tag-based commercial
server-side scripting system.
o JSP: A Java-based system for embedding code in
HTML pages.
o Lasso: A datasource-neutral interpreted programming
language and cross-platform server.
o SSI: A fairly basic system that is part of the common
Apache web server. Not a full programming
environment by far, but still handy for simple things
like including a common menu.
o PHP: A common open-source solution based on
embedding code in its own language into an HTML
page.
o Server-side JavaScript: A language generally used on
the client side but also occasionally on the server side.
o SMX: A Lisp-like open-source language designed to be
embedded into an HTML page.

• Common Gateway Interface (CGI)
– is a standard protocol for interfacing external application software
with an information server, commonly a web server. This allows the
server to pass requests from a client web browser to the external
application. The web server can then return the output from the
application to the web browser.
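
A minimal sketch of a CGI program using Python's legacy cgi module: the web server invokes the script and relays whatever it prints back to the browser. The form field name "user" is invented for the example.

    #!/usr/bin/env python3
    # Minimal CGI script: the server passes the request to this program
    # and returns its standard output to the browser.
    import cgi
    import html

    form = cgi.FieldStorage()                 # parse query string / POST body
    name = form.getfirst("user", "stranger")  # "user" is an invented field

    print("Content-Type: text/html")          # HTTP header
    print()                                   # blank line ends the headers
    print("<html><body><h1>Hello, %s!</h1></body></html>" % html.escape(name))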

• Dynamic Web pages:
– Can be defined as: (1) Web pages containing dynamic content (e.g.,
images, text, form fields, etc.) that can change/move without the
Web page being reloaded, or (2) Web pages that are produced on
the fly by server-side programs, frequently based on parameters in
the URL or from an HTML form. Web pages that adhere to the first
definition are often called Dynamic HTML or DHTML pages.
Client-side languages like JavaScript are frequently used to produce
these types of dynamic web pages. Web pages that adhere to the
second definition are often created with the help of server-side
languages such as PHP, Perl, ASP/.NET, JSP, and others. These
server-side languages typically use the Common Gateway Interface
(CGI) to produce dynamic web pages.
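
A tiny sketch of the second definition, using Python's standard http.server to produce a page on the fly from a URL parameter; the parameter name "city" and port 8000 are invented for the example.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    class DynamicHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Build the page per request, e.g. GET /?city=Pune
            params = parse_qs(urlparse(self.path).query)
            city = params.get("city", ["nowhere"])[0]
            body = ("<html><body>Report for %s</body></html>" % city).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()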

• Digital Certificates
In cryptography, a public key certificate (or identity certificate) is a certificate
which uses a digital signature to bind together a public key with an identity —
information such as the name of a person or an organization, their address, and
so forth. The certificate can be used to verify that a public key belongs to an
individual.
In a typical public key infrastructure (PKI) scheme, the signature will be that of a
certificate authority (CA). In a web of trust scheme, the signature is that of either
the user (a self-signed certificate) or other users ("endorsements"). In either case,
the signatures on a certificate are attestations by the certificate signer that the
identity information and the public key belong together.
Certificates are essential for the large-scale use of public-key cryptography.
Securely exchanging secret keys amongst users becomes impractical to the point
of effective impossibility for anything other than quite small networks. Public key
cryptography provides a way to avoid this problem. In principle, if Alice wants
others to be able to send her secret messages, she need only publish her public
key. Anyone possessing it can then send her secure information. Unfortunately,
David could publish a different public key (for which he knows the related private
key) claiming that it is Alice's public key. In so doing, David could intercept and
read at least some of the messages meant for Alice. But if Alice builds her public
key into a certificate and has it digitally signed by a trusted third party (Trent),
anyone who trusts Trent can merely check the certificate to see whether Trent
thinks the embedded public key is Alice's. In typical Public-key Infrastructures
(PKIs), Trent will be a CA, who is trusted by all participants. In a web of trust,
Trent can be any user, and whether to trust that user's attestation that a
particular public key belongs to Alice will be up to the person wishing to send a
message to Alice.
In large-scale deployments, Alice may not be familiar with Bob's certificate
authority (perhaps they each have a different CA — if both use employer CAs,
different employers would produce this result), so Bob's certificate may also
include his CA's public key signed by a "higher level" CA2, which might be
recognized by Alice. This process leads in general to a hierarchy of certificates,
and to even more complex trust relationships. Public key infrastructure refers,
mostly, to the software that manages certificates in a large-scale setting. In X.509
PKI systems, the hierarchy of certificates is always a top-down tree, with a root
certificate at the top, representing a CA that is 'so central' to the scheme that it
does not need to be authenticated by some trusted third party.
A certificate may be revoked if it is discovered that its related private key has
been compromised, or if the relationship (between an entity and a public key)
embedded in the certificate is discovered to be incorrect or has changed; this
might occur, for example, if a person changes jobs or names. A revocation will
likely be a rare occurrence, but the possibility means that when a certificate is
trusted, the user should always check its validity. This can be done by comparing
it against a certificate revocation list (CRL) — a list of revoked or cancelled
certificates. Ensuring that such a list is up-to-date and accurate is a core function
in a centralized PKI, one which requires both staff and budget and one which is
therefore sometimes not properly done. To be effective, it must be readily
available to any who needs it whenever it is needed and must be updated
frequently. The other way to check a certificate validity is to query the certificate
authority using the Online Certificate Status Protocol (OCSP) to know the status
of a specific certificate.
Both of these methods appear to be on the verge of being supplanted by XKMS.
This new standard, however, is yet to see widespread implementation.
A certificate typically includes:
The public key being signed.
A name, which can refer to a person, a computer or an organization.
A validity period.
The location (URL) of a revocation center.
The most common certificate standard is the ITU-T X.509. X.509 is being
adapted to the Internet by the IETF PKIX working group.
Classes
Verisign introduced the concept of three classes of digital certificates:
Class 1 for individuals, intended for email;
Class 2 for organizations, for which proof of identity is required; and
Class 3 for servers and software signing, for which independent verification and
checking of identity and authority is done by the issuing certificate authority (CA).
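
To see the typical contents listed above on a live certificate, here is a small sketch using Python's standard ssl module; example.com is an illustrative host.

    import socket
    import ssl

    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            cert = tls.getpeercert()   # parsed X.509 fields as a dict

    print("Subject:", cert["subject"])   # the name being certified
    print("Issuer: ", cert["issuer"])    # the signing CA
    print("Valid:  ", cert["notBefore"], "to", cert["notAfter"])  # validity period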

• List of HTTP status codes
1xx Informational
Request received, continuing process.
100: Continue
101: Switching Protocols
2xx Success
The action was successfully received, understood, and accepted.
200: OK
201: Created
202: Accepted
203: Non-Authoritative Information
204: No Content
205: Reset Content
206: Partial Content
3xx Redirection
The client must take additional action to complete the request.
300: Multiple Choices
301: Moved Permanently
302: Moved Temporarily (HTTP/1.0)
302: Found (HTTP/1.1)
303: See Other (HTTP/1.1)
304: Not Modified
305: Use Proxy
Many HTTP clients (such as Mozilla and Internet Explorer) don't correctly
handle responses with this status code.
306: (no longer used, but reserved)
307: Temporary Redirect
4xx Client Error
The request contains bad syntax or cannot be fulfilled.
400: Bad Request
401: Unauthorized
Similar to 403/Forbidden, but specifically for use when authentication is possible
but has failed or not yet been provided. See basic authentication scheme and
digest access authentication.
402: Payment Required
403: Forbidden
404: Not Found
405: Method Not Allowed
406: Not Acceptable
407: Proxy Authentication Required
408: Request Timeout
409: Conflict
410: Gone
411: Length Required
412: Precondition Failed
413: Request Entity Too Large
414: Request-URI Too Long
415: Unsupported Media Type
416: Requested Range Not Satisfiable
417: Expectation Failed
5xx Server Error
The server failed to fulfill an apparently valid request.
500: Internal Server Error
501: Not Implemented
502: Bad Gateway
503: Service Unavailable
504: Gateway Timeout
505: HTTP Version Not Supported
509: Bandwidth Limit Exceeded
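
Since the class of a code is just its first digit, a tester can bucket results with a few lines; a minimal Python sketch:

    # Map an HTTP status code to its class by the first digit.
    CLASSES = {
        1: "Informational",
        2: "Success",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }

    def status_class(code):
        return CLASSES.get(code // 100, "Unknown")

    for code in (200, 302, 404, 503):
        print(code, status_class(code))  # e.g. "404 Client Error"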

Difference between desktop, Client-Server and Web Testing?

Desktop applications run on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You test the complete application broadly in categories like GUI, functionality, load, and the backend, i.e. the DB.

In a client-server application you have two different components to test. The application is loaded on the server machine, while an application exe is installed on every client machine. You test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and the backend. This environment is mostly used in intranet networks, and you know the number of clients and servers and their locations in the test scenario.

A web application is a bit different and more complex to test, as the tester has much less control over the application. The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine, so you have to test it on different web browsers. Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating system compatibility, error handling, static pages, backend testing and load testing.

I think this gives you an idea of all three testing environments. Keep in mind that even though differences exist among these three environments, the basic quality assurance and testing principles remain the same and apply to all.