Monday 21 November 2011

Checkpoints in QTP


Checkpoints are used to verify a runtime value against a predefined (expected) value in a test, and they set the pass/fail status in the Test Results file.
We have different types of checkpoints in QTP; a short usage sketch follows the list below.

Standard Checkpoints:
Standard Checkpoints are used to verify a set of property values of standard objects.
Examples: Buttons, Radio Buttons, CheckBoxes, etc.

Image Checkpoints:
Image checkpoints are used to verify a set of property values of image objects.
Examples: image location, width, height, etc.

Bitmap Checkpoints:
Bitmap checkpoints are used to compare an on-screen bitmap image with a bitmap that was captured earlier.
Example: a pixel-by-pixel comparison of the on-screen bitmap with the existing bitmap.

Table Checkpoints:
Table checkpoints are used to compare the values of a table displayed on the screen with predefined values.

Text Checkpoints:
Text checkpoints are used to compare the text displayed in the application with the expected text.

TextArea Checkpoint:
This checkpoint is used to verify that the expected text is displayed within a specified area of the application.

Accessibility Checkpoint:
This checkpoint is used to identify areas of a web application that do not comply with W3C standards.

Page Checkpoints:
These checkpoints are used to verify the properties of a web page.
Examples: links, page load time, etc.

DataBase Checkpoints:
These checkpoints are used to validate the database entry specified in the checkpoint.

XML Checkpoints:
Used to validate the contents of an XML file.
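
As a minimal usage sketch, assuming the checkpoints have already been created in QTP and stored with the test (the Browser, Page, and object names below are hypothetical), a checkpoint is executed during playback with the Check method:

    ' Run a previously created standard checkpoint on a button during playback.
    ' "Login", "Submit" and "WelcomeText" are hypothetical object repository names.
    Browser("Login").Page("Login").WebButton("Submit").Check CheckPoint("Submit")

    ' A text checkpoint created on a page is executed the same way; the result
    ' (pass/fail) is written to the Test Results file automatically.
    Browser("Login").Page("Welcome").Check CheckPoint("WelcomeText")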

Tuesday 15 November 2011

Tips To Become A Good Software Tester

In this article, I am trying to give some tips to help software test engineers be at their best.

01. Test engineers should be involved from the requirements phase of the project. They should understand the requirements of all modules in the project.
02. Based on this understanding of the modules, the test engineer should define a process, and the same process should be followed by everyone on the team. If we do so, we can filter out most of the issues in the initial phase itself.
03. The test engineers can create a template to follow and should make sure that everyone on the team uses the same template.
04. If there are any changes in the requirements, the testing documents should be updated accordingly. It is always better to keep a backup of the document based on the old requirements; it may not be directly useful once the requirements change, but it is still worth keeping.
05. Once the requirements are finalized, divide the project into chunks. This helps to identify which modules have higher priority than others.
06. Once the project is divided into chunks, identify the impacted areas: an issue in one module may affect the behavior of another module, so those modules should be given high priority.
07. Prepare test scenarios for each module first; then write the test scenarios for the impacted areas.
08. Do a peer review as soon as the test scenarios are ready; it helps to clarify any doubts. Then use the standard template to write the test cases.
09. Test cases should clearly describe the actions to be performed. The number of steps should be small, and each step should be descriptive. Based on the functionality, we can assign the test cases a priority of High, Medium, or Low.
10. Once the test cases are ready, map them against the requirements to make sure that all modules have been covered.
11. The most important thing is to identify, with the help of the developers, the modules that have the most complex code. We have to give more attention to those specific modules.

Sunday 13 November 2011

Tips To Write Quality Test Cases


This article gives some basic ideas for writing complete test cases.
Before writing test cases, the tester should know the complete functionality of the module; only then can we develop both positive and negative test cases. The test case relationship can be divided as Test Scenario > Test Case > Test Steps.
Test Scenario: It comes directly from the requirements or user story. It represents a list of test cases and often their sequence.
Test Case: It consists of the list of test steps that need to be performed, and it may be linked to environmental conditions, related bugs, etc.
Test Step: It represents the action to be performed and the expected result from the application.
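For example (a purely hypothetical login feature), the hierarchy might look like this:

    Test Scenario: Verify that a registered user can log in.
        Test Case: Login succeeds with valid credentials (Priority: High).
            Test Step 1: Enter a valid user name. Expected: the value is accepted.
            Test Step 2: Enter the matching password. Expected: the value is masked and accepted.
            Test Step 3: Click the Login button. Expected: the home page is displayed.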
So, as testers, we need to make sure that quality test cases are in place to test the application and deliver it with high quality.
I am trying to give some tips for writing quality test cases here.
01. We need to have one complete template for creating the test cases. The template can be an Excel spreadsheet or a customized tool.
02. Test cases should be descriptive and specific. The title of a test case should be short, concise, and descriptive. The test case should clearly define the purpose and scope of the operations.
03. Test cases should be reusable across many scenarios.
04. We should create both positive and negative test cases, so it is important to keep the order of the test cases in mind. Positive test cases should cover the expected behavior of the application, while negative test cases should cover its unexpected behavior. It is always better to maintain the relationship between the positive and negative test cases.
05. Test cases should be atomic in nature. If a test case is too long and performs too many dependency checks, it becomes a nightmare for the tester to run every dependency check each and every time, and it also reduces the tester's motivation and vigilance.
06. We need to refactor the test cases, that is, rewrite or update them whenever there is a change in functionality. This should be strictly followed by all testers to make sure that the test cases always match and cover all functionality in the project.
07. As testers, we should always have test data for the complex as well as the simple test cases. The test data should be attached along with the test cases, for example as binary attachments.
08. We need to make sure that all required configurations are in place before executing the test cases. A checklist can be used to confirm that the required configurations are in place.
09. It is always better to have the test cases reviewed by peers as well as by the product analyst. It helps in a big way to make sure that we are not missing anything.

Saturday 12 November 2011

Unit Testing


This article gives the details about Unit Testing. Unit testing validates that the individual units of the source code work as expected. From a programming point of view, a unit can be a function, a procedure, or an individual program.


Unit testing is normally done by the Developers and not by the end users.


Usage Of Unit Testing:


The main purpose of unit testing is to isolate a part of the application's source code and make sure that each unit performs its expected task. Unit testing provides a bottom-up testing approach: we first make sure that each part of the source code works as expected, and only then integrate it with other modules. In this way we can eliminate issues at an earlier stage.
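
As a minimal sketch in VBScript (QTP's scripting language), assuming a hypothetical Add function and a hypothetical AssertEqual helper, a unit test simply calls the unit with known inputs and compares the result with the expected output; it can be run with cscript:

    ' Minimal unit-test sketch; Add is the unit under test.
    Option Explicit

    Function Add(a, b)
        Add = a + b
    End Function

    Sub AssertEqual(expected, actual, testName)
        ' Report pass/fail for a single check.
        If expected = actual Then
            WScript.Echo "PASS: " & testName
        Else
            WScript.Echo "FAIL: " & testName & " (expected " & expected & ", got " & actual & ")"
        End If
    End Sub

    ' Each call below is one unit test for the Add unit.
    AssertEqual 5, Add(2, 3), "Add returns the sum of two positives"
    AssertEqual 0, Add(-2, 2), "Add handles a negative operand"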


Unit tests provide a sort of living documentation of the system. Developers looking to learn what functionality a unit provides, and how to use it, can look at its unit tests to gain a basic understanding of its API. Unit test cases embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate or inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A unit test case, in and of itself, documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development.


Unit testing facilitates change: it allows the developer to refactor the code at a later date and still make sure that the module works as expected. Once the unit test cases are in place, code changes can be made more easily and with more confidence.


When software is developed using a test-driven approach, the unit test may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behavior.


Unit testing provides an option to separate the interface from the implementation. Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example of this is classes that depend on a database: in order to test the class, the tester often writes code that interacts with the database. This is a mistake, because a unit test should usually not go outside of its own class boundary, and especially should not cross process or network boundaries, because this can introduce unacceptable performance problems to the unit test suite. Crossing such unit boundaries turns unit tests into integration tests, and when test cases fail, it becomes less clear which component is causing the failure. See also fakes, mocks, and integration tests.


Instead, the software developer should create an abstract interface around the database queries, and then implement that interface with their own mock object. By abstracting this necessary attachment from the code (temporarily reducing the net effective coupling), the independent unit can be more thoroughly tested than may have been previously achieved. This results in a higher quality unit that is also more maintainable.
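
As a hedged sketch of this idea in VBScript (all class, method, and data names below are hypothetical), the unit under test is given a mock data source instead of a real database connection:

    Option Explicit

    ' Mock data source: returns canned data instead of querying a real database.
    Class MockCustomerStore
        Public Function GetCustomerName(customerId)
            If customerId = 1 Then
                GetCustomerName = "Alice"
            Else
                GetCustomerName = ""
            End If
        End Function
    End Class

    ' Unit under test: builds a greeting from whatever store it is given.
    Function BuildGreeting(store, customerId)
        BuildGreeting = "Hello, " & store.GetCustomerName(customerId) & "!"
    End Function

    ' The test wires in the mock, so no database or network is touched.
    Dim mockStore
    Set mockStore = New MockCustomerStore
    If BuildGreeting(mockStore, 1) = "Hello, Alice!" Then
        WScript.Echo "PASS: greeting built from mocked data"
    Else
        WScript.Echo "FAIL: greeting built from mocked data"
    End If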
Limitations:
Like other testing types, unit testing has its limitations. Testing cannot be expected to catch every error in the program, and there may be any number of ways to exercise a unit, so unit testing cannot cover every possible way of executing a particular unit. In particular, unit testing will not catch errors caused by integration, so it should be done in conjunction with other testing activities. Like other testing, unit testing can show the presence of errors but not their absence.

Wednesday 9 November 2011

Improving QTP Performance

This article explains how to improve the performance of the QTP tool if performance drops during script execution. The following steps can be followed to improve the performance.

01. Load only the required add-ins:
Assume that we are working on a Windows-based application. To work on a Windows-based application we need to load only the corresponding add-ins; otherwise the performance of the tool is reduced. For example, the Web add-in is loaded by default, but we do not need add-ins of that kind for a Windows-based application. If we load them anyway, the performance of the automation tool is drastically reduced.

02. Set the run mode to Fast:
To set the run mode to Fast, the script debugger needs to be enabled during the tool installation. Then navigate to Tools > Options > Run and set the run mode to Fast.

03. Disable Active Screens:
The Active Screen takes up a lot of disk space when the script is running, and it is enabled by default as soon as the tool is installed. We can disable it through the View > Active Screen option or by closing the window with the 'x' button. If the Active Screen option is left enabled, we can observe slowness during script editing.
We should also disable Active Screen capture when saving the test by unchecking "save active screen files" in the Save Test dialog box. In some scenarios capturing the screen is mandatory; if it is really needed, make sure that "activate recovery scenarios" is configured appropriately under File Settings > Recovery Scenarios, and check whether 'on error' would handle the job instead of 'on every step'.

04. Capturing images and movies:
Like Active Screens, capturing images and movies should be disabled at run time if it is not mandatory. We can do this by navigating to Tools > Options > Run and unchecking "save movie results" and "save still images". In some scenarios we may still need to capture the movies and images, for example when an exception may occur on some of the screens. In such situations we should select the 'for errors' option instead of 'always'.

05. Recovery Scenarios:
Recovery scenarios play a major role in affecting the performance of the tool. Instead of using recovery scenarios, we can use the Exist method to check whether a particular exception or error has occurred during script execution, as shown in the sketch below. So it is always better to avoid recovery scenarios as much as possible.
If they are mandatory, we can enable recovery scenarios for "on error" conditions via the "activate recovery scenarios" setting.
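
As a hedged sketch (the Window, Dialog, and button names below are hypothetical object repository entries), an Exist check that replaces a recovery scenario might look like this:

    ' Dismiss an unexpected error dialog with Exist instead of a recovery scenario.
    If Window("MyApp").Dialog("Error").Exist(3) Then
        Window("MyApp").Dialog("Error").WinButton("OK").Click
        Reporter.ReportEvent micWarning, "Unexpected dialog", "Error dialog was dismissed"
    End If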

06. Run the script on a clean machine:
I am sure that most organizations will not provide this option, but if we get a separate machine with only the required application and the automation tool, we can improve the performance of the tool considerably because of the reduced processor usage. We also need to make sure that pop-ups are disabled during script recording to avoid unwanted exceptions.
We can also avoid installing unwanted software such as antivirus tools, printer drivers, etc.

These are my points for increasing the performance of the automation tool during script execution. If anyone has more information, kindly let me know.

Thursday 3 November 2011

Dynamic Testing

This article explains the details of Dynamic Testing. Dynamic testing involves the actual execution of the software: we give input and expect some output from the software. These are all validation activities.
Unit Testing, Integration Testing, Acceptance Testing, and System Testing are some of the Dynamic Testing methodologies.
Dynamic testing, or dynamic analysis, is a software testing methodology used to check the dynamic behavior of the code, that is, the actual response of the code or software to user input. In dynamic testing the software code must be compiled and executed against user input, and the behavior of the software is checked. It is the validation portion of verification and validation.
The following are some of the dynamic testing methodologies.

01.Unit Testing
02.Integration Testing
03.System Testing
04.Acceptance Testing

In each of the above-mentioned methodologies, the software code is actually executed against user input. The behavior of the software code should be consistent even though the input varies from time to time.


The following section gives an overview of the dynamic testing methodologies; the details of each methodology will be explained in a separate article.

Unit testing examines whether a certain unit of code works as expected; simply put, it is the validation of a specific unit. A unit may be a function, a procedure, or a program. It is actually performed by the developers.

Integration testing examines the behavior of different units when they are integrated with one another and executed. For example, we may pass data from one module to another module and check the behavior.

System testing is the evaluation of the complete system; it is used to check whether the system meets the requirements. System testing falls under the category of black box testing, which requires no internal knowledge of the software or system.

Acceptance testing verifies that the requirements of the contract are met. It is generally conducted by the customers to make sure that the software meets the requirements mentioned in the contract.

Static Testing

This article explains the details about static testing.

Verification activities fall into the static category. During the verification process we keep a checklist to check whether we are following the organizational standards. These standards can apply to coding, integration, and deployment.

The following are the static testing methodologies.

       Review, Inspection, Walkthrough

Static testing is a type of software testing in which the software is not actually executed. It is the opposite of dynamic testing. In general it is not detailed testing; it mainly checks the code syntax, the algorithm, and the documents. The main purpose of this testing is to make sure that there are no syntax or documentation errors. This type of testing is mainly used by the developer who wrote the code; it can be applied in an isolated environment and is typically used for code reviews and inspections.

From a testing point of view, static testing can be used for reviewing the requirements or specifications. It is the verification part of verification and validation. Bugs discovered during this stage are less expensive to fix than bugs found in later development cycles.

Static code analysis is performed on the software without executing its functionality; portions of the code are reviewed to check the syntax and catch common errors.

A growing commercial use of static analysis is in the verification of properties of software used in safety-critical computer systems and locating potentially vulnerable code.

 