A market leader in the field of online classifieds with annual revenues of over $100 million updates its public web applications with monthly releases. The applications are complex, the code changes are often significant, and a reliable, cost-effective regression test is a must in order to maintain an ongoing level of quality.
Business and technology stakeholders in this organization saw the need for automated testing after they realized there were not enough resources to sufficiently test critical customer-facing applications during the quality assurance phases. Applications were often going into production with an insufficient level of regression testing, including the firm’s main revenue-generating, customer-facing web application. This situation created an unacceptable level of risk.
In response, automated functional regression testing of the web application was initiated using a traditional modular approach to automation. Function libraries, object repositories, and scripts were created using industry-standard best practices. As the automation suite grew into a substantial set of tests and major application changes occurred frequently, maintenance of the scripts became resource-intensive.
With several builds released internally on a weekly basis and over 200 automated test scripts in the suite, the automation resource pool could barely keep up with the execution and maintenance of the scripts. In addition, the business sought to increase the automation effort to include over 500 automated tests for the website and initiate automation efforts in its other key applications. This business demand required additional skilled engineering resources.
Unfortunately, additional qualified resources were beyond the budget allotted for the effort. A change in strategy was initiated: design and build the automation so that non-programmers could play a substantial role in the configuration and execution of automation.
A new approach was formulated that better fit the needs of the client. RTTS designed and prototyped a keyword automated testing framework using HP QuickTest Professional (QTP). This framework was structured to have the following features:
- Non-programmers have the ability to create automation scenarios
- Non-programmers have the ability to execute automation scenarios
- Automated scenarios can be efficiently maintained by test engineering resources
- A single framework can be used across multiple applications
Keywords. The framework design that emerged is a system that uses keywords for actions. These keywords can be arranged as sequential steps that direct a flow of actions executed in series. The main components of each step are an action (keyword), an object to act on, and an expected result (Figure 1).
For example, to click a link called ‘help’ that navigates the application to a help page, a user would apply the keyword ‘click’ to the object ‘help’, with the expected result being navigation to the URL of the help page. Using this simple action-object-expected-result definition of a test step, a user can quickly build multiple related steps into a test case for an application transaction. A test case consists of one or more action-object-expected-result steps.
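The action-object-expected-result step described above can be sketched as a simple record. This is an illustrative sketch only; the field names are assumptions for illustration, not the framework's actual schema:

```python
from dataclasses import dataclass

# Hypothetical field names; the case study does not publish its schema.
@dataclass
class Step:
    action: str    # keyword, e.g. "click"
    target: str    # logical object to act on, e.g. "help"
    expected: str  # expected result, e.g. the destination URL

# A test case is simply an ordered list of such steps.
help_test_case = [
    Step(action="click", target="help", expected="/help"),
]
```

Because each step is just data, a tester can author test cases in a table without writing any code.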
Framework. In the framework (Figure 2), the steps for a test case are stored in a table in a database. Other database tables control the flow of execution, store results and store configuration settings. A control table contains all the test case names in an automation suite, and is used for selecting tests for execution. A results table records all the pass/fail information for later analysis and reporting. This table also records total validations, timestamps and configuration data.
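The table layout described above might look roughly like the following. The table and column names here are assumptions chosen for illustration (the case study does not publish the real schema); SQLite stands in for the client's database:

```python
import sqlite3

# In-memory database standing in for the framework's data store.
conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Steps for each test case: one action-object-expected-result row per step.
CREATE TABLE steps (
    test_case   TEXT,
    step_num    INTEGER,
    action      TEXT,     -- keyword, e.g. 'click'
    object_name TEXT,     -- logical object to act on
    expected    TEXT      -- expected result
);
-- Control table: all test case names, with a flag for run selection.
CREATE TABLE control (
    test_case TEXT PRIMARY KEY,
    selected  INTEGER DEFAULT 0   -- 1 = selected for the next run
);
-- Results table: pass/fail plus validations, timestamps, and config data.
CREATE TABLE results (
    test_case   TEXT,
    passed      INTEGER,
    validations INTEGER,  -- total validations performed
    run_time    TEXT,     -- timestamp
    config      TEXT      -- configuration data for the run
);
""")
```

Keeping steps, run selection, and results in separate tables lets non-programmers edit test data while the reporting layer reads results independently.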
RTTS implemented the framework using QuickTest Professional’s user-friendly interface and object repository features. A QTP “driver” script drives the test execution. This driver script is launched on the run machines and performs the actions of retrieving test case steps (actions, objects and expected results), executing those steps and then logging the results. The script interfaces with the database tables to drive the tests. After test case completion, the driver records pass/fail information to the results table. The driver then checks the control table to determine whether an additional test case is selected to run.
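The driver's retrieve-execute-log loop can be sketched in Python (the actual driver was a QTP script; the callback names below are assumptions made for illustration):

```python
def run_driver(next_selected_test, fetch_steps, execute_step, log_result):
    """Pull selected test cases, execute their steps, and log results."""
    while True:
        test_case = next_selected_test()   # poll the control table
        if test_case is None:
            break                          # no more selected tests
        passed = True
        for step in fetch_steps(test_case):    # (action, object, expected) rows
            if not execute_step(step):
                passed = False                 # record failure, keep going
        log_result(test_case, passed)          # write to the results table

# Minimal in-memory stand-ins for the database tables:
queue = ["login_test", "search_test"]
steps = {"login_test": [("click", "login", "/account")],
         "search_test": [("click", "search", "/results")]}
results = {}
run_driver(lambda: queue.pop(0) if queue else None,
           lambda tc: steps[tc],
           lambda step: True,               # simulate every step passing
           lambda tc, ok: results.update({tc: ok}))
```

The loop structure is what lets any run machine pick up whatever work remains: each iteration asks the control table for the next selected test rather than holding a fixed list.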
Underlying code libraries for the framework were created to implement the keyword instructions in the framework database. This involved creating modular functions for all the main keywords and object types in the keyword grammar. Keywords such as ‘click’ and ‘VerifyExists’ required an implementation for each object type they could act on. This activity was easily accomplished in the QTP environment.
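One common way to organize such a library is a lookup table keyed by keyword and object type. This is a hedged sketch of that pattern, not the actual QTP function code; the handlers below merely simulate passing steps:

```python
# Hypothetical handlers standing in for QTP implementations.
def click_link(name, expected):
    # In QTP this would click the link and verify the resulting URL;
    # here we simply simulate a passing step.
    return True

def verify_exists_button(name, expected):
    # Would check that the named button exists on the page.
    return True

# Each (keyword, object type) pair maps to its implementation.
KEYWORD_LIBRARY = {
    ("click", "link"): click_link,
    ("VerifyExists", "button"): verify_exists_button,
}

def execute_step(action, object_type, name, expected):
    # Look up the implementation for this action/object-type pair;
    # a missing pair raises KeyError, flagging an unsupported keyword.
    handler = KEYWORD_LIBRARY[(action, object_type)]
    return handler(name, expected)
```

Adding a new keyword or object type then means adding one function and one table entry, which keeps the grammar easy to extend.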
Once the user (now a non-programmer tester!) has created a library of steps and test cases, the user can select test cases for an execution run. The framework parcels out the test cases to a group of run machines (PCs running QTP), which begin executing the selected test cases. By using the framework, the client achieved increased productivity: test cases were created more quickly, and the automation code required less maintenance (Figure 3).
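The parceling step amounts to distributing a list of selected test cases across the available run machines. A round-robin split is one simple scheme; this is a simplified sketch, since the real framework coordinated run machines through its database tables rather than a function like this:

```python
from collections import defaultdict
from itertools import cycle

def parcel_out(test_cases, machines):
    """Assign test cases to run machines round-robin (illustrative only)."""
    assignment = defaultdict(list)
    for case, machine in zip(test_cases, cycle(machines)):
        assignment[machine].append(case)
    return dict(assignment)

plan = parcel_out(["tc1", "tc2", "tc3"], ["pc1", "pc2"])
```

Pulling work from a shared control table (as the framework did) has the added benefit that a fast machine automatically picks up more tests than a slow one.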
With the new QTP-based framework in place, the software quality team collected the following metrics:
- Within 2–3 months, the team automated more keyword tests than it had previously built under the traditional framework
- Over 25 months, a single resource created and maintained ~450 keyword automated tests, compared with 250 for a single resource over the same period using the traditional approach
- Over the course of 12 months, 900 keyword test cases across 4 web applications were created
- Due to the higher testing volume, defects were discovered earlier in the development cycle (lowering the cost-per-fix), and fewer defects were delivered to production
- Several non-programmer quality assurance analysts participated in the automation effort
- Maintenance was much simpler: code changes were rarely necessary, and when they were, simple changes were the norm