A major vendor of business information, research, and library services delivers one of its products as an SOA Web Service. Clients who use this service can create custom portals (for example, in their internal Web sites) that allow employees to search in real time for the most current business information. The service accepts queries in four languages (English, French, German, and Japanese) and returns results in the same language, using a query syntax commonly found in library science applications.
How can a SOAP Web Service vendor regularly check the functionality of its service, when the service has no User Interface?
Implement an “automated test user” using an industry-standard test automation tool and standard SOAP/HTTP components.
The Web Service query functionality was divided into three general areas: Basic Queries, Complex Queries, and Exception Queries. Basic and complex queries are distinguished by the types of search modalities in the query and by the query length. Exception queries are queries that violate the query syntax and return an exception.
An analysis of the service and the expected usage pattern suggested that a breakdown of test queries should focus on English Queries (50% of the total) with the remainder of the queries divided equally among the other languages (~17% each). A total test query set of 350 queries was designed and split up as shown in Table 1.
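The breakdown above can be sketched as a small allocation helper. This is an illustrative reconstruction of the arithmetic only; the actual Table 1 may split the categories differently within each language.

```python
# Hypothetical sketch of the language split described above: 50% English,
# with the remainder divided equally among French, German, and Japanese.
TOTAL_QUERIES = 350

def language_breakdown(total):
    """Allocate half the queries to English and split the rest evenly."""
    english = total // 2
    per_other = (total - english) // 3
    return {
        "English": english,   # 175 queries (50%)
        "French": per_other,  # ~58 queries (~17%)
        "German": per_other,
        "Japanese": per_other,
    }

breakdown = language_breakdown(TOTAL_QUERIES)
print(breakdown)
```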
The Web Service vendor already owned a license for an industry-standard functional test automation tool, so RTTS used a standard SOAP/HTTP package to build a SOAP request-response engine, along with code to load the response messages into the XML DOM and extract result data for comparison against baseline values.
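The request-response approach can be sketched as follows. The element names (`Query`, `Result`) are illustrative assumptions, not the vendor's actual schema, and the canned response stands in for a live HTTP exchange; a real run would POST the envelope to the service endpoint.

```python
# Minimal sketch: build a SOAP envelope for a query, then pull result data
# out of the response via the XML DOM (stdlib minidom here).
from xml.dom.minidom import parseString

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(query_text):
    """Wrap a library-style query string in a SOAP 1.1 envelope."""
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        f"<soap:Body><Query>{query_text}</Query></soap:Body>"
        f"</soap:Envelope>"
    )

def extract_results(response_xml):
    """Load a SOAP response into the DOM and collect the result values."""
    dom = parseString(response_xml)
    return [node.firstChild.data
            for node in dom.getElementsByTagName("Result")]

# Canned response standing in for a live service reply:
canned = (
    f'<soap:Envelope xmlns:soap="{SOAP_NS}"><soap:Body>'
    "<Result>Acme Corp annual report</Result>"
    "<Result>Acme Corp press release</Result>"
    "</soap:Body></soap:Envelope>"
)
print(extract_results(canned))
```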
Baselines for the query engine were provided by another query application sold by the same company. Because search results from the new Web Service were required to match those from this existing search application, it was an ideal choice as a dynamic baseline: both the existing application and the Web Service operated in real time, so data updates in one were reflected in the other.
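The dynamic-baseline check amounts to running the same query through both systems and comparing result sets. A hedged sketch, where the two fetch functions are stand-ins for the real Web Service call and the existing application's query interface:

```python
# Compare one query's Web Service results against the dynamic baseline.
def compare_to_baseline(query, fetch_service, fetch_baseline):
    """Return (passed, only_in_service, only_in_baseline) for one query."""
    service = set(fetch_service(query))
    baseline = set(fetch_baseline(query))
    return service == baseline, service - baseline, baseline - service

# Simulated results for one query (both sources agree, so the query passes):
ok, extra, missing = compare_to_baseline(
    "TITLE=widgets AND YEAR=2003",
    lambda q: ["doc-1", "doc-2"],
    lambda q: ["doc-1", "doc-2"],
)
print(ok)
```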
The full library of queries could be executed and analyzed in approximately 4 hours, which meant the development team could obtain regression information up to twice per day. Typically, however, the regression runs were executed nightly, so that the development group had fresh information each morning on the previous day's code.
Finally, an additional dimension of testing was implemented by capturing the response time for each query, along with the resulting data. This permitted changes in response times to be monitored along the course of development on a per-build basis, as a measurement of the user experience. The figure below shows how the query response time changed over several builds for a sample set of queries (each query is indicated by a different color).
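Capturing response time alongside result data is a thin timing wrapper around each query. A sketch under the assumption that `run_query` stands in for the real SOAP request-response call:

```python
# Illustrative per-query response-time capture: each query is timed and the
# elapsed time recorded alongside its results, so timings can be compared
# from build to build.
import time

def timed_query(run_query, query):
    """Execute one query, returning (results, elapsed_seconds)."""
    start = time.perf_counter()
    results = run_query(query)
    elapsed = time.perf_counter() - start
    return results, elapsed

def run_suite(run_query, queries):
    """Collect response times for a whole regression run."""
    return {q: timed_query(run_query, q)[1] for q in queries}

timings = run_suite(lambda q: ["stub result"], ["query-1", "query-2"])
print(sorted(timings))
```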
The performance of specific queries was clearly quite build-dependent, while other queries were insensitive to software build changes. Seeing how the behavior of specific queries changed in each build gave the development team a picture of the effects of their optimization choices.
- The test automation gave the development team a rapid quality assessment of each build over the specified regression set of 350 queries. Without automation, manually checking each response would have been labor-intensive, onerous, and fraught with errors.
- The software quality team was able to vet each build at the same high level, so build quality could be compared and the development trend was available to the whole team.
- The automation efforts provided an added dividend: the ability to track quality not only by data verification but by response time within each test run. This provided a pre-release view into the user experience of the product.