Part of an investment bank’s daily operation is to track the value of its Credit Default Swaps. A Credit Default Swap (CDS) is a contract between a buyer and a seller in which the buyer agrees to make a series of payments to the seller, akin to insurance premiums, against default of a credit instrument (such as a bond or mortgage-backed security). In return, the buyer receives a payoff from the seller if the credit instrument goes into default; if it does not, the seller keeps the payments as profit.
Various risk and P&L measures are calculated daily by a risk data warehouse in order to track the changing values of different CDSs. Frequent releases of this valuation system are required because the calculations change in response to shifting business and market conditions. The goal of the software quality effort is to ensure that each new version of the valuation system calculates these measures the same way as the previously deployed production system.
How can a data-warehouse software quality effort compare hundreds of books containing thousands of CDSs against the corresponding measures from another data source, and output the comparisons in a form suitable for further investigation of any mismatched measures?
Implement a high-volume database comparison system that retrieves risk data from various databases and stores it in a results database, from which data analysis can be performed.
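The core of such a harness can be sketched in a few lines. The following is a minimal illustration in Python with SQLite standing in for the warehouses (the actual system was built with VB.Net against production-scale databases, and all table, column, and measure names here are hypothetical): analogous valuation rows from the test and production sources are joined on book, CDS identifier, and measure name, and any difference beyond a tolerance is written to a results table for later investigation.

```python
import sqlite3

TOLERANCE = 1e-6  # hypothetical materiality threshold for a mismatch

def compare_measures(conn: sqlite3.Connection) -> int:
    """Join test vs. production valuations on (book, cds_id, measure)
    and record any difference beyond TOLERANCE in a results table.
    Returns the number of mismatches found."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS results (
            book TEXT, cds_id TEXT, measure TEXT,
            test_value REAL, prod_value REAL, diff REAL)""")
    cur = conn.execute("""
        INSERT INTO results
        SELECT t.book, t.cds_id, t.measure, t.value, p.value,
               t.value - p.value
        FROM test_valuations t
        JOIN prod_valuations p
          ON t.book = p.book
         AND t.cds_id = p.cds_id
         AND t.measure = p.measure
        WHERE ABS(t.value - p.value) > ?""", (TOLERANCE,))
    return cur.rowcount

# Demo with an in-memory database standing in for both systems.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_valuations (book, cds_id, measure, value)")
conn.execute("CREATE TABLE prod_valuations (book, cds_id, measure, value)")
conn.executemany("INSERT INTO test_valuations VALUES (?, ?, ?, ?)",
                 [("BOOK1", "CDS001", "PV", 100.25),
                  ("BOOK1", "CDS001", "CS01", 4.10)])
conn.executemany("INSERT INTO prod_valuations VALUES (?, ?, ?, ?)",
                 [("BOOK1", "CDS001", "PV", 100.25),
                  ("BOOK1", "CDS001", "CS01", 4.35)])
mismatches = compare_measures(conn)
print(mismatches)  # only the CS01 measure differs → 1
```

Pushing the join and tolerance check into SQL, rather than pulling both data sets into application memory, is what makes comparing thousands of instruments in minutes feasible.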
- RTTS designed and built a custom test application using Visual Studio .NET and VB.NET to set up high-volume, SQL-based data comparisons between the test systems and the production systems.
- Approximately 1,000 SQL queries were written against the combined systems to compare measures from the test systems with the analogous data in the production systems.
- Data comparison runs were performed on an ever-expanding set of books, and mismatched CDSs were then investigated.
- Several thousand CDSs and their valuation measures could be compared in under 10 minutes.
- The results of the analysis could then be used to investigate the reasons for differences among the analogous measures.
- The automated framework reduced the time needed to compare CDSs by a factor of 100.
- After comparison runs completed, manual investigation of differences could proceed quickly because the automated comparisons pinpointed the discrepancies.
- Testers could run comparisons and begin investigations within minutes, as opposed to hours.
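The investigation step described above can be sketched as a query over the results database populated by the comparison runs. The snippet below (schema and sample values are hypothetical) summarizes mismatches by book and measure so testers can start with the worst hot spots rather than scanning every instrument.

```python
import sqlite3

# Stand-in results database, as the comparison runs would leave it.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE results (
    book TEXT, cds_id TEXT, measure TEXT,
    test_value REAL, prod_value REAL, diff REAL)""")
conn.executemany("INSERT INTO results VALUES (?, ?, ?, ?, ?, ?)", [
    ("BOOK1", "CDS001", "CS01", 4.10, 4.35, -0.25),
    ("BOOK1", "CDS007", "CS01", 3.90, 3.80,  0.10),
    ("BOOK2", "CDS042", "PV",  99.00, 98.50,  0.50),
])

# Rank mismatch hot spots: which book/measure pairs drifted the most?
summary = conn.execute("""
    SELECT book, measure, COUNT(*) AS n, MAX(ABS(diff)) AS worst
    FROM results
    GROUP BY book, measure
    ORDER BY worst DESC""").fetchall()
for book, measure, n, worst in summary:
    print(book, measure, n, worst)
```

Because every discrepancy is already persisted with its book, instrument, and measure, a tester's first query after a run points directly at the calculations that changed, which is what turns a multi-hour reconciliation into a minutes-long one.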