NW Business Intelligence

thoughts on technology, B.I., and more…

Testing Reports, How Well Should We Do It?

Posted by Brad Greene on October 14, 2012

I think everyone would agree that one of the long-held goals in Business Intelligence, the delivery of deliverables of known (hopefully high) quality, is a critical success factor. If your BI team does not deliver this, your customers (end users) quickly become disillusioned and suspicious of your work. Even small mistakes, made once too often, can do long-term damage, and you will spend months, maybe years, proving your work can be trusted. As the importance of the data warehouse has risen, so has the need for rigorous testing of deliverables. However, my observation is that in many cases the resources applied to testing have not kept pace, especially when compared to other areas of software development.

This posting is specifically about testing reports. This is an area where I think we can do much better. As developers we understand that every link in the BI delivery chain is critical to delivering quality. However, time and again, I see a dramatic drop in the level of sophistication applied to testing reports compared to the other steps in BI delivery. Development teams work so hard to make sure their data is loaded on time, cleansed, transformed into business value, and made available to their end users, only to have it presented in reports that may have defects. Those defects slip in because testing reports is not easy. It is deceptively complex.

I’m most familiar with the Cognos product suite, but I believe that many of the challenges of testing reports apply to other BI suites as well. Here are some I see:

Reports are typically not written in a procedural language (commonly XML, CSS, HTML, etc.)
Reports are database-dependent, so output is infinitely variable
Reports often contain graphic output like charts and images
Reports can be output in more than one format (PDF, CSV, Excel, XML, etc.)
Reports support variables or parameters, sometimes large numbers
Reports may require support for more than one language
Reports may depend on logic in metadata repositories that can be branched independently of the reports
Reports often involve user security, which can be complex
Reports can support multiple data sources and types
Reports are often supported in multiple environments (browser, browser version, OS, bitness)
Reports can have strict formatting specifications (pixel perfect reports for printing)
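Several of these challenges are combinatorial: output formats, languages, and parameter values multiply into a large matrix of report executions to verify. A minimal sketch of enumerating such a matrix (the dimensions and values here are hypothetical, not from any specific product):

```python
from itertools import product

# Hypothetical test dimensions for one report
formats = ["PDF", "CSV", "Excel"]
languages = ["en", "fr"]
prompt_values = [{"year": 2011}, {"year": 2012}]

# Every combination is one report execution that needs checking
test_matrix = list(product(formats, languages, prompt_values))

# Three small dimensions already yield 3 * 2 * 2 = 12 runs
print(len(test_matrix))
```

Add browsers, operating systems, and security roles to the mix and the count grows fast, which is why spot-checking a handful of combinations misses so much.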

Before I was doing dedicated BI work I worked at a firm that was using Mercury as a testing tool. We developed a set of reports as part of a SaaS product suite. The testing tool used basic “screen scraping” methods to compare report outputs between versions to find changes. We were able to set up automated testing of the reports to accommodate our 6-week release cycle. This was in the early days of very large databases, and ours was constantly undergoing changes to tables and data. Sound familiar? We didn’t think of ourselves as data warehouse developers, but that is what we did, and fortunately we were able to invest in quality testing tools that included features for automating our report testing. Tools like Mercury were relatively expensive even then, but the cost of delivering a bad report was part of the justification.
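That version-to-version comparison can be mimicked with nothing more than a text diff of saved report outputs. A minimal sketch (the function name and sample data are my own, for illustration only):

```python
import difflib

def compare_report_outputs(baseline: str, current: str) -> list:
    """Return the content lines that changed between two report renderings."""
    diff = difflib.unified_diff(
        baseline.splitlines(), current.splitlines(),
        fromfile="baseline", tofile="current", lineterm="",
    )
    # Keep only added/removed content lines, not the diff file headers
    return [
        line for line in diff
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))
    ]

# Saved output from the last release vs. the current build
baseline = "Region,Sales\nEast,1200\nWest,900"
current = "Region,Sales\nEast,1250\nWest,900"

changes = compare_report_outputs(baseline, current)
# Any non-empty result flags the report for human review
```

A change is not necessarily a defect (the data may legitimately differ), so in practice this works best against a frozen test data set where any diff is suspect.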

There are now tools available to do this job that are far more integrated and easier to use. I’m aware of one solid player that offers real testing features to the Cognos market. There may be other companies, but I’ve not seen them. The one I have actually seen work is Motio. It doesn’t solve all the issues I listed above, but nothing does yet. Testing reports takes a lot of process and effort to get right. Is it worth the investment? That’s the question. A colleague of mine sent me an interesting article that might help find that answer. Here Douglas Hubbard discusses how we might better measure the right things when doing a cost-benefit analysis: The IT Measurement Inversion

Making things even more difficult is Agile. With Agile development methods making their way into most companies, it is interesting to see how BI teams adapt to make the most of this methodology. You can find lots of others writing about their experiences with this. As the pressure mounts to quicken the pace of release cycles in BI, the ability to hold the line on quality is even more important. The challenges for the QA team (often the very same BI developers) vary widely depending on the size of the organization. However, there are a few common themes:

Staffing testing roles to meet demand
Underestimating the difficulty of testing BI reports and related analytic output
Getting overwhelmed by the challenges of managing multiple releases (branching is starting to become more common in BI)
Lack of tools to automate report testing (as discussed above)

So what to do?

Here is what I see most organizations do. Bluntly put, they close their eyes and hope, relying on unit testing by developers and spot checks by a few part-time QA staff.

OK, that may be slightly dramatized, but it is essentially what ends up happening. The costs of BI are typically under constant scrutiny (still). No one seems to budget $200K (just an example) or more for testing reports when they start a BI project, and no one understands why you should spend that kind of money on testing. It’s not trivial, for sure. I’m not saying that’s what these tools cost; enterprise software pricing is complicated, and you have people costs as well. However, that’s not the point. The software does something complex and saves companies a lot of money when it is used appropriately. BI teams have to start asking themselves about the cost of failures like these:

Delivering reports with inaccurate values in them
Delivering reports that run slowly or unpredictably
Delivering reports to the wrong people
Delivering reports that allow access to sensitive data (regulatory compliance)
Delivering reports that are badly formatted (missing logos, misaligned columns, etc.)
Reports with misspelled words
Reports that fail under some conditions (one environment and not another)

Whether big or small, BI teams have to take testing of their BI reporting deliverables seriously. Not doing so is a sure sign of a lack of experience, and it is only a matter of time before that approach leads to problems. Unless the project or company is quite small, it can and should afford to invest in a dedicated tool; the payoff is huge in the long run. No team can afford to deliver bad reports, period.

Smaller companies and projects may not be able to afford a dedicated tool. This doesn’t mean you can’t adopt good practices for testing; it just means they will be more work and take more of people’s time. Taking the time to set up dedicated testing environments with known data sets, SQL scripts to validate report output, screen shots from prior reports to compare results against, and so on, is how you mimic an automated tool. You just can’t do all that by clicking buttons, but the concepts are the same. You can learn more about this by watching vendor demos to see what features they offer and building your own manual processes. Then hopefully, some day, your project will be worthy of the investment in automated report testing tools.
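As one illustration of that manual approach, a validation script can rerun the report’s aggregation as SQL against a known data set and compare it to the report’s CSV export. A sketch using an in-memory SQLite database (the table name, columns, and values are all invented for illustration):

```python
import csv
import io
import sqlite3

# Known data set loaded into the dedicated test environment (hypothetical values)
rows = [("East", 1200), ("West", 900), ("North", 450)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

# Validation query: what the report *should* show for each region
expected = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"
))

# Simulated CSV export of the report under test
report_csv = "region,amount\nEast,1200\nWest,900\nNorth,450\n"
reported = {
    r["region"]: int(r["amount"])
    for r in csv.DictReader(io.StringIO(report_csv))
}

# Any region where the export disagrees with the validation query
mismatches = {
    region: (expected.get(region), value)
    for region, value in reported.items()
    if expected.get(region) != value
}
```

An empty `mismatches` dictionary means the export matched the validation query for this data set; anything else is a defect to chase down before release.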

Posted in Business Intelligence | 2 Comments »