We are seeing VERY poor performance when writing a Report Studio (RS) report against a cube in 10.1. The CPU on the server is getting maxed out and the report runs for a long time. Are there any options to tune a cube that you are reporting against?
Is this one particular report, or are you having this with all reports? What type of report (list, crosstab, etc.)? What's a long time? How many rows in your facts/dimensions, and how big is your expected result set? How big is the cube? What is your server configuration and how many simultaneous users? If you just write the report to list the highest level in one of your dimensions, is that quick to return? Also, you may have suboptimal RS queries; especially check that you are not using detail filters (they can be a real performance killer). Have you tried querying with Analysis Studio or Business Insight, and how is the performance there?
Hi there.
Also consider the version. 8.4.1 and 10 are supposed to have a souped-up MDD component, whereas 8.4 is still running the old clunker MDD.
I'd suggest you have a read of Transformer in Production - http://www.ibm.com/developerworks/data/library/cognos/page354.html - as this, in conjunction with the questions posed by bdbits, will give you a better understanding of the problem you are facing.
Also on this forum are details on how to turn on performance tracing, which will, I hope, assist you with determining the timing of the report vs. the Cognos components.
Cheers
Peter
As of right now, yes, it is one report giving us this problem (it is the only one set against the cube). The cube works quickly when using AS, but as soon as we put RS in front of it, the performance goes bad. We are maxing out one of the CPUs on a quad-CPU box when running in RS, and it takes over a minute to return any data. The box he is working on is a dev box with only a few users on it. As soon as the report is run, the CPU drops back to idle. We tried dropping the size of the cube down to only current-year info with no luck either; the cube size has little to no effect on performance. We are also querying at the highest levels of our dimensions. The report is a crosstab report.
Hi There
OK, with my support hat on: turn on performance tracing using the following - https://www-304.ibm.com/support/docview.wss?uid=swg21455407
Once you've created the performance trace, look through it for where you see the highest (et) value in milliseconds. Capture the details and paste them into this thread.
As this is a cube, I should have asked what type of cube. E.g. a PowerCube, a Microsoft Analysis Services cube, Essbase, or are you talking about a DMR (FM) cube?
Cheers
Peter
My guess is the issue lies with the way you're building the report.
In the report, go to Tools --> View Generated SQL/MDX. If the query is well written, you should only see an MDX statement. If you see Cognos SQL available in the dropdown, then your query is dumping everything into a temp file and querying that with SQL. Normally the temp file tries to stay in RAM, but depending on the complexity of the query it might generate huge (>2 GB) files.
My general guidelines when working with any OLAP source:
- Avoid detail filters wherever possible
- Member summaries are okay if written in the format "aggregate([Value Expression] within set [Set Expression])"
- No running totals of any kind
- No data casting
- No date calculations
- Use detail filters only in extremely rare and specific scenarios, and then only on measures
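To make the member-summary guideline concrete, here is a minimal sketch of the recommended form. The cube, level, and measure names ([Revenue], [Product Line], [Products]) are hypothetical - substitute your own data items:

```
-- Member summary in the recommended "within set" form; this stays
-- dimensional and is pushed down to the OLAP engine as MDX:
aggregate([Revenue] within set [Products].[Product Line])

-- By contrast, a detail filter like the following tends to force
-- relational processing (Cognos SQL over a temp file) instead of MDX:
[Revenue] > 100000
```

The general idea is to express restrictions and summaries in terms of members and sets, so the whole query can be answered by the cube engine, rather than filtering row-level values after the fact.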
I recommend reading this document: http://www.ibm.com/developerworks/data/library/cognos/page128.html. It very clearly explains why your query might be slow. Building queries against cubes requires an almost completely different mindset from working against relational databases. The initial learning curve is fairly steep, but once you understand it, you'll find dimensional reports significantly easier to work with than relational ones.