
User Defined In-Memory Aggregates - Cognos Cube Designer 10.2.2

Started by GeethaKL, 28 Mar 2019 08:01:59 PM


GeethaKL

Hi All,
I have a requirement to create user-defined in-memory aggregates in Cognos 10.2 Cube Designer in order to improve report performance.
Steps taken so far:
1. I started with a cube whose fact table has 80 million rows.
2. The cube has 9 dimensions and 3 measures.
3. Currently the report takes more than 2 minutes to run.
4. Created a user-defined in-memory aggregate that includes a single measure and the 8 dimensions.
5. Selected two measures, and selected a few levels in some dimensions and all levels in the remaining dimensions, as per the report criteria.
6. Published the cube and ran the Aggregate Advisor, including only user-defined aggregates.
7. It ran successfully but left a message:
     "User-defined aggregate may not perform well because it is relatively large compared to the fact table."

Then I started the cube and loaded the in-memory aggregates.
There is a significant improvement in report execution: it now takes about 20 seconds to get the output.

My question: Is it OK to ignore the message from the Aggregate Advisor, given the improvement?
OR
How should I define the user-defined in-memory aggregates so that I don't get such messages?

Please help me understand the concept.

Thank you

Regards
Geetha

bus_pass_man

The answer is: it depends on whether that aggregate is the optimal one under the circumstances.

A report with 8 dimensions in it seems big.  An aggregate with 8 dimensions in it seems big too.

Quote from GeethaKL: "5. Selected two measures, and selected a few levels in some dimensions and all levels in the remaining dimensions, as per the report criteria."

When you select a level from a dimension and bring it into an aggregate, you are telling the aggregate to store data at that level.  The lowest selected level becomes the level of detail for that dimension in the aggregate, so the fact data is aggregated at that grain.  The finer the grain, the less roll-up takes place, and the larger the aggregate ends up, possibly not as compact as you would like.  Eventually you end up with your case: an in-memory aggregate that is not much smaller than the source fact table and is faster only because it is held in memory.
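To make that sizing intuition concrete, here is a rough back-of-the-envelope sketch in Python (nothing Cognos-specific, and the dimension cardinalities are made up).  The row count of an aggregate is bounded above by the product of the member counts at the lowest selected level of each included dimension, capped by the fact row count.  When many dimensions are included at or near their leaf levels, that bound quickly reaches the fact row count, which is exactly the situation the Aggregate Advisor warns about.

from math import prod

FACT_ROWS = 80_000_000

def estimate_aggregate_rows(level_cardinalities, fact_rows=FACT_ROWS):
    # Loose upper bound: product of member counts at the lowest selected
    # level of each included dimension, capped at the number of fact rows.
    return min(prod(level_cardinalities), fact_rows)

# Hypothetical member counts at the lowest selected level of 8 dimensions.
deep_levels   = [365, 5000, 200, 50, 1000, 12, 30, 40]   # several leaf-level picks
coarse_levels = [12, 50, 20, 5, 10, 4, 3, 2]             # rolled up to higher levels

for name, levels in [("deep", deep_levels), ("coarse", coarse_levels)]:
    rows = estimate_aggregate_rows(levels)
    print(f"{name:>6} aggregate: up to {rows:,} rows "
          f"({rows / FACT_ROWS:.0%} of the fact table)")

The real aggregate is usually smaller than this bound, because not every combination of members actually occurs in the fact data, but the takeaway holds: selecting low levels across many dimensions leaves little room to roll anything up, so the aggregate ends up nearly the size of the fact table.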

It all depends on the circumstances, though.  I'm not quite sure about the report design.

Do you have workload logs generated? They might be a good place to identify smaller aggregates, which would perform better than the aggregate you created and probably would not generate a message like that.