
Cube Build Time Reduction

Started by andy_mason_84, 06 Dec 2012 05:04:23 AM


andy_mason_84

Hi All,

I have one cube with 10,207,768 rows and 55,081 categories. Leaving the Auto Partition setting at the default 10m, the cube took 3 hours and 8 minutes, as shown below. The slider is on Faster Cube Access and the Number of Passes is always 5.
The source is a Cognos report.

I have played with changing the estimated number of consolidated records and managed to shave off some time, as seen below (I've put a quick rows-per-minute comparison after the list):

Auto Partition: 10m | Start 13:15, End 16:24 (3h 8m)

Auto Partition: 15m | Start 08:50, End 11:27 (2h 36m)

Auto Partition: 11m | Start 13:21, End 16:42 (3h 18m)

Auto Partition: 20m | Start 16:55, End 19:20 (2h 25m)
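
For comparison, here is a quick rows-per-minute check on those runs. This is just a rough Python sketch; the row count and the quoted durations are the only inputs, nothing else is assumed.

    # Rough throughput comparison for the build runs listed above.
    rows = 10_207_768  # rows read for this cube

    runs = {
        "Auto Partition 10m": 3 * 60 + 8,    # 3h 8m, in minutes
        "Auto Partition 15m": 2 * 60 + 36,   # 2h 36m
        "Auto Partition 11m": 3 * 60 + 18,   # 3h 18m
        "Auto Partition 20m": 2 * 60 + 25,   # 2h 25m
    }

    for setting, minutes in runs.items():
        print(f"{setting}: {minutes} min, ~{rows / minutes:,.0f} rows/min")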


Has anyone had any better results with cubes processing large numbers of rows?

Has anybody had positive results manually partitioning cubes?

Thanks for any advice,

Cheers,

AM

cognostechie

I have had a cube reading 64 million records from a Sales table build in 3.5 hrs, and I think that is pretty good for the amount of data it was reading. Cognos recommends 50 million as the optimal limit, but I exceeded that by 14 million and it still built in 3.5 hrs.

Considering you have only 10 million rows, it should take less than that, but it also depends on how you have designed the queries feeding the cube. Having multiple queries instead of one big query is recommended; that makes much more difference than partitioning. The slider should always be set to 'Faster Cube Access' to improve report performance.
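
To put rough numbers on that (purely back-of-the-envelope, and assuming our two environments are even comparable):

    # Throughput comparison using the figures quoted in this thread.
    big_cube_rows, big_cube_hours = 64_000_000, 3.5            # my Sales cube
    small_cube_rows, small_cube_hours = 10_207_768, 3 + 8 / 60  # your 10m run

    print(f"64M-row cube: ~{big_cube_rows / big_cube_hours / 1e6:.1f}M rows/hour")
    print(f"10M-row cube: ~{small_cube_rows / small_cube_hours / 1e6:.1f}M rows/hour")
    # If the smaller cube is reading rows several times more slowly, the
    # bottleneck is more likely the source queries than the partitioning.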

andy_mason_84

Thanks for the reply.

I think the main problem may be that we have two queries feeding the reports, one containing all the Invoice information and the other containing the Order information.

I'm not sure how we can break these down further, as all the data is required.

Will have a think,

Cheers.

cognostechie

So what about the dimension information, like the customer name, ID, date, etc.? I presume all of that is coming from those two queries, so there are joins inside those queries. Cognos recommends having separate queries for each dimension to feed the dimension information, and then having the fact queries associate the data with them. That would also help relate some data items to each other properly.
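
A rough illustration of why separate dimension queries matter. The attribute count here is made up, and I'm using your category count as a stand-in for distinct dimension members, so treat it as an order-of-magnitude sketch only:

    # Compares how many dimension attribute values Transformer has to read when
    # the attributes ride along on every fact row versus coming from a
    # separate dimension query.
    fact_rows = 10_207_768      # rows quoted earlier in the thread
    distinct_members = 55_081   # category count above, used as a stand-in
    attrs_per_member = 5        # hypothetical number of descriptive columns

    wide_query_values = fact_rows * attrs_per_member
    dim_query_values = distinct_members * attrs_per_member
    print(f"attributes repeated on every fact row : {wide_query_values:,}")
    print(f"attributes from a separate dim query  : {dim_query_values:,}")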

wyconian

Hi

When you say the source for the cube is a report, do you mean that you've written a report based on an FM package and are then using that report as the source? If that's what you mean, Transformer will run the report into memory first before it can use it as a data source. Depending on how complex the report is and how good the indexing is, that could take some time.

I would suggest that a more efficient way of building the cube would be to use the FM package as the source; that way Transformer accesses the database directly, which should make it faster.

It may mean that you have to do more work to write the report based on the cube, but it should still be possible.


andy_mason_84

Thanks,
I will look into it. I have inherited these cubes, so I'm not entirely sure why they have been set up to use reports based on an FM model as the source, as opposed to the FM model itself.