Cube & Fact table size

Started by ykud, 21 Jul 2005 08:33:00 AM


ykud

PowerPlay (PP), being a MOLAP tool with file-based storage, suffers as fact data size grows.
Therefore:
1. What was your maximum cube / fact table size?
2. What query response times did you get (e.g. on complex Advanced Subsets)?
3. Did cube groups help you with this issue?

Mine, for a start:
1. 200 MB / 10 GB
2. 30 sec
3. Nope, that was just a week's data

forsasi

It doesn't matter how big your cube is as such; it depends on the number of dimensions and measures. If the number of dimensions increases, the query response time will increase. Please check the optimization tips in the Cognos PDFs.

ykud

Surely that's so. But there is certainly a correlation between table size and the number of dimensions :))).
And clients usually ask: "We have a 0.5 PB data warehouse; what is your cube maximum?"
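
A quick illustration of that correlation, in Python (the dimension cardinalities below are purely hypothetical): each added dimension multiplies the number of potential cells, which is one reason table size and dimension count tend to grow together.

from itertools import accumulate
from operator import mul

# Hypothetical per-dimension category counts: day, product, region, channel, version.
cardinalities = [365, 500, 40, 12, 8]
for n, cells in enumerate(accumulate(cardinalities, mul), start=1):
    print(f"{n} dimension(s) -> {cells:,} potential cells")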

I can post a dimension chart as well, if that's of interest.

Darek

I've been successfully running cubes in excess of 2.5 GB, even before Cognos officially supported such monsters. There are many factors that decide how big your cube is (a rough sizing sketch follows the list):

1. # of dimensions
2. # of measures and their storage type
3. # of partitions (auto or manual)
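
For a back-of-envelope feel for how those factors combine, here is a minimal sketch in Python; every constant in it is an illustrative assumption, not a Cognos internal:

from math import prod

dim_cardinalities = [365, 200, 50]  # assumed category counts per dimension
n_measures = 4                      # number of stored measures
bytes_per_value = 8                 # assuming 64-bit storage per measure value
density = 0.02                      # assumed fraction of potential cells populated

populated_cells = prod(dim_cardinalities) * density
size_mb = populated_cells * n_measures * bytes_per_value / 2**20
print(f"~{size_mb:.1f} MB before compression and partitioning overhead")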

Having the cube time-partitioned helps performance as well, but there are limitations on what a time-partitioned cube can contain (like only one real time dimension, but you know, there are fake ones, too).
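
A minimal sketch of why time-partitioning pays off (the data and layout here are made up; PowerPlay's actual storage format is proprietary): a query constrained to one period only has to touch that period's partition.

from collections import defaultdict

facts = [("2004-Q1", "widgets", 10), ("2004-Q2", "widgets", 12),
         ("2005-Q1", "widgets", 9),  ("2005-Q1", "gadgets", 7)]

partitions = defaultdict(list)  # one bucket per time period
for period, product, qty in facts:
    partitions[period].append((product, qty))

# A query for 2005-Q1 scans one small bucket instead of the whole cube.
print(sum(qty for _, qty in partitions["2005-Q1"]))  # 16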

Response times are usually within the 3-30 second range, depending on query complexity, when running on an Intel platform. Any RISC hardware would be a waste of time, regardless of how good the industry thinks it is.

Darek

Additional comments:

1. There is a direct correlation between a cube's size and its run-time performance: a 5 MB cube will perform much faster than a 5 GB one.

2. There is no physical limit on how big a cube can be. In my experience, I have used cubes as large as 10 GB with decent performance.


[To other moderators: if you think the next paragraph is of a self-marketing nature, please remove it]
I'd like to invite everyone around Chicagoland who is interested in more details to a presentation on this very subject at Performance 2005 on Oct 14th.

ykud

Darek, do you, by any chance, know anything about PP's internal processing algorithm?

I still hope to find time someday to run a rather dull but very informative test: loading the same table with different sort orders. The results should vary if any order-based optimization techniques are used :)
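
Something like the following sketch, perhaps (file names, column layout, and row counts are all hypothetical): emit the same fact table in several sort orders, then feed each file to the same cube build and compare build times.

import csv
import random

random.seed(42)
rows = [(random.randint(1, 365), random.randint(1, 500), random.random())
        for _ in range(100_000)]  # (day_id, product_id, amount)

orderings = {
    "unsorted.csv":   rows,
    "by_day.csv":     sorted(rows),                              # day, then product
    "by_product.csv": sorted(rows, key=lambda r: (r[1], r[0])),  # product, then day
}
for name, data in orderings.items():
    with open(name, "w", newline="") as f:
        csv.writer(f).writerows(data)
# If the engine exploits input order, build times on these files should differ.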

Just a remark: rather fast MOLAP engines are being devised in academia (e.g. QC, DWARF), but none of that seems to find its way into Cognos. Pity.

Darek

Unfortunately, I don't. But that DWARF thing looks promising.

ykud

There's still a lot of work to do on those, though:
no incremental updates, no support for multiple measures.

See Quotient Trees & Quotient Cubes -- there's a brilliant idea inside.
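
For anyone curious, here is a toy illustration of the core quotient-cube idea (the data is made up): cube cells that cover exactly the same set of base rows must share the same aggregate value, so each such equivalence class can be stored just once.

from itertools import product
from collections import defaultdict

# Tiny base table: ((dim_a, dim_b), measure). "*" means "all" (aggregated).
base = [(("a1", "b1"), 5), (("a1", "b2"), 3), (("a2", "b1"), 4)]
dims = [("a1", "a2", "*"), ("b1", "b2", "*")]

classes = defaultdict(list)  # frozenset of covered base-row ids -> list of cells
for cell in product(*dims):
    cover = frozenset(i for i, (keys, _) in enumerate(base)
                      if all(c in ("*", k) for c, k in zip(cell, keys)))
    if cover:  # skip empty cells
        classes[cover].append(cell)

n_cells = sum(len(v) for v in classes.values())
print(f"{n_cells} non-empty cells collapse into {len(classes)} classes")
for cover, cells in classes.items():
    print(cells, "-> SUM =", sum(base[i][1] for i in cover))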