DMR Cube limitation

Started by kimmal, 24 Sep 2015 10:24:04 AM


kimmal

I know there are no hard and fast rules surrounding the size of cubes, however I'm wondering if anyone has had any luck with modelling DMR and creating a cube against a fact table that has between 15M and 20M rows.  Or should we start looking at Dynamic Cubes with this number of fact records?

MFGF

Quote from: kimmal on 24 Sep 2015 10:24:04 AM
I know there are no hard and fast rules surrounding the size of cubes, however I'm wondering if anyone has had any luck with modelling DMR and creating a cube against a fact table that has between 15M and 20M rows.  Or should we start looking at Dynamic Cubes with this number of fact records?

Don't quote me on this, but my feeling is 15-20 million fact rows is right at the upper edge of what I'd use a DMR model for. If you're using CQM, then it's not going to fly - it will crawl. Using DQM might yield improvements, but I'd be recommending an alternative such as Dynamic Cubes or TM1. Whether or not you can use Dynamic Cubes hinges largely on where and how your underlying data is stored. If it's in star (or snowflake) schemas in a data warehouse database that supports DQM access, I'd go with Dynamic Cubes in a heartbeat. If not, then you might need to consider other choices.

All this is my personal opinion, and does not necessarily reflect the views of Muppet Enterprises or reality :)

MF.
Meep!

cognostechie

I have done business with Muppet Enterprises in the past and have found them to be quite knowledgeable, and in this case I would agree that 15-20M is the upper edge for a DMR. This is where a physical cube comes in (I hope all those trying to advocate DMR as better than a cube are reading this). A Dynamic Cube would certainly be a better choice, and if there is any limitation preventing that, then a Transformer cube is a good option. 15-20M rows is nothing at all for a Transformer cube - reports running against such a cube will come up in seconds.

bdbits

I support several Transformer cubes here, but would recommend using one of the other Cognos cubing technologies if at all possible. Transformer is unlikely to ever be upgraded to 64-bit and you are tied to CQM maybe forever with Transformer. And there is no migration path - you have to reimplement the cubes if you want to use one of the other cube offerings.

I defer to those with more experience regarding DMR limitations on fact tables. We have several DMR packages around here that perform fine with a few million facts, but without checking I do not think any of them individually hit 15M rows.

cognostechie

Interesting! I never tried it, but I am going to. If I make a report from a DQM package (which would obviously be 64-bit), then Transformer won't be able to read the data from that report as a data source? Maybe, but if I have a 64-bit installation and create a separate FM model which I use to publish packages for all of my Transformer cubes using CQM, then Transformer would be able to read the data from those packages, correct?

bdbits

The source of the data does not matter. Transformer will read from DQM at build time just fine, but the cubes themselves run within Cognos using the CQM query engine. So you can never go all-64-bit and eliminate CQM, which is not going to see any further enhancement. Though it is probably pretty well tuned by now, I suppose.

Sorry if that was misleading, cognostechie.

Michael75

This is a very interesting discussion!

Last week I was at an open day co-hosted by IBM and one of their premier partners here. One of the presentations was on Dynamic Cubes, and included a comparison of DC with the other technologies available. The speaker stressed that DMR with DQM performs vastly better than with CQM. He put the upper limit for DMR + DQM at 25M rows.

Obviously, this doesn't "prove" anything, and there's always an element of YMMV. I'm attaching the relevant slide from the presentation.

cognostechie

I wonder who made that slide - IBM or the partner? Which partner was that? It says that 2 GB is the data volume limit for PowerCubes, which is wrong! That was the case years ago, when a Transformer cube could only be a single MDC file and the 2 GB limit was imposed by the operating system - especially Windows, which would not let a single file grow beyond 2 GB. In newer versions of Transformer a cube can be spread across multiple files, so there is no limit on the size of the cube. This enhancement was made years ago; I have cubes of 18+ GB.

On the topic of CQM/DQM, no worries bdbits. Actually, it's a good discussion which is helping me understand a little more. 

A CQM query would go back to the database to fetch the data, whereas a DQM query would only go as far as the cached data residing on the app server. Transformer cubes reside on the app server too. I wonder if the CQM engine that works for a relational package is the same as the one that works for a Transformer cube, or maybe the CQM method doesn't apply to the cube at all? The language is also supposed to be different - SQL vs MDX. Whatever the case may be, I see your point about no enhancements to Transformer, and that might be because other cubing technologies are now available.

Michael75

cognostechie wrote:
Quote: It says that 2 GB is the data volume limit for Powercubes which is wrong! That was the case years ago when Transformer cubes could be only a single MDC file and the limit of 2 GB was set by the operating system, especially Windows which would not let a single file grow more than 2 GB.

I think everybody and his dog knows that the 2GB limit is, and has been for a long time, the limit for a single physical cube, and that in recent versions of Transformer there are several possibilities for circumventing this, as you described. The presenter certainly knew this! The slide text in my attachment was just a 'KISS' simplification in a presentation that was not about Transformer, but about Dynamic Cubes :)

MFGF

Quote from: cognostechie on 25 Sep 2015 02:53:48 AM
I wonder who made that slide - IBM or the partner? Which partner was that? It says that 2 GB is the data volume limit for PowerCubes, which is wrong! That was the case years ago, when a Transformer cube could only be a single MDC file and the 2 GB limit was imposed by the operating system - especially Windows, which would not let a single file grow beyond 2 GB. In newer versions of Transformer a cube can be spread across multiple files, so there is no limit on the size of the cube. This enhancement was made years ago; I have cubes of 18+ GB.

On the topic of CQM/DQM, no worries bdbits. Actually, it's a good discussion which is helping me understand a little more. 

A CQM query would go back to the database to fetch the data, whereas a DQM query would only go as far as the cached data residing on the app server. Transformer cubes reside on the app server too. I wonder if the CQM engine that works for a relational package is the same as the one that works for a Transformer cube, or maybe the CQM method doesn't apply to the cube at all? The language is also supposed to be different - SQL vs MDX. Whatever the case may be, I see your point about no enhancements to Transformer, and that might be because other cubing technologies are now available.

That's a very interesting slide, Michael! I happen to have a copy of the official IBM slide (no business partners involved - purely IBM staff) from the Dynamic Cubes PoT IBM runs regularly in the UK. They don't mention any limits for data volumes for DMR, and when asked they give the answer I posted above - 15 to 20 million fact rows is as big as you'd go. IBM doesn't have an official limit, it seems - but their technical people will give personal opinions when asked.

The slide you posted refers to "OLAP over Relational" - this was the term IBM used for DMR using DQM when it was first launched. They stopped using OOR a good while ago, because people just didn't know what it meant. These days they just refer to DMR using DQM. I get the feeling the slide you saw is old and isn't an official IBM message :)

Cheers!

MF.
Meep!

Michael75

MF wrote:

Quote: The slide you posted refers to "OLAP over Relational" - this was the term IBM used for DMR using DQM when it was first launched. They stopped using OOR a good while ago

You are absolutely right, of course, and I didn't pick up this detail last week, because this presentation was about Dynamic Cubes. But the 25M figure popped up in what passes for my brain when I saw the lower figure in this exchange. The slide I showed came from the premier partner. I rather doubt that it dates back to the period when IBM Cognos still referred to DMR as "OLAP over Relational", because I happen to know that the presenter is very experienced with DQM, and unless I'm mistaken these are two different eras.

I'm not going to cite the name of the partner, because I don't want to get into an unproductive "You said X" "But he said Y" "OK, but she said Z" etc. discussion. The partner company has impeccable credentials, and I know the presenter (slightly), enough to have full confidence in him.

Let's leave it at "YMMV" :)

MFGF

Quote from: Michael75 on 25 Sep 2015 09:27:45 AM
MF wrote:

You are absolutely right, of course, and I didn't pick up this detail last week, because this presentation was about Dynamic Cubes. But the 25M figure popped up in what passes for my brain when I saw the lower figure in this exchange. The slide I showed came from the premier partner. I rather doubt that it dates back to the period when IBM Cognos still referred to DMR as "OLAP over Relational", because I happen to know that the presenter is very experienced with DQM, and unless I'm mistaken these are two different eras.

I'm not going to cite the name of the partner, because I don't want to get into an unproductive "You said X" "But he said Y" "OK, but she said Z" etc. discussion. The partner company has impeccable credentials, and I know the presenter (slightly), enough to have full confidence in him.

Let's leave it at "YMMV" :)

Sorry - just to clarify. DMR has always been DMR, but when Cognos 10 came along and introduced Dynamic Query Mode, the term OOR (OLAP Over Relational) was used initially to indicate DMR over DQM (as opposed to DMR over CQM). Of course, being IBM and having such fondness for TLAs, IBM staff then just referred to it as OOR, and everyone else from outside IBM sat at the back whispering to each other "What is OOR?" Eventually IBM figured this out and dropped OOR as a term - so these days they just call it DMR over DQM. It's a win-win situation for them, because a) people understand what they mean now, and b) it uses two TLAs instead of one :)

MF.
Meep!

Michael75

MF wrote:
Quote: DMR has always been DMR, but when Cognos 10 came along and introduced Dynamic Query Mode, the term OOR (OLAP Over Relational) was used initially to indicate DMR over DQM (as opposed to DMR over CQM) etc. etc. etc.

And there I was, silly me, thinking that OOR was a pre-DQM TLA for DMR  :-[

Note to self: CV list containing EN, FR, DE & NL- must add IBM TLA  ;)

cognostechie

Quote from: Michael75 on 25 Sep 2015 06:40:03 AM
cognostechie wrote:
I think everybody and his dog knows that the 2GB limit is, and has been for a long time, the limit for a single physical cube, and that in recent version of Transformer there are several possibilities for circumventing this, as you described. The presenter certainly knew this! The slide text in my attachment was just a 'KISS' simplification in a presentation that was not about Transformer, but about Dynamic Cubes :)

I don't want to drag this out further, and there's no intention of finger-pointing, but facts are facts. Not everyone knows about the 2GB, let alone the dog. I have seen people discontinue a PowerCube and go for DMRs with the argument that the cube was nearing 2 GB and would crash pretty soon. Those were consultants from a premier business partner, and I am not going to name them either. We all know that knowledge depends not on partner status but on the individuals doing the job, and that's the reason for the success or failure of the work, regardless of which partner is doing it. Companies lose their resources and have to hire whoever is available, so partner status doesn't matter, and people have started realizing it. I would defend my contacts too; we all make mistakes (I do too), and that's what happened. Let's just agree on the practical fact!


For the less technically savvy folks reading this, just so you are not misled: there is NO physical limit forcing a single cube to stay under 2 GB, and no workaround is needed to let a cube grow beyond 2 GB. It is standard functionality in Transformer, which you can enable when you define the cube, and it is designed to work - and does work - without doing anything to go around it.