FM Model (DMR Issues)

Started by Rency, 12 Jun 2013 12:55:36 PM


Rency

Hi Techies,
I created a DMR model (star schema) with one fact table of around 11 million records. The customer wants a report for a single process date, which is around 200 thousand rows, but with 2 dimensions on a crosstab the report does not come back within 4 minutes.
I also took the SQL/MDX generated by Report Studio and ran it directly against the database, and it too takes a long time to fetch the data.

SELECT col1, col2, SUM(measure_coln)
FROM fact, dim1, dim2
WHERE fact.colx = dim1.col1
  AND fact.coly = dim2.colx
GROUP BY col1, col2

I have a few doubts:
a. Can this be resolved if I create a Transformer cube?
b. Can you suggest what kind of data volume a DMR model is optimal for?
c. If we migrate to Transformer, can we apply row-level security?


Thanks in advance
Rency


cognostechie

11 million is not too much, so a DMR should work fine. It should not take 4 min! Yes, a Transformer cube would definitely be faster, but then it has other processes to execute, like refreshing the cube regularly and of course building the cube in the first place, which requires additional effort.

Just to explain here, a DMR is not a star schema! A star schema is a relational way of structuring the data; the DMR is the OLAP way of structuring the data.

CognosPaul

4 minutes for 11 million rows seems excessive. Does it take that long when you run that SQL directly?
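
If you want to time it at the database yourself, something like this works in SQL*Plus (assuming an Oracle back end; the table and column names are just the placeholders from your query):

-- Time the Cognos-generated query directly at the database
SET TIMING ON

SELECT col1, col2, SUM(measure_coln)
FROM fact, dim1, dim2
WHERE fact.colx = dim1.col1
  AND fact.coly = dim2.colx
GROUP BY col1, col2;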

The first things to check: do you have any suppression going on? Is your data container a crosstab, chart, or list? Roughly how many members are in the specified levels of each dimension?


blom0344

1. Check the execution plan on the database.
2. Verify your indexing strategy. Is your date a FK within the fact table? Are you using 'real' dates or integer representations?

With the right selectivity on the tables, you should have MUCH better performance with a mere 11 million fact records.
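
For example, on Oracle (a sketch only; the index and column names are hypothetical):

-- Get the optimizer's plan for the generated query
EXPLAIN PLAN FOR
SELECT col1, col2, SUM(measure_coln)
FROM fact, dim1, dim2
WHERE fact.colx = dim1.col1
  AND fact.coly = dim2.colx
GROUP BY col1, col2;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Index the join FKs and the process date the report filters on
CREATE INDEX fact_colx_ix ON fact (colx);
CREATE INDEX fact_coly_ix ON fact (coly);
CREATE INDEX fact_process_date_ix ON fact (process_date);

A full table scan on the fact table in the plan output is usually the smoking gun.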

charon

I agree, 11 million records are (almost) a piece of cake.
Besides an effective index strategy, it might be an option to check for partitioning possibilities (like in Oracle 11g, for example), as sketched below.
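
A range partition on the process date would look something like this (a sketch with illustrative names, using Oracle 11g interval partitioning):

-- One partition per day; a query on a single process date
-- then only touches one partition (partition pruning)
CREATE TABLE fact (
    process_date  DATE NOT NULL,
    colx          NUMBER,
    coly          NUMBER,
    measure_coln  NUMBER
)
PARTITION BY RANGE (process_date)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(
    PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01')
);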
But to sum it up: like Paul and blom, I think you might get good results when optimizing the tables and DDL on the database itself.
cheerz :P