
 

How to calculate a growth rate in Report when year object is a single object

Started by mederik, 03 Nov 2014 02:41:45 AM


mederik

Hi everybody

I have a dumb question:

I need to calculate a growth rate such as: ((arrival year) - (start year)) / (start year).
The problem is that if you try to do this in a crosstab by pulling the years from your framework's time axis into the columns, the crosstab shows every year that has data under a single year object. You can't separate one year from another in order to write a calculation like the one below:

([Year 2014] - [Year 2013]) / [Year 2013]

Hence my question: how do you put a separate object for each year at your developer's disposal? As I'm not a native English speaker it's difficult to be sure I'm explaining this properly; I hope it makes sense. In other words, when you have a classic relational time axis, how do you create as many independent year objects as there are years, so that you can manipulate those objects independently of each other?

Is it something to do with the way you model your framework, or is it something you do in the report no matter how you model your framework?

NB: One solution I thought of would consist of creating a dynamic time axis from my relational axis in order to get a hierarchy, and therefore as many members (hence objects) as there are years. That way I'd be able to choose a [2013 - year] object and a [2014 - year] object. Has anyone applied this solution? Does it work? Are there other solutions?

Thank you very much for any help you can provide.

Mederik

MFGF

Quote from: mederik on 03 Nov 2014 02:41:45 AM
Hi everybody

I have a dumb question:

I need to calculate a growth rate such as: ((arrival year) - (start year)) / (start year).
The problem is that if you try to do this in a crosstab by pulling the years from your framework's time axis into the columns, the crosstab shows every year that has data under a single year object. You can't separate one year from another in order to write a calculation like the one below:

([Year 2014] - [Year 2013]) / [Year 2013]

Hence my question: how do you put a separate object for each year at your developer's disposal? As I'm not a native English speaker it's difficult to be sure I'm explaining this properly; I hope it makes sense. In other words, when you have a classic relational time axis, how do you create as many independent year objects as there are years, so that you can manipulate those objects independently of each other?

Is it something to do with the way you model your framework, or is it something you do in the report no matter how you model your framework?

NB: One solution I thought of would consist of creating a dynamic time axis from my relational axis in order to get a hierarchy, and therefore as many members (hence objects) as there are years. That way I'd be able to choose a [2013 - year] object and a [2014 - year] object. Has anyone applied this solution? Does it work? Are there other solutions?

Thank you very much for any help you can provide.

Mederik

Not a dumb question at all! :)

The dilemma you are describing here is typical of any relational reporting package in Cognos - data values are represented in terms of query items, and it is tricky to separate different values within a single item of data. With a relational package, you end up having to code a messy calculation for each data value you want to isolate, e.g.:

if ([your Year item] = 2013) then ([your measure value]) else (0)

You would then use these in your subsequent calculation for growth rate.

Messy, but do-able. :)
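As a sketch (assuming your measure is called [Quantity] and your year item is [Year] - substitute your own item names), the full set of data items might look like this:

[Qty 2013] = if ([Year] = 2013) then ([Quantity]) else (0)
[Qty 2014] = if ([Year] = 2014) then ([Quantity]) else (0)
[Growth rate] = (total ([Qty 2014]) - total ([Qty 2013])) / total ([Qty 2013])

You'd need a new pair of conditional items like this for every pair of years you want to compare, which is why it gets messy quickly.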

A much more elegant solution is to add another modelling layer to your Framework Manager model. The objects you currently see and use in your packages are query subjects and query items, and the idea here is to build another layer over these query subjects, consisting of Regular Dimensions and Measure Dimensions. A Regular Dimension contains one (or more) hierarchies - each with one (or more) levels. Each level has "members", identified by a BusinessKey and described by a Caption. A Measure Dimension is a collection of measure values.

If you publish a package containing these Regular Dimensions and Measure Dimensions, you give authors the ability to create member-based reports, where (for example) the Year level of your Date hierarchy has separate members for each year. This makes the process of building your growth rate calculation simple and intuitive - the user can just drag in the desired members to the expression.
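As a sketch (assuming a Year level with members [2013] and [2014], and a measure called [Quantity] - your names will differ), the growth rate expression could then be as simple as:

(tuple ([2014], [Quantity]) - tuple ([2013], [Quantity])) / tuple ([2013], [Quantity])

Here [2013] and [2014] are the year members dragged straight from the package tree, so there's no need for conditional logic to isolate each year.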

Cheers!

MF.
Meep!

mederik

Thanks a lot for your answer.
You said :

Quote from: MFGF
to build another layer over these query subjects, consisting of Regular Dimensions and Measure Dimensions

Is this the same thing as DMR, or is it something else?

If I go for Regular Dimensions and Measure Dimensions, are there possible drawbacks when considering the volume of data in the fact tables?

Mederik

MFGF

Quote from: mederik on 03 Nov 2014 06:10:55 AM
Thanks a lot for your answer.
You said :

Is this the same thing as DMR, or is it something else?

If I go for Regular Dimensions and Measure Dimensions, are there possible drawbacks when considering the volume of data in the fact tables?

Mederik

Yes - these objects collectively are referred to as DMR - Dimensionally Modelled Relational objects.

DMR works well with smaller data volumes. Don't quote me here, but my gut feel is that you will get acceptable performance on fact data sets of up to 10 to 15 million rows. Beyond this, you will probably not be happy with performance, simply because of the number of rows that need to be aggregated for each action (drill down, slice by a new dimension, etc.). Using Dynamic Query Mode can improve matters somewhat, but I'd still not recommend DMR for huge data sets. For larger volumes of data, you probably need to consider Dynamic Cubes (if your underlying data source supports them). I have seen these deliver near-instant response times with large data sets once they have been optimised - the biggest I have witnessed so far was around 50,000,000,000 rows in the underlying fact table.

Cheers!

MF.
Meep!

mederik