(TR1901) PDS-PPE-0104 Error in step 19

Started by MMcBride, 20 Apr 2011 10:52:40 AM


MMcBride

I have a Transformer model that uses 5 different data sources to create 2 different cubes.

One of the cubes has a very large number of measures (156 in total: 56 are straight feeds from the table, 20 are ratios calculated within the cube, and the rest are calculated within the Framework Manager model, i.e. in the SQL of the data sources). These measures are broken into 4 folders: Actual, Budget, PY Actual, PY Budget.

The input scale on all measures is set (by default) to 0.
The output scale varied widely, but because of this error I tried setting it to 2, with a Precision of 2.

All data sources process fine, but when it begins step 19 (populate the cube) I get the following error:
Wed 20 Apr 2011 9:36:50 AM   2   00000000   (TR1901) PDS-PPE-0104 A record was rejected. An arithmetic overflow occurred. (Data) in e:\cognos content\FPA Tree Cube1.mdc. [->OK]

The queries return good data that is balanced.
The cube when it builds also balances out as it should.

The cube was originally written against "dummy data", then migrated to real data. When this migration occurred I received the error you see above.
Thinking the data was just returning funky values, I verified everything and rebuilt the measures exactly as they had been.

This resolved the problem.

We then loaded more data into the Data Mart and I rebuilt the cube.
Once again the error reared its ugly head.

I went out and manually verified that the data is correct: no values over 4 billion, with a maximum of 2 decimal places, coming from the queries.
At this point I also noticed my Output Scale and Precision were all over the place, ranging from 2,2 to 11,9.
Thinking this was causing the issue, I changed the Precision to 2 for every measure.

I also reviewed the input scale; all were set to 0 by default, so I left them as is. I changed the output scale on all measures to 2.

Once again the cube built fine after these changes. We ran fine for about a week or so, adding new data daily and rebuilding the cube daily.

Now once again I am getting the error.
The cube has not been modified, so I thought it had to be the data. However, once again the data is correct and the values are well within the accepted limits.
When I run the SQL outside of Cognos and review each measure, once again my values are all small, with totals at the highest levels no greater than 4 billion and no more than 2 places past the decimal.
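In case it helps anyone reproduce the check, this is roughly what my manual verification looks like. It is just a sketch in Python, assuming the query results have been exported to a file called measures.csv with one column per measure; the file name and the Precision of 2 are placeholders for my setup, not anything Transformer-specific:

# Rough check: flag any measure value Transformer could not store at a given Precision.
# Assumes the query output was exported to measures.csv with one column per measure.
import csv
from decimal import Decimal

PRECISION = 2  # the output precision I set on the measures (placeholder for my setup)
# Per the IBM note quoted further down: the max integer value is 2^63 - 2^9,
# and each step of Precision moves the decimal point one place to the left.
MAX_VALUE = Decimal(2**63 - 2**9) / (10 ** PRECISION)

with open("measures.csv", newline="") as f:
    for row_num, row in enumerate(csv.DictReader(f), start=2):
        for measure, raw in row.items():
            if not raw:
                continue
            value = Decimal(raw)
            # Flag values outside the storable range or with more decimals than the Precision allows.
            if abs(value) > MAX_VALUE or -value.as_tuple().exponent > PRECISION:
                print(f"row {row_num}, {measure}: {value}")

Nothing gets flagged when I run a check like this against the extract.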

This morning I read on the IBM site that output and input scales should normally match, so I set all the output scales to match the 0 of the input scale, but to no avail.

The IBM Tech Support site says:
Question: On some measure, that could even be very basic (no calculation, no weighted average rollup, ...), an arithmetic overflow may occur during cube generation when source data exceed the maximum that Transformer can store in a cube. The error message then is: (TR1901) PDS-PPE-0104 A record was rejected. An arithmetic overflow occurred. (Data) in cube.mdc. The underlying question is: what is the maximum value that Transformer can store when the measure type is "64-bit floating point"?

Answer: What Transformer can store is far less than what is theoretically allowed by the IEEE 754 standard for the 64-bit floating point type (which allows values up to about 10^308).

In Transformer, if the Precision field is set to 0, the maximum [integer] value is:
9,223,372,036,854,775,296
[ and so the minimum value is -9,223,372,036,854,775,296 ]

For information, this corresponds to the following values:
7FFFFFFFFFFFFE00 in hexadecimal
111111111111111111111111111111111111111111111111111111000000000 in binary
which can also be calculated as 2^63 - 2^9

If Precision is more than 0, then just move the decimal point to the left. For example, if Precision is set to 9 (which is the maximum in Transformer), the maximum [decimal] value that can be stored is:
9,223,372,036.854775296
(about 9 billion with nine decimal digits)
- This is public, so it should be OK to share. :)
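Just to sanity-check that quote against my settings, the arithmetic works out as follows (a quick scratch calculation in Python; I am just checking Precision 0, the 2 I set, and the maximum of 9 from the quote):

from decimal import Decimal

max_int = 2**63 - 2**9
print(max_int)        # 9223372036854775296
print(hex(max_int))   # 0x7ffffffffffffe00
for precision in (0, 2, 9):
    # Each step of Precision moves the decimal point one place to the left.
    print(precision, Decimal(max_int) / (10 ** precision))

So with a Precision of 2 the ceiling is roughly 92 quadrillion, and even at the maximum Precision of 9 it is still about 9.2 billion, well above the roughly 4 billion totals I am seeing.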

But when I look at my actual values they are all of the form X,XXX,XXX,XXX.XX, which is well within the limits. These are the fully aggregated values; the lowest-level values are in the thousands (XX,XXX.XX).

So I simply cannot figure out why this error keeps coming back. If I delete and recreate the measures the error seems to go away for a while, but after a few data runs it always seems to come back...

I have an open ticket with IBM at the moment but I thought I would post here as well.
Has anyone seen anything like this before?