Changing Database Vendors - Same FM Model

Started by david.stachon, 27 Jan 2011 09:46:55 AM

Previous topic - Next topic

david.stachon

Hi,

I am in the process of migrating a data-mart that currently lives in SQL Server 2005, over to Teradata.

There are no changes in table structures between the two environments. It's a true lift and shift.

In terms of the FM model:

I should simply need to recreate my data source (in Cognos Connection), flip the data source interface type to Teradata, test all my data source query subjects and joins, republish, and I should be good to go.

Unfortunately, we're seeing quite poor performance with Cognos talking to Teradata versus MS SQL, so I logged a call with support, and Cognos is recommending that I recreate the identical model from scratch (but this time, starting with the Teradata schema).

...does this seem suspicious? Isn't the whole point of FM to design meta-data that is database vendor agnostic? What are OEM reporting applications doing?

Any thoughts are appreciated.

(Interestingly, we've looked at TD query execution times from the TD server, and they are quite fast. The issue seems to be with Cognos, and perhaps query generation time.)

thanks,
david.

Lynn

Interesting. I'll be curious to hear how you manage. Some friends in a different division at my current client will be doing a similar thing later this year.

I would have guessed the same steps you listed along with revising expressions in model query subjects that reference database functions not supported or supported with slightly different syntax.
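One way to get ahead of those syntax differences is to scan exported model expressions for vendor-specific functions before republishing. A rough sketch, assuming a few common SQL Server/Teradata pairs (illustrative examples only, not an authoritative mapping; verify each against your own model and vendor documentation):

```python
import re

# Illustrative pairs only: a few SQL Server expressions and rough Teradata
# equivalents. These are common examples, not an exhaustive or official list.
VENDOR_FUNCTIONS = {
    r"\bGETDATE\s*\(": "CURRENT_TIMESTAMP",
    r"\bISNULL\s*\(": "COALESCE(...)",
    r"\bDATEADD\s*\(": "date + INTERVAL arithmetic",
}

def flag_sqlserver_functions(expression: str) -> list[str]:
    """Return suggested Teradata replacements for SQL Server-specific
    functions found in a model expression."""
    hits = []
    for pattern, replacement in VENDOR_FUNCTIONS.items():
        if re.search(pattern, expression, re.IGNORECASE):
            hits.append(replacement)
    return hits
```

Running every model query subject expression through something like this gives you a worklist of expressions to revise before testing against the new database.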

Sorry I don't have any great suggestions for you. When you say you looked at query execution times, did you try queries for a variety of scenarios (e.g., from a single table in the physical layer, from two tables in the physical layer with a join, from model query subjects of different complexity)? I'm wondering if the issue lies with a particular set of circumstances rather than across the board. Total shot in the dark....
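That variety-of-scenarios test can be sketched as a small timing harness. Everything below is hypothetical: the table names are placeholders and `run_query` is whatever your driver provides (e.g. a cursor execute-and-fetch), so plug in your own connection and real statements:

```python
import time

def time_scenarios(run_query, scenarios, repeats=3):
    """Time each named query scenario with a caller-supplied run_query
    callable, keeping the best of N runs to reduce noise."""
    results = {}
    for name, sql in scenarios.items():
        timings = []
        for _ in range(repeats):
            start = time.perf_counter()
            run_query(sql)  # e.g. cursor.execute(sql); cursor.fetchall()
            timings.append(time.perf_counter() - start)
        results[name] = min(timings)
    return results

# Hypothetical scenarios of increasing complexity; replace with real SQL.
SCENARIOS = {
    "single_table": "SELECT * FROM sales_fact SAMPLE 100",
    "two_table_join": "SELECT f.amt FROM sales_fact f JOIN dim_date d ON f.dt = d.dt",
}
```

Comparing these numbers against the end-to-end report runtime helps separate database execution time from whatever Cognos adds on top (query generation, result processing).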

RobsWalker68

Hi,

As a long shot, it may be worthwhile making sure that "Allow Enhanced Model Portability at Run Time" is enabled in FM. This forces FM to re-collect the metadata from Teradata instead of relying on what it has already cached for SQL Server.

Once you've done this and tested the model, if all is OK, turn it off again.

Rgds

Rob

david.stachon

Thanks for the suggestions...

Regarding the database functions, I was able to make a change with quite a dramatic result.

I disabled all database vendor function sets except Teradata, and performance improved significantly. (I'm not sure why, because I'm not actually using any functions in the reports I'm testing against... but I'll take it!)

I tried the model portability setting and it didn't seem to have an effect either way (it certainly makes sense, though). From the docs, it seems this has more to do with CRN 1.1 portability than with the database vendor.

The whole app just seems slower across the board. Unfortunately, the entire application is built using DMR OLAP (which I warned was a big mistake from the beginning). Ultimately, I think this makes a good case for moving to a robust MOLAP technology as our data source (perhaps TM1 or Essbase).

RobsWalker68

Hi David,

The idea behind the portability setting was just to refresh the metadata and make sure the model was working on metadata received from Teradata; if you have done that, at least you can discount it. As Lynn suggested, testing the data source objects would also do the same thing. I don't see how rebuilding the model from scratch, as suggested by customer support, is going to get you any further forward in this case, but then again I'm a complete Teradata newbie, so keep that in mind.

DMR, unless you are using materialised views, can be a bit of a pig, and from my limited understanding of Teradata we aren't dealing with a nicely conformed star schema here but a normalised model, albeit backed up by the grunt power of TD and its indexing. Is that still the case, out of interest? My knowledge of it is still formed around the Kimball/Inmon debates.

There are some specific things you can check, like making sure your data source objects aren't complex (i.e. no filters on them), and on the DMR side you can do some investigation to make sure the reports are following all the performance rules.

I guess if you have checked all the obvious things, the only option left is to find the simplest report that is running like a pig and build it up a step at a time until you find the performance issue, then take it from there.

Good luck

Cheers

Rob

blom0344

Couple of thoughts:

1. AFAIK Cognos needs to compile its own SQL using different procedures for every RDBMS. I would not be surprised if this is more optimized for MS SQL than for TD (hence needing more query compilation time)

2. We've been testing DMR for over a year now. We had no major performance issues UNTIL we started using separate aggregation rules (time state rollup in Transformer). Actually, the generated SQL at the backend became a mess, forcing us to come up with a workaround (separate measures for time hierarchy levels)

3. The SQL that Cognos SEEMS to propose (including native) is just that: a proposal. Running traces often shows alternative statements being executed at the database in reality
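Point 3 amounts to diffing the SQL Cognos displays against what a database trace actually captured. Before comparing, it helps to normalize both sides; a minimal sketch of one possible convention (my own, not a Cognos feature):

```python
import re

def normalize_sql(sql: str) -> str:
    """Collapse whitespace, strip line comments, and upper-case so the SQL
    Cognos proposes can be compared with what a database trace captured."""
    sql = re.sub(r"--[^\n]*", " ", sql)  # strip line comments
    sql = re.sub(r"\s+", " ", sql)       # collapse whitespace
    return sql.strip().upper()

def same_statement(proposed: str, traced: str) -> bool:
    """True if two statements match after normalization."""
    return normalize_sql(proposed) == normalize_sql(traced)
```

If `same_statement` comes back False for a slow report, the trace, not the Report Studio "generated SQL" view, is the statement worth tuning or raising with support.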

Arsenal

Quote from: blom0344 on 28 Jan 2011 02:53:58 PM
Couple of thoughts:

1. AFAIK Cognos needs to compile its own SQL using different procedures for every RDBMS. I would not be surprised if this is more optimized for MS SQL than for TD (hence needing more query compilation time)

2. We've been testing DMR for over a year now. We had no major performance issues UNTIL we started using separate aggregation rules (time state rollup in Transformer). Actually, the generated SQL at the backend became a mess, forcing us to come up with a workaround (separate measures for time hierarchy levels)

3. The SQL that Cognos SEEMS to propose (including native) is just that: a proposal. Running traces often shows alternative statements being executed at the database in reality

To your point 3: at one client site (which incidentally was also Teradata) we had some VERY strange SQL being sent to the DB. Performance of the report was not good, and I was seeing nothing out of the ordinary in the SQL generated at the FM and report level. On checking the DB logs, we saw messy SQL that was quite different from what the report/FM was showing. Support checked it out and finally admitted to a bug in the way 8.2 generated Teradata SQL in our environment. They gave us a patch and it cleaned up the SQL.

david.stachon

Great information. Thanks guys.

At this point, I think I'm going to pull the plug on DMR altogether in favour of a more scalable and robust OLAP technology (I think Essbase is going to be the way to go for us, as we already have a corporate license).

Then, hopefully, there'll be only limited amounts of drill-through down to relational data (i.e. to Teradata). It seems that straight-up list reports against model query subjects perform pretty well.

MFGF

Quote from: david.stachon on 03 Feb 2011 01:29:50 PM
I think Essbase is going to be the way to go for us as we already have a corporate license

<Gasp!>  Isn't that tantamount to swearing on an IBM-sponsored forum? ;) :D
Meep!

RobsWalker68

Hi,

I'm not sure if this brings anything to the party, but it might be of interest. While looking for something unconnected, I came across this proven practice document on Cognos 8 / Teradata.

http://www.ibm.com/developerworks/data/library/cognos/infrastructure/databases/page492.html?ca=drs-

Rgds

Rob


david.stachon

I think IBM would be okay with it.  :)

As far as underlying OLAP data sources go, Cognos is agnostic (which I think is one of its strongest features). In terms of DMR, I think they'd be the first to admit it has some serious limitations (although in some circumstances it's bloody convenient). PowerPlay, again, has serious scalability problems; TM1 is just not quite there yet for BI; and I just don't want to go down the path of DB2 Cubing Services (granted, I don't know much about it).

Yes, I did run across that article on TD / Cognos proven practices. It did make a difference, but not quite as dramatic as I was hoping.