
Performance tuning (Urgent)

Started by jatin.popat, 31 Aug 2009 04:40:19 AM


jatin.popat

Please let me know whether this really contributes to performance tuning:

Setting the usage property of query items as Identifier, Fact, or Attribute in the FM model.

And if I don't define the usage property of query items, will it really affect the performance of my reports?

blom0344

Yes and no. Setting the usage property as it should be is a good idea, and the general approach is:

1. Set objects that refer to keys, indices and dates to Identifier
2. Define your facts with the proper aggregate settings (usually Sum)
3. If 1. and 2. do not apply, set the item to Attribute

A database does not relate directly to the way Cognos categorizes its items.

Performance tuning is a matter of finding the slowest component. This could be the query, but in some cases also the amount of work that needs to be done on the server.
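As a rough sketch (table and column names here are made up), this is the kind of SQL you typically see when a fact item carries a Sum aggregate and the identifier/attribute items drive the grouping, so the rollup happens in the database instead of detail rows being hauled back to the report server:

-- SALES_AMT is the fact (aggregate: Sum), ORDER_DATE_KEY the identifier,
-- PRODUCT_NAME an attribute
SELECT
    f.ORDER_DATE_KEY,
    p.PRODUCT_NAME,
    SUM(f.SALES_AMT) AS SALES_AMT
FROM SALES_FACT f
JOIN PRODUCT_DIM p ON p.PRODUCT_KEY = f.PRODUCT_KEY
GROUP BY f.ORDER_DATE_KEY, p.PRODUCT_NAME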

culori

Quote from: blom0344 on 31 Aug 2009 06:47:52 AM
Yes and no....

I can't figure out the 'yes' part of your answer.
More precisely, what is the performance gain if the database does not relate to the fact/attribute/identifier categorisation?

Let's say I have a query subject with the date partition key set to Attribute (wrong, since it is recommended to be Identifier for all dates); if the prompt/filter condition is set to use it, then it should still apply a condition on the partition key, and that should be correct, right?
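A sketch of what I mean (made-up names, assuming the fact table is partitioned on ORDER_DATE_KEY): whichever usage property the item carries in FM, the prompt still becomes a WHERE clause on the partition key, so partition elimination should still kick in:

-- prompt on the date query item ends up as a condition on the partition key
SELECT SUM(f.SALES_AMT) AS SALES_AMT
FROM SALES_FACT f
WHERE f.ORDER_DATE_KEY BETWEEN 20090101 AND 20091231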

sofwhere

Setting the usage property helps you identify the use of the data represented by each query item: whether it is an Identifier, Fact or Attribute.

It helps you get the right data when building your reports or building a model for a cube.
If it is set to Fact, it can be used for aggregating.

Alp

Good topic :-)

1. Is it a Cognos cube you are talking about, or a relational model?

2. Agreeing with blom that we should call things what they really are, I would look into data model tuning options if it is a relational model.
A simple example: I had a SQL Server database with SQL generated by Cognos; after running the Tuning Advisor in SQL Server, I created some statistics and indexes and the response time went down from 5 minutes to 15 seconds (a rough sketch of the kind of objects it proposes is after this list).

3. Point 2. works well for relatively small databases, and when it is a relational data model.

4. What are your requirements, and how far are you from meeting them?
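The sketch for point 2, with made-up table and index names (the real Tuning Advisor output depends entirely on the workload you feed it):

-- supporting statistics plus a covering nonclustered index on the fact table
CREATE STATISTICS ST_SALES_FACT_DATE_CUST
    ON dbo.SALES_FACT (ORDER_DATE_KEY, CUSTOMER_KEY);

CREATE NONCLUSTERED INDEX IX_SALES_FACT_DATE
    ON dbo.SALES_FACT (ORDER_DATE_KEY)
    INCLUDE (CUSTOMER_KEY, SALES_AMT);   -- covers the columns the reports select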

- Alp

blom0344

Quote from: Alp on 23 Oct 2010 12:09:07 AM
Good topic :-)
3. Point 2. works well for relatively small databases, and when it is a relational data model.
- Alp

???

Alp

Quote from: blom0344 on 23 Oct 2010 09:16:59 AM
???

I will take it as 3 question marks :-)

Assuming the Cognos data source is a relational database in this conversation, tuning via creating indexes has its limitations, primarily because of the size of the database. Some time ago I had a case where the data warehouse required frequent updates/rebuilds. After about 150 million records in the fact table, the update slowed down dramatically. Regardless of what the SQL Server Tuning Advisor suggested in terms of additional indexes, that was not an option.
With big databases you may be better off with the database partitioning option.
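A sketch only, with made-up names, of what range partitioning by year looks like in SQL Server:

CREATE PARTITION FUNCTION PF_SALES_YEAR (int)
    AS RANGE RIGHT FOR VALUES (20080101, 20090101, 20100101);

CREATE PARTITION SCHEME PS_SALES_YEAR
    AS PARTITION PF_SALES_YEAR ALL TO ([PRIMARY]);

CREATE TABLE dbo.SALES_FACT
(
    ORDER_DATE_KEY int   NOT NULL,
    CUSTOMER_KEY   int   NOT NULL,
    SALES_AMT      money NOT NULL
) ON PS_SALES_YEAR (ORDER_DATE_KEY);   -- rows land in partitions by date key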

- Alp

blom0344

Couple of remarks here:

The SQL Server Tuning Advisor has to be used with care; it certainly can help, but it may not be the best option in DWH cases.

Partitioning CAN be a better option than a regular index if the number of distinct values is low (partitioning by year, for example). In other RDBMSs one could also use bitmap indices. Partitioning is not a good option if all columns that would be candidates for an index have medium or high cardinality; that is the typical case for regular B-tree indices. A good example is the customer dimension: you may have tens of thousands of customers, and if your queries often relate to specific customer information, then a B-tree index will give the best access.
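To make the customer example concrete (hypothetical names): a plain nonclustered B-tree index on the high-cardinality key serves the single-customer lookups directly:

CREATE NONCLUSTERED INDEX IX_SALES_FACT_CUSTOMER
    ON dbo.SALES_FACT (CUSTOMER_KEY);

-- typical query that benefits from it
SELECT SUM(SALES_AMT)
FROM dbo.SALES_FACT
WHERE CUSTOMER_KEY = 12345;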

Finally, you mention frequent updates/rebuilds of the fact table. In some cases it is good practice to drop the indices, perform the ETL, and then rebuild the indices afterwards. This can lead to much better overall performance of the entire process.
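In SQL Server the same effect can be had by disabling and rebuilding a nonclustered index instead of dropping and recreating it; roughly (index and table names again made up):

ALTER INDEX IX_SALES_FACT_CUSTOMER ON dbo.SALES_FACT DISABLE;

-- ... run the ETL / bulk load into dbo.SALES_FACT here ...

ALTER INDEX IX_SALES_FACT_CUSTOMER ON dbo.SALES_FACT REBUILD;  -- one rebuild instead of row-by-row maintenance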

Alp

Agreed with your remarks.
As you said, it is not always good practice to drop the indexes (or indices? there are different opinions on which spelling is correct for the database objects...). For small incremental updates you do not want to rebuild indexes, as the index rebuild may be more expensive than the update itself.
I have found it quite useful in some cases to create a single composite index to replace separate indexes on a fact table. Under no circumstances is this a generic recommendation! One size does not fit all. But in my case it meets the requirements and does not slow down updates.
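For illustration only (names are made up): replacing two single-column indexes with one composite index covering the common access path, so there is only one index to maintain during updates:

DROP INDEX IX_SALES_FACT_DATE ON dbo.SALES_FACT;
DROP INDEX IX_SALES_FACT_CUSTOMER ON dbo.SALES_FACT;

CREATE NONCLUSTERED INDEX IX_SALES_FACT_DATE_CUSTOMER
    ON dbo.SALES_FACT (ORDER_DATE_KEY, CUSTOMER_KEY)
    INCLUDE (SALES_AMT);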

I think we are deviating from the original topic though ...
We should create another topic if this conversation is of any interest to anyone or take it offline.

Thanks for your opinion.
- Alp