Design/Performance Questions

Started by slylock, 04 Dec 2009 10:08:10 AM


slylock

Greetings all -

I'm putting together a package that runs against a SQL 2005 database. Right now it's a pretty straightforward design: one table, about 30 columns and 6+ million rows (with potential for A LOT more). When I start pulling more than 5 columns, performance is pretty slow, and if I start adding in some filters and throw it into a pivot table, results take a very long time.

The table is indexed and the server is optimized (in fact, other packages that use other databases with complex joins, etc. perform very well), but I'm not sure what else to try to improve performance (I've run SQL Profiler against it and no changes were suggested). Is there something within FM that I could try, or is there something within the database that I need to do?


Secondly, this project is basically a rewrite of a reporting project that uses 30+ different customer databases (each with 3-6 million rows of data). Since they all share largely the same information, my goal was to merge them into a single database that the BI tool could run against.

Aside from the performance issues against this subset of data, the other issue I'm seeing is with filters. Each customer database has its own unique set of codes, but when a report runs against the centralized schema and the user wants to filter on one of these codes, the prompt brings up the codes from all the different customers, so the user may end up selecting a code from a customer other than the one they intend to filter on. Is there a way to build a filter so that it's associated with a specific customer type or class?

blom0344

It all depends.
A table may contain 6+ million rows, but the aggregated set that Cognos needs to process may only contain a few hundred or a few thousand records. That is the whole goal of BI: creating a high-level overview by aggregating factual data to higher levels.
You do not specify what reports you intend to build, but do not expect good performance from any tool if you bring in data at the lowest grain.

Indices or not, if the query is optimized at the database level but still returns tons of records, then Cognos will stall when it needs to build a crosstab (pivot table) on such a mass of data.
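
To illustrate the difference, here is a minimal sketch against a hypothetical fact table (fact_orders, with made-up column names), assuming the report only needs monthly totals per customer:

-- Lowest grain: Cognos receives millions of detail rows and has to
-- aggregate them itself while building the crosstab.
SELECT customer_code, order_date, product_code, amount
FROM   fact_orders;

-- Aggregated at the database level: the same report content, but only
-- a few hundred or thousand summary rows travel to Cognos.
SELECT customer_code,
       DATEPART(year, order_date)  AS order_year,
       DATEPART(month, order_date) AS order_month,
       SUM(amount)                 AS total_amount
FROM   fact_orders
GROUP BY customer_code,
         DATEPART(year, order_date),
         DATEPART(month, order_date);

The second query returns roughly what the crosstab actually displays, so the database does the heavy lifting instead of Cognos.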

Your issue with each customer needing their own set of codes is very easy to solve. When merging the data, bring in an object that holds the data origin. You can then allow for a pre-filter (through a prompt) that gives the specific code set for a customer.
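
A minimal sketch of that idea, assuming hypothetical table and column names (customer_source is the data-origin column added during the merge):

-- During the merge, stamp every row with the database it came from.
INSERT INTO central.dbo.codes (customer_source, code, description)
SELECT 'CUSTOMER_A', code, description
FROM   customer_a.dbo.codes;

-- A report or prompt query can then restrict the code list to one customer,
-- so users never see another customer's codes.
SELECT code, description
FROM   central.dbo.codes
WHERE  customer_source = 'CUSTOMER_A';

In Framework Manager the same pre-filter can typically be expressed with a prompt macro on the query subject, e.g. [Codes].[customer_source] = #prompt('pCustomer', 'string')# (pCustomer being a hypothetical parameter name), so the user picks the customer first and the downstream code prompt only shows that customer's values.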