Databases

Started by josephalapatt, 25 Aug 2005 10:29:48 AM


josephalapatt

Apologies to all if this is a really stupid question, but I am really new to the Cognos environment. Our clients use lots of systems: Finance uses SAP, Customer Service uses Navitaire, etc. I know all of this needs to be linked to a Cognos connection, but would there be any performance issues in either of the systems when data is pulled?


Thanks
Joe

cognosfreelancer

Yes, there would be performance implications for the data stores. It all depends on the complexity of the query and the volume of data pulled.

NKT

cognostechie

It also depends on which Cognos component you are talking about.

ReportNet/Impromptu are report writers, so you would connect them to your production DB, and you might see performance issues depending on the type of query and the type of DB you are using. Different databases interpret report-writer queries in different ways.

If you're talking about building cubes with Transformer, there are different ways to make cubes.

1> Build cubes through Impromptu IQDs. This connects to your production DB, so you might have performance issues.

2> The same architecture as above, but build the cubes from a data warehouse DB. This way you are not connecting to your production DB, but you will need to transfer data from production to the data warehouse DB every once in a while.

3> Write a program to dump the data from the production DB into flat files and build the cubes from those flat files (a sketch of such a program follows below). There is no performance issue with this, because you are not connecting to the production DB at build time, and reading the data through a program and creating the flat file(s) is usually very fast, so no issues there either.

Being a programmer, I have found this to be the best way to make cubes, and there are many other advantages too. You eliminate the need for ODBC, and you don't have to rely on the joins and architecture of the source system. Through a program you can massage and join the data in every way and get a cube ten times more powerful than you could ever get from SQL joins (IQDs), since SQL joins are always limited to the table architecture of the DB (and if you create joins in Impromptu that are not supported by the DB's indexes, just watch the performance). Most systems are designed for transactional purposes rather than reporting purposes.
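For illustration only (not the poster's actual program): a minimal sketch of such a dump, assuming a Python DB-API driver (python-oracledb is shown, but any DB-API 2.0 driver works the same way) and hypothetical table and column names. The date filter is what would make the dump incremental.

    import csv
    import datetime

    import oracledb  # hypothetical driver choice; any DB-API 2.0 driver looks the same

    # One indexed read straight from the production DB; any massaging or
    # joining of rows can then happen in the program instead of in SQL.
    conn = oracledb.connect(user="report", password="***", dsn="prod")
    cur = conn.cursor()
    cur.execute(
        "SELECT invoice_no, invoice_date, customer_id, item_id, quantity, net_amount "
        "FROM sales_lines WHERE invoice_date >= :since",
        since=datetime.date(2005, 1, 1),  # only new data -> incremental dump
    )

    # Dump to a flat file that Transformer reads directly -- no ODBC anywhere.
    with open("sales_dump.csv", "w", newline="") as f:
        out = csv.writer(f)
        out.writerow(["invoice_no", "invoice_date", "customer", "item", "qty", "amount"])
        for row in cur:
            out.writerow(row)

    cur.close()
    conn.close()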


Tell you what: option 3> above really worked for us. We eliminated Impromptu for building the cubes, and our customers absolutely love me for it. The build is orders of magnitude faster: a cube that used to take 6 hours (through an IQD going through ODBC) now gets done in 5 minutes, and the whole process is automated through schedulers.




josephalapatt

Thanks for this. It was elaborate. :o


Thanks
Joe

Merri

However, there is one advantage I can think of for building cubes through IQDs: if you want to create drill-through reports from the cube to a lower grain, using IQDs created through an Impromptu catalog or Framework Manager model helps you keep everything in sync, because you can use the same catalog or model to build the reports.

If you're building humongous cubes right down to the atomic transaction level, of course, you don't need drill-through reports.

cognostechie

That's correct. For drilling down to low-level details, IQDs are the better way to go.

We are not building humongous cubes, only going down to the lowest level needed in the Transformer model. For analytical purposes, and for a cube used by senior and middle management, they never care about the lowest level of detail (e.g. invoice line, or even invoice header, in the case of a sales cube), so they never need to drill down to any Impromptu reports either. They only want to see sales by agent, by time period, by region, by item, by product group, and so on, but not which invoice number it relates to. That is for the clerical people, and in our case clerical people don't use cubes.

We are keeping PowerPlay and Impromptu separate from each other, so only Impromptu hits the production DB.

cognosfreelancer

Cognostechie

Your third point is interesting.

I assume you dumped the data file on the same server as the one the cube is built on.
Anything to watch out for with this approach?

Any problems with the size of the data file itself?

NKT

cognostechie

Freelancer:

The application (ERP) is on a UNIX server, and the data file is dumped on that UNIX server. The directory on the UNIX server is shared through NFS and mapped to the Windows (Cognos) server as a drive letter, so Transformer reads the data file directly. A sketch of that setup is below.
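A hypothetical sketch of that share (paths and host names invented here; the real export options and mount syntax depend on the UNIX flavour and the Windows NFS client in use):

    # /etc/exports on the UNIX (ERP) host: export the dump directory
    # read-only to the Cognos server
    /data/cube_dumps   cognos-srv(ro)

    # On the Windows (Cognos) server, map the export to a drive letter with
    # the NFS client, so Transformer sees an ordinary local path:
    mount \\unix-host\data\cube_dumps X: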

There is no problem with the size of the file (it's nowhere near the maximum file size), nor with any other performance factor. In fact, the Sales cube is incremental, so the data file is really small. The Orders cube is not incremental, but the file size is still not bad.


Darek

Cognostechie:

The performance problems you were experiencing with IQDs might have been related to configuration. I agree that building from flat files can occasionally save time, but I wouldn't dismiss IQDs at all. There are many tweaks available to improve performance. Going through ODBC is a BIG no-no.

Second, you did not include information on how long it took to extract those flat files. Yeah, it builds in 5 minutes with flat files, but it takes 5 hours to get those flat files. I'm partially kidding here, just trying to bust an "urban myth" before it starts circulating in the community.

Talking about myths: some trainer on the east coast once said that the best thing you can do to shield yourself from changes in the source DB in ReportNet is to use only calculations in the presentation layer. I was pulling my hair out when I saw the query complexity and performance of a model developed by one of his students.

cognostechie

Darek:

I clearly mentioned in my earlier post that getting those flat files is pretty fast. In my case it does not take 5 hours, and we are talking about 3 years of data. The time taken is between 5 and 7 minutes. You may want to note that it all depends on which database and programming language are being used and, most importantly, how big the data is and how you utilise the indexes of the data tables to READ the data through the program.

I have seen programmers write programs that take 30 minutes to read the data BEFORE the first record is dumped to the file. That is bad programming; it does not disqualify the approach itself.

I also mentioned earlier that my Sales cube is INCREMENTAL, so only the new data is dumped to the flat file. For the other cubes that are not incremental, I am not dumping every line; instead, I structure the program to dump one line at the lowest level required in the cube, not the lowest line level present in the database.

In one of my cubes there are around 500,000 lines in the flat file, which are a summarisation of around 10 million records in the database. For example, the lowest level in the cube is by item, so I create one line covering all the transactions for that item, for that date, for that customer, and so on, whereas in the database there is one record for EVERY transaction. See the difference? And that cube has a lot of dimensions and levels, and they all work. It's the same concept as reading data from summarised tables in the DB. A sketch of this kind of summarisation follows below.
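Again a hypothetical sketch, not the poster's code: collapsing transaction-level rows to one line per date/customer/item before they reach the cube's flat file (file and column names carried over from the earlier dump sketch).

    import csv
    from collections import defaultdict

    # Accumulate totals at the grain the cube actually needs: one line per
    # (invoice_date, customer, item) instead of one per transaction.
    totals = defaultdict(lambda: [0, 0.0])   # key -> [qty, amount]

    with open("sales_dump.csv", newline="") as f:
        for rec in csv.DictReader(f):
            key = (rec["invoice_date"], rec["customer"], rec["item"])
            totals[key][0] += int(rec["qty"])
            totals[key][1] += float(rec["amount"])

    # Millions of transaction records can collapse to a few hundred thousand
    # summarised lines this way.
    with open("sales_cube.csv", "w", newline="") as f:
        out = csv.writer(f)
        out.writerow(["invoice_date", "customer", "item", "qty", "amount"])
        for (day, cust, item), (qty, amount) in sorted(totals.items()):
            out.writerow([day, cust, item, qty, amount])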

I am not saying this is the only way or the best way to make cubes, but it is definitely a good way, as long as the data needed for the cube is not 500 million lines. In that case this approach would not work.

...and I am an "urban myth" too. I do not exist. This is my ghost posting it.


cognostechie

I missed one other point from your post. I would not dismiss IQDs either, but even though we agree that going through ODBC is a no-no, the problem still exists: for most databases you have to use ODBC, and Cognos has not come out with another solution for that.

Darek

I'm not picking on you, just exchanging ideas.

I used to use flat files myself when I was extracting data from a mainframe, so it's not that I'm against them. I like the incremental approach, as it lets you put less stress on the DB.

Major databases like Oracle, DB2, and MS SQL are accessible to Transformer through IQDs and native drivers. Which DBs have you had to use through ODBC?

cognostechie

PROGRESS and ORACLE. When we started on this back in 1996, Impromptu needed different ODBC drivers to connect to both of these. We tried it for some time, in fact for a year, and then changed over to using flat files for the cubes. After that, I didn't try the IQD route for making cubes.

Do recent versions of Impromptu connect to these databases without ODBC? It's possible I wouldn't know, as I haven't used Impromptu in years.

Darek

ORACLE, yes. Progress, don't know.