
RESOLVED - FM 10.2 vs Amazon Redshift PostgreSQL datasource

Started by squish88, 28 May 2013 11:52:39 AM


squish88

Hi all,
I'm attempting to connect to an Amazon Redshift PostgreSQL database. I've got it connected successfully in Cognos Connection via ODBC (yes, it tests OK), and I've got the same data source (same name, info, etc.) set up locally via ODBC (it tests fine there as well). However, when I go into FM 10.2 and try to use the New Query Subject Wizard to get the metadata from that data source, I get this:

QE-DEF-0285 The logon failed.
QE-DEF-0325 The logon failed for the following reason:
RQP-DEF-0068 Unable to connect to at least one database during a multi-database attach to 1 database(s) in:
REDSHIFT

UDA-SQL-0031 Unable to access the "REDSHIFT" database. Check that the connection parameters to the database are configured correctly. For example, ensure that the data source connection contains the signon information, such as a password, to connect to the database.
UDA-SQL-0107 A general exception has occurred during the operation "attach".
[Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application


Any ideas as to what I may be missing?

P.S. - It IS set to use the saved signon information when connecting

squish88

Unbeknownst to me, the ODBC drivers on the 64-bit server weren't 32-bit. Once I used the 32-bit version of the PostgreSQL ODBC drivers, everything connected as expected.
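For anyone hitting the same "architecture mismatch" error: Framework Manager is a 32-bit application, so the DSN it uses must be defined with a 32-bit driver. On 64-bit Windows the 32-bit ODBC Administrator is %WINDIR%\SysWOW64\odbcad32.exe and the 64-bit one is %WINDIR%\System32\odbcad32.exe. A quick way to sanity-check a DSN outside of Cognos is a small pyodbc script (an illustration, not from this thread; the DSN name is the one above, the credentials are placeholders):

import platform
import pyodbc

# The interpreter's bitness determines which set of ODBC drivers gets loaded:
# run this under 32-bit Python to approximate what 32-bit FM will see.
print(platform.architecture()[0])            # '32bit' or '64bit'

# DSN name from this thread; UID/PWD are placeholders.
conn = pyodbc.connect("DSN=REDSHIFT;UID=myuser;PWD=mypassword")
print(conn.getinfo(pyodbc.SQL_DRIVER_NAME))  # which driver DLL answered
conn.close()

If the connect succeeds under 64-bit Python but fails under 32-bit (or vice versa), you have the same bitness mismatch described above.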

Nuffsaid

We were planning to attempt to use Redshift. Can you please keep us updated on how you make out?

Thanks

Nuffsaid.

squish88

So far, I'm able to connect to it and successfully pull data into a report via the direct SQL method (i.e. not using FM for anything other than publishing a package with a data source in it, no query subjects). The only real problem I've run into so far is an issue with Framework Manager where, when you go to use the "New Query Subject Wizard", it pulls EVERY ROW in the table(s) the query points to... which isn't a major problem if you have small tables, but ours have millions of rows in them, which made it irritating at best when attempting to create query subjects. I have a PMR open with IBM about it right now. As it stands, as long as you don't mind direct SQL reports, or waiting FOREVER for query subjects to validate, you can create reports against Redshift.

Nuffsaid

Interesting. BTW, just as an FYI, we went to support when we first began investigating the possibility of using Redshift. We were told that it was not supported, so I'm not sure how much help you'll get from them.

Are you completely unable to use the Metadata wizard in FM to import your tables in the normal fashion?

Quote: there's an issue with Framework Manager where, when you go to use the "New Query Subject Wizard", it pulls EVERY ROW in the table(s) the query points to...
Isn't that the expected behaviour? When we import tables into the model, the default SQL created is "Select * from [sometable]". Then we model our query subjects on top of that.

Don't mean to sound simplistic. Just trying to get a better idea of what you're dealing with.

Really appreciate you taking the time to respond.

Nuff

squish88

So, stated more clearly: we CAN use the wizards successfully, it just takes AGES for the query subject to validate against tables with large amounts of data. For tables that are empty or only have a few hundred rows, it works/validates just fine. For tables with millions of rows, it just sits and spins, and spins, and spins while it reads through each and every one of the MILLIONS of rows in the tables... Normally we'd develop against a DEV/QA environment with far fewer rows in the tables, but as this is a proof of concept right now, we're doing both connectivity and load testing in the same environment.

One workaround we were considering is creating synonyms that point at shell tables (which have the same structure as the 'real' tables, but no data) just to get the FM changes made, and then re-pointing the synonyms at the 'real' tables once those changes are done.
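To make that concrete, here's a rough sketch under some assumptions: Redshift has no synonym object, so a view stands in for the repointable layer; pyodbc and the table names (orders, orders_shell, fm_orders) are illustrative, not from this thread.

import pyodbc

conn = pyodbc.connect("DSN=REDSHIFT;UID=myuser;PWD=mypassword")
cur = conn.cursor()

# Empty "shell" table with the same structure as the real one.
cur.execute("CREATE TABLE orders_shell (LIKE orders);")

# Model the FM query subject against a view; validation then sees zero rows.
cur.execute("CREATE VIEW fm_orders AS SELECT * FROM orders_shell;")

# Once the FM changes are published, repoint the view at the real table.
cur.execute("CREATE OR REPLACE VIEW fm_orders AS SELECT * FROM orders;")
conn.commit()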

  As for expected behavior, there's got to be a better way to validate the underlying data structures than reading ALL of the data.  At least I'd hope there is...

Nuffsaid

Again, interesting. I wouldn't think that a validation would read all the data; agreed, it should just check the metadata/structure?

Where is the Cognos app hosted? I ask because we're running Cognos in an Amazon EC2 instance reporting against a managed Oracle 11gR2 DB that's also on Amazon (also as a POC). I have a fact table with over 54 million rows and it validates in FM in under a minute.

So, potential problem areas: ODBC/connectivity, Postgres/Redshift itself (not being supported), who knows...

Please let us know how you get on, and if you have any questions regarding our setup, feel free!

Nuff

MFGF

Quote from: Nuffsaid on 29 May 2013 11:45:09 AM
Interesting. BTW, just as an FYI, we went to support when we first began investigating the possibility of using Redshift. We were told that it was not supported, so I'm not sure how much help you'll get from them.

As long as the ODBC driver you are using conforms to the ODBC 3.5 standard (or higher) and you are on a Windows platform, you should get support from IBM.
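If you want to check what your driver reports before opening a PMR, the ODBC conformance version is queryable; a small sketch, assuming pyodbc and the DSN from this thread (credentials are placeholders):

import pyodbc

# Ask the driver which ODBC spec version it claims to support.
conn = pyodbc.connect("DSN=REDSHIFT;UID=myuser;PWD=mypassword")
print(conn.getinfo(pyodbc.SQL_DRIVER_ODBC_VER))  # e.g. '03.50' for ODBC 3.5
conn.close()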

Cheers!

MF.
Meep!

MMcBride

Been a while since I posted... so pardon the necro on this thread   ::)

My current company is attempting to plug Cognos 10.2.1 SP12 into Redshift, and we have seen this exact problem.
Well, not the OP's issue exactly, but getting Framework Manager to work...

Even when we limit the number of records to return, when we go to update an object from Netezza to Redshift it attempts to return the hundreds of millions of records in these tables...

So we are creating a mirror environment and limiting the max records per table to 1,000; then we will update all of our objects, validate, etc.
Then we'll repoint the model to the "true" environment to publish the package.

This really limits the usefulness of testing in FM itself, but it does get us around the issue so we can get our model updated and begin testing reports.
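For anyone copying the approach, a minimal sketch of building that limited mirror, assuming a straight CTAS with LIMIT is an acceptable way to sample (pyodbc, and all schema/table names here are hypothetical):

import pyodbc

conn = pyodbc.connect("DSN=REDSHIFT;UID=myuser;PWD=mypassword")
cur = conn.cursor()

# One 1000-row sample per large table; FM validates against the mirror schema.
for table in ["orders", "order_lines", "shipments"]:  # hypothetical names
    cur.execute(
        f"CREATE TABLE mirror.{table} AS SELECT * FROM prod.{table} LIMIT 1000;"
    )
conn.commit()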

I don't need help, but I wanted to let folks know that even though this topic was brought up in 2013, it is still an issue on 10.2.1 in Feb 2017...

the6campbells

The "issue" stems from how the vendors driver architecture is handling a partially read result set.

If it helps, consider a fake example of a table with 10K rows where you test a query subject and look at the first 25 rows in the grid.

When the 'result set' is closed, the driver is probably continuing to drain the rest of the result set by reading the remaining 10K - 25 rows.

Many database systems will happily close a partially read result set without doing that extra legwork, but from recollection Redshift is one that does not.
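One way to observe that draining from the client side (a sketch, assuming pyodbc and a hypothetical multi-million-row table): fetch a handful of rows, then time the close. On a driver that drains, close() is where the remaining rows get read.

import time
import pyodbc

conn = pyodbc.connect("DSN=REDSHIFT;UID=myuser;PWD=mypassword")
cur = conn.cursor()
cur.execute("SELECT * FROM big_table")  # hypothetical multi-million-row table
rows = cur.fetchmany(25)                # read only the first 25 rows

start = time.time()
cur.close()                             # a draining driver reads the rest here
print(f"close() took {time.time() - start:.1f}s")
conn.close()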

You should raise an issue with Amazon to enhance their driver, as some other vendors did when that issue was noted to them.

On another note: if you expect to leverage model-generated SQL from Cognos BI and see Redshift do as much processing of the data as possible, you need to move to Dynamic Query. Generic ODBC will use a very limited set of SQL query constructs, which will incur a lot more local processing more often than not. If you are doing really basic list reporting/data extracts you may not find that a concern, but once you move to more typical query/reporting/analysis/dashboarding perspectives on your data, your performance will suffer.

Amazon Redshift has been integrated via the Amazon JDBC driver since 10.2.2 Fix Pack 2.

the6campbells

Given you mentioned Netezza: their improvement regarding closing after a partial read was delivered in 7.2.0.9-P1.