DS-FILE-E200 Cannot open (read) Database Alias file ....

Started by JaneSecrist, 08 May 2014 11:34:33 PM

JaneSecrist

DS-FILE-E200 Cannot open (read) Database Alias file 'D:\temp\1\dad_00001f74_cognos_2.tmp'

We have been running DecisionStream for about 10 years now, and suddenly, with no explanation, my jobs are failing with a message like the one above in the logs. Has anyone seen this? Do you know what the recovery is?

I have deleted the job and rebuilt it, but this is our controlling jobstream for all the overnight processing done in this older tool.

MFGF

Quote from: JaneSecrist on 08 May 2014 11:34:33 PM
DS-FILE-E200 Cannot open (read) Database Alias file 'D:\temp\1\dad_00001f74_cognos_2.tmp'

We have been running DecisionStream for about 10 years now, and suddenly, with no explanation, my jobs are failing with a message like the one above in the logs. Has anyone seen this? Do you know what the recovery is?

I have deleted the job and rebuilt it, but this is our controlling jobstream for all the overnight processing done in this older tool.

The only other reference to the DS-FILE-E200 error I can find is in this IBM support technote:

http://www-01.ibm.com/support/docview.wss?uid=swg21333019

Is it possible the D drive has become too full to support the required temp file processing?
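
If you want a quick way to rule that out, a minimal check might look something like this (just a sketch, assuming Python is available on the server; the D:\temp\1 path is taken from your error message):

import shutil

# Report total and free space on the drive holding the DecisionStream temp directory.
total, used, free = shutil.disk_usage(r"D:\temp\1")
print(f"D: drive - total {total / 2**30:.1f} GiB, free {free / 2**30:.1f} GiB")

# Arbitrary 1 GiB threshold - adjust to whatever headroom your builds need.
if free < 2**30:
    print("Less than 1 GiB free - temp/alias file creation may be failing.")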

MF.
Meep!

JaneSecrist

I actually found that article.  It doesn't help with my current problem. 

I think this might actually be catalog corruption, and I'm testing that theory now. The reason I suspect corruption is that most, if not all, of the other jobstreams execute just fine - it's only this one. The bad part is that this one is my controlling jobstream that calls all the others.

I have learned that the files created in that directory have code in them, along with all the connection strings, etc. I have no idea why they are generated and cannot find a reason for them. It seems like a lot of overhead.
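
In case it helps anyone, here is a small sketch (assuming Python on the server; the D:\temp\1 path and the dad_*.tmp name pattern come straight from the error message) to see whether those alias files are piling up or going stale:

import glob
import os
import time

# List the dad_*.tmp database alias files left in the temp directory,
# with size and age, to spot stale or zero-byte ones.
now = time.time()
for path in sorted(glob.glob(r"D:\temp\1\dad_*.tmp")):
    info = os.stat(path)
    age_hours = (now - info.st_mtime) / 3600
    print(f"{path}  {info.st_size:>10} bytes  {age_hours:7.1f} h old")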


MFGF

Quote from: JaneSecrist on 09 May 2014 09:46:09 AM
I actually found that article.  It doesn't help with my current problem. 

I think this might actually be catalog corruption, and I'm testing that theory now. The reason I suspect corruption is that most, if not all, of the other jobstreams execute just fine - it's only this one. The bad part is that this one is my controlling jobstream that calls all the others.

I have learned that the files created in that directory have code in them, along with all the connection strings, etc. I have no idea why they are generated and cannot find a reason for them. It seems like a lot of overhead.

Yes - I'm surprised they are required. Database alias files are the "old" way of defining connections in the halcyon days before DecisionStream used a catalog to store its metadata.

I would suggest you do a File > Backup Catalog, create a new empty database, and set this up as a new empty catalog. Then do File > Restore Catalog and bring in your backup.

Does this fix the issue?

MF.
Meep!

JaneSecrist

I back up the catalog daily because the software and server are so old, and I have a huge fear that our conversion to DataStage will not be complete and these processes will somehow die.

I created an empty test catalog and restored the catalog into it and still had the same problem. 

I recreated the jobstream (painful, but done now) and it seems to be working fine. This points me to something in ds_component_line being broken for the original.
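
For anyone else chasing the same theory, one way to compare the suspect catalog against the restored test copy without guessing at column layouts is to diff the row counts of the ds_ tables. A rough sketch, assuming the catalogs live in SQL Server and are reachable via pyodbc; the DSN names are placeholders:

import pyodbc

# Placeholder DSNs - point these at the original catalog and the restored test copy.
SOURCES = {
    "original": "DSN=DS_CATALOG_ORIG",
    "restored": "DSN=DS_CATALOG_TEST",
}

def table_counts(conn_str):
    """Return {table_name: row_count} for every ds_ table in the catalog."""
    conn = pyodbc.connect(conn_str)
    try:
        cur = conn.cursor()
        # '_' is a LIKE wildcard, which is harmless here - it still matches the ds_ tables.
        cur.execute(
            "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
            "WHERE TABLE_NAME LIKE 'ds_%'"
        )
        tables = [row[0] for row in cur.fetchall()]
        return {t: cur.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0] for t in tables}
    finally:
        conn.close()

orig = table_counts(SOURCES["original"])
test = table_counts(SOURCES["restored"])

# ds_component_line is the table to watch, but report any mismatch.
for table in sorted(set(orig) | set(test)):
    if orig.get(table) != test.get(table):
        print(f"{table}: original={orig.get(table)}  restored={test.get(table)}")

If the counts all match, the breakage is probably inside the rows for that one jobstream rather than missing rows.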

We are finding that mappings get lost when moving from one catalog to another, and that the affected field has to be dropped and re-added as well.

I am beginning to think it knows we are phasing it out and has decided to get all the attention it can before we send it to the nursing home.


MFGF

Quote from: JaneSecrist on 09 May 2014 01:30:44 PM
I back up the catalog daily because the software and server are so old, and I have a huge fear that our conversion to DataStage will not be complete and these processes will somehow die.

I created an empty test catalog and restored the catalog into it and still had the same problem.

I recreated the jobstream (painful, but done now) and it seems to be working fine. This points me to something in ds_component_line being broken for the original.

We are finding that mappings get lost when moving from one catalog to another, and that the affected field has to be dropped and re-added as well.

I am beginning to think it knows we are phasing it out and has decided to get all the attention it can before we send it to the nursing home.

Ha ha - it is alive! :)

It's good that you got past the immediate issue by recreating the jobstream, but I'm curious what the underlying problem was. What version of DecisionStream are you using? Not still on 6.5, I hope!!!

Cheers!

MF.
Meep!