
Workspace full

Started by satay2hot, 24 Apr 2009 01:41:13 AM


satay2hot

Has anyone encountered this error message in Analyst?

What does this mean when pushing data from Analyst to Contributor?

Thanks for your attention

Cheers.

StuartS

The default workspace (memory allocation) that Analyst uses is 64000 (at least it was up to 8.1).

The task you are trying to do means that Analyst needs more than this.

There are two things you can do.

1.  Reduce the amount of data you are pushing from Analyst to Contributor.

2.  Increase the workspace size.  If it is already at 512KB then you need to do number 1.

You may find you are simply trying to push too much data from Analyst to Contributor.  If that is the case, try exporting the D-Cube as a txt file and doing a copy load prepare in Contributor.
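If the export file itself turns out to be very large, you can also split it into smaller load files before running the copy load prepare. A rough Python sketch of the idea (the file names here are made up for illustration):

CHUNK_ROWS = 100_000  # rows per output file; tune to your environment

with open("dcube_export.txt", encoding="utf-8") as src:
    header = src.readline()        # repeat the column header in every part
    part = 0
    rows = CHUNK_ROWS              # force a new file on the first data row
    out = None
    for line in src:
        if rows >= CHUNK_ROWS:
            if out:
                out.close()
            part += 1
            rows = 0
            out = open(f"dcube_part_{part}.txt", "w", encoding="utf-8")
            out.write(header)
        out.write(line)
        rows += 1
    if out:
        out.close()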

Regards

Stuart

mrobby

Also, you should take a look at the @SliceUpdate macro.

This effectively allows you to update one slice of a model or cube, then loops through until all slices have been updated.  I have had to use this before when updating very large cubes or models.
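If it helps to picture what the macro is doing, here is a rough Python sketch of the slice-and-loop idea (@SliceUpdate itself is written in Analyst's macro language, so the names below are purely illustrative):

def push_slice(slice_rows):
    # Stand-in for pushing one slice from Analyst to Contributor.
    print(f"pushing {len(slice_rows)} rows")

def slice_update(cube_rows, slice_size):
    # Push a large cube in fixed-size slices so that no single
    # transfer exceeds the available workspace.
    for start in range(0, len(cube_rows), slice_size):
        push_slice(cube_rows[start:start + slice_size])

# Example: a million rows pushed 50,000 at a time.
slice_update(list(range(1_000_000)), slice_size=50_000)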

nedcpa

You need to consider how often you have to transfer this data from Analyst to Contributor. If this is a one-time load, then load only a few nodes at a time to accomplish your task. You could use the slice update macro, but it would be unnecessary for a one-off situation. If you have to transfer this data on a regular basis, then your simple and safe bet is to use the CLAP (copy>load>prep) feature and set up a macro to run the entire process. It is important to note that A>C links run on a single thread and have not been designed to handle large data volumes.

mrobby

Just on a side note, Nedcpa,

What do your queries typically look like if you are going the import data route?  Since your load is a single column, do you union a bunch of queries together to create the load file?  Just looking for a best practice on how you would write your query to prepare the flat file for load.
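For example, something along these lines is what I have in mind (a runnable Python/SQLite illustration with made-up table and column names, just to show the UNION ALL pattern):

import csv
import sqlite3

# Made-up tables, purely to illustrate the pattern.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (dept TEXT, period TEXT, amount REAL);
CREATE TABLE costs (dept TEXT, period TEXT, amount REAL);
INSERT INTO sales VALUES ('D1', '2009-04', 100.0), ('D2', '2009-04', 250.0);
INSERT INTO costs VALUES ('D1', '2009-04', 40.0), ('D2', '2009-04', 90.0);
""")

# One query per measure, unioned into a single result with one value column.
rows = con.execute("""
    SELECT dept, period, 'Sales' AS measure, amount FROM sales
    UNION ALL
    SELECT dept, period, 'Costs' AS measure, amount FROM costs
    ORDER BY dept, period, measure
""").fetchall()

# Write the flat file that the copy>load>prep step would pick up.
with open("load_file.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["dept", "period", "measure", "value"])
    writer.writerows(rows)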