
PlanningErrorLog: Destructive unpack detected

Started by mr j, 18 Mar 2013 06:17:07 AM


mr j

Hi, again..

Has anyone got an idea what the following entry in PlanningErrorLog means, exactly:

"Destructive unpack detected. The following objects are overwritten: 97 101 99 110 107 108 111 112 105 106 109 118..." and so on, a long list of numbers.

A CAC synchronize macro with "Save changes if destructive" checked was running at the time the above appeared in the log. How could I find out which objects those numbers refer to? This occurs regularly, and it's always the same numbers.

skinners666

You might want to save the application definition XML and see if you can find references to those objects in it. You could also just try a manual synchronise and see what differences it reports.
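That XML search can be scripted rather than done by eye. A minimal sketch, assuming you have saved the application definition XML from CAC; the sample XML and the id list here are placeholders for your own export and log values:

```python
import re

def find_object_ids(xml_text, object_ids):
    """Map each object id to the 1-based line numbers where it appears
    as a standalone number in the text."""
    hits = {}
    for lineno, line in enumerate(xml_text.splitlines(), start=1):
        # \b word boundaries ensure id 97 does not also match 197 or 970
        for num in re.findall(r"\b\d+\b", line):
            if num in object_ids:
                hits.setdefault(num, []).append(lineno)
    return hits

# ids taken from the logged error message
ids_from_log = {"97", "101", "99", "110"}
sample = '<object id="97" name="?"/>\n<object id="197" name="?"/>'
print(find_object_ids(sample, ids_from_log))  # {'97': [1]}
```

To use it for real, read your saved export with `open("your_export.xml").read()` and pass that in; a shell `grep -wn` would do much the same job.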

mr j

Thanks for the reply.

However, I was wrong. A closer look at the timestamps shows it is actually the eList import that causes this error in the log, including when importing manually from a file.

I'm still interested to hear thoughts on this and on what could be expected as a result of this "destructive unpack".
I looked at the XML but couldn't figure anything out.

Danno

Just thinking out loud here, and these are probably things you have already considered while looking into this issue. However, I wonder if those are item IDs being referenced from the eList dimension. If that is the case, and it is reporting a destructive unpack, my questions are:

Is the eList definition changing every time you do an eList import, or are you just adding more items to the existing eList? If you are just adding more eList items, you can get this warning without it really impacting the application. If you are changing the eList and dropping items, then data may be removed by the change, or a rollup may be redefined by it, and that is what flags the error.

Have you noticed anything being negatively affected in the app after you get this error? Does anything get wiped out, do you lose any access table rules or data, are any admin links affected? You're not really indicating this, but that kind of detail helps.

Did you look in the publish tables to see whether those numbers do in fact correspond to item IDs for any of your eList items?
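That check reduces to a set comparison. A sketch, assuming you can export the candidate item ids (from the publish tables, or any D-List) to a plain list; all the names and id values below are illustrative:

```python
def compare_ids(error_ids, known_item_ids):
    """Split the ids from the error message into those that match the
    exported item ids and those that remain unexplained."""
    error, known = set(error_ids), set(known_item_ids)
    matched = sorted(error & known, key=int)
    unexplained = sorted(error - known, key=int)
    return matched, unexplained

error_ids = ["97", "101", "99", "110", "107"]       # from PlanningErrorLog
elist_item_ids = ["7", "97", "110"]                 # e.g. exported item ids
print(compare_ids(error_ids, elist_item_ids))
# (['97', '110'], ['99', '101', '107'])
```

Any ids left in the "unexplained" set would point at some other dimension, which is in fact what mr j reports finding below.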

Sorry I can't be of much more help but perhaps this will spur some thoughts for you.

Danno

mr j

#4
Danno,
you're spot on. I didn't want to give the whole story yet, as it could've become more rambling than relevant detail   :)

The eList is 'technically' updated regularly to ensure newly needed items appear in the app automatically. However, I don't recall that ever happening, i.e. there are practically never any changes to the eList. The eList item IDs don't match those in the error message, and there are more object numbers in the error than there are items in the eList. Compared against all the D-Lists in the app, the objects in the error match only some of the item IDs in a time dimension, which is in fact the only D-List in the app that is never updated...

The eList is used in an imported access table along with three other D-Lists. (This gives a warning about a possibly too large access table when updated manually. In the earlier version, 8.4.1, this did not happen; in 10.1.1 FP2 it does.) All the D-Lists and the eList are updated every time (apart from the mentioned time dimension) to ensure new entries are there. The app is synchronized, access tables are imported based on these updated D-Lists, and data is reloaded, which in my mind should ensure there are no issues. BUT:

After these updates run, we see in Contributor one D-List missing all its detail-level items (hierarchically, bottom to top: Products - Product Group - Products Total). The top total is shown, with the summary calculated from the detail-level numbers, but the detail levels are not in the grid; the only visible row is the total. A publish from this does not show detail-level data either. The base level for the access table is NO DATA; the import defines what is read, write, etc.

This update runs every Saturday. In the beginning the Products detail level disappeared only every 2-3 weeks; now it's every Saturday. There's another similar, only slightly smaller app which didn't have this issue at first, but it's starting to lose details regularly too.
Could this mean some table in the app database is getting filled up with nonsense data the GTP can't handle, or something of the sort? I've noticed the error messages
- "tried to pop engage but top of stack is engage"
- "context stack is empty"
in PlanningErrorLog, but right now I can't recall exactly how I reproduced them; maybe by executing a CAC D-Link, and even when executing an admin link from a package to the DEV app to give it a try.

Again, this did NOT happen in 8.4.1, but after the migration to 10.1.1 FP2 it's giving me a headache. It's exactly the same set of CAC & Analyst macros, data movement tasks with Data Manager, etc., but in 10 it just doesn't work.


See, I warned you   ;)




EDIT:
Further rambling... I've discovered that there was data in the app's import staging tables (im_cubename). The flow of macros is such that GTP runs prior to the access table update step. Now, as the manual says:
"If you create an access table after you have imported data, the entire import queue is deleted. Making changes to access tables, e.List items or saved selections that affect the pattern of no data in a cube can also result in data loss."
So, if there was data in the import table, it was imported with the GTP, right? According to the manual excerpt above, this could cause an issue when updating the access table and then running GTP again? I think I know why there was data.


Now, another question is why there is still data in the import table after the GTP. Should GTP clear the import table or not?

Yet another question related to the same. With another app:
1. data is delivered to the import table with DB tools
2. the CAC macro "Prepare Import" is run
3. then GTP
and yet the data remained in the import table in question. Shouldn't it be cleared at least by "Prepare Import", if not by the GTP?
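Whether the staging tables were emptied can be checked with a plain row count per im_ table. A sketch using sqlite3 only to keep it self-contained; a real Planning datastore would typically be SQL Server or DB2, and the table name im_sales is a placeholder, but the query shape is the same:

```python
import sqlite3

def leftover_rows(conn, import_tables):
    """Count rows left in each import staging table, e.g. after a GTP run."""
    counts = {}
    cur = conn.cursor()
    for table in import_tables:
        # table names come from our own fixed list, not user input,
        # so plain string formatting is acceptable here
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        counts[table] = cur.fetchone()[0]
    return counts

# Demo with an in-memory database standing in for the app datastore
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE im_sales (item_id INTEGER, value REAL)")
conn.executemany("INSERT INTO im_sales VALUES (?, ?)", [(1, 10.0), (2, 20.0)])
print(leftover_rows(conn, ["im_sales"]))  # {'im_sales': 2}
```

Running such a count before and after each macro step would show exactly which step (import, Prepare Import, or GTP) leaves data behind.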


I have the final question in my mind already...

ykud

Import tables are cleaned only by the next import (there's a "clear existing data" setting); GTPs just convert them to the internal XML format.

Danno

As ykud said, the next import clears the import table. That explains why you are seeing data in the table after your process. The two errors you list I have never experienced, but that is because we keep our eList static once the application goes into the production environment. However, one thing that comes to mind is that there are known issues around synchronization macros in 10.1. I don't know whether all of these were addressed in FP2. We are stuck at 10.1 until management decides to upgrade to 10.2.

Having said that, I am curious about the size of the Products D-List. I ask because I had an issue where we lost details on a larger D-List (seemingly at random) and I had to restructure that D-List to solve the problem. This was on 10.1. Perhaps you could share more on this and we can move closer to a resolution, or at least an understanding of what is happening and why.


mr j

Hi,

Thanks for the input while I was on a sudden winter vacation..   ;)

So the next import clears the im_ tables, including the import queue table?

I've now used the new CAC GTP macro's force reconcile option to really clean up the tables, and I'm waiting to see whether the next change in the Products D-List goes through or not. I've also removed the step that automatically updates the eList for now.

The synchronization macros were an issue during the migration process, as the rep of a certain software provider did not install all the fix packs right away... once FP2 was installed it has worked, at least until proven otherwise. That does suggest the problem at hand has something to do with synchronization, but I can't really tell, as you could read from my previous post!

If there's documentation covering the synchronization issues in more detail I'd be interested to know where to find it.

The size of the Products D-List is currently 194 items. Other application details:
Number of cubes: 2
Total number of cells per eList slice: 1,614,292
Total number of D-List items in application: 479

The access table which includes the Products D-List (194 items) also includes the following D-Lists:
"A": 10 items
"B": 2 items
"C": 52 items
eList: 7 items
and there are currently 268 lines. It affects only one cube of the two.
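For a rough sense of why the "possibly too large access table" warning might appear, the full cross-product the table can address can be computed from the sizes above. Whether the warning actually keys off this product is my assumption, not documented behaviour:

```python
from math import prod

# D-List sizes from the access table described above
dlist_sizes = {"Products": 194, "A": 10, "B": 2, "C": 52, "eList": 7}

combinations = prod(dlist_sizes.values())
print(f"{combinations:,}")  # 1,412,320 addressable combinations, vs 268 rows in the table
```

So the 268 access table rows describe access over roughly 1.4 million potential cell combinations, which is the kind of expansion a size warning would plausibly be flagging.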

I'm not sure if this covers the details you meant, Danno?

Danno

I am not sure what to say regarding your D-List size of 194 items. Ours, when we had problems, was slightly less than double that (350); it is now down to 300. The one detail I find interesting is the number of cells per slice. That seems a little high to me for a 2-cube application; for example, ours is 18 cubes, with 119,304 cells per slice and 672 D-List items in total. I don't think you are hitting any hard limits, but you may want to check the library and see what the recommendations are. You may be getting close, which leads me to think you may want to restructure that one cube a bit more. I will need to think on this some more. Right now I am seconded to another project building some servers, so once I get past that I can look at this in more detail.

mr j

Yes, it's not the 194 items I'd be worried about, and I'm aware of the fairly large number of cells per slice. I actually forgot to include one dimension, with 52 items, in the list   ::)   now edited.

What puzzles me most is that these issues never occurred in the earlier environment on version 8.4.1. After the migration to 10 and new servers there are all these new problems. What I'm trying to say is that, with more resources, the number of cells per slice should not be a problem, as it's the same as in the old environment and version. Another question is whether, in version 10, the recommended limits have suddenly become actual limits, which in the past they really were not.

However, I'm looking at getting rid of the mentioned dimension "B", with its two items only. That alone would reduce the number of cells quite a bit. There are also other development ideas that would make the model smaller, but as they require quite a few changes to background jobs, I'm reluctant to start on that while something that has worked suddenly does not.

Thanks for the input, Danno!

mr j

In case someone is interested,

I think this has been resolved, sort of. The original problem was that, after a set of scheduled update macros had run, sometimes all the detail rows were missing from the Contributor grid and only the calculated total was shown. There was no clear connection between any of the error messages and the symptom, hence all the rambling earlier in the topic; even the topic title itself is somewhat misleading.

However,
I noticed that, for some reason or another, the application's data import and import queue tables were not emptied, even though I had understood they should have been by the existing macro steps. This was never a problem in version 8.4.1, only after the migration to version 10.1.1.

The set of macros contains steps to synchronize the application and update access tables. If there was a change in these, after which the application definition no longer matched the obsolete data in the import tables, this apparently caused the detail rows to disappear during GTP.
In version 10 the CAC GTP macro has an option to force a full reconcile, which clears the data import tables. After this was applied, it finally seems to work.

So, a fairly simple fix to a huge issue. But it doesn't explain why macros that were fine in version 8 no longer work in 10 as they did; instead you have to figure out that you now need to tick the force reconcile box.

Danno

Great feedback on the issue. That is very useful information and I have made a note of it. Sorry I wasn't more help; I don't actually get back to Cognos land until next week at the earliest. I have been delving into OBIEE land and that has been quite a ride :)