** Dates to be displayed based on the area **

Started by Good Friend, 14 Sep 2015 03:05:21 PM

Good Friend

Hello all,

I have a requirement to show begin and end dates in a report in different time zones, based on the distribution center selected on the prompt page. The data is stored in Eastern Time in the database by default, and we have around 15 distribution centers across different US time zones. Is there any way to approach this at the framework level, so the data displays in the local time zone of whichever center is selected in the prompt?

Please let me know your thoughts; any help is appreciated.

cognostechie

Conceptually, you can change the data with a simple case statement for each distribution center, regardless of whether there is a prompt in the report. Since the data is stored in EST, something like this:

Case
   When <Distribution Center> = 1 then <Date minus 1 hour>     ** if the distribution center is in the Mid-West
   When <Distribution Center> = 2 then <Date minus 2 hours>    ** if the distribution center is in Denver
   When <Distribution Center> = 3 then <Date minus 3 hours>    ** if the distribution center is in California
   Else NULL
End

Good Friend

Cognostechie, thanks for your reply. I made a mistake in my previous post that needs correcting: we actually have 113 distribution centers in the database. So what would be the best approach for this instead of a case statement? If you have a solution, could you please elaborate step by step?

Good Friend

Also, for the above case I tried writing a case statement and failed to get the result:

(datetime - 1) (to get the result in CST, as the datetime is in EST and in the format AUG 26, 2015 11:31:03 PM)
(datetime - 3) (to get the result in PST, as the datetime is in EST and in the format AUG 26, 2015 11:31:03 PM)

Is there any way to get the desired result for the above?

cognostechie

The code I mentioned was pseudocode. There are functions in FM which can do this. Without knowing the details of the field and table names, I cannot provide the exact code. I tried this and it works:

_add_days( [Business View].[Date].[Date], -3/24 )

This will subtract 3 hours from the datetime, so you can try something like this:

Case
    When <distribution center> = 1 then _add_days( [Business View].[Date].[Date], -3/24 )
    Else NULL
End

Based on your requirements, a solution in FM would not be the best one. I would prefer to do this in the database, by adding a table with one row for every distribution center and a field recording the time difference between EST and the center's local time.

Ex:

Distribution Center        Time_Lag
1                             -3
2                             -2
3                             -1
4                              0
5                              1

Using this in a parameter map and then substituting the value into the query item would be a better solution (see the sketch below). I would not create local-time date fields in each fact table: a fact table can have multiple dates, and that much added data would slow down every query reading from those tables.
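
A minimal sketch of that lookup-table idea, assuming SQL Server; the table and column names (DIST_CENTER_TZ, ORDER_FACT, ORDER_DATETIME) are all hypothetical:

    -- Hypothetical lookup table: one row per center, offset from EST in hours
    CREATE TABLE DIST_CENTER_TZ (
        DIST_CENTER_ID   INT PRIMARY KEY,
        TIME_LAG_HOURS   INT NOT NULL    -- e.g. -3 for a Pacific-time center
    );

    -- Shift the stored EST datetime per center with one join
    -- instead of a 113-branch case statement
    SELECT f.ORDER_ID,
           DATEADD(HOUR, tz.TIME_LAG_HOURS, f.ORDER_DATETIME) AS LOCAL_DATETIME
    FROM   ORDER_FACT f
    JOIN   DIST_CENTER_TZ tz
           ON tz.DIST_CENTER_ID = f.DIST_CENTER_ID;

Alternatively, the Time_Lag values could be loaded into an FM parameter map (say, TimeLagMap) and substituted into the query item with a macro along the lines of _add_hours( [Business View].[Date].[Date], #$TimeLagMap{prompt('pCenter','string')}# ). Again, just a sketch: the map and prompt names are made up.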

Good Friend

Appreciate your help and time. I did check, and as of now we are populating data for only 13 centers. I tried the date logic you gave me, When <distribution center> = 1 then _add_days( [Business View].[Date].[Date], -3/24 ), but it is still not giving me the correct result. The date field has a datetime data type.


cognostechie

I don't know how else to help you. You have to understand that the expression I gave you is to show you what you can do, not to be used as is. You will have to replace <distribution center> = 1 with your actual query item and code, and replace [Business View].[Date].[Date] with your actual query item. I tried the _add_days expression in a query item and it works. The data type of the field has to be DateTime, not Date.

BigChris

I tried the same calculation with the 3/24 element and it didn't work for me either. The help info suggests that the second argument needs to be an integer. Cognos, and in fact most databases I've used, is fairly difficult to manipulate times with... dates are fine, though.

You don't say which database you're using, but if you're on SQL Server you might be able to use the DATEADD function.

http://www.w3schools.com/sql/func_dateadd.asp

I haven't tested it myself, but it's probably worth investigating.
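
If it does work, a minimal sketch might look like this (assuming SQL Server; the table and column names are placeholders):

    -- Subtract three hours from a stored EST datetime (EST -> PST)
    SELECT DATEADD(HOUR, -3, ORDER_DATETIME) AS LOCAL_DATETIME
    FROM   ORDER_FACT;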

Lynn

How about the _add_hours function?

The last example below uses 1/60 and mentions that only some database technologies support that. There is a similar comment under the _add_days function, so perhaps the lack of success with 3/24 in the earlier posts is related to that.



_add_hours ( time_expression, integer_expression )
Returns the time or datetime, depending on the format of "time_expression", that results from adding "integer_expression" hours to "time_expression".

Example: _add_hours ( 13:59:00, 1 )
Result: 14:59:00

Example: _add_hours ( 2002-04-30 12:10:10.000, 1 )
Result: 2002-04-30 13:10:10.000

Example: _add_hours ( 2002-04-30 00:00:00.000, 1/60 )
Note that the second argument is not a whole number. This is supported by some database technologies and increments the time portion.
Result: 2002-04-30 00:01:00.000
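
Putting this together with the earlier pseudocode, a sketch of the expression might look like the following. The query item names are placeholders carried over from the earlier posts, and the whole-number second argument sidesteps the fractional-argument issue:

Case
    When [Distribution Center] = 1 then _add_hours( [Business View].[Date].[Date], -1 )  // Mid-West (CST)
    When [Distribution Center] = 2 then _add_hours( [Business View].[Date].[Date], -2 )  // Denver (MST)
    When [Distribution Center] = 3 then _add_hours( [Business View].[Date].[Date], -3 )  // California (PST)
    Else NULL
End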



BigChris

Interestingly, I don't have an _add_hours function available to me. Do you know if it was added after 10.2, or whether it's missing because of the version of SQL Server I'm on?

Lynn

I see it in both 10.2.1 and 10.2.2, although I don't think it is new. It is in the Business Date/Time Functions folder on the functions tab, not in the vendor-specific function area, right there alongside _add_days, _add_minutes, _add_months, _add_seconds and _add_years.

BigChris

I'm going to need to do some more digging... I'm on 10.2 and I've only got _add_days, _add_months and _add_years.

cognostechie

I have SQL Server 2012 and the -3/24 is working, and yes, it may not work for some databases.

Dates in SQL Server and Oracle are stored in character format even if the data type is called Date or Datetime. That's why indexes on these fields are slow: they are technically stored as characters. That's also the reason the date keys in any table are always integers, as an integer index is a lot faster than a character index, so the joins will execute faster.

If the functions in FM don't work, you can always use the cast and substring functions to retrieve the hour and minute portions, cast them to numeric, subtract the required time, and then cast the result back to a date. You will have to be careful, though: the hours and minutes do not increment continuously but wrap at 24 and 60, so you would need a case statement to calculate the difference properly when they roll over.
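
As a rough illustration of the substring idea (SQL Server syntax; table and column names are placeholders, and this only extracts the hour rather than implementing the full rollover logic):

    -- Style 121 renders the datetime as 'yyyy-mm-dd hh:mi:ss.mmm',
    -- so characters 12-13 are the hour portion
    SELECT CAST(SUBSTRING(CONVERT(VARCHAR(23), ORDER_DATETIME, 121), 12, 2) AS INT) AS HOUR_PART
    FROM   ORDER_FACT;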

bdbits

Not to digress from the original question, but SQL Server certainly does not store datetimes as characters. They are 8-byte values: 4 bytes for the date and 4 for the time. The date is a numeric offset from 1/1/1900, and the time I believe is milliseconds (I could be off on that). Think about how much overhead there would be for any kind of date selection or calculation if they were strings, and you can see why they are stored numerically. We have some tables with dates as keys and they are plenty fast, indexed or not, and are heavily used in joins (for example, time dims).

I do not use Oracle enough to say, but I am willing to bet it stores them in a similar fashion as has most any full-featured relational database I have ever used.

cognostechie

Sorry, my bad. I should have done some research before posting. My interpretation stems from what my DBA told me: 'date indexes are very heavy in Oracle and SQL Server; they behave like character indexes'. Moreover, when we write SQL in SQL Server Management Studio, it accepts the date as a string, like:

where Date between '2015-07-01 00:00:00.000' and '2015-07-21 00:00:00.000'

so I thought it stored it as characters. You are right: it stores it as an 8-byte value and merely allows the user interface to treat it as a character, but that also means it spends time converting the string to the stored value before it can fetch the data.

However, dates used as keys will definitely be slower than integers used as keys. It might work in your environment, but that 'good practice' is relative rather than factual, depending on how much data you have. The fact that it works well without indexes suggests the data is probably small; some companies add 10 million records to their fact tables every day. Smallint, which ranges from -32768 to 32767, would be sufficient in most cases for a date dimension even if you keep 10 years of data, and INT, which ranges from -2^31 to 2^31 - 1, would be sufficient for datetime. Smallint needs 2 bytes as opposed to the 4 required by date, and INT needs 4 bytes as opposed to the 8 required by datetime. From what I have seen so far, most architects design a Date/Datetime dimension with integers as keys, and this is also what Ralph Kimball suggests. Maybe I am missing something here?

bdbits

No, you are not missing anything; I mainly wanted to clarify that date storage is not character-based, which would indeed add a whole lot of overhead. In many, maybe most, of our data warehouses here we use identity columns (db-generated integer surrogate keys) or, lately, the date itself stored as an integer. This is mostly for ease of querying, so those of us querying the raw data don't have to join to the time dim all the time. :)  For example, a date key might have an integer value of 20150916.
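
On SQL Server, something like this could derive that kind of key (names are placeholders, just a sketch):

    -- Style 112 renders the date as 'yyyymmdd', e.g. 2015-09-16 -> '20150916'
    SELECT CAST(CONVERT(VARCHAR(8), ORDER_DATETIME, 112) AS INT) AS DATE_KEY
    FROM   ORDER_FACT;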

Storage-wise, perhaps to my detriment, I don't get too excited about a few bytes anymore. In the best case of 2 versus 8 bytes, even with a billion rows in a fact table you would only save 6 GB, and nowadays that is not a lot of storage. Our largest fact tables only get into the hundreds of millions of rows, so it probably matters less to me than it would to, say, a telephone company collecting call data, or someone collecting telemetry from thousands of instrumentation points.

On the other hand, I do wonder. As discussed, dates are actually stored as numbers, and the joins would likely be index scans, which is pretty efficient. Realistically, most time dimension tables are small enough and used frequently enough that the whole thing probably ends up in memory anyway. So even though I do use surrogate keys, and logically think they make sense, the impact might not be as significant as we sometimes think. But without any testing, and given best practice (as you say, Mr. Kimball himself), I could be dead wrong. :)

cognostechie

My assumption is that if fewer bytes are required, the joins will be more efficient. Disk space is not a concern because, as you said, storage is hardly a problem these days. Even if the date dimension is always small, it's the fact tables that matter for the join, and both together may not fit into memory. That's another reason for having integer values for date keys, apart from easier querying of the data by the developers.