[GLLUG] Is there any sane reason to do this? (Database question.)

James Courtier-Dutton james.dutton at gmail.com
Tue Oct 7 19:23:21 UTC 2014


On 7 October 2014 20:05, John Winters <john at sinodun.org.uk> wrote:
> I'm extracting data from a legacy (sort of - still actively sold) system
> which has the following table:
>
> (CSV dump of the table)
>
>
> Days,DateIdent
> 2000-01-03 00:00:00.000,1
> 2000-01-05 00:00:00.000,2
> 2000-01-06 00:00:00.000,3
> 2000-01-07 00:00:00.000,4
> 2000-01-08 00:00:00.000,5
> 2000-01-10 00:00:00.000,6
> 2000-01-11 00:00:00.000,7
>
> ...
>
> 2031-04-29 00:00:00.000,10382
> 2031-04-30 00:00:00.000,10383
> 2031-05-01 00:00:00.000,10384
> 2031-05-02 00:00:00.000,10385
> 2031-05-03 00:00:00.000,10386
> 2031-05-04 00:00:00.000,10387
>
> Then whenever another table in the database would normally have a column
> of type "Date", it instead has a column of type "Integer", which is used
> to index into this table.
>
> I'm no database guru, but this seems to me to be bordering on the
> insane.  Apart from anything else, it means you have to error check
> every attempt to retrieve a record with a date in it, in case the date
> integer does not have a corresponding row in the Date table.
>
> Is there a good design reason to do this?
>

There is no good design reason for this.
My advice would be to leave the integers as they are and work out a
formula to do the conversion.
For example:
(integer + offset) * 86400 = unix epoch time
Then use standard functions to convert from Unix epoch time to a
display-friendly date, and run the conversion in reverse to get back
to the integers. Verify that the conversion works both ways, and then
just keep the integers and drop the dates table.
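A minimal sketch of that round trip in Python. The OFFSET value here is
hypothetical, derived only from the first sample row above (DateIdent 1 ->
2000-01-03, which is day 10959 of the Unix epoch); the real value, and
whether a linear formula holds at all, is exactly what the both-ways
verification is meant to establish:

```python
from datetime import datetime, timezone

# Hypothetical offset, assuming DateIdent 1 maps to 2000-01-03
# (day 10959 since 1970-01-01), so OFFSET = 10959 - 1 = 10958.
OFFSET = 10958

def ident_to_epoch(ident):
    """(integer + offset) * 86400 -> Unix epoch seconds."""
    return (ident + OFFSET) * 86400

def epoch_to_ident(epoch):
    """Reverse conversion, for the round-trip check."""
    return epoch // 86400 - OFFSET

def verify(rows):
    """rows: iterable of (days_string, ident) pairs dumped from the
    Days table.  Returns True only if the formula reproduces every
    row in both directions."""
    for days_str, ident in rows:
        dt = datetime.strptime(days_str, "%Y-%m-%d %H:%M:%S.%f")
        epoch = int(dt.replace(tzinfo=timezone.utc).timestamp())
        if ident_to_epoch(ident) != epoch or epoch_to_ident(epoch) != ident:
            return False
    return True
```

If verify() fails over the full table (as it will if the table skips
dates, like the missing 2000-01-04 in the sample above), then the mapping
is not a pure formula and the lookup table is still needed.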

James
