Not sure where best to post this because I am not sure where the error is occurring... but...
I'm taking a DateTime in C# and saving it to SQL Server 2005 as a DateTime. When it is loaded back out of the DB into C# (as a DateTime), I'm seeing differences ranging from a few microseconds up to (so far) 2 milliseconds.
The process I'm using takes a DateTime and saves it to SQL Server. This DateTime is then converted with ToBinary().ToString() and saved temporarily in C#. Then I re-load the DateTime that was saved in SQL Server and convert it with ToBinary().ToString(). The two strings are definitely different...
Are there known rounding issues between SQL Server and C#, or is there something going on in my code? I would be a little surprised if SQL Server could not store a C# DateTime without rounding errors... or at least not 2 milliseconds of rounding error.
I know 2 or 3 milliseconds doesn't sound like much time, but right now it has cost me many hours of my time!
I'm not sure if it's related, but when you're in precise-date territory, you have a problem: SQL Server is nowhere near as precise as .NET when it comes to dates, and it will round them (to a weird increment, too... 1/300th of a second, as opposed to .NET, which goes far finer than that).
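You can actually model the drift without touching SQL Server at all. Here's a quick sketch (Python for brevity; the 1/300-second snapping is my mental model of the storage format, not actual SQL Server code):

```python
from datetime import datetime, timedelta

def to_sql_datetime(dt: datetime) -> datetime:
    # Model of SQL Server 2005 DATETIME storage: the time-of-day is
    # kept as 1/300-second ticks, so anything finer is rounded away.
    midnight = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    ticks = round((dt - midnight).total_seconds() * 300)
    return midnight + timedelta(seconds=ticks / 300)

original = datetime(2008, 1, 1, 12, 12, 12, 8000)   # 12:12:12.008
reloaded = to_sql_datetime(original)
drift = abs((reloaded - original).total_seconds())
# drift here is about 1.3 ms; the worst case is half a tick, ~1.7 ms,
# which lines up with the "up to 2 milliseconds" you're seeing.
```

The worst-case error of rounding to the nearest 1/300-second tick is half a tick, so any round trip can be off by up to roughly 1.7 ms.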
It has been one of the biggest issues with SQL Server for years. As of SQL Server 2008, there's a datetime2 type which allows for very high precision (down to 100 ns). With 2005 and down, though, you're out of luck.
So you said "I would be a little surprised that SQL Server cannot store a C# DateTime without rounding errors"... well, be surprised, that's exactly what it is :) At least it's finally fixed in 2008.
Here, it's not the MSDN article, but it's good enough for now:
.NET is precise to 100 nanosecond increments (ticks), while SQL Server is precise in rounded increments of .000, .003, or .007 seconds
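That .000/.003/.007 pattern falls straight out of the 1/300-second grid. A quick arithmetic check (plain Python, not SQL Server itself):

```python
def sql_display_ms(ms: int) -> int:
    # Snap a millisecond value to the nearest 1/300-second tick, then
    # convert back to the millisecond value that would be displayed.
    return round(round(ms * 300 / 1000) * 1000 / 300)

# Every representable millisecond value ends in 0, 3, or 7:
endings = sorted({sql_display_ms(ms) % 10 for ms in range(1000)})
# endings == [0, 3, 7]; e.g. sql_display_ms(8) == 7
```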
If you think it's ridiculous and it makes SQL Server 2000/2005 a joke (when it comes to dates... it's great for other stuff), you'd be right =P Again, thank gods for 2008.
If you need more precision (at the cost of not being able to query the dates as easily from SQL), you can store the binary representation in SQL directly... and if all you need for queries is loose ranges, you could store both formats in your table.
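To sketch that binary-storage idea (Python for illustration; in C# the equivalent would be persisting DateTime.Ticks, or the ToBinary() value, into a BIGINT column — the helper names here are made up):

```python
from datetime import datetime, timedelta

DOTNET_EPOCH = datetime(1, 1, 1)

def to_ticks(dt: datetime) -> int:
    # .NET-style ticks: 100 ns units since 0001-01-01, the same number
    # DateTime.Ticks exposes; a 64-bit integer column holds it exactly.
    delta = dt - DOTNET_EPOCH
    us = (delta.days * 86_400_000_000
          + delta.seconds * 1_000_000
          + delta.microseconds)
    return us * 10

def from_ticks(ticks: int) -> datetime:
    return DOTNET_EPOCH + timedelta(microseconds=ticks // 10)

dt = datetime(2008, 1, 1, 12, 12, 12, 8123)
assert from_ticks(to_ticks(dt)) == dt   # lossless round trip
```

Range queries then become plain integer comparisons on the ticks column, which is what makes the store-both-formats approach handy when you also want human-readable dates.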
Yes I agree, that makes 2005 a joke. I could see perhaps a few nanoseconds of difference for internal storage conversions or some other archaic reason, but to lose milliseconds... incredible!
And thanks for the quick response. I've been doing pass after pass through my code trying to identify a coding error but now I can stop... and perhaps get some sleep!
Looks like I'll be taking your advice and storing it in a non-datetime format in the database.
Thank you Shados!
No problem :) I've never actually had the issue with it myself... it's pure luck that I even knew about it... one of my friends is a database developer (that is, he's not a DBA, doesn't administer databases, and doesn't write any application code... he just writes SQL 40 hours a week =P), and that particular thing drove him totally insane, so we had a few discussions about it...
Personally, I wouldn't even care about the lack of precision if it were tenth-of-a-second precise or something... but no, it's 1/300th-of-a-second precise. -THAT- makes it a pain in the ***. Having a date like 01/01/2008 12:12:12.008 doesn't become .000, it doesn't become 0.1, it doesn't even become 0.01... it becomes 0.007. That goes against all expectations of 99.9999999% of developers...