Weird Inconsistencies  
Author Message
LoveMeSomeCode





PostPosted: Visual C# General, Weird Inconsistencies

I recently upgraded a multi-project solution from 2003 to 2005 and now I'm getting weird and inconsistent behavior in very basic sections of code.

In one area I have a decimal which is equal to 0. I call ToString() on it and get "0.00000" instead of "0" like I ought to. In other sections of the application I get "0", in Snippet Compiler I get "0", and in .NET 1.1 I always get "0".

In another area I create a TextWriterTraceListener with "new TextWriterTraceListener(fileName);", which should create a new file. It does work in my sample 2.0 apps, but in several places in my big project it just does nothing. No exception, no complaints, just no file.

Anyone have any ideas why this is happening? Are there some settings I need to adjust in the Project or Solution settings? The original file was in source control (Surround SCM), but I detached it and deleted all the vspscc files before I used the upgrade wizard.

Any help would be great,

Nick


TaylorMichaelL





PostPosted: Visual C# General, Weird Inconsistencies

It's possible the formatting rules changed a little between releases. If so, it happened inside the CLR, since the managed code in both versions uses the current culture settings as a baseline. Ultimately though, and I know people disagree here, ToString was not designed as a general-purpose display method. If you want the number to be formatted in a particular manner you should use one of the formatting overloads to format the number exactly as it should appear. Otherwise you are depending upon the current culture and/or the implementation of the default formatting.
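For example, something along these lines pins the output down regardless of the literal's scale (the exact format strings here are just illustrative):

```csharp
using System;
using System.Globalization;

class FormatDemo
{
    static void Main()
    {
        decimal d = 0.00000m;

        // Default ToString reflects the value's stored scale and current culture.
        Console.WriteLine(d.ToString());

        // Explicit format strings make the output deterministic.
        Console.WriteLine(d.ToString("0"));                                  // "0"
        Console.WriteLine(d.ToString("F2", CultureInfo.InvariantCulture));   // "0.00"
        Console.WriteLine(d.ToString("0.##", CultureInfo.InvariantCulture)); // "0"
    }
}
```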

As for the TextWriterTraceListener class, I haven't had any problems using it. I would therefore wager that whatever condition is required to generate a trace message isn't being met. In v2.0 they added several new features, so not only are trace messages filterable, but they also have configurable sources and "levels", so it is possible that your trace statement would never be generated. You can test this by seeing if you can get a trace using the standard debug listener with the same settings.
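To see how the v2.0 source/level filtering can silently swallow a message, consider a sketch like this (the source and switch names are made up):

```csharp
using System;
using System.Diagnostics;

class TraceLevelDemo
{
    static void Main()
    {
        TraceSource source = new TraceSource("DemoSource");
        source.Switch = new SourceSwitch("DemoSwitch");

        // With the switch set to Warning, an Information-level message
        // never reaches any listener -- no output, no file, no error.
        source.Switch.Level = SourceLevels.Warning;
        source.TraceEvent(TraceEventType.Information, 1, "dropped silently");

        // Raise the level and the same call goes through.
        source.Switch.Level = SourceLevels.Information;
        source.TraceEvent(TraceEventType.Information, 2, "this one is emitted");
        source.Flush();
    }
}
```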

Michael Taylor - 12/5/06


 
 
LoveMeSomeCode





PostPosted: Visual C# General, Weird Inconsistencies

Well, the TextWriter issue seems to be just a matter of when the file gets created. In 1.1 I could call just the constructor and the file would be created; in 2.0 I have to actually write/flush before the file gets created. Not sure why it's different, but I can deal with that.
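A quick way to see that deferred creation on 2.0 (the temp-file name here is arbitrary):

```csharp
using System;
using System.Diagnostics;
using System.IO;

class TraceFileDemo
{
    static void Main()
    {
        string fileName = Path.Combine(Path.GetTempPath(), "trace_demo.log");
        if (File.Exists(fileName)) File.Delete(fileName);

        // In 2.0 the constructor defers opening the underlying stream,
        // so no file appears yet.
        TextWriterTraceListener listener = new TextWriterTraceListener(fileName);
        Console.WriteLine(File.Exists(fileName)); // False

        // Writing and flushing forces the stream (and the file) into existence.
        listener.WriteLine("first message");
        listener.Flush();
        listener.Close();
        Console.WriteLine(File.Exists(fileName)); // True
    }
}
```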

As for the ToString() thing, I wasn't being specific enough. I have a variable that's a decimal, and I assign it the value of 0. Then I say d1.ToString() and I get back a string 7 characters long. The exact same chunk of code in 1.1 produces a string 1 character long, "0".

I pulled out ILDASM and looked at the IL code and found the culprit. They added a constructor to the decimal class in 2.0, which I couldn't care less about. But it looks like they changed the primitive constructor call.

This code:

decimal d1 = 0.00000m;

produces this IL in 2003:

IL_0000: ldc.i4.0
IL_0001: newobj instance void [mscorlib]System.Decimal::.ctor(int32)

and this IL in 2005:

IL_0000: nop
IL_0001: ldc.i4.0
IL_0002: ldc.i4.0
IL_0003: ldc.i4.0
IL_0004: ldc.i4.0
IL_0005: ldc.i4.5
IL_0006: newobj instance void [mscorlib]System.Decimal::.ctor(int32,
int32,
int32,
bool,
uint8)

Now I'm not an IL expert, but it sure looks like the constructor call that the compiler is generating for my primitive type has changed.

So far I don't know if this affects just decimal or other primitives as well, but I'd like to know why this is, and hopefully there's an easy solution, because I'm not changing 50,000 instances of decimal d = ... to use the new keyword.


 
 
TaylorMichaelL





PostPosted: Visual C# General, Weird Inconsistencies

The 5-parameter constructor was in previous versions of .NET as well. The change is that the compiler in previous versions would convert 0.00000M to 0 before assigning it to the decimal. However, this was technically a loss of information, as the scale was being dropped.

In VS2005 if you specify a value with a scale then it'll use the 5 parameter overload so the scale information is not lost. If you specify 0M instead then it will use the one parameter Int32 version as it did in VS2003.
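A minimal illustration of the difference (same value, different stored scale):

```csharp
using System;

class ScaleDemo
{
    static void Main()
    {
        decimal plain = 0m;        // compiled via the one-parameter Int32 constructor
        decimal scaled = 0.00000m; // compiled via the 5-parameter constructor, scale 5

        Console.WriteLine(plain.ToString());  // "0"
        Console.WriteLine(scaled.ToString()); // "0.00000"

        // The values still compare equal; only the scale differs.
        Console.WriteLine(plain == scaled);   // True
    }
}
```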

Michael Taylor - 12/5/06


 
 
LoveMeSomeCode





PostPosted: Visual C# General, Weird Inconsistencies

OK, I see what you're saying, and I see that, yes, it is better that it preserve the scale of the variable, so the 2.0 way is probably better. I'll look into some Trim statements that might save my project upgrade.
But I still have some IL that doesn't make sense. In 2003 I changed the code to read like this:

decimal d1 = new decimal(0, 0, 0, false, 5);

and it produces IL very similar to the stuff from 2005:

IL_0000: ldloca.s d1
IL_0002: ldc.i4.0
IL_0003: ldc.i4.0
IL_0004: ldc.i4.0
IL_0005: ldc.i4.0
IL_0006: ldc.i4.5
IL_0007: call instance void [mscorlib]System.Decimal::.ctor(int32,
int32,
int32,
bool,
uint8)

but it still produces a 1-character string, "0".

I thought the compiler was reducing my 0.00000m to a 0 before the assignment, thereby using the Int32 constructor. But here it seems like the 5-arg constructor is also truncating the decimals before instantiating the variable. Is that what's going on, or is it the ToString() override in 1.1 that truncates the scale before converting it?

Also, is there no compiler switch I can use to get it to preserve the primitive behavior of 1.1? Even though it's wrong, it would be nice to have the option.

Thanks for the help,

Nick

 
 
TaylorMichaelL





PostPosted: Visual C# General, Weird Inconsistencies

No truncation is done with the 5-parameter overload in either version. Therefore we can only assume that the difference lies in the ToString implementation. Comparing the two versions, they effectively do the same thing and ultimately call the same CLR routine with the same parameters. Looking at the implementation details of the CLR method that ultimately does the conversion, however, things are dramatically different.

I feel like I know C++ really well, but to be honest, scanning through the CLR source can be confusing at best due to the tricks and macros used. Therefore I can only give a partial picture of what appears to be going on. In v1.x, converting a decimal to a string ultimately resulted in the decimal being converted to a float and then to a string, with the current culture information thrown in to format it properly. In v2.0, however, it is converted in place rather than being converted to a float first. Presumably this is what is causing the differences you are seeing. However, I can only guess because, like I said, the CLR code can be difficult to understand.

There is no compiler switch to toggle back to the previous behavior as far as I'm aware. However, if you use 0M instead of 0.00000M you'll get the behavior you want. Tedious to make the changes, but find/replace could speed it up quite a bit.
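If editing the literals is out of the question, another option is to normalize at display time. A sketch using the "G29" format (29 being decimal's maximum precision), which routes through the rounding path and so drops trailing zeros:

```csharp
using System;

class NormalizeDemo
{
    static void Main()
    {
        decimal d = 0.00000m;

        // Default ToString keeps the stored scale.
        Console.WriteLine(d.ToString());       // "0.00000"

        // "G29" strips trailing zeros without changing the value.
        Console.WriteLine(d.ToString("G29"));  // "0"

        decimal e = 1.2300m;
        Console.WriteLine(e.ToString("G29")); // "1.23"
    }
}
```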

Michael Taylor - 12/5/06