View Full Version : Floated calculation error in Moz
It is well known that, when multiplying or dividing floating-point numbers, the decimal (input) -> binary (CPU) -> decimal (output) translation may introduce small errors (as some decimal fractions cannot be represented in binary as finite numbers).
But I am amazed that the error is different in IE and in Moz.
var v1=0.8;
var v2=3;
var res = v1*v2;
res will be
2.4000000000000003 in IE
but
2.4000000000000004 in Moz;
This makes my life harder, as I intended to correct it using code like:
var v1=0.8;
var v2=3;
var acc = 16; // rounding accuracy in decimal digits: 1 for 0.x, 2 for 0.xx, etc.
var pwr = Math.pow(10,acc);
var res = ((v1*pwr)*(v2*pwr)/Math.pow(pwr,2)).toFixed(acc);
IE returns now correctly
2.4000000000000000
while Moz changes the error to
2.3999999999999999
Is it a Moz bug? Is my code incorrect? Any ideas?
It looks like Moz automatically rounds even the error :D beyond the 16th digit, while IE doesn't.
Hm...
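A common display-only workaround is to scale, round, and divide. This is a minimal sketch assuming IEEE 754 doubles; roundTo is a hypothetical helper name, not code from the thread:

```javascript
// Scale-round-divide: round a float to a given number of decimals
// for display purposes. roundTo is a hypothetical helper name.
function roundTo(value, decimals) {
    var factor = Math.pow(10, decimals);
    return Math.round(value * factor) / factor;
}

var raw = 0.8 * 3;           // carries binary-representation noise
var shown = roundTo(raw, 4); // 2.4
```

Unlike toFixed, this returns a number rather than a string, so the usual caveat applies: only round for display, not for further arithmetic.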
liorean 01-23-2006, 02:15 PM No idea what the real cause for this is, but note:
- The scripting engines may be performing optimisations that cause operating differences.
- The scripting engines may use different internal precision. I wouldn't be surprised if IE used the x86-32/IA-32 80-bit floating point numbers or used 64-bit hardware floats directly (which means the processor will be using 80-bit floats for calculations to add some precision), while Moz and Opera used a software 64-bit float struct to make them able to run the same floating point code on both big-endian and little-endian processors, as well as giving support for JavaScript numerals on processors with no native 64-bit float support or with legacy 64-bit formats. Of course, this is just speculation on my end.
- It's probably IE being in error. Opera votes for Moz being correct :)
Seems like another IE/Moz cross-browser "thinkabout"... Hey, you *&^%%^%$#$# IE and Moz programmers, can't you find a common way, darn *&^%%^%$#$# :mad: :mad: :mad: ?
I am fed up with those differences in interpreters...
Men, we do need a common way to progress... Those silly differences keep all of us in an irresolute position. :rolleyes:
liorean 09-05-2006, 01:05 PM Still interested in getting an answer to this, Kor? Because I've delved some into the number systems and think I know the cause.
Essentially, the scripting engines are doing the same thing to the numbers, they just have different ways of dealing with serialisation. JScript serialises so as to not add precision, it simply truncates the number at the chosen number of decimal digits. SpiderMonkey, linear_b and KJS/JavaScriptCore also serialise so as to not add precision, but they round to nearest number instead of truncating.
However, the actual binary representation is exactly the same  only the string parsing to number and number serialisation to string differs.
For example, try this:
var
a=2.4000000000000003,
b=2.4000000000000004,
c=a===b;
alert([a,b,c].join('\n'));
/*
JScript

Microsoft Internet Explorer

2.4000000000000003
2.4000000000000003
true

OK

JavaScriptCore in Swift (just because:)

JavaScript Alert

2.4000000000000004
2.4000000000000004
true

OK

*/
Eric Lippert (http://blogs.msdn.com/ericlippert/) wrote a blog entry Fun With Floating Point Arithmetic, Part Three (http://blogs.msdn.com/ericlippert/archive/2005/01/17/354658.aspx) which speaks of just this. I quote:
----
A reader wrote in to ask some questions about how floating point numbers are displayed in decimal. He noticed a weirdness in VBScript, but it's actually easier to show the scenario in JScript. (There are additional factors at play in VBScript which I may get to in another article later.) Consider the following:
var x = 0x8000000000000800;
That would be a 64 bit unsigned integer. It's obviously too large to fit into a 32 bit signed integer, so JScript generates a float and assigns it to the variable slot. However, since this number requires exactly 53 bits, it can be represented with full fidelity as a float. It is not rounded.
In decimal notation, this value should be 9223372036854777856. But if we print out the value of x, we get 9223372036854777000! Why is it rounded off when this particular float has full precision?
Maybe it doesn't have full precision. Maybe I've been lying to you this whole time. Maybe in fact the float is stored in decimal internally, with a 16 slot decimal digit buffer! Fortunately, we can test this hypothesis out.
print(x % 0x800); // 0
print(x % 10); // 6
Whew! The mod operator shows that JScript believes that this number is evenly divisible by 2048, and that the last digit when represented in base 10 is in fact 6.
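Lippert's two mod checks can be reproduced in any ECMAScript engine. A minimal sketch, assuming IEEE 754 doubles; the variable names are my own:

```javascript
// 2^63 + 2^11 uses significand bits 63 down to 11: exactly 53 bits,
// so the double stores this integer with full fidelity.
var x = 0x8000000000000800;          // 9223372036854777856 exactly
var divisibleBy2048 = (x % 0x800 === 0); // true: the low 11 bits are zero
var lastDecimalDigit = x % 10;           // 6: the value ends in ...777856
```

The % operator on doubles is computed exactly, which is why both checks recover information that the serialised string hides.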
But that just makes it even more confusing! If JScript knows that the last decimal digit is a six, why does converting the number to a string end in a zero?
Because we do not want to ever make it look like a float has more precision than it actually does. By lopping off the last few decimal digits and replacing them with zeros, we emphasize that floats are accurate only to about fifteen or sixteen significant decimal digits. Imagine the confusion that would result if the situation were reversed:
var x = 9223372036854777000;
print(x); // prints 9223372036854777856
Where did the extra precision come from? To the naïve user who does not realize that numbers are stored in binary internally, this looks really bizarre. They put in something with 16 significant digits and something with 19 comes out! We do this rounding because in the real world, people expect floating point numbers to act like decimal numbers, not binary numbers.
A correct and efficient float-to-string algorithm which shows numbers with a prescribed level of decimal precision is surprisingly difficult to write, particularly if you add the requirement that the string-to-float algorithm have nice "round trip" properties. There are lots of places where things can go slightly wrong.
You've probably noticed already, for instance, that the algorithm which JScript uses does NOT have the property that the decimal integer which comes out is the closest decimal integer to the actual value. Given that we're going to round to a fixed number of decimal significant digits, we would expect that 9223372036854777856 would be rounded to 9223372036854778000, not 9223372036854777000.
In fact, the specification categorically states that the last digit need not be correctly rounded, because doing so is a pain.
----
(Emphasis mine.)
No. As it was only a curiosity, and as I have not needed precision higher than 4 decimals in my project so far, I had quit searching for a solution.
So I should not care about the "seemingly" wrong decimal representation, as long as the binary value (which is in fact what matters during the intermediate calculation steps) is stored correctly.
As the precision matters mainly during the calculation, and as the result, for humans, has meaning no further than a few decimals after the floating point, I guess the problem becomes irrelevant...
In other words, toFixed(d) is enough as long as d is no higher than 16, and in 99% of cases it shouldn't be.
Is this a correct interpretation of that article?
liorean 09-05-2006, 02:52 PM In other words, toFixed(d) is enough as long as d is no higher than 16, and in 99% of cases it shouldn't be.
Is this a correct interpretation of that article?
Not quite. The total number of precise decimal digits isn't consistent, because the number has 53 bits of precision, but those bits are distributed between the integer and fraction parts. So the size of d would have to change depending on how many bits are used for the integer part.
So, if I don't care about the seeming errors during the calculation steps, and the final result is simply passed through toFixed(), can I expect a correctly shown result within a 15-digit fraction part?
liorean 09-05-2006, 05:53 PM So, if I don't care about the seeming errors during the calculation steps, and the final result is simply passed through toFixed(), can I expect a correctly shown result within a 15-digit fraction part?
Ah, you're misunderstanding what I just said. The calculations work perfectly fine; they are the same in all the engines I tested. However, how many decimal digits of precision you can expect is unreliable. If the integer part of the number is large, you get fewer precise decimals. For example, (3*0.8)*0x100000000 gives just five decimals of precision; the sixth decimal is truncated in JScript and rounded up in the others.
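The trade-off liorean describes can be sketched numerically. fractionDigits is a hypothetical helper (not code from the thread) estimating how many reliable fraction digits remain after the integer part has consumed its share of the 53 significand bits:

```javascript
// Estimate the number of trustworthy decimal fraction digits of n,
// given that an IEEE 754 double has 53 significand bits shared between
// the integer and fraction parts. fractionDigits is a hypothetical helper.
function fractionDigits(n) {
    var i = Math.floor(Math.abs(n));
    var intBits = i ? Math.ceil(Math.log(i + 1) / Math.LN2) : 0;
    return Math.floor((53 - intBits) * Math.LN2 / Math.LN10);
}

fractionDigits(3 * 0.8);                 // small integer part: ~15 digits
fractionDigits((3 * 0.8) * 0x100000000); // large integer part: ~5 digits
```

The roughly five reliable decimals for the second value match liorean's (3*0.8)*0x100000000 example.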
So, to get consistency you need to lop off the last decimal and round the fraction part. Don't reuse a value you've lopped the last decimal off of in calculations, though; use the original value for that, and only lop off the decimal when you're displaying it. There are several ways to lop the last decimal off a number, and there are probably slimmer ways to do it than the one I'm going to show you. But here goes:
with(Math){
    Number.prototype.imprecisionBeGone=function(){
        var
            fPart=this%1,
            iPart=this-fPart,
            iAbs=abs(iPart),
            precision=floor(log(pow(2,53-(iAbs&&ceil(log(iAbs)/LN2))))/LN10),
            decimals=pow(10,precision),
            fraction=precision&&round(decimals*fPart)/decimals;
        return iPart+fraction;
    }
}
(3*.8).imprecisionBeGone();
Don't reuse a value you've lopped off the last decimal from in calculations though, use the original value for that and only lop off the decimal when you're displaying it.
No, I guess you misunderstood me. I meant to say the same thing: display the rounded value only as informative (under the 16th decimal; anyway, for informative purposes, anything beyond 4-5 decimals is hard to appreciate, and in fact of no use, for the human mind at a quick glance), but use the genuine (not rounded) value for further calculations, if any. Thanks a lot, you have been of great help. You should write a short article about this, here or wherever...