How come in javascript 9.9 + 4.2 = 14.100000000000001

Because all JS numbers are actually 64bit floating point numbers.
More info here: http://stackoverflow.com/questions/5...ntmathbroken
There are workarounds, like this, for example:
Code:
function add(one, two) {
    return ((one * 10) + (two * 10)) / 10;
}
add(9.9, 4.2); // 14.1
Because of the accuracy of your computer math chip.
Try...

Code:
alert((9.9 + 4.2).toFixed(1));
Last edited by jmrker; 09-08-2013 at 11:48 PM. Reason: types too slow.....
.toFixed() cannot always be relied upon.
Code:
alert(Math.round(0.49999999999999992).toFixed(2)); // results in 1.00
If you need an exact answer to a set number of decimal places then shift the decimal point in all your numbers that many places to the right to make integers before doing the calculation and then shift it back after the calculation. For example with currencies that use two decimal places you should always multiply them all by 100 at the start and divide by 100 at the end or the answer might not be exact.
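As a rough sketch of that scale-to-integer approach for two decimal places (the function name here is just for illustration, and Math.round guards against stray representation error introduced by the multiplication itself):

```javascript
// Shift both operands two decimal places right, add as integers, shift back.
function addCurrency(a, b) {
    return (Math.round(a * 100) + Math.round(b * 100)) / 100;
}

addCurrency(9.9, 4.2); // 14.1
addCurrency(0.1, 0.2); // 0.3
```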
That is OK for addition, but for multiplication you need to divide by the square of the scaling factor.
Code:
<script type="text/javascript">
var a = 4;
var b = 5;
alert(a * b); // 20
var c = ((a * 100) * (b * 100)) / 10000;
alert(c); // 20
</script>
Last edited by Philip M; 09-09-2013 at 07:52 AM.
All the code given in this post has been tested and is intended to address the question asked.
Unless stated otherwise it is not just a demonstration.
I'd add that this produces a string. If a number is desired, then one can explicitly convert it back to a number with:

Code:
Number((9.9 + 4.2).toFixed(1));

Interesting problem. Seems more like a flaw in Math.round than in toFixed, though. After rounding, you'll have either 0 or 1; toFixed returns the expected values (0.00 and 1.00) in both cases.
But one must remember that even *THAT* number is *NOT EXACTLY* 14.1.
No decimal tenth (0.1, 0.2, 0.3, etc.) except 0.5 can be expressed exactly in IEEE floating point (the number type used in all modern CPUs).
Remember, in binary floating point (which IEEE floating point uses), each bit to the right of the decimal point represents a NEGATIVE POWER OF 2.
So, in binary, 0.110101 is 2^-1 + 2^-2 + 2^-4 + 2^-6,
which is 0.5 + 0.25 + 0.0625 + 0.015625,
and even though IEEE floating point extends those negative powers of 2 out to 53 binary digits, you simply can *NOT* represent 0.1, 0.2, etc., EXACTLY as the sum of such values, out to any number of binary digits.
Here's a little demo to show you how close you can get but it's still not 0.1:
Code:
<script>
var s = 0;
var b = "0.";
for (var i = 1; i < 60; ++i) {
    var p2 = Math.pow(2, -i); // negative powers of 2
    if (s + p2 < 0.1) {
        b += "1";
        s += p2;
        document.write("2^-" + i + " is " + p2 + ", sum is " + s + "<br/>");
    } else {
        b += "0";
    }
}
document.write("Binary notation: " + b);
</script>
Last edited by Old Pedant; 09-09-2013 at 11:28 PM.
Be yourself. No one else is as qualified.
That raises the question of why every number isn't just offset (in JavaScript) so that it's an integer for the purpose of calculations... Bad performance?
So I guess the best solution is to determine how many decimal places are desired and use something like joesimmon's idea of implementing an add function... though I'd do it a bit differently:
Code:
Math.add = function () {
    var sum = 0;
    var argumentsIndex = 0;
    while (argumentsIndex < arguments.length) {
        sum += arguments[argumentsIndex] * 1000; // thousandths offset
        argumentsIndex += 1;
    }
    return sum / 1000; // reverse the thousandths offset
};
alert(Math.add(9.9, 4.2));
Because for most floating point numbers that would give meaningless results.
Consider what would happen if 3e100 were to be converted from a floating point number to an integer before being used in a calculation: it would overflow the available memory in trying to add all the zeros. Or imagine if you were multiplying by 1e20.
Also, the values that the calculations are performed on ARE integers: the nearest binary equivalent of the number, with the number of decimal places needed to offset the result being stored separately (the 100 and 20 in the above examples).
When you use floating point numbers in any programming language it is assumed that you only need the answer to be accurate to a few decimal places (approximately 15 in the case of JavaScript).
Only when you are using small integers can you expect computer calculations to be completely accurate after the decimal-to-binary-to-decimal conversions, because only then do the decimal numbers have exact binary equivalents.
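A few one-liners illustrating that claim:

```javascript
// Most decimal fractions pick up representation error...
console.log(0.1 + 0.2);                               // 0.30000000000000004, not 0.3
// ...while small integers round-trip exactly...
console.log(1 + 2 === 3);                             // true
// ...up to the 53-bit significand limit, where precision runs out.
console.log(Math.pow(2, 53));                         // 9007199254740992
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53)); // true
```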
Stephen
Learn Modern JavaScript - http://javascriptexample.net/
Helping others to solve their computer problem at http://www.felgall.com/
Don't forget to start your JavaScript code with "use strict";
which makes it easier to find errors in your code.
It's not exactly "assumed" if you can't accurately add two numbers with a tenths digit without creating an intermediary function.
Sounds like they should have come up with two types: IEEEFloatingPointNumber (what we have now) and AccurateNumber, which is limited to, and accurate within, that 15-digit range and is what the average Joe would use for mathematical operations.
They have: in JavaScript, numbers without an e or a decimal point are limited to, and accurate within, that 15-digit range. If the number contains a decimal point or an e then it is an IEEEFloatingPointNumber.
Anyway 4.2 cannot be accurately entered within the 15 digits available. Other numbers such as 4.25 and 4.0078125 and 4.000003814697265625 and 4.0000000000582076609134674072265625 which can be held exactly within the 15 available digits do not cause unexpected results provided they are used with other numbers that can also be exactly entered.
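To illustrate (these particular sums are my own examples, not from the post above): values built from powers of 2 combine with no error at all.

```javascript
console.log(4.25 + 0.25);   // 4.5, exact: both operands are sums of powers of 2
console.log(0.5 + 0.25);    // 0.75, exact
console.log(4.0078125 * 2); // 8.015625, exact: multiplying by 2 only shifts the exponent
```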
Last edited by felgall; 09-10-2013 at 08:22 PM.
They have, in other languages. It's called DECIMAL numbers. For example, MySQL (and SQL Server) support the DECIMAL datatype. You specify a number as, for example, DECIMAL(20,4),
which means it can hold 20 digits, 4 of which are to the right of the decimal point.
By the by, all of the Microsoft .NET languages *can* have this capability, too. It's built into the .NET framework. VB.NET and C# have the DECIMAL data type as a syntactical part of the language and C++ (and other languages) can use it via an extended library.
But don't think the DECIMAL type solves everything. Now how do you represent 1/3 exactly???? If you have a variable declared as DECIMAL(20,4) and assign the value 1/3 to it, you will see it as 0.3333, which looks fine. But now multiply it by 3 and you get 0.9999, whereas we all know that 3 * 1/3 is exactly 1. Right?
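A hypothetical JavaScript sketch of that DECIMAL behavior (the names and the scale factor are invented for illustration): values are kept as integers scaled by 10^4, so rounding happens in decimal rather than binary, and the 1/3 problem shows up just as described.

```javascript
// Simulate DECIMAL-style arithmetic with 4 decimal places.
var SCALE = 10000;
var oneThird = Math.round((1 / 3) * SCALE); // 3333, i.e. 0.3333
var tripled = (oneThird * 3) / SCALE;       // 0.9999, not the exact 1 we might expect
console.log(tripled);
```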
There is no way in *ANY* notation, in *ANY* base of numbers, to EXACTLY represent all possible fractions. Suppose we used BASE-3 numbers. So then 1/3 would be represented as 0.100000, exactly. But *NOW* there is no way to represent 1/2 exactly!
Pardon me, but people who expect exact numerical results from any computer-based system are akin to those ancient alchemists who thought you could turn lead into gold.
As a comment on my own personal history...
I had the opportunity four times in my career to create floating point number systems in software. This was long before floating point hardware was standard (in fact, the hardware I worked on didn't have ANY multiply or divide built in!) and even before the IEEE adopted the current standards.
Two of those four times (for BASIC languages we produced for Atari 8-bit computers) I opted to use DECIMAL FLOATING POINT. And just for the reasons that this thread has been discussing: Human beings, trained to use decimal, are just more comfortable when they can add 0.1 to itself 10 times and come up with *exactly* 1.00 (and so on). And I think many of the people who used those two languages appreciated my choice, especially those who wrote programs dealing with money.
But let's face it: Decimal floating point, even if built into hardware, is much much slower for computers than is binary floating point. And, in the end, it still doesn't solve the problem of representing 1/3 (or 1/7 or 1/11 or...) *exactly*.
So learn to live with binary floating point. It's here to stay.