View Full Version : Javascripting efficiency - loops versus non-loop

12-11-2007, 02:30 AM
Hello, I was wondering which was more efficient/has better performance.
Or perhaps it can't be determined because of the different ways CPUs work.

The snippets in question are:

Example 1 (loop):
var str = "";
for (var i = 1; i <= 5; i++) {
    str += "Option=" + obj[i].value;
}

Example 2 (unrolled):
var str = "";
str += "Option=" + obj[1].value;
str += "Option=" + obj[2].value;
str += "Option=" + obj[3].value;
str += "Option=" + obj[4].value;
str += "Option=" + obj[5].value;

If the count is known (say, 10), would it be faster to just punch the code out as in example 2? I know it's a lot uglier, but which would run faster?
Would ex. 1 use branch prediction? And if so, would that make it slower than a straight punch-through? Or are they equivalent? My gut feeling is that ex. 2 would be faster even though it's a lot more verbose.

Thanks ahead of time.

rnd me
12-11-2007, 02:59 AM
this would be faster:

str+= "Option="+obj[3].value;

with only ten little ops, you won't see a shred of difference between the two.

if you have a thousand, the loop would probably be faster if you cached object properties right.

fastest for big operations:

//and hey, since you're starting at one anyways, let's tweak this out to the max:
var str = [];
var mx = obj.length || 1000; //the first 1000 nodes only
for (var i = mx; i; i--) {
    str[i] = obj[i] ? obj[i].value : "";
}
var output = "Option=" + str.slice(1).join("\nOption="); //slice(1) skips the unused 0 slot

loops avoid repetition which is good for the computer as well as the programmer.

note how, in the loop version you posted,
about 5 things get evaluated in the loop's test clause on every pass, plus three more string creations and object property lookups in the 'action' part.
compare that to my loop sample, which makes no per-pass evaluations beyond the counter,
uses an array for storing the data as it's built, and caches the length (although that won't matter much going backwards like this).

try not to ask for an object property more than once in a loop.
a simple rule of thumb would be that using fewer dots should yield faster code.
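to make the "fewer dots" rule concrete, here's a little sketch (the names items/total1/total2 are made up for illustration, not anything standard):

```javascript
// hypothetical data: summing a property from a list of objects
var items = [{ value: 1 }, { value: 2 }, { value: 3 }];

// more dots: items.length, items[i], and .value all resolved every pass
var total1 = 0;
for (var i = 0; i < items.length; i++) {
    total1 += items[i].value;
}

// fewer dots: cache the length once, and grab each element reference once
var total2 = 0;
for (var j = 0, len = items.length; j < len; j++) {
    var item = items[j]; // one lookup, reused
    total2 += item.value;
}
```

both produce the same sum; the second just asks the engine to chase fewer property chains per pass, which is where old interpreters lost time.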

hope this helps you get started.
like i said, you won't see a bit of difference on 10 items, and perhaps not even on 100.

i became more aware of these factors after making a javascript image editor. when applying filters to even 800,000-pixel images, your CPU will let you know if your loops are inefficient.

12-11-2007, 06:45 AM
Regardless of infinitesimal efficiency issues, never, I repeat, never hand-unroll a for loop, even with a small number of iterations. If anything, remember the code has to be parsed anyway, which in and of itself takes time too, but I'm assuming that should be ignored.

12-11-2007, 09:25 AM
Technically, unrolling the loop is faster, since the comparisons that the loop statement would have executed are skipped. However, computers are fast, and 10 is small. If you're looking for the best of both worlds, consider Duff's device. You could even prototype it in as something like Array.prototype.loop and make it very convenient to use.
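For reference, the Duff's-device idea can be sketched in JavaScript like this (duffEach is a made-up helper name for illustration, not a built-in or standard API):

```javascript
// sketch of duff's device: process the 0-7 leftover items first, then run
// the body in unrolled blocks of 8, paying the loop-test cost ~1/8 as often
function duffEach(arr, fn) {
    var i = 0;
    var blocks = Math.floor(arr.length / 8);
    var rem = arr.length % 8;

    while (rem--) fn(arr[i++]);   // leftover items
    while (blocks--) {            // full blocks of 8
        fn(arr[i++]); fn(arr[i++]); fn(arr[i++]); fn(arr[i++]);
        fn(arr[i++]); fn(arr[i++]); fn(arr[i++]); fn(arr[i++]);
    }
}
```

Whether this actually wins over a plain loop depends entirely on the engine, so it's worth measuring before committing to it.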

12-11-2007, 10:14 PM
I do like his suggestion; the use of the array and .join method is nice.
I know 10 is small, it was just an example, but if extrapolated to a higher number, would it still hold true? Does JS do anything like branch prediction, or is that solely up to the client's CPU?

In theory, would it then be faster to write out all, say, 100 million iterations instead of a loop?

I know it would probably never be put into practice; 100 million+ lines of code would be insane compared to the much simpler solution of a loop.

12-12-2007, 03:18 AM
Here's my test:

function a(){
    x+=0;x+=1;x+=2; /* . . . */ x+=99999;
}

function b(){
    for(var i=0;i<1e5;++i)x+=i;
}

var x;

x = 0;
var aStart = new Date();
a();
var aTime = (new Date()).getTime() - aStart.getTime();

x = 0;
var bStart = new Date();
b();
var bTime = (new Date()).getTime() - bStart.getTime();

alert("written:" + aTime + "\nloop:" + bTime);

I put the "loop" code in functions so that the text would already be parsed by the time I got around to clocking them.

Results in FF3: "written:62 loop:183"
Results in IE6: "written:32 loop:46"
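Single runs like these are noisy (timer resolution on those browsers is coarse), so averaging a few runs gives steadier numbers. A minimal sketch (timeIt is a made-up helper, not a standard function):

```javascript
// run fn several times and return the average elapsed milliseconds
function timeIt(fn, runs) {
    var total = 0;
    for (var r = 0; r < runs; r++) {
        var start = new Date().getTime();
        fn();
        total += new Date().getTime() - start;
    }
    return total / runs;
}

// usage: average of 5 runs of the loop version
var loopAvg = timeIt(function () {
    var x = 0;
    for (var i = 0; i < 1e5; ++i) x += i;
}, 5);
```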