# Can someone help me improve the efficiency of my code?

```javascript
console.time();
var numberOfRolls = 1000000000;
var avgMultiplier = 1;
var avgTimesNumRolls = numberOfRolls * avgMultiplier;
// Used to make sure the successful rolls approach the average
var nums9thru11 = 0;
var nums12 = 0;
var possibleOptions = [2,3,3,4,4,4,5,5,5,5,6,6,6,6,6,7,7,7,7,7,7,8,8,8,8,8,9,9,9,9,10,10,10,11,11,12];

while (avgTimesNumRolls !== 0) {
  var rollResult = possibleOptions[Math.floor(Math.random() * 36)];

  if (rollResult >= 9 && rollResult <= 11) {
    nums9thru11 += 1;
  } else if (rollResult === 12) {
    nums12 += 1;
  }
  // That code filtered out successful rolls
  avgTimesNumRolls -= 1;
}

if (avgMultiplier !== 1) {
  var avgTotal = (nums9thru11 + nums12) / avgMultiplier;
  console.log("The average total is " + avgTotal + "!");
} else {
  var multiplierTotal = nums9thru11 * 2 + nums12 * 3;
  console.log("The multiplier is " + multiplierTotal + "!");
}
// That code returns a different output depending on whether there is an average multiplier or not
console.timeEnd();
```

For `numberOfRolls` greater than 1,000,000,000, it takes about 17 × (numberOfRolls / 1,000,000,000) seconds. I am wondering if I can take that down to maybe 10 or 8 seconds.

For something this complex, wouldn't it be better to hand it off to Python, or Ruby, or C? I'm no expert, but this is a tall order for JavaScript in terms of optimization; it will never equal those languages, among others. I've been wrong before, though.

Logging in any loop construct will be deferred, but it still takes up a whack of stack space. If you really want to benchmark an algorithm, don't log anything during iteration. Don't even cache. Just power through and log the variables at the end.

Nail down the core algo and then start caching results or looking for ways to create generators that donâ€™t use much memory. Iterators and generators are ideal for conserving memory. An example would be the `range()` function in Python. No matter what upper value you give it, the memory usage is the same.
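In JavaScript terms, a generator gives the same constant-memory behavior as Python's `range()`. A minimal sketch (the `range` name is mine, not from the original code):

```javascript
// A generator yields values one at a time, so memory usage stays constant
// no matter how large the range is (similar in spirit to Python's range()).
function* range(start, end) {
  for (let i = start; i < end; i++) {
    yield i;
  }
}

// Values are produced lazily; only one exists at any moment.
let total = 0;
for (const n of range(0, 5)) {
  total += n;
}
console.log(total); // 0 + 1 + 2 + 3 + 4 = 10
```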

Bottom line: any amount of logging, however small, in a loop this large will consume vast amounts of clock ticks and memory. `console.log()` is the resource hog extraordinaire when it comes down to it. Time-wise, think in terms of milliseconds per call, which is a huge hit on any kind of performance.

In a perfect world there would be a general term to describe this sequence…

We are given to understand that this is related to the number of ways that two dice can add up to `n`. That definitely suggests a sequence with a general term. I'm kind of puzzling over this one since it borders on combinatorics and probability, neither of which is a strong suit of mine. Twenty or thirty years ago I'd have been all over this. I still am, but with much less mental acumen. Maybe I'm reading too much into this?

Yes, that is the probability of rolling each possible combination with two dice. It takes some of the load off of the while loop.
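For reference, that 36-entry table follows a triangular pattern: the number of ways to roll a sum `n` with two fair dice is 6 − |7 − n|, so the array can be generated instead of hand-typed. A sketch (`buildOptions` is my name, not from the thread):

```javascript
// For two fair dice, the number of ways to roll a sum n (2..12) is
// 6 - |7 - n|: 1 way for 2, rising to 6 ways for 7, back down to 1 for 12.
function buildOptions() {
  const options = [];
  for (let sum = 2; sum <= 12; sum++) {
    const ways = 6 - Math.abs(7 - sum);
    for (let i = 0; i < ways; i++) {
      options.push(sum);
    }
  }
  return options;
}

const possibleOptions = buildOptions();
console.log(possibleOptions.length); // 36 outcomes, matching the hand-typed array
```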

Narrowing down the definition, can you generalize a mathematical model?

Which model would I generalize? I had `rollResult = Math.floor(1 + Math.random() * 6) + Math.floor(1 + Math.random() * 6)`, but reducing that to `rollResult = possibleOptions[Math.floor(Math.random() * 36)]`, having it select a number from the `possibleOptions` array (adjusted for probability), yielded a 1.5× increase in efficiency.

Calling Math.floor() and Math.random() half as many times will definitely bolster efficiency.
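Going a step further: since the loop only cares about three buckets (sums 9-11, sum 12, everything else), the `Math.floor()` call and the array lookup can both be dropped by comparing one uniform draw against cumulative way-counts. Of the 36 equally likely outcomes, 4 + 3 + 2 = 9 give sums 9-11 and 1 gives a 12. A sketch of that idea, not the original code:

```javascript
// Compare a single uniform draw in [0, 36) against cumulative thresholds:
// the first 9 "slots" stand for sums 9-11, the next 1 for sum 12.
const numberOfRolls = 1_000_000;
let nums9thru11 = 0;
let nums12 = 0;

for (let i = 0; i < numberOfRolls; i++) {
  const r = Math.random() * 36;
  if (r < 9) {
    nums9thru11 += 1; // sums 9, 10, or 11
  } else if (r < 10) {
    nums12 += 1;      // sum 12
  }
  // r >= 10 corresponds to sums 2-8, which the original loop ignores
}
console.log(nums9thru11, nums12);
```

Same distribution, one `Math.random()` call per roll, and no array access in the hot loop.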

If you really need a billion random events, then `Math.random` might not be suitable (if that huge quantity is important, the quality might be too). You might want to get random data from the OS instead (after considering what the OS promises about that data first… and also it might not have that much entropy, because there are few practical applications for it).
Randomness is pretty difficult to reason about.

As for "efficiency", that's vague. Something has to budge (more compute power, or doing less work), or you just let it take the time it takes.

If it's a quick and dirty script intended to avoid doing the math, then there doesn't seem to be any point in changing it.

As mentioned, using a language like C may offer a couple-times speedup (JS is already very fast, though), but that still leaves the randomness issue: getting true randomness would likely be the bottleneck, and changing language wouldn't matter. If that's not wanted, then it's back to either the number of iterations not mattering that much, or some very difficult reasoning about the properties of the randomness source.

Oh, and Python/Ruby are both slower; neither of those has a goal of being anywhere near native speed, they just need to keep up with IO devices.


True randomness is not really my concern. I just need an efficient way to roll, and sort, dice.
I appreciate your comment!