After some more testing I now find that the search randomly takes 1.0 ms, 0.5 ms or 0.0 ms (± 0.00005 ms) on my system (i7-4790K CPU). Do you have any good explanation for that? I allow for 5 digits of precision in the timing, so it's kind of strange that it falls into brackets of half milliseconds. I have only a vague idea of what goes on in low-level languages like C, but could it be the effect of some CPU caching or something like that?
Python is indeed C under the hood (for a large part, at least), but I don't think it is a side effect of the byte code or anything lower. It may have something to do with how the output is produced… (just taking a stab at this)
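Another possibility worth ruling out before CPU caching is the resolution of the clock itself: on Windows, `time.time()` can advance in coarse steps rather than continuously, which would make measurements snap to values like 0.0, 0.5 and 1.0 ms. A small sketch (not from the thread) to estimate how coarsely a given clock ticks:

```python
import time

def clock_resolution(clock=time.time, samples=5):
    # Busy-wait until the clock value changes, and record the smallest
    # observed step; that approximates the clock's tick size
    ticks = []
    for _ in range(samples):
        t0 = clock()
        t1 = clock()
        while t1 == t0:
            t1 = clock()
        ticks.append(t1 - t0)
    return min(ticks)

print("time.time()         :", clock_resolution(time.time))
print("time.perf_counter() :", clock_resolution(time.perf_counter))
```

If `time.time()` reports a tick size near half a millisecond on your machine, that alone explains the bracketing.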
```python
time_spent = round((self.__t2__ - self.__t1__) * 1000, 2)
```
What does the output look like when you don’t multiply, or don’t round?
For output, you can get the desired effect with `.format()`, including rounding.
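For instance, a precision spec inside the placeholder rounds during formatting, so no separate `round()` call is needed (the value below is taken from one of the runs later in the thread):

```python
elapsed = 0.0009987354278564453  # seconds

# {:.2f} rounds to 2 decimal places while formatting
print("Search : {:.2f} ms".format(elapsed * 1000))  # Search : 1.00 ms
```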
Thanks for the input! I didn't know you could do that with `.format()`. I will try that out!
After removing the rounding I get the same result. It fluctuates a bit more now, but the pattern is still clear. The strange thing is that when running on repl (only 5M items, though), the search time seems to float more freely. That runs on Linux, though, while I run on Windows 10.
```
# some different runs
Search : 0.0 s
Search : 0.0009987354278564453 s
Search : 0.0005002021789550781 s
Search : 0.000997781753540039 s
Search : 0.0010001659393310547 s
```

```python
# current version of __exit__
def __exit__(self, exc_type, exc_val, exc_tb):
    self.__t2__ = time.time()
    spaces = Timer.line_length - len(self.__msg__)
    time_spent = self.__t2__ - self.__t1__
    Timer.total_time += time_spent
    Timer.add_msg('%s%s : %s s' % (self.__msg__, ' ' * spaces, self.__t2__ - self.__t1__))
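Since `time.time()` is the coarse wall-clock, one thing to try is `time.perf_counter()`, which is the highest-resolution clock Python offers for interval timing. Below is a sketch of the context manager rebuilt around it; the surrounding class (`line_length`, `total_time`, and a `messages` list standing in for `add_msg`) is reconstructed from the snippet and is an assumption, not the thread's actual code:

```python
import time

class Timer:
    line_length = 30
    total_time = 0.0
    messages = []  # stand-in for the original Timer.add_msg storage

    def __init__(self, msg):
        self.__msg__ = msg

    def __enter__(self):
        # perf_counter() has much finer resolution than time.time(),
        # especially on Windows
        self.__t1__ = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.__t2__ = time.perf_counter()
        spaces = Timer.line_length - len(self.__msg__)
        time_spent = self.__t2__ - self.__t1__
        Timer.total_time += time_spent
        Timer.messages.append('%s%s : %s s' % (self.__msg__, ' ' * spaces, time_spent))
```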
Times around 1 ms are the most common, then 0 ms, and quite rarely 0.5 ms.
I think I will try removing my current way of constructing the string entirely and see if that changes anything.
I'm not sure whether Python supports all the format specifiers that C does (with the `%` syntax above); check the C/C++ documentation to see what's available. Not being a C programmer, I'm once again blowing smoke, but I was sure rounding was in there somewhere. I should actually check that `.format()` supports rounding as well, before I walk away convinced and biased.
Yep, it’s possible.
It could just be whatever else your CPU is doing at the same time. In the longer case it might start running, switch to checking on the anti-virus, and then switch back to finish processing. There are always too many factors for precise one-shot timing. The best (and only) way to time accurately is to run the same thing somewhere over 1000 times and then divide to get the average. The more times you run it, the more accurate your timing will be.
Good point! I modified my code so that the search part runs x times. That way:
- I don't have to repeat the creation of the array
- I can test on the exact same data
This is my result for 10,000 search runs over 5,000,000 ints:
```
Search runs    : 10000
Total Pieces   : 5000000
Missing Pieces : 30
Missing Pieces : [4910216, 4342650, 4311865, 4261865, 4210440, 4127409, 4054961, 3858957, 3847473, 3706236, 3398906, 3211504, 2718694, 2654875, 2644655, 2629007, 2608768, 2471715, 2166265, 2144242, 1886887, 1852793, 1403282, 1354109, 1305442, 1281497, 798860, 552078, 496959, 82186]
Found Pieces   : [4910216, 4342650, 4311865, 4261865, 4210440, 4127409, 4054961, 3858957, 3847473, 3706236, 3398906, 3211504, 2718694, 2654875, 2644655, 2629007, 2608768, 2471715, 2166265, 2144242, 1886887, 1852793, 1403282, 1354109, 1305442, 1281497, 798860, 552078, 496959, 82186]
Iterations     : 560 per run
-----------------------------------
Timer Report
-----------------------------------
Setting up array       : 79.56862 ms
Picking missing pieces : 22.02845 ms
Finding end pieces     : 0.0 ms
Search (10000 runs)    : 0.64674 ms per run
Total Running Time     : 6569.01622 ms
-----------------------------------
```
On a different note, I tried to speed up the creation of the array and picking the missing numbers, so I changed
```python
# pick out and remove missing pieces in a 2nd step
pieces = list(range(1, range_size + 1))
```
into something like
```python
# picking missing pieces first, then make the array
pieces = [x for x in range(1, range_size + 1) if x not in self.missing_pieces]
```
This turns out to be a horrible idea: adding even the simplest condition to the creation of millions of ints comes at a huge cost. It might work better with a `dict.get()` check, but if the pieces are many and the missing ones are few, the first approach will probably still be the best.
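For what it's worth, part of that cost may be the membership test itself rather than the condition as such: if `self.missing_pieces` is a list, `x not in self.missing_pieces` scans it linearly for every one of the millions of ints. Converting it to a `set` first makes each lookup O(1) on average, much like the `dict.get()` idea. A sketch with assumed names and a smaller size than the thread's 5M:

```python
import random

range_size = 1_000_000  # illustrative, smaller than the thread's 5,000,000
missing_pieces = random.sample(range(1, range_size + 1), 30)
missing_set = set(missing_pieces)  # O(1) average membership instead of O(n)

# same comprehension as above, but testing against the set
pieces = [x for x in range(1, range_size + 1) if x not in missing_set]
print(len(pieces))  # range_size - 30 pieces remain
```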
Why don't you use the `timeit` module included in the Python standard library? It is more precise, and it is designed for the sole purpose of timing Python code. If you use a plain clock, you measure the total wall time between two readings, which may include some other process using the CPU, thus inflating the result, as @alexcraig mentioned. `timeit` is designed with this in mind.
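A minimal sketch of how that could look here; the statement and setup strings below are illustrative stand-ins, not the thread's actual search code:

```python
import timeit

setup = "pieces = list(range(100_000))"
stmt = "99_999 in pieces"  # worst-case linear search in a list

# repeat() returns one total per trial; min() is the trial least
# disturbed by other processes grabbing the CPU mid-run
totals = timeit.repeat(stmt, setup=setup, repeat=3, number=100)
print(f"{min(totals) / 100 * 1000:.4f} ms per run")
```

Taking the minimum over several trials, rather than the mean, is the usual `timeit` convention: background noise can only make a trial slower, never faster.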