Sorry, but what? If I drove 100 miles at 50 miles per hour, how would multiplying those numbers (100 * 50 = 5000) be in any way an indicator of time? If you post a link to the project or exercise, perhaps we can clear this up. Distance divided by speed would be the proper way to calculate the time taken to make the trip.
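To make the difference concrete, here is a minimal sketch of the two formulas; the function names are mine, not the project's:

```python
def trip_time(distance, speed):
    # time = distance / speed: 100 miles at 50 mph takes 2 hours
    return distance / speed

def multiplied_time(distance, speed):
    # the project's formula: 100 * 50 = 5000, which is not a time
    return distance * speed

print(trip_time(100, 50))        # → 2.0
print(multiplied_time(100, 50))  # → 5000
```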
I found it. https://www.codecademy.com/projects/practice/function-args-the-nile
I completed it a while back, and remember it vaguely. Even my code has driver_time calculated by multiplying distance by speed. Seems like a flaw in the project, but I don’t remember even questioning it at the time. I’m reviewing my code to see if I can figure out why the seemingly illogical formula is used.
From what I can tell (there’s not much explanation given), speed is likely a rating rather than the actual rate of travel, so we aren’t employing the distance formula to determine the time taken to traverse a known distance at a known rate. It’s a formula that “The Nile” uses to compensate its drivers. My best guess as to what happened here: the original instructions are as shown in the video, and at some point someone without knowledge of the project said, as we did, “Hey, wait. That’s not how you calculate time using the distance formula,” and it was changed. Some additional explanation in the project instructions would help.
I’m glad to see I wasn’t the only one banging my head against the wall when running into this. It seems as though they changed the requirement in that step to have you divide instead of multiply, but they did not change the values that the test code was looking for.
This project is still flawed. Although the step 12 instruction for calculating driver time has been corrected so that it is mathematically correct, all of the values in the test_driver function in test.py were calculated using the mathematically incorrect formula time = distance * speed. Given a test distance of 80, Driver2 with speed 7 and cost 20 should be the cheapest: 80 / 7 * 20 ≈ 228.57. Instead, the test assumes Driver1 with speed 2 and cost 10 to be the cheapest, with an incorrectly calculated cost of 80 * 2 * 10 = 1600. Unit tests are meant to make error identification and debugging easier, but this exercise does the exact opposite. Still not fixed after how long now ¯\\_(ツ)_/¯ !!!
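A small reproduction of the mismatch, using the numbers from the post; the Driver class here is a stand-in for the project’s code, not its exact implementation:

```python
class Driver:
    """Stand-in for the project's driver: a speed rating and a cost rate."""
    def __init__(self, speed, cost):
        self.speed = speed
        self.cost = cost

def driver_cost(distance, driver):
    # Corrected step-12 formula: time = distance / speed, then time * cost.
    return distance / driver.speed * driver.cost

def buggy_test_cost(distance, driver):
    # Formula the test values appear to have been generated with:
    # time = distance * speed, then time * cost.
    return distance * driver.speed * driver.cost

driver1 = Driver(speed=2, cost=10)
driver2 = Driver(speed=7, cost=20)

# With the correct formula, driver2 is the cheapest...
print(round(driver_cost(80, driver2), 2))  # → 228.57
print(driver_cost(80, driver1))            # → 400.0
# ...but the test's expected value matches driver1 under the buggy formula.
print(buggy_test_cost(80, driver1))        # → 1600
```

So any correct submission computes 228.57 for the cheapest driver, while the test still expects the 1600 produced by the old formula.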