Iterating through every single combination of ML models, features, hyperparameters, etc.

Hey everyone,

This is kind of a strange hypothetical question, but it’s something I’ve been wondering about as I learn more about ML models and how to tune them for the best performance. I know that what I’m describing is not generally done by ML engineers, but I’m trying to understand WHY it’s not done and what they do instead…

So when setting up a machine learning model, we get to choose among many different models, feature selection methods, hyperparameter values, and much more. The number of possible combinations is astronomical, and the ML engineer has to find one that is optimal, or at least close to it.
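Just to put a rough number on what I mean by astronomical, here’s a toy back-of-the-envelope count (the model names and grid values are completely made up for illustration):

```python
# Toy count of a tiny search space (all names/values are made up).
models = ["logistic_regression", "random_forest", "svm", "gradient_boosting"]
feature_subsets = 2 ** 20                    # every subset of just 20 candidate features
learning_rates = [0.001, 0.01, 0.1, 1.0]
regularization = [0.0001, 0.001, 0.01, 0.1, 1.0]
tree_depths = [2, 4, 8, 16, None]

combos = (len(models) * feature_subsets * len(learning_rates)
          * len(regularization) * len(tree_depths))
print(f"{combos:,}")  # 419,430,400 combinations, and this grid is tiny
```

And that’s before counting continuous hyperparameters, which you can’t fully enumerate at all.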

Now, testing every possible combination would be computationally expensive on a grand scale. But couldn’t it be done just once to determine the best model, which is then deployed into the real world? Surely a computer could grind through that enormous test, even if it took a whole day, and then the ML engineer would know which model, features, and hyperparameters are best for this particular problem, with no need to manually fiddle with them… right? Wouldn’t you have found the “needle in the haystack” if you only had to run this horribly slow process once?
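For concreteness, the brute-force loop I’m imagining looks something like this, just for a single model family (a minimal sketch; the dataset and grid values are made up, and a real run would also have to sweep over models and feature subsets):

```python
import itertools

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in dataset so the sketch runs end to end.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# A deliberately tiny grid; real search spaces are vastly larger.
grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [2, 4, 8, None],
    "min_samples_leaf": [1, 5, 10],
}

best_score, best_params = -1.0, None
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    # Cross-validate each combination and keep the best one.
    score = cross_val_score(
        RandomForestClassifier(**params, random_state=0), X, y, cv=5
    ).mean()
    if score > best_score:
        best_score, best_params = score, params

print(best_params, round(best_score, 3))  # 36 combos * 5 folds = 180 model fits
```

Even this toy grid means 180 model fits; the full version of that loop, over everything, is what I’m asking about.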

I’m just curious why this approach isn’t used and what is used instead. The only drawback that really comes to mind is overfitting (the exhaustive search might just pick whatever happens to score best on the validation data), but it seems like we have to watch for that pitfall no matter how we develop our models.

Thanks!