According to Springer: *"The no-free-lunch theorem of optimization (NFLT) is an impossibility theorem telling us that a general-purpose, universal optimization strategy is impossible. The only way one strategy can outperform another is if it is specialized to the structure of the specific problem under consideration."*

The theorem was introduced in 1995 by David Wolpert and William Macready. In essence, it says that no single model works best for every problem.

The Chemical Statistician blog on WordPress describes a "model" as a simplified representation of reality. The simplifications exclude details that are not needed, which allows us to focus on the aspect of reality we want to understand. These simplifications rest on assumptions that hold in some situations but fail in others. In other words, a model that explains one situation very well may fail in a different situation.

An assumption that holds for one problem may not hold for another, which is why in machine learning it is common to find different models that are best suited to different problems. In supervised learning, validation or cross-validation is used to assess the predictive accuracy of several candidate models and to select the best-suited one.
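As a minimal sketch of that idea, the pure-Python example below (the data, model names, and helper functions are all illustrative, not from the source) uses k-fold cross-validation to compare two simple models, a constant mean predictor and a least-squares line, on synthetic linear data. Here the linear model wins because its assumption matches the data's structure; on data that violates that assumption, the ranking could flip, which is the NFLT's point.

```python
import random

def k_fold_indices(n, k):
    """Shuffle indices 0..n-1 and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(0).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def mse(y_true, y_pred):
    """Mean squared error between two equal-length sequences."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def cross_validate(fit, xs, ys, k=5):
    """Average held-out MSE of a model over k folds.

    `fit(train_xs, train_ys)` must return a prediction function.
    """
    scores = []
    for fold in k_fold_indices(len(xs), k):
        held_out = set(fold)
        train_x = [x for i, x in enumerate(xs) if i not in held_out]
        train_y = [y for i, y in enumerate(ys) if i not in held_out]
        predict = fit(train_x, train_y)
        scores.append(mse([ys[i] for i in fold],
                          [predict(xs[i]) for i in fold]))
    return sum(scores) / len(scores)

def fit_mean(xs, ys):
    """Model 1: always predict the training mean (ignores x entirely)."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_line(xs, ys):
    """Model 2: least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda x: a * x + b

# Synthetic data with a strong linear structure plus small noise.
xs = [float(i) for i in range(30)]
ys = [2.0 * x + 1.0 + random.Random(i).gauss(0, 1) for i, x in enumerate(xs)]

# The linear model's assumption matches this data, so it scores better.
print(cross_validate(fit_line, xs, ys) < cross_validate(fit_mean, xs, ys))
```

The same `cross_validate` harness works for any model that follows the fit-then-predict convention, which is exactly how cross-validation is used in practice to pick among candidate models for a given problem.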

References:

http://link.springer.com/article/10.1023/A:1021251113462