Model evaluation is the process of assessing whether the machine learning algorithm you have trained is performing well. Performance is measured against metrics chosen for the task: for example, a model that detects cancer will have a different set of metrics and evaluation requirements than a model that predicts house prices.
Techniques for evaluation:
Train/Test split: This is the simplest evaluation method. The data is divided into a training set used to fit the model and a held-out test set used to measure its performance, which helps you point out overfitting or underfitting.
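A minimal sketch of a train/test split, assuming scikit-learn is available; the dataset here is synthetic, chosen only for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification data for the example.
X, y = make_classification(n_samples=200, random_state=0)

# Hold out 25% of the data as a test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
# A large gap between train_acc and test_acc suggests overfitting;
# low scores on both sets suggest underfitting.
```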
Cross validation: This involves splitting the data into k subsets, where each subset is used as a test set while the remaining k-1 subsets are used for training. This process is repeated k times, with each subset being used as the test set exactly once. The results are then averaged to give a more reliable estimate of the model's performance.
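The k-fold procedure above can be sketched with scikit-learn's `cross_val_score` (an assumed library choice, again on synthetic data):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

# cv=5 means 5 folds: the model is trained and tested 5 times,
# with each fold serving as the test set exactly once.
scores = cross_val_score(LogisticRegression(), X, y, cv=5)

# Averaging the fold scores gives the more reliable overall estimate.
mean_score = scores.mean()
```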
Metrics: These are chosen based on the type of machine learning task: classification algorithms use accuracy, precision, recall, F1-score, and ROC-AUC, while regression models use mean squared error (MSE), mean absolute error (MAE), and R-squared.
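A short sketch of computing both families of metrics with scikit-learn (assumed available); the labels and values are made up for illustration:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, mean_squared_error,
                             mean_absolute_error, r2_score)

# Classification metrics on toy labels.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]
acc = accuracy_score(y_true, y_pred)    # fraction of correct predictions
prec = precision_score(y_true, y_pred)  # of predicted positives, how many are real
rec = recall_score(y_true, y_pred)      # of real positives, how many were found
f1 = f1_score(y_true, y_pred)           # harmonic mean of precision and recall

# Regression metrics on toy values.
y_actual = [3.0, 5.0, 2.0]
y_est = [2.5, 5.0, 3.0]
mse = mean_squared_error(y_actual, y_est)   # average squared error
mae = mean_absolute_error(y_actual, y_est)  # average absolute error
r2 = r2_score(y_actual, y_est)              # variance explained by the model
```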
Learning Curves:These are plots that show how the model’s performance improves as the amount of training data increases. Learning curves can be used to identify if the model is overfitting or underfitting the data.
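Learning curves like these can be computed with scikit-learn's `learning_curve` (an assumed library choice; plotting is left as a comment to keep the sketch self-contained):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=300, random_state=0)

# Evaluate the model at 5 increasing fractions of the training data,
# using 5-fold cross validation at each size.
train_sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(), X, y, cv=5,
    train_sizes=np.linspace(0.2, 1.0, 5))

# Plot train_scores.mean(axis=1) and val_scores.mean(axis=1) against
# train_sizes: a persistent gap between the curves suggests overfitting,
# while two low, converged curves suggest underfitting.
```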
Confusion Matrix: This is a table that shows the number of correct and incorrect predictions made by the model compared to the actual outcomes.
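Such a table can be built with scikit-learn's `confusion_matrix` (assumed available); the labels below are made up for illustration:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]

# Rows are actual classes, columns are predicted classes.
# For binary labels the layout is:
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix(y_true, y_pred)
```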