In this episode, we explain the proper semantic interpretation of the Akaike Information Criterion (AIC) and the Generalized Akaike Information Criterion (GAIC) for selecting the best model for a given set of training data. We state the precise semantic interpretation of these model selection criteria, the explicit assumptions required for the AIC and GAIC to be valid, and explicit formulas so that both criteria can be computed in practice. Briefly, AIC and GAIC provide a way of estimating the average prediction error of your learning machine on test data without using test data or cross-validation methods. The GAIC is also called the Takeuchi Information Criterion (TIC).
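
As a rough illustration of how these criteria are computed in practice, here is a minimal Python sketch using the standard textbook formulas: AIC = -2 log-likelihood + 2k, and GAIC/TIC = -2 log-likelihood + 2 trace(B A^{-1}), where A is minus the average per-observation Hessian of the log-likelihood and B is the average outer product of the per-observation score vectors, both evaluated at the maximum-likelihood estimate. The Gaussian model, the simulated data, and all variable names are illustrative assumptions, not material from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=500)   # illustrative training data
n = x.size

# Maximum-likelihood estimates for a Gaussian model N(mu, s2)
mu = x.mean()
s2 = ((x - mu) ** 2).mean()

# Per-observation log-likelihoods at the MLE
loglik = -0.5 * np.log(2 * np.pi * s2) - (x - mu) ** 2 / (2 * s2)

# Per-observation score vectors (gradients w.r.t. (mu, s2)) at the MLE
g_mu = (x - mu) / s2
g_s2 = -1.0 / (2 * s2) + (x - mu) ** 2 / (2 * s2 ** 2)
scores = np.column_stack([g_mu, g_s2])          # shape (n, 2)

# A_hat: minus the average per-observation Hessian of the log-likelihood
h_mumu = -np.ones(n) / s2
h_mus2 = -(x - mu) / s2 ** 2
h_s2s2 = 1.0 / (2 * s2 ** 2) - (x - mu) ** 2 / s2 ** 3
A_hat = -np.mean([[h_mumu, h_mus2],
                  [h_mus2, h_s2s2]], axis=2)    # shape (2, 2)

# B_hat: average outer product of the per-observation scores
B_hat = scores.T @ scores / n

k = 2  # number of free parameters (mu, s2)
penalty = np.trace(B_hat @ np.linalg.inv(A_hat))
aic = -2 * loglik.sum() + 2 * k
gaic = -2 * loglik.sum() + 2 * penalty

print(f"AIC  = {aic:.2f}")
print(f"GAIC = {gaic:.2f}  (trace penalty term = {penalty:.2f})")
```

When the fitted model family actually contains the data-generating distribution, trace(B A^{-1}) is approximately equal to the parameter count k, so GAIC reduces to AIC; when the model is misspecified, the trace term corrects the complexity penalty.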
