Introduction to machine learning: What is learning, learning objectives, data requirements.
Bayesian inference and learning: Inference, naïve Bayes.
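A minimal sketch of naïve Bayes classification with Gaussian class-conditional densities (the function names and the variance-smoothing constant are illustrative, not part of the course material):

    import numpy as np

    def fit_gaussian_nb(X, y):
        # Estimate class priors and per-feature Gaussian parameters.
        params = {}
        for c in np.unique(y):
            Xc = X[y == c]
            params[c] = (len(Xc) / len(X),       # prior P(c)
                         Xc.mean(axis=0),        # per-feature means
                         Xc.var(axis=0) + 1e-9)  # per-feature variances (smoothed)
        return params

    def predict_gaussian_nb(params, x):
        # Choose the class maximizing log P(c) + sum_j log p(x_j | c).
        def log_posterior(c):
            prior, mu, var = params[c]
            return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        return max(params, key=log_posterior)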
The basic objective of learning: Assumption of nearness and contiguity in input spaces, accuracy, Bayesian risk and the casting of learning as Bayesian inference, the risk matrix, other cost measures
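For reference, the Bayes risk that this casting minimizes can be written (in standard notation, assumed here rather than taken from the syllabus) as

    R(h) = \mathbb{E}_{(x,y)}\big[L(h(x), y)\big]
         = \int \Big( \sum_{y} L(h(x), y)\, p(y \mid x) \Big) p(x)\, dx,

with the Bayes-optimal decision rule h^*(x) = \arg\min_{a} \sum_{y} L(a, y)\, p(y \mid x).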
Other issues in learning: Generalization and model complexity, Accuracy, Empirical risk and training, validation, and testing, Model complexity, Structural risk, number of free parameters vs. VC dimension, Bias-variance tradeoff, Curse of dimensionality, Training sample size requirement, Convergence and training time, Memory requirement, Introduction to online/incremental learning
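The bias-variance tradeoff listed above is usually stated for squared error; a standard decomposition (notation assumed) is

    \mathbb{E}\big[(y - \hat{f}(x))^2\big]
      = \big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2
      + \mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]
      + \sigma^2,

i.e. squared bias plus variance plus irreducible noise.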
Objective functions for classification, regression, and ranking
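For concreteness, representative objective functions for the three settings (standard textbook forms, not specific to this course) are

    squared error (regression):      L(y, \hat{y}) = (y - \hat{y})^2
    cross-entropy (classification):  L(y, \hat{p}) = -\sum_k y_k \log \hat{p}_k
    pairwise hinge (ranking):        L = \max\big(0,\; 1 - (f(x^+) - f(x^-))\big)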
Some supervised learning formulations: Linear regression and the LMS algorithm, Perceptron and logistic regression, Cybenko’s theorem for nonlinear function estimation, MLP and backpropagation, introduction to momentum and quasi-Newton methods, L1-norm penalty and sparsity, SVM, support vector regression, decision trees
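A minimal sketch of the LMS update for linear regression (the learning rate and epoch count are illustrative defaults):

    import numpy as np

    def lms(X, y, lr=0.01, epochs=100):
        # Stochastic gradient descent on squared error:
        # w <- w + lr * (y_i - w . x_i) * x_i for each example in turn.
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for x_i, y_i in zip(X, y):
                w += lr * (y_i - w @ x_i) * x_i
        return w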
Kernelization of linear problems: RBF kernels, increase in dimensionality through simple kernels, kernel definition and Mercer’s theorem, kernelized SVM and SVR, other applications of kernelization, matching a kernel to a problem
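A sketch of the RBF kernel and the kernelized decision function used by SVM and SVR; here gamma, the dual coefficients alpha, and the bias b are assumed to come from an already-solved training problem:

    import numpy as np

    def rbf_kernel(x, z, gamma=1.0):
        # k(x, z) = exp(-gamma * ||x - z||^2), a valid Mercer kernel.
        return np.exp(-gamma * np.sum((x - z) ** 2))

    def kernel_decision(x, support_vectors, alpha, b, gamma=1.0):
        # f(x) = sum_i alpha_i * k(x_i, x) + b, with alpha_i absorbing the labels y_i.
        return sum(a * rbf_kernel(sv, x, gamma)
                   for a, sv in zip(alpha, support_vectors)) + b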
Role of randomization and model combination: Committees and random forests, boosting, cascades of classifiers
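A minimal sketch of model combination by a bootstrap committee with majority vote (the fit_base function and the predict interface of its returned models are assumptions for illustration):

    import numpy as np

    def bagged_committee(X, y, fit_base, n_models=25, seed=0):
        # Train each member on a bootstrap resample of the training set.
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_models):
            idx = rng.integers(0, len(X), size=len(X))
            models.append(fit_base(X[idx], y[idx]))
        return models

    def committee_predict(models, x):
        # Combine members by majority vote over their predicted labels.
        votes = [m.predict(x) for m in models]
        return max(set(votes), key=votes.count)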
Some unsupervised learning machines: Clustering criteria, K-means, Fuzzy C-means, DBSCAN, PDF estimation, Parzen windows, the EM algorithm for a mixture of Gaussians
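A minimal sketch of K-means (Lloyd's algorithm); the random initialization and fixed iteration count are illustrative choices:

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # Assignment step: each point goes to its nearest center.
            dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            labels = dists.argmin(axis=1)
            # Update step: each center moves to the mean of its assigned points.
            centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return centers, labels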
Optional topics: Manifold learning, Kernel-PCA, semi-supervised learning, introduction to generative and probabilistic graphical models