User:Mathurin.ache/Books/Machine Learningv2

Source: Wikipedia, the free encyclopedia.


Machine Learning

Introduction and Main Principles
Machine learning
Data analysis
Occam's razor
Curse of dimensionality
No free lunch theorem
Accuracy paradox
Overfitting
Regularization (machine learning)
Inductive bias
Data dredging
Ugly duckling theorem
Uncertain data
Background and Preliminaries
Knowledge Discovery in Databases
Knowledge discovery
Data mining
Predictive analytics
Predictive modelling
Business intelligence
Reactive business intelligence
Business analytics
Pattern recognition
Statistics
Exploratory data analysis
Covariate
Statistical inference
Algorithmic inference
Bayesian inference
Base rate
Bias (statistics)
Gibbs sampling
Cross-entropy method
Latent variable
Maximum likelihood
Maximum a posteriori estimation
Expectation–maximization algorithm
Expectation propagation
Kullback–Leibler divergence
Generative model
Main Learning Paradigms
Supervised learning
Unsupervised learning
Active learning (machine learning)
Reinforcement learning
Multi-task learning
Transduction
Explanation-based learning
Offline learning
Online learning model
Online machine learning
Hyperparameter optimization
Classification Tasks
Classification in machine learning
Concept class
Features (pattern recognition)
Feature vector
Feature space
Concept learning
Binary classification
Decision boundary
Multiclass classification
Class membership probabilities
Calibration (statistics)
Concept drift
Prior knowledge for pattern recognition
Online Learning
Margin Infused Relaxed Algorithm
Semi-supervised learning
One-class classification
Coupled pattern learner
Lazy Learning and Nearest Neighbors
Lazy learning
Eager learning
Instance-based learning
Cluster assumption
K-nearest neighbor algorithm
IDistance
Large margin nearest neighbor
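The k-nearest neighbor algorithm listed above is the canonical lazy learner: no model is fit in advance, and each query is classified by a majority vote among its nearest training points. A minimal sketch (illustrative only; the function name is hypothetical):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance. Lazy learning:
    all work is deferred to prediction time."""
    # train: list of (feature_vector, label) pairs
    dists = sorted((math.dist(x, query), y) for x, y in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

For example, with training points `[((0, 0), 'a'), ((1, 0), 'a'), ((5, 5), 'b'), ((6, 5), 'b')]`, the query `(0.5, 0)` is labeled `'a'`, since its three nearest neighbors vote 2–1 for that class.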
Decision Trees
Linear Classifiers
Statistical classification
Evaluation of Classification Models
Data classification (business intelligence)
Training set
Test set
Synthetic data
Cross-validation (statistics)
Loss function
Hinge loss
Generalization error
Type I and type II errors
Sensitivity and specificity
Precision and recall
F1 score
Confusion matrix
Matthews correlation coefficient
Receiver operating characteristic
Lift (data mining)
Stability in learning
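Several of the evaluation measures above (type I and type II errors, sensitivity and specificity, precision and recall, F1 score, confusion matrix) derive from the same four counts. A sketch for the binary case (illustrative; the function name is hypothetical):

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix counts and derived scores for binary
    labels (1 = positive, 0 = negative)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # type I error
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # type II error
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0         # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn,
            "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}
```

The F1 score is the harmonic mean of precision and recall, so it is high only when both are.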
Feature Selection and Feature Extraction
Data Pre-processing
Discretization of continuous features
Feature selection
Feature extraction
Dimension reduction
Principal component analysis
Multilinear principal-component analysis
Multifactor dimensionality reduction
Targeted projection pursuit
Multidimensional scaling
Nonlinear dimensionality reduction
Kernel principal component analysis
Kernel eigenvoice
Gramian matrix
Gaussian process
Kernel adaptive filter
Isomap
Manifold alignment
Diffusion map
Elastic map
Locality-sensitive hashing
Spectral clustering
Minimum redundancy feature selection
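Principal component analysis, the first dimension-reduction method listed above, projects centered data onto the top eigenvectors of its covariance matrix. A minimal sketch using NumPy (illustrative; the function name is hypothetical):

```python
import numpy as np

def pca(X, n_components=2):
    """Project centered data onto the leading principal components
    (eigenvectors of the sample covariance matrix)."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]       # largest variance first
    components = eigvecs[:, order[:n_components]]
    return Xc @ components                  # reduced coordinates
```

Kernel PCA, also listed above, follows the same recipe but eigendecomposes a centered Gramian (kernel) matrix instead of the covariance matrix, giving a nonlinear dimensionality reduction.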
Clustering
Rule Induction
Association Rules and Frequent Item Sets
Ensemble Learning
Ensemble learning
Ensemble averaging
Consensus clustering
AdaBoost
Boosting
Bootstrap aggregating
BrownBoost
Cascading classifiers
Co-training
CoBoosting
Gaussian process emulator
Gradient boosting
LogitBoost
LPBoost
Mixture model
Product of Experts
Random multinomial logit
Random subspace method
Weighted Majority Algorithm
Randomized weighted majority algorithm
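The Weighted Majority Algorithm listed above combines expert predictions by weighted vote and shrinks the weight of every expert that errs. A minimal sketch of the deterministic variant (illustrative; the function name is hypothetical):

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Weighted Majority Algorithm: maintain one weight per expert,
    predict by weighted vote over {0, 1}, then multiply the weight
    of each mistaken expert by beta once the outcome is revealed."""
    n = len(expert_preds[0])       # number of experts
    weights = [1.0] * n
    mistakes = 0
    for preds, truth in zip(expert_preds, outcomes):
        vote_1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote_1 >= vote_0 else 0
        if guess != truth:
            mistakes += 1
        weights = [w * beta if p != truth else w
                   for w, p in zip(weights, preds)]
    return weights, mistakes
```

The randomized variant instead predicts 1 with probability proportional to `vote_1`, which improves the worst-case mistake bound.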