Machine Learning: The Art and Science of Algorithms that Make Sense of Data, 1st edition, by Peter Flach

Product details:
ISBN 10: 1139575416
ISBN 13: 9781139575416
Author: Peter Flach
As one of the most comprehensive machine learning texts around, this book does justice to the field’s incredible richness, but without losing sight of the unifying principles. Peter Flach’s clear, example-based approach begins by discussing how a spam filter works, which gives an immediate introduction to machine learning in action, with a minimum of technical fuss. Flach provides case studies of increasing complexity and variety with well-chosen examples and illustrations throughout. He covers a wide range of logical, geometric and statistical models and state-of-the-art topics such as matrix factorisation and ROC analysis. Particular attention is paid to the central role played by features. The use of established terminology is balanced with the introduction of new and useful concepts, and summaries of relevant background material are provided with pointers for revision if necessary. These features ensure Machine Learning will set a new standard as an introductory textbook.
Table of contents:
CHAPTER 1 The ingredients of machine learning
1.1 Tasks: the problems that can be solved with machine learning
Looking for structure
Evaluating performance on a task
1.2 Models: the output of machine learning
Geometric models
Probabilistic models
Logical models
Grouping and grading
1.3 Features: the workhorses of machine learning
Two uses of features
Feature construction and transformation
Interaction between features
1.4 Summary and outlook
What you’ll find in the rest of the book
CHAPTER 2 Binary classification and related tasks
2.1 Classification
Assessing classification performance
Visualising classification performance
2.2 Scoring and ranking
Assessing and visualising ranking performance
Turning rankers into classifiers
2.3 Class probability estimation
Assessing class probability estimates
Turning rankers into class probability estimators
2.4 Binary classification and related tasks: Summary and further reading
CHAPTER 3 Beyond binary classification
3.1 Handling more than two classes
Multi-class classification
Multi-class scores and probabilities
3.2 Regression
3.3 Unsupervised and descriptive learning
Predictive and descriptive clustering
Other descriptive models
3.4 Beyond binary classification: Summary and further reading
CHAPTER 4 Concept learning
4.1 The hypothesis space
Least general generalisation
Internal disjunction
4.2 Paths through the hypothesis space
Most general consistent hypotheses
Closed concepts
4.3 Beyond conjunctive concepts
Using first-order logic
4.4 Learnability
4.5 Concept learning: Summary and further reading
CHAPTER 5 Tree models
5.1 Decision trees
5.2 Ranking and probability estimation trees
Sensitivity to skewed class distributions
5.3 Tree learning as variance reduction
Regression trees
Clustering trees
5.4 Tree models: Summary and further reading
CHAPTER 6 Rule models
6.1 Learning ordered rule lists
Rule lists for ranking and probability estimation
6.2 Learning unordered rule sets
Rule sets for ranking and probability estimation
A closer look at rule overlap
6.3 Descriptive rule learning
Rule learning for subgroup discovery
Association rule mining
6.4 First-order rule learning
6.5 Rule models: Summary and further reading
CHAPTER 7 Linear models
7.1 The least-squares method
Multivariate linear regression
Regularised regression
Using least-squares regression for classification
7.2 The perceptron
7.3 Support vector machines
Soft margin SVM
7.4 Obtaining probabilities from linear classifiers
7.5 Going beyond linearity with kernel methods
7.6 Linear models: Summary and further reading
CHAPTER 8 Distance-based models
8.1 So many roads...
8.2 Neighbours and exemplars
8.3 Nearest-neighbour classification
8.4 Distance-based clustering
K-means algorithm
Clustering around medoids
Silhouettes
8.5 Hierarchical clustering
8.6 From kernels to distances
8.7 Distance-based models: Summary and further reading
CHAPTER 9 Probabilistic models
9.1 The normal distribution and its geometric interpretations
9.2 Probabilistic models for categorical data
Using a naive Bayes model for classification
Training a naive Bayes model
9.3 Discriminative learning by optimising conditional likelihood
9.4 Probabilistic models with hidden variables
Expectation-Maximisation
Gaussian mixture models
9.5 Compression-based models
9.6 Probabilistic models: Summary and further reading
CHAPTER 10 Features
10.1 Kinds of feature
Calculations on features
Categorical, ordinal and quantitative features
Structured features
10.2 Feature transformations
Thresholding and discretisation
Normalisation and calibration
Incomplete features
10.3 Feature construction and selection
Matrix transformations and decompositions
10.4 Features: Summary and further reading
CHAPTER 11 Model ensembles
11.1 Bagging and random forests
11.2 Boosting
Boosted rule learning
11.3 Mapping the ensemble landscape
Bias, variance and margins
Other ensemble methods
Meta-learning
11.4 Model ensembles: Summary and further reading
CHAPTER 12 Machine learning experiments
12.1 What to measure
12.2 How to measure it
12.3 How to interpret it
Interpretation of results over multiple data sets
12.4 Machine learning experiments: Summary and further reading
Epilogue: Where to go from here
Important points to remember
References
Index