Probabilistic Robotics 1st edition by Sebastian Thrun, Wolfram Burgard, Dieter Fox – Ebook PDF Instant Download/Delivery: 0262201623, 978-0262201629
Full download of Probabilistic Robotics, 1st edition, is available after payment.

Product details:
ISBN 10: 0262201623
ISBN 13: 978-0262201629
Authors: Sebastian Thrun, Wolfram Burgard, Dieter Fox
An introduction to the techniques and algorithms of the newest field in robotics.
Probabilistic robotics is a new and growing area in robotics, concerned with perception and control in the face of uncertainty. Building on the field of mathematical statistics, probabilistic robotics endows robots with a new level of robustness in real-world situations. This book introduces the reader to a wealth of techniques and algorithms in the field. All algorithms are based on a single overarching mathematical foundation. Each chapter provides example implementations in pseudo code, detailed mathematical derivations, discussions from a practitioner’s perspective, and extensive lists of exercises and class projects. The book’s Web site, www.probabilistic-robotics.org, has additional material. The book is relevant for anyone involved in robotic software development and scientific research. It will also be of interest to applied statisticians and engineers dealing with real-world sensor data.
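The "single overarching mathematical foundation" mentioned above is the recursive Bayes filter introduced in Chapter 2. As a rough illustration only (this sketch is not taken from the book), the following Python function shows one prediction/correction step of a discrete Bayes filter; the callables motion_model and measurement_model are hypothetical placeholders the caller would supply.

# Illustrative sketch of one discrete Bayes filter step (not from the book).
# motion_model(x_next, u, x): probability of reaching x_next from x under control u.
# measurement_model(z, x): likelihood of measurement z in state x.
def bayes_filter_update(belief, u, z, states, motion_model, measurement_model):
    # Prediction: incorporate the control u into the prior belief.
    predicted = {
        x_next: sum(motion_model(x_next, u, x) * belief[x] for x in states)
        for x_next in states
    }
    # Correction: incorporate the measurement z, then normalize.
    posterior = {x: measurement_model(z, x) * predicted[x] for x in states}
    eta = sum(posterior.values())
    return {x: p / eta for x, p in posterior.items()}

Chapters 3 and 4 then develop the Gaussian and nonparametric filters listed below as concrete realizations of this same update.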
Probabilistic Robotics, 1st Edition, Table of Contents:
I. Basics
1. Introduction
1.1. Uncertainty in Robotics
1.2. Probabilistic Robotics
1.3. Implications
1.4. Road Map
1.5. Teaching Probabilistic Robotics
1.6. Bibliographical Remarks
2. Recursive State Estimation
2.1. Introduction
2.2. Basic Concepts in Probability
2.3. Robot Environment Interaction
2.3.1. State
2.3.2. Environment Interaction
2.3.3. Probabilistic Generative Laws
2.3.4. Belief Distributions
2.4. Bayes Filters
2.4.1. The Bayes Filter Algorithm
2.4.2. Example
2.4.3. Mathematical Derivation of the Bayes Filter
2.4.4. The Markov Assumption
2.5. Representation and Computation
2.6. Summary
2.7. Bibliographical Remarks
2.8. Exercises
3. Gaussian Filters
3.1. Introduction
3.2. The Kalman Filter
3.2.1. Linear Gaussian Systems
3.2.2. The Kalman Filter Algorithm
3.2.3. Illustration
3.2.4. Mathematical Derivation of the KF
3.3. The Extended Kalman Filter
3.3.1. Why Linearize?
3.3.2. Linearization Via Taylor Expansion
3.3.3. The EKF Algorithm
3.3.4. Mathematical Derivation of the EKF
3.3.5. Practical Considerations
3.4. The Unscented Kalman Filter
3.4.1. Linearization Via the Unscented Transform
3.4.2. The UKF Algorithm
3.5. The Information Filter
3.5.1. Canonical Parameterization
3.5.2. The Information Filter Algorithm
3.5.3. Mathematical Derivation of the Information Filter
3.5.4. The Extended Information Filter Algorithm
3.5.5. Mathematical Derivation of the Extended Information Filter
3.5.6. Practical Considerations
3.6. Summary
3.7. Bibliographical Remarks
3.8. Exercises
4. Nonparametric Filters
4.1. The Histogram Filter
4.1.1. The Discrete Bayes Filter Algorithm
4.1.2. Continuous State
4.1.3. Mathematical Derivation of the Histogram Approximation
4.1.4. Decomposition Techniques
4.2. Binary Bayes Filters with Static State
4.3. The Particle Filter
4.3.1. Basic Algorithm
4.3.2. Importance Sampling
4.3.3. Mathematical Derivation of the PF
4.3.4. Practical Considerations and Properties of Particle Filters
4.4. Summary
4.5. Bibliographical Remarks
4.6. Exercises
5. Robot Motion
5.1. Introduction
5.2. Preliminaries
5.2.1. Kinematic Configuration
5.2.2. Probabilistic Kinematics
5.3. Velocity Motion Model
5.3.1. Closed Form Calculation
5.3.2. Sampling Algorithm
5.3.3. Mathematical Derivation of the Velocity Motion Model
5.4. Odometry Motion Model
5.4.1. Closed Form Calculation
5.4.2. Sampling Algorithm
5.4.3. Mathematical Derivation of the Odometry Motion Model
5.5. Motion and Maps
5.6. Summary
5.7. Bibliographical Remarks
5.8. Exercises
6. Robot Perception
6.1. Introduction
6.2. Maps
6.3. Beam Models of Range Finders
6.3.1. The Basic Measurement Algorithm
6.3.2. Adjusting the Intrinsic Model Parameters
6.3.3. Mathematical Derivation of the Beam Model
6.3.4. Practical Considerations
6.3.5. Limitations of the Beam Model
6.4. Likelihood Fields for Range Finders
6.4.1. Basic Algorithm
6.4.2. Extensions
6.5. Correlation-Based Measurement Models
6.6. Feature-Based Measurement Models
6.6.1. Feature Extraction
6.6.2. Landmark Measurements
6.6.3. Sensor Model with Known Correspondence
6.6.4. Sampling Poses
6.6.5. Further Considerations
6.7. Practical Considerations
6.8. Summary
6.9. Bibliographical Remarks
6.10. Exercises
II. Localization
7. Mobile Robot Localization: Markov and Gaussian
7.1. A Taxonomy of Localization Problems
7.2. Markov Localization
7.3. Illustration of Markov Localization
7.4. EKF Localization
7.4.1. Illustration
7.4.2. The EKF Localization Algorithm
7.4.3. Mathematical Derivation of EKF Localization
7.4.4. Physical Implementation
7.5. Estimating Correspondences
7.5.1. EKF Localization with Unknown Correspondences
7.5.2. Mathematical Derivation of the ML Data Association
7.6. Multi-Hypothesis Tracking
7.7. UKF Localization
7.7.1. Mathematical Derivation of UKF Localization
7.7.2. Illustration
7.8. Practical Considerations
7.9. Summary
7.10. Bibliographical Remarks
7.11. Exercises
8. Mobile Robot Localization: Grid and Monte Carlo
8.1. Introduction
8.2. Grid Localization
8.2.1. Basic Algorithm
8.2.2. Grid Resolutions
8.2.3. Computational Considerations
8.2.4. Illustration
8.3. Monte Carlo Localization
8.3.1. Illustration
8.3.2. The MCL Algorithm
8.3.3. Physical Implementations
8.3.4. Properties of MCL
8.3.5. Random Particle MCL: Recovery from Failures
8.3.6. Modifying the Proposal Distribution
8.3.7. KLD-Sampling: Adapting the Size of Sample Sets
8.4. Localization in Dynamic Environments
8.5. Practical Considerations
8.6. Summary
8.7. Bibliographical Remarks
8.8. Exercises
III. Mapping
9. Occupancy Grid Mapping
9.1. Introduction
9.2. The Occupancy Grid Mapping Algorithm
9.2.1. Multi-Sensor Fusion
9.3. Learning Inverse Measurement Models
9.3.1. Inverting the Measurement Model
9.3.2. Sampling from the Forward Model
9.3.3. The Error Function
9.3.4. Examples and Further Considerations
9.4. Maximum A Posteriori Occupancy Mapping
9.4.1. The Case for Maintaining Dependencies
9.4.2. Occupancy Grid Mapping with Forward Models
9.5. Summary
9.6. Bibliographical Remarks
9.7. Exercises
10. Simultaneous Localization and Mapping
10.1. Introduction
10.2. SLAM with Extended Kalman Filters
10.2.1. Setup and Assumptions
10.2.2. SLAM with Known Correspondence
10.2.3. Mathematical Derivation of EKF SLAM
10.3. EKF SLAM with Unknown Correspondences
10.3.1. The General EKF SLAM Algorithm
10.3.2. Examples
10.3.3. Feature Selection and Map Management
10.4. Summary
10.5. Bibliographical Remarks
10.6. Exercises
11. The GraphSLAM Algorithm
11.1. Introduction
11.2. Intuitive Description
11.2.1. Building Up the Graph
11.2.2. Inference
11.3. The GraphSLAM Algorithm
11.4. Mathematical Derivation of GraphSLAM
11.4.1. The Full SLAM Posterior
11.4.2. The Negative Log Posterior
11.4.3. Taylor Expansion
11.4.4. Constructing the Information Form
11.4.5. Reducing the Information Form
11.4.6. Recovering the Path and the Map
11.5. Data Association in GraphSLAM
11.5.1. The GraphSLAM Algorithm with Unknown Correspondence
11.5.2. Mathematical Derivation of the Correspondence Test
11.6. Efficiency Consideration
11.7. Empirical Implementation
11.8. Alternative Optimization Techniques
11.9. Summary
11.10. Bibliographical Remarks
11.11. Exercises
12. The Sparse Extended Information Filter
12.1. Introduction
12.2. Intuitive Description
12.3. The SEIF SLAM Algorithm
12.4. Mathematical Derivation of the SEIF
12.4.1. Motion Update
12.4.2. Measurement Updates
12.5. Sparsification
12.5.1. General Idea
12.5.2. Sparsification in SEIFs
12.5.3. Mathematical Derivation of the Sparsification
12.6. Amortized Approximate Map Recovery
12.7. How Sparse Should SEIFs Be?
12.8. Incremental Data Association
12.8.1. Computing Incremental Data Association Probabilities
12.8.2. Practical Considerations
12.9. Branch-and-Bound Data Association
12.9.1. Recursive Search
12.9.2. Computing Arbitrary Data Association Probabilities
12.9.3. Equivalence Constraints
12.10. Practical Considerations
12.11. Multi-Robot SLAM
12.11.1. Integrating Maps
12.11.2. Mathematical Derivation of Map Integration
12.11.3. Establishing Correspondence
12.11.4. Example
12.12. Summary
12.13. Bibliographical Remarks
12.14. Exercises
13. The FastSLAM Algorithm
13.1. The Basic Algorithm
13.2. Factoring the SLAM Posterior
13.2.1. Mathematical Derivation of the Factored SLAM Posterior
13.3. FastSLAM with Known Data Association
13.4. Improving the Proposal Distribution
13.4.1. Extending the Path Posterior by Sampling a New Pose
13.4.2. Updating the Observed Feature Estimate
13.4.3. Calculating Importance Factors
13.5. Unknown Data Association
13.6. Map Management
13.7. The FastSLAM Algorithms
13.8. Efficient Implementation
13.9. FastSLAM for Feature-Based Maps
13.9.1. Empirical Insights
13.9.2. Loop Closure
13.10. Grid-based FastSLAM
13.10.1. The Algorithm
13.10.2. Empirical Insights
13.11. Summary
13.12. Bibliographical Remarks
13.13. Exercises
IV. Planning and Control
14. Markov Decision Processes
14.1. Motivation
14.2. Uncertainty in Action Selection
14.3. Value Iteration
14.3.1. Goals and Payoff
14.3.2. Finding Optimal Control Policies for the Fully Observable Case
14.3.3. Computing the Value Function
14.4. Application to Robot Control
14.5. Summary
14.6. Bibliographical Remarks
14.7. Exercises
15. Partially Observable Markov Decision Processes
15.1. Motivation
15.2. An Illustrative Example
15.2.1. Setup
15.2.2. Control Choice
15.2.3. Sensing
15.2.4. Prediction
15.2.5. Deep Horizons and Pruning
15.3. The Finite World POMDP Algorithm
15.4. Mathematical Derivation of POMDPs
15.4.1. Value Iteration in Belief Space
15.4.2. Value Function Representation
15.4.3. Calculating the Value Function
15.5. Practical Considerations
15.6. Summary
15.7. Bibliographical Remarks
15.8. Exercises
16. Approximate POMDP Techniques
16.1. Motivation
16.2. QMDPs
16.3. Augmented Markov Decision Processes
16.3.1. The Augmented State Space
16.3.2. The AMDP Algorithm
16.3.3. Mathematical Derivation of AMDPs
16.3.4. Application to Mobile Robot Navigation
16.4. Monte Carlo POMDPs
16.4.1. Using Particle Sets
16.4.2. The MC-POMDP Algorithm
16.4.3. Mathematical Derivation of MC-POMDPs
16.4.4. Practical Considerations
16.5. Summary
16.6. Bibliographical Remarks
16.7. Exercises
17. Exploration
17.1. Introduction
17.2. Basic Exploration Algorithms
17.2.1. Information Gain
17.2.2. Greedy Techniques
17.2.3. Monte Carlo Exploration
17.2.4. Multi-Step Techniques
17.3. Active Localization
17.4. Exploration for Learning Occupancy Grid Maps
17.4.1. Computing Information Gain
17.4.2. Propagating Gain
17.4.3. Extension to Multi-Robot Systems
17.5. Exploration for SLAM
17.5.1. Entropy Decomposition in SLAM
17.5.2. Exploration in FastSLAM
17.5.3. Empirical Characterization
17.6. Summary
17.7. Bibliographical Remarks
17.8. Exercises
Bibliography
Index