Matrix Algebra for Linear Models, 1st Edition, by Marvin Gruber (ebook PDF)

Product details:
ISBN 10: 1118592557
ISBN 13: 978-1118592557
Author: Marvin Gruber
A self-contained introduction to matrix analysis theory and applications in the field of statistics
Comprehensive in scope, Matrix Algebra for Linear Models offers a succinct summary of matrix theory and its related applications to statistics, especially linear models. The book provides a unified presentation of the mathematical properties and statistical applications of matrices in order to define and manipulate data.
Written for theoretical and applied statisticians, the book uses multiple numerical examples to illustrate key ideas, methods, and techniques crucial to understanding matrix algebra's application in linear models. Matrix Algebra for Linear Models balances concepts and methods, allowing for a side-by-side presentation of matrix theory and its linear model applications. Along with concise summaries of each topic, the book also features:
- Methods of deriving results from the properties of eigenvalues and the singular value decomposition
- Solutions to matrix optimization problems for obtaining more efficient biased estimators for parameters in linear regression models
- A section on the generalized singular value decomposition
- Multiple chapter exercises with selected answers to enhance understanding of the presented material
Matrix Algebra for Linear Models is an ideal textbook for advanced undergraduate and graduate-level courses on statistics, matrices, and linear algebra. The book is also an excellent reference for statisticians, engineers, economists, and readers interested in the linear statistical model.
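To give a flavor of the kind of computation the book treats, the ordinary least square estimator can be computed stably through the singular value decomposition X = UΣVᵀ, giving β̂ = VΣ⁻¹Uᵀy. The sketch below is purely illustrative, with invented data, and is not code from the book:

```python
import numpy as np

# Hypothetical data: a small full-column-rank design matrix X and response y.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.standard_normal(10)

# Least squares via the SVD: X = U diag(s) V^T, so
# beta_hat = V diag(1/s) U^T y  (valid when X has full column rank).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
beta_hat = Vt.T @ ((U.T @ y) / s)

# Agrees with the normal-equations solution (X^T X)^{-1} X^T y.
beta_ne = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(beta_hat, beta_ne)
```

The SVD route avoids forming XᵀX explicitly, which is why it is the standard numerically stable approach when X is ill-conditioned.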
Matrix Algebra for Linear Models, 1st Edition, Table of Contents:
Part I: Basic Ideas about Matrices and Systems of Linear Equations
Section 1: What Matrices Are and Some Basic Operations with Them
- Introduction
- What Are Matrices and Why Are They Interesting to a Statistician?
- Matrix Notation, Addition, and Multiplication
- Summary
- Exercises
Section 2: Determinants and Solving a System of Equations
- Introduction
- Definition of and Formulae for Expanding Determinants
- Some Computational Tricks for the Evaluation of Determinants
- Solution to Linear Equations Using Determinants
- Gauss Elimination
- Summary
- Exercises
Section 3: The Inverse of a Matrix
- Introduction
- The Adjoint Method of Finding the Inverse of a Matrix
- Using Elementary Row Operations
- Using the Matrix Inverse to Solve a System of Equations
- Partitioned Matrices and Their Inverses
- Finding the Least Square Estimator
- Summary
- Exercises
Section 4: Special Matrices and Facts about Matrices That Will Be Used in the Sequel
- Introduction
- Matrices of the Form aI_n + bJ_n
- Orthogonal Matrices
- Direct Product of Matrices
- An Important Property of Determinants
- The Trace of a Matrix
- Matrix Differentiation
- The Least Square Estimator Again
- Summary
- Exercises
Section 5: Vector Spaces
- Introduction
- What Is a Vector Space?
- The Dimension of a Vector Space
- Inner Product Spaces
- Linear Transformations
- Summary
- Exercises
Section 6: The Rank of a Matrix and Solutions to Systems of Equations
- Introduction
- The Rank of a Matrix
- Solving Systems of Equations with a Coefficient Matrix of Less than Full Rank
- Summary
- Exercises
Part II: Eigenvalues, the Singular Value Decomposition, and Principal Components
Section 7: Finding the Eigenvalues of a Matrix
- Introduction
- Eigenvalues and Eigenvectors of a Matrix
- Nonnegative Definite Matrices
- Summary
- Exercises
Section 8: The Eigenvalues and Eigenvectors of Special Matrices
- Introduction
- Orthogonal, Nonsingular, and Idempotent Matrices
- The Cayley–Hamilton Theorem
- The Relationship between the Trace, the Determinant, and the Eigenvalues of a Matrix
- The Eigenvalues and Eigenvectors of the Kronecker Product of Two Matrices
- The Eigenvalues and Eigenvectors of a Matrix of the Form aI + bJ
- The Loewner Ordering
- Summary
- Exercises
Section 9: The Singular Value Decomposition (SVD)
- Introduction
- The Existence of the SVD
- Uses and Examples of the SVD
- Summary
- Exercises
Section 10: Applications of the Singular Value Decomposition
- Introduction
- Reparameterization of a Non-Full-Rank Model to a Full-Rank Model
- Principal Components
- The Multicollinearity Problem
- Summary
- Exercises
Section 11: Relative Eigenvalues and Generalizations of the Singular Value Decomposition
- Introduction
- Relative Eigenvalues and Eigenvectors
- Generalizations of the Singular Value Decomposition: Overview
- The First Generalization
- The Second Generalization
- Summary
- Exercises
Part III: Generalized Inverses
Section 12: Basic Ideas about Generalized Inverses
- Introduction
- What Is a Generalized Inverse and How Is One Obtained?
- The Moore–Penrose Inverse
- Summary
- Exercises
Section 13: Characterizations of Generalized Inverses Using the Singular Value Decomposition
- Introduction
- Characterization of the Moore–Penrose Inverse
- Generalized Inverses in Terms of the Moore–Penrose Inverse
- Summary
- Exercises
Section 14: Least Square and Minimum Norm Generalized Inverses
- Introduction
- Minimum Norm Generalized Inverses
- Least Square Generalized Inverses
- An Extension of the Theorem to Positive Semidefinite Matrices
- Summary
- Exercises
Section 15: More Representations of Generalized Inverses
- Introduction
- Another Characterization of the Moore–Penrose Inverse
- Still Another Representation of the Generalized Inverse
- The Generalized Inverse of a Partitioned Matrix
- Summary
- Exercises
Section 16: Least Square Estimators for Less than Full-Rank Models
- Introduction
- Some Preliminaries
- Obtaining the Least Square Estimator
- Summary
- Exercises
Part IV: Quadratic Forms and the Analysis of Variance
Section 17: Quadratic Forms and Their Probability Distributions
- Introduction
- Examples of Quadratic Forms
- The Chi-Square Distribution
- When Does the Quadratic Form of a Random Variable Have a Chi-Square Distribution?
- When Are Two Quadratic Forms with the Chi-Square Distribution Independent?
- Summary
- Exercises
Section 18: Analysis of Variance: Regression Models and the One- and Two-Way Classification
- Introduction
- The Full-Rank General Linear Regression Model
- Analysis of Variance: One-Way Classification
- Analysis of Variance: Two-Way Classification
- Summary
- Exercises
Section 19: More ANOVA
- Introduction
- The Two-Way Classification with Interaction
- The Two-Way Classification with One Factor Nested
- Summary
- Exercises
Section 20: The General Linear Hypothesis
- Introduction
- The Full-Rank Case
- The Non-Full-Rank Case
- Contrasts
- Summary
- Exercises
Part V: Matrix Optimization Problems
Section 21: Unconstrained Optimization Problems
- Introduction
- Unconstrained Optimization Problems
- The Least Square Estimator Again
- Summary
- Exercises
Section 22: Constrained Minimization Problems with Linear Constraints
- Introduction
- An Overview of Lagrange Multipliers
- Minimizing a Second-Degree Form with Respect to a Linear Constraint
- The Constrained Least Square Estimator
- Canonical Correlation
- Summary
- Exercises
Section 23: The Gauss–Markov Theorem
- Introduction
- The Gauss–Markov Theorem and the Least Square Estimator
- The Modified Gauss–Markov Theorem and the Linear Bayes Estimator
- Summary
- Exercises
Section 24: Ridge Regression-Type Estimators
- Introduction
- Minimizing a Second-Degree Form with Respect to a Quadratic Constraint
- The Generalized Ridge Regression Estimators
- The Mean Square Error of the Generalized Ridge Estimator without Averaging over the Prior Distribution
- The Mean Square Error Averaging over the Prior Distribution
- Summary
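The ridge-type estimators of Section 24 arise from minimizing a second-degree form subject to a quadratic constraint, yielding β̂(k) = (XᵀX + kI)⁻¹Xᵀy. A minimal sketch with made-up data (not code from the book; the function name and parameter k are invented here for illustration):

```python
import numpy as np

# Ridge regression estimator: beta(k) = (X^T X + k I)^{-1} X^T y.
# k is the ridge (shrinkage) parameter; k = 0 recovers least squares.
def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 4))
y = X @ np.array([2.0, 0.0, -1.0, 3.0]) + rng.standard_normal(20)

b_ols = ridge(X, y, 0.0)      # k = 0: ordinary least squares
b_ridge = ridge(X, y, 5.0)    # k > 0: coefficients are shrunk toward zero
assert np.linalg.norm(b_ridge) < np.linalg.norm(b_ols)
```

The final assertion reflects the general fact that the norm of the ridge estimator is strictly decreasing in k whenever the least squares estimator is nonzero, which is the sense in which ridge trades bias for reduced variance.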


