Numerical Methods in Matrix Computations, 1st Edition, by Åke Björck (eBook PDF)

Product details:
ASIN: B00PUM5OX6
ISBN-13: 978-3319050898
Author: Åke Björck
Matrix algorithms are at the core of scientific computing and are indispensable tools in most engineering applications. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation, using a unified approach to direct and iterative methods for linear systems, least squares problems, and eigenvalue problems. The stability, accuracy, and complexity of the treated methods are analyzed thoroughly.
Numerical Methods in Matrix Computations is suitable for courses on scientific computing and applied technical areas at the advanced undergraduate and graduate levels. A large bibliography is provided, including historical and review papers as well as recent research papers, which also makes the book useful as a reference and a guide to further study and research.
Numerical Methods in Matrix Computations, 1st Edition — Table of Contents:
1 Direct Methods for Linear Systems
1.1 Elements of Matrix Theory
1.1.1 Matrix Algebra
1.1.2 Vector Spaces
1.1.3 Submatrices and Block Matrices
1.1.4 Operation Counts in Matrix Algorithms
1.1.5 Permutations and Determinants
1.1.6 The Schur Complement
1.1.7 Vector and Matrix Norms
1.1.8 Eigenvalues
1.1.9 The Singular Value Decomposition
1.2 Gaussian Elimination Methods
1.2.1 Solving Triangular Systems
1.2.2 Gaussian Elimination and LU Factorization
1.2.3 LU Factorization and Pivoting
1.2.4 Variants of LU Factorization
1.2.5 Elementary Elimination Matrices
1.2.6 Computing the Matrix Inverse
1.2.7 Perturbation Analysis
1.2.8 Scaling and Componentwise Analysis
1.3 Hermitian Linear Systems
1.3.1 Properties of Hermitian Matrices
1.3.2 The Cholesky Factorization
1.3.3 Inertia of Symmetric Matrices
1.3.4 Symmetric Indefinite Matrices
1.4 Error Analysis in Matrix Computations
1.4.1 Floating-Point Arithmetic
1.4.2 Rounding Errors in Matrix Operations
1.4.3 Error Analysis of Gaussian Elimination
1.4.4 Estimating Condition Numbers
1.4.5 Backward Perturbation Bounds
1.4.6 Iterative Refinement of Solutions
1.4.7 Interval Matrix Computations
1.5 Banded Linear Systems
1.5.1 Band Matrices
1.5.2 Multiplication of Band Matrices
1.5.3 LU Factorization of Band Matrices
1.5.4 Tridiagonal Linear Systems
1.5.5 Envelope Methods
1.5.6 Diagonally Dominant Matrices
1.6 Implementing Matrix Algorithms
1.6.1 BLAS for Linear Algebra Software
1.6.2 Block and Partitioned Algorithms
1.6.3 Recursive Matrix Multiplication
1.6.4 Recursive Cholesky and LU Factorizations
1.7 Sparse Linear Systems
1.7.1 Storage Schemes for Sparse Matrices
1.7.2 Graphs and Matrices
1.7.3 Graph Model of Cholesky Factorization
1.7.4 Ordering Algorithms for Cholesky Factorization
1.7.5 Sparse Unsymmetric Matrices
1.7.6 Permutation to Block Triangular Form
1.7.7 Linear Programming and the Simplex Method
1.8 Structured Linear Equations
1.8.1 Kronecker Products and Linear Systems
1.8.2 Toeplitz and Hankel Matrices
1.8.3 Vandermonde Systems
1.8.4 Semiseparable Matrices
1.8.5 The Fast Fourier Transform
1.8.6 Cauchy-Like Matrices
1.9 Notes and Further References
References
2 Linear Least Squares Problems
2.1 Introduction to Least Squares Methods
2.1.1 The Gauss–Markov Model
2.1.2 Projections and Geometric Characterization
2.1.3 The Method of Normal Equations
2.1.4 Stability of the Method of Normal Equations
2.2 Least Squares Problems and the SVD
2.2.1 SVD and the Pseudoinverse
2.2.2 Perturbation Analysis
2.2.3 SVD and Matrix Approximation
2.2.4 Backward Error Analysis
2.2.5 Principal Angles Between Subspaces
2.3 Orthogonal Factorizations
2.3.1 Elementary Orthogonal Matrices
2.3.2 QR Factorization and Least Squares Problems
2.3.3 Golub–Kahan Bidiagonalization
2.3.4 Gram–Schmidt QR Factorization
2.3.5 Loss of Orthogonality and Reorthogonalization
2.3.6 MGS as a Householder Method
2.3.7 Partitioned and Recursive QR Factorization
2.3.8 Condition Estimation and Iterative Refinement
2.4 Rank-Deficient Problems
2.4.1 Numerical Rank
2.4.2 Pivoted QR Factorizations
2.4.3 Rank-Revealing Permutations
2.4.4 Complete QR Factorizations
2.4.5 The QLP Factorization
2.4.6 Modifying QR Factorizations
2.4.7 Stepwise Variable Regression
2.5 Structured and Sparse Least Squares
2.5.1 Kronecker Products
2.5.2 Tensor Computations
2.5.3 Block Angular Least Squares Problems
2.5.4 Banded Least Squares Problems
2.5.5 Sparse Least Squares Problems
2.5.6 Block Triangular Form
2.6 Regularization of Ill-Posed Linear Systems
2.6.1 TSVD and Tikhonov Regularization
2.6.2 Least Squares with Quadratic Constraints
2.6.3 Bidiagonalization and Partial Least Squares
2.6.4 The NIPALS Algorithm
2.6.5 Least Angle Regression and l1 Constraints
2.7 Some Special Least Squares Problems
2.7.1 Weighted Least Squares Problems
2.7.2 Linear Equality Constraints
2.7.3 Linear Inequality Constraints
2.7.4 Generalized Least Squares Problems
2.7.5 Indefinite Least Squares
2.7.6 Total Least Squares Problems
2.7.7 Linear Orthogonal Regression
2.7.8 The Orthogonal Procrustes Problem
2.8 Nonlinear Least Squares Problems
2.8.1 Conditions for a Local Minimum
2.8.2 Newton and Gauss–Newton Methods
2.8.3 Modifications for Global Convergence
2.8.4 Quasi-Newton Methods
2.8.5 Separable Least Squares Problems
2.8.6 Iteratively Reweighted Least Squares
2.8.7 Nonlinear Orthogonal Regression
2.8.8 Fitting Circles and Ellipses
References
3 Matrix Eigenvalue Problems
3.1 Basic Theory
3.1.1 Eigenvalues of Matrices
3.1.2 The Jordan Canonical Form
3.1.3 The Schur Decomposition
3.1.4 Block Diagonalization and Sylvester’s Equation
3.2 Perturbation Theory
3.2.1 Geršgorin’s Theorems
3.2.2 General Perturbation Theory
3.2.3 Perturbation Theorems for Hermitian Matrices
3.2.4 The Rayleigh Quotient Bounds
3.2.5 Numerical Range and Pseudospectra
3.3 The Power Method and Its Generalizations
3.3.1 The Simple Power Method
3.3.2 Deflation of Eigenproblems
3.3.3 Inverse Iteration
3.3.4 Rayleigh Quotient Iteration
3.3.5 Subspace Iteration
3.4 The LR and QR Algorithms
3.4.1 The Basic LR and QR Algorithms
3.4.2 The Practical QR Algorithm
3.4.3 Reduction to Hessenberg Form
3.4.4 The Implicit Shift QR Algorithm
3.4.5 Enhancements to the QR Algorithm
3.5 The Hermitian QR Algorithm
3.5.1 Reduction to Real Symmetric Tridiagonal Form
3.5.2 Implicit QR Algorithm for Hermitian Matrices
3.5.3 The QR-SVD Algorithm
3.5.4 Skew-Symmetric and Unitary Matrices
3.6 Some Alternative Algorithms
3.6.1 The Bisection Method
3.6.2 Jacobi’s Diagonalization Method
3.6.3 Jacobi SVD Algorithms
3.6.4 Divide and Conquer Algorithms
3.7 Some Generalized Eigenvalue Problems
3.7.1 Canonical Forms
3.7.2 Solving Generalized Eigenvalue Problems
3.7.3 The CS Decomposition
3.7.4 Generalized Singular Value Decomposition
3.7.5 Polynomial Eigenvalue Problems
3.7.6 Hamiltonian and Symplectic Problems
3.8 Functions of Matrices
3.8.1 The Matrix Square Root
3.8.2 The Matrix Sign Function
3.8.3 The Polar Decomposition
3.8.4 The Matrix Exponential and Logarithm
3.9 Nonnegative Matrices with Applications
3.9.1 The Perron–Frobenius Theory
3.9.2 Finite Markov Chains
3.10 Notes and Further References
References
4 Iterative Methods
4.1 Classical Iterative Methods
4.1.1 A Historical Overview
4.1.2 A Model Problem
4.1.3 Stationary Iterative Methods
4.1.4 Convergence of Stationary Iterative Methods
4.1.5 Relaxation Parameters and the SOR Method
4.1.6 Effects of Non-normality and Finite Precision
4.1.7 Polynomial Acceleration
4.2 Krylov Methods for Hermitian Systems
4.2.1 General Principles of Projection Methods
4.2.2 The One-Dimensional Case
4.2.3 The Conjugate Gradient (CG) Method
4.2.4 Rate of Convergence of the CG Method
4.2.5 The Lanczos Process
4.2.6 Indefinite Systems
4.2.7 Block CG and Lanczos Processes
4.3 Krylov Methods for Non-Hermitian Systems
4.3.1 The Arnoldi Process
4.3.2 Two-Sided Lanczos and the BiCG Method
4.3.3 The Quasi-Minimal Residual Algorithm
4.3.4 Transpose-Free Methods
4.3.5 Complex Symmetric Systems
4.4 Preconditioned Iterative Methods
4.4.1 Some Preconditioned Algorithms
4.4.2 Gauss-Seidel and SSOR Preconditioners
4.4.3 Incomplete LU Factorization
4.4.4 Incomplete Cholesky Factorization
4.4.5 Sparse Approximate Inverse Preconditioners
4.4.6 Block Incomplete Factorizations
4.4.7 Preconditioners for Toeplitz Systems
4.5 Iterative Methods for Least Squares Problems
4.5.1 Basic Least Squares Iterative Methods
4.5.2 Jacobi and Gauss–Seidel Methods
4.5.3 Krylov Subspace Methods
4.5.4 GKL Bidiagonalization and LSQR
4.5.5 Generalized LSQR
4.5.6 Regularization by Iterative Methods
4.5.7 Preconditioned Methods for Normal Equations
4.5.8 Saddle Point Systems
4.6 Iterative Methods for Eigenvalue Problems
4.6.1 The Rayleigh–Ritz Procedure
4.6.2 The Arnoldi Eigenvalue Algorithm
4.6.3 The Lanczos Algorithm
4.6.4 Reorthogonalization of Lanczos Vectors
4.6.5 Convergence of Arnoldi and Lanczos Methods
4.6.6 Spectral Transformation
4.6.7 The Lanczos–SVD Algorithm
4.6.8 Subspace Iteration for Hermitian Matrices
4.6.9 Jacobi–Davidson Methods
4.7 Notes and Further References
References
Mathematical Symbols
Flop Counts
Index