eBook Vst
Availability: In stock
ISBN: 9780273775720
Price excluding VAT:
$ 121.800,00

Edition: 5
Copyright: 2014
Pages: 464


Adaptive Filter Theory: International Edition (ebook)


By Simon Haykin

Description:

For courses in Adaptive Filters.

Haykin examines both the mathematical theory behind various linear adaptive filters and the elements of supervised multilayer perceptrons. In its fifth edition, this highly successful book has been updated and refined to stay current with the field and to develop concepts in as unified and accessible a manner as possible.

The full text is downloaded to your computer. With eBooks you can:
- search for key concepts, words and phrases
- make highlights and notes as you study
- share your notes with friends

eBooks are downloaded to your computer and are accessible offline through the Bookshelf (available as a free download), online, and via the iPad and Android apps. Upon purchase, you'll gain instant access to this eBook.

Time limit: The eBook products do not have an expiry date. You will continue to have access to your digital eBook products as long as you have your Bookshelf installed.




Contents:
Chapter 1 Stochastic Processes and Models
1.1 Partial Characterization of a Discrete-Time Stochastic Process
1.2 Mean Ergodic Theorem
1.3 Correlation Matrix
1.4 Correlation Matrix of Sine Wave Plus Noise
1.5 Stochastic Models
1.6 Wold Decomposition
1.7 Asymptotic Stationarity of an Autoregressive Process
1.8 Yule–Walker Equations
1.9 Computer Experiment: Autoregressive Process of Order Two
1.10 Selecting the Model Order
1.11 Complex Gaussian Processes
1.12 Power Spectral Density
1.13 Properties of Power Spectral Density
1.14 Transmission of a Stationary Process Through a Linear Filter
1.15 Cramér Spectral Representation for a Stationary Process
1.16 Power Spectrum Estimation
1.17 Other Statistical Characteristics of a Stochastic Process
1.18 Polyspectra
1.19 Spectral-Correlation Density
1.20 Summary and Discussion

Chapter 2 Wiener Filters
2.1 Linear Optimum Filtering: Statement of the Problem
2.2 Principle of Orthogonality
2.3 Minimum Mean-Square Error
2.4 Wiener–Hopf Equations
2.5 Error-Performance Surface
2.6 Multiple Linear Regression Model
2.7 Example
2.8 Linearly Constrained Minimum-Variance Filter
2.9 Generalized Sidelobe Cancellers
2.10 Summary and Discussion

Chapter 3 Linear Prediction
3.1 Forward Linear Prediction
3.2 Backward Linear Prediction
3.3 Levinson–Durbin Algorithm
3.4 Properties of Prediction-Error Filters
3.5 Schur–Cohn Test
3.6 Autoregressive Modeling of a Stationary Stochastic Process
3.7 Cholesky Factorization
3.8 Lattice Predictors
3.9 All-Pole, All-Pass Lattice Filter
3.10 Joint-Process Estimation
3.11 Predictive Modeling of Speech
3.12 Summary and Discussion

Chapter 4 Method of Steepest Descent
4.1 Basic Idea of the Steepest-Descent Algorithm
4.2 The Steepest-Descent Algorithm Applied to the Wiener Filter
4.3 Stability of the Steepest-Descent Algorithm
4.4 Example
4.5 The Steepest-Descent Algorithm Viewed as a Deterministic Search Method
4.6 Virtue and Limitation of the Steepest-Descent Algorithm
4.7 Summary and Discussion

Chapter 5 Method of Stochastic Gradient Descent
5.1 Principles of Stochastic Gradient Descent
5.2 Application 1: Least-Mean-Square (LMS) Algorithm
5.3 Application 2: Gradient-Adaptive Lattice Filtering Algorithm
5.4 Other Applications of Stochastic Gradient Descent
5.5 Summary and Discussion

Chapter 6 The Least-Mean-Square (LMS) Algorithm
6.1 Signal-Flow Graph
6.2 Optimality Considerations
6.3 Applications
6.4 Statistical Learning Theory
6.5 Transient Behavior and Convergence Considerations
6.6 Efficiency
6.7 Computer Experiment on Adaptive Prediction
6.8 Computer Experiment on Adaptive Equalization
6.9 Computer Experiment on a Minimum-Variance Distortionless-Response Beamformer
6.10 Summary and Discussion

Chapter 7 Normalized Least-Mean-Square (LMS) Algorithm and Its Generalization
7.1 Normalized LMS Algorithm: The Solution to a Constrained Optimization Problem
7.2 Stability of the Normalized LMS Algorithm
7.3 Step-Size Control for Acoustic Echo Cancellation
7.4 Geometric Considerations Pertaining to the Convergence Process for Real-Valued Data
7.5 Affine Projection Adaptive Filters
7.6 Summary and Discussion

Chapter 8 Block-Adaptive Filters
8.1 Block-Adaptive Filters: Basic Ideas
8.2 Fast Block LMS Algorithm
8.3 Unconstrained Frequency-Domain Adaptive Filters
8.4 Self-Orthogonalizing Adaptive Filters
8.5 Computer Experiment on Adaptive Equalization
8.6 Subband Adaptive Filters
8.7 Summary and Discussion

Chapter 9 Method of Least-Squares
9.1 Statement of the Linear Least-Squares Estimation Problem
9.2 Data Windowing
9.3 Principle of Orthogonality Revisited
9.4 Minimum Sum of Error Squares
9.5 Normal Equations and Linear Least-Squares Filters
9.6 Time-Average Correlation Matrix Φ
9.7 Reformulation of the Normal Equations in Terms of Data Matrices
9.8 Properties of Least-Squares Estimates
9.9 Minimum-Variance Distortionless Response (MVDR) Spectrum Estimation
9.10 Regularized MVDR Beamforming
9.11 Singular-Value Decomposition
9.12 Pseudoinverse
9.13 Interpretation of Singular Values and Singular Vectors
9.14 Minimum-Norm Solution to the Linear Least-Squares Problem
9.15 Normalized LMS Algorithm Viewed as the Minimum-Norm Solution to an Underdetermined Least-Squares Problem
9.16 Summary and Discussion

Chapter 10 The Recursive Least-Squares (RLS) Algorithm
10.1 Some Preliminaries
10.2 The Matrix Inversion Lemma
10.3 The Exponentially Weighted RLS Algorithm
10.4 Selection of the Regularization Parameter
10.5 Updated Recursion for the Sum of Weighted Error Squares
10.6 Example: Single-Weight Adaptive Noise Canceller
10.7 Statistical Learning Theory
10.8 Efficiency
10.9 Computer Experiment on Adaptive Equalization
10.10 Summary and Discussion

Chapter 11 Robustness
11.1 Robustness, Adaptation, and Disturbances
11.2 Robustness: Preliminary Considerations Rooted in H∞ Optimization
11.3 Robustness of the LMS Algorithm
11.4 Robustness of the RLS Algorithm
11.5 Comparative Evaluations of the LMS and RLS Algorithms from the Perspective of Robustness
11.6 Risk-Sensitive Optimality
11.7 Trade-Offs Between Robustness and Efficiency
11.8 Summary and Discussion

Chapter 12 Finite-Precision Effects
12.1 Quantization Errors
12.2 Least-Mean-Square (LMS) Algorithm
12.3 Recursive Least-Squares (RLS) Algorithm
12.4 Summary and Discussion

Chapter 13 Adaptation in Nonstationary Environments
13.1 Causes and Consequences of Nonstationarity
13.2 The System Identification Problem
13.3 Degree of Nonstationarity
13.4 Criteria for Tracking Assessment
13.5 Tracking Performance of the LMS Algorithm
13.6 Tracking Performance of the RLS Algorithm
13.7 Comparison of the Tracking Performance of LMS and RLS Algorithms
13.8 Tuning of Adaptation Parameters
13.9 Incremental Delta-Bar-Delta (IDBD) Algorithm
13.10 Autostep Method
13.11 Computer Experiment: Mixture of Stationary and Nonstationary Environmental Data
13.12 Summary and Discussion

Chapter 14 Kalman Filters
14.1 Recursive Minimum Mean-Square Estimation for Scalar Random Variables
14.2 Statement of the Kalman Filtering Problem
14.3 The Innovations Process
14.4 Estimation of the State Using the Innovations Process
14.5 Filtering
14.6 Initial Conditions
14.7 Summary of the Kalman Filter
14.8 Optimality Criteria for Kalman Filtering
14.9 Kalman Filter as the Unifying Basis for RLS Algorithms
14.10 Covariance Filtering Algorithm
14.11 Information Filtering Algorithm
14.12 Summary and Discussion

Chapter 15 Square-Root Adaptive Filtering Algorithms
15.1 Square-Root Kalman Filters
15.2 Building Square-Root Adaptive Filters on the Two Kalman Filter Variants
15.3 QRD-RLS Algorithm
15.4 Adaptive Beamforming
15.5 Inverse QRD-RLS Algorithm
15.6 Finite-Precision Effects
15.7 Summary and Discussion

Chapter 16 Order-Recursive Adaptive Filtering Algorithm
16.1 Order-Recursive Adaptive Filters Using Least-Squares Estimation: An Overview
16.2 Adaptive Forward Linear Prediction
16.3 Adaptive Backward Linear Prediction
16.4 Conversion Factor
16.5 Least-Squares Lattice (LSL) Predictor
16.6 Angle-Normalized Estimation Errors
16.7 First-Order State-Space Models for Lattice Filtering
16.8 QR-Decomposition-Based Least-Squares Lattice (QRD-LSL) Filters
16.9 Fundamental Properties of the QRD-LSL Filter
16.10 Computer Experiment on Adaptive Equalization
16.11 Recursive LSL Filters Using A Posteriori Estimation Errors
16.12 Recursive LSL Filters Using A Priori Estimation Errors with Error Feedback
16.13 Relation Between Recursive LSL and RLS Algorithms
16.14 Finite-Precision Effects
16.15 Summary and Discussion

Chapter 17 Blind Deconvolution
17.1 Overview of Blind Deconvolution
17.2 Channel Identifiability Using Cyclostationary Statistics
17.3 Subspace Decomposition for Fractionally Spaced Blind Identification
17.4 Bussgang Algorithm for Blind Equalization
17.5 Extension of the Bussgang Algorithm to Complex Baseband Channels
17.6 Special Cases of the Bussgang Algorithm
17.7 Fractionally Spaced Bussgang Equalizers
17.8 Estimation of Unknown Probability Distribution Function of Signal Source
17.9 Summary and Discussion