An Introduction to Numerical Analysis for Electrical and Computer Engineers / Edition 1

by Christopher J. Zarowski
ISBN-10: 0471467375
ISBN-13: 9780471467373
Pub. Date: 04/13/2004
Publisher: Wiley

Hardcover

$248.95

Overview

  • This book is an introduction to numerical analysis that strikes a balance between analytical rigor and the treatment of particular methods for engineering problems
  • Emphasizes the early stages of numerical analysis, with real-life problem-solving examples drawn from computing and engineering
  • Includes MATLAB-oriented examples

An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.


Product Details

ISBN-13: 9780471467373
Publisher: Wiley
Publication date: 04/13/2004
Pages: 604
Product dimensions: 6.42(w) x 9.35(h) x 1.31(d)

About the Author

Christopher J. Zarowski, PhD, is an associate professor in the Department of Electrical and Computer Engineering at the University of Alberta, Canada. His research areas include digital communications applications (wireless, wireline, optical fiber), biomedical applications (e.g., circadian rhythm parameter estimation), structured matrix algebra, wavelet methods, signal detection and parameter estimation, computationally efficient and numerically reliable algorithms, and parallel processing algorithms. He has authored over 100 journal articles and conference papers, and is a senior member of the IEEE.

Read an Excerpt

An Introduction to Numerical Analysis for Electrical and Computer Engineers


By Christopher J. Zarowski

John Wiley & Sons

Copyright © 2004 John Wiley & Sons, Inc.
All rights reserved.

ISBN: 0-471-46737-5


Chapter One

Functional Analysis Ideas

1.1 INTRODUCTION

Many engineering analysis and design problems are far too complex to be solved without the aid of computers. However, the use of computers in problem solving has made it increasingly necessary for users to be highly skilled in (practical) mathematical analysis. There are a number of reasons for this. A few are as follows.

For one thing, computers represent data to finite precision. Irrational numbers such as $\pi$ or $\sqrt{2}$ do not have an exact representation on a digital computer (with the possible exception of methods based on symbolic computing). Additionally, when arithmetic is performed, errors occur as a result of rounding (e.g., the truncation of the product of two $n$-bit numbers, which might be $2n$ bits long, back down to $n$ bits). Numbers have a limited dynamic range; we might get overflow or underflow in a computation. These are examples of finite-precision arithmetic effects. Beyond this, computational methods frequently have sources of error independent of these. For example, an infinite series must be truncated if it is to be evaluated on a computer. The truncation error is something "additional" to errors from finite-precision arithmetic effects. In all cases, the sources (and sizes) of error in a computation must be known and understood in order to make sensible claims about the accuracy of a computer-generated solution to a problem.
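As a minimal MATLAB sketch (ours, not from the text), the following lines exhibit the two error sources just described: rounding due to finite-precision arithmetic, and truncation of an infinite series.

```matlab
% Rounding error: 0.1 has no exact binary representation, so summing
% ten copies of it does not land exactly on 1.
s = 0;
for k = 1:10
    s = s + 0.1;
end
disp(s - 1);                          % tiny nonzero residual
disp(eps);                            % machine epsilon: the gap between
                                      % 1 and the next larger number

% Truncation error: approximate e by the first N terms of its Taylor
% series. This error exists independently of rounding effects.
N = 10;
approx = sum(1 ./ factorial(0:N-1));  % sum_{k=0}^{N-1} 1/k!
disp(exp(1) - approx);                % truncation error of the partial sum
```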

Many methods are "iterative." Accuracy of the result depends on how many iterations are performed. It is possible that a given method might be very slow, requiring many iterations before achieving acceptable accuracy. This could involve much computer runtime. The obvious solution of using a faster computer is usually unacceptable. A better approach is to use mathematical analysis to understand why a method is slow, and so to devise methods of speeding it up. Thus, an important feature of analysis applied to computational methods is that of assessing how much in the way of computing resources is needed by a given method. A given computational method will make demands on computer memory, operations count (the number of arithmetic operations, function evaluations, data transfers, etc.), number of bits in a computer word, and so on.
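To make the iteration-count idea concrete, here is a small MATLAB sketch of our own (not taken from the book) that counts how many fixed-point iterations are needed to solve $x = \cos x$ to a given tolerance; methods of this kind, and their rates of convergence, are treated in Chapter 7.

```matlab
% Fixed-point iteration for x = cos(x). The cost is measured in
% iterations, each costing one cosine evaluation.
x = 1;                          % initial guess
tol = 1e-8;                     % stopping tolerance
count = 0;
while abs(cos(x) - x) > tol
    x = cos(x);
    count = count + 1;
end
fprintf('x = %.8f after %d iterations\n', x, count);
```

This iteration converges only linearly, so the loop runs dozens of times; understanding why it is slow is precisely what suggests faster alternatives such as the Newton–Raphson method of Chapter 7.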

A given problem almost always has many possible alternative solutions. Other than accuracy and computer resource issues, ease of implementation is also relevant. This is a human labor issue. Some methods may be easier to implement on a given set of computing resources than others. This would have an impact on software/hardware development time, and hence on system cost. Again, mathematical analysis is useful in deciding on the relative ease of implementation of competing solution methods.

The subject of numerical computing is truly vast. Methods are required to handle an immense range of problems, such as solution of differential equations (ordinary or partial), integration, solution of equations and systems of equations (linear or nonlinear), approximation of functions, and optimization. These problem types appear to be radically different from each other. In some sense the differences between them are real, but there are ways to achieve some unity of approach in understanding them.

The branch of mathematics that (perhaps) gives the greatest amount of unity is sometimes called functional analysis. We shall employ ideas from this subject throughout. However, our usage of these ideas is not truly rigorous; for example, we completely avoid topology and measure theory. Therefore, we tend to follow simplified treatments of the subject such as that of Kreyszig, and then only those ideas that are immediately relevant to us. The reader is assumed to be very comfortable with elementary linear algebra and calculus. The reader must also be comfortable with complex number arithmetic (see Appendix 1.A now for a review if necessary). Some knowledge of electric circuit analysis is presumed since this will provide a source of applications examples later. (But application examples will also be drawn from other sources.) Some knowledge of ordinary differential equations is also assumed.

It is worth noting that an understanding of functional analysis is a tremendous aid to understanding other subjects such as quantum physics, probability theory and random processes, digital communications system analysis and design, digital control systems analysis and design, digital signal processing, fuzzy systems, neural networks, computer hardware design, and optimal design of systems. Many of the ideas presented in this book are also intended to support these subjects.

1.2 SOME SETS

Variables in an engineering problem often take on values from sets of numbers. In the present setting, the sets of greatest interest to us are (1) the set of integers $\mathbb{Z} = \{\ldots, -3, -2, -1, 0, 1, 2, 3, \ldots\}$, (2) the set of real numbers $\mathbb{R}$, and (3) the set of complex numbers $\mathbb{C} = \{x + jy \mid j = \sqrt{-1},\ x, y \in \mathbb{R}\}$. The set of nonnegative integers is $\mathbb{Z}^+ = \{0, 1, 2, 3, \ldots\}$ (so $\mathbb{Z}^+ \subset \mathbb{Z}$). Similarly, the set of nonnegative real numbers is $\mathbb{R}^+ = \{x \in \mathbb{R} \mid x \ge 0\}$. Other kinds of sets of numbers will be introduced if and when they are needed.

If $A$ and $B$ are two sets, their Cartesian product is denoted by $A \times B = \{(a, b) \mid a \in A,\ b \in B\}$. The Cartesian product of $n$ sets $A_0, A_1, \ldots, A_{n-1}$ is $A_0 \times A_1 \times \cdots \times A_{n-1} = \{(a_0, a_1, \ldots, a_{n-1}) \mid a_k \in A_k\}$.

Ideas from matrix/linear algebra are of great importance. We are therefore also interested in sets of vectors. Thus, $\mathbb{R}^n$ shall denote the set of $n$-element vectors with real-valued components, and similarly, $\mathbb{C}^n$ shall denote the set of $n$-element vectors with complex-valued components. By default, we assume any vector $x$ to be a column vector:

(1.1) $x = [x_0\ x_1\ \cdots\ x_{n-1}]^T$

Naturally, row vectors are obtained by transposition. We will generally avoid using bars over or under symbols to denote vectors. Whether a quantity is a vector will be clear from the context of the discussion. However, bars will be used to denote vectors when this cannot be easily avoided. The indexing of vector elements $x_k$ will often begin with 0 as indicated in (1.1). Naturally, matrices are also important. Set $\mathbb{R}^{n \times m}$ denotes the set of matrices with $n$ rows and $m$ columns, and the elements are real-valued. The notation $\mathbb{C}^{n \times m}$ should now possess an obvious meaning. Matrices will be denoted by uppercase symbols, again without bars. If $A$ is an $n \times m$ matrix, then

(1.2) $A = \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,m-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,m-1} \\ \vdots & \vdots & & \vdots \\ a_{n-1,0} & a_{n-1,1} & \cdots & a_{n-1,m-1} \end{bmatrix} = [a_{p,q}]_{0 \le p \le n-1,\ 0 \le q \le m-1}$

Thus, the element in row $p$ and column $q$ of $A$ is denoted $a_{p,q}$. Indexing of rows and columns again will typically begin at 0. The subscripts on the right bracket "]" in (1.2) will often be omitted in the future. We may also write $a_{pq}$ instead of $a_{p,q}$ where no danger of confusion arises.
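Since the book's examples are MATLAB-oriented, it is worth noting (our remark, not the text's) that MATLAB indexes arrays from 1, whereas the notation above indexes from 0; element $a_{p,q}$ therefore lives at A(p+1, q+1).

```matlab
% Book-style 0-based indices versus MATLAB's 1-based indexing:
% element a_{p,q} of A is stored at A(p+1, q+1).
A = [10 20 30;
     40 50 60];        % a real 2 x 3 matrix, i.e., A is in R^{2x3}
p = 0; q = 2;          % book-style row and column indices
disp(A(p+1, q+1));     % a_{0,2} = 30
```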

The elements of any vector may be regarded as the elements of a sequence of finite length. However, we are also very interested in sequences of infinite length. An infinite sequence may be denoted by $x = (x_k) = (x_0, x_1, x_2, \ldots)$, for which $x_k$ could be either real-valued or complex-valued. It is possible for sequences to be doubly infinite, for instance, $x = (x_k) = (\ldots, x_{-2}, x_{-1}, x_0, x_1, x_2, \ldots)$.

Relationships between variables are expressed as mathematical functions, that is, mappings between sets. The notation $f|A \rightarrow B$ signifies that function $f$ associates an element of set $A$ with an element from set $B$. For example, $f|\mathbb{R} \rightarrow \mathbb{R}$ represents a function defined on the real-number line, and this function is also real-valued; that is, it maps "points" in $\mathbb{R}$ to "points" in $\mathbb{R}$. We are familiar with the idea of "plotting" such a function on the $xy$ plane if $y = f(x)$ (i.e., $x, y \in \mathbb{R}$). It is important to note that we may regard sequences as functions that are defined on either the set $\mathbb{Z}$ (the case of doubly infinite sequences), or the set $\mathbb{Z}^+$ (the case of singly infinite sequences). To be more specific, if, for example, $k \in \mathbb{Z}^+$, then this number maps to some number $x_k$ that is either real-valued or complex-valued. Since vectors are associated with sequences of finite length, they, too, may be regarded as functions, but defined on a finite subset of the integers. From (1.1) this subset might be denoted by $\mathbb{Z}_n = \{0, 1, 2, \ldots, n-2, n-1\}$.

Sets of functions are important. This is because in engineering we are often interested in mappings between sets of functions. For example, in electric circuits voltage and current waveforms (i.e., functions of time) are input to a circuit via voltage and current sources. Voltage drops across circuit elements, or currents through circuit elements, are output functions of time. Thus, any circuit maps functions from an input set to functions from some output set. Digital signal processing systems do the same thing, except that here the functions are sequences. For example, a simple digital signal processing system might accept as input the sequence $(x_n)$, and produce as output the sequence $(y_n)$ according to

(1.3) $y_n = \dfrac{x_n + x_{n+1}}{2}$

for which $n \in \mathbb{Z}^+$.
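A brief MATLAB sketch (ours, not the book's) of the system (1.3) acting on a finite-length input; for a finite sequence the last output sample is dropped, since $x_{n+1}$ is unavailable there (an assumption of this sketch, as (1.3) is stated for infinite sequences).

```matlab
% Apply y_n = (x_n + x_{n+1})/2 to a finite input sequence.
% MATLAB indexes from 1, so x(k) holds sample x_{k-1}.
x = [1 2 4 8 16];
y = (x(1:end-1) + x(2:end)) / 2;   % two-point averaging of neighbors
disp(y);                           % 1.5  3  6  12
```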

Some specific examples of sets of functions are as follows, and more will be seen later. The set of real-valued functions defined on the interval $[a, b] \subset \mathbb{R}$ that are $n$ times continuously differentiable may be denoted by $C^n[a, b]$. This means that all derivatives up to and including order $n$ exist and are continuous. If $n = 0$ we often just write $C[a, b]$, which is the set of continuous functions on the interval $[a, b]$. We remark that the notation $[a, b]$ implies inclusion of the endpoints of the interval. Thus, $(a, b)$ implies that the endpoints $a$ and $b$ are not to be included [i.e., if $x \in (a, b)$, then $a < x < b$].

A polynomial in the indeterminate $x$ of degree $n$ is

(1.4) $p_n(x) = \sum_{k=0}^{n} p_{n,k} x^k = p_{n,0} + p_{n,1} x + \cdots + p_{n,n} x^n$

Unless otherwise stated, we will always assume $p_{n,k} \in \mathbb{R}$. The indeterminate $x$ is often considered to be either a real number or a complex number. But in some circumstances the indeterminate $x$ is merely regarded as a "placeholder," which means that $x$ is not supposed to take on a value. In a situation like this the polynomial coefficients may also be regarded as elements of a vector (e.g., $p_n = [p_{n,0}\ p_{n,1}\ \cdots\ p_{n,n}]^T$). This happens in digital signal processing when we wish to convolve sequences of finite length, because the multiplication of polynomials is mathematically equivalent to the operation of sequence convolution. We will denote the set of all polynomials of degree $n$ as $P^n$. If $x$ is to be from the interval $[a, b] \subset \mathbb{R}$, then the set of polynomials of degree $n$ on $[a, b]$ is denoted by $P^n[a, b]$. If $m < n$ we shall usually assume $P^m[a, b] \subset P^n[a, b]$.
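A quick MATLAB illustration (our example, not from the text) of the equivalence between polynomial multiplication and sequence convolution, using the built-in conv function:

```matlab
% Multiplying p(x) = 1 + 2x + 3x^2 by q(x) = 4 + 5x is the same as
% convolving their coefficient sequences; conv does exactly this.
% Coefficients here are ordered from lowest degree to highest.
p = [1 2 3];          % 1 + 2x + 3x^2
q = [4 5];            % 4 + 5x
disp(conv(p, q));     % 4 13 22 15, i.e., 4 + 13x + 22x^2 + 15x^3
```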

1.3 SOME SPECIAL MAPPINGS: METRICS, NORMS, AND INNER PRODUCTS

Sets of objects (vectors, sequences, polynomials, functions, etc.) often have certain special mappings defined on them that turn these sets into what are commonly called function spaces. Loosely speaking, functional analysis is about the properties of function spaces. Generally speaking, numerical computation problems are best handled by treating them in association with suitable mappings on well-chosen function spaces. For our purposes, the three most important special types of mappings are (1) metrics, (2) norms, and (3) inner products. You are likely already familiar with special cases of these very general ideas.

The vector dot product is an example of an inner product on a vector space, while the Euclidean norm (i.e., the square root of the sum of the squares of the elements in a real-valued vector) is a norm on a vector space. The Euclidean distance between two vectors (given by the Euclidean norm of the difference between the two vectors) is a metric on a vector space. Again, loosely speaking, metrics give meaning to the concept of "distance" between points in a function space, norms give a meaning to the concept of the "size" of a vector, and inner products give meaning to the concept of "direction" in a vector space.
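In MATLAB terms, these three special mappings look as follows for vectors in $\mathbb{R}^3$ (a sketch of our own, not the book's):

```matlab
% Inner product, norm, and metric on R^3: the prototypes for the
% general mappings discussed above.
x = [1; 2; 3];
y = [4; 5; 6];
ip = dot(x, y);       % inner product (dot product): 32
nx = norm(x);         % Euclidean norm: sqrt(1 + 4 + 9)
d  = norm(x - y);     % Euclidean metric: distance between x and y
fprintf('ip = %g, norm = %g, dist = %g\n', ip, nx, d);
```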

In Section 1.1 we expressed interest in the sizes of errors, and so naturally the concept of a norm will be of interest. Later we shall see that inner products will prove to be useful in devising means of overcoming problems due to certain sources of error in a computation. In this section we shall consider various examples of function spaces, some of which we will work with later on in the analysis of certain computational problems. We shall see that there are many different kinds of metric, norm, and inner product. Each kind has its own particular advantages and disadvantages as will be discovered as we progress through the book.

Sometimes a quantity cannot be computed exactly. In this case we may try to estimate bounds on the size of the quantity. For example, finding the exact error in the truncation of a series may be impossible, but putting a bound on the error might be relatively easy. In this respect the concepts of supremum and infimum can be important. These are defined as follows.

Suppose we have $E \subset \mathbb{R}$.

Continues...


Excerpted from An Introduction to Numerical Analysis for Electrical and Computer Engineers by Christopher J. Zarowski. Copyright © 2004 by John Wiley & Sons, Inc. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

Preface xiii

1 Functional Analysis Ideas 1

1.1 Introduction 1

1.2 Some Sets 2

1.3 Some Special Mappings: Metrics, Norms, and Inner Products 4

1.3.1 Metrics and Metric Spaces 6

1.3.2 Norms and Normed Spaces 8

1.3.3 Inner Products and Inner Product Spaces 14

1.4 The Discrete Fourier Series (DFS) 25

Appendix 1.A Complex Arithmetic 28

Appendix 1.B Elementary Logic 31

References 32

Problems 33

2 Number Representations 38

2.1 Introduction 38

2.2 Fixed-Point Representations 38

2.3 Floating-Point Representations 42

2.4 Rounding Effects in Dot Product Computation 48

2.5 Machine Epsilon 53

Appendix 2.A Review of Binary Number Codes 54

References 59

Problems 59

3 Sequences and Series 63

3.1 Introduction 63

3.2 Cauchy Sequences and Complete Spaces 63

3.3 Pointwise Convergence and Uniform Convergence 70

3.4 Fourier Series 73

3.5 Taylor Series 78

3.6 Asymptotic Series 97

3.7 More on the Dirichlet Kernel 103

3.8 Final Remarks 107

Appendix 3.A COordinate Rotation DIgital Computing (CORDIC) 107

3.A.1 Introduction 107

3.A.2 The Concept of a Discrete Basis 108

3.A.3 Rotating Vectors in the Plane 112

3.A.4 Computing Arctangents 114

3.A.5 Final Remarks 115

Appendix 3.B Mathematical Induction 116

Appendix 3.C Catastrophic Cancellation 117

References 119

Problems 120

4 Linear Systems of Equations 127

4.1 Introduction 127

4.2 Least-Squares Approximation and Linear Systems 127

4.3 Least-Squares Approximation and Ill-Conditioned Linear Systems 132

4.4 Condition Numbers 135

4.5 LU Decomposition 148

4.6 Least-Squares Problems and QR Decomposition 161

4.7 Iterative Methods for Linear Systems 176

4.8 Final Remarks 186

Appendix 4.A Hilbert Matrix Inverses 186

Appendix 4.B SVD and Least Squares 191

References 193

Problems 194

5 Orthogonal Polynomials 207

5.1 Introduction 207

5.2 General Properties of Orthogonal Polynomials 207

5.3 Chebyshev Polynomials 218

5.4 Hermite Polynomials 225

5.5 Legendre Polynomials 229

5.6 An Example of Orthogonal Polynomial Least-Squares Approximation 235

5.7 Uniform Approximation 238

References 241

Problems 241

6 Interpolation 251

6.1 Introduction 251

6.2 Lagrange Interpolation 252

6.3 Newton Interpolation 257

6.4 Hermite Interpolation 266

6.5 Spline Interpolation 269

References 284

Problems 285

7 Nonlinear Systems of Equations 290

7.1 Introduction 290

7.2 Bisection Method 292

7.3 Fixed-Point Method 296

7.4 Newton–Raphson Method 305

7.4.1 The Method 305

7.4.2 Rate of Convergence Analysis 309

7.4.3 Breakdown Phenomena 311

7.5 Systems of Nonlinear Equations 312

7.5.1 Fixed-Point Method 312

7.5.2 Newton–Raphson Method 318

7.6 Chaotic Phenomena and a Cryptography Application 323

References 332

Problems 333

8 Unconstrained Optimization 341

8.1 Introduction 341

8.2 Problem Statement and Preliminaries 341

8.3 Line Searches 345

8.4 Newton’s Method 353

8.5 Equality Constraints and Lagrange Multipliers 357

Appendix 8.A MATLAB Code for Golden Section Search 362

References 364

Problems 364

9 Numerical Integration and Differentiation 369

9.1 Introduction 369

9.2 Trapezoidal Rule 371

9.3 Simpson’s Rule 378

9.4 Gaussian Quadrature 385

9.5 Romberg Integration 393

9.6 Numerical Differentiation 401

References 406

Problems 406

10 Numerical Solution of Ordinary Differential Equations 415

10.1 Introduction 415

10.2 First-Order ODEs 421

10.3 Systems of First-Order ODEs 442

10.4 Multistep Methods for ODEs 455

10.4.1 Adams–Bashforth Methods 459

10.4.2 Adams–Moulton Methods 461

10.4.3 Comments on the Adams Families 462

10.5 Variable-Step-Size (Adaptive) Methods for ODEs 464

10.6 Stiff Systems 467

10.7 Final Remarks 469

Appendix 10.A MATLAB Code for Example 10.8 469

Appendix 10.B MATLAB Code for Example 10.13 470

References 472

Problems 473

11 Numerical Methods for Eigenproblems 480

11.1 Introduction 480

11.2 Review of Eigenvalues and Eigenvectors 480

11.3 The Matrix Exponential 488

11.4 The Power Methods 498

11.5 QR Iterations 508

References 518

Problems 519

12 Numerical Solution of Partial Differential Equations 525

12.1 Introduction 525

12.2 A Brief Overview of Partial Differential Equations 525

12.3 Applications of Hyperbolic PDEs 528

12.3.1 The Vibrating String 528

12.3.2 Plane Electromagnetic Waves 534

12.4 The Finite-Difference (FD) Method 545

12.5 The Finite-Difference Time-Domain (FDTD) Method 550

Appendix 12.A MATLAB Code for Example 12.5 557

References 560

Problems 561

13 An Introduction to MATLAB 565

13.1 Introduction 565

13.2 Startup 565

13.3 Some Basic Operators, Operations, and Functions 566

13.4 Working with Polynomials 571

13.5 Loops 572

13.6 Plotting and M-Files 573

References 577

Index 579

What People are Saying About This

From the Publisher

"Zarkowski (Univ. of Alberta) offers this book as a general, advanced undergraduate work in numerical analysis, containing all of the usual topics." (CHOICE, October 2004)
