Chebyshev and Fourier Spectral Methods: Second Revised Edition

by John P. Boyd

Overview

Completely revised text focuses on use of spectral methods to solve boundary value, eigenvalue, and time-dependent problems, but also covers Hermite, Laguerre, rational Chebyshev, sinc, and spherical harmonic functions, as well as cardinal functions, linear eigenvalue problems, matrix-solving methods, coordinate transformations, methods for unbounded intervals, spherical and cylindrical geometry, and much more. 7 Appendices. Glossary. Bibliography. Index. Over 160 text figures.

Product Details

ISBN-13: 9780486141923
Publisher: Dover Publications
Publication date: 06/05/2013
Series: Dover Books on Mathematics
Sold by: Barnes & Noble
Format: eBook
Pages: 688
File size: 35 MB

Read an Excerpt

Chebyshev and Fourier Spectral Methods


By JOHN P. BOYD

Dover Publications, Inc.

Copyright © 2001 John P. Boyd
All rights reserved.
ISBN: 978-0-486-14192-3



CHAPTER 1

Introduction


"I have no satisfaction in formulas unless I feel their numerical magnitude."

–Sir William Thomson, 1st Lord Kelvin (1824–1907)

"It is the increasingly pronounced tendency of modem analysis to substitute ideas for calculation; nevertheless, there are certain branches of mathematics where calculation conserves its rights."

–P. G. L. Dirichlet (1805–1859)


1.1 Series expansions

Our topic is a family of methods for solving differential and integral equations. The basic idea is to assume that the unknown u(x) can be approximated by a sum of N + 1 "basis functions" $\phi_n(x)$:

$u(x) \approx u_N(x) = \sum_{n=0}^{N} a_n \, \phi_n(x)$ (1.1)


When this series is substituted into the equation

$Lu = f(x)$ (1.2)


where L is the operator of the differential or integral equation, the result is the so-called "residual function" defined by

$R(x; a_0, a_1, \ldots, a_N) = L\,u_N - f$ (1.3)


Since the residual function $R(x; a_0, a_1, \ldots, a_N)$ is identically equal to zero for the exact solution, the challenge is to choose the series coefficients $\{a_n\}$ so that the residual function is minimized. The different spectral and pseudospectral methods differ mainly in their minimization strategies.


1.2 First Example

These abstract ideas can be made concrete by a simple problem. Although large problems are usually programmed in FORTRAN and C, it is very educational to use an algebraic manipulation language like Maple, Mathematica, Macsyma or Reduce. In what follows, Maple statements are shown in bold face. The machine's answers have been converted into standard mathematical notation.

The example is the linear, one-dimensional boundary value problem:

$u_{xx} - (x^6 + 3x^2)\,u = 0$ (1.4)

$u(-1) = u(1) = 1$ (1.5)


The exact solution is (Scraton, 1965)

$u(x) = \exp\left([x^4 - 1]/4\right)$ (1.6)
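
The exact solution is easy to verify symbolically; a minimal Maple check (the name uexact is ours, not from the text):

    uexact := exp((x^4 - 1)/4);
    simplify(diff(uexact, x, x) - (x^6 + 3*x^2)*uexact);   # returns 0: the ODE is satisfied
    eval(uexact, x = 1), eval(uexact, x = -1);             # returns 1, 1: the boundary conditions hold

Indeed, since $u' = x^3 u$, differentiating again gives $u'' = (3x^2 + x^6)\,u$, which is exactly the differential equation.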


Polynomial approximations are recommended for most problems, so we shall choose a spectral solution of this form. In order to satisfy the boundary conditions independently of the unknown spectral coefficients, it is convenient to write the approximation as u2 := 1 + (1-x*x)*(a0 + a1*x + a2*x*x);

$u_2 = 1 + (1 - x^2)\,(a_0 + a_1 x + a_2 x^2)$ (1.7)


where the decision to keep only three degrees of freedom is arbitrary.
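
Because the factor $(1 - x^2)$ vanishes at $x = \pm 1$, this form satisfies the boundary conditions no matter what values the coefficients take; a one-line Maple check (a sketch, merely repeating the definition above):

    u2 := 1 + (1 - x^2)*(a0 + a1*x + a2*x^2);
    eval(u2, x = 1), eval(u2, x = -1);    # returns 1, 1 for arbitrary a0, a1, a2

This trick of building the boundary conditions into the trial solution is treated later in the book under "basis recombination" (Chapter 6).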

The residual for this approximation is Resid := diff(u2,x,x) - (x**6 + 3*x**2)*u2;

$R(x; a_0, a_1, a_2) = (2a_2 - 2a_0) - 6 a_1 x - (3 + 3a_0 + 12 a_2)\,x^2 - 3 a_1 x^3 + (3a_0 - 3a_2)\,x^4$ (1.8)

$\qquad\qquad +\; 3 a_1 x^5 + (3a_2 - a_0 - 1)\,x^6 - a_1 x^7 + (a_0 - a_2)\,x^8 + a_1 x^9 + a_2 x^{10}$ (1.9)


As error minimization conditions, we choose to make the residual zero at a set of points equal in number to the undetermined coefficients in u2(x). This is called the "collocation" or "pseudospectral" method. If we arbitrarily choose the points x_i = (-1/2, 0, 1/2), this gives the three equations: eq1 := subs(x=-1/2,Resid); eq2 := subs(x=0,Resid); eq3 := subs(x=1/2,Resid);

$\mathrm{eq1} = -\frac{49}{64} - \frac{659}{256}\,a_0 + \frac{1683}{512}\,a_1 - \frac{1171}{1024}\,a_2$

$\mathrm{eq2} = -2 a_0 + 2 a_2$

$\mathrm{eq3} = -\frac{49}{64} - \frac{659}{256}\,a_0 - \frac{1683}{512}\,a_1 - \frac{1171}{1024}\,a_2$ (1.10)


The coefficients are then determined by solving eq1 = eq2 = eq3 = 0; the statement solutionarray := solve({eq1, eq2, eq3}, {a0, a1, a2}); yields

$a_0 = -784/3807, \quad a_1 = 0, \quad a_2 = a_0$ (1.11)


Figure 1.1 shows that this low order approximation is quite accurate.
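
Readers without the figure can reproduce the comparison; a minimal Maple sketch continuing from solutionarray above (the name u2best is ours):

    u2best := eval(u2, solutionarray);             # substitute a0 = -784/3807, a1 = 0, a2 = a0
    plot([u2best, exp((x^4 - 1)/4)], x = -1..1);   # approximation vs. exact solution (cf. Figure 1.1)
    plot(u2best - exp((x^4 - 1)/4), x = -1..1);    # pointwise error; maximum roughly 0.017 near |x| = 0.7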

However, the example raises a whole host of questions, including:

1. What is an optimum choice of basis functions?

2. Why choose "collocation" as the residual-minimizing condition?

3. What are the optimum collocation points?

4. Why is a1 zero? Could we have anticipated this, and used a trial solution with just two degrees of freedom for the same answer?

5. How do we solve the algebraic problem for the coefficients when the Maple "solve" function isn't available?


The answer to the first question is that choosing powers of x as a basis is actually rather dangerous unless N, the number of degrees of freedom, is small or the calculations are being done in exact arithmetic, as is true for the Maple solution here. In the next section, we describe the good choices. In an algebraic manipulation language, different rules apply, as explained in Chapter 20.

The second answer is: Collocation is the simplest choice which is guaranteed to work, and if done right, nothing else is superior. To understand why, however, we shall have to understand both the standard theory of Fourier and Chebyshev series and Galerkin methods (Chapters 2 and 3) and the theory of interpolation and cardinal functions (Chapters 4 and 5).

The third answer is: once the basis set has been chosen, there are only two optimal sets of interpolation points for each basis (the Gauss-Chebyshev points and the Gauss-Lobatto points); both are given by elementary formulas in Appendices A and F, and which one is used is strictly a matter of convenience.

The fourth answer is: Yes, the irrelevance of a1 could have been anticipated. Indeed, one can show that for this problem, all the odd powers of x have zero coefficients. Symmetries of various kinds are extremely important in practical applications (Chapter 8).

The fifth answer is: the algebraic equations can be written (for a linear differential equation) as a matrix equation, which can then be solved by library software in FORTRAN or C.
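
For the example above, whose residual is linear in the unknowns, this recasting is mechanical; a minimal Maple sketch of the idea (the names A, b, v are ours):

    with(LinearAlgebra):
    (A, b) := GenerateMatrix([eq1 = 0, eq2 = 0, eq3 = 0], [a0, a1, a2]);  # collocation conditions as A.v = b
    v := LinearSolve(A, b);   # returns [-784/3807, 0, -784/3807], matching (1.11)

In FORTRAN or C, the matrix entries would instead be filled by evaluating the differential operator applied to each basis function at each collocation point, and the system passed to a library solver such as LAPACK.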

Many other questions will be asked and answered in later chapters. However, some things are already clear.

First, the method is not necessarily harder to program than finite difference or finite element algorithms. In Maple, the complete solution of the ODE/BVP takes just five lines (Table 1.1)!
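
Table 1.1 is not reproduced in this excerpt, but the five lines can be assembled from the statements quoted above (the final plotting statement is our guess at the table's last line):

    u2 := 1 + (1 - x^2)*(a0 + a1*x + a2*x^2);
    Resid := diff(u2, x, x) - (x^6 + 3*x^2)*u2;
    eq1 := subs(x = -1/2, Resid);  eq2 := subs(x = 0, Resid);  eq3 := subs(x = 1/2, Resid);
    solutionarray := solve({eq1, eq2, eq3}, {a0, a1, a2});
    plot(eval(u2, solutionarray), x = -1..1);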

Second, spectral methods are not purely numerical. When N is sufficiently small, Chebyshev and Fourier methods yield an analytic answer.


1.3 Comparison with finite element methods

Finite element methods are similar in philosophy to spectral algorithms; the major difference is that finite elements chop the interval in x into a number of sub-intervals, and choose the $\phi_n(x)$ to be local functions which are polynomials of fixed degree which are non-zero only over a couple of sub-intervals. In contrast, spectral methods use global basis functions in which $\phi_n(x)$ is a polynomial (or trigonometric polynomial) of high degree which is non-zero, except at isolated points, over the entire computational domain.

When more accuracy is needed, the finite element method has three different strategies. The first is to subdivide each element so as to improve resolution uniformly over the whole domain. This is usually called "h-refinement," because h is the common symbol for the size or average size of a subdomain (Figure 1.2).

The second alternative is to subdivide only in regions of steep gradients where high resolution is needed. This is "r-refinement".

The third option is to keep the subdomains fixed while increasing p, the degree of the polynomials in each subdomain. This strategy of "p-refinement" is precisely that employed by spectral methods. Finite element codes which can quickly change p are far from universal, but those that can are sometimes called "p-type" finite elements.

Finite elements have two advantages. First, they convert differential equations into matrix equations that are sparse because only a handful of basis functions are non-zero in a given sub-interval. ("Sparse" matrices are discussed in Appendix B; suffice it to say that "sparse" matrix equations can be solved at a fraction of the cost of problems of similar size with "full" matrices.) Second, in multi-dimensional problems, the little sub-intervals become little triangles or tetrahedra which can be fitted to irregularly-shaped bodies like the shell of an automobile. Their disadvantage is low accuracy (for a given number of degrees of freedom N) because each basis function is a polynomial of low degree.

Spectral methods generate algebraic equations with full matrices, but in compensation, the high order of the basis functions gives high accuracy for a given N. When fast iterative matrix-solvers are used, spectral methods can be much more efficient than finite element or finite difference methods for many classes of problems. However, they are most useful when the geometry of the problem is fairly smooth and regular.


So-called "spectral element" methods gain the best of both worlds by hybridizing spectral and finite element methods. The domain is subdivided into elements, just as in finite elements, to gain the flexibility and matrix sparsity of finite elements. At the same time, the degree of the polynomial p in each subdomain is sufficiently high to retain the high accuracy and low storage of spectral methods. (Typically, p = 6 to 8, but spectral element codes are almost always written so that p is an arbitrary, user-choosable parameter.)

It turns out that most of the theory for spectral elements is the same as for global spectral methods, that is, algorithms in which a single expansion is used everywhere. Consequently, we shall concentrate on spectral methods in the early going. The final chapter will describe how to match expansions in multiple subdomains.

Low order finite elements can be derived, justified and implemented without knowledge of Fourier or Chebyshev convergence theory. However, as the order is increased, it turns out that ad hoc schemes become increasingly ill-conditioned and ill-behaved. The only practical way to implement "nice" high order finite elements, where "high order" generally means sixth or higher order, is to use the technology of spectral methods.

Similarly, it turns out that the easiest way to match spectral expansions across subdomain walls is to use the variational formalism of finite elements.

Thus, it really doesn't make much sense to ask: Are finite elements or spectral methods better? For sixth or higher order, they are essentially the same. The big issue is: Does one need high order, or is second or fourth order sufficient?


1.4 Comparisons with Finite Difference Method: Why Spectral Methods are Accurate and Memory-Minimizing

Finite difference methods approximate the unknown u(x) by a sequence of overlapping polynomials which interpolate u(x) at a set of grid points. The derivative of the local interpolant is used to approximate the derivative of u(x). The result takes the form of a weighted sum of the values of u(x) at the interpolation points.
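
The weighted-sum form is easy to exhibit symbolically; a minimal Maple sketch for the three-point case (the names x0, h, um, u0, up are ours):

    with(CurveFitting):
    p := PolynomialInterpolation([[x0 - h, um], [x0, u0], [x0 + h, up]], z);  # local quadratic interpolant
    normal(eval(diff(p, z), z = x0));   # returns (up - um)/(2*h): the centered difference

Differentiating the quadratic through three adjacent grid points and evaluating at the middle point recovers the familiar second-order centered difference formula.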

Figure 1.3: Three types of numerical algorithms. The thin, slanting lines illustrate all the grid points (black circles) that directly affect the estimates of derivatives at the points shown above the lines by open circles. The thick black vertical lines in the bottom grid are the subdomain walls.


(Continues...)

Excerpted from Chebyshev and Fourier Spectral Methods by JOHN P. BOYD. Copyright © 2001 John P. Boyd. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Preface; Acknowledgments; Errata and Extended-Bibliography
1. Introduction
1.1 Series expansions
1.2 First example
1.3 Comparison with finite element methods
1.4 Comparisons with finite differences
1.5 Parallel computers
1.6 Choice of basis functions
1.7 Boundary conditions
1.8 Non-Interpolating and Pseudospectral
1.9 Nonlinearity
1.10 Time-dependent problems
1.11 FAQ: frequently asked questions
1.12 The chrysalis
2. Chebyshev & Fourier series
2.1 Introduction
2.2 Fourier series
2.3 Orders of convergence
2.4 Convergence order
2.5 Assumption of equal errors
2.6 Darboux's principle
2.7 Why Taylor series fail
2.8 Location of singularities
2.8.1 Corner singularities & compatibility conditions
2.9 FACE: Integration-by-Parts bound
2.10 Asymptotic calculation of Fourier coefficients
2.11 Convergence theory: Chebyshev polynomials
2.12 Last coefficient rule-of-thumb
2.13 Convergence theory for Legendre polynomials
2.14 Quasi-Sinusoidal rule of thumb
2.15 Witch of Agnesi rule-of-thumb
2.16 Boundary layer rule-of-thumb
3. Galerkin & Weighted residual methods
3.1 Mean weighted residual methods
3.2 Completeness and boundary conditions
3.3 Inner product & orthogonality
3.4 Galerkin method
3.5 Integration-by-Parts
3.6 Galerkin method: case studies
3.7 Separation-of-Variables & the Galerkin method
3.8 Heisenberg Matrix mechanics
3.9 The Galerkin method today
4. Interpolation, collocation & all that
4.1 Introduction
4.2 Polynomial interpolation
4.3 Gaussian integration & pseudospectral grids
4.4 Pseudospectral Is Galerkin method via Quadrature
4.5 Pseudospectral errors
5. Cardinal functions
5.1 Introduction
5.2 Whittaker cardinal or "sinc" functions
5.3 Trigonometric interpolation
5.4 Cardinal functions for orthogonal polynomials
5.5 Transformations and interpolation
6. Pseudospectral methods for BVPs
6.1 Introduction
6.2 Choice of basis set
6.3 Boundary conditions: behavioral & numerical
6.4 "Boundary-bordering"
6.5 "Basis Recombination"
6.6 Transfinite interpolation
6.7 The Cardinal function basis
6.8 The interpolation grid
6.9 Computing basis functions & derivatives
6.10 Higher dimensions: indexing
6.11 Higher dimensions
6.12 Corner singularities
6.13 Matrix methods
6.14 Checking
6.15 Summary
7. Linear eigenvalue problems
7.1 The No-brain method
7.2 QR/QZ Algorithm
7.3 Eigenvalue rule-of-thumb
7.4 Four kinds of Sturm-Liouville problems
7.5 Criteria for Rejecting eigenvalues
7.6 "Spurious" eigenvalues
7.7 Reducing the condition number
7.8 The power method
7.9 Inverse power method
7.10 Combining global & local methods
7.11 Detouring into the complex plane
7.12 Common errors
8. Symmetry & parity
8.1 Introduction
8.2 Parity
8.3 Modifying the Grid to Exploit parity
8.4 Other discrete symmetries
8.5 Axisymmetric & apple-slicing models
9. Explicit time-integration methods
9.1 Introduction
9.2 Spatially-varying coefficients
9.3 The Shamrock principle
9.4 Linear and nonlinear
9.5 Example: KdV equation
9.6 Implicitly-Implicit: RLW & QG
10. Partial summation, the FFT and MMT
10.1 Introduction
10.2 Partial summation
10.3 The fast Fourier transform: theory
10.4 Matrix multiplication transform
10.5 Costs of the fast Fourier transform
10.6 Generalized FFTs and multipole methods
10.7 Off-grid interpolation
10.8 Fast Fourier transform: practical matters
10.9 Summary
11. Aliasing, spectral blocking, & blow-up
11.1 Introduction
11.2 Aliasing and Equality-on-the-grid
11.3 "2 h-Waves" and spectral blocking
11.4 Aliasing instability: history and remedies
11.5 Dealiasing and the Orszag two-thirds rule
11.6 Energy-conserving: constrained interpolation
11.7 Energy-conserving schemes: discussion
11.8 Aliasing instability: theory
11.9 Summary
12. Implicit schemes & the slow manifold
12.1 Introduction
12.2 Dispersion and amplitude errors
12.3 Errors & CFL limit for explicit schemes
12.4 Implicit time-marching algorithms
12.5 Semi-implicit methods
12.6 Speed-reduction rule-of-thumb
12.7 Slow manifold: meteorology
12.8 Slow manifold: definition & examples
12.9 Numerically-induced slow manifolds
12.10 Initialization
12.11 The method of multiple scales (Baer-Tribbia)
12.12 Nonlinear Galerkin methods
12.13 Weaknesses of the nonlinear Galerkin method
12.14 Tracking the slow manifold
12.15 Three parts to multiple scale algorithms
13. Splitting & its cousins
13.1 Introduction
13.2 Fractional steps for diffusion
13.3 Pitfalls in splitting, I: boundary conditions
13.4 Pitfalls in splitting, II: consistency
13.5 Operator theory of time-stepping
13.6 High order splitting
13.7 Splitting and fluid mechanics
14. Semi-Lagrangian advection
14.1 Concept of an integrating factor
14.2 Misuse of integrating factor methods
14.3 Semi-Lagrangian advection: introduction
14.4 Advection & method of characteristics
14.5 Three-level, 2nd order semi-implicit
14.6 Multiply-upstream SL
14.7 Numerical illustrations & superconvergence
14.8 Two-level SL/SI algorithms
14.9 Noninterpolating SL & numerical diffusion
14.10 Off-grid interpolation
14.10.1 Off-grid interpolation: generalities
14.10.2 Spectral off-grid
14.10.3 Low-order polynomial interpolation
14.10.4 McGregor's Taylor series scheme
14.11 Higher order SL methods
14.12 History and relationships to other methods
14.13 Summary
15. Matrix-solving methods
15.1 Introduction
15.2 Stationary one-step iterations
15.3 Preconditioning: finite difference
15.4 Computing iterates: FFT/matrix multiplication
15.5 Alternative preconditioners
15.6 Raising the order through preconditioning
15.7 Multigrid: an overview
15.8 MRR method
15.9 Delves-Freeman block-and-diagonal iteration
15.10 Recursions & formal integration: constant coefficient ODEs
15.11 Direct methods for separable PDEs
15.12 Fast iterations for almost separable PDEs
15.13 Positive definite and indefinite matrices
15.14 Preconditioned Newton flow
15.15 Summary & proverbs
16. Coordinate transformations
16.1 Introduction
16.2 Programming Chebyshev methods
16.3 Theory of 1-D transformations
16.4 Infinite and semi-infinite intervals
16.5 Maps for endpoint & corner singularities
16.6 Two-dimensional maps & corner branch points
16.7 Periodic problems & the Arctan/Tan map
16.8 Adaptive methods
16.9 Almost-equispaced Kosloff/Tal-Ezer grid
17. Methods for unbounded intervals
17.1 Introduction
17.2 Domain truncation
17.2.1 Domain truncation for rapidly-decaying functions
 
17.7 Rational Chebyshev functions: TB_n
17.8 Behavioral versus numerical boundary conditions
17.9 Strategy for slowly decaying functions
17.10 Numerical examples: rational Chebyshev functions
17.11 Semi-infinite interval: rational Chebyshev TL_n
17.12 Numerical Examples: Chebyshev for semi-infinite interval
17.13 Strategy: Oscillatory, non-decaying functions
17.14 Weideman-Cloot Sinh mapping
17.15 Summary
18. Spherical & Cylindrical geometry
18.1 Introduction
18.2 Polar, cylindrical, toroidal, spherical
18.3 Apparent singularity at the pole
18.4 Polar coordinates: parity theorem
18.5 Radial basis sets and radial grids
18.5.1 One-sided Jacobi basis for the radial coordinate
18.5.2 Boundary value & eigenvalue problems on a disk
18.5.3 Unbounded domains including the origin in Cylindrical coordinates
18.6 Annular domains
18.7 Spherical coordinates: an overview
18.8 The parity factor for scalars: sphere versus torus
18.9 Parity II: Horizontal velocities & other vector components
18.10 The Pole problem: spherical coordinates
18.11 Spherical harmonics: introduction
18.12 Legendre transforms and other sorrows
18.12.1 FFT in longitude/MMT in latitude
18.12.2 Substitutes and accelerators for the MMT
18.12.3 Parity and Legendre Transforms
18.12.4 Hurrah for matrix/vector multiplication
18.12.5 Reduced grid and other tricks
18.12.6 Schuster-Dilts triangular matrix acceleration
18.12.7 Generalized FFT: multipoles and all that
18.12.8 Summary
18.13 Equiareal resolution
18.14 Spherical harmonics: limited-area models
18.15 Spherical harmonics and physics
18.16 Asymptotic approximations, I
18.17 Asymptotic approximations, II
18.18 Software: spherical harmonics
18.19 Semi-implicit: shallow water
18.20 Fronts and topography: smoothing/filters
18.20.1 Fronts and topography
18.20.2 Mechanics of filtering
18.20.3 Spherical splines
18.20.4 Filter order
18.20.5 Filtering with spatially-variable order
18.20.6 Topographic filtering in meteorology
18.21 Resolution of spectral models
18.22 Vector harmonics & Hough functions
18.23 Radial/vertical coordinate: spectral or non-spectral?
18.23.1 Basis for Axial coordinate in cylindrical coordinates
18.23.2 Axial basis in toroidal coordinates
18.23.3 Vertical/radial basis in spherical coordinates
18.24 Stellar convection in a spherical annulus: Glatzmaier (1984)
18.25 Non-tensor grids: icosahedral, etc.
18.26 Robert basis for the sphere
18.27 Parity-modified latitudinal Fourier series
18.28 Projective filtering for latitudinal Fourier series
18.29 Spectral elements on the sphere
18.30 Spherical harmonics besieged
18.31 Elliptic and elliptic cylinder coordinates
18.32 Summary
19. Special tricks
19.1 Introduction
19.2 Sideband truncation
19.3 Special basis functions, I: corner singularities
19.4 Special basis functions, II: wave scattering
19.5 Weakly nonlocal solitary waves
19.6 Root-finding by Chebyshev polynomials
19.7 Hilbert transform
19.8 Spectrally-accurate quadrature methods
19.8.1 Introduction: Gaussian and Clenshaw-Curtis quadrature
19.8.2 Clenshaw-Curtis adaptivity
19.8.3 Mechanics
19.8.4 Integration of periodic functions and the trapezoidal rule
19.8.5 Infinite intervals and the trapezoidal rule
19.8.6 Singular integrands
19.8.7 Sets and solitaries
20. Symbolic calculations
20.1 Introduction
20.2 Strategy
20.3 Examples
20.4 Summary and open problems
21. The Tau-method
21.1 Introduction
21.2 tau-Approximation for a rational function
21.3 Differential equations
21.4 Canonical polynomials
21.5 Nomenclature
22. Domain decomposition methods
22.1 Introduction
22.2 Notation
22.3 Connecting the subdomains: patching
22.4 Weak coupling of elemental solutions
22.5 Variational principles
22.6 Choice of basis & grid
22.7 Patching versus variational formalism
22.8 Matrix inversion
22.9 The influence matrix method
22.10 Two-dimensional mappings & sectorial elements
22.11 Prospectus
23. Books and reviews
A. A bestiary of basis functions
A.1 Trigonometric basis functions: Fourier series
A.2 Chebyshev polynomials T_n(x)
A.3 Chebyshev polynomials of the second kind: U_n(x)
A.4 Legendre polynomials: P_n(x)
A.5 Gegenbauer polynomials
A.6 Hermite polynomials: H_n(x)
A.7 Rational Chebyshev functions: TB_n(y)
A.8 Laguerre polynomials: L_n(x)
A.9 Rational Chebyshev functions: TL_n(y)
A.10 Graphs of convergence domains in the complex plane
B. Direct matrix-solvers
B.1 Matrix factorizations
B.2 Banded matrix
B.3 Matrix-of-matrices theorem
B.4 Block-banded elimination: the "Lindzen-Kuo" algorithm
B.5 Block and "bordered" matrices
B.6 Cyclic banded matrices (periodic boundary conditions)
B.7 Parting shots
C. Newton iteration
C.1 Introduction
C.2 Examples
C.3 Eigenvalue problems
C.4 Summary
D. The continuation method
D.1 Introduction
D.2 Examples
D.3 Initialization strategies
D.4 Limit Points
D.5 Bifurcation points
D.6 Pseudoarclength continuation
E. Change-of-Coordinate derivative transformations
F. Cardinal functions
F.1 Introduction
F.2 General Fourier series: endpoint grid
F.3 Fourier Cosine series: endpoint grid
F.4 Fourier Sine series: endpoint grid
F.5 Cosine cardinal functions: interior grid
F.6 Sine cardinal functions: interior grid
F.7 Sinc(x): Whittaker cardinal function
F.8 Chebyshev Gauss-Lobatto ("endpoints")
F.9 Chebyshev polynomials: interior or "roots" grid
F.10 Legendre polynomials: Gauss-Lobatto grid
G. Transformation of derivative boundary conditions
Glossary; Index; References