Mathematics for Quantum Chemistry

by Jay Martin Anderson
This concise volume offers undergraduates an introduction to mathematical formalism in problems of molecular structure and motion. The main topics cover the calculus of orthogonal functions, algebra of vector spaces, and Lagrangian and Hamiltonian formulation of classical mechanics and applications to molecular motion. Answers to problems. 1966 edition.

Product Details

ISBN-13: 9780486442303
Publisher: Dover Publications
Publication date: 02/11/2005
Series: Dover Books on Chemistry Series
Pages: 176
Product dimensions: 5.50(w) x 8.50(h) x (d)

Read an Excerpt

Mathematics for Quantum Chemistry

By Jay Martin Anderson

Dover Publications, Inc.

Copyright © 2002 Jay Martin Anderson
All rights reserved.
ISBN: 978-0-486-44230-3




Introduction

The mathematics and physics that are relevant to quantum chemistry are, almost without exception, oriented toward the solution of a particular kind of problem, the calculation of properties of a molecular system from the fundamental properties (charge, mass) of the particles composing the system. A good example of this is the calculation of the energy of the electrons in a molecule, using only the charge of the electron, Planck's constant, and so forth. The reader is probably already aware of the nature of the answer to this problem. There are a number of discrete values for the energy which the electrons in the molecule can assume up to a point, but higher values for the electronic energy occur in a continuous range. These energy values are shown qualitatively in Fig. 1-1. Quantum mechanics does provide the result that some physical quantities may take on only some values, not all values, as experiments indicated. The allowed values for a physical quantity are called eigenvalues, from the German for characteristic values. A particular physical quantity may assume an eigenvalue from a continuum, or perhaps from a finite or infinite discrete set of eigenvalues. The energy of an atom, for instance, may take on one of an infinite number of discrete values, as well as values from a higher-lying range of continuous eigenvalues, called the continuum. More often than not, chemistry is concerned with the discrete eigenvalues of a quantity, rather than its continuum of eigenvalues.

The mathematical problem of finding the eigenvalues of a quantity is called an eigenvalue problem; it is usually cast in the form of an equation called an eigenvalue equation. An eigenvalue equation for a physical quantity Q has the deceptively simple appearance

Qf = qf (1–1)

In this equation, f is a function, called the eigenfunction for the quantity Q, with the eigenvalue q. The element Q is called an operator, and the statement Qf tells us to change the function f into a new function, according to a set of instructions implicit in the definition of the operator Q. The eigenvalue equation, Eq. 1-1, then informs us that, by applying these "instructions" of the operator Q to f, we get merely a multiple, q, of the function f. The function Qf differs from the function f by a multiplicative constant q. It may very well be the case that several eigenfunctions have the same eigenvalue; that is, Qf1 = qf1, Qf2 = qf2, and so forth. If this is the case, the eigenvalue q is said to be degenerate, and the number of eigenfunctions that have the same eigenvalue is called the degree of degeneracy.
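These ideas can be checked symbolically. The following is a sketch in Python using the sympy library; the operator and eigenfunctions chosen here are my own illustrations, not examples from the book. Taking Q to be the second-derivative operator d^2/dx^2, both sin(kx) and cos(kx) satisfy Qf = qf with the same eigenvalue q = -k^2, so that eigenvalue is doubly degenerate.

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)

def Q(f):
    # A sample operator: "take the second derivative with respect to x"
    return sp.diff(f, x, 2)

f1, f2 = sp.sin(k * x), sp.cos(k * x)

# Qf = qf with q = -k**2 for both eigenfunctions:
print(sp.simplify(Q(f1) / f1))  # -k**2
print(sp.simplify(Q(f2) / f2))  # -k**2  (the eigenvalue -k**2 is degenerate)
```

Since two distinct eigenfunctions share the eigenvalue -k^2, its degree of degeneracy is (at least) two.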

Operators may simply be numbers or functions; for example, the operator X may be defined by the instruction "multiply the operand function by x"; thus, Xx^2 = x^3. On the other hand, operators may be more complex than just numbers or functions. For example, the student has already used the operator (although probably not by that name) Δ, which means, or is defined by the instructions, "evaluate the change in." For example, if we operate with Δ on the thermodynamic function H, the enthalpy, we get a new quantity, the change in the enthalpy, ΔH = H2 - H1. Another operator that is familiar is d/dx, meaning, "evaluate the derivative with respect to x."

It is the job of quantum mechanics to tell us how to form operators corresponding to the physical quantities which we wish to measure. Our task for the moment will be to learn how to solve the eigenvalue equations for such operators, and especially the vocabulary and concepts that are used to discuss the solutions. Quantum mechanics itself, however, grew up from two different points of view, which represent two analogous mathematical formulations of eigenvalue problems.

The first of these points of view is the wave mechanics of Schrödinger. In wave mechanics, operators are differential expressions, such as the operator d/dx referred to above; the eigenvalue equation then takes the form of a differential equation and relies on the calculus for its solution. The second formulation is the matrix mechanics of Heisenberg, in which operators are represented by algebraic entities called matrices; instead of operating on a function in the eigenvalue equation, the matrix operator operates on a vector χ to transform χ into a vector parallel to χ, but q times as long:

Qχ = qχ (1-2)

Equation 1-2 is the matrix-mechanical formulation of the eigenvalue problem. Matrices and vectors are defined and discussed in detail in Chapter 3. As in Eq. 1-1, q is the eigenvalue of the quantity Q, χ is the eigenvector, and Q is the operator represented as a matrix. The solution of this form of the eigenvalue problem relies on algebra.
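The algebraic eigenvalue problem of Eq. 1-2 can be solved numerically. Here is a minimal sketch in Python using numpy; the matrix is an arbitrary stand-in for a physical operator, not one from the book.

```python
import numpy as np

# An arbitrary symmetric matrix standing in for an operator Q
Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# numpy solves the algebraic eigenvalue problem Q chi = q chi
q_vals, chi_vecs = np.linalg.eigh(Q)

chi, q = chi_vecs[:, 0], q_vals[0]
print(q_vals)                         # eigenvalues: [1. 3.]
print(np.allclose(Q @ chi, q * chi))  # True: Q chi is parallel to chi, q times as long
```

The check in the last line is exactly the statement of Eq. 1-2: applying the operator to an eigenvector merely rescales it by the eigenvalue.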

These apparently different mathematical and physical approaches to quantum mechanical problems are really deeply interrelated; the work of Dirac shows the underlying equivalence of the two points of view, as well as of the corresponding mathematical techniques.


We have briefly discussed the role of eigenvalue equations in quantum mechanics. But a number of problems of classical mechanics may also be expressed in a simple and meaningful way as eigenvalue problems. Among these are the problems of the vibrations and rotations of a mechanical system, such as a molecule. These physical problems are of importance to the chemist concerned with molecular motion and spectroscopy. In vibrations, the normal modes and frequencies of oscillation appear as eigenvectors and eigenvalues; in rotations, the principal axes and moments of inertia emerge from an eigenvalue problem. It should be noted, however, that a correct description of these systems on the molecular level nearly always requires quantum mechanics, not classical mechanics.


With our course thus determined by the kinds of problems we wish to be able to set up, solve, and understand, we shall proceed first to a study of a certain class of functions germane to eigenfunction problems, then to a number of aspects of vector algebra and matrix algebra, finally to a synthesis of the two points of view of eigenvalue problems. We shall conclude with a study of classical mechanics to see how the vibrations of a mechanical system, such as a molecule, may be formulated as an eigenvalue problem. We shall also attempt to formulate Newtonian mechanics in such a way that the connection to quantum mechanics is clear.

Along the way, we shall learn some methods of solving eigenvalue problems, and take up applications of interest in chemistry. Our emphasis throughout will be primarily on concepts, secondarily on methods, and only lastly on the detailed proofs of the mathematical theorems. At the end of each chapter, a set of problems is given. Answers and hints for solution for many of the problems are found at the back of the book.


PROBLEMS

1. Find the eigenfunctions of the operator d/dx.


Orthogonal Functions

Two properties are, almost without exception, possessed by the eigenfunctions of operators corresponding to important physical quantities: orthogonality and normality. The purpose of this chapter is to develop these concepts in detail and to illustrate a number of their applications. Of primary usefulness is the idea of an expansion in orthogonal functions. As an example of this technique, we shall examine the Fourier series in some detail. We shall also learn how to construct orthogonal functions by the Schmidt orthogonalization procedure, and how orthogonal functions arise from the solution of particular differential equations. To illustrate the latter concepts, we shall investigate the properties of the Legendre polynomials, and briefly mention others of the important "special functions" which arise in quantum chemistry. A brief discussion of some of the elements of the calculus and of complex variables is given in the Appendix. The reader would be wise to check his familiarity with this material before advancing into the present chapter.


We may best begin our discussion of orthogonal functions by reviewing the concept of function. The concept of function has three essential ingredients. We agree first to define a function on a particular region of the number scale, say, from a to b. Second, we agree that there exists a variable (say, x) that can independently assume values in the region from a to b. Third, we agree by some prescribed rule that for any value of x there exists a definite value of y. Then we say that y is a function of x on the range a ≤ x ≤ b. This definition may be modified in a number of ways—so as to include more than one independent variable—but these three essential ingredients persist: an independent variable; a range on which the independent variable assumes its values; a dependent variable related to the independent variable by a prescribed rule.

The simplest way of notating the statement "y is a function of x" is to write y = y(x). This notation is compact, yet may be misleading. The left side of the equation is simply the name of a variable—we do not know it is the dependent variable until we see the right side of the equation. The right side uses the letter y again, but here the symbol y( ) means something different from just the name of the variable. The meaning of y( ) is that y is a dependent variable whose value may be found by some prescribed rule from the quantity inside the parentheses. Left out of the notation y = y(x) is the interval, or range, of the independent variable x for which the functional relationship is defined. This is not always of importance in elementary considerations of the idea of function, but it is of supreme importance to the notion of expansion of a function.

Hence, we introduce a definition.

Definition: Expansion interval (or, simply, interval). The expansion interval is the range of the independent variable assumed by the functions under consideration. This does not imply that the function may not be defined for other values of the independent variable; we just decline to consider those other values.

The expansion interval is usually notated [a, b], meaning that the independent variable x is allowed values on the range a ≤ x ≤ b.

We proceed now to four definitions in rapid succession.

Definition: Inner product. The inner product of the two (in general, complex-valued) functions f and g of a continuous variable on their expansion interval [a, b] is

<f | g> = ∫_a^b f*(x) g(x) dx (2-1)
The inner product of two functions is defined on their expansion interval. The inner product is notated by some authors (f, g), but this can easily be confused with the notation for two-dimensional coordinates or for an open interval. We shall use the notation <f | g>. The order is quite important:

<g | f> = <f | g>* (2-2)
For real-valued functions, the order is not important. Equation 2-2 illustrates an important feature of the inner product that arises again and again: "turning around," or transposing, an inner product gives the complex conjugate of that inner product. Constants may be removed at will from the inner product symbol: if b and c are (complex) numbers, <bf | cg> = b* c <f | g>.
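These properties are easy to verify symbolically. The following sketch uses Python with sympy; the sample functions are my own, chosen only so that the complex conjugate is visible.

```python
import sympy as sp

x = sp.symbols('x', real=True)

def inner(f, g, a, b):
    # <f | g> = integral over [a, b] of f*(x) g(x)
    return sp.integrate(sp.conjugate(f) * g, (x, a, b))

f = x + sp.I * x**2   # a complex-valued sample function
g = x**2

fg = inner(f, g, -1, 1)
gf = inner(g, f, -1, 1)

# "Turning around" the inner product gives the complex conjugate: <g|f> = <f|g>*
print(sp.simplify(gf - sp.conjugate(fg)))  # 0
```

For these particular functions the inner product is purely imaginary, so transposing it changes its sign, exactly as Eq. 2-2 requires.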

The inner product is a concept of no small significance. It has a geometrical analog, that of the dot product or scalar product of vectors that may already be familiar, which we shall discuss in Chapter 3.

In analogy to the geometrical property of perpendicularity of vectors, both functions and vectors afford the sweeping and general concept of orthogonality.

Definition: Two functions, f(x) and g(x), are said to be orthogonal on the interval [a, b] if their inner product on [a, b] is zero:

<f | g> = 0
If the inner product is to be zero, it does not matter which function "comes first" in the inner product, so the orthogonality of f and g may be expressed by either <f | g> = 0 or <g | f> = 0. The perpendicularity of two vectors is related to this definition of orthogonality: two vectors are perpendicular if their dot product is zero.

Definition: The norm of a function on the interval [a, b] is the inner product of the function with itself, and may be symbolized by N:

N = <f | f> = ∫_a^b f*(x) f(x) dx
The norm of a function is a real, positive quantity; it is analogous to the square of the length of a vector. That the norm is real and positive may be easily demonstrated by writing f(x) = u(x) + iv(x), with u and v real-valued; then

f*(x) f(x) = [u(x) - iv(x)][u(x) + iv(x)] = u(x)^2 + v(x)^2
which is positive definite. Then the integral of f* f, which gives the norm of f, is also positive definite. The positiveness of the norm is of use to us at once.

Definition: A function is said to be normalized if its norm is one; that is, if <f | f> = 1.

Since the norm of a function on a particular interval is always simply a positive real number, we can always form a multiple of a given function which is normalized. Suppose f has a norm N. Then the function f/N^(1/2) will have a norm of one, since

<f/N^(1/2) | f/N^(1/2)> = (1/N)<f | f> = N/N = 1
The process of dividing a function by the square root of its norm is called normalizing the function, or, sometimes, normalizing the function to unity.
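The normalization procedure can be carried out symbolically. Here is a sketch in Python with sympy, restricted to a real-valued function so that f* f = f^2; the example function f(x) = x is mine, chosen to match the worked examples that follow.

```python
import sympy as sp

x = sp.symbols('x', real=True)

def norm(f, a, b):
    # For real-valued f, the norm is <f | f> = integral of f^2 over [a, b]
    return sp.integrate(f**2, (x, a, b))

N = norm(x, -1, 1)         # norm of f(x) = x on [-1, 1]
f_norm = x / sp.sqrt(N)    # divide by the square root of the norm

print(N)                    # 2/3
print(norm(f_norm, -1, 1))  # 1: the function is now normalized
```

Dividing by N^(1/2) works for any function with a finite, nonzero norm, since the constant pulls out of the inner product as 1/N.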

Let us use the five definitions we have introduced thus far in some examples. Suppose we consider functions defined on the interval [-1, 1]. As an example of the computation of an inner product, let us evaluate <x | x^2>:

<x | x^2> = ∫_-1^1 x · x^2 dx = [x^4/4]_-1^1 = 0
The computation of this simple inner product gives zero. We therefore may state that, on [-1, 1], x and x^2 are orthogonal functions. Notice the importance of specifying the interval: on the interval [0, 1], the inner product <x | x^2> is

<x | x^2> = ∫_0^1 x^3 dx = [x^4/4]_0^1 = 1/4
and the functions are not orthogonal. The expansion interval must be specified before a statement about orthogonality can be made. The same is true for normality. On [-1, 1], the function x has the norm

<x | x> = ∫_-1^1 x^2 dx = [x^3/3]_-1^1 = 2/3
but on the interval [0, 1], the norm is

<x | x> = ∫_0^1 x^2 dx = [x^3/3]_0^1 = 1/3
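The interval-dependence of these results can be reproduced in a few lines. This is a sketch in Python with sympy, restricted to real-valued functions so that the conjugate may be dropped.

```python
import sympy as sp

x = sp.symbols('x', real=True)

def inner(f, g, a, b):
    # Inner product of real-valued functions on [a, b]
    return sp.integrate(f * g, (x, a, b))

print(inner(x, x**2, -1, 1))  # 0:   x and x^2 are orthogonal on [-1, 1]
print(inner(x, x**2, 0, 1))   # 1/4: but not on [0, 1]
print(inner(x, x, -1, 1))     # 2/3: the norm of x on [-1, 1]
print(inner(x, x, 0, 1))      # 1/3: the norm of x on [0, 1]
```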
One very useful property of functions may be introduced at this point. Very often, the integrals which form inner products may be simplified by using symmetry properties of the functions. This symmetry may be expressed by two definitions.

Definition: An even function is a function for which f(x) = f(-x); an odd function is a function for which f(x) = -f(-x).

Evenness or oddness is easily pictured. Figure 2-1a shows a graph of the function f(x) = x^2, which is even, since (x)^2 = (-x)^2. Graphically speaking, the plot of f(x) is symmetrical about the ordinate axis. Figure 2-1b shows the function f(x) = x^3, which is odd, since (x)^3 = -(-x)^3. The plot to the right of the ordinate axis is the negative of the plot to the left. The integrals of even or odd functions are especially simple if the interval is symmetric. The following theorem results.

Theorem: The integral of an even function on a symmetric interval is twice the integral on the half-interval; the integral of an odd function on a symmetric interval is zero.
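The theorem is easy to confirm symbolically. The following sketch in Python with sympy uses x^2 and x^3 as sample even and odd functions on the symmetric interval [-a, a].

```python
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)

even, odd = x**2, x**3   # f(x) = f(-x) and f(x) = -f(-x), respectively

# Even function: the integral on [-a, a] is twice the integral on [0, a]
full = sp.integrate(even, (x, -a, a))
half = sp.integrate(even, (x, 0, a))
print(sp.simplify(full - 2 * half))   # 0

# Odd function: the integral on the symmetric interval vanishes
print(sp.integrate(odd, (x, -a, a)))  # 0
```

This symmetry argument is exactly why <x | x^2> vanished on [-1, 1] above: the integrand x^3 is odd and the interval is symmetric.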


Excerpted from Mathematics for Quantum Chemistry by Jay Martin Anderson. Copyright © 2002 Jay Martin Anderson. Excerpted by permission of Dover Publications, Inc..
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

1. Introduction
2. Orthogonal Functions
3. Linear Algebra
4. Classical Mechanics
5. Conclusion