Optimization in Function Spaces
This highly readable volume on optimization in function spaces is based on author Amol Sasane's lecture notes, which he developed over several years while teaching a course for third-year undergraduates at the London School of Economics. The classroom-tested text is written in an informal but precise style that emphasizes clarity and detail, taking students step by step through each subject.
Numerous examples throughout the text clarify methods, and a substantial number of exercises provide reinforcement. Detailed solutions to all of the exercises make this book ideal for self-study. The topics are relevant to students in engineering and economics as well as mathematics majors. Prerequisites include multivariable calculus and basic linear algebra. The necessary background in differential equations and elementary functional analysis is developed within the text, offering students a self-contained treatment.
"1120836554"
Optimization in Function Spaces
This highly readable volume on optimization in function spaces is based on author Amol Sasane's lecture notes, which he developed over several years while teaching a course for third-year undergraduates at the London School of Economics. The classroom-tested text is written in an informal but precise style that emphasizes clarity and detail, taking students step by step through each subject.
Numerous examples throughout the text clarify methods, and a substantial number of exercises provide reinforcement. Detailed solutions to all of the exercises make this book ideal for self-study. The topics are relevant to students in engineering and economics as well as mathematics majors. Prerequisites include multivariable calculus and basic linear algebra. The necessary background in differential equations and elementary functional analysis is developed within the text, offering students a self-contained treatment.
26.49 In Stock
Optimization in Function Spaces

Optimization in Function Spaces

by Amol Sasane
Optimization in Function Spaces

Optimization in Function Spaces

by Amol Sasane

eBook

$26.49  $34.95 Save 24% Current price is $26.49, Original price is $34.95. You Save 24%.

Available on Compatible NOOK devices, the free NOOK App and in My Digital Library.
WANT A NOOK?  Explore Now

Related collections and offers

LEND ME® See Details

Overview

This highly readable volume on optimization in function spaces is based on author Amol Sasane's lecture notes, which he developed over several years while teaching a course for third-year undergraduates at the London School of Economics. The classroom-tested text is written in an informal but precise style that emphasizes clarity and detail, taking students step by step through each subject.
Numerous examples throughout the text clarify methods, and a substantial number of exercises provide reinforcement. Detailed solutions to all of the exercises make this book ideal for self-study. The topics are relevant to students in engineering and economics as well as mathematics majors. Prerequisites include multivariable calculus and basic linear algebra. The necessary background in differential equations and elementary functional analysis is developed within the text, offering students a self-contained treatment.

Product Details

ISBN-13: 9780486810966
Publisher: Dover Publications
Publication date: 04/10/2016
Series: Aurora: Dover Modern Math Originals
Format: eBook
Pages: 256

About the Author

Amol Sasane received his PhD in mathematics from the University of Groningen in the Netherlands. He is a Lecturer in Mathematics at the London School of Economics.

Read an Excerpt

Optimization in Function Spaces


By Amol Sasane

Dover Publications, Inc.

Copyright © 2016 Amol Sasane
All rights reserved.
ISBN: 978-0-486-81096-6


Contents

Introduction
Part 1. Calculus in Normed Spaces; Euler-Lagrange Equation
Chapter 1. Calculus in Normed Spaces
Chapter 2. Euler-Lagrange Equation
Part 2. Optimal Control
Chapter 3. Differential Equations
Chapter 4. Pontryagin's Minimum Principle
Chapter 5. Bellman's Equation
Solutions
Project on Curve Fitting
Bibliography
Index


CHAPTER 1

Calculus in Normed Spaces


As we discussed in the previous chapter, we wish to differentiate real-valued functions living on vector spaces (such as C[a, b]) whose elements are themselves functions. In order to talk about the derivative, we need a notion of closeness between points of a vector space so that the derivative can be defined. It turns out that vector spaces such as C[a, b] can be equipped with a "norm," and this provides a "distance" between two vectors. Having done this, we have the familiar setting of calculus, and we can talk about the derivative of a function living on a normed space. We then also have analogues of the two facts from ordinary calculus relevant to optimization that were mentioned in the previous chapter, namely the vanishing of the derivative at minimizers, and the sufficiency of this condition for minimization when the function has an increasing derivative. Thus the outline of this chapter is as follows:

(1) We will learn the notion of a "normed space," that is, a vector space equipped with a "norm," enabling one to measure distances between vectors in the vector space. This makes it possible to talk about concepts from calculus, and in particular the notion of differentiability of functions between normed spaces.

(2) We will define what we mean by the derivative of a function f : X -> Y, where X, Y are normed spaces, at a point x0 ∈ X. In other words, we will explain what we mean by "f'(x0)."

(3) We will prove the following two fundamental results in optimization for a real-valued function f on a normed space X:

Fact 1. If x* ∈ X is a minimizer of f, then f'(x*) = 0.

Fact 2. If f is convex and f'(x*) = 0, then x* is a minimizer of f.


1.1. Normed Spaces

We would like to develop calculus in the setting of vector spaces of functions such as C[a, b]. Underlying all the fundamental notions in ordinary calculus is the notion of closeness between points. For example, recall that a sequence (a_n) in R is said to converge with limit L ∈ R if, for every ε > 0, there exists an N ∈ N such that whenever n > N, |a_n − L| < ε. In other words, the sequence in R converges to L if, no matter what distance ε > 0 is given, one can guarantee that all the terms of the sequence beyond a certain index N are at a distance of at most ε away from L (this is the inequality |a_n − L| < ε). So we notice that in this definition of "convergence of a sequence," the notion of distance indeed played a crucial role. After all, we want to say that the terms of the sequence get "close" to the limit, and to measure closeness, we use the distance between points of R. A similar thing happens with continuity and differentiability. For example, recall that a function f : R -> R is said to be continuous at x0 ∈ R if, for every ε > 0, there exists a δ > 0 such that whenever |x − x0| < δ, there holds that |f(x) − f(x0)| < ε. Roughly, given any distance ε, I can find a distance δ such that whenever I choose an x not farther than a distance δ from x0, I am guaranteed that f(x) is not farther than a distance of ε from f(x0). Again, notice the key role played by the distance in this definition. We observe that the distance between two real numbers x, y ∈ R is given by |x − y|, where | · | : R -> R is the absolute value function: for x ∈ R,

|x| = x if x ≥ 0, and |x| = −x if x < 0.

So in order to generalize the notions from ordinary calculus (where we work with real numbers and where the absolute value is used to measure distances) to the situation of vector spaces like C[a, b], we need a notion of distance between elements of the vector space. We will see that this can be done with the introduction of a "norm" on a vector space, a real-valued function ‖·‖ defined on the vector space that plays a role analogous to the absolute value on R. Once we have a norm on a vector space X (in other words, a "normed space"), the distance between x, y ∈ X will be taken as ‖x − y‖. But let us first recall the definition of a vector space.

1.1.1. Vector spaces. Recall that, roughly speaking, a vector space is a set of elements, called "vectors," in which any two vectors can be "added," resulting in a new vector, and any vector can be multiplied by an element from R, so as to give a new vector. The precise definition is given below.


Definition 1.1 (Vector space).

A vector space over R is a set V together with two functions, + : V × V -> V, called vector addition, and · : R × V -> V, called scalar multiplication, that satisfy the following:

(V1) For all x1, x2, x3 ∈ V, x1 + (x2 + x3) = (x1 + x2) + x3.

(V2) There exists an element, denoted by 0 (called the zero vector), such that for all x ∈ V, x + 0 = x = 0 + x.

(V3) For every x ∈ V, there exists an element in V, denoted by −x (∈ V), such that x + (−x) = (−x) + x = 0.

(V4) For all x1, x2 ∈ V, x1 + x2 = x2 + x1.

(V5) For all x ∈ V, 1 · x = x.

(V6) For all x ∈ V and all α, β ∈ R, α · (β · x) = (αβ) · x.

(V7) For all x ∈ V and all α, β ∈ R, (α + β) · x = α · x + β · x.

(V8) For all x1, x2 ∈ V and all α ∈ R, α · (x1 + x2) = α · x1 + α · x2.
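The eight axioms are easy to check mechanically for Rn with componentwise operations. The Python sketch below (the helper names add, smul, and close are my own) spot-checks (V1)-(V8) on random vectors in R^3; this only illustrates the axioms numerically, it does not prove them.

```python
import random

def add(x, y):
    """Componentwise vector addition in R^3."""
    return tuple(a + b for a, b in zip(x, y))

def smul(alpha, x):
    """Componentwise scalar multiplication in R^3."""
    return tuple(alpha * a for a in x)

def close(x, y, eps=1e-9):
    """Componentwise comparison tolerating floating-point round-off."""
    return all(abs(a - b) < eps for a, b in zip(x, y))

random.seed(0)
rv = lambda: tuple(random.uniform(-10, 10) for _ in range(3))
x1, x2, x3 = rv(), rv(), rv()
alpha, beta = random.uniform(-5, 5), random.uniform(-5, 5)
zero = (0.0, 0.0, 0.0)

assert close(add(x1, add(x2, x3)), add(add(x1, x2), x3))                        # (V1)
assert close(add(x1, zero), x1)                                                 # (V2)
assert close(add(x1, smul(-1.0, x1)), zero)                                     # (V3)
assert close(add(x1, x2), add(x2, x1))                                          # (V4)
assert close(smul(1.0, x1), x1)                                                 # (V5)
assert close(smul(alpha, smul(beta, x1)), smul(alpha * beta, x1))               # (V6)
assert close(smul(alpha + beta, x1), add(smul(alpha, x1), smul(beta, x1)))      # (V7)
assert close(smul(alpha, add(x1, x2)), add(smul(alpha, x1), smul(alpha, x2)))   # (V8)
```

Note that (V3) is checked using −x = (−1) · x, which is how the additive inverse is realized in Rn.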

Here are a few examples of vector spaces.

Example 1.2.

(1) R is a vector space over R, with vector addition being the usual addition of real numbers, and scalar multiplication being the usual multiplication of real numbers. It follows from the usual rules of arithmetic that the vector space axioms (V1)-(V8) are satisfied.

(2) Rn is a vector space over R, with addition and scalar multiplication defined as follows:

(x1, ..., xn) + (y1, ..., yn) = (x1 + y1, ..., xn + yn),
α · (x1, ..., xn) = (αx1, ..., αxn).


It can then be checked that the vector space axioms (V1)-(V8) are satisfied.

(3) It can be checked that the set Rm × n of all matrices having real entries, with m rows and n columns, and with the usual matrix addition and multiplication of matrices by scalars is a vector space. The zero element in Rm × n is the zero matrix with all entries equal to 0.

(4) Let a, b ∈ R and a < b. Define

C[a,b] = {x : [a,b] -> R | x is continuous on [a,b]}.

So far we have a set, and now we define

addition + : C[a,b] × C[a,b] -> C[a,b] and

scalar multiplication · : R × C[a,b] -> C[a,b]

as follows. If x1, x2 ∈ C[a, b], then x1 + x2 ∈ C[a, b] is the new function defined by

(x1 + x2)(t) = x1(t) + x2(t), t ∈ [a, b]. (1.1)

If α ∈ R and x ∈ C[a, b], then α · x ∈ C[a,b] is the function given by

(α · x)(t) = αx(t), t ∈ [a, b]. (1.2)


It can be checked that the vector space axioms (V1)-(V8) are satisfied. Thus each vector in C[a, b] is a function from [a, b] to R. In particular, the zero vector is the zero function 0 ∈ C[a, b]: 0(t) = 0, t ∈ [a, b].

(5) Let C1[a, b] denote the space of continuously differentiable functions on [a, b]:

C1[a, b] = {x : [a,b] -> R | x is continuously differentiable on [a,b]}.


(Recall that a function x : [a, b] -> R is continuously differentiable if for all t ∈ [a,b], the derivative of x at t, namely x'(t), exists, and the map t |-> x'(t) : [a, b] -> R is a continuous function.) We note that

C1[a,b] ⊂ C[a,b],


because whenever a function x : [a, b] -> R is differentiable at a point t ∈ [a, b], x is continuous at t. In fact, C1[a, b] is a subspace of C[a, b] because it is closed under addition and scalar multiplication, and is nonempty:

(S1) For all x1, x2 ∈ C1[a, b], x1 + x2 ∈ C1[a, b].

(S2) For all α ∈ R and x ∈ C1[a, b], α · x ∈ C1[a, b].

(S3) 0 ∈ C1[a, b].


Thus C1[a, b] is a vector space with the induced operations from C[a, b], namely the same pointwise operations as defined in (1.1) and (1.2).
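The pointwise operations (1.1)-(1.2) translate directly into code if we model elements of C[a, b] as functions. A minimal Python sketch (the helper names f_add and f_smul are my own):

```python
import math

# Model elements of C[a, b] as Python callables on [a, b].

def f_add(x1, x2):
    """Pointwise sum (1.1): (x1 + x2)(t) = x1(t) + x2(t)."""
    return lambda t: x1(t) + x2(t)

def f_smul(alpha, x):
    """Pointwise scalar multiple (1.2): (alpha . x)(t) = alpha * x(t)."""
    return lambda t: alpha * x(t)

# Example in C[0, 1]: the sum of sin(2*pi*t) and the identity t -> t,
# evaluated at t = 1/4, is sin(pi/2) + 1/4.
s = f_add(lambda t: math.sin(2 * math.pi * t), lambda t: t)
print(s(0.25))  # ~ 1.25
```

The sum of two continuous functions is again continuous, which is exactly the closure property (S1) above, here for C[a, b].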

Exercise 1.3. Let S := {x ∈ C1[a, b] : x(a) = ya and x(b) = yb}, where ya, yb ∈ R. Prove that S is a subspace of C1[a, b] if and only if ya = yb = 0. (So we see that S is a vector space with pointwise operations if and only if ya = yb = 0.)

Exercise 1.4. Show that C[0, 1] is not a finite dimensional vector space.

Hint: One can prove this by contradiction. Suppose C[0, 1] is a finite dimensional vector space with dimension d. First show that the set B = {t, t^2, ..., t^d} is linearly independent. Then B is a basis for C[0, 1], and so the constant function 1 should be a linear combination of the functions from B. Derive a contradiction.

(So we see that C[a, b], which is isomorphic to C[0, 1], is not finite dimensional either. Typically vector spaces of functions such as C[a, b], C1[a, b], etc. are not finite dimensional. Hence, morally, our optimization problems in function spaces are not problems which can be converted to an equivalent optimization problem in Rn by a choice of basis.)

1.1.2. Normed spaces. In order to do calculus (that is, speak about limiting processes, convergence, continuity, and differentiability) in vector spaces, we need a notion of "distance" or "closeness" between the vectors of the vector space. This is provided by the notion of a norm, as we shall see below. A "normed space" is just a vector space with a norm.

Definition 1.5 (Norm). Let X be a vector space over R. A norm on X is a function ‖·‖ : X -> R such that:

(N1) (Positive definiteness) For all x ∈ X, ‖x‖ ≥ 0. If x ∈ X and ‖x‖ = 0, then x = 0.

(N2) For all α ∈ R and for all x ∈ X, ‖α · x‖ = |α| ‖x‖.

(N3) (Triangle inequality) For all x, y ∈ X, ‖x + y‖ ≤ ‖x‖ + ‖y‖.


A normed space is a vector space X equipped with a norm.
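As a sanity check, the three axioms can be tested numerically on random samples. The sketch below (the helper name norm2 is my own) does this for the Euclidean norm on R^4; random sampling only illustrates (N1)-(N3), it does not prove them.

```python
import random

def norm2(x):
    """Euclidean norm on R^n: the square root of the sum of squares."""
    return sum(a * a for a in x) ** 0.5

random.seed(1)
for _ in range(100):
    x = [random.uniform(-10, 10) for _ in range(4)]
    y = [random.uniform(-10, 10) for _ in range(4)]
    alpha = random.uniform(-5, 5)
    assert norm2(x) >= 0                                                        # (N1)
    assert abs(norm2([alpha * a for a in x]) - abs(alpha) * norm2(x)) < 1e-9    # (N2)
    assert norm2([a + b for a, b in zip(x, y)]) <= norm2(x) + norm2(y) + 1e-9   # (N3)
assert norm2([0.0] * 4) == 0.0  # the zero vector has norm 0
```

Verifying (N3) rigorously for the Euclidean norm requires the Cauchy-Schwarz inequality, as noted in Example 1.8 below.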

Distance in a normed space. Just as in R, where the distance between x, y ∈ R is |x − y|, in a normed space (X, ‖·‖) the number ‖x − y‖ is taken as the distance between x, y ∈ X. Thus ‖x‖ = ‖x − 0‖ is the distance of x from the zero vector 0 in X.

Remark 1.6. The reader familiar with "metric spaces" may notice that in a normed space (X, ‖·‖), if we define d : X × X -> R by d(x, y) = ‖x − y‖ for x, y ∈ X, then (X, d) is a metric space with the metric/distance function d.

We now give a few examples of normed spaces.

Example 1.7. R is a vector space over R, and if we define ‖·‖ : R -> R by ‖x‖ = |x| for x ∈ R, then R becomes a normed space.

Example 1.8. Rn is a vector space over R, and let us define the Euclidean norm ‖·‖2 by

‖x‖2 = √(x1^2 + ... + xn^2) for x = (x1, ..., xn) ∈ Rn.

Then Rn is a normed space (see Exercise 1.15 on page 17; the verification of (N3) requires the Cauchy-Schwarz inequality).

This is not the only norm that can be defined on Rn. For example,

‖x‖1 = |x1| + ... + |xn| and ‖x‖∞ = max{|x1|, ..., |xn|}, x = (x1, ..., xn) ∈ Rn,

are also examples of norms (see Exercise 1.15 on page 17).

Note that (Rn, ‖·‖2), (Rn, ‖·‖1) and (Rn, ‖·‖∞) are all different normed spaces. This illustrates the important fact that from a given vector space we can obtain various normed spaces by choosing different norms. Which norm is considered depends on the particular application at hand. We illustrate this in the next paragraph.

Imagine a city (like New York) in which there are streets and avenues with blocks in between, forming a square grid as shown in Figure 1. Then if we take a taxi/cab to go from point A to point B in the city, it is clear that it isn't the Euclidean norm in R2 which is relevant, but rather the ‖·‖1-norm in R2. (It is for this reason that the ‖·‖1-norm is sometimes called the taxicab norm.)
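The taxicab picture can be made concrete. For a displacement of 3 blocks east and 4 blocks north, the three norms of Example 1.8 give three different "distances travelled"; a short Python illustration:

```python
# The three norms of Example 1.8 evaluated on the same vector in R^2.
# Walking 3 blocks east and 4 blocks north: the taxicab distance is 7,
# while the Euclidean ("as the crow flies") distance is 5.
x = (3.0, 4.0)

norm1 = sum(abs(a) for a in x)            # ||x||_1, taxicab norm
norm2 = sum(a * a for a in x) ** 0.5      # ||x||_2, Euclidean norm
norm_inf = max(abs(a) for a in x)         # ||x||_oo, max norm

print(norm1, norm2, norm_inf)  # 7.0 5.0 4.0
```

The taxi drives 7 blocks; a crow flies 5.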

So what norm one uses depends on the situation at hand, and is something that the modeler decides. It is not something that falls out of the sky!

Example 1.9. C[a, b] with the supremum norm ‖·‖∞. Let a, b ∈ R and a < b. Consider the vector space C[a, b] comprising functions that are continuous on [a, b], with addition and scalar multiplication defined earlier by (1.1)-(1.2). Define

‖x‖∞ = sup{|x(t)| : t ∈ [a, b]} = max{|x(t)| : t ∈ [a, b]}. (1.3)

Then ‖·‖∞ is a norm on C[a, b], and is referred to as the "supremum norm." The second equality above, guaranteeing that the supremum is attained, that is, that there is a c ∈ [a, b] such that

|x(c)| = sup{|x(t)| : t ∈ [a, b]},

follows from the Extreme Value Theorem for continuous functions.
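Numerically, the supremum norm (1.3) can be approximated by sampling on a fine grid; since the Extreme Value Theorem guarantees the supremum is attained, the sampled maximum approaches ‖x‖∞ as the grid is refined. A rough sketch (the helper name sup_norm is my own):

```python
def sup_norm(x, a, b, samples=100001):
    """Approximate ||x||_oo = max |x(t)| over [a, b] by grid sampling."""
    h = (b - a) / (samples - 1)
    return max(abs(x(a + i * h)) for i in range(samples))

# The function t(1 - t) on [0, 1] attains its maximum 1/4 at t = 1/2.
print(round(sup_norm(lambda t: t * (1 - t), 0.0, 1.0), 6))  # 0.25
```

For a non-continuous function the supremum need not be attained, and a grid approximation of this kind can miss it entirely; continuity is what makes (1.3) a max rather than just a sup.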

Exercise 1.10. In C[0, 1] equipped with the ‖·‖∞-norm, calculate the norms of t, −t, t^n and sin(2πnt), where n ∈ N.

Let us check that ‖·‖∞ on C[a, b] does satisfy (N1), (N2), (N3).

(N1) For x ∈ C[a, b], clearly

‖x‖∞ = sup{|x(t)| : t ∈ [a, b]} ≥ 0,

since for each t ∈ [a, b], |x(t)| ≥ 0. Also, if x ∈ C[a, b] is such that ‖x‖∞ = 0, then for each t ∈ [a, b],

0 ≤ |x(t)| ≤ sup{|x(τ)| : τ ∈ [a, b]} = ‖x‖∞ = 0.

So for each t ∈ [a, b], |x(t)| = 0, and so x(t) = 0. In other words, x = 0, the zero function in C[a, b].

(N2) If α ∈ R and x ∈ C[a, b], then for each t ∈ [a, b],

|(α · x)(t)| = |α| |x(t)|,

and so

‖α · x‖∞ = sup{|α| |x(t)| : t ∈ [a, b]} = |α| sup{|x(t)| : t ∈ [a, b]} = |α| ‖x‖∞.

(N3) Let x1, x2 ∈ C[a, b]. If t ∈ [a, b], then

|x1(t) + x2(t)| ≤ |x1(t)| + |x2(t)| ≤ ‖x1‖∞ + ‖x2‖∞.

As this holds for all t ∈ [a, b], we also obtain

‖x1 + x2‖∞ = sup{|x1(t) + x2(t)| : t ∈ [a, b]} ≤ ‖x1‖∞ + ‖x2‖∞.
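The (N3) computation can be spot-checked numerically, again approximating each supremum by sampling on a grid (the helper name sup_norm is my own; this illustrates the inequality, it is not a proof):

```python
import math

def sup_norm(x, samples=10001):
    """Approximate ||x||_oo on [0, 1] by sampling at evenly spaced points."""
    return max(abs(x(i / (samples - 1))) for i in range(samples))

x1 = lambda t: math.sin(2 * math.pi * t)
x2 = lambda t: math.cos(2 * math.pi * t)

lhs = sup_norm(lambda t: x1(t) + x2(t))   # ||x1 + x2||_oo
rhs = sup_norm(x1) + sup_norm(x2)         # ||x1||_oo + ||x2||_oo
assert lhs <= rhs
print(lhs, rhs)  # lhs ~ 1.414 (= sqrt 2), rhs = 2.0
```

The gap between the two sides comes from the fact that sin(2πt) and cos(2πt) attain their individual maxima at different points of [0, 1], while the triangle inequality bounds the sum as if they peaked together.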


(Continues...)

Excerpted from Optimization in Function Spaces by Amol Sasane. Copyright © 2016 Amol Sasane. Excerpted by permission of Dover Publications, Inc.
