Predicative Arithmetic. (MN-32)

by Edward Nelson

Paperback

$44.00 

Overview

This book develops arithmetic without the induction principle, working in theories that are interpretable in Raphael Robinson's theory Q. Certain inductive formulas, the bounded ones, are interpretable in Q. A mathematically strong, but logically very weak, predicative arithmetic is constructed.

Originally published in 1986.

The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback and hardcover editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.


Product Details

ISBN-13: 9780691610290
Publisher: Princeton University Press
Publication date: 07/14/2014
Series: Mathematical Notes , #32
Pages: 200
Product dimensions: 6.00(w) x 9.00(h) x (d)

Read an Excerpt

Predicative Arithmetic

Mathematical Notes 32


By Edward Nelson

PRINCETON UNIVERSITY PRESS

Copyright © 1986 Princeton University Press
All rights reserved.
ISBN: 978-0-691-08455-8



CHAPTER 1

The impredicativity of induction


The induction principle is this: if a property holds for 0, and if whenever it holds for a number n it also holds for n + 1, then the property holds for all numbers. For example, let θ(n) be the property that there exists a number m such that 2 · m = n · (n + 1). Then θ(0) (let m = 0). Suppose 2 · m = n · (n + 1). Then 2 · (m + n + 1) = (n + 1) · ((n + 1) + 1), and thus if θ(n) then θ(n + 1). The induction principle allows us to conclude θ(n) for all numbers n. As a second example, let π(n) be the property that there exists a non-zero number m that is divisible by all numbers from 1 to n. Then π(0) (let m = 1). Suppose m is a non-zero number that is divisible by all numbers from 1 to n. Then m · (n + 1) is a non-zero number that is divisible by all numbers from 1 to n + 1, and thus if π(n) then π(n + 1). The induction principle would allow us to conclude π(n) for all numbers n.
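
The two inductive constructions can be checked mechanically. The following Python sketch (our illustration, not part of Nelson's text) builds the witness m for θ(n) and for π(n) exactly as the inductive steps prescribe, and verifies the claimed properties for small n:

def theta_witness(n):
    # build m with 2*m == n*(n + 1) exactly as the inductive step does
    m = 0                      # theta(0): let m = 0
    for k in range(n):
        m = m + k + 1          # if 2*m == k*(k+1), then 2*(m+k+1) == (k+1)*((k+1)+1)
    return m

def pi_witness(n):
    # build a non-zero m divisible by 1, ..., n exactly as the inductive step does
    m = 1                      # pi(0): let m = 1
    for k in range(n):
        m = m * (k + 1)        # m*(k+1) is divisible by 1, ..., k+1
    return m

for n in range(8):
    m1, m2 = theta_witness(n), pi_witness(n)
    assert 2 * m1 == n * (n + 1)
    assert m2 != 0 and all(m2 % d == 0 for d in range(1, n + 1))
    print(n, m1, m2)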

The reason for mistrusting the induction principle is that it involves an impredicative concept of number. It is not correct to argue that induction only involves the numbers from 0 to n; the property of n being established may be a formula with bound variables that are thought of as ranging over all numbers. That is, the induction principle assumes that the natural number system is given. A number is conceived to be an object satisfying every inductive formula; for a particular inductive formula, therefore, the bound variables are conceived to range over objects satisfying every inductive formula, including the one in question.

In the first example, at least one can say in advance how big is the number m whose existence is asserted by θ(n): it is no bigger than n · (n + 1). This induction is bounded, and one can hope that a predicative treatment of numbers can be constructed that yields the result θ(n). In the second example, the number m whose existence is asserted by π(n) cannot be bounded in terms of the data of the problem.
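
As a purely numerical aside of our own, the contrast can be tabulated: the witness for θ(n) never needs to exceed n · (n + 1), while the least non-zero number divisible by all of 1, ..., n is lcm(1, ..., n), which grows far faster than any fixed polynomial in n.

from functools import reduce
from math import lcm                          # requires Python 3.9+

for n in range(1, 16):
    least_pi_witness = reduce(lcm, range(1, n + 1), 1)
    # bound on theta's witness vs. least witness for pi
    print(n, n * (n + 1), least_pi_witness)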

It appears to be universally taken for granted by mathematicians, whatever their views on foundational questions may be, that the impredicativity inherent in the induction principle is harmless — that there is a concept of number given in advance of all mathematical constructions, that discourse within the domain of numbers is meaningful. But numbers are symbolic constructions; a construction does not exist until it is made; when something new is made, it is something new and not a selection from a preexisting collection. There is no map of the world because the world is coming into being.

Let us explore the possibility of developing arithmetic predicatively.

CHAPTER 2

Logical terminology


I tried several times to write a brief, clear summary of the logical terminology that will be used in this investigation, but it always came out long and muddy. Instead, I refer the reader to the beautiful exposition in Shoenfield's book [Sh], especially the first four chapters. Our logical terminology and notation are those of [Sh] except for some departures and additions that will be indicated.

Lower case italic letters, possibly with 0, 1, ... as a subscript, are variables. The order a, b, ..., z, a0, b0, ..., z0, a1, ... of the variables is called alphabetical order. Roman letters are used as in [Sh] as syntactical variables when talking about expressions.

We define Ax[a] as follows: if no variable occurring in the term a occurs bound in the formula A, substitute a for each free occurrence of the variable x in A; otherwise let x1, ..., xv be in alphabetical order the variables that occur in a and occur bound in A, let y1, ..., yv be in alphabetical order the first v variables distinct from all variables occurring in A or a, substitute yμ for each bound occurrence of xμ in A for all μ from 1 to v, and then substitute a for each free occurrence of x. For example, if A is

∀x∀y (x + y) + z = x + (y + z),


then Az[0] is ∀x∀y (x + y) + 0 = x + (y + 0) and Az[x] is ∀a∀y (a + y) + x = a + (y + x). We write

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].
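
For readers who want the bookkeeping spelled out, here is a small Python sketch of our own devising (handling only equations and the quantifier ∀, with single-letter variables) of the substitution Ax[a] defined above; it reproduces the computation of Az[0] and Az[x] from the example.

def term_vars(t):
    # variables occurring in a term: a variable name (str) or ('f', t1, ..., tn)
    return {t} if isinstance(t, str) else {v for s in t[1:] for v in term_vars(s)}

def bound_vars(A):
    return set() if A[0] == 'eq' else {A[1]} | bound_vars(A[2])

def all_vars(A):
    if A[0] == 'eq':
        return term_vars(A[1]) | term_vars(A[2])
    return {A[1]} | all_vars(A[2])

def subst_term(t, x, a):
    if isinstance(t, str):
        return a if t == x else t
    return (t[0],) + tuple(subst_term(s, x, a) for s in t[1:])

def subst_free(A, x, a):
    # substitute a for the free occurrences of x
    if A[0] == 'eq':
        return ('eq', subst_term(A[1], x, a), subst_term(A[2], x, a))
    v, B = A[1], A[2]
    return A if v == x else ('forall', v, subst_free(B, x, a))

def rename_bound(A, y, z):
    # replace every bound occurrence of y in A by z
    if A[0] == 'eq':
        return A
    v, B = A[1], A[2]
    if v == y:
        return ('forall', z, rename_bound(subst_free(B, y, z), y, z))
    return ('forall', v, rename_bound(B, y, z))

def subst(A, x, a):
    # Ax[a] as defined above: rename the bound variables of A that occur in a
    # to fresh variables (alphabetically), then substitute a for the free x
    clash = sorted(term_vars(a) & bound_vars(A))
    used = all_vars(A) | term_vars(a)
    fresh = (v for v in 'abcdefghijklmnopqrstuvwxyz' if v not in used)
    for y in clash:
        A = rename_bound(A, y, next(fresh))
    return subst_free(A, x, a)

# The example from the text: A is  ∀x∀y (x + y) + z = x + (y + z)
plus = lambda s, t: ('+', s, t)
A = ('forall', 'x', ('forall', 'y',
     ('eq', plus(plus('x', 'y'), 'z'), plus('x', plus('y', 'z')))))
print(subst(A, 'z', ('0',)))   # Az[0]: ∀x∀y (x + y) + 0 = x + (y + 0)
print(subst(A, 'z', 'x'))      # Az[x]: ∀a∀y (a + y) + x = a + (y + x)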


We make the following abbreviations, in addition to those of [Sh]. If p1, ..., pv are binary predicate symbols, we write

a0p1a1 ... av-1pvav for a0p1a1 & ... & av-1pvav,


as in x = y = z for x = y & y = z. We follow [Sh] in associating ∨, &, and -> from right to left, and letting ∨ and & take precedence over -> and <-> in restoring parentheses, so that

A1 -> ... -> Av-1 -> Av


is equivalent to

A1 & ... & Av-1 -> Av,


but we adopt the convention that

A1 <-> ... <-> Av-1 <-> Av

is an abbreviation for

(A1 <-> A2) & ... & (Av-1 <-> Av).


If A is A1 -> A2, we write hypA for A1 and conA for A2. If A is A1 <-> A2, we write lhsA for A1 and rhsA for A2. We write

∃!xA for ∃x(A & ∀y(Ax[y] -> y = x)),


where y is the first variable in alphabetical order distinct from x and all variables occurring in A. If all bound occurrences of x in A occur in the part ∃xB, we write scope∃x A for B, and similarly with ∃ replaced by ∃! or ∀.

We write the defining axiom of a function symbol f as

fx1 ... xv = y <-> D


(instead of y = fx1 ... xv <-> D as in [Sh,§4.6]). Sometimes we write it in the form

fx1 ... xv = y <-> A, otherwise y = e


where e is a constant. This is an abbreviation for

fx1 ... xv = y <-> A ∨ (¬∃yA & y = e).


Then the existence condition is trivial, but the uniqueness condition still has to be verified. Also, we sometimes adjoin a new function symbol f by writing

fx1 ... xv = a


where a is a term containing no variable other than x1, ..., xv; the defining axiom is understood to be

fx1 ... xv = y <-> y = a


where y is the first variable in alphabetical order distinct from x1, ..., xv.

A formula will be called unary in case one and only one variable occurs free in it. If C is unary and x is the variable occurring free in it, we write C[a] for Cx[a]. Let C be a unary formula. Then we write A^C for the formula obtained by replacing each part of A of the form ∃yB by ∃y(C[y] & B). (We follow [Sh] in regarding ∀y as an abbreviation for ¬∃y¬; if one chooses not to eliminate the defined symbol ∀, then an equivalent formula is obtained if in addition one replaces each part of A of the form ∀yB by ∀y(C[y] -> B).) Let x1, ..., xv be in alphabetical order the variables occurring free in A; then we write

C(freeA) for C[x1] & ··· & C[xv].


If A is closed, then C(freeA) is the empty expression; we make the convention that if A is closed, then all occurrences of "C(freeA)" together with attendant logical connectives are to be deleted. (In general, I will not worry about the distinction between use and mention, but in this case I use quotation marks lest the reader be puzzled as to how to go about deleting the empty expression.) We write

A_C for C(freeA) -> A^C


and call A_C the relativization of A by C.
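
As an illustration of our own (again over a toy tuple representation of formulas, not the book's official syntax), the passage from A to A^C and A_C can be carried out mechanically. Here C is supplied as a function taking a variable to the formula C[that variable], and c is a hypothetical unary predicate symbol standing in for C:

def term_vars(t):
    # variables occurring in a term: a variable name (str) or ('f', t1, ..., tn)
    return {t} if isinstance(t, str) else {v for s in t[1:] for v in term_vars(s)}

def relativize_quantifiers(A, C):
    # A^C: replace each part ('exists', y, B) by ('exists', y, C[y] & B^C)
    tag = A[0]
    if tag == 'exists':
        y, B = A[1], A[2]
        return ('exists', y, ('and', C(y), relativize_quantifiers(B, C)))
    if tag in ('and', 'imp'):
        return (tag, relativize_quantifiers(A[1], C), relativize_quantifiers(A[2], C))
    if tag == 'not':
        return ('not', relativize_quantifiers(A[1], C))
    return A                                   # atomic formula

def free_vars(A, bound=frozenset()):
    tag = A[0]
    if tag == 'exists':
        return free_vars(A[2], bound | {A[1]})
    if tag in ('and', 'imp'):
        return free_vars(A[1], bound) | free_vars(A[2], bound)
    if tag == 'not':
        return free_vars(A[1], bound)
    return {v for t in A[1:] for v in term_vars(t)} - bound   # atomic formula

def relativization(A, C):
    # A_C: C(freeA) -> A^C, with the hypotheses C[x1], ..., C[xv] supplied as a
    # right-associated chain of implications (equivalent, as noted earlier)
    body = relativize_quantifiers(A, C)
    for x in reversed(sorted(free_vars(A))):
        body = ('imp', C(x), body)
    return body

# Example: C[x] is c(x), and A is the formula  ∃y x + y = z
C = lambda x: ('c', x)
A = ('exists', 'y', ('eq', ('+', 'x', 'y'), 'z'))
print(relativization(A, C))
# prints the tuple for  c(x) -> c(z) -> ∃y(c(y) & x + y = z)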

Let C be a unary formula of a theory T. For f a v-ary function symbol, we say that C respects f in case

⊢T C[x1] -> ... -> C[xv] -> C[fx1 ... xv].


We say that C respects A in case ⊢T A_C. We say that C is inductive in case C respects 0 and S (assuming T to contain the constant 0 and the unary function symbol S), so that C is inductive if and only if

⊢T C[0] & (C[x] -> C[Sx])


(or, equivalently, if and only if ⊢T C[0] & ∀x(C[x] -> C[Sx])). We say that C is hereditary in case

⊢T C[x] & y ≤ x -> C[y]


(assuming T to contain the binary predicate symbol ≤). If C' is also unary, we say that C' is stronger than C in case

⊢T C'[x] -> C[x].


If x occurs free in A and y1, ..., yv are in alphabetical order the variables distinct from x that occur free in A, we write

A/x for ∀y1 ... ∀yv A,


and we say that A is inductive in x in case A/x is inductive. If p is a unary predicate symbol, we say that p is inductive, etc., in case the formula p(x) is inductive, etc.

The definitions in the preceding paragraph are all relative to the theory T. If T is not clear from the context, we add in T.

We write

indx A for Ax[0] & ∀x(A -> Ax[Sx]).


An induction formula is a formula of the form indx A -> A.
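
As a small worked instance of our own (using the formula that will appear as axiom (3.3) in Chapter 3), if A is x + 0 = x, then indx A is

(0 + 0 = 0) & ∀x(x + 0 = x -> Sx + 0 = Sx),


and the corresponding induction formula is indx A -> x + 0 = x.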

We write T[B1, ..., Bλ] for the theory obtained by adjoining to the language of T all of the new nonlogical symbols in B1, ..., Bλ, and then adjoining B1, ..., Bλ as nonlogical axioms.

Let C be a unary formula of the theory T. We say that C respects T in case C respects every function symbol and nonlogical axiom of T and ⊢T ∃xC[x]. This last condition follows from the first if T contains a constant (0-ary function symbol) e, for then ⊢T C[e]. Let C respect T, let U be the extension by definitions of T obtained by adjoining the unary predicate symbol U1 with defining axiom U1x <-> C[x], and let u1 be u for each nonlogical symbol u of T. Then we have an interpretation (see [Sh,§4.7]) of T in U, which we call the interpretation associated with C.

We write (ξ), where ξ is some label, as an abbreviation for the formula with that label, and we do not hesitate to treat it as a formula. For example, the formula with label 3.4 will be found in Chapter 3; it is x + Sy = S(x + y), so (3.4)y[0] is x + S0 = S(x + 0).

When we introduce a hypothesis in a proof we use the word suppose, and the discharge of the hypothesis is indicated by the word thus. (Suppose we adopt this convention (suppose, that is, that in proofs we use these two words (suppose and thus) in the manner indicated, and only thus). Then these two words (suppose and thus) function much as parentheses (suppose playing the role of the left parenthesis, with the role of the right parenthesis being played by thus). We hope to achieve clarity thus.)

As is customary in mathematics, we use letters — for the purposes of this paragraph, a letter is an italic lower case letter possibly with 0, 1, ... as a subscript — sometimes as variables and, in the course of a proof, sometimes as constants, and a single letter may play both roles in a given proof. When we have proved [there exists]x1 ... [there exists]xvA, we indicate the introduction of constants (see the discussion of special constants in [Sh,§4.2]) by writing

there exist x1, ..., and xv such that A


(with the appropriate change in grammar or punctuation when v is 1 or 2). We write

let x1 = a1, ..., xv = av


instead of

there exist x1, ..., and xv such that x1 = a1 & ... & xv = av.


A numeral is an expression of the form S ... S0. We use n, possibly with a subscript, as a syntactical variable for numerals, and v̄ is an abbreviation for the numeral with v occurrences of S.
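
A trivial rendering of this convention in Python (ours, for illustration only):

def numeral(v):
    # the numeral with v occurrences of S applied to 0
    return 'S' * v + '0'

print(numeral(0), numeral(3))    # prints: 0 SSS0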

CHAPTER 3

The axioms of arithmetic


By Peano Arithmetic we mean the theory I whose nonlogical symbols are the constant 0, the unary function symbol S (successor), and the binary function symbols + and ·, and whose nonlogical axioms are

3.1 Ax. Sx ≠ 0,

3.2 Ax. Sx = Sy -> x = y,

3.3 Ax. x + 0 = x,

3.4 Ax. x + Sy = S(x + y),

3.5 Ax. x · 0 = 0,

3.6 Ax. x · Sy = x · y + x,


and all induction formulas in the language of I. We have adopted the usual convention that · takes precedence over + in restoring parentheses, so that (3.6) is an abbreviation for x · Sy = (x · y) + x.

If we simply drop the induction formulas as axioms, then the resulting theory is too weak to be of much arithmetical interest; a model-theoretic argument to this effect is given by Mostowski, Robinson, and Tarski in [MRT, pp.62-64].

Robinson's theory (see [Ro]) is the theory Q with the language of I whose nonlogical axioms are (3.1)–(3.6) and the following formula (which is a theorem of I since it is easily proved by induction):

R. x ≠ 0 -> ∃y Sy = x.


This beautiful and much studied theory is in a sense a minimal axiomatization of arithmetic. We will work only in theories that are interpretable in Q.

It will be convenient to reformulate Q as an open theory. The formula

3.7 Ax. Px = y <-> Sy = x ∨ (x = 0 & y = 0)

is the defining axiom of a unary function symbol P (predecessor) that can be adjoined to Q; the existence condition holds by (R) and the uniqueness condition holds by (3.2). Let Q0 be the theory whose nonlogical symbols are those of Q together with P and whose nonlogical axioms are (3.1)–(3.7); then (R) is a theorem, but not an axiom, of Q0.
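
As a purely illustrative aside of our own (it establishes nothing about interpretability or consistency), one can check mechanically that the standard natural numbers, with the usual successor, addition, and multiplication and with the predecessor fixed by (3.7), satisfy (3.1)–(3.7), and hence (R), on any finite initial segment. A Python sketch:

N = range(50)
S = lambda x: x + 1
P = lambda x: x - 1 if x != 0 else 0

for x in N:
    assert S(x) != 0                                              # 3.1
    assert x + 0 == x and x * 0 == 0                              # 3.3, 3.5
    assert x == 0 or S(P(x)) == x                                 # R, as a consequence
    for y in N:
        assert (S(x) == S(y)) <= (x == y)                         # 3.2 (<= is "implies" on bools)
        assert x + S(y) == S(x + y)                               # 3.4
        assert x * S(y) == x * y + x                              # 3.6
        assert (P(x) == y) == (S(y) == x or (x == 0 and y == 0))  # 3.7
print("(3.1)-(3.7), and hence R, hold on this finite segment")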


(Continues...)

Excerpted from Predicative Arithmetic by Edward Nelson. Copyright © 1986 Princeton University Press. Excerpted by permission of PRINCETON UNIVERSITY PRESS.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

  • FrontMatter, pg. i
  • Acknowledgments, pg. v
  • Table of Contents, pg. vii
  • Chapter 1. The impredicativity of induction, pg. 1
  • Chapter 2. Logical terminology, pg. 3
  • Chapter 3. The axioms of arithmetic, pg. 8
  • Chapter 4. Order, pg. 10
  • Chapter 5. Induction by relativization, pg. 12
  • Chapter 6. Interpretability in Robinson's theory, pg. 16
  • Chapter 7. Bounded induction, pg. 23
  • Chapter 8. The bounded least number principle, pg. 29
  • Chapter 9. The euclidean algorithm, pg. 32
  • Chapter 10. Encoding, pg. 36
  • Chapter 11. Bounded separation and minimum, pg. 43
  • Chapter 12. Sets and functions, pg. 46
  • Chapter 13. Exponential functions, pg. 51
  • Chapter 14. Exponentiation, pg. 54
  • Chapter 15. A stronger relativization scheme, pg. 60
  • Chapter 16. Bounds on exponential functions, pg. 64
  • Chapter 17. Bounded replacement, pg. 70
  • Chapter 18. An impassable barrier, pg. 73
  • Chapter 19. Sequences, pg. 82
  • Chapter 20. Cardinality, pg. 90
  • Chapter 21. Existence of sets, pg. 95
  • Chapter 22. Semibounded Replacement, pg. 98
  • Chapter 23. Formulas, pg. 101
  • Chapter 24. Proofs, pg. 111
  • Chapter 25. Derived rules of inference, pg. 115
  • Chapter 26. Special constants, pg. 134
  • Chapter 27. Extensions by definition, pg. 136
  • Chapter 28. Interpretations, pg. 152
  • Chapter 29. The arithmetization of arithmetic, pg. 157
  • Chapter 30. The consistency theorem, pg. 162
  • Chapter 31. Is exponentiation total?, pg. 173
  • Chapter 32. A modified Hilbert program, pg. 178
  • Bibliography, pg. 181
  • General index, pg. 183
  • Index of defining axioms, pg. 186


