Algebraic Logic

by Paul R. Halmos

Overview

Beginning with an introduction to the concepts of algebraic logic, this concise volume features ten articles by a prominent mathematician that originally appeared in journals from 1954 to 1959. Covering monadic and polyadic algebras, these articles are essentially self-contained and accessible to a general mathematical audience, requiring no specialized knowledge of algebra or logic.
Part One addresses monadic algebras, with articles on general theory, representation, and freedom. Part Two explores polyadic algebras, progressing from general theory and terms to equality. Part Three offers three items on polyadic Boolean algebras, including a survey of predicates, terms, operations, and equality. The book concludes with an additional bibliography and index.

Product Details

ISBN-13: 9780486810416
Publisher: Dover Publications
Publication date: 03/17/2016
Series: Dover Books on Mathematics
Sold by: Barnes & Noble
Format: eBook
Pages: 272
File size: 30 MB

About the Author

Paul R. Halmos (1916–2006) was a prominent American mathematician who taught at the University of Chicago, the University of Michigan, and other colleges. His significant contributions to several areas of mathematics include the fields of mathematical logic, ergodic theory, functional analysis, and probability theory.

Read an Excerpt

Algebraic Logic


By Paul R. Halmos

Dover Publications, Inc.

Copyright © 2016 Paul R. Halmos
All rights reserved.
ISBN: 978-0-486-81041-6



CHAPTER 1

INTRODUCTION


THE BASIC CONCEPTS OF ALGEBRAIC LOGIC


1. Introduction. It has often happened that a theory designed originally as a tool for the study of a physical problem came subsequently to have purely mathematical interest. When that happens, the theory is usually generalized way beyond the point needed for applications, the generalizations make contact with other theories (frequently in completely unexpected directions), and the subject becomes established as a new part of pure mathematics. The part of pure mathematics so created does not (and need not) pretend to solve the physical problem from which it arises; it must stand or fall on its own merits.

Physics is not the only external source of mathematical theories; other disciplines (such as economics and biology) can play a similar role. A recent (and possibly somewhat surprising) addition to the collection of mathematical catalysts is formal logic; the branch of pure mathematics that it has precipitated will here be called algebraic logic.

Algebraic logic starts from certain special logical considerations, abstracts from them, places them into a general algebraic context, and, via the generalization, makes contact with other branches of mathematics (such as topology and functional analysis). It cannot be overemphasized that algebraic logic is more algebra than logic. Algebraic logic does not claim to solve any of the vexing foundation problems that sometimes occupy logicians. All that is claimed for it is that it is a part of pure mathematics in which the concepts that constitute the skeleton of modern symbolic logic can be discussed in algebraic language. The discussion serves to illuminate and clarify those concepts and to indicate their connection with ordinary mathematics. Whether the subject as a whole will come to be considered sufficiently interesting and sufficiently deep to occupy a place among pure mathematical theories remains to be seen.

The literature of algebraic logic is not yet very extensive, and the few items that are available are highly technical. It is for that reason that this expository paper was written; its main purpose is to kindle interest in a young but promising subject. In such a context it does not seem to be appropriate to burden the reader with the usual scholarly references and assignments of credit. At the very least, however, the names of the principal contributors should be mentioned. Here they are: Curry, Henkin, Rasiowa, Sikorski, and Tarski. Many of the ideas of algebraic logic have been in the air for several years and were known, at least subconsciously, by most logicians. The greatest contributions are those of Tarski; especially relevant is his work (with Jónsson) on Boolean algebras with operators and his theory of cylindric algebras. (Most of the latter material is unfortunately unpublished.) The reader who wishes to study the details will find exact references in two papers on algebraic logic by the present author; the first is in Compositio Mathematica (1955), and the second is to appear in Fundamenta Mathematicae (1957).

2. Boolean algebras. The father of algebraic logic is George Boole; it is appropriate that a discussion of the subject begin with a quick review of the algebras that bear his name. The shortest definition of Boolean algebras is in terms of the theory of rings: a Boolean algebra is a ring with unit in which every element is idempotent (i.e., if p is in the ring, then p² = p). The simplest example, and one that plays a vitally important role throughout the theory, is the field of integers modulo 2; this Boolean algebra will be denoted by O.

Boolean algebras have an almost embarrassingly rich structure. It is, for instance, an easy consequence of the definition that a Boolean algebra always has characteristic 2 (i.e., p + p = 0) and that as a ring it is always commutative (i.e., pq = qp). In every Boolean algebra there is, moreover, a natural order relation; it is defined by writing p ≤ q if and only if pq = p. The algebraic structure and the order structure are as compatible as they can be. The algebraic zero is also the order zero (i.e., the least element of the algebra), and the algebraic unit is also the order unit (i.e., the greatest element of the algebra); in other words, 0 ≤ p ≤ 1 for every p. With respect to the order, the algebra turns out to be a complemented lattice. The lattice operations can be expressed in terms of the given algebraic operations, as follows: the complement of p, denoted by p', is 1 + p; the infimum of p and q, denoted by p ∧ q, is pq; and the supremum of p and q, denoted by p ∨ q, is p + q + pq.
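As a concrete illustration (an editorial addition, not part of the original text), the Boolean algebra of all subsets of a three-element set can be modeled in Python as a Boolean ring of bit vectors, with XOR playing the role of the ring sum and AND the role of the product; the names below are illustrative choices:

```python
# The Boolean ring of subsets of a 3-element set, coded as bit vectors:
# XOR is the ring sum (+) and AND is the ring product.
N = 3
ONE = (1 << N) - 1          # the unit 1: the all-ones vector
elements = range(1 << N)

def complement(p):          # p' = 1 + p
    return ONE ^ p

def inf(p, q):              # p ∧ q = pq
    return p & q

def sup(p, q):              # p ∨ q = p + q + pq
    return p ^ q ^ (p & q)

def leq(p, q):              # p ≤ q if and only if pq = p
    return p & q == p

assert all(inf(p, p) == p for p in elements)              # idempotence p·p = p
assert all(p ^ p == 0 for p in elements)                  # characteristic 2
assert all(leq(0, p) and leq(p, ONE) for p in elements)   # 0 ≤ p ≤ 1
# The ring sum recovered from complements and infima:
# p + q = (p' ∧ q')' ∧ (p ∧ q)'
assert all(complement(inf(complement(p), complement(q)))
           & complement(inf(p, q)) == p ^ q
           for p in elements for q in elements)
```

Note that for subsets the supremum formula p + q + pq reduces to ordinary set union, just as the infimum pq reduces to intersection.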

The process of defining many useful operations and relations in a Boolean algebra, in terms of addition and multiplication, is to a large extent reversible. This fact is responsible for the abundance of different axiomatic approaches to the subject. A Boolean algebra can be defined in terms of its partial order, or in terms of complements and suprema, or in terms of complements and infima, and so on and so forth ad almost infinitum. Thus, for instance, since sums and products can be expressed in terms of complements and infima (pq = p ∧ q and p + q = (p' ∧ q')' ∧ (p ∧ q)'), it follows that if a set admits a unary operation and a binary operation satisfying the appropriate conditions, then that set is a Boolean algebra. (It isn't really, but it is close enough to make the distinction pedantic. What should be said is that if addition and multiplication are defined as indicated above, then the set, together with the defined operations, constitutes a Boolean algebra.) The appropriate conditions are simple to describe. They require that the underlying set contain at least two distinct elements, and that

(I 1) p ∧ (q ∧ r) = (p ∧ q) ∧ r,

(I 2) p ∧ q = q ∧ p,

(I 3) if p ∧ q' = r ∧ r', then p ∧ q = p,

(I 4) if p ∧ q = p, then p ∧ q' = r ∧ r',


for all elements p, q, and r. (Caution: (I 3) means that if p ∧ q' = r ∧ r' for some r, then p ∧ q = p, and (I 4) means that if p ∧ q = p, then p ∧ q' = r ∧ r' for all r.) Since, inside a fixed non-empty set, set-theoretic complementation and the formation of set-theoretic intersections do satisfy these conditions, any class of sets that is closed under these two operations is a Boolean algebra. The class of all subsets of a non-empty set is the most easily described (but far from the only) example of a Boolean algebra obtained in this way.
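The conditions (I 1)–(I 4) can be checked mechanically for the motivating example just described. The following Python sketch (an editorial illustration, with an arbitrarily chosen three-element underlying set) verifies all four axioms for complementation and intersection of subsets:

```python
from itertools import combinations, product

# Verify (I 1)-(I 4) for set-theoretic intersection and complementation
# inside a fixed three-element set (the choice of set is illustrative).
U = frozenset({0, 1, 2})
subsets = [frozenset(c) for n in range(len(U) + 1)
           for c in combinations(sorted(U), n)]

def comp(p):                # set-theoretic complement, p' = U - p
    return U - p

def inf(p, q):              # set-theoretic intersection, p ∧ q
    return p & q

for p, q, r in product(subsets, repeat=3):
    assert inf(p, inf(q, r)) == inf(inf(p, q), r)          # (I 1)
    assert inf(p, q) == inf(q, p)                          # (I 2)
    if inf(p, comp(q)) == inf(r, comp(r)):                 # (I 3)
        assert inf(p, q) == p
    if inf(p, q) == p:                                     # (I 4)
        assert inf(p, comp(q)) == inf(r, comp(r))
```

Since r ∧ r' is always the empty set, the hypothesis of (I 3) simply says that p is disjoint from the complement of q, i.e., that p is a subset of q.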

[Terminological purists sometimes object to the Boolean use of the word "algebra." The objection is not really cogent. In the first place, the theory of Boolean algebras has not yet collided, and it is not likely to collide, with the theory of linear algebras. In the second place, a collision would not be catastrophic; a Boolean algebra is, after all, a linear algebra over the field of integers modulo 2. The last, but not the least, pertinent comment is a pragmatic one. While, to be sure, a shorter and more suggestive term than "Boolean algebra" might be desirable, the nomenclature is so thoroughly established that to change now would do more harm than good.]

It is amusing and instructive to compare the axiom system (I) with the following, somewhat unorthodox, system of axioms for groups. A group may be defined as a non-empty set with a unary operation (p → p⁻) and a binary operation ((p, q) → p × q) such that

(II 1) p × (q × r) = (p × q) × r,

(II 2) (p × q)⁻ = q⁻ × p⁻,

(II 3) if p × q = r × r⁻, then p = q⁻,

(II 4) if p = q⁻, then p × q = r × r⁻,


for all elements p, q, and r. (Caution: (II 3) means that if p × q = r × r⁻ for some r, then p = q⁻, and (II 4) means that if p = q⁻, then p × q = r × r⁻ for all r.) It is clear that if, in a group, p⁻ is defined to be the inverse of p, and p × q is defined to be the product of p and q (in that order), then the conditions (II) are satisfied. The fact that, conversely, the conditions (II) are characteristic of inversion and multiplication in groups is an easy exercise in elementary axiomatics.
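By way of illustration (again an editorial addition), the conditions (II) can be verified mechanically for a concrete group; the sketch below uses the integers mod 5 under addition, an illustrative choice, with p × q taken as addition and p⁻ as negation:

```python
from itertools import product

# Verify (II 1)-(II 4) for the group of integers mod 5 under addition:
# p × q is addition mod 5, and p⁻ is negation mod 5.
G = range(5)

def mul(p, q):              # the group operation p × q
    return (p + q) % 5

def inv(p):                 # the unary operation p⁻
    return (-p) % 5

for p, q, r in product(G, repeat=3):
    assert mul(p, mul(q, r)) == mul(mul(p, q), r)          # (II 1)
    assert inv(mul(p, q)) == mul(inv(q), inv(p))           # (II 2)
    if mul(p, q) == mul(r, inv(r)):                        # (II 3)
        assert p == inv(q)
    if p == inv(q):                                        # (II 4)
        assert mul(p, q) == mul(r, inv(r))
```

Here r × r⁻ is the group identity for every r, so (II 3) and (II 4) together say that p × q is the identity exactly when p is the inverse of q.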

[One comment should be made on the independence of the axiom set (II). The fact is that the set is not independent; the first three axioms are sufficient to characterize groups, and, in particular, they imply the fourth. The reason (II) is offered in the present form is to emphasize its similarity with the axiom set (I) for Boolean algebras. Each of the axioms (II 1), (II 2), and (II 3) is independent of the other three axioms of the set (II).]

3. Propositional calculi. To understand the connection between Boolean algebra and logic, a good way to begin is to examine how sentences are combined by means of sentential connectives. To ensure an unprejudiced approach to the subject, it is desirable to proceed as abstractly as possible. Suppose, therefore, that there is given an arbitrary non-empty set S; intuitively the elements of S are to be thought of as the basic sentences of some theory that is being studied. Suppose, moreover, that A and N are two distinct objects not contained in S; intuitively A and N are to be thought of as the connectives "and" and "not." (Warning: in other expositions, A is frequently used for the dual connective "or".) Consider finite sequences whose terms are either elements of S or else A or N. (The empty sequence is not included.) If s is a sequence of length n, say, so that s0, …, sn−1 are elements of S ∪ {A, N}, let Ns be the sequence defined by

(Ns)0 = N, (Ns)i = si−1 (i = 1, …, n);


if s and t are sequences (of lengths n and m respectively), let Ast be the sequence defined by

(Ast)0 = A, (Ast)i = si−1 (i = 1, …, n), (Ast)n+j = tj−1 (j = 1, …, m).


Let S* be the smallest set of sequences such that (1) if s is a sequence of length 1 whose unique term is in S, then s ∈ S*, (2) if s ∈ S*, then Ns ∈ S*, and (3) if s ∈ S* and t ∈ S*, then Ast ∈ S*. In other words, S* is the set of sequences generated from the one-term sequences of S by means of the operations of prefixing N to a sequence and prefixing A to the concatenation of two sequences. Intuitively S* is to be thought of as the set of sentences generated from the basic sentences by the two basic connectives. The device of writing Ast instead of s A t (an ingenious Polish invention) is designed to avoid the multiple layers of parentheses that the more intuitive infix notation necessitates.
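The construction of S* can be mirrored directly in code. The sketch below (an editorial illustration; the two basic sentences 'p' and 'q' and the function names are arbitrary choices) represents sequences as tuples and builds a finite approximation of S* by closing under the two prefixing operations:

```python
# Sequences over S ∪ {A, N} represented as Python tuples.
def N_(s):                  # the sequence Ns: prefix N
    return ('N',) + s

def A_(s, t):               # the sequence Ast: prefix A to the concatenation
    return ('A',) + s + t

# A finite approximation of S*, generated from two basic sentences.
basic = [('p',), ('q',)]
star = set(basic)
for _ in range(2):          # two rounds of closure under Ns and Ast
    star |= ({N_(s) for s in star}
             | {A_(s, t) for s in star for t in star})

# AsNt, written s ∧ t' in infix notation, needs no parentheses:
assert A_(('p',), N_(('q',))) == ('A', 'p', 'N', 'q')
# Ast and Ats are distinct sequences, though they "say the same thing":
assert A_(('p',), ('q',)) != A_(('q',), ('p',))
```

The last assertion is exactly the defect discussed in the next paragraph: S* distinguishes sequences that common sense would identify.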

The set S* by itself is not quite a proper object of logical study. The trouble is that if, for instance, s and t are in S*, then Ast and Ats are distinct elements of S*, whereas common sense seems to demand that if s and t are sentences, then "s and t" and "t and s" should be sentences that "say the same thing" in some sense. Sentences admit, in other words, a natural equivalence relation; such a relation should therefore be introduced into S*. A little thought about the intuitive interpretation of the elements of S* will suggest many conditions that equivalence should satisfy. If the assertion that the elements s and t of S* are equivalent is denoted by s ≡ t, then, for all elements s, t, and u of S*, it should at the very least be required that

(III 1) AsAtu ≡ AAstu,

(III 2) Ast ≡ Ats,

(III 3) if AsNt ≡ AuNu, then Ast ≡ s,

(III 4) if Ast ≡ s, then AsNt ≡ AuNu,

(III 5) if s ≡ t, then Ns ≡ Nt,

(III 6) if s ≡ t, then Asu ≡ Atu.


In addition, of course, it is necessary that the concept of equivalence used here be an honest equivalence, i.e., that it be reflexive, symmetric, and transitive.

There are likely to be many equivalence relations satisfying the conditions (III); one such is defined by writing s ≡ t for all s and t. In order to avoid this triviality (and for other reasons), it is desirable to consider the smallest possible equivalence (i.e., the one in which the fewest possible pairs turn out to be equivalent) satisfying these conditions. This makes sense. Indeed, if an equivalence relation is thought of as a certain set of ordered pairs, then the intersection of all the equivalence relations satisfying (III) is an equivalence relation satisfying (III). If, from now on, the symbol ≡ is used to denote this minimal equivalence, then the pair (S*, ≡) is one of the simplest non-trivial logical structures; it is usually known as the propositional (or sentential) calculus. There are really many propositional calculi; there is one corresponding to each non-empty set S. It is clear, however, that the only thing that matters (all other differences between the various propositional calculi being essentially notational matters) is the cardinal number of S. It is customary (but not particularly profitable) to assume that S is countably infinite.

4. Axioms and rules. The time has come to make the notation more transparent. While the symbols A and N are technically convenient (no parentheses), it is usually a cumbersome job to decode the sentences involving them and to recapture their intended intuitive content. Accordingly, in what follows, (s)' will be used as an alternative symbol for Ns, and, similarly, (s) ∧ (t) will be used as an alternative symbol for Ast. Thus, for instance, AsNt can be denoted by (s) ∧ ((t)'); with the usual mathematical conventions about omitting superfluous parentheses, this becomes s ∧ t'. It should now be clear that the conditions (III 1)–(III 4) are only notationally different from (I 1)–(I 4); the conditions (III 5) and (III 6) assert, in customary mathematical language, that ≡ is a congruence relation with respect to the operations of attaching ' and infixing ∧.

The equivalence relations that occur in algebra (e.g., the congruence relations in a ring) are usually described by specifying a particular equivalence class (the kernel) and a particular operation (subtraction); it then turns out that two elements are congruent if and only if their difference belongs to the kernel. A similar procedure is available to define the equivalence relation described above. In order to motivate the choice of the kernel and the choice of the pertinent subtraction operation, consider first of all an element of S* that has the form s ∧ s'. If the elements of S* are interpreted as sentences, then there is something obviously undesirable about such an element; in some intuitive sense it is "false." By the same token, the result of attaching ' to such an element converts it into something quite laudable; sentences such as that are "true." The kernel that is usually considered is the equivalence class of any particular "true sentence" so obtained. It is pleasant to be able to report that this kernel is independent of the arbitrary choice of its generator. The equivalence class of any element of the form (s ∧ s')' contains all other elements of that form, and, in fact, it consists exactly of all the elements that common sense would declare to be "true." An element of this kernel is called a tautology of the propositional calculus. The subtraction operation is the one that associates with two elements s and t of S* the element (s ∧ t')' ∧ (s' ∧ t)'. (In the original notation this reads ANAsNtNANst.) The perceptive reader will note that if s and t are interpreted as sentences, then the intuitive interpretation of the proposed "difference" between s and t is the sentence "s if and only if t." This subtraction does what it was tacitly promised it would do; it is true that a necessary and sufficient condition that s be equivalent to t is that the difference of s and t belong to the kernel.
(In customary logical language: a necessary and sufficient condition for s ≡ t is that the biconditional of s and t be a tautology. It is a happy circumstance that biconditioning is a commutative operation; the order of s and t is immaterial.)
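The relation between equivalence and the biconditional can be illustrated semantically by truth tables (an editorial shortcut, not the syntactic definition used in the text; the particular sentences below are illustrative):

```python
from itertools import product

# Sentences as Boolean functions of the truth values of two basic
# sentences p and q; ' is negation and ∧ is conjunction.
def bicond(s, t):
    # the "difference" (s ∧ t')' ∧ (s' ∧ t)', i.e., "s if and only if t"
    return lambda p, q: (not (s(p, q) and not t(p, q))
                         and not (not s(p, q) and t(p, q)))

def tautology(s):
    # true under every assignment of truth values
    return all(s(p, q) for p, q in product([False, True], repeat=2))

s = lambda p, q: p and q
t = lambda p, q: q and p        # "says the same thing" as s
u = lambda p, q: p              # does not

assert tautology(bicond(s, t))        # the biconditional of s and t is a tautology
assert not tautology(bicond(s, u))    # s and u are not equivalent
assert tautology(lambda p, q: not (p and not p))   # (s ∧ s')' is "true"
```

The first two assertions exhibit the criterion just stated: the biconditional of two sentences is a tautology exactly when the sentences "say the same thing."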

In principle it is clear how the procedure used above could be reversed. The tautologies could be defined by making a complete list of them, and equivalence could then be defined in terms of tautologies and the formation of biconditionals. The list of tautologies would be infinite, to be sure, but a clever classifier might nevertheless succeed in describing it in a finite number of words. Something like this is what is usually done. In most text-book presentations of the propositional calculus one finds a small list of special tautologies, together with a few easy instructions for manufacturing others; a general tautology is then defined to be a member of the subset of S* generated from the special tautologies by repeated applications of the instructions. The given special tautologies (whose particular choice is largely a matter of individual taste) are called axioms, and the instructions for manufacturing others are called rules of inference. This "axiomatic" procedure (which has become traditional) has no particular advantages (or disadvantages) in comparison with the one followed above; its final result is merely an alternative description of the propositional calculus.

[Here, for the sake of completeness, are the details of one of the popular axiomatic approaches to the propositional calculus. Let s ∨ t and s → t be abbreviations for (s' ∧ t')' and (s ∧ t')' respectively (or, in the original notation, for NANsNt and NAsNt respectively); the axioms are most conveniently, and most understandably, described by means of these abbreviations. The axioms consist of all those elements of S* that are of one of the following four forms:

(1) (s ∨ s) → s,

(2) s → (s ∨ t),

(3) (s ∨ t) → (t ∨ s),

(4) (s → t) → ((u ∨ s) → (u ∨ t)),


where s, t, and u are arbitrary elements of S*. There is only one rule of inference (called modus ponens). According to that rule, if s and t are elements of S* such that both s and s → t are tautologies, then t is a tautology.]
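A truth-table check (an editorial illustration; semantic, whereas the text's notion of tautology is syntactic) confirms that every instance of the four axiom schemes is true under all assignments, using the abbreviations s ∨ t = (s' ∧ t')' and s → t = (s ∧ t')':

```python
from itertools import product

def disj(s, t):             # s ∨ t, i.e., (s' ∧ t')'
    return not (not s and not t)

def impl(s, t):             # s → t, i.e., (s ∧ t')'
    return not (s and not t)

# Each axiom scheme evaluates to true under every assignment of truth
# values to s, t, and u.
for s, t, u in product([False, True], repeat=3):
    assert impl(disj(s, s), s)                                  # (1)
    assert impl(s, disj(s, t))                                  # (2)
    assert impl(disj(s, t), disj(t, s))                         # (3)
    assert impl(impl(s, t), impl(disj(u, s), disj(u, t)))       # (4)
```

Modus ponens preserves this semantic property as well: if s and s → t are both true under every assignment, then so is t.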


(Continues...)

Excerpted from Algebraic Logic by Paul R. Halmos. Copyright © 2016 Paul R. Halmos. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Dover (2016) republication of the edition originally published by Chelsea Publishing Company, New York, 1962.
See every Dover book in print at
www.doverpublications.com
