Compilers: Principles, Techniques, and Tools / Edition 2

ISBN-10:
0321486811
ISBN-13:
9780321486813
Pub. Date:
08/31/2006
Publisher:
Pearson Education

Overview

Compilers: Principles, Techniques, and Tools, known to professors, students, and developers worldwide as the "Dragon Book," is available in a new edition. Every chapter has been completely revised to reflect developments in software engineering, programming languages, and computer architecture that have occurred since 1986, when the previous edition was published. The authors, recognizing that few readers will ever go on to construct a compiler, retain their focus on the broader set of problems faced in software design and software development.


Product Details

ISBN-13: 9780321486813
Publisher: Pearson Education
Publication date: 08/31/2006
Edition description: REV
Pages: 1040
Product dimensions: 263.78(w) x 370.08(h) x 62.99(d)

About the Author

Alfred V. Aho is Lawrence Gussman Professor of Computer Science at Columbia University. Professor Aho has won several awards including the Great Teacher Award for 2003 from the Society of Columbia Graduates and the IEEE John von Neumann Medal. He is a member of the National Academy of Engineering and a fellow of the ACM and IEEE.

Monica S. Lam is a Professor of Computer Science at Stanford University. She was the Chief Scientist at Tensilica and the founding CEO of moka5. She led the SUIF project, which produced one of the most popular research compilers, and pioneered numerous compiler techniques used in industry.

Ravi Sethi launched the research organization at Avaya and is president of Avaya Labs. Previously, he was a senior vice president at Bell Labs in Murray Hill and chief technical officer for communications software at Lucent Technologies. He has held teaching positions at the Pennsylvania State University and the University of Arizona, and has taught at Princeton University and Rutgers. He is a fellow of the ACM.

Jeffrey Ullman is CEO of Gradiance and the Stanford W. Ascherman Professor of Computer Science at Stanford University. His research interests include database theory, database integration, data mining, and education using the information infrastructure. He is a member of the National Academy of Engineering, a fellow of the ACM, and winner of the Karlstrom Award and Knuth Prize.

Read an Excerpt


From Chapter 10: Code Optimization

...Meet-Over-Paths Solutions to Data-Flow Problems

Let us imagine that a flow graph has associated with each of its nodes a transfer function, one of the functions in the set F. For each block B, let fB be the transfer function for B.

Consider any path P = B0 => B1 => . . . => Bk from the initial node B0 to some block Bk. We can define the transfer function for P to be the composition of fB0, fB1, . . . , fBk-1. Note that fBk is not part of the composition, reflecting the point of view that this path is taken to reach the beginning of block Bk, not its end.
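
To make the composition concrete, here is a minimal Python sketch (not the book's code): transfer functions are modeled as callables from values in V to values in V, and the helper name compose_path and the gen/kill-style example are illustrative assumptions.

```python
def compose_path(path, transfer):
    """Transfer function for a path B0 => B1 => ... => Bk.

    Only fB0 through fBk-1 are composed; fBk is excluded because the
    path is taken to reach the beginning of Bk, not its end.
    """
    def f_path(value):
        for block in path[:-1]:              # exclude Bk
            value = transfer[block](value)
        return value
    return f_path

# Hypothetical gen/kill-style transfer functions (reaching definitions).
transfer = {
    "B0": lambda v: (v - {"d1"}) | {"d0"},   # kill d1, generate d0
    "B1": lambda v: v | {"d1"},              # generate d1
    "B2": lambda v: v,                       # identity
}
f_P = compose_path(["B0", "B1", "B2"], transfer)
print(f_P(set()))   # {'d0', 'd1'} reaches the beginning of B2 on this path
```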

We have assumed that the values in V represent information about data used by the program, and that the confluence operator tells how that information is combined when paths converge. It also makes sense to see the top element as representing "no information," since a path carrying the top element yields to any other path; meeting the top element with any value x produces x itself. Thus, if B is a block in the flow graph, the information entering B should be computable by considering every possible path from the initial node to B and seeing what happens along the path, starting out with no information.

In principle, this meet could be over an infinite number of different values, since there are an infinite number of different paths. In practice, it is often adequate to consider only acyclic paths, and even when it is not, as for the constant computation framework discussed above, there are usually other reasons we can find to replace this infinite meet by a finite meet for any particular flow graph.
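
As a sketch of this restriction to acyclic paths, the following Python (again illustrative, assuming a reaching-definitions framework in which the meet is set union and the top element, "no information," is the empty set) enumerates acyclic paths and meets the values they deliver:

```python
def acyclic_paths(succ, start, target, path=()):
    """Yield all acyclic paths from start to target in the flow graph."""
    path = path + (start,)
    if start == target:
        yield path
        return
    for nxt in succ.get(start, ()):
        if nxt not in path:                  # keep the path acyclic
            yield from acyclic_paths(succ, nxt, target, path)

def mop(succ, entry, block, transfer, meet, top):
    """Meet, over all acyclic paths to `block`, of the information
    delivered along each path (fblock itself is excluded)."""
    result = top
    for path in acyclic_paths(succ, entry, block):
        value = top                          # start with "no information"
        for b in path[:-1]:
            value = transfer[b](value)
        result = meet(result, value)
    return result

# Hypothetical diamond-shaped flow graph, reaching-definitions style.
succ = {"B0": ("B1", "B2"), "B1": ("B3",), "B2": ("B3",)}
transfer = {
    "B0": lambda v: v | {"d0"},
    "B1": lambda v: (v - {"d0"}) | {"d1"},   # redefinition kills d0
    "B2": lambda v: v,
}
print(mop(succ, "B0", "B3", transfer, lambda a, b: a | b, set()))
# {'d0', 'd1'}: d0 survives along B0=>B2=>B3, d1 arrives via B0=>B1=>B3
```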

The mop (meet-over-paths) solution to a flow graph makes sense when we realize that, as far as information reaching block B is concerned, the flow graph may as well be the one suggested in Fig. 10.61, where the transfer function associated with each of the (possibly infinitely many) paths P1, P2, . . . , in the original flow graph has been given an entirely separate path to B. In Fig. 10.61, the information reaching B is given by the meet over all paths.

Conservative Solutions to Flow Problems

When we try to solve the data-flow equations that come from an arbitrary framework, we may or may not be able to obtain the mop solution easily. Fortunately, as with the concrete examples of data-flow frameworks in Sections 10.5 and 10.6, there is a safe direction in which to err, and the iterative data-flow algorithm we discussed in those sections turns out always to provide us with a safe solution. We say a solution in[B] is a safe solution if in[B] is less than or equal to mop(B) for all blocks B.

Despite what the reader might imagine, we did not pull this definition out of thin air. Recall that in any flow graph, the set of apparent paths to a node (those that are paths in the flow graph) can be a proper subset of the real paths, those that are taken on some execution of the program corresponding to this flow graph. In order that the result of the data-flow analysis be usable for whatever purpose we intend it, the data must still be safe if we modify the flow graph by deleting some paths, since we cannot in general distinguish real paths from apparent paths that are not real.

While we may prefer the "true" solution to the data-flow problem, we shall almost surely have no efficient way to tell exactly which paths are real and which are not, so we are forced to accept the mop solution as the closest feasible solution. Thus, whatever use we make of the data-flow information must be consistent with the possibility that the solution we obtain is less than or equal to the true solution. Once we accept that, we should also be able to accept a solution that is less than or equal to the mop for those frameworks that are monotone but not distributive. For distributive frameworks, like those of Section 10.6, the simple iterative algorithm computes the mop solution...
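
For contrast with the mop computation above, here is a minimal sketch of a round-robin iterative algorithm of the kind the excerpt refers to; the function name and the reuse of the hypothetical diamond graph are illustrative, not the book's code. For monotone frameworks it yields a safe solution (in[B] less than or equal to mop(B)); for distributive frameworks it computes the mop exactly.

```python
def iterative_dataflow(pred, blocks, entry, transfer, meet, top, boundary):
    """Forward iterative data-flow analysis: recompute
    in[B] = meet over predecessors P of fP(in[P]) until nothing changes."""
    in_val = {b: top for b in blocks}
    in_val[entry] = boundary
    changed = True
    while changed:
        changed = False
        for b in blocks:
            if b == entry:
                continue
            new = top
            for p in pred[b]:
                new = meet(new, transfer[p](in_val[p]))
            if new != in_val[b]:
                in_val[b] = new
                changed = True
    return in_val

# Same hypothetical diamond graph as above, written via predecessors.
pred = {"B1": ("B0",), "B2": ("B0",), "B3": ("B1", "B2")}
transfer = {
    "B0": lambda v: v | {"d0"},
    "B1": lambda v: (v - {"d0"}) | {"d1"},
    "B2": lambda v: v,
}
result = iterative_dataflow(pred, ["B0", "B1", "B2", "B3"], "B0",
                            transfer, lambda a, b: a | b, set(), set())
print(result["B3"])   # {'d0', 'd1'}, matching the meet-over-paths value
```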

Table of Contents

  • 1.1 Language Processors
  • 1.2 The Structure of a Compiler
  • 1.3 The Evolution of Programming Languages
  • 1.4 The Science of Building a Compiler
  • 1.5 Applications of Compiler Technology
  • 1.6 Programming Language Basics
  • 1.7 Summary of Chapter 1
  • 1.8 References for Chapter 1
  • 2.1 Introduction
  • 2.2 Syntax Definition
  • 2.3 Syntax-Directed Translation
  • 2.4 Parsing
  • 2.5 A Translator for Simple Expressions
  • 2.6 Lexical Analysis
  • 2.7 Symbol Tables
  • 2.8 Intermediate Code Generation
  • 2.9 Summary of Chapter 2
  • 3.1 The Role of the Lexical Analyzer
  • 3.2 Input Buffering
  • 3.3 Specification of Tokens
  • 3.4 Recognition of Tokens
  • 3.5 The Lexical-Analyzer Generator Lex
  • 3.6 Finite Automata
  • 3.7 From Regular Expressions to Automata
  • 3.8 Design of a Lexical-Analyzer Generator
  • 3.9 Optimization of DFA-Based Pattern Matchers
  • 3.10 Summary of Chapter 3
  • 3.11 References for Chapter 3
  • 4.1 Introduction
  • 4.2 Context-Free Grammars
  • 4.3 Writing a Grammar
  • 4.4 Top-Down Parsing
  • 4.5 Bottom-Up Parsing
  • 4.6 Introduction to LR Parsing: Simple LR
  • 4.7 More Powerful LR Parsers
  • 4.8 Using Ambiguous Grammars
  • 4.9 Parser Generators
  • 4.10 Summary of Chapter 4
  • 4.11 References for Chapter 4
  • 5.1 Syntax-Directed Definitions
  • 5.2 Evaluation Orders for SDD's
  • 5.3 Applications of Syntax-Directed Translation
  • 5.4 Syntax-Directed Translation Schemes
  • 5.5 Implementing L-Attributed SDD's
  • 5.6 Summary of Chapter 5
  • 5.7 References for Chapter 5
  • 6.1 Variants of Syntax Trees
  • 6.2 Three-Address Code
  • 6.3 Types and Declarations
  • 6.4 Translation of Expressions
  • 6.5 Type Checking
  • 6.6 Control Flow
  • 6.7 Backpatching
  • 6.8 Switch-Statements
  • 6.9 Intermediate Code for Procedures
  • 6.10 Summary of Chapter 6
  • 6.11 References for Chapter 6
  • 7.1 Storage Organization
  • 7.2 Stack Allocation of Space
  • 7.3 Access to Nonlocal Data on the Stack
  • 7.4 Heap Management
  • 7.5 Introduction to Garbage Collection
  • 7.6 Introduction to Trace-Based Collection
  • 7.7 Short-Pause Garbage Collection
  • 7.8 Advanced Topics in Garbage Collection
  • 7.9 Summary of Chapter 7
  • 7.10 References for Chapter 7
  • 8.1 Issues in the Design of a Code Generator
  • 8.2 The Target Language
  • 8.3 Addresses in the Target Code
  • 8.4 Basic Blocks and Flow Graphs
  • 8.5 Optimization of Basic Blocks
  • 8.6 A Simple Code Generator
  • 8.7 Peephole Optimization
  • 8.8 Register Allocation and Assignment
  • 8.9 Instruction Selection by Tree Rewriting
  • 8.10 Optimal Code Generation for Expressions
  • 8.11 Dynamic Programming Code-Generation
  • 8.12 Summary of Chapter 8
  • 8.13 References for Chapter 8
  • 9.1 The Principal Sources of Optimization
  • 9.2 Introduction to Data-Flow Analysis
  • 9.3 Foundations of Data-Flow Analysis
  • 9.4 Constant Propagation
  • 9.5 Partial-Redundancy Elimination
  • 9.6 Loops in Flow Graphs
  • 9.7 Region-Based Analysis
  • 9.8 Symbolic Analysis
  • 9.9 Summary of Chapter 9
  • 9.10 References for Chapter 9
  • 10.1 Processor Architectures
  • 10.2 Code-Scheduling Constraints
  • 10.3 Basic-Block Scheduling
  • 10.4 Global Code Scheduling
  • 10.5 Software Pipelining
  • 10.6 Summary of Chapter 10
  • 10.7 References for Chapter 10
  • 11.1 Basic Concepts
  • 11.2 Matrix Multiply: An In-Depth Example
  • 11.3 Iteration Spaces
  • 11.4 Affine Array Indexes
  • 11.5 Data Reuse
  • 11.6 Array Data-Dependence Analysis
  • 11.7 Finding Synchronization-Free Parallelism
  • 11.8 Synchronization Between Parallel Loops
  • 11.9 Pipelining
  • 11.10 Locality Optimizations
  • 11.11 Other Uses of Affine Transforms
  • 11.12 Summary of Chapter 11
  • 11.13 References for Chapter 11
  • 12.1 Basic Concepts
  • 12.2 Why Interprocedural Analysis?
  • 12.3 A Logical Representation of Data Flow
  • 12.4 A Simple Pointer-Analysis Algorithm
  • 12.5 Context-Insensitive Interprocedural Analysis
  • 12.6 Context-Sensitive Pointer Analysis
  • 12.7 Datalog Implementation by BDD's
  • 12.8 Summary of Chapter 12
  • 12.9 References for Chapter 12
  • A.1 The Source Language
  • A.2 Main
  • A.3 Lexical Analyzer
  • A.4 Symbol Tables and Types
  • A.5 Intermediate Code for Expressions
  • A.6 Jumping Code for Boolean Expressions
  • A.7 Intermediate Code for Statements
  • A.8 Parser
  • A.9 Creating the Front End