Mastering Data Modeling: A User-Driven Approach / Edition 1

ISBN-10: 020170045X
ISBN-13: 9780201700459
Pub. Date: 11/23/2000
Publisher: Pearson Education


Overview

This book introduces Logical Data Structures (LDS), a powerful new approach to database design that can dramatically improve the cost-effectiveness and business value of any enterprise database system or database-driven application. The authors introduce a new notation, new diagramming approach, and new user-centered, high-ROI techniques for modeling even the most complex, high-volume database applications. This book starts from first principles, asking and answering crucial questions like: "To best serve users, how should the process of data modeling work? To create good, economical software systems, what kind of information should be on a data model? To become an effective data modeler, what skills should you master before talking with users?" Next, it teaches data modeling using LDS, designed to encourage a user-centered, requirements-driven approach that leads directly to more effective applications. The authors walk you through the entire process of creating and enhancing a data model. For all database administrators, analysts, designers, and architects, and for all IT managers responsible for enterprise database system management or deployment.


Product Details

ISBN-13: 9780201700459
Publisher: Pearson Education
Publication date: 11/23/2000
Edition description: New Edition
Pages: 408
Product dimensions: 7.20(w) x 9.10(h) x 0.90(d)

About the Author

John Carlis is on the faculty in the Department of Computer Science at the University of Minnesota. For the past twenty years he has taught, consulted, and conducted research on database systems, particularly in data modeling and database language extensions. Visit his homepage at www.cs.umn.edu/~carlis.

Joseph Maguire is an independent consultant and the creator of the forthcoming Web site www.logicaldatastructures.com. For the past 18 years he has been an employee or consultant for many companies, including Bachman Information Systems, Digital, Lotus, Microsoft, and US WEST.




Read an Excerpt

This book teaches you the first step of creating software systems: learning about the information needs of a community of strangers. This book is necessary because that step—known as data modeling—is prone to failure.

This book presumes nothing; it starts from first principles and gradually introduces, justifies, and teaches a rigorous process and notation for collecting and expressing the information needs of a business or organization.

This book is for anyone involved in the creation of information-management software. It is particularly useful to the designers of databases and applications driven by database management systems.

In many regards, this book is different from other books about data modeling. First, because it starts from first principles, it encourages you to question what you might already know about data modeling and data-modeling notations. To best serve users, how should the process of data modeling work? To create good, economical software systems, what kind of information should be on a data model? To become an effective data modeler, what skills should you master before talking with users?

Second, this book teaches you the process of data modeling. It doesn't just tell you what you should know; it tells you what to do. You learn fundamental skills, you integrate them into a process, you practice the process, and you become an expert at it. This means that you can become a "content-neutral modeler," moving gracefully among seemingly unrelated projects for seemingly unrelated clients. Because the process of modeling applies equally to all projects, your expertise becomes universally applicable. Being a master data modeler is like being a master statistician who can contribute to a wide array of unrelated endeavors: population studies, political polling, epidemiology, or baseball.

Third, this book does not focus on technology. Instead, it maintains its focus on the process of discovering and articulating the users' information needs, without concern for how those needs can or should be satisfied by any of the myriad technological options available. We do not completely ignore technology; we frequently mention it to remind you that during data modeling, you should ignore it. Users don't care about technology; they care about their information. The notation we use, Logical Data Structures (LDS), encourages you to focus on users' needs. We think a data modeler should conceal technological details from users. But historically, many data modelers are database designers whose everyday working vocabulary is steeped in technology. When technologists talk with users, things can get awkward. In the worst case, users quit the conversation, or they get swept up in the technological details and neglect to paint a complete picture of their technology-independent information needs. Data modeling is not equivalent to database design.

Another undesirable trend: historically, many organizations wrongly think that data modeling can be done only by long-time, richly experienced members of the organization who have reached the status of "unofficial archivist." This is not true. Modeling is a set of skills like computer programming. It can be done by anyone equipped with the skills. In fact, a skilled modeler who is initially unfamiliar with the organization but has access to users will produce a better model than a highly knowledgeable archivist who is unskilled at modeling.

This book has great ambitions for you. To realize them, you cannot read it casually. Remember, we're trying to foster skills in you rather than merely deliver knowledge to you. If you master these skills, you can eventually apply them instinctively.

Study this book the way you would a calculus book or a cookbook. Practice the skills on real-life problems. Work in teams with your classmates or colleagues. Write notes to yourself in the margins. An ambitious book like this, well, we didn't just make it up. For starters, we are indebted to Michael Senko, a pioneer in database systems on whose work ours is based. Beyond him, many people deserve thanks. Most important are the many users we have worked with over the years, studying data: Gordon Decker; George Bluhm and others at the U. S. Soil Conservation Service; Peter O'Kelly and others at Lotus Development Corporation; John Hanna, Tim Dawson, and other employees and consultants at US WEST, Inc.; Jim Brown, Frank Carr, and others at Pacific Northwest National Laboratory; and Jane Goodall, Anne Pusey, Jen Williams, and the entire staff at the University of Minnesota's Center for Primate Studies. Not far behind are our students and colleagues. Among them are several deserving special thanks: Jim Albers, Dave Balaban, Leone Barnett, Doug Barry, Bruce Berra, Diane Beyer, Kelsey Bruso, Jake Chen, Paul Chapman, Jan Drake, Bob Elde, Apostolos Georgopolous, Carol Hartley, Jim Held, Chris Honda, David Jefferson, Verlyn Johnson, Roger King, Joe Konstan, Darryn Kozak, Scott Krieger, Heidi Kvinge, James A. Larson, Sal March, Brad Miller, Jerry Morton, Jose Pardo, Paul Pazandak, Doug Perrin, John Riedl, Maureen Riedl, George Romano, Sue Romano, Karen Ryan, Alex Safonov, Wallie Schmidt, Stephanie Sevcik, Libby Shoop, Tyler Sperry, Pat Starr, Fritz Van Evert, Paul Wagner, Bill Wasserman, George Wilcox, Frank Williams, Mike Young, and several thousand students who used early versions of our work. Thanks also go to Lilly Bridwell-Bowles of the Center for Interdisciplinary Studies of Writing at the University of Minnesota. Several people formally reviewed late drafts of this book and made helpful suggestions: Declan Brady, Paul Irvine, Matthew C. Keranen, David Livingstone, and David McGoveran. And finally, thanks to the helpful and patient people at Addison-Wesley: Paul Becker, Mariann Kourafas, Mary T. O'Brien, Ross Venables, Stacie Parillo, Jacquelyn Doucette; the copyeditor, Penny Hull; and the indexer, Ted Laux.

How to Use This Book

To study this book rather than merely read it, you need to understand a bit about what kind of information it contains. The information falls into eight categories.

  • Introduction and justification. Chapters 1 and 2 define the data-modeling problem, introduce the LDS technique and notation, and describe good habits that any data modeler should exhibit. Chapters 22 and 24 justify in more technical detail some of the decisions we made when designing the LDS technique and notation.
  • Definitions. Chapter 4 defines the vocabulary you need to read everything that follows. Chapter 13 defines things more formally—articulating exactly what constitutes a syntactically correct LDS. Chapter 23 presents a formal definition of our Logical Data Structures in a format we especially like—as an LDS.
  • Reading an LDS. Chapter 3 describes how to translate an LDS into declarative sentences. The sentences are typically spoken to users to help them understand an in-progress LDS. Chapter 5 describes how to visualize and annotate sample data for an LDS.
  • Writing an LDS. Chapter 13 describes the syntax rules for writing an LDS. Chapter 14 describes the guidelines for naming the parts of an LDS. Chapter 15 describes some seldom-used names that are part of any LDS. Chapter 16 describes how to label parts of an LDS. (Labels and names differ.) Chapter 17 describes how to document an LDS.
  • LDS shapes and recipes. Chapter 7 introduces the concept of shapes and tells how your expertise with them can make you a master data modeler. Chapters 8 through 12 give an encyclopedic, exhaustive analysis of the shapes you will encounter as a data modeler. Chapter 26 describes some recipes—specific applications of the shapes to common problems encountered by software developers and database designers.
  • Process of LDS development. Chapters 6 and 21 give elaborate examples of the process of LDS development. Chapter 18 describes a step-by-step script, called The Flow, that you follow in your conversations with users. Chapters 19 and 20 describe steps you can take to improve an in-progress LDS at any time—steps that do not fit into the script in any particular place because they fit in every place. Considered as a whole, Chapters 18 through 20 describe the process of controlled evolution, the process by which you guide the users through a conversation that gradually improves the in-progress LDS. "Controlled" implies that the conversation is organized and methodical. "Evolution" implies that the conversation yields a continuously, gradually improving data model.
  • Implementation and technology issues. Chapter 22 describes in detail the forces that compel us to exclude constraints from the LDS notation. Many of these forces stem from implementation issues. Chapter 25 describes a technique for creating a relational schema from an LDS.
  • Critical assessment of the LDS technique and notation. Chapter 24 describes the decisions we made in designing the LDS technique and notation and describes how our decisions differ from those made by the designers of other notations. Chapter 22 is devoted to one such especially noteworthy decision. And throughout the book appear sets of "Story Interludes" which relate anecdotes about our successes and failures learning and using the LDS notation and technique. Taken as a whole, these stories constitute a critical assessment of the technique.
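The last step in that progression, deriving a relational schema from a finished logical model (the topic of Chapter 25), can be sketched in miniature. The sketch below is an assumption-laden illustration, not the book's actual LDS-to-relational technique: it uses the common convention that each entity becomes a table, each attribute a column, and each one-to-many relationship a foreign key on the "many" side. The entity names echo the book's "creatures and skills" conversation; everything else is hypothetical.

```python
# Illustrative sketch only -- not the book's LDS notation or algorithm.
# Convention assumed: entity -> table, attribute -> column,
# one-to-many relationship -> foreign key on the "many" side.

def entity_to_ddl(entity, attributes, many_side_of=None):
    """Render a hypothetical entity description as a CREATE TABLE statement."""
    cols = [f"{entity}_id INTEGER PRIMARY KEY"]      # surrogate key per table
    cols += [f"{a} TEXT" for a in attributes]        # every attribute a column
    if many_side_of:
        # This entity sits on the "many" side of a relationship, so it
        # carries a foreign key referencing the "one" side's table.
        cols.append(f"{many_side_of}_id INTEGER REFERENCES {many_side_of}")
    return f"CREATE TABLE {entity} (\n  " + ",\n  ".join(cols) + "\n);"

print(entity_to_ddl("creature", ["name"]))
print(entity_to_ddl("skill", ["description"], many_side_of="creature"))
```

The point of the sketch is only that the translation is mechanical once the logical model is settled, which is why the book defers all such technology concerns until the users' information needs are fully captured.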
Reading Paths Through This Book

To become a master data modeler, you must appreciate the interplay among four areas of expertise: LDS reading, LDS writing, LDS shapes, and controlled evolution. These four areas are equally important and interrelated. This book presents these four topics in a sensible order, but you cannot master any one of these areas without mastering the other three. Even if you study this book sequentially, when you get to controlled evolution (Chapters 18 through 20), you will find yourself referring to earlier chapters. Controlled evolution integrates virtually everything preceding Chapter 18. As you study that chapter, your incipient mastery of LDS reading, LDS writing, and shapes will be put to the test.

Chapters 3 and 4 are prerequisites to everything that follows. Chapter 13 is a prerequisite to Chapters 14 through 20.

As you work your way toward mastery, you should do the specific exercises at the end of chapters and the whole-skill mastery exercises in the Appendix. You might want to take a peek at Chapter 6 now to get a feel for how a master data modeler works with users.

John Carlis
Joseph Maguire
September 2000

Table of Contents



Foreword.
Preface.
1. Introduction.
2. Good Habits.
3. Reading an LDS with Sentences.
4. Vocabulary of LDS.
5. Visualizing Allowed and Disallowed Instances.
6. A Conversation with Users about Creatures and Skills.
7. Introduction to Mastering Shapes.
8. One-Entity, No-Relationship Shapes.
9. One-Attribute Shapes.
10. Two-Entity Shapes.
11. Shapes with More Than Two Entities.
12. Shapes with Reflexive Relationships.
13. LDS Syntax Rules.
14. Getting the Names Right.
15. Official Name.
16. Labeling Links.
17. Documenting an LDS.
18. Script for Controlled Evolution.
19. Local, Anytime Steps of Controlled Evolution.
20. Global, Anytime Steps of Controlled Evolution.
21. Conversations about Dairy Farming.
22. Constraints.
23. LDS for LDS.
24. Decisions: Designing a Data-Modeling Notation.
25. LDS and the Relational Model.
26. Cookbook: Recipes for Data Modelers.
Appendix: Exercises for Mastery.
Index.

