Because patterns represent standardized ways of solving common problems, they embody the wisdom accumulated from years of attempting to solve those problems, and they also embody the corrections to the false attempts that people have made in solving those problems.
Using a design pattern is thus conceptually similar to using library code instead of writing your own. Sure, everybody has written a custom Quicksort a few times, but what are the odds that your custom version will be fully correct on the first try? And the code arising from a familiar pattern will also be easier for readers of the code to understand than fully custom code would be. Imagine how much longer it would take you to dive into the details of the code for a Creator pattern and the code for a Factory Method pattern and then compare and contrast the two approaches.
Table: Popular Design Patterns

Abstract Factory: Supports creation of sets of related objects by specifying the kind of set but not the kinds of each specific object.
Adapter: Converts the interface of a class to a different interface.
Bridge: Builds an interface and an implementation in such a way that either can vary without the other varying.
Composite: Consists of an object that contains additional objects of its own type so that client code can interact with the top-level object and not concern itself with all the detailed objects.
Decorator: Attaches responsibilities to an object dynamically, without creating specific subclasses for each possible configuration of responsibilities.
Factory Method: Instantiates classes derived from a specific base class without needing to keep track of the individual derived classes anywhere but the Factory Method.
Iterator: A server object that provides access to each element in a set sequentially.
Observer: Keeps multiple objects in synch with one another by making an object responsible for notifying the set of related objects about changes to any member of the set.
Singleton: Provides global access to a class that has one and only one instance.
Strategy: Defines a set of algorithms or behaviors that are dynamically interchangeable with each other.
Template Method: Defines the structure of an algorithm but leaves some of the detailed implementation to subclasses.
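To make the table more concrete, here is a minimal Python sketch of one of the patterns, Factory Method. The Document classes and the create_document function are invented for this illustration; they are not examples from the book.

```python
from abc import ABC, abstractmethod


class Document(ABC):
    """Base class for every kind of document the factory can create."""

    @abstractmethod
    def render(self) -> str:
        ...


class PlainTextDocument(Document):
    def render(self) -> str:
        return "plain text body"


class HtmlDocument(Document):
    def render(self) -> str:
        return "<html><body>body</body></html>"


def create_document(kind: str) -> Document:
    """The Factory Method: the derived classes are tracked here and
    nowhere else, so client code depends only on the Document interface."""
    factories = {"text": PlainTextDocument, "html": HtmlDocument}
    return factories[kind]()


# Client code asks for a kind of document without naming the derived class.
print(create_document("html").render())
```

Notice how quickly "this is a Factory Method" communicates the intent compared with reverse-engineering the same intent from fully custom code.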
Patterns are familiar to most experienced programmers, and assigning recognizable names to them supports efficient and effective communication about them. In some cases, shifting code slightly to conform to a well-recognized pattern will improve understandability of the code.
But if the code has to be shifted too far, forcing it to look like a standard pattern can sometimes increase complexity. Another potential trap with patterns is feature-itis: using a pattern because of a desire to try out a pattern rather than because the pattern is an appropriate design solution. Overall, design patterns are a powerful tool for managing complexity. You can read more detailed descriptions in any of the good books that are listed at the end of this chapter.
Other Heuristics

The preceding sections describe the major software design heuristics. Following are a few other heuristics that might not be useful quite as often but are still worth mentioning.

Aim for Strong Cohesion

Cohesion arose from structured design and is usually discussed in the same context as coupling.
Cohesion refers to how closely all the routines in a class or all the code in a routine support a central purpose, that is, how focused the class is. Classes that contain strongly related functionality are described as having strong cohesion, and the heuristic goal is to make cohesion as strong as possible. Cohesion is a useful tool for managing complexity because the more that code in a class supports a central purpose, the more easily your brain can remember everything the code does.
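As a small, hypothetical Python illustration of the difference, the first class below has strong cohesion because every method serves one purpose, while the second is a grab bag; both classes and their names are invented for this sketch, not taken from the book.

```python
class TimeSheet:
    """Strong cohesion: everything here supports one central purpose,
    tracking the hours recorded against a single time sheet."""

    def __init__(self):
        self.entries = []  # list of (date, hours) tuples

    def add_entry(self, date, hours):
        self.entries.append((date, hours))

    def total_hours(self):
        return sum(hours for _, hours in self.entries)


class MiscellaneousRoutines:
    """Weak cohesion: loosely related responsibilities thrown together,
    which makes the class harder to name, remember, and change safely."""

    def add_time_entry(self, sheet, date, hours):
        sheet.add_entry(date, hours)

    def format_currency(self, amount):
        return f"${amount:,.2f}"

    def ping_server(self, host):
        print(f"pinging {host}...")
```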
Thinking about cohesion at the routine level has been a useful heuristic for decades and is still useful today. At the class level, the heuristic of cohesion has largely been subsumed by the broader heuristic of well-defined abstractions, which was discussed earlier in this chapter and in Chapter 6.
Abstractions are useful at the routine level, too, but on a more even footing with cohesion at that level of detail.

Build Hierarchies

In software, hierarchies are found in class hierarchies, and, as Level 4 in an earlier figure illustrated, in routine-calling hierarchies as well.
Hierarchies have been an important tool for managing complex sets of information for well over two thousand years. Aristotle used a hierarchy to organize the animal kingdom. Humans frequently use outlines to organize complex information, as this book does. Researchers have found that people generally find hierarchies to be a natural way to organize complex information.
When they draw a complex object such as a house, they draw it hierarchically.

Assign Responsibilities

Another heuristic is to think through how responsibilities should be assigned to objects. Asking what each object should be responsible for is similar to asking what information it should hide, but I think it can produce broader answers, which gives the heuristic unique value.
Design for Test

A thought process that can yield interesting design insights is to ask what the system will look like if you design it to facilitate testing. Do you need to separate the user interface from the rest of the code so that you can exercise it independently?
Do you need to organize each subsystem so that it minimizes dependencies on other subsystems? Designing for test tends to result in more formalized class interfaces, which is generally beneficial.

Avoid Failure

Civil engineering professor Henry Petroski wrote an interesting book, Design Paradigms: Case Histories of Error and Judgment in Engineering (Petroski 1994), that chronicles the history of failures in bridge design. Petroski argues that many spectacular bridge failures have occurred because of focusing on previous successes and not adequately considering possible failure modes.
He concludes that failures like the Tacoma Narrows bridge could have been avoided if the designers had carefully considered the ways the bridge might fail and not just copied the attributes of other successful designs.
Choose Binding Time Consciously

Code that binds early tends to be simpler, but it also tends to be less flexible. Sometimes you can get a good design insight from asking questions like these: What if I bound these values later? What if I initialized this table right here in the code? What if I read the value of this variable from the user at run time?

Make Central Points of Control

Control can be centralized in classes, routines, preprocessor macros, and include files; even a named constant is an example of a central point of control.
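A small Python sketch, with invented names, of how those binding-time questions interact with a central point of control: the connection limit below is first bound early through a single named constant, and then bound later by reading the environment at run time.

```python
import os

# Early binding: the limit is fixed in one named constant, which also
# gives every caller a single central point of control to look at.
MAX_CONNECTIONS = 16


def pool_size_bound_early() -> int:
    return MAX_CONNECTIONS


# Later binding: the same limit is read from the environment at run time.
# More flexible, but now the value can vary from run to run.
def pool_size_bound_late() -> int:
    return int(os.environ.get("MAX_CONNECTIONS", MAX_CONNECTIONS))
```

Changing the early-bound value means editing exactly one line, which is the reduced-complexity benefit described next.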
The reduced-complexity benefit is that the fewer places you have to look for something, the easier and safer it will be to change.

Consider Using Brute Force

A brute-force solution that works is better than an elegant solution that doesn't work. It can take a long time to get an elegant solution to work. In describing the history of searching algorithms, for example, Donald Knuth pointed out that even though the first description of a binary search algorithm was published in 1946, it took another 16 years for someone to publish an algorithm that correctly searched lists of all sizes (Knuth 1998). A binary search is more elegant, but a brute-force, sequential search is often sufficient.
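The trade-off is easy to see in a short Python sketch; the function names are mine, and the binary search assumes its input is already sorted.

```python
def sequential_search(values, target):
    """Brute force: examine every element. Simple, and correct for any list."""
    for index, value in enumerate(values):
        if value == target:
            return index
    return -1


def binary_search(sorted_values, target):
    """More elegant and much faster on large sorted lists, but historically
    easy to get wrong at the boundaries, as Knuth's example suggests."""
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_values[mid] == target:
            return mid
        elif sorted_values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1


assert sequential_search([9, 4, 7], 7) == 2
assert binary_search([1, 3, 5, 7, 9], 9) == 4
```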
Draw a Diagram

Diagrams are another powerful heuristic tool. A picture is worth 1000 words, kind of. You actually want to leave out most of the words because one point of using a picture is that a picture can represent the problem at a higher level of abstraction.
Sometimes you want to deal with the problem in detail, but other times you want to be able to work with more generality.

Keep Your Design Modular

The concept of modularity is related to information hiding, encapsulation, and other design heuristics.

Guidelines for Using Heuristics

One of the original books on heuristics in problem solving was G. Polya's How to Solve It (Polya 1957). The figure below is a summary of his approach, adapted from a similar summary in his book (emphases his).
1. Understanding the Problem. You have to understand the problem. What is the unknown? What are the data? What is the condition? Is it possible to satisfy the condition? Is the condition sufficient to determine the unknown? Or is it insufficient? Or redundant? Or contradictory? Draw a figure. Introduce suitable notation. Separate the various parts of the condition. Can you write them down?

2. Devising a Plan. Find the connection between the data and the unknown. You might be obliged to consider auxiliary problems if you can't find an intermediate connection. You should eventually come up with a plan of the solution. Have you seen the problem before? Or have you seen the same problem in a slightly different form? Do you know a related problem? Do you know a theorem that could be useful? Look at the unknown! And try to think of a familiar problem having the same or a similar unknown. Here is a problem related to yours and solved before. Can you use it? Can you use its result? Can you use its method? Should you introduce some auxiliary element in order to make its use possible? Can you restate the problem? Can you restate it still differently? Go back to definitions. If you cannot solve the proposed problem, try to solve some related problem first. Can you imagine a more accessible related problem? A more general problem? A more special problem? An analogous problem? Can you solve a part of the problem? Keep only a part of the condition, drop the other part; how far is the unknown then determined, how can it vary? Can you derive something useful from the data? Can you think of other data appropriate for determining the unknown? Can you change the unknown or the data, or both if necessary, so that the new unknown and the new data are nearer to each other? Did you use all the data? Did you use the whole condition? Have you taken into account all essential notions involved in the problem?

3. Carrying out the Plan. Carry out your plan. Carrying out your plan of the solution, check each step. Can you see clearly that the step is correct? Can you prove that it's correct?

4. Looking Back. Examine the solution. Can you check the result? Can you check the argument? Can you derive the result differently? Can you see it at a glance? Can you use the result, or the method, for some other problem?

Figure: G. Polya developed an approach to problem solving in mathematics that is also useful in solving problems in software design (Polya 1957).

If you are stuck, write a short test program. Try a completely different approach. Think of a brute-force solution. Keep outlining and sketching with your pencil, and your brain will follow. If all else fails, walk away from the problem. Literally go for a walk, or think about something else before returning to the problem.
Why fight your way through the last 20 percent of the design when it will drop into place easily the next time through? Why make bad decisions based on limited experience with the design when you can make good decisions based on more experience with it later?
Design Practices

This section describes design practice heuristics, steps you can take that often produce good results.

Iterate

You might have had an experience in which you learned so much from writing a program that you wished you could write it again, armed with the insights you gained from writing it the first time.
The same phenomenon applies to design, but the design cycles are shorter and the effects downstream are bigger, so you can afford to whirl through the design loop a few times. Design is an iterative process. The big picture you get from working with high-level issues will help you to put the low-level details in perspective.
The details you get from working with low-level issues will provide a foundation in solid reality for the high-level decisions. Many programmers, many people, for that matter, have trouble ranging between high-level and low-level considerations. (For more on this, see Chapter 24, "Refactoring.") You learn things on each design attempt that can improve your overall design. After trying a thousand different materials for a light bulb filament with no success, Thomas Edison reportedly said that his time had not been wasted, because he had discovered a thousand things that don't work.
Divide and Conquer

Divide the program into different areas of concern, and then tackle each of those areas individually. If you run into a dead end in one of the areas, iterate! Incremental refinement is a powerful tool for managing complexity. As Polya recommended in mathematical problem solving, understand the problem, devise a plan, carry out the plan, and then look back to see how you did (Polya 1957).

Top-Down and Bottom-Up Design Approaches

Top-down design begins at a high level of abstraction.
You define base classes or other nonspecific design elements. As you develop the design, you increase the level of detail, identifying derived classes, collaborating classes, and other detailed design elements. Bottom-up design starts with specifics and works toward generalities. It typically begins by identifying concrete objects and then generalizes aggregations of objects and base classes from those specifics.
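As a hypothetical Python sketch of the top-down direction, here sketched with routines rather than classes, a first design pass names the top-level logic in terms of operations that don't exist yet, and later passes supply the detail; all of the names and the deduction rate are invented for the example.

```python
# First pass, high level of abstraction: express the overall flow in
# terms of lower-level operations that will be designed later.
def process_payroll(employees):
    for employee in employees:
        gross = compute_gross_pay(employee)
        deductions = compute_deductions(employee, gross)
        issue_payment(employee, gross - deductions)


# Later passes increase the level of detail, one level at a time.
def compute_gross_pay(employee):
    return employee["hours"] * employee["rate"]


def compute_deductions(employee, gross):
    return gross * 0.2  # placeholder rate, to be refined in a later pass


def issue_payment(employee, amount):
    print(f"pay {employee['name']}: {amount:.2f}")


process_payroll([{"name": "Ada", "hours": 40, "rate": 50.0}])
```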
Here are the arguments on both sides.

The divide-and-conquer process is iterative in a couple of senses. You keep going for several levels. You decompose a program one way. You make a choice and see what happens.
Then you start over and decompose it another way and see whether that works better. How far do you decompose a program? Continue decomposing until it seems as if it would be easier to code the next level than to decompose it.
Work until you become somewhat impatient at how obvious and easy the design seems. If you need to work with something more tangible, try the bottom-up design approach.
You might identify a few low-level responsibilities that you can assign to concrete classes. For example, you might know that a system needs to format a particular report, compute data for that report, center its headings, display the report on the screen, print the report on a printer, and so on.
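Sketching that example bottom-up in Python: the concrete pieces come first, and a general shape (here, interchangeable report writers) is only noticed and factored out afterward. All of the names are invented to track the responsibilities listed above.

```python
# Concrete, low-level responsibilities identified first.
def compute_report_data(records):
    return sorted(records)


def center_heading(text, width=60):
    return text.center(width)


class ScreenReportWriter:
    def write(self, lines):
        for line in lines:
            print(line)


class PrinterReportWriter:
    def write(self, lines):
        print(f"spooling {len(lines)} lines to the printer...")


# The generalization emerges from the specifics: both writers share the
# same interface, so higher-level code can be built on either of them.
def produce_report(records, writer):
    lines = [center_heading("Quarterly Report")]
    lines.extend(str(item) for item in compute_report_data(records))
    writer.write(lines)


produce_report([3, 1, 2], ScreenReportWriter())
```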
In some other cases, major attributes of the design problem are dictated from the bottom. You might have to interface with hardware devices whose interface requirements dictate large chunks of your design. One starts from the general problem and breaks it into manageable pieces; the other starts with manageable pieces and builds up a general solution. People are good at breaking something big into smaller components, and programmers are especially good at it. Another strength of top-down design is that you can defer construction details.
One strength of the bottom-up approach is that it typically results in early identification of needed utility functionality, which results in a compact, well-factored design. Most people are better at taking one big concept and breaking it into smaller concepts than they are at taking small concepts and making one big one.
To summarize, top down tends to start simple, but sometimes low-level complexity ripples back to the top, and those ripples can make things more complex than they really needed to be. Design is a heuristic process, which means that no solution is guaranteed to work every time. Design contains elements of trial and error.
Try a variety of approaches until you find one that works well.

Experimental Prototyping

You might not know if a particular database organization will work until you know whether it will meet your performance goals. A general technique for addressing these questions at low cost is experimental prototyping. You just need to know enough to approximate the problem space: number of tables, number of entries in the tables, and so on.
You can then write very simple prototyping code that uses tables with names like Table1 and Table2 and columns with names like Column1 and Column2, populate the tables with junk data, and do your performance testing.
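Here is a minimal sketch of that kind of throwaway prototype, using Python's built-in sqlite3 module as a stand-in for whichever database is actually being evaluated. The table and column names follow the junk-naming idea above, and the row count is an arbitrary guess at the problem space.

```python
import random
import sqlite3
import time

# Approximate the problem space with junk data, measure, then throw
# the code away.
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE Table1 (Column1 INTEGER, Column2 TEXT)")
junk_rows = [(random.randint(0, 1_000_000), "junk") for _ in range(100_000)]
connection.executemany("INSERT INTO Table1 VALUES (?, ?)", junk_rows)
connection.commit()

start = time.perf_counter()
cursor = connection.execute(
    "SELECT COUNT(*) FROM Table1 WHERE Column1 BETWEEN 1000 AND 2000"
)
print("matching rows:", cursor.fetchone()[0])
print("query time:", round(time.perf_counter() - start, 4), "seconds")
```

If the measured numbers are nowhere near the performance goals, you have your answer for an afternoon's work instead of a full implementation.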
Prototyping also works poorly when the design question is not specific enough.