Engineers solve practical problems in real life, using a detailed understanding of reality. They do so in a particular social context:
There is a key difference. If mathematicians or scientists communally agree to a wrong answer, their reputations are perhaps tarnished, they reinvestigate, and they propose new/better hypotheses. If engineers make mistakes (even if they trusted the current scientific theories), buildings fall down, rockets explode, and people get electrocuted. Therefore, society has demanded that engineers have degrees from accredited institutions (ABET-accredited in the US), plus in some cases professional licenses.
polya57 explains problem-solving, and the need to translate complex problems into simpler ones. It is nominally for mathematicians, but applies to engineering as well. The question is, how does one do this in practice? From J. J. Shipp's chapter on "Hair-care Products" (in williams96, referenced in Chemical_Engineering):
Engineering is usually an iterative cycle of idea and analysis, as both the customer and the engineer learn more about the problem space and the desired outcome.
Ideas come from independent design shops, dreams at night, zigzag thinking, brainstorming, focus groups, taking a vacation, etc. In other words, you have to be somewhat away from the task to see it in new ways.
Analysis starts with knowing what you want to analyze (the factors or features to be tested), and then constructing and running those tests. There is generally a tradeoff between the richness/fidelity of the test environment and time/money required. Therefore one usually plans a sequence of analyses from quick-and-dirty (to filter out really bad designs), to full-up/near-production-ready.
Notice "generally" and "usually". There is always the possibility that good ideas were incorrectly filtered out early in the sequence. If the payoff is high enough, it pays to do parallel development: totally separate design teams work on the same problem, with effort made to keep the teams isolated so they do not contaminate each other's thinking. Sometimes this takes the form of competitive bidding on government contracts; sometimes it is competing firms in the marketplace; sometimes it is risk-abatement alternatives within a single firm.
What has actually worked in the past? Dimensions of frames, ribs, etc. for wooden boats follow this tradition. Industrial-age steam engines and boilers, and iron bridge truss dimensions, follow this tradition. Even today, an electronics hobbyist might throw in a few 0.1 uF capacitors to filter high frequencies in a circuit.
These findings are often collated in handbooks filled with tables.
In each case the assumption is that the components are comparable with the experience base, and that the design isn't radically different. If you try to use pine with oak-based timber-frame dimensions there will be problems. If you try to do a 50-story timber-frame building (even with oak), there will be problems.
Sometimes the problem space is different enough that collected experience does not provide answers. We suspect that the problem space is complex enough to defy computation, but that it can be approximated with relatively cheap (compared to full-up) scale models.
These include boat-hull half-models, 1/12 or 1/3 scale boat prototypes, tank tests for hull efficiency, car and airplane wind tunnels for aerodynamics, tidal basin models, cyclone generators, architectural models, etc. In each case one has to validate the model against reality, and perhaps make sizing adjustments to ensure the findings are relevant to the task at hand.
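Those sizing adjustments usually come down to dimensional analysis. A minimal sketch in Python (all parameter values are assumed for illustration, not taken from the text) of Reynolds-number matching for a 1/12-scale hull test:

```python
# Hypothetical example: what tow speed does a 1/12-scale hull model need
# so that its Reynolds number matches the full-scale boat in the same water?

def reynolds(velocity, length, kinematic_viscosity):
    """Re = V * L / nu -- ratio of inertial to viscous forces (dimensionless)."""
    return velocity * length / kinematic_viscosity

nu_water = 1.0e-6        # kinematic viscosity of water, m^2/s (approx., 20 C)
full_length = 12.0       # full-scale waterline length, m (assumed)
full_speed = 5.0         # full-scale speed, m/s (assumed)
scale = 1.0 / 12.0       # model scale

re_full = reynolds(full_speed, full_length, nu_water)
# With the same fluid, matching Re forces the model to run 1/scale times faster:
model_speed = full_speed / scale
re_model = reynolds(model_speed, full_length * scale, nu_water)

print(f"full-scale Re = {re_full:.3e}")
print(f"model speed for equal Re = {model_speed:.1f} m/s, Re = {re_model:.3e}")
```

The answer (the model must be towed 12 times faster than the full-scale boat) shows why exact similarity is often impractical, and why validation against reality and sizing corrections are needed; real tow tanks commonly match the Froude number instead and correct the viscous terms separately.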
Full-sized prototypes are used when neither collected experience nor scale models capture the problem space adequately for good design and analysis. The prototype can be a fully functional item, or can address only specific aspects of the problem. For example, a full-sized airplane cockpit made of plywood may be fine for instrument-panel layout, but not for tests of electromagnetic interference.
Prototypes are generally more expensive than other testing techniques, so they are reserved for the last step before full production. Further, as other techniques (especially computational modeling) become more powerful, fewer prototypes are needed.
As scientific theories became richer, computational tools more powerful, and design components more standardized, there has been a move to computational modeling.
Some (very few) physical problems allow exact solution. That is, you work out the scientifically correct mathematical formula, enter the parameters, and obtain the design. In textbooks this works fine. In real life the math can be quite tedious to juggle, so one uses a symbolic math package (e.g., Maxima) to manipulate the equations.
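As a sketch of that workflow in Python's SymPy (an OSS alternative to the Maxima package named above; the cantilever example is illustrative, not from the text): derive the tip deflection of a uniformly loaded cantilever by symbolically integrating the Euler-Bernoulli beam relation E*I*v'' = M(x).

```python
# Illustrative symbolic derivation: uniformly loaded cantilever, fixed at x=0.
import sympy as sp

x, L, w, E, I = sp.symbols('x L w E I', positive=True)

# Bending moment at station x under uniform load w per unit length:
M = -w * (L - x)**2 / 2

# Euler-Bernoulli: E*I*v'' = M.  Integrate twice, enforcing v'(0)=0, v(0)=0
# (the fixed end) by subtracting the value of each antiderivative at x=0.
v_slope = sp.integrate(M / (E * I), x)
v_slope = v_slope - v_slope.subs(x, 0)     # boundary condition v'(0) = 0
v = sp.integrate(v_slope, x)
v = v - v.subs(x, 0)                       # boundary condition v(0) = 0

tip_deflection = sp.simplify(v.subs(x, L))
print(tip_deflection)                      # the handbook value -w*L**4/(8*E*I)
```

Simplifying v(L) recovers the formula one would otherwise look up in a handbook table, and the same script can be re-run instantly when the loading or boundary conditions change.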
More typically, the problem cannot be formulated as a solvable set of symbolic equations. Instead, various iterative approximations are used. These usually require large grids (hundreds or thousands of nodes), recalculated hundreds or thousands of times.
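A minimal sketch of such an iterative grid calculation in Python (the problem and values are chosen for illustration): steady-state heat conduction along a bar, relaxed node by node until the grid stops changing.

```python
# Jacobi relaxation of the 1-D Laplace equation on a 101-node grid:
# each interior node is repeatedly replaced by the average of its neighbors.
n = 101
T = [0.0] * n
T[0], T[-1] = 100.0, 0.0              # fixed end temperatures (boundary conditions)

for sweep in range(20000):            # iterate until converged or budget exhausted
    max_change = 0.0
    new_T = T[:]
    for i in range(1, n - 1):
        new_T[i] = 0.5 * (T[i - 1] + T[i + 1])   # discrete Laplace equation
        max_change = max(max_change, abs(new_T[i] - T[i]))
    T = new_T
    if max_change < 1e-6:
        break

# The converged profile is the straight line between the two end temperatures.
print(f"midpoint temperature ~ {T[n // 2]:.2f}")
```

Each sweep touches every interior node, and thousands of sweeps may be needed, which is exactly why these methods want fast computers.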
Despite the cost of computers and the setup time, this can be dramatically cheaper and faster than any other kind of analysis. The biggest payoff is usually that the iterative idea/analysis cycle can spin much more rapidly, thus giving much better quality for a given time-to-market.
The computational tools started as domain-specific (e.g., architectural, naval, electronic). However, it turns out that many physical phenomena obey laws with similar mathematical forms. Once you have computational tools to construct and analyze such models, the tools apply broadly. Continuous-flow problems in any technology can be modeled with Finite Difference or Finite Element Methods (FEM). Feedback systems in any technology can be modeled with Laplace transforms and matrix solution of systems of differential equations. Vibrations in any technology can be addressed with time-domain/frequency-domain analyses.
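As a sketch of that commonality (a mass-spring-damper with assumed parameter values): rewrite m*x'' + c*x' + k*x = 0 as a first-order matrix system and read the vibration behavior off the eigenvalues, which are also the poles of the Laplace-domain transfer function.

```python
# Illustrative example: damped mass-spring system as dX/dt = A @ X,
# with state X = [position, velocity].  Parameter values are assumed.
import numpy as np

m, c, k = 1.0, 0.4, 25.0                 # mass, damping, stiffness (assumed)
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])

eigvals = np.linalg.eigvals(A)           # poles of the transfer function
damped_freq = abs(eigvals[0].imag)       # damped natural frequency, rad/s
decay_rate = -eigvals[0].real            # exponential decay rate, 1/s

print(f"damped frequency {damped_freq:.3f} rad/s, decay rate {decay_rate:.3f}/s")
```

The same two lines of matrix setup serve an electrical RLC circuit, a hydraulic servo, or a building sway problem; only the physical interpretation of m, c, and k changes.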
The techniques all rely on iterative calculations -- lots of them. You need a computer. In the olden days, this was a room full of people with slide rules, but for the past 50 years it has been electronic computers. While the techniques can be run on 386-class PCs, it is more appropriate these days to use a multicore 64-bit computer with lots of fast RAM. A box with an AMD X2 64-bit dual-core CPU, 2+ GB RAM, and a 100+ GB disk, running Linux and using OSS tools, will cost under $1000. For tougher problems, a mini supercomputer (e.g., 4 such CPU boards running Beowulf) can be built for under $2500.
The techniques almost all rely on matrix math, and usually expect BLAS and LAPACK to be available. BLAS and LAPACK can in turn be optimized for specific platforms via ATLAS. If you install an Ubuntu AMD x86_64 workstation, you get all these tools for free. You also get some analysis tools, though you may want to add your own -- sticking with OSS if at all possible.
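A quick check that the stack is working, in Python/NumPy, which delegates dense linear algebra to whatever BLAS/LAPACK is installed (the matrix here is illustrative):

```python
# Solve a small linear system; numpy.linalg.solve routes to LAPACK's
# gesv routine (LU factorization plus back-substitution).
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])          # a small stiffness-style matrix
b = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)             # verify the solution
print(x)
```

Because the heavy lifting happens inside LAPACK, swapping in an ATLAS-tuned build speeds this up with no changes to the calling code.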
In other words, you can do serious analyses for under $1000 and state-of-the-art work for under $3000. Plus of course your time for understanding what is going on.
Creator: Harry George