The Art of Designing Embedded Systems / Jack Ganssle. Library of Congress Cataloging-in-Publication Data: Ganssle, Jack G. The art of designing embedded systems / Jack G. Ganssle. Jack Ganssle has been shaping the careers of embedded engineers for more than 20 years, through his books, numerous articles, and a weekly column.
The art of designing embedded systems / Jack Ganssle. — 2nd ed. The Art of Designing Embedded Systems is part primer and part reference book, written with the needs of practicing embedded engineers in mind.
It's part primer and part reference, aimed at practicing embedded engineers, whether working on the code or the hardware design.
Embedded systems suffer from a chaotic, ad hoc development process. This book lays out a very simple seven-step plan to get firmware development under control. There are no formal methodologies to master; the ideas are immediately useful. Most designers are unaware that code complexity grows faster than code size.
Ganssle shows ways to get better code and hardware designs by integrating hardware and software design. He also covers troubleshooting, real time and performance issues, relations with bosses and coworkers, and tips for building an environment for creative work.
Get better systems out faster, using the practical ideas discussed in Art of Designing Embedded Systems. Whether you're working with hardware or software, this book offers a unique philosophy of development guaranteed to keep you interested and learning.
Jack Ganssle has 30 years' experience developing embedded systems. Ganssle takes a direction that I find refreshing. Rather than presenting a one-size-fits-all methodology-a Ganssle Unified Process-he shares anecdote after anecdote, suggestion after suggestion.
He's not telling design teams to change their strategies. He just refines them, and throws in a few more tactics along the way, with a conversational style that makes you think you're chatting away at a conference, rather than reading a book about electronics.
This book presents a philosophy of development, instead of a cookbook of directions.

Too many computer-based products are junk. Companies die or lose megabucks as a result of prematurely shipping something that just does not work.
Consumers are frustrated by the constant need to reset their gadgets and by products that suffer the baffling maladies of the binary age. Long-term success will surely result from shipping a quality product on time. Cut a few of the less important features to get a first-class device to market fast.
The computer age has brought the advent of the feature-rich product that no one understands or uses. I wish the silly thing could reliably establish a connection! Use a feature matrix, implementing each feature in a logical order, and make each one perfect before you move on. Then at any time management can make a reasonable decision: ship now with the features that are complete, or hold off for more. This means you must break down the code by feature, and only then apply top-down decomposition to the components of each feature.
Management may complain that this approach to development is, in a sense, planning for failure. They want it all, on schedule: an impossible dream! Management uses the same strategy in making their projections. No wise CEO creates a cash flow plan that the company must hit exactly to survive. So, while partitioning by features will not reduce complexity, it leads to an earlier shipment with less panic, as a workable portion of the product is complete at all times.
Yet test is a necessary part of software development. Firmware testing is dysfunctional and unlikely to be successful when postponed till the end of the project. The panic to ship overwhelms common sense; items at the end of the schedule are cut or glossed over. Test is usually a victim of the panic. The only reasonable way to build an embedded system is to start integrating today, now, on the day you first crank out a line of code.
The biggest schedule killers are unknowns; only testing and actually running code and hardware will reveal the existence of these unknowns. Build the startup code. Get chip selects working. Create stub tasks or calling routines. Glue in purchased packages and prove to yourself that they work as advertised and as required. This is a good time to slip in a ROM monitor, perhaps enabled by a secret command set. At first, perhaps the system does nothing more than run a null loop.
Using your development tools, test this small scale chunk of the application. Start adding the lowest-level code, testing as you go. Soon your system will have all of the device drivers in place (tested), ISRs (tested), the startup code (tested), and the major support items such as comm packages and the RTOS (again, tested). Integration of your own applications code can then proceed in a reasonably orderly manner, plopping modules into a known-good code framework, facilitating testing at each step.
The point is to immediately build a framework that operates, and then drop features in one at a time, testing each as it becomes available. Test and integration are no longer individual milestones; they are part of the very fabric of development. Success requires a determination to constantly test. Every day, or at least every week, build the entire system using all of the parts then available and ensure that things work correctly.
Test constantly. Fix bugs immediately. It ensures that the system really can be built and linked. Having lots of little progress points, where we see our system doing something, is tons more satisfying than coding for a year before hitting the ON switch. Mastering the complexities up front removes the fear and helps us work confidently and efficiently.
We simply have to invent a solution to this dysfunctional cycle of starting firmware testing late because of unavailable hardware! And there are a lot of options. One of the cheapest and most available tools around is the desktop PC. Use it! One compelling reason to use an embedded PC in non-cost-sensitive applications is that you can do much of the development on a standard PC.
If your project permits, consider embedding a PC and plan on writing the code using standard desktop compilers and other tools. Cross-develop the code on a PC until hardware comes online. Using a processor-specific timer or serial channel? Hide it behind a thin driver layer so a PC equivalent can stand in until the hardware arrives. This step also helps prove the hardware design early, a benefit to everyone.

IF statements and control loops create a logical flow to implement algorithms and applications. Yet embedded systems are the realm of real time, where getting the result on time is just as important as computing the correct answer.
A hard real-time task or system is one where an activity simply must be completed-always-by a specified deadline. The deadline may be a particular time or time interval, or may be the arrival of some event. Hard real-time tasks fail, by definition, if they miss such a deadline. Notice that this definition makes no assumptions about the frequency or period of the tasks.
A microsecond or a week - if missing the deadline induces failure, then the task has hard real-time requirements. Few designers manage to get their product to market without suffering metaphorical scars from battling interrupt service routines (ISRs). Too many of us become experts at ISRs the same way we picked up the secrets of the birds and the bees - from quick conversations in the halls and on the streets with our pals.
New developers rail against interrupts because they are difficult to understand. Somehow peripherals have to tell the CPU that they require service. Maybe a timer counted down and must let the processor know that an interval has elapsed. Novice embedded programmers naturally lean toward polled communication. The code simply looks at each device from time to time, servicing the peripheral if needed. Why, then, not write polled code? The reasons are legion: Polling consumes a lot of CPU horsepower.
Polled code is generally an unstructured mess. Your code is going to be a nightmare unless you encapsulate hardware-handling routines.

Real Time Means Right Now!

Polling leads to highly variable latency. If the code is busy handling something else (just doing a floating-point add on an 8-bit CPU might cost hundreds of microseconds), the device is ignored.
Properly managed interrupts can result in predictable latencies of no more than a handful of microseconds. Use an ISR pretty much any time a device can asynchronously require service. I did a tape interface once, assuming the processor was fast enough to handle each incoming byte via an interrupt.
Only polling worked.

Vectoring

Though interrupt schemes vary widely from processor to processor, most modern chips use a variation of vectoring. The entire rationale behind ISRs is to accept, service, and return from the interrupt, all with no visible impact on the code.
It then acknowledges the interrupt, issuing a unique interrupt acknowledge cycle recognized by the interrupting hardware. During this cycle the device places an interrupt code on the data bus that tells the processor where to find the associated vector in memory.
The ISR does whatever it must, then returns with all registers intact to the normal program flow. The main-line application never knows that the interrupt occurred. Figure 4-1 and a companion figure show two views of how an x86 processor handles an interrupt. The acknowledge cycle resembles an ordinary bus cycle; instead, INTACK going low tells the system that this cycle is unique. A pair of 16-bit reads extracts the 32-bit ISR address. An important point: forget to initialize the device, and the system will crash as the device supplies a bogus vector number.
Some peripherals and interrupt inputs will skip the acknowledge cycle because they have predetermined vector addresses. Further, you can generally enable and disable interrupts from specific devices by appropriately setting bits in peripheral or interrupt control registers.
Before invoking the ISR the hardware disables or reprioritizes interrupts.
At first glance the vectoring seems unnecessarily complicated. Its great advantage is support for many varied interrupt sources. Each device inserts a different vector; each vector invokes a different ISR.
The vectoring scheme also limits pin counts, since it requires just one dedicated interrupt line. This greatly simplifies the code, but unless you add a lot of manual processing, it limits the number of interrupt sources a program can conveniently handle. Give yourself a break and design hardware and software that eases the debugging process. Poorly coded interrupt service routines are the bane of our industry.
A few simple rules can alleviate many of the common problems. List each interrupt and give an English description of what the routine should do. Figure the maximum, worst-case time available to service each. This is your guide: the map is a budget. It gives you an assessment of where interrupt-servicing time will be spent. One number only is cast in stone: there is no way to spend more than 100% of the CPU's time. Approximate the complexity of each ISR.
The cardinal rule of ISRs is to keep the handlers short. If the interrupt starts something truly complex, have the ISR spawn off a task that can run independently. This is an area where an RTOS is a real asset, as task management requires nothing more than a call from the application code. Short, of course, is measured in time, not in code size. Avoid loops. Avoid long complex instructions repeating moves, hideous math, and the like. Think like an optimizing compiler: Can you move it out of the ISR into some less critical section of code?
For example, if an interrupt source maintains a time-of-day clock, simply accept the interrupt and increment a counter. Then return. Let some other chunk of code-perhaps a non-real-time task spawned from the ISR-worry about converting counts to time and day of the week.
Ditto for command processing. I see lots of systems where an ISR receives a stream of serial data, queues it to RAM, and then executes commands or otherwise processes the data.
Bad idea! The ISR should simply queue the data. An analogous rule to keeping ISRs short is to keep them simple. Complex ISRs lead to debugging nightmares, especially when the tools may be somewhat less than adequate. An old rule of software design is to use one function (in this case the serial ISR) to do one thing. A real-time analogy is to do things only when they need to get done, not at some arbitrary rate.
Reenable interrupts as soon as practical in the ISR. Do the hardwarecritical and non-reentrant things up front, then execute the interrupt enable instruction. Give other ISRs a fighting chance to do their thing.
Fill all of your unused interrupt vectors with a pointer to a null routine (see figure). During debug, always set a breakpoint on this routine. Any spurious interrupt, due to hardware problems or misprogrammed peripherals, will then stop the code cleanly and immediately, giving you a prayer of finding the problem in minutes instead of weeks.
Hardware Issues

Lousy hardware design is just as deadly as crummy software. Modern high-integration CPUs bristle with on-board peripherals. Interrupts from these sources pose no hardware design issues, since the chip vendors take care of this for you.
All of these chips, though, do permit the use of external interrupt sources. Though some chips do permit edge-triggered inputs, the vast majority of them require you to assert and hold INTR until the processor issues an acknowledgment, such as from the interrupt ACK pin.
A slight slip in asserting the vector can make the chip wander to an erroneous address. If the INTR must be externally synchronized to clock, do exactly what the spec sheet demands.
If your system handles a really fast stream of data, consider adding hardware to supplement the code. A data acquisition system I worked on accepted data at microsecond rates. Each input generated an interrupt, causing the code to stop what it was doing, vector to the ISR, push registers like wild, and then reverse the process at the end of the sequence. If the system was busy servicing another request, it could miss the interrupt altogether. A cheap FIFO chip eliminated all of the speed issues.
During this process additional data might come along and be written to the FIFO, but this happened transparently to the code. Most designs seem to connect FULL to the interrupt line. Conceptually simple, this results in the processor being interrupted only after the entire buffer is full. Interrupt instead as soon as the FIFO is no longer empty, and a single byte arriving will cause the micro to read the FIFO. This has the advantage of keeping the FIFOs relatively empty, minimizing the chance of losing data.
It also makes a big demand on CPU time, generating interrupts with practically every byte received. Some processors do amazing things to service an interrupt, stacking addresses and vectoring indirectly all over memory.
The ISR itself no doubt pushes lots of registers, perhaps also preserving other machine information. In mission-critical systems it might also make sense to design a simple circuit that latches the combination of FULL and an incoming new data item. This overflow condition could be disastrous and should be signaled to the processor.
Total system cost is the only price issue in embedded design. The figure shows the result of an Intel study of serial receive interrupts coming to a 386EX processor.
At the highest rates - around 53,000 characters per second - the CPU is almost completely loaded servicing interrupts.

C or Assembly?

If the routine will be in assembly language, convert the time to a rough number of instructions. You have no idea how long a line of C will take. A string compare may result in a runtime library call with totally unpredictable results.
A FOR loop may require a few simple integer comparisons or a vast amount of processing overhead. And so, we write our C functions in a fuzz of ignorance, having no concept of execution times until we actually run the code.
Rather, this is more of a rant against the current state of compiler technology. Years ago assemblers often produced t-state counts on the listing files, so you could easily figure how long a routine ran.
Though there are lots of variables (a string compare will take a varying amount of time depending on the data supplied to it), certainly many C operations will give deterministic results.
Until compilers improve, use C if possible, but look at the code generated for a typical routine. Any call to a runtime routine should be immediately suspect, as that routine may be slow or non-reentrant, two deadly sins for ISRs.
Look at the processing overhead - how much pushing and popping takes place? Does the compiler spend a lot of time manipulating the stack frame? You may find one compiler pitifully slow at interrupt handling. Either try another, or switch to assembly. The way we write performance-bound C code is truly astounding. Write some code, compile and run it, and only then discover whether it is fast enough. A much more reasonable approach would be to get listings from the compiler with typical per-statement execution times.
To get actual times, of course, the compiler needs to know a lot about our system, including clock rates and wait states. Vendors tell me that cache, pipelines, and prefetchers make modeling code performance too difficult. I disagree. Please, Mr. Compiler Vendor, give us some sort of indication about the sort of performance we can expect! Give us a clue about how long a runtime routine or floating-point operation takes.
Some developers tell me their compiler is so buggy they have to look for bugs in the assembly listing after each and every compile - and then make a more or less random change and recompile, hoping to lure the tool into creating correct code.
Be especially wary of using complex data structures in ISRs. Watch what the compiler generates. You may gain an enormous amount of performance by sizing an array at an even power of 2, perhaps wasting some memory, but avoiding the need for the compiler to generate complicated and slow indexing code.
An old software adage recommends coding for functionality first, and speed second. Why cripple the entire system because of a little bit of interrupt code? Code the slower ISRs in C. Remember that most processors service an interrupt with the following steps: The device hardware generates the interrupt pulse. The interrupt controller if any prioritizes multiple simultaneous requests and issues a single interrupt to the processor.
The CPU responds with an interrupt acknowledge cycle. The controller drops an interrupt vector on the databus. The CPU reads the vector and computes the address of the user-stored vector in memory.
It then fetches this value. A generation of structured programming advocates has caused many of us to completely design the system and write all of the code before debugging. Yet peripherals never behave quite as you expect. Bits might be inverted or transposed, or maybe there are a dozen complex configuration registers that need to be set up. Work with your system, understand its quirks, and develop notes about how to drive each I/O device.
Use these notes to write your code. Similarly, start prototyping your interrupt handlers with a hollow shell of an ISR. Set a breakpoint on the ISR. You may have misprogrammed the table entry or the interrupt controller, which would then supply a wrong vector to the CPU.
Trigger collection on the interrupt itself, or on any read from the vector table in RAM. You should see the interrupt controller drop a vector on the bus. Is it the right one? If not, perhaps the interrupt controller is misprogrammed.
Within a few instructions if interrupts are on look for the read from the vector table. Does it access the right table address? Break out the logic analyzer and check this carefully. Frustratingly often the vector is fine; the interrupt just does not occur.
Depending on the processor and peripheral mix, only a handful of things could be wrong: Did you enable interrupts in the main routine? Without an EI instruction, no interrupt will ever occur. Have you programmed the device to allow interrupt generation? Modern peripherals are often incredibly complex. The only general advice is to be sure your ISR reenables interrupts before returning. Then look into the details of your processor and peripherals.
Look at these lines with a scope. Some embedded CPUs have similar interrupt controllers built into the processor. You may need to service the peripherals as well before another interrupt comes along. Depending on the part, you may have to read registers in the peripheral to clear the interrupt condition. UARTs and timers usually require this. Some have peculiar requirements for clearing the interrupt condition, so be sure to dig deeply into the databook.

Finding Missing Interrupts

A device that parses a stream of incoming characters will probably crash very obviously if the code misses an interrupt or two.
One that counts interrupts from an encoder to measure position may only exhibit small precision errors, a tough thing to find and troubleshoot. If the counter always shows a value of zero or one, everything is fine.
Most engineering labs have counters-test equipment that just accumulates pulse counts. I have a scope that includes a counter. Use two of these, one on the interrupt pin and another on the interrupt acknowledge pin. The counts should always be the same. You can build a counter by instrumenting the ISR to increment a variable each time it starts. Either show this value on a display, or probe the variable using your debugger.
If you know the maximum interrupt rate, use a performance analyzer to measure the maximum time in the ISR. Be wary of any code that executes a disable-interrupt instruction. One ancient processor had a wonderful pin that showed interrupt state all of the time.
It was easy to watch this on the scope and look for interrupts that came while interrupts were disabled. Now, having advanced so far, we have no such easy troubleshooting aids. About the best one can do is watch the INTR pin. One design rule of thumb will help minimize missing interrupts: keep ISRs short.

Reentrancy Problems

Well-designed interrupt handlers are largely reentrant.
Reentrant functions are also known as pure code. Too many programmers feel that if they simply avoid self-modifying code, their routines are guaranteed to be reentrant, and thus interrupt-safe. Nothing could be further from the truth. A function is reentrant if, while it is being executed, it can be reinvoked by itself, or by any other routine.
Suppose your main-line routine and the ISRs are all coded in C. The compiler will certainly invoke runtime functions to support floating-point math, I/O, string manipulations, etc. If the runtime package is only partially reentrant, then your ISRs may very well corrupt the execution of the main-line code. This problem is common, but is virtually impossible to troubleshoot, since symptoms result only occasionally and erratically.
If your ISR merely increments a global 32-bit value, maybe to maintain time, it would seem legal to produce code that does nothing more than a quick and dirty increment. Especially when writing code on an 8- or 16-bit processor, remember that the C compiler will surely generate several instructions to do the deed.
Or, if other routines use the variable, the ISR may change its value at the same time other code tries to make sensible use of it. The first solution is to avoid global variables! Globals are an abomination, a sure source of problems in any system, and an utter nightmare in real-time code.
Never, ever pass data between routines in globals unless the following three conditions are fulfilled: reentrancy issues are dealt with via some method, such as disabling interrupts around their use (though I do not recommend disabling interrupts cavalierly, since that affects latency); the globals are absolutely needed because of a clear performance issue, since most alternatives do impose some penalty in execution time; and the global use is limited and well documented. Inside of an ISR, be wary of any variable declared as a static.
Though statics have their uses, the ISR that reenables interrupts, and then is interrupted before it completes, will destroy any statics declared within. On a dare, I once examined firmware embedded in 23 completed products, all of which were shipping to customers.
Every one had this particular problem! One particularly bad system, which had the reentrancy problem inside an ISR, also had the fastest interrupt rate of any of the products examined.
This suggests using a stress test to reveal latent reentrancy defects. Crank up the interrupt rates! If the timer comes once per second, try driving it every millisecond and see how the system responds. Even the perfectly coded reentrant ISR leads to problems. If such a routine runs so slowly that interrupts keep giving birth to additional copies of it, eventually the stack will fill.
Once the stack bangs into your variables, the program is on its way to oblivion. Again, use the stress test! The nonmaskable interrupt, NMI, deserves special care. Power-fail, system shutdown, and imminent disaster are all good things to monitor with NMI. Timer or UART interrupts are not. Using NMI may alleviate symptoms such as missed interrupts, but it only masks deeper problems in the code that must be cured.
NMI will break even well-coded interrupt handlers, since most ISRs are non-reentrant during the first few lines of code where the hardware is serviced.
NMI will thwart your stack-management efforts as well. NMI is usually an edge-triggered signal. Any bit of noise or glitching will cause perhaps hundreds of interrupts. NMI mixes poorly with most tools. Few tools do well with single stepping and setting breakpoints inside of the ISR.

Breakpoint Problems

Using any sort of debugging tool, suppose you set a breakpoint where the ISR starts, and then start single stepping through the code.
All is well. Suddenly, all hell breaks loose. A regularly occurring interrupt such as a timer tick comes along steadily, perhaps dozens or hundreds of times per second. Oddly, the code seems to execute backwards. Consider the case of setting two breakpoints - the first at the start of the ISR and the second much later into the routine. Run to the first breakpoint, stop, and then resume execution. The code may very well stop at the same point, the same first breakpoint, without ever going to the second.
In the case of NMI, though, disaster strikes immediately, since there is no interrupt-safe state. The NMI is free to reoccur at any time, even in the most critical non-reentrant parts of the code, wreaking havoc and despair.
After all, stopping the code stops everything; your entire system shuts down. If your code controls a moving robot arm, for example, and you stop the code as the arm starts moving, it will keep going and going and going. Years ago I worked on a steel gauge weighing many tons; a microprocessor controlled the motion of this monster on railroad tracks. Hit a breakpoint and the system ran off the end of the tracks!
Datacomm is another problem area. You can cheat Heisenberg-at least in debugging embedded code! Trace collects the execution stream of the code in real time, without slowing or altering the flow. Trace changes the philosophy of debugging. No longer does one stop the code, examine various registers and variables, and then timidly step along. With trace your program is running at full tilt, a breakneck pace that trace does nothing to alter.
You capture program flow, and then examine what happened, essentially looking into the past as the code continues on (see figure). Trace shows only what happens on the bus. You can view neither registers nor variables unless an instruction reads or writes them to memory.
You may see the transactions (pushes and pops), but the tool may display neither the variable name nor the data in its native type. No trace buffer captures everything; nor is that desirable, as a trace buffer a hundred million frames deep is simply too much data to plow through. Pick an emulator that offers flexible triggers - breakpoint-like resources that start and stop trace collection.
Are the triggers a pain to set up? Most emulators offer special menus with dozens of trigger configuration options. Although this is essential for finding the most obscure bugs, it is just too much work for the usual debugging scenario, where you simply want to start collection when a particular source line executes. Simple triggers should be as convenient as breakpoints, set perhaps via a right mouse click. The moral for ISRs is the same as ever: minimize their complexity to maximize their debuggability.
If your ISR is only 10 or 20 lines of code, debug by inspection. Don't fire up all kinds of complex and unpredictable tools. Keep the handler simple and short. If it fails to operate correctly, a few minutes reading the code will usually uncover the problem. Amateurs moan and speculate about performance, making random stabs at optimizing code. Professionals take measurements, only then deciding what action, if any, is appropriate. If the ISR is not fast enough, your system will fail. When designing the system, answer two questions up front: how much time is available to service each interrupt, and how long does each handler actually take? Some people are born lucky.
Not me. Call it high-tech paranoia. Plan for problems, and develop solutions for those problems before they occur. Assume each ISR will be too slow, and plan accordingly. A performance analyzer will instantly show the minimum, maximum, and average execution time required by your code, including your ISRs (see figure).

One oft-cited market study somehow missed the demise of the slide rule, its sponsor's main product, within 5 years. Our need to compute, to routinely deal with numbers, led to the invention of dozens of clever tools, from the abacus to logarithm tables to the slide rule.
Now even grade-school children routinely use graphing calculators. The device assumes the entire job of computation and sometimes even data analysis. What a marvel of engineering!
The review excerpted above appeared in Software Development Times.