Embedded Real-Time Systems vs. General-Purpose Computers
Embedded real-time systems have two main characteristics:

1. They have a computer buried inside, but the users don't perceive them as computers.
2. They often must respond to external events in a timely fashion, which means that for all practical purposes, a late computation is just as bad as an outright wrong computation.
Vague as it is, this definition gains the most strength from a contrast between real-time embedded systems and general-purpose computers (such as desktop PCs), in which the two main characteristics are either nonexistent or far less important. So, you can read embedded to mean "not for general-purpose computing" and real-time to mean "dedicated to an application with timeliness requirements." Either way, the definition emphasizes that embedded systems pose different challenges and require different programming strategies than general-purpose computers. I strongly disagree with the opinion that embedded real-time developers face all the challenges of "regular" software development plus the complexities inherent in embedded real-time systems. Although each domain has its fair share of difficulties, each also offers unique opportunities for simplification, so embedded-systems programmers specifically do not have to cope with many of the problems encountered in programming general-purpose computers.
Consider, for example, the challenges of programming a desktop PC. As far as hardware is concerned, no desktop application can rely on a specific amount of memory being available to it, or on how many and what kinds of disk drives, network cards, graphics adapters, and other peripherals are present and available at the moment. The software environment is even less predictable. Users frequently install and remove applications and application components from all possible sources (remember the Windows DLL Hell?). Users constantly launch, close, or crash their applications, drastically changing the CPU load and the availability of memory and other resources. The desktop operating system has the tough job of allocating CPU cycles, memory, and other resources among constantly changing tasks in such a way that each receives a fair share and no single task can hog the CPU. To succeed in this harsh environment, the desktop OS has no option but to drastically limit what applications are allowed to do. All applications must strictly comply with a specific API (such as Win32 or a Unix API). Interrupt handling is black magic reserved for device drivers that common mortals (application programmers) had better not touch. Fiddling directly with external hardware is prohibited.
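To make the contrast concrete, here is a minimal sketch of what "flashing an LED" looks like when an application is confined to a desktop-style API. It assumes a Linux-like system whose kernel exposes the LED through the LED-class driver; the device path and LED name are purely illustrative, and on a real machine they depend on the board and its driver configuration.

/* A desktop-style application cannot touch the LED hardware directly;
 * it must ask the OS through whatever interface the driver exposes.
 * The path and LED name below are illustrative (Linux LED-class driver). */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/sys/class/leds/status/brightness", O_WRONLY);
    if (fd < 0) {
        return 1;              /* no driver, no LED; the application has no recourse */
    }
    (void)write(fd, "1", 1);   /* request that the kernel driver turn the LED on */
    (void)close(fd);
    return 0;
}

If the driver is missing or the OS denies access, the application can do nothing about it; the hardware is simply out of reach.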
This scheme is diametrically opposed to the needs of embedded real-time systems, in which a specific task must gain control right now and run until it produces the appropriate output. Fairness isn't part of real-time programming; meeting the deadlines is. To achieve this, however, embedded software must have full control over the CPU, the memory, and all the external hardware. Restricted to a desktop-style API, an embedded developer not only loses the control he so badly needs, but must bend over backwards just to flash an LED, let alone to service an interrupt. The increased security offered by a desktop API is bogus in the embedded domain, too. In an embedded system, the specific application code is at least as critical as the generic OS (many embedded systems don't use an OS at all), so a failure in the application renders the system useless regardless of the security mechanisms built into the OS.
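By contrast, the bare-metal version of the same job is a handful of lines, because the embedded application owns the hardware outright. The sketch below assumes a hypothetical memory-mapped GPIO output register and a Cortex-M-style SysTick interrupt; the register address, pin number, and handler name are placeholders that, on a real project, would come from the MCU datasheet and the startup code.

#include <stdint.h>

/* Hypothetical memory-mapped GPIO output register and LED pin;
 * the real values come from the microcontroller's datasheet. */
#define GPIO_OUT  (*(volatile uint32_t *)0x40020014u)
#define LED_PIN   (1u << 5)

static volatile uint32_t tick_count;     /* shared with the ISR, hence volatile */

/* The application services the timer interrupt itself: no OS, no driver,
 * no API layer between the code and the hardware. */
void SysTick_Handler(void)
{
    ++tick_count;
    if ((tick_count & 0xFFu) == 0u) {    /* every 256 ticks... */
        GPIO_OUT ^= LED_PIN;             /* ...toggle the LED by writing the register */
    }
}

The flip side, of course, is that the application bears full responsibility for getting this right: there is no OS standing between a stray write and the hardware.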