David

Dynamic Memory Allocation in Critical Embedded Systems

Today I’m going to talk about why dynamic memory allocation is rarely used in critical embedded systems, and whether using only static allocation is a necessary restriction. I’m going to assume that maintaining system availability is critical, that there are hard real-time deadlines to be met, and that the system is long-running.

Issues we have to face when using dynamic memory allocation in C/C++ include the following:

  • Sufficiency: how can we be sure that we have provided sufficient memory, so that a critical memory allocation will never be refused?
  • Garbage management: how can we be sure that memory that is no longer required is released to the memory manager at exactly the right time? If we release memory too early, we will have dangling pointers or double-free errors. If we release memory too late, we will have memory leaks. These kinds of problem plague development of large C/C++ programs.
  • Fragmentation: how can we avoid the situation in which we want to allocate N bytes of memory, but all the available memory is in fragments smaller than N, even though the total available is much larger than N? Memory fragmentation is a plague of long-running C/C++ systems that use dynamic memory extensively.
  • Timeliness: when we need to allocate memory, what is the upper bound on the time that the memory manager may take to service the request? Memory managers for C/C++ typically search freelists or more complex structures for a fragment of sufficient size, so calls to malloc or new typically exhibit variable and sometimes long latency.

Many modern languages such as C# and Java provide garbage collection, in which the system automatically identifies memory that is no longer accessible by the program and releases it back to the memory manager. Garbage collectors solve the garbage management issue and generally the fragmentation issue too, since a garbage collection cycle usually includes compacting the heap to move all the free space to one end. Unfortunately, garbage collectors create additional timeliness issues. There has been some work on concurrent and “real-time” garbage collectors, although the ones I am aware of still need to “stop the world” for a short while at the start of a garbage collection cycle.

As we’re talking about C and C++, we have to make do without automatic garbage collection. There is in fact a garbage collector available for C++; however, it is of the conservative, non-copying variety, so it does not fully solve the problems listed above. I also believe it would be very difficult to make a good safety case for using any sort of conservative collector.

Although the four issues of sufficiency, garbage management, fragmentation and timeliness are serious obstacles to using dynamic memory in C/C++ critical embedded systems, there are a few strategies that can mitigate them. Here are the ones I am aware of:

1. Use dynamic memory allocation during the initialization phase only

The amount of memory dynamically allocated may be constant, or may depend on static inputs such as configuration jumpers. Sufficiency is then relatively easy to establish: analyse the system to identify candidate worst-case static inputs and test with those inputs. Even if the system turns out to have insufficient memory in actual service, the error surfaces during the initialization phase, so it can be handled in a similar way to a power-on self-test failure.
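
As a minimal sketch (not taken from any particular system), allocation can be confined to the initialization phase with a simple guard; the function names, the use of std::malloc and the abort-on-failure policy are illustrative assumptions:

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>

static bool g_init_phase = true;   // cleared once initialization completes

void* init_phase_alloc(std::size_t bytes)
{
    if (!g_init_phase) {
        // Any allocation after initialization indicates a design error.
        std::fprintf(stderr, "allocation attempted after initialization\n");
        std::abort();
    }
    void* p = std::malloc(bytes);
    if (p == nullptr) {
        // Treat exhaustion like a power-on self-test failure:
        // report it and refuse to enter service.
        std::fprintf(stderr, "insufficient memory during initialization\n");
        std::abort();
    }
    return p;
}

void end_of_initialization()
{
    g_init_phase = false;          // from here on, no dynamic allocation
}
```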

2. Allocate memory, but never release it

This addresses garbage management if objects never need to be freed once allocated. Alternatively, if there are only a few types of object that ever need to be freed, each one can have its own freelist. Objects can be allocated from the corresponding freelist if it is not empty, and are always returned to that freelist when freed. This is easy to arrange in C++ by providing class-specific operator new and operator delete for the classes concerned.
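
Here is one possible shape for such a per-class freelist, assuming a fixed upper bound on the number of live objects; the Message class, the pool size and the abort-on-exhaustion policy are illustrative assumptions, not a prescribed design:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

class Message {
public:
    static void* operator new(std::size_t size);
    static void  operator delete(void* p) noexcept;
private:
    int  id_;
    char payload_[64];
};

namespace {
    constexpr std::size_t kMaxMessages = 32;   // assumed upper bound

    // Statically reserved storage for every Message, with a freelist
    // threaded through the unused slots.
    union Slot {
        Slot* next;
        alignas(Message) unsigned char storage[sizeof(Message)];
    };
    Slot  pool[kMaxMessages];
    Slot* free_list = nullptr;
    bool  pool_ready = false;

    void init_pool()
    {
        for (std::size_t i = 0; i < kMaxMessages; ++i) {
            pool[i].next = free_list;
            free_list = &pool[i];
        }
        pool_ready = true;
    }
}

void* Message::operator new(std::size_t /*size*/)
{
    if (!pool_ready) init_pool();
    if (free_list == nullptr) std::abort();   // bound exceeded: design error
    Slot* slot = free_list;
    free_list = slot->next;
    return slot;                              // constant-time allocation
}

void Message::operator delete(void* p) noexcept
{
    Slot* slot = static_cast<Slot*>(p);
    slot->next = free_list;                   // return the slot to this class's freelist
    free_list = slot;
}
```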

This strategy also addresses the sufficiency issue, provided that an upper limit can be placed on the number of objects of each type that may be allocated. The total heap requirement is then just the sum of the worst-case requirements for the individual object types.

Fragmentation will not occur, because memory is either never released or is only ever returned to a freelist of identically sized blocks for reuse by the same class.

Timeliness of memory allocations is easily addressed, since we can use a simple allocator that works in constant time. It just needs to increase the heap pointer by the allocation size and return the old heap pointer.
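
The following is a minimal sketch of such a constant-time “bump” allocator; the pool size, the alignment policy and the abort-on-exhaustion behaviour are assumptions made for illustration:

```cpp
#include <cstddef>
#include <cstdlib>

namespace {
    constexpr std::size_t kHeapBytes = 64 * 1024;        // assumed heap budget
    alignas(std::max_align_t) unsigned char heap[kHeapBytes];
    std::size_t heap_top = 0;                             // index of the next free byte
}

void* bump_alloc(std::size_t bytes)
{
    // Round the request up so every allocation stays suitably aligned.
    const std::size_t align = alignof(std::max_align_t);
    bytes = (bytes + align - 1) & ~(align - 1);

    if (bytes > kHeapBytes - heap_top) {
        std::abort();             // sufficiency bound violated: treat as a design error
    }
    void* p = &heap[heap_top];
    heap_top += bytes;            // constant time: no freelist to search
    return p;
}
```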

3. Use reference-counting garbage collection

Reference counting works by having each object keep track of how many pointers there are that point to it. The reference count is typically kept up to date by using C++ “smart pointer” classes instead of simple pointers or references.

This isn’t as easy as it sounds. The overheads of using smart pointers everywhere are high, so in practice it is desirable to use plain pointers in many situations, such as parameter passing. Then you need to make sure that an object’s reference count cannot reach zero while there is a plain pointer referring to it. This is not easy to get right by hand. However, if the code is produced by an automatic code generator (such as Perfect Developer), then the code generator can be written to ensure that plain pointers are only used when it is safe to do so.

Beware of using reference counting on objects shared by multiple processor cores – reference counts need to be updated atomically, which is typically slow in a multiprocessor system. Also, you need to avoid creating circular chains of pointers, since such chains are not reclaimed by reference-counting garbage collection. This is because the reference counts can never drop to zero unless the chain is broken.
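
By way of a sketch only (this is not Perfect Developer’s implementation, and std::shared_ptr offers similar behaviour off the shelf at the cost of a separately allocated control block), an intrusive reference-counted base class and smart pointer might look roughly like this, with an atomic count to cover objects shared between cores:

```cpp
#include <atomic>

// Base class for heap-allocated objects managed by reference counting.
class RefCounted {
public:
    void add_ref() { count_.fetch_add(1, std::memory_order_relaxed); }
    void release()
    {
        // The last release destroys the object. Objects joined in a
        // circular chain never reach zero, as noted above.
        if (count_.fetch_sub(1, std::memory_order_acq_rel) == 1) {
            delete this;
        }
    }
protected:
    virtual ~RefCounted() = default;
private:
    std::atomic<unsigned> count_{0};
};

// Smart pointer that maintains the count; T must derive from RefCounted.
template <typename T>
class Ref {
public:
    explicit Ref(T* p = nullptr) : p_(p) { if (p_) p_->add_ref(); }
    Ref(const Ref& other) : p_(other.p_)  { if (p_) p_->add_ref(); }
    Ref& operator=(const Ref& other)
    {
        if (other.p_) other.p_->add_ref();   // add first: safe under self-assignment
        if (p_) p_->release();
        p_ = other.p_;
        return *this;
    }
    ~Ref() { if (p_) p_->release(); }
    T* operator->() const { return p_; }
    T& operator*()  const { return *p_; }
private:
    T* p_;
};
```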
