A look at ECS component approaches

For game implementation, an ECS architecture has a lot of things going for it. This approach stores all the components of a particular type (say, all the Transform components) together in memory, grouped by component type rather than by parent entity. In addition to having the flexibility of a general component architecture, the ECS’s use of contiguous memory to store components and Systems to operate on subsets of those components enables high-performance processing.

However, there’s a key challenge that many beginner ECS tutorials don’t address. In those tutorials, each example of a System simply iterates along a single array or vector of homogeneous components. What interesting behavior can be achieved by examining only one kind of Component at a time? Take even the basic task of movement. In a System processing movement, the calculations would probably go something like this.

  1. Examine the Movement component to get linear and rotational velocities.
  2. Calculate the position and rotation change based on time step size.
  3. Modify the Transform component by the calculated deltas.

This example doesn’t even consider relative position to other entities, which would complicate the matter even more. The issue is that single components alone aren’t enough – we need to be able to easily retrieve and operate on multiple component types that are associated with the same entity. There are several approaches to make this easier, and I consider three of them here.
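
To make the problem concrete, here is a rough C++ sketch of that movement update (the component types and field names are made up for illustration). Notice that the loop has to pretend movements[i] and transforms[i] belong to the same entity, which is exactly the association question this post is about.

    #include <cstddef>
    #include <vector>

    struct Transform { float x = 0.f, y = 0.f, rotation = 0.f; };
    struct Movement  { float vx = 0.f, vy = 0.f, angular = 0.f; };

    void movement_system(float dt,
                         const std::vector<Movement>& movements,
                         std::vector<Transform>& transforms) {
        for (std::size_t i = 0; i < movements.size(); ++i) {
            const Movement& m = movements[i];
            // 1. Read linear and rotational velocities.
            // 2. Scale them by the time step.
            // 3. Apply the deltas to the matching Transform (but which one is it?).
            Transform& t = transforms[i];   // assumes the arrays line up by entity
            t.x        += m.vx * dt;
            t.y        += m.vy * dt;
            t.rotation += m.angular * dt;
        }
    }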

Assumptions

For context, my interest in these approaches is with respect to usage in a platformer game similar in style to Mario or Megaman. I expect games of this type to have the following characteristics:

  • Players move through levels mostly linearly from beginning to end.
  • Most of the entities exist at the start of gameplay.
  • Entities are deleted when destroyed by the player.
  • The player typically destroys enemies from beginning to end, so enemies are destroyed in a somewhat predictable sequence.
  • New enemies are spawned near the player’s location.

Based on the above, a reasonable ordering for the components in a contiguous block of memory is to have the entities encountered first be near the end of the memory block. To see why, let’s assume the components are stored in vectors. When an enemy is destroyed, its associated components get removed from each vector of components: its Movement component gets removed from the Movement component vector, its Transform component gets removed from the Transform component vector, etc. Ideally, the removed components would be the very last elements of each vector so that, when removed, there is no shifting of elements to fill the space in memory left by the removed component. By ordering the entities (and by extension, their constituent components) such that the ones encountered by the player first are near the end of the vector, we can greatly reduce the amount of element shifting when entities are removed. Similarly, when entities are added to the game, they are added to the end of the vector to limit element shifting.
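
As a quick illustration of the cost being avoided, here is a small C++ snippet (with a made-up Movement type): erasing from the middle of a vector shifts every later element down by one slot, while removing the last element shifts nothing.

    #include <cstddef>
    #include <vector>

    struct Movement { float vx, vy, angular; };

    void removal_cost(std::vector<Movement>& movements, std::size_t k) {
        // Remove the element at position k: every element after k moves down one slot.
        movements.erase(movements.begin() + k);   // O(n - k) element moves

        // Remove the last element: nothing has to move.
        movements.pop_back();                     // O(1), no shifting
    }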

Approach 1: Index is entity ID

This is one of the approaches described at t-machine. In each component array, the index of the component corresponds to its associated entity. In other words, Movement[8] and Transform[8] are associated with the same entity. If an entity has no component of a particular type, then the slot at that index in that component array is simply left null.
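
Here is a minimal C++ sketch of this layout, with std::optional standing in for the null slots described above (the component types are again made up for illustration):

    #include <cstddef>
    #include <optional>
    #include <vector>

    struct Transform { float x, y, rotation; };
    struct Movement  { float vx, vy, angular; };

    // The index *is* the entity ID, so every array must be kept the same length,
    // and entities without a given component leave a null slot behind.
    std::vector<std::optional<Transform>> transforms;   // transforms[e] belongs to entity e
    std::vector<std::optional<Movement>>  movements;    // movements[e] belongs to entity e

    void movement_system(float dt) {
        for (std::size_t e = 0; e < movements.size(); ++e) {
            if (!movements[e] || !transforms[e]) continue;   // entity e lacks one of the two
            transforms[e]->x        += movements[e]->vx * dt;
            transforms[e]->y        += movements[e]->vy * dt;
            transforms[e]->rotation += movements[e]->angular * dt;
        }
    }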

I’m only going to touch on this briefly because I don’t like some of the difficulties this method raises.

  • While you save some memory by allowing the position of the component to serve as its identifier, you can lose a lot by having lots of null bytes in the component array to “fill in” the spaces for entities that don’t use components of that particular type.
  • You can’t re-use array positions without some tricks. For instance, let’s say that one of the component types stores a reference to a different entity (say, for a targeted missile). The Targeting component could then store the value 8 to mean that the entity represented by position 8 in each component array is the target. But, before the missile hits its target, entity 8 is destroyed or removed by other means. Can entity 8 be replaced by a new entity 8? Will the missile seek out the new entity 8 automatically, and is this the desired behavior? It requires a lot of bookkeeping to get right, which begins to look a lot like the Handle method (also explored below).
  • Some good things are that entity component lookup is O(1) and the contiguous components keep the cache full. For a game with a big scope, this might matter, but the scope in which I’m interested is much smaller. Another big win is that it’s easy to iterate along multiple component arrays simultaneously to do per-entity operations using multiple components.

Approach 2: Handles

Handles are a way to keep track of a resource even if it gets moved around, as explained here. Essentially, a handle works similarly to a component index, but instead of encoding the position of the component directly, it encodes the position of an entry in a lookup table that describes where the component is. This is really useful for a number of reasons.

  • There are no more voids in the component array wasting memory, but we are using additional memory for all the bookkeeping data.
  • It allows us to add, remove, shift, and swap components back and forth in the contiguous memory block without losing track. For example, to remove a component, we can swap it with the end component, pop the new end off, and update the lookup table accordingly.
  • Reusing space for a component is the same as reusing an entry in the lookup table. One advantage of handles is that they not only keep track of the position in the lookup table, but they also keep track of which version of the lookup position was used. This allows outdated handles to automatically become invalidated once the corresponding entry is reused for a different resource (a rough sketch of this scheme follows the list).
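
Here is a rough C++ sketch of one way such a handle scheme could look. The names and layout are my own guesses rather than a reference implementation: a handle stores a slot in the lookup table plus a generation counter, and removal uses the swap-and-pop trick described above.

    #include <cstdint>
    #include <vector>

    struct handle {
        std::uint32_t slot;        // index into the lookup table (not into the component data)
        std::uint32_t generation;  // which "version" of that slot this handle refers to
    };

    template <typename Component>
    class object_manager {
        struct table_entry { std::uint32_t index; std::uint32_t generation; };
        std::vector<table_entry>   table;       // slot -> where the component currently lives
        std::vector<Component>     components;  // densely packed component data
        std::vector<std::uint32_t> slot_of;     // component index -> its table slot (for fix-ups)

    public:
        handle add(const Component& c) {
            // For brevity this always appends a new table entry; a fuller version
            // would reuse freed slots (e.g. via a free list).
            std::uint32_t slot = static_cast<std::uint32_t>(table.size());
            table.push_back({static_cast<std::uint32_t>(components.size()), 0});
            components.push_back(c);
            slot_of.push_back(slot);
            return {slot, 0};
        }

        Component* get(handle h) {
            if (h.slot >= table.size()) return nullptr;
            const table_entry& e = table[h.slot];
            if (e.generation != h.generation) return nullptr;  // stale handle: removed or reused
            return &components[e.index];
        }

        // Expose the dense component array for systems that sweep over every component.
        std::vector<Component>& data() { return components; }

        // Swap-and-pop removal: move the last component into the vacated spot,
        // patch its table entry, then bump the generation so outstanding handles go stale.
        void remove(handle h) {
            if (get(h) == nullptr) return;
            std::uint32_t idx  = table[h.slot].index;
            std::uint32_t last = static_cast<std::uint32_t>(components.size()) - 1;
            components[idx]            = components[last];
            table[slot_of[last]].index = idx;
            slot_of[idx]               = slot_of[last];
            components.pop_back();
            slot_of.pop_back();
            ++table[h.slot].generation;  // invalidates old handles to this slot
        }
    };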

So at first glance, it looks like our problems with Approach 1 are solved! We actually lost something important, though. With Approach 1, a common index in each respective component array both identifies the location of the component’s data and associates the different components together to form an entity implicitly. Handles, however, have no relationship to each other, so while they can be used to find an individual component quickly, they don’t associate different components together.

We have to do the component association on our own. For simple cases, we might get away with one component directly holding the handle for another, but we’ll quickly get handle spaghetti with that approach. Instead, we could have a component reference a container object that holds handles to all components belonging to an entity. If one component needs the data from another, it first references the container, then it finds the handle of the needed component, and then it can get the needed data.

This approach isn’t terrible, but it doesn’t seem very clean to me. Take the Transform and Movement system mentioned earlier. For each Movement component, you must get the association object, then check that a handle to a Transform component exists, then get and modify that data accordingly. It’s very possible that all these lookups and all the bouncing around in memory will hurt cache performance. It also subtly introduces a pseudo-singleton pattern by requiring a single, almost-global set of component association objects.
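
Building on the handle and object_manager sketch from above (and made-up component types), that loop might look roughly like this. The point is how many hops happen before any real work gets done:

    #include <cstddef>
    #include <vector>

    // Made-up component types; Movement carries an index (or it could be a handle)
    // pointing to its entity's association container.
    struct Transform { float x, y, rotation; };
    struct Movement  { float vx, vy, angular; std::size_t owner; };

    // The per-entity association container: one handle per component the entity owns.
    struct game_object {
        handle transform;
        handle movement;
        bool   has_transform = false;
    };

    void movement_system(float dt,
                         object_manager<Movement>&  movements,
                         object_manager<Transform>& transforms,
                         std::vector<game_object>&  objects) {
        for (const Movement& m : movements.data()) {
            game_object& obj = objects[m.owner];            // hop 1: component -> container
            if (!obj.has_transform) continue;               // entity has no Transform at all
            Transform* t = transforms.get(obj.transform);   // hop 2: handle -> lookup table -> data
            if (t == nullptr) continue;                     // stale handle
            t->x        += m.vx * dt;
            t->y        += m.vy * dt;
            t->rotation += m.angular * dt;
        }
    }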

Approach 3: Flat map

This approach returns to the simplicity of Approach 1. Instead of associating components implicitly through their position in the array, the association is made explicitly using a matching array of IDs. Thus, each collection of components is a vector of IDs and a vector of components of matching length. I like this approach a lot.
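
Here is a minimal C++ sketch of what I mean; the names are mine, not from any particular library:

    #include <cstdint>
    #include <vector>

    using entity_id = std::uint32_t;

    template <typename Component>
    struct flat_map {
        std::vector<entity_id> ids;    // sorted ascending; ids[i] says which entity owns data[i]
        std::vector<Component> data;   // same length as ids, densely packed

        // New entities get the highest ID so far, so in the common case an append
        // keeps `ids` sorted without any shifting.
        void append(entity_id id, const Component& c) {
            ids.push_back(id);
            data.push_back(c);
        }
    };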

  • Deletion of components for existing entities or adding new components to existing entities could happen anywhere in the vector and cause element shifts, but the linear progression of the levels means it will mostly be happening near the end and be rare compared to other calculations.
  • When looking for an individual component, we search in reverse from the end linearly for the matching ID. We can abort the search early if the iteration encounters IDs lower than the desired one. As a future enhancement, we could even save the position of the previous lookup and then start the next linear search from that position, with the expectation of increasing average lookup speed.
  • Iterating across two component lists is easy because it’s essentially performing an operation on the intersection of two sorted arrays. We can iterate across each linearly in an alternating pattern to find the common elements and then apply the desired operation (both access patterns are sketched after this list).
  • There’s no external object associating components together. It’s just an ID.
  • One downside is that the onus of generating unique IDs for each entity is now on the programmer. Typically, we want each new entity to have an ID higher than any other entity so that the new components get appended near the end of the vector. This is easily handled with a counter as long as possible rollover is taken into account.
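
Here are rough sketches of those two access patterns, assuming the flat_map layout from the earlier sketch (sorted, parallel ids/data vectors):

    #include <cstddef>

    // Reverse lookup: the newest (highest) IDs live at the back, so scan from the
    // end and bail out as soon as the IDs drop below the one we're looking for.
    template <typename Component>
    Component* find(flat_map<Component>& m, entity_id id) {
        for (std::size_t i = m.ids.size(); i-- > 0; ) {
            if (m.ids[i] == id) return &m.data[i];
            if (m.ids[i] < id)  return nullptr;   // everything further left is smaller still
        }
        return nullptr;
    }

    // Joint iteration: walk both sorted ID vectors in step and apply `op` wherever
    // the same entity appears in both collections.
    template <typename A, typename B, typename Op>
    void for_each_common(flat_map<A>& a, flat_map<B>& b, Op op) {
        std::size_t i = 0, j = 0;
        while (i < a.ids.size() && j < b.ids.size()) {
            if      (a.ids[i] < b.ids[j]) ++i;
            else if (b.ids[j] < a.ids[i]) ++j;
            else { op(a.data[i], b.data[j]); ++i; ++j; }
        }
    }

The Movement/Transform update from earlier then becomes a single for_each_common call with a lambda that does the integration.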

It’s lightweight and flexible. The real downsides are the linear time complexity for searching as well as the shifting of elements when deleting, but I think the expected access pattern will cause those negatives to be negligible.

Comparison of handles and flat map

I implemented Approach 2 and Approach 3 and timed how long it took each to do the same operation. These aren’t scientifically controlled tests by any stretch. I simply implemented the two approaches in a straightforward, obvious way, and did the same for the tests. I call each collection of components an object_manager. Each implementation was used independently of the other in different git branches, so I don’t think there was any coupling between the tests.

Test 1: Add/remove components

In this test, the object manager was populated with 4000 simple components (which were just ints). Then I timed how long it took to:

  1. Add new component 1 at the end.
  2. Add new component 2 at the end.
  3. Remove the previous end component (now third from the end).
  4. Remove new component 1.
  5. Remove new component 2.
  6. Repeat until the component collection is empty.

This insertion and access pattern is supposed to simulate an action platformer. New components 1 and 2 might belong to two new projectiles launched by the player. The first hits an enemy, so the enemy and projectile are removed. The second projectile misses and is removed once out of bounds.

When using the handle method, my machine did this test in about 0.0002 seconds for 4k elements. Using the flat map method, the time was slightly greater, at 0.0002 to 0.0003 seconds.

Test 2: Dual iteration

In this test, two object managers were used. One had 4000 components (doubles) for IDs 0 to 3999, and another had 2000 components (ints) for every odd ID. For the handle case, the components also had the requisite handle pointing to a game object, which was just a struct with handles to both component types. The test was to update the double of C1 with 1.5 times the value of C2’s int for all entities that had both components (i.e., every odd numbered entity) and skip those that did not meet this criterion.

Using the handles method, my machine completed this in about 4e-5 seconds. The flat map method, however, did it in only 1e-5 seconds! The discrepancy is likely caused by all the checking and auxiliary structures needed in the handles method. Each iteration requires following the handle to the game object container, checking whether the matching component exists, then following the handle to the matching component. The flat map just iterates along two sorted ID vectors looking for common elements.
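
For reference, the flat map side of this test boils down to a single for_each_common call over the two collections (reusing the earlier sketches; the names are assumptions):

    flat_map<double> c1;   // 4000 components, IDs 0 to 3999
    flat_map<int>    c2;   // 2000 components, odd IDs only

    void run_test2() {
        for_each_common(c1, c2, [](double& a, int& b) {
            a = 1.5 * b;   // update C1's double from 1.5x C2's int for shared entities
        });
    }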

So which one is best?

It’s hard to say which one of the two methods I tested is best. The flat map method is much faster when iterating along component arrays, which is where most of the work is in the game loop. The handle method is a bit faster when creating and deleting components, but that’s a relatively rare occurrence in the game loop. In games, though, the speed of processing doesn’t really matter as long as it’s being done fast enough to render the game at a smooth 60 fps in both normal and exceptional circumstances. In terms of ease of use and decoupling, the flat map is my clear favorite.