Enums, or enumerated types, are a list of constant values with developer-friendly names. They're commonly used in programming to establish a set of predefined values that a variable can take.

Defining a List of Values

Enums are a list of unique values; you can't have two entries in the enum with the same name. This makes them quite useful for defining all the possible values a function can handle.

For example, you could have an enum called "Fruits," with a list of possible fruits that the application understands. This is the key difference from just using a string to represent this variable: a string can have infinite possible values, while this enum can only have three.
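In TypeScript, for instance, such an enum might look like this (the name and members here are illustrative, not from any particular codebase):

```typescript
// A hypothetical enum of the fruits this application understands.
enum Fruit {
  Apple,
  Banana,
  Orange,
}

// A variable of type Fruit can only hold one of these three values.
let snack: Fruit = Fruit.Apple;
```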

You can then use this enum in your code. For example, you can check whether the user's input equals an Apple, or pass the enum as the argument to a switch/case statement that does something specific for each possible value.
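A TypeScript sketch of the switch/case pattern, again with an illustrative Fruit enum:

```typescript
enum Fruit {
  Apple,
  Banana,
  Orange,
}

// Each case handles exactly one of the possible values,
// so the compiler can see every state is accounted for.
function describe(fruit: Fruit): string {
  switch (fruit) {
    case Fruit.Apple:
      return "An apple a day...";
    case Fruit.Banana:
      return "High in potassium.";
    case Fruit.Orange:
      return "Full of vitamin C.";
  }
}

describe(Fruit.Banana); // "High in potassium."
```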

Under the hood, enums have values themselves. The details depend on the language, but in C#, for example, enums have integer values by default. The default values count up from zero (0, 1, 2, 3, and so on), but each enum entry can be manually set to a specific value instead. For example, HTTP status codes:
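The article's example language is C#; TypeScript's numeric enums behave the same way, so a sketch of the idea might look like this (a small, illustrative subset of the real status codes):

```typescript
// Each member is pinned to its real HTTP status code
// instead of the default 0, 1, 2...
enum HttpStatus {
  OK = 200,
  MovedPermanently = 301,
  BadRequest = 400,
  NotFound = 404,
  InternalServerError = 500,
}
```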

These can then be converted back and forth between the underlying value and the enum type using basic casting:
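In TypeScript, that conversion is a plain assignment one way and an `as` cast the other; numeric enums also get a reverse lookup from value to name (the HttpStatus enum here is the same illustrative sketch as above):

```typescript
enum HttpStatus {
  OK = 200,
  NotFound = 404,
}

// Enum member to its underlying number:
const code: number = HttpStatus.NotFound; // 404

// Number back to the enum type:
const status: HttpStatus = 404 as HttpStatus;

// TypeScript numeric enums also support a reverse
// lookup from the number to the member's name:
const name: string = HttpStatus[404]; // "NotFound"
```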

Beyond just being useful for defining sets of items, enums are great for the developer experience. Even if you’re converting to a string or int before sending data off to an API, using enums in your codebase allows for greater flexibility and leads to cleaner code.

Also, having autocomplete dropdowns for a list of possible values not only helps you, but helps anyone else working on your codebase in the future. You don't want to be maintaining code that takes strings as arguments and does random stuff based on the input. Using an enum instead strictly defines the application behavior.

Downsides Of Using Enums

The main downside of enums is when they leave your codebase and lose their special meaning. For example, if you have an API that you're storing and sending data to, you must serialize the enum first, and by default the serialized value is probably the underlying number: 0, 1, 2, and so on. Some languages support enums with underlying string values, or custom enum serialization rules, which can help alleviate this problem.
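TypeScript is one such language: it supports string-valued enums, which keep their meaning when serialized (the OrderStatus name and values here are illustrative):

```typescript
// With string values, JSON output stays human-readable
// instead of collapsing to 0, 1, 2...
enum OrderStatus {
  Pending = "pending",
  Shipped = "shipped",
  Delivered = "delivered",
}

JSON.stringify({ status: OrderStatus.Shipped }); // '{"status":"shipped"}'
```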

This can also be a problem if your enums are changing. Once you start using an enum, you can't reorder it; you can only add new items at the end. Otherwise, data stored using the old version of the enum would be reinterpreted as the wrong values.

Enum Types As Flags

Another common use for enums is defining bitwise flags. These are a pretty advanced concept, but basically, each enum value represents a single boolean value. Together, the entire enum can be stored in one integer, and used to perform quick lookups for boolean data.

This works because each enum value is set to a different bit in the underlying number. In binary, that means the enum values will be 1, 2, 4, 8, 16, and so on, with 0 typically reserved for "none." Then, you can combine them with bitwise OR to represent a list of boolean values.

Why do this rather than multiple booleans? For one, it saves space, which can be a benefit when you're storing a lot of them. But more importantly, it's really fast to access each value, especially when checking multiple values at once. For example, to check whether it's the weekend, you can test the stored value against Saturday | Sunday, all from the same byte that was loaded into memory. The CPU only needs to fetch one item to get the state of every flag.
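The weekend check above can be sketched in TypeScript like this (the Days enum and isWeekend helper are illustrative):

```typescript
// Each flag occupies its own bit in the underlying number.
enum Days {
  None      = 0,
  Monday    = 1 << 0, // 1
  Tuesday   = 1 << 1, // 2
  Wednesday = 1 << 2, // 4
  Thursday  = 1 << 3, // 8
  Friday    = 1 << 4, // 16
  Saturday  = 1 << 5, // 32
  Sunday    = 1 << 6, // 64
}

// Combine flags with bitwise OR: 32 | 64 = 96.
const weekend = Days.Saturday | Days.Sunday;

// Check a flag with bitwise AND against the combined mask.
function isWeekend(day: Days): boolean {
  return (day & weekend) !== 0;
}

isWeekend(Days.Sunday);  // true
isWeekend(Days.Tuesday); // false
```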

Anthony Heddings
Anthony Heddings is the resident cloud engineer for LifeSavvy Media, a technical writer, programmer, and an expert at Amazon's AWS platform. He's written hundreds of articles for How-To Geek and CloudSavvy IT that have been read millions of times.