Counting from zero is a very common practice in many computer languages, but why? Read on as we explore the phenomenon and why it is so widespread.
Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
SuperUser reader DragonLord is curious about why most operating systems and programming languages count from zero. He writes:
Computers traditionally tally numerical values starting from zero. For example, arrays in C-based programming languages start from index zero.
What historical reasons exist for this, and what practical advantages does counting from zero have over counting from one?
Why indeed? As widespread as the practice is, surely there are practical reasons for its implementation.
SuperUser contributor Matteo offers the following insights:
Counting arrays from 0 simplifies the computation of the memory address of each element.
If an array is stored at a given position in memory (called its address), the position of each element can be computed as
element(n) = address + n * size_of_the_element
If you instead consider the first element to be element number one, the computation becomes
element(n) = address + (n-1) * size_of_the_element
Not a huge difference, but it adds an unnecessary subtraction for every access.
Edited to add:
- The usage of the array index as an offset is not a requirement, but just a habit. The offset of the first element could be hidden by the system and taken into account when allocating and referencing elements.
- Dijkstra published a paper, “Why numbering should start at zero” (PDF), in which he explains why starting with 0 is the better choice: numbering from zero allows a cleaner representation of ranges.
If you’re looking to delve deeper into the answer, the Dijkstra paper is an informative read.
Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.