Why we don’t have 128-bit CPUs

We have moved from 8-bit to 16-bit to 32-bit, and settled at 64-bit. Here’s why 128-bit CPUs don’t exist.

Among computer vocabulary words, bit is certainly one of the most well-known. Whole generations of video game consoles and their pixelated art styles are defined by bits (such as 8-bit and 16-bit) and lots of applications offer both 32-bit and 64-bit versions.

If you look at that history, you can see that our ability to handle bits has increased over the years. However, while 64-bit chips were first introduced in the 90s and became mainstream in the 2000s, we still don’t have 128-bit CPUs. Although 128 might seem like a natural step after 64, it’s anything but.

What even is a bit?

Before talking about why 128-bit CPUs don’t exist, we need to talk about what a bit even is. Formed from the words binary and digit, it’s the smallest unit in computing and the starting point of all programming, and the number of bits attached to a CPU describes how big the values it can work with natively are. A bit can only be defined as 1 or 0 (hence binary), though these numbers can be interpreted as true or false, on or off, and even as a plus sign or a minus sign.

On its own, a single bit isn’t very useful, but using more bits is a different story because a combination of ones and zeroes can be defined as something, like a number, letter, or another character. For 128-bit computing, we’re just interested in integers (numbers that don’t have a decimal point), and the more bits there are, the more numbers a processor can define. It follows a pretty simple formula: with x bits you can represent 2^x different values. In 4-bit computing, the biggest integer you can count to is 15, one lower than the 16 the formula gives you, because programmers start counting from 0 rather than 1.

If 4-bit can only store 16 different integers, then it might not seem like going to 8- or 32- or even 128-bit would be all that big of a deal. But we’re dealing with exponential numbers here, which means things start off slow but then take off very quickly. To demonstrate this, here’s a little table that shows the biggest integers you can represent with anywhere from 1 to 128 bits.

Bits       Maximum integer
1-bit      1
2-bit      3
4-bit      15
8-bit      255
16-bit     65,535
32-bit     4,294,967,295
64-bit     18,446,744,073,709,551,615
128-bit    340,282,366,920,938,463,463,374,607,431,768,211,455

So now you can probably see why doubling the number of bits results in being able to handle numbers that don’t just double in size but are orders of magnitude larger. Yet, even though 128-bit computing would enable us to work on much larger numbers than 64-bit computing can, we still don’t use it.
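
If you want to check those figures for yourself, here is a minimal sketch in Rust (chosen because it happens to have a built-in 128-bit integer type) that applies the 2^x − 1 formula for each bit width in the table above; the helper name max_unsigned is just for illustration.

    // Largest unsigned integer that fits in the given number of bits: 2^bits - 1.
    fn max_unsigned(bits: u32) -> u128 {
        if bits >= 128 {
            u128::MAX // 1 << 128 would overflow a 128-bit integer, so special-case it
        } else {
            (1u128 << bits) - 1
        }
    }

    fn main() {
        for bits in [1u32, 2, 4, 8, 16, 32, 64, 128] {
            println!("{:>3}-bit: {}", bits, max_unsigned(bits));
        }
    }
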

How we went from 1-bit to 64-bit

It’s pretty clear why CPUs went from 1-bit to having more bits: We wanted our computers to do more stuff. There’s not a ton you can do with one or two or four bits, but at the 8-bit mark, arcade machines, game consoles, and home computers became feasible. Over time, processors got cheaper to make and physically smaller, so adding the hardware necessary to increase the number of bits the CPU could handle was a pretty natural move.

The exponential nature of bits becomes apparent very quickly when comparing 16-bit consoles like the SNES and the Sega Genesis to their 8-bit predecessors, principally the NES. Super Mario Bros 3 was one of the NES’s most complex games in terms of mechanics and graphics, and it was completely dwarfed by Super Mario World, which was released only two years later (although improvements in GPU technology were also a key factor here).

It’s not just about video games though; pretty much everything was getting better with more bits. Moving from 256 numbers in 8-bit to 65,536 numbers in 16-bit meant tracking time more precisely, showing more colors on displays, and addressing larger files. Whether you were using IBM’s Personal Computer, powered by Intel’s 8088 CPU (a 16-bit chip on an 8-bit external bus), or building a server for a company that was ready to get online, more bits were just better.

The industry moved pretty quickly from 16-bit to 32-bit and, finally, to 64-bit computing, which became mainstream in the late 90s and early 2000s. Some of the most important early 64-bit CPUs were found in the Nintendo 64 and in computers powered by AMD’s Athlon 64 and Opteron CPUs. On the software side, 64-bit started to receive mainstream support from operating systems like Linux and Windows in the early 2000s. Not all attempts at 64-bit computing were successful, however; Intel’s Itanium server CPUs were a high-profile failure and are some of the company’s worst processors ever.

Today, 64-bit CPUs are everywhere, from smartphones to PCs to servers. Chips with fewer bits are still made and can be desirable for specific applications that don’t handle larger numbers, but they’re pretty niche. Yet, we still don’t have 128-bit CPUs, even though it’s been nearly three decades since the first 64-bit chips hit the market.

128-bit computing is looking for a problem to solve

You might think 128-bit isn’t viable because it’s difficult or even impossible to do, but that’s actually not the case. Lots of parts in processors, CPUs and otherwise, are 128 bits wide or wider, like memory buses on GPUs and the SIMD units on CPUs that enable AVX instructions. What we’re specifically talking about is a CPU that can handle 128-bit integers natively, and even though 128-bit CPU prototypes have been created in research labs, no company has actually launched a 128-bit CPU. The answer might be anticlimactic: a 128-bit CPU just isn’t very useful.
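
To make that concrete, here is a rough sketch (in Rust, with a made-up U128 struct purely for illustration) of what handling a 128-bit integer on a 64-bit CPU looks like: the value is split into two 64-bit halves and added with a carry, which is roughly what a compiler generates when you use a 128-bit integer type.

    // Illustration only: a 128-bit value stored as two 64-bit halves, the way
    // a 64-bit machine would hold it in a pair of registers.
    #[derive(Debug)]
    struct U128 {
        lo: u64,
        hi: u64,
    }

    // Add two 128-bit numbers using only 64-bit operations: add the low halves,
    // note whether they overflowed, and fold that carry into the high halves.
    fn add_u128(a: U128, b: U128) -> U128 {
        let (lo, carry) = a.lo.overflowing_add(b.lo);
        let hi = a.hi.wrapping_add(b.hi).wrapping_add(carry as u64);
        U128 { lo, hi }
    }

    fn main() {
        let a = U128 { lo: u64::MAX, hi: 0 }; // 2^64 - 1
        let b = U128 { lo: 1, hi: 0 };
        println!("{:?}", add_u128(a, b)); // prints U128 { lo: 0, hi: 1 }, i.e. 2^64
    }
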

A 64-bit CPU can handle over 18 quintillion unique numbers, from 0 to 18,446,744,073,709,551,615. By contrast, a 128-bit CPU would be able to handle over 340 undecillion numbers, and I guarantee you that you have never even seen “undecillion” in your entire life. Finding a use for calculating numbers with that many digits is pretty challenging, even if you use one of the bits as a sign bit, which would give a range from roughly negative 170 undecillion to positive 170 undecillion.
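
For the curious, those limits are easy to print with a short Rust snippet, since Rust’s standard library already exposes 64-bit and 128-bit integer types along with their built-in MIN and MAX constants:

    fn main() {
        println!("u64 max:  {}", u64::MAX);  // 18,446,744,073,709,551,615 - "over 18 quintillion"
        println!("u128 max: {}", u128::MAX); // roughly 340 undecillion
        println!("i128 min: {}", i128::MIN); // roughly -170 undecillion (one bit spent on the sign)
        println!("i128 max: {}", i128::MAX); // roughly +170 undecillion
    }
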

The only significant use cases for 128-bit integers are IPv6 addresses, universally unique identifiers (or UUIDs) that are used to give users and objects unique IDs (Minecraft is a high-profile use case for UUIDs), and file systems like ZFS. The thing is, 128-bit CPUs aren’t necessary to handle these tasks, which have been getting along just fine on 64-bit hardware. Ultimately, the key reason why we don’t have 128-bit CPUs is that there’s no demand for a 128-bit hardware-software ecosystem. The industry could certainly make it if it wanted to, but it simply doesn’t.
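
As a small illustration of that point, here is a rough Rust sketch that treats an IPv6 address (using the documentation example 2001:db8::1) as a plain 128-bit integer; the u128 type is implemented by the compiler on top of ordinary 64-bit operations, so no 128-bit CPU is needed.

    use std::net::Ipv6Addr;

    fn main() {
        // Parse a 128-bit IPv6 address, view it as a single 128-bit integer,
        // do ordinary arithmetic on it, and turn it back into an address.
        let addr: Ipv6Addr = "2001:db8::1".parse().expect("valid IPv6 address");
        let bits: u128 = addr.into();
        let next = Ipv6Addr::from(bits + 1);
        println!("{} + 1 = {}", addr, next); // 2001:db8::1 + 1 = 2001:db8::2
    }
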

The door is slightly open for 128-bit

Although 128-bit CPUs aren’t a thing today, and it seems no company will be releasing one any time soon, I wouldn’t go so far as to say 128-bit CPUs will never happen. The specification for the RISC-V ISA leaves the possibility of a future 128-bit architecture on the table but doesn’t detail what it would actually be, presumably because there just wasn’t a pressing need to design it.
