In computing, bytes are the basic unit of storage, while data transfer speeds are usually quoted in bits. A bit is a single unit of information that can be either a 0 or a 1, and a byte is a fixed group of eight bits. Because each bit has two possible states, one byte can store 256 different values (0-255). Storage capacities are given in bytes and their multiples, while network speeds are given in bits per second, and that mismatch is where most of the confusion comes from.
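As a quick sanity check on that arithmetic, here's a short Python snippet (my own illustration, not part of the original article) that counts how many distinct values eight bits can hold:

```python
# Each extra bit doubles the number of distinct values, so 8 bits give 2**8 of them.
BITS_PER_BYTE = 8

distinct_values = 2 ** BITS_PER_BYTE
print(distinct_values)                    # 256
print(f"range: 0-{distinct_values - 1}")  # range: 0-255
```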
Megabits vs. Megabytes: What’s the Difference?
A bit or “binary digit” is the smallest piece of information in a binary computer system. A bit can be either a one or a zero, and bits are represented in many different ways: as memory cells in an SSD, as pits and lands on a Blu-ray, or as magnetic patterns on a hard drive platter.
A megabit is a million bits, which is equivalent to 125 kilobytes. Put another way, a single megabyte contains eight megabits' worth of data. So, in theory, a 1,000 Mbps (megabits per second) network connection can transfer 125 MB/s (megabytes per second) worth of data.
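A back-of-the-envelope conversion makes that concrete. The short Python sketch below (just an illustration, not from the original article) divides a rate in megabits per second by eight to get megabytes per second:

```python
def mbps_to_mbytes_per_s(megabits_per_second: float) -> float:
    """Convert a data rate from megabits per second to megabytes per second."""
    return megabits_per_second / 8  # eight bits per byte

print(mbps_to_mbytes_per_s(1000))  # 125.0 -> a 1,000 Mbps link tops out at 125 MB/s
print(mbps_to_mbytes_per_s(100))   # 12.5  -> a 100 Mbps plan tops out at 12.5 MB/s
```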
Mbps and Mb/s refer to megabits per second, while MBps and MB/s refer to megabytes per second; the only visible difference is whether the "b" is lowercase (bits) or uppercase (bytes). So it's not hard to see why so many people confuse the two and end up significantly over- or underestimating the speed of something.
Why Measure Speed in Megabits and Storage in Megabytes?
It’s hard to see immediately why you’d choose either megabits or megabytes for a given measurement. After all, when you transfer a file in Windows, the measurement shown is in MB/s and not Mbps. So it’s not as if you can’t measure data transfer speeds in the larger unit.
However, a byte is a specific arrangement of bits that's part of a particular standard, while bits are universal to every binary computer system. Even if aliens developed binary computers, the bit would still be their fundamental unit of data. There are eight bits to a byte today largely because eight bits comfortably hold any character in the ASCII encoding system (ASCII itself only needs seven), but bytes could just as easily have been standardized at a different number of bits.
With network data transfer, the system isn’t transferring bytes; it’s transferring bits. Knowing how many raw bits can be sent and received gives you a universal measurement of network bandwidth.
When we’re talking about storage devices such as hard drives or SSDs, the drive is formatted to store data in accordance with the standard byte. A disk is not an arrangement of single bits but of 8-bit bytes. So it makes sense to measure its total storage as a multiple of this unit rather than of the bit.
Ironically, there's a unit discrepancy with hard drives as well. Hard drive manufacturers define a kilobyte as 1,000 bytes, a megabyte as 1,000 kilobytes, and so on. Windows, on the other hand, counts in groups of 1,024, in line with the convention RAM manufacturers use.
This is why a 1 TB hard drive shows up as a 931 GB drive in Windows, even though both figures describe exactly the same number of bits. It also underscores why measuring data transfer rates in bits is the most sensible approach, since arbitrary standards don't muddy the waters.
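To see where the 931 figure comes from, here's a quick Python check (my own illustration): it takes one terabyte as drive makers define it, 10^12 bytes, and re-expresses it in the 1,024-based gigabytes Windows reports:

```python
# A "1 TB" drive as the manufacturer defines it: 10**12 bytes.
decimal_bytes = 10 ** 12

# Windows divides by 1,024 three times (bytes -> KB -> MB -> GB).
windows_gb = decimal_bytes / 1024 ** 3
print(f"{windows_gb:.0f} GB")  # 931 GB -- same bits, different unit definition
```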
Just Use the Rule of Eight
If you take care to double-check whether bits or bytes are being used, converting from one to the other is as easy as multiplying or dividing by eight. As long as you remember that there are eight megabits in one megabyte, you’ll have a better idea of how much speed or volume you’re dealing with.
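If it helps, the whole rule fits in two tiny helpers. This sketch (again just an illustration, not something from the article) converts in both directions:

```python
def megabits_to_megabytes(megabits: float) -> float:
    return megabits / 8   # divide by eight going from bits to bytes

def megabytes_to_megabits(megabytes: float) -> float:
    return megabytes * 8  # multiply by eight going from bytes to bits

print(megabits_to_megabytes(500))  # 62.5   -> 500 Mb is 62.5 MB
print(megabytes_to_megabits(250))  # 2000.0 -> 250 MB is 2,000 Mb
```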
RELATED: How Much Download Speed Do You Really Need?