|Unit system||units derived from the bit|
|Unit of||digital information, data size|
|In units of information||1 o = 8 bits|
The octet is a unit of digital information in computing and telecommunications that consists of eight bits. The term is often used when the term byte might be ambiguous, as the byte has historically been used for storage units of a variety of sizes.
The international standard IEC 60027-2, chapter 3.8.2, states that a byte is an octet of bits. However, the unit byte has historically been platform-dependent and has represented various storage sizes in the history of computing. Due to the influence of several major computer architectures and product lines, the byte became overwhelmingly associated with eight bits. This meaning of byte is codified in such standards as ISO/IEC 80000-13. While byte and octet are often used synonymously, those working with certain legacy systems are careful to avoid ambiguity.
Octets can be represented using number systems of varying bases, such as the hexadecimal, decimal, or octal number systems. The binary value with all eight bits set is 11111111₂, equal to the hexadecimal value FF₁₆, the decimal value 255₁₀, and the octal value 377₈. One octet can represent decimal values ranging from 0 to 255.
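The equivalences above can be checked directly, since most languages accept integer literals in all four bases. A minimal sketch in Python (the variable name is illustrative):

```python
# One octet with all eight bits set, written in four different bases.
# Python evaluates each literal to the same integer value.
octet = 0b11111111  # binary

print(octet == 0xFF)    # hexadecimal
print(octet == 255)     # decimal
print(octet == 0o377)   # octal

# Formatting the same value back out in each base:
print(f"{octet:08b} {octet:X} {octet:d} {octet:o}")
# 11111111 FF 255 377

# One octet covers exactly the decimal range 0 to 255.
print(0 <= octet <= 255)
```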
The term octet (symbol: o[nb 1]) is often used when the use of byte might be ambiguous. It is frequently used in the Request for Comments (RFC) publications of the Internet Engineering Task Force to describe storage sizes of network protocol parameters. The earliest example is RFC 635 from 1974. In 2000, Bob Bemer claimed to have earlier proposed the usage of the term octet for "8-bit bytes" when he headed software operations for Cie. Bull in France in 1965 to 1966.
A variable-length sequence of octets, as in Abstract Syntax Notation One (ASN.1), is referred to as an octet string.
Historically, in Western Europe, the term octad (or octade) was used to specifically denote eight bits, a usage no longer common. Early examples of usage exist in British, Dutch and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. Similar terms are triad for a grouping of three bits and decade for ten bits.
|1 kilooctet (ko)||= 10³ octets||= 1000 octets|
|1 megaoctet (Mo)||= 10⁶ octets||= 1000 ko||= 1000000 octets|
|1 gigaoctet (Go)||= 10⁹ octets||= 1000 Mo||= 1000000000 octets|
|1 teraoctet (To)||= 10¹² octets||= 1000 Go||= 1000000000000 octets|
|1 petaoctet (Po)||= 10¹⁵ octets||= 1000 To||= 1000000000000000 octets|
|1 exaoctet (Eo)||= 10¹⁸ octets||= 1000 Po||= 1000000000000000000 octets|
|1 zettaoctet (Zo)||= 10²¹ octets||= 1000 Eo||= 1000000000000000000000 octets|
|1 yottaoctet (Yo)||= 10²⁴ octets||= 1000 Zo||= 1000000000000000000000000 octets|
|1 kibioctet (Kio, also written Ko, as distinct from ko)||= 2¹⁰ octets||= 1024 octets|
|1 mebioctet (Mio)||= 2²⁰ octets||= 1024 Kio||= 1048576 octets|
|1 gibioctet (Gio)||= 2³⁰ octets||= 1024 Mio||= 1073741824 octets|
|1 tebioctet (Tio)||= 2⁴⁰ octets||= 1024 Gio||= 1099511627776 octets|
|1 pebioctet (Pio)||= 2⁵⁰ octets||= 1024 Tio||= 1125899906842624 octets|
|1 exbioctet (Eio)||= 2⁶⁰ octets||= 1024 Pio||= 1152921504606846976 octets|
|1 zebioctet (Zio)||= 2⁷⁰ octets||= 1024 Eio||= 1180591620717411303424 octets|
|1 yobioctet (Yio)||= 2⁸⁰ octets||= 1024 Zio||= 1208925819614629174706176 octets|
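The two prefix families in the tables above differ by design: decimal (SI) multiples grow by powers of 1000, binary (IEC) multiples by powers of 1024. A small sketch illustrating the gap between them (the function and dictionary names are illustrative, not from any standard library):

```python
# Decimal (SI) prefixes: successive powers of 1000.
SI = {"ko": 10**3, "Mo": 10**6, "Go": 10**9, "To": 10**12}
# Binary (IEC) prefixes: successive powers of 1024.
IEC = {"Kio": 2**10, "Mio": 2**20, "Gio": 2**30, "Tio": 2**40}

def to_octets(value, prefix):
    """Convert a prefixed quantity to a plain number of octets."""
    factor = SI.get(prefix) or IEC[prefix]
    return value * factor

# The gap widens with each prefix step:
print(to_octets(1, "Kio") - to_octets(1, "ko"))  # 24 octets
print(to_octets(1, "Gio") - to_octets(1, "Go"))  # 73741824 octets
```

The relative difference grows from about 2.4% at the kilo/kibi level to about 7.4% at the giga/gibi level, which is why the IEC prefixes were introduced to keep the two meanings apart.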
The octet is used in representations of Internet Protocol computer network addresses. An IPv4 address consists of four octets, usually displayed individually as a series of decimal values ranging from 0 to 255, each separated by a full stop (dot). Using octets with all eight bits set, the representation of the highest-numbered IPv4 address is 255.255.255.255.
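The dotted-decimal form described above maps directly onto four octets. A minimal sketch (the helper name is illustrative):

```python
def ipv4_to_str(octets: bytes) -> str:
    """Render four octets in IPv4 dotted-decimal notation."""
    assert len(octets) == 4
    # Each octet is printed as a decimal value from 0 to 255,
    # separated by full stops (dots).
    return ".".join(str(o) for o in octets)

# All eight bits set in every octet gives the highest-numbered address:
print(ipv4_to_str(bytes([255, 255, 255, 255])))  # 255.255.255.255
print(ipv4_to_str(bytes([192, 0, 2, 1])))        # 192.0.2.1
```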
An IPv6 address consists of sixteen octets, displayed in hexadecimal representation (two hexits per octet), with a colon character (:) separating each pair of octets (a 16-bit group is also known as a hextet) for readability, such as FE80:0000:0000:0000:0123:4567:89AB:CDEF.
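The grouping of sixteen octets into eight colon-separated hextets can be sketched as follows (the helper name is illustrative; this shows the full, unabbreviated form only, without the zero-compression rules of the standard notation):

```python
def ipv6_to_str(octets: bytes) -> str:
    """Render sixteen octets as eight colon-separated hextets."""
    assert len(octets) == 16
    # Combine each pair of octets into one 16-bit hextet.
    hextets = (octets[i] << 8 | octets[i + 1] for i in range(0, 16, 2))
    # Print each hextet as four hexadecimal digits.
    return ":".join(f"{h:04X}" for h in hextets)

addr = bytes([0xFE, 0x80, 0, 0, 0, 0, 0, 0,
              0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF])
print(ipv6_to_str(addr))  # FE80:0000:0000:0000:0123:4567:89AB:CDEF
```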
[…] I came to work for IBM, and saw all the confusion caused by the 64-character limitation. Especially when we started to think about word processing, which would require both upper and lower case. […] I even made a proposal (in view of STRETCH, the very first computer I know of with an 8-bit byte) that would extend the number of punch card character codes to 256 […]. So some folks started thinking about 7-bit characters, but this was ridiculous. With IBM's STRETCH computer as background, handling 64-character words divisible into groups of 8 (I designed the character set for it, under the guidance of Dr. Werner Buchholz, the man who DID coin the term "byte" for an 8-bit grouping). […] It seemed reasonable to make a universal 8-bit character set, handling up to 256. In those days my mantra was "powers of 2 are magic". And so the group I headed developed and justified such a proposal […] The IBM 360 used 8-bit characters, although not ASCII directly. Thus Buchholz's "byte" caught on everywhere. I myself did not like the name for many reasons. The design had 8 bits moving around in parallel. But then came a new IBM part, with 9 bits for self-checking, both inside the CPU and in the tape drives. I exposed this 9-bit byte to the press in 1973. But long before that, when I headed software operations for Cie. Bull in France in 1965-66, I insisted that "byte" be deprecated in favor of "octet". […]