I'm in the process of creating a byte_buffer fundamental type for Leaf. This is an unsafe buffer needed for compatibility with C APIs. I have a dilemma about what byte should actually mean.
I have another type, called octet, which is guaranteed to be 8 bits in size.
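For reference, a rough C analogue of that guarantee is uint8_t (the octet name below is just illustrative). Notably, uint8_t is an optional type: the standard only provides it on platforms that have an exactly-8-bit unit.

```c
#include <stdint.h>

/* Rough C analogue of Leaf's octet (illustrative only).
 * uint8_t is exactly 8 bits with no padding when it exists,
 * but the C standard makes it optional: platforms without an
 * 8-bit addressable unit simply don't define it. */
typedef uint8_t octet;
```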
Traditionally, C and C++ have defined a byte to be just an addressable unit of memory, at least 8 bits in size; a sequence of bytes must also form a contiguous memory area. This definition lets the languages work on hardware that doesn't use 8-bit bytes.
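In C that definition is visible through CHAR_BIT in limits.h. A minimal sketch, assuming a C11 compiler:

```c
#include <limits.h>  /* CHAR_BIT: the number of bits in a byte (a char) */
#include <assert.h>  /* static_assert, available as a macro in C11 */

/* sizeof measures in bytes, so sizeof(char) is always 1,
 * no matter how wide a byte actually is. */
static_assert(sizeof(char) == 1, "by definition");

/* The standard only guarantees a lower bound of 8. Code that
 * truly needs 8-bit bytes can state that assumption directly:
 * static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes"); */
static_assert(CHAR_BIT >= 8, "guaranteed by the C standard");
```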
At some point, though, hardware converged on 8-bit bytes, so the distinction feels a bit odd now. A lot of code is written, incorrectly, assuming bytes are 8 bits. Some languages, like Java and C#, simply define a byte to be exactly 8 bits.
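A typical example of that hidden assumption, sketched here as a hypothetical macro pair: computing a type's width in bits as sizeof times 8, which is only correct when a byte is 8 bits.

```c
#include <limits.h>

/* Hypothetical illustration of the incorrect assumption: this
 * only yields the true bit width when CHAR_BIT == 8. */
#define BIT_WIDTH_WRONG(type)    (sizeof(type) * 8)

/* The portable form multiplies by the actual byte width. */
#define BIT_WIDTH_PORTABLE(type) (sizeof(type) * CHAR_BIT)
```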
There are still some chips around, like certain DSPs, where a byte is not 8 bits (some TI DSPs, for example, address memory in 16-bit units). In the future somebody may experiment with unusual byte sizes again, which would make it hard for any language that assumes an 8-bit byte to work on those platforms.