
Practical Computing Advice and Tutorials

Tue: 23 Jul 2019



Bits & Bytes

'bit':

A Binary Digit, or 'bit', is either zero or one, and 8 bits make up 1 Byte.


'Byte':

Traditionally, the smallest unit of addressable storage is a 'Byte' (A.K.A. an 'Octet'): a group of eight binary bits that is read or written as a unit, each bit being either 'set' (changed to a '1') or 'reset' (changed to a '0'). All that a digital computer really does, at a base level, is turn an electrical current (also called a signal) On or Off: two states, On or Off, one and zero. That's it. Nothing more.

This On/Off state is what a 'bit' is. A bit has two states: On (A.K.A. 'Set') and Off (A.K.A. 'Unset'). It's very similar to one of the first revolutions in communications: Morse Code, which uses an On/Off signal to send text messages. I say similar because in Morse Code it's the duration of the On/Off signals that encodes the text (a Dash being three times longer than a Dot), rather than the simple On/Off state itself. What is so amazing is what can be achieved by such a simple operation.

These binary bits are represented by ones and zeros. Now, it turns out that it takes only 4 bits (with room to spare) to represent all ten digits of our Denary (decimal) number system. That spare room (the patterns 1010 to 1111) is represented by the letters A to F, which is where the term Hexadecimal (or HEX for short) comes in.

A=10  B=11  C=12  D=13  E=14  F=15

Binary Coded Decimal
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
Hexadecimal
A 1010
B 1011
C 1100
D 1101
E 1110
F 1111
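
If you'd like to experiment with these mappings yourself, Python's built-in bin(), hex() and int() functions cover both directions; a minimal sketch:

    # Print decimal, 4-bit binary and hexadecimal side by side.
    for n in range(16):
        print(f"{n:2d}  {n:04b}  {n:X}")

    # Going the other way: int() accepts an explicit number base.
    print(int("1010", 2))   # 10
    print(int("F", 16))     # 15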

Position:       3  2  1  0
Value (2^Pos):  8  4  2  1

The decimal value that a group of binary bits represents is encoded from right to left, least significant to most significant. Each bit's value is calculated by raising 2 to the power of the bit's position, with the starting (rightmost) bit position being zero; the values of all the 'set' bits are then added together.

0 0 0 1:  1 =  2^0
0 0 1 0:  2 =  2^1
0 0 1 1:  3 = (2^1) + (2^0)
0 1 0 0:  4 =  2^2
0 1 0 1:  5 = (2^2) + (2^0)
0 1 1 0:  6 = (2^2) + (2^1)
0 1 1 1:  7 = (2^2) + (2^1) + (2^0)
1 0 0 0:  8 =  2^3
1 0 0 1:  9 = (2^3) + (2^0)
1 0 1 0: 10 = (2^3) + (2^1)
1 0 1 1: 11 = (2^3) + (2^1) + (2^0)
1 1 0 0: 12 = (2^3) + (2^2)
1 1 0 1: 13 = (2^3) + (2^2) + (2^0)
1 1 1 0: 14 = (2^3) + (2^2) + (2^1)
1 1 1 1: 15 = (2^3) + (2^2) + (2^1) + (2^0)
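
Here is that right-to-left weighting as a short Python sketch (the function name bits_to_decimal is just an illustrative choice):

    def bits_to_decimal(bits: str) -> int:
        """Sum 2**position for every '1' bit, counting positions from the right."""
        total = 0
        for position, bit in enumerate(reversed(bits)):
            if bit == "1":
                total += 2 ** position
        return total

    print(bits_to_decimal("0110"))  # 6 = (2^2) + (2^1)
    print(bits_to_decimal("1111"))  # 15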

So, with just 4 bits, the highest positive decimal number we can represent is 15. In the real world, it's handy to be able to work with negative numbers too. We can represent negative numbers by using the most significant bit as a 'sign bit' (the scheme shown below is known as two's complement), but the range does not grow; 4 bits can still only represent 16 values: 0 to +15 without a sign bit or, with one, -8 to +7. It works like this: if the most significant bit is 'set', it counts as -2^3 rather than +2^3, making the represented number negative. So, with a sign bit...

0 0 0 1:  1 =  2^0
0 0 1 0:  2 =  2^1
0 0 1 1:  3 = (2^1) + (2^0)
0 1 0 0:  4 =  2^2
0 1 0 1:  5 = (2^2) + (2^0)
0 1 1 0:  6 = (2^2) + (2^1)
0 1 1 1:  7 = (2^2) + (2^1) + (2^0)
1 0 0 0: -8 = -2^3
1 0 0 1: -7 = (-2^3) + (2^0)
1 0 1 0: -6 = (-2^3) + (2^1)
1 0 1 1: -5 = (-2^3) + (2^1) + (2^0)
1 1 0 0: -4 = (-2^3) + (2^2)
1 1 0 1: -3 = (-2^3) + (2^2) + (2^0)
1 1 1 0: -2 = (-2^3) + (2^2) + (2^1)
1 1 1 1: -1 = (-2^3) + (2^2) + (2^1) + (2^0)

Remember, all the computer knows is that it's storing 4 bits. It's only the way in which we, as humans, choose to interpret those bits, by way of how we've coded our application, that makes the difference between 1 0 0 0 being either +8 or -8. This concept scales to 8-bit, 16-bit, 32-bit and 64-bit machine architectures, with the most significant bit being used as the sign bit.
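
That choice of interpretation is easy to demonstrate in Python; a minimal sketch, assuming the 4-bit sign-bit (two's complement) scheme from the table above:

    def as_signed(value: int, bits: int = 4) -> int:
        """Reinterpret an unsigned value as two's complement signed."""
        if value >= 2 ** (bits - 1):   # the most significant (sign) bit is set
            value -= 2 ** bits
        return value

    pattern = 0b1000
    print(pattern)             # 8, the bits read as unsigned
    print(as_signed(pattern))  # -8, the very same bits read as signed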

So, for 8 bits (1 Byte) we have 0 0 0 0 0 0 0 0 to 1 1 1 1 1 1 1 1, which can represent decimal values from 0 to 255 or, with a sign bit, -128 to +127. Another way of looking at this is to take the approach of needing to 'set' a particular bit or sequence of bits in an area of memory. For example, if we have an area of memory in which bit 5 needs to be 'set', for whatever reason, storing the decimal value 32 in that memory location will do exactly that, because what will be stored, in binary, is 0 0 1 0 0 0 0 0

Position:         7    6    5    4    3    2    1    0
Value (2^Pos):  128   64   32   16    8    4    2    1
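
In practice, programmers usually build such a value with a bit shift rather than memorising 32. A short Python sketch (the variable names here are made up for illustration):

    register = 0b00000000         # an 8-bit value we want to manipulate

    bit5 = 1 << 5                 # shift 1 left five places: 0b00100000 == 32
    register |= bit5              # OR sets bit 5 without disturbing the others
    print(bin(register))          # 0b100000

    print(bool(register & bit5))  # AND tests whether bit 5 is set -> True
    register &= ~bit5             # AND with the complement clears bit 5 again
    print(register)               # 0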

You should be able to see from all of this that for each additional bit we have, we double the number of possible values with which we can work: with 2 bits we have four possible values (0 – 3); with 3 bits, eight possible values (0 – 7); with 4 bits, sixteen possible values (0 – 15); and so on. It's a concept that will serve you well to remember and apply to other areas of computer science, such as encryption. That is to say, an 8-bit encryption key is NOT simply twice as strong as a 4-bit encryption key, it's sixteen times stronger, because it has sixteen times as many possible values. With that kind of basic knowledge, you'll much better understand the difference between a 64-bit encryption key and a 128-bit encryption key, and so on.
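
That doubling, and the keyspace comparison, are quick to verify in Python:

    for width in (2, 3, 4, 8):
        print(width, 2 ** width)    # 4, 8, 16, 256 possible values

    print((2 ** 8) // (2 ** 4))     # 16: an 8-bit keyspace vs a 4-bit one
    print((2 ** 128) // (2 ** 64))  # 2^64: a 128-bit keyspace dwarfs a 64-bit one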


Data Transmission & Storage

In computer science, 'Kilo = 1024', because of the binary system:

Binary is Base2, so...

2^10 bits  = 1,024 bits       (Kilobit)
2^20 bits  = 1,048,576 bits   (Megabit)
2^10 Bytes = 1,024 Bytes      (KiloByte)
2^20 Bytes = 1,048,576 Bytes  (MegaByte)
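
Python can confirm those powers of two directly:

    print(2 ** 10)            # 1024     -> the binary 'Kilo'
    print(2 ** 20)            # 1048576  -> the binary 'Mega'
    print(2 ** 20 / 10 ** 6)  # 1.048576 -> binary 'Mega' is ~4.9% larger than decimal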

But, with data transmission and storage, we use Base10 numbers...

10^1  = 10
10^2  = 100
10^3  = 1,000             (1K)
10^4  = 10,000            (10K)
10^5  = 100,000           (100K)
10^6  = 1,000,000         (1 Meg)
10^7  = 10,000,000        (10 Meg)   (10BASE-T:   10 Mbps)
10^8  = 100,000,000       (100 Meg)  (100BASE-T:  100 Mbps)
10^9  = 1,000,000,000     (1 Gig)    (1000BASE-T: 1 Gbps)
10^10 = 10,000,000,000    (10 Gig)   (10GBASE-T:  10 Gbps)
10^11 = 100,000,000,000   (100 Gig)
10^12 = 1,000,000,000,000 (1 Tera)

Note that the Ethernet standards above are rated in bits per second, so 10BASE-T moves 10,000,000 bits, or 1,250,000 Bytes, each second.

You may see connection speeds listed as 'Kbps'. This means 'Kilo-bits-per-second'. If the speed is listed as 'KBps', it means 'Kilo-Bytes-per-second', and likewise 'MBps' for 'Mega-Bytes-per-second'. Because there are 8 bits in 1 Byte, we have to multiply the Bytes by 8 to get the bit rate, or divide the bits by 8 to get the Byte rate. Be careful with adverts: a '3 Meg connection' almost always means 3 Mega-bits-per-second (3 Mbps), which is 3,000,000 bits per second; divide that by 8 and you get 375,000 Bytes per second (375 KBps).

Converting between MB and GB is a factor of 1,000. So, if your service provider says that you've used 2,275 MB of your 8 GB plan, that's 2.275 GB used.
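
Both conversions are one-liners; a minimal Python sketch (the function names are just illustrative choices):

    def mbps_to_MBps(mbps: float) -> float:
        """Mega-bits-per-second to Mega-Bytes-per-second."""
        return mbps / 8

    def mb_to_gb(mb: float) -> float:
        """MB to GB, using the decimal factor of 1,000."""
        return mb / 1000

    print(mbps_to_MBps(3))  # 0.375 MBps for a 3 Mbps connection
    print(mb_to_gb(2275))   # 2.275 GB used of the 8 GB plan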


Logic Gates

It should now be relatively clear how computers work with and interpret our decimal number system, by simply manipulating combinations of bits. To do this, computers use a series of 'logic gates', the most ubiquitous being the 'AND GATE', which we'll need to understand when we get to the subject of 'Subnetting'.

I'm not going to go into any real detail here, as other sites (such as Wikipedia) have some very good content about logic gates. I encourage the reader to do some research and understand what goes into building a 'half adder' and how two of them are wired up to make a 'full adder'; understand that, and you'll understand how computers can produce the sum of two binary numbers.

The simplest 'gate' that you use, day-in and day-out, without giving it a second thought, is the electrical switch; for lights, for kettles, for any electrical device that you need to power up and power down. So, let's consider a switch that controls a light: it has two positions, On and Off, or, to put that into computer terminology, 1 and 0.

Now, suppose that in order to turn on your light, you needed two switches, and that the only way the light would shine is if both switches were in the On position. That's exactly what an AND gate is: it outputs On (1) only when both of its inputs are On (1).
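
To round things off, here is an AND gate's truth table generated in Python, with the & (bitwise AND) operator playing the role of the gate:

    # An AND gate outputs 1 only when both inputs are 1.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, a & b)
    # Output:
    # 0 0 0
    # 0 1 0
    # 1 0 0
    # 1 1 1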