In computer science, all data read by a computer is simply numbers. For example, even this text that I am currently typing is nothing more than numbers. In ASCII encoding, each character is exactly one byte in length (special characters such as é or ü fall outside ASCII and require other encodings).
Let's look at the simplest data storage unit: a bit. A bit is simply a 1 or a 0. This can be represented as true or false, yes or no, on or off, enabled or disabled, etc.
The next step up from a bit is a nibble. A nibble is 4 bits, or half of a byte. Since it has 4 bits (each with 2 possible values), there are 2^4 = 16 possible nibbles (0-15). A nibble is generally represented as a single hexadecimal digit. Hexadecimal (aka hex) is just a simple way to display binary data in base 16, in a readable and editable format. Counting from 0 to 15 in hexadecimal goes like this: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. In hex, the number 16 is represented as '0x10'.
Further up is a more familiar value size, a byte. A byte is composed of 8 bits, or 2 nibbles. A byte can hold 2^8 = 256 possible values (0-255). Bytes are generally displayed as two "hexits", or hex digits. For example, the byte '3E' corresponds to the decimal number 62. This can be determined by hand. First, look at the leftmost nibble, '3'. Since this is in the second place from the right, you multiply this value by 16 to get 48. Then, you simply add 'E', or 14, to the 48 to get 62. If this byte were to be expressed in "binary," or 1s and 0s, it would be 00111110. Here's a way to easily convert between decimal and binary:
128  64  32  16   8   4   2   1
  *   *   *   *   *   *   *   *
  0   0   1   1   1   1   1   0
  =   =   =   =   =   =   =   =
  0 + 0 + 32 + 16 + 8 + 4 + 2 + 0 = 62
To convert hex to binary, the process is even easier:
3    E         Separate out the nibbles.
0011 1110      Write each nibble's corresponding 4-bit binary value, knowing that the place values within a nibble are 8 4 2 1.
00111110       Concatenate all of the binary values. This is the resulting binary representation of the number.
A more complicated example:
Given a hex value, '8B25C601':
8    B    2    5    C    6    0    1
1000 1011 0010 0101 1100 0110 0000 0001
In C, the smallest data type is a 'char', which is simply a byte. For example: char myByte = 0x3E;
In most programming languages, hex values are prefixed with '0x'. Some older assembly languages (such as for the Intel 8080) use a '$' prefix or an 'h' suffix. For this tutorial, however, I will stick with '0x', as this is the most common nowadays.
A 'char' is a character, encoded with ASCII. In ASCII, the character with the value 0x3E is '>'. 'A' is 0x41, while 'a' is 0x61, and the rest of the alphabet follows in order from these values. '0' is at 0x30, with '1' at 0x31, and so on. A complete ASCII chart is easy to find online.
Beyond a 'char' is a 'short int', which is typically 2 bytes long and can therefore hold 65536 different values (2^16). Beyond this is an 'int', typically 4 bytes, which can hold 4294967296 possible values (2^32). Strictly speaking, the C standard only guarantees minimum sizes; these are the sizes you will see on most modern platforms.
After this are longs, long longs, and other data types. These longs and long longs have no single fixed size; their width is platform dependent.
Beyond this, the next most common data types are floats and doubles. These are complicated, and their inner workings are beyond the scope of this tutorial, but for now you can simply think of them as numbers that can have decimal points in them. Example (C): float money = 24.99; double angle = 3.14 / 2;
If you are interested in how floats and doubles are stored in memory, the Wikipedia article on the IEEE 754 floating-point standard is a good place to start.
Thanks for reading! That is all for now. If there is anything you do not understand, or anything you would like me to add, just post a comment saying so.