I am currently learning SQL.
While looking at the INT type, I learned that an INT is 4 bytes long, and since each byte is 8 bits, each INT is 32 bits.
However, the documentation says the maximum value for an unsigned INT is (2^32) - 1, where the -1 accounts for the 0 value. I understand that the 32 comes from each INT being 32 bits.
My question is where does the 2 come from in the calculation?
My intuition tells me that each bit must have some sort of measure valued at 2.
Answer
Computers use the binary system to store values. Let's draw an analogy between the binary and decimal systems:
Consider the base-10 (decimal) system. If you have a number with 32 decimal places, each place holding a digit 0-9, you have 10^32 possible values (obviously enough; we use this system daily). For a smaller example, 3 decimal places give 10^3 = 1000 possible values, 000 through 999.
Now consider the base-2 (binary) system, which is the one computers use (for practical reasons: two states are the easiest to distinguish electrically, and the wiring logic is simplest). Each place (bit) holds either 0 or 1, so 32 bits give 2^32 possible values. The 2 is simply the number of values a single bit can take. And since one of those 2^32 values is 0, the largest representable value is 2^32 - 1 = 4,294,967,295.
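If you have a MySQL server handy, you can check this yourself. A minimal sketch; the table name `t` is hypothetical, and the out-of-range INSERT assumes strict SQL mode (otherwise MySQL clamps the value and issues a warning instead of an error):

```sql
-- 2^32 - 1 is the largest value an UNSIGNED 32-bit INT can hold.
SELECT POW(2, 32) - 1;        -- 4294967295
SELECT BIN(4294967295);       -- '11111111111111111111111111111111' (32 one-bits)

-- Demonstrate the boundary on an actual column.
CREATE TABLE t (n INT UNSIGNED);
INSERT INTO t VALUES (4294967295);  -- accepted: exactly 2^32 - 1
INSERT INTO t VALUES (4294967296);  -- rejected: 2^32 is out of range for INT UNSIGNED
```

The BIN() output makes the connection visible: the maximum value is simply all 32 bits set to 1, just as 999 is all three decimal places set to 9.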