IE3102 Microprocessors
Contents
1 Week 1
1.1 Introduction to Microprocessors
1.1.1 ARM Processors
1.1.2 Data Sizes and Instruction Sets
1.2 Data Representation
1.2.1 Floating Point Numbers
Chapter 1
Week 1
1.1 Introduction to Microprocessors
A microprocessor is a programmable chip that can perform logical, arithmetic,
and control operations. Logical operations are AND, OR, NOT, XOR; arithmetic
operations are the standard +, −, ×, ÷, and so on; control operations are
if-else statements, loops, and other constructs that direct program flow.
A microprocessor is essentially a CPU on a single chip, containing registers,
an ALU, an instruction decoder, addressing logic, internal buses, and control
logic. It typically cannot operate on its own. Microcontrollers, on the other
hand, are microprocessors integrated with memory and I/O peripherals on a
single chip, making them complete computer systems on a single chip, normally
built to accomplish a single specific purpose or task. Embedded systems are
systems or devices that have a dedicated computer as one of their subsystems,
which may be a microprocessor and/or a microcontroller.
1.1.1 ARM Processors
ARM stands for "Advanced RISC Machine", where RISC stands for Reduced
Instruction Set Computer; the architecture is based on RISC principles. The
RISC design rationale aims for high performance by reducing the total number
of instruction classes, introducing a load-store architecture, using
fixed-length instructions, and providing a large general-purpose register set.
This enables simpler hardware, which can then be scaled to higher operating
frequencies.
1.1.2 Data Sizes and Instruction Sets
ARM is a 32-bit architecture, meaning that all arithmetic and logical
operations are based on 32 bits; the registers, address bus, and data bus are
also 32 bits wide. In the context of ARM, a byte refers to 8 bits, a halfword
refers to 16 bits, and a word is 32 bits. Most ARMs implement two instruction
sets: the 32-bit ARM instruction set (the default) and the 16-bit Thumb
instruction set.
1.2 Data Representation
1.2.1 Floating Point Numbers
Floating point numbers extend the range of numbers that can be stored in a
computer compared to fixed point numbers. Generally a floating point number
stores a sign S, an exponent E, and a mantissa F, in a predetermined number
base B, usually base 2.
Floating point value = (−1)^S × F × B^E
In the 32-bit single precision format, S takes up 1 bit (bit 31), E takes up
bits 23 to 30, and F takes up bits 0 to 22.
IEEE-754 Floats
IEEE-754 floats are used in almost all modern floating point units. The
standard allocates 1 sign bit (S), 8 exponent bits (E), and 23 mantissa bits
(F) for single precision, and 1 S, 11 E, and 52 F bits for double precision.
One thing to note is that the exponent E in the IEEE-754 standard is stored
with a bias: 127 is added to the actual E value for single precision, and 1023
for double precision. Since E can be negative, the bias allows the exponent to
be stored as a positive number.
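The field split and bias described above can be checked in a few lines of Python. This is a sketch using the standard struct module; the helper name float_fields is ours:

```python
import struct

def float_fields(x):
    """Split a single-precision float into its (S, E, F) bit fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
    s = (bits >> 31) & 0x1    # sign: bit 31
    e = (bits >> 23) & 0xFF   # biased exponent: bits 23..30
    f = bits & 0x7FFFFF       # mantissa: bits 0..22
    return s, e, f

# 1.0 is stored with a biased exponent of 0 + 127 = 127 and an all-zero mantissa
s, e, f = float_fields(1.0)
print(s, e, f)  # 0 127 0
```

Note that for -2.0 the fields come out as (1, 128, 0): the sign bit is set and the true exponent 1 is stored as 1 + 127 = 128.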
Different Number Types of IEEE-754 Float
Here are the 5 different number types of the IEEE-754 standard; each type
requires different handling. IEEE-754 stores a single precision float in the
following format:

[S: 1 bit][E + 127: 8 bits][F: 23 bits]
Here are the different types and how they are stored,

Zero          [±][0][0]
Infinity      [±][2^8 − 1][0]
NaN           [±][2^8 − 1][non-0]
Denormalised  [±][0][any non-0]
Normalised    [±][1 ≤ E ≤ 2^8 − 2][anything]
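The five cases above can be expressed as a small classifier. This is a sketch; the function name classify_f32 is ours, and it extracts the raw bits with the standard struct module:

```python
import struct

def classify_f32(x):
    """Classify a single-precision float into the five IEEE-754 cases."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    e = (bits >> 23) & 0xFF   # 8-bit exponent field
    f = bits & 0x7FFFFF       # 23-bit mantissa field
    if e == 0:
        return "zero" if f == 0 else "denormalised"
    if e == 0xFF:             # 2^8 - 1 = 255
        return "infinity" if f == 0 else "NaN"
    return "normalised"       # 1 <= E <= 2^8 - 2

for x in (0.0, float("inf"), float("nan"), 1e-45, 3.14):
    print(x, classify_f32(x))
```

Here 1e-45 classifies as denormalised because it is below the smallest normalised single-precision magnitude and gets rounded to a denormal when packed as a float32.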
Normalised IEEE-754 Floats
The normal number bit representation is deciphered in the following way,

(−1)^S × (1 + F) × 2^(E−127)

where S is the sign bit (bit 31), E is the exponent (the next 8 bits) in the
range 1 ≤ E ≤ 254, and F is the mantissa value (the last 23 bits) in the
range 0 ≤ F < 1. There is an implicit 1 in (1 + F) to take note of when
evaluating the float: since 0 ≤ F < 1, the multiplier of the power of two is
always in the range 1 ≤ (1 + F) < 2. This makes sense because if the
multiplier reached 2, we could simply increase the exponent instead, since the
base is 2 anyway.
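Putting the formula to work, a raw normalised bit pattern can be decoded back to its value. This is a sketch; decode_normal is our name, and the struct comparison is just a sanity check:

```python
import struct

def decode_normal(bits):
    """Evaluate (-1)^S * (1 + F) * 2^(E-127) for a normalised bit pattern."""
    s = (bits >> 31) & 0x1
    e = (bits >> 23) & 0xFF          # biased exponent, 1 <= e <= 254
    f = (bits & 0x7FFFFF) / 2**23    # mantissa as a fraction, 0 <= f < 1
    return (-1)**s * (1 + f) * 2**(e - 127)

# 0x40490FDB is the float32 closest to pi; decoding must agree with struct
bits = 0x40490FDB
assert decode_normal(bits) == struct.unpack(">f", struct.pack(">I", bits))[0]
print(decode_normal(bits))  # 3.1415927410125732
```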
To determine the bits from the decimal value, we can first determine S
trivially. E can be found by flooring the base-2 logarithm of the absolute
value. Let our decimal be −3.14159:

E = ⌊log₂ 3.14159⌋ = ⌊1.65⌋ = 1
We got E = 1, but the exponent stored by IEEE-754 is offset by 127, so it is
stored as 128 (i.e. 0b10000000). Now to determine the mantissa F, divide the
magnitude of the value by 2, E times (i.e. multiply by 2^−E). Ensure that this
value is between 1 and 2 since, as we have seen earlier, 1 ≤ (1 + F) < 2.

3.14159 × 2^−1 ≈ 1.570795,  1 < 1.570795 < 2
Undo the (1 + F) offset by subtracting 1,

1.570795 − 1 = 0.570795

then convert it into a 23-bit binary fraction:

round(0.570795 × 2^23) = 4788176₁₀

4788176₁₀ = 0x490FD0 = 0b100 1001 0000 1111 1101 0000
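The steps of this worked example can be reproduced in code. This is a sketch; the final comparison against the standard struct module simply confirms that, for this value, the hand method lands on the same bit pattern:

```python
import math
import struct

x = -3.14159
s = 1 if x < 0 else 0                 # sign bit from the sign of x
e = math.floor(math.log2(abs(x)))     # E = floor(log2 |x|) = 1
frac = abs(x) * 2**-e - 1             # scale into [1, 2), drop the implicit 1
f = round(frac * 2**23)               # 23-bit binary fraction = 4788176
bits = (s << 31) | ((e + 127) << 23) | f
print(hex(bits))                      # 0xc0490fd0

# The assembled pattern agrees with what struct produces for float32
assert bits == struct.unpack(">I", struct.pack(">f", x))[0]
```

Note the biased exponent (e + 127 = 128) and the mantissa 0x490FD0 match the values derived by hand above.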
Denormalised IEEE-754 Floats
From the way normalised numbers are deciphered, we know the range of normal
numbers, and hence the smallest magnitude of the normalised numbers, i.e.
1.00 × 2^−126 = 2^−126 ≈ 1.1755 × 10^−38. Since the exponent field E of a
denormalised number is 0, evaluating it requires the mantissa field (the
binary fraction) to be multiplied by the smallest-magnitude normalised number
(i.e. 1.00 × 2^−126). In this case, the mantissa field does not have an
additional 1 added to it, unlike the normalised number. So to decipher the
bits: check that the E bits are all 0s, evaluate the F bits into a fraction
(the evaluated number is in the range 0 < F < 1), then multiply that number
by 2^−126. To convert a denormalised number into bits, simply multiply by
2^126 to get the value into the range 0 < F < 1 and convert that into the F
binary fraction bits. Obtaining the S bit is trivial, and the E bits are all
0s.
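The denormalised decode rule (mantissa fraction times 2^−126, with no implicit 1) can be sketched as follows; decode_denormal is our name:

```python
import struct

def decode_denormal(bits):
    """Evaluate (-1)^S * F * 2^-126 for a pattern whose E bits are all 0."""
    s = (bits >> 31) & 0x1
    assert (bits >> 23) & 0xFF == 0, "not a denormalised pattern"
    f = (bits & 0x7FFFFF) / 2**23   # fraction in [0, 1), no implicit 1
    return (-1)**s * f * 2**-126

# The smallest positive denormal, bit pattern 0x00000001, is 2^-149
x = decode_denormal(0x00000001)
assert x == 2**-149
assert x == struct.unpack(">f", struct.pack(">I", 0x00000001))[0]
```

The smallest denormal works out to 2^−23 × 2^−126 = 2^−149, which is why denormals extend the representable range well below the 2^−126 normalised floor.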