So far we have only considered numeric operations with integers. Sometimes we want to represent numbers with fractional parts, or very big or very small numbers. In addition, physical measurements cannot be totally precise; rather, they can only be made to a certain degree of accuracy. For example, a book published in 1956 states that a river is 120,000,000 years old. It makes little sense to say that today that river is 120,000,049 years old! (The situation is quite different for a bank: it must keep an accurate total of its deposits, which might be $120,000,049.98.)
Very big and very small numbers can be expressed more conveniently in "scientific" notation, using some significant digits multiplied by a power of 10, for instance
1.2×10^8    0.123456×10^-27    -30.09×10^30
Digital computers can store numbers of this sort using the binary equivalent of scientific notation, that is, a sign bit, a sequence of significant binary digits (called the mantissa), and the location of the "binary point" relative to these, in the form of a power of 2 (called the exponent). These are called floating point numbers, because the point "floats" around in (and beyond) the digits. Historically, many different schemes for storing these 3 quantities have been used. More recently, an IEEE standard (IEEE 754) has been published for 32- and 64-bit floating point numbers (as well as some other sizes). Hardware manufacturers now tend to make floating point arithmetic units that operate on numbers stored according to this standard. In any case, if we are provided with a set of instructions for performing the operations, the details of the storage scheme need not concern us.
The important points to remember are:
The IEEE standard for 32-bit numbers uses 1 bit for sign, 8 bits for exponent (where the binary point is), and 23 bits for significant figures. The leftmost (24th) significant bit of a normalized number is always one, and hence not stored. For details see Wikipedia.
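As a concrete illustration (the label names below are my own), the assembler emits the same 32-bit pattern for 1.0 whether we write it as a float or spell out its IEEE bits by hand:

```mips
        .data
one:     .float 1.0          # assembled as 0x3F800000:
                             #   sign 0, exponent 01111111 (127 = bias + 0),
                             #   stored fraction all zero; the hidden 24th bit supplies the 1
onebits: .word  0x3F800000   # the same bit pattern, written explicitly
```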
Like most processors of its time, MIPS is designed to accommodate one or more coprocessors, other chips that share the processing load. Coprocessor 1 will usually be a floating point coprocessor, and this is what SPIM simulates.
As advancing technology allows more circuit elements on a chip, the "coprocessor" functions are coming to be incorporated in the main processor chip. Earlier and less expensive personal computers often lacked a floating point processor; this lack was made up by software implementing the floating point operations, at a much slower speed. For those familiar with the Intel processor family, the 80386 and its coprocessor, the 80387, were combined into the 80486 chip.
There are 2 sizes of floating point numbers: single (32 bits, instruction suffix .s) and double (64 bits, instruction suffix .d).
The coprocessor contains 16 FP registers, named $f0 - $f15. Each FP register can hold either a single or a double.
Doubles can be used to particular advantage in intermediate calculations, in order to minimize accumulation of error, particularly the possible loss of accuracy when subtracting nearly-equal numbers. Operations with doubles are denoted with the suffix .d in the op code; I will not discuss them further here.
SPIM provides system services for floating point I/O: print_float (code 2 in $v0, argument in $f12), read_float (code 6, result returned in $f0), and the corresponding double services print_double (code 3) and read_double (code 7).
Examples of usage can be found in the program circle.a
Data (normally floating) can be moved between memory and FRegisters. Floating constants can be declared in the data segment with the .float directive.
l.s FRdest, address # FRegister := memory
s.s FRsrc, address # (left to right) memory := FRegister
These are pseudo-instructions. The data actually travels through the processor.
For example:
l.s $f2, pi # load constant from memory
li $v0, 6 # read diameter
syscall # ... into $f0
mul.s $f12, $f0, $f2 # (see below) circumf := diameter * pi
s.s $f12, ($t0) # store in an array location
li $v0, 2 # print circumference (from $f12)
syscall
--------------
pi: .float 3.14159 # assemble a single float in data segment
The expected operations are provided:
The 4 branches of arithmetic: ambition, distraction, uglification, and derision. - Lewis Carroll
add.s FRdest, FRsrc1, FRsrc2
sub.s FRdest, FRsrc1, FRsrc2
mul.s FRdest, FRsrc1, FRsrc2
div.s FRdest, FRsrc1, FRsrc2
The pattern is familiar: 3 registers (this time in the coprocessor), destination := source1 (op) source2
In addition, these 2-operand operations are available:
abs.s FRdest, FRsrc #absolute value
neg.s FRdest, FRsrc #negate (change sign)
mov.s FRdest, FRsrc #move, note the lack of 'e'
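A short sketch combining both groups of operations (the labels a and b and their values are my own): compute |a - b| and print it with the print_float service.

```mips
        .data
a:      .float 2.5            # hypothetical operands for the sketch
b:      .float 7.25
        .text
main:   l.s   $f1, a
        l.s   $f2, b
        sub.s $f3, $f1, $f2   # $f3 := a - b
        abs.s $f3, $f3        # $f3 := |a - b|
        mov.s $f12, $f3       # print_float expects its argument in $f12
        li    $v0, 2
        syscall
```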
No hardware exists to do arithmetic on a mixture of numeric types. If you want to add an integer to a float, for example, one of them must be converted to the other's type. Likewise, single <--> double conversion must be done to match types.
Type conversions are the responsibility of the floating point coprocessor, and are done with data in the F-registers. The instructions are of the form
cvt.s.w FRdest, FRsource
which means: convert to single floating, the integer word in FRsource, storing result in FRdest. Note the right-to-left pattern.
Converting integer to floating transforms the bit pattern used, but preserves the numeric value of small integers (all integers of magnitude up to 2^24, about 7 decimal digits, fit exactly in a single). Converting from floating to integer, on the other hand, causes any fractional part to be lost (truncated), and if the floating number has a very large magnitude, its value may not be storable as an integer at all.
In this example, we convert pi = 3.14159 to an integer, with the value 3:
l.s $f0, pi
cvt.w.s $f1,$f0 #now $f1 = 3 (an integer word, not usable in floating arithmetic)
Since conversions are done by the coprocessor, it is necessary to move integer data back and forth between processor general registers and coprocessor registers. This movement does not change the bit patterns, so the data must be converted before being used in operations of the other type. The programmer must keep track of the type of data in each coprocessor register. It is also possible to have single floating data in the general registers, but not very useful.
The instructions for moving data between general registers and coprocessor registers are:
mtc1 Rsrc, FRdest # move to coprocessor 1: FRegister := general register
mfc1 Rdest, FRsrc # move from coprocessor 1: general register := FRegister
The instruction set allows for multiple coprocessors; for now we are only using coprocessor 1, the floating point coprocessor. Note that the general register is always listed first, just as in load and store instructions. Generally, we should be moving integer data.
Example: continuing from above, to calculate the radius of the circle (diameter/2), from the diameter in $f0:
li $t1,2 #integer 2
mtc1 $t1,$f1 #integer 2 arrives in coprocessor
cvt.s.w $f3,$f1 #single 2.0 in $f3
div.s $f3,$f0,$f3 # radius in $f3
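Going the other way, a minimal sketch: truncate the radius just computed in $f3 to an integer and bring it back to a general register, then print it with SPIM's print_int service (code 1, argument in $a0).

```mips
        cvt.w.s $f4, $f3      # truncate the float in $f3 to an integer word
        mfc1    $a0, $f4      # move the (integer) bits to a general register
        li      $v0, 1        # print_int service
        syscall
```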
We can compare floating values, and branch depending upon the result. The compares are done by the coprocessor, which generates a "status" of true or false. Then the special branch instructions, "branch on coprocessor condition flag" are used to effect the branch.
The compare instructions have the form c.cond.s FR, FR, with 3 of the expected 6 conditions implemented. (Instead of .s you may also use .d.) The 3 forms are:
c.eq.s # equal
c.le.s # less than or equal
c.lt.s # less than
Each compare instruction compares the two operands and then sets the coprocessor status, which is then tested by the branch instructions:
bc1t label # branch if coprocessor 1 condition flag true
bc1f label # branch if coprocessor 1 condition flag false
Example:
# IF x > y THEN k = 1 ELSE k = 2    (x in $f1, y in $f2)
c.lt.s $f2, $f1 # y < x is same as x > y
bc1f else
li $t0,1 #THEN k = 1 (in $t0)
b endif
else:
li $t0,2 #ELSE k = 2
endif:
All the possible comparisons can be obtained either by reversing the operands, as we did here, or choosing the opposite branch instruction.
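For instance, x >= y has no direct compare, but testing the opposite condition and taking the opposite branch gives it (a sketch, with x in $f1 and y in $f2 as above; the label geq is my own):

```mips
        c.lt.s $f1, $f2       # flag := (x < y)
        bc1f   geq            # NOT (x < y)  means  x >= y
```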
That's all you need to know to program floating point operations in (this) assembly language. However, let us remember that floating point calculations are not exact, and therefore it is not a good idea to expect exact equality of two different computations, even if they should be theoretically equal. This caution holds regardless of the computer language you are using. For instance, adding 1/10 to itself 10 times may not be exactly equal to 1.0! It is more likely to be equal if the additions are done in double and the result is converted to single for comparing, but even this is not foolproof.
(x == 1.0) is more safely written as abs(x-1.0) < 0.000001
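In this assembly language, that safer test might be sketched as follows (the labels x, eps, and nearly1 are my own, with x holding some computed value):

```mips
        .data
x:      .float 1.0            # some computed value to test
eps:    .float 0.000001       # tolerance from the text
one:    .float 1.0
        .text
        l.s    $f1, x
        l.s    $f2, one
        sub.s  $f3, $f1, $f2  # x - 1.0
        abs.s  $f3, $f3       # |x - 1.0|
        l.s    $f4, eps
        c.lt.s $f3, $f4       # flag := |x - 1.0| < 0.000001
        bc1t   nearly1        # close enough: treat as equal
```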