I/O SYSTEMS
I/O HARDWARE
Computers operate many kinds of devices. A device communicates with a computer system by
sending signals over a cable. The device communicates with the machine through a connection
point called a port. If one or more devices share a common set of wires, the connection is called
a bus; a bus is a set of wires together with a protocol that defines the messages sent over them.
When device A has a cable that plugs into device B, device B plugs into device C, and device C
plugs into the computer, this arrangement is called a daisy chain.
Buses are widely used in computer architecture. The figure shows the PCI bus ( the common PC
bus system ), which connects the processor-memory subsystem to fast devices; an expansion bus
connects slower devices such as the keyboard and serial and parallel ports. Disks are connected
to a SCSI ( Small Computer System Interface ) controller or an IDE ( Integrated Drive
Electronics ) controller. IDE is simpler but slower at data transfer; SCSI is faster but more
complex. A controller is a collection of electronics that can operate a bus or a device.
An I/O port typically consists of four registers:
1. Status register - contains bits that indicate whether the current command has completed
2. Control register - written by the host to start a command or change the mode of the device
3. Data-in register - read by the host to get input
4. Data-out register - written by the host to send output
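As a sketch, the four registers can be modeled as plain fields on a toy object. The bit layout below ( busy and error bits in the status register ) is an assumption for illustration only, not any real device's register map.

```python
class IOPort:
    """Toy model of a device's four I/O port registers (bit layout is assumed)."""
    BUSY = 0x1           # status bit 0: controller is busy
    ERROR = 0x2          # status bit 1: a device error occurred

    def __init__(self):
        self.status = 0      # read by the host to check device state
        self.control = 0     # written by the host to start a command / set a mode
        self.data_in = 0     # read by the host to get input from the device
        self.data_out = 0    # written by the host to send output to the device

    def is_busy(self):
        return bool(self.status & self.BUSY)
```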
Methods used to transfer data between I/O devices and the CPU ( or interactions between the
host and the controller ) are:
1. Polling
The basic notion of handshaking is used here. The controller indicates its status through the
busy bit in the status register: the busy bit is 1 while the controller is busy and 0 when it is free.
The host signals its wishes through the command-ready bit in the command register: the host
sets the command-ready bit to 1 when a command is available for the controller to execute.
The host writes output through a port by handshaking as follows:
1. The host repeatedly reads the busy bit until that bit becomes clear. Here the host is
busy-waiting, or polling.
2. The host sets the write bit in the command register and writes a byte into the data-out
register.
3. The host sets the command-ready bit
4. When the controller sees that the command-ready bit is 1, it sets the busy bit.
5. The controller sees the write command. It reads the data-out register to get the byte and
does the I/O to the device.
6. The controller clears the command-ready bit and the busy bit.
This loop is repeated.
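The six steps above can be simulated in a few lines. This is a toy model: the bit positions are made up, only a one-byte write command is handled, and the controller "runs" inline when the host calls it, whereas real hardware operates concurrently with the host.

```python
class ToyController:
    # Register bit positions (assumed layout, for illustration only)
    BUSY, CMD_READY, WRITE = 0x1, 0x2, 0x4

    def __init__(self):
        self.status = 0           # busy bit lives here
        self.command = 0          # command-ready and write bits live here
        self.data_out = None      # the data-out register
        self.device_bytes = []    # bytes the "device" has received

    def service(self):
        """Controller side of the handshake (steps 4-6)."""
        if self.command & self.CMD_READY:      # step 4: sees command-ready is 1,
            self.status |= self.BUSY           #          sets the busy bit
            if self.command & self.WRITE:      # step 5: sees the write command,
                self.device_bytes.append(self.data_out)  # does the I/O
            self.command = 0                   # step 6: clears command-ready
            self.status &= ~self.BUSY          #          and the busy bit

def host_write_byte(ctrl, byte):
    """Host side of the handshake (steps 1-3)."""
    while ctrl.status & ToyController.BUSY:    # step 1: poll the busy bit
        pass
    ctrl.data_out = byte                       # step 2: write the data-out
    ctrl.command |= ToyController.WRITE        #          register, set write bit
    ctrl.command |= ToyController.CMD_READY    # step 3: set command-ready
    ctrl.service()   # inline here; real hardware acts concurrently
```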
Polling has the disadvantage that if there are too many devices to check, the time required to
poll them can exceed the time available to service the I/O devices. The hardware mechanism
that instead enables a device to notify the CPU is called an interrupt.
2. Interrupts
Interrupts allow devices to notify the CPU when they have data to transfer or when an
operation is complete, allowing the CPU to perform other duties when no I/O transfers need its
immediate attention.
The CPU hardware has a wire called the interrupt-request line that the CPU senses after
executing every instruction.
A device's controller raises an interrupt by asserting a signal on the interrupt request
line.
The CPU then performs a state save, and transfers control to the interrupt
handler routine at a fixed address in memory
The interrupt handler determines the cause of the interrupt, performs the necessary
processing, performs a state restore, and executes a return from interrupt instruction to
return control to the CPU.
The CPU then resumes its interrupted task.
Most CPUs now have two interrupt-request lines:
one that is non-maskable, reserved for unrecoverable error conditions, and one that is
maskable, which the CPU can temporarily disable during critical processing.
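The cycle can be sketched as a toy CPU that checks a simulated interrupt-request line after each "instruction". The dispatch-by-vector scheme and all the names here are illustrative, not a real architecture; state save and restore are only noted in a comment.

```python
class ToyCPU:
    def __init__(self):
        self.irq_line = False      # the simulated interrupt-request line
        self.pending_vector = None
        self.handlers = {}         # interrupt vector: number -> handler routine
        self.log = []              # trace of what ran, in order

    def raise_interrupt(self, vector):
        """A device controller asserts the interrupt-request line."""
        self.irq_line = True
        self.pending_vector = vector

    def run(self, instructions):
        for instr in instructions:
            instr(self)                        # execute one instruction
            if self.irq_line:                  # line is sensed after every instruction
                # (a real CPU saves state here and restores it afterwards)
                self.handlers[self.pending_vector](self)
                self.irq_line = False          # return-from-interrupt; task resumes
```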
3. Direct Memory Access ( DMA)
For devices that transfer large quantities of data ( such as disk controllers ), it is wasteful
to tie up the CPU transferring data in and out of registers one byte at a time.
Instead this work can be off-loaded to a special processor, known as the Direct Memory
Access, DMA, Controller.
The host issues a command to the DMA controller indicating the source of the data, the
destination to which it is to be transferred, and the number of bytes to transfer. The DMA
controller handles the data transfer and then interrupts the CPU when the transfer is
complete.
Handshaking between DMA controllers and their devices is accomplished through two
wires called the DMA-request and DMA-acknowledge wires.
While the DMA transfer is going on, the CPU does not have access to the PCI bus
( including main memory ), a phenomenon known as cycle stealing, but it does have access
to its internal registers and primary and secondary caches.
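The handoff can be sketched as follows. Memory is a plain bytearray and the "interrupt" is a callback, so this only illustrates the division of labor between host and controller, not real bus-level handshaking.

```python
def dma_transfer(memory, src, dst, count, interrupt_cpu):
    """Toy DMA controller: the host supplies the source, the destination,
    and the byte count; the controller moves the whole block without any
    per-byte CPU work, then 'interrupts' the CPU when the transfer is done."""
    memory[dst:dst + count] = memory[src:src + count]   # the block transfer
    interrupt_cpu()                                     # completion interrupt
```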
APPLICATION I/O INTERFACE
User application access to a wide variety of different devices is accomplished
through layering, and through encapsulating all of the device-specific code
into device drivers, while application layers are presented with a common
interface for all devices.
Devices differ along many dimensions:
Character stream or block - a character-stream device transfers bytes one by one, while a block
device transfers a block of bytes as a unit.
Sequential or random access - a sequential device transfers data in a fixed order, while a
random-access device can seek to any data location.
Synchronous or asynchronous - a synchronous device performs data transfers with predictable
response times; an asynchronous device does not.
Sharable or dedicated - a sharable device can be used concurrently by several processes; a
dedicated device cannot.
Device speed - speeds range from a few bytes per second to gigabytes per second.
I/O direction - devices perform input only, output only, or both.
Most devices can be characterized as either block I/O, character I/O, memory
mapped file access, or network sockets. A few devices are special, such as
time-of-day clock and the system timer.
Block and Character Devices:
Block devices are accessed a block at a time. Commands supported
include read( ), write( ), and seek( ).
Accessing blocks on a hard drive directly ( without going through the
file system structure ) is called raw I/O.
Memory-mapped file I/O can be layered on top of block-device drivers.
o Rather than reading in the entire file, it is mapped to a range of
memory addresses, and then paged into memory as needed using
the virtual memory system.
Character-stream devices are accessed one byte at a time, e.g., the keyboard.
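The memory-mapped file access described above is exposed directly by Python's mmap module: the file is mapped into the address space and accessed through ordinary indexing, with the OS paging contents in on demand. A minimal sketch using a temporary file:

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")          # put some contents in the file

with mmap.mmap(fd, 0) as m:           # map the whole file into memory
    first = bytes(m[:5])              # reads look like ordinary memory accesses
    m[:5] = b"HELLO"                  # writes go back to the file's pages

os.close(fd)
with open(path, "rb") as f:
    data = f.read()                   # the mapped writes are in the file
os.remove(path)
```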
Network Devices
Because network access is inherently different from local disk access,
most systems provide a separate interface for network devices.
One common and popular interface is the socket interface, which acts
like a cable or pipeline connecting two networked entities. Data can be
put into the socket at one end, and read out sequentially at the other end.
Sockets are normally full-duplex, allowing for bi-directional data
transfer.
The select( ) system call allows servers ( or other applications ) to
identify sockets which have data waiting, without having to poll all
available sockets.
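The idea behind select( ) can be shown with a connected socket pair standing in for two networked endpoints: it reports which descriptors have data waiting without reading from, or polling, each one.

```python
import select
import socket

def ready_for_reading(socks, timeout=0.0):
    """Return the sockets that have data waiting, without polling each one."""
    readable, _, _ = select.select(socks, [], [], timeout)
    return readable

# A connected pair: data written to b appears on a.
a, b = socket.socketpair()
b.sendall(b"ping")
```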
Clocks and Timers
Three types of time services are commonly needed in modern systems:
o Get the current time of day.
o Get the elapsed time since a previous event.
o Set a timer to trigger event X at time T.
Unfortunately, time operations are not standard across all systems.
A programmable interval timer ( PIT ) can be used to trigger operations
and to measure elapsed time. It can be set to trigger an interrupt at a
specific future time, or to trigger interrupts periodically on a regular
basis.
On most systems the system clock is implemented by counting interrupts
generated by the PIT.
An alternate approach is to provide direct access to a high frequency
hardware counter, which provides much higher resolution and accuracy,
but which does not support interrupts.
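The three time services map onto standard Python calls: time.time( ) for the time of day, time.monotonic( ) for elapsed time (it reads a high-resolution counter that never jumps backward), and threading.Timer as a stand-in for "trigger event X after time T". The 0.01 and 0.02 second intervals are arbitrary.

```python
import threading
import time

wall_clock = time.time()          # 1. current time of day (seconds since epoch)

t0 = time.monotonic()             # 2. elapsed time since a previous event,
time.sleep(0.01)                  #    via the monotonic hardware counter
elapsed = time.monotonic() - t0

fired = []                        # 3. trigger event X after (relative) time T
timer = threading.Timer(0.02, lambda: fired.append("X"))
timer.start()
timer.join()                      # wait only so this example is deterministic
```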
Blocking and Non-blocking I/O
With blocking I/O a process is moved to the wait queue when an I/O request is
made, and moved back to the ready queue when the request completes,
allowing other processes to run in the meantime.
With non-blocking I/O the I/O request returns immediately, whether the
requested I/O operation has ( completely ) occurred or not. This allows the
process to check for available data without getting hung completely if it is not
there.
One approach for programmers to implement non-blocking I/O is to have a
multi-threaded application, in which one thread makes blocking I/O calls ( say
to read a keyboard or mouse ), while other threads continue to update the
screen or perform other tasks.
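On a Unix-like system, the non-blocking behavior described above comes down to one flag on the file descriptor: once the descriptor is made non-blocking (here via os.set_blocking on a pipe), a read with no data available raises BlockingIOError immediately instead of putting the caller to sleep.

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)       # make the read end non-blocking

def try_read(fd, n=64):
    """Return available data, or None immediately if there is none yet."""
    try:
        return os.read(fd, n)
    except BlockingIOError:     # EAGAIN: nothing to read right now
        return None

nothing_yet = try_read(r)       # no data: returns None instead of hanging
os.write(w, b"hi")
got = try_read(r)               # data waiting: returned right away
```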
Kernel I/O Subsystem
I/O Scheduling
Scheduling I/O requests can greatly improve overall efficiency. Priorities can also play a part in
request scheduling.
On systems with many devices, separate request queues are often kept for each device.
Buffering
Buffering of I/O is performed for ( at least ) three major reasons:
Speed differences between two devices. A slow device may write data into a buffer, and when
the buffer is full, the entire buffer is sent to the fast device all at once. So that the slow device
still has somewhere to write while this is going on, a second buffer is used, and the two buffers
alternate as each becomes full. This is known as double buffering. ( Double buffering is often
used in ( animated ) graphics, so that one screen image can be generated in a buffer while the
other ( completed ) buffer is displayed on the screen. This prevents the user from ever seeing
any half-finished screen images. )
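The buffer-swapping logic can be sketched sequentially as below. A real implementation overlaps the two buffers in time (the consumer drains one while the producer fills the other); the function name and the 4-byte buffer size here are arbitrary choices for illustration.

```python
def double_buffered_copy(stream, consume, buf_size=4):
    """Toy double buffering: the 'slow' writer fills one buffer while the
    'fast' consumer drains the other; the buffers alternate as each fills."""
    buffers = [bytearray(), bytearray()]
    active = 0                                  # the buffer being filled
    for byte in stream:
        buffers[active].append(byte)
        if len(buffers[active]) == buf_size:
            full, active = active, 1 - active   # swap roles
            consume(bytes(buffers[full]))       # drain the full buffer
            buffers[full].clear()
    if buffers[active]:
        consume(bytes(buffers[active]))         # flush a final partial buffer
```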
Data transfer size differences. Buffers are used in particular in networking systems to break
messages up into smaller packets for transfer, and then for re-assembly at the receiving side.
To support copy semantics. For example, when an application makes a request for a disk write,
the data is copied from the user's memory area into a kernel buffer. The application can then
change its copy of the data, but the data that eventually gets written out to disk is the version
that existed at the time the write request was made.
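Copy semantics is easy to demonstrate: snapshot the caller's data at request time, and later changes by the caller no longer affect what gets "written". The function name is illustrative.

```python
def write_request(user_buffer):
    """Sketch of copy semantics: the 'kernel' copies the data at request
    time, so it is insulated from later changes by the application."""
    kernel_buffer = bytes(user_buffer)   # a copy, not a reference
    return kernel_buffer

user_data = bytearray(b"version-1")
snapshot = write_request(user_data)
user_data[:] = b"version-2"              # the application changes its copy
```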
Caching
Caching involves keeping a copy of data in a faster-access location than where the data
is normally stored.
Buffering and caching are very similar, except that a buffer may hold the only copy of a
given data item, whereas a cache is just a duplicate copy of some other data stored
elsewhere.
Spooling and Device Reservation
A spool ( Simultaneous Peripheral Operations On-Line ) buffers data for ( peripheral )
devices such as printers.
If multiple processes want to print at the same time, they each send their print data to
files stored in the spool directory. When each file is closed, then the application sees
that print job as complete, and the print scheduler sends each file to the appropriate
printer one at a time.
Support is provided for viewing the spool queues, removing jobs from the queues, and
moving jobs from one queue to another.
Error Handling
I/O requests can fail for many reasons, such as a buffer overflow or a disk crash.
I/O requests usually return an error bit ( or more ) indicating the problem.
Some devices, such as SCSI devices, are capable of providing much more detailed
information about errors, and even keep an on-board error log that can be requested by
the host.
Kernel Data Structures
The kernel maintains a number of important data structures pertaining to the I/O
system, such as the open file table.
These structures are object-oriented and flexible enough to allow access to a wide variety
of I/O devices through a common interface.
Transforming I/O Requests to Hardware Operations
Users request data using file names, which must ultimately be mapped to
specific blocks of data from a specific device managed by a specific device
driver.
o Each device file has a major and minor number associated with it,
stored and displayed where the file size would normally go.
o The major number is an index into a table of device drivers, and
indicates which device driver handles this device. ( E.g. the disk drive
handler. )
o The minor number is a parameter passed to the device driver, and
indicates which specific device is to be accessed, out of the many which
may be handled by a particular device driver. ( e.g. a particular disk
drive or partition. )
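On a Unix system, os.makedev, os.major, and os.minor perform exactly this combination and split of device numbers. The values 8 and 1 below are only an example (on Linux, major 8 conventionally belongs to the sd disk driver, and minor 1 would select its first partition).

```python
import os

dev = os.makedev(8, 1)         # combine a major and a minor number into a dev_t
driver_index = os.major(dev)   # index into the table of device drivers
device_param = os.minor(dev)   # which specific device that driver should access
```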
Life Cycle of an I/O request