Prefetch input queue

Fetching instruction opcodes from program memory well in advance is known as prefetching, and it is served by a prefetch input queue (PIQ). The prefetched instructions are stored in a data structure, namely a queue. Fetching opcodes well before they are needed for execution increases the overall efficiency of the processor and boosts its speed: the processor no longer has to wait for the memory access for the next instruction opcode to complete. This architecture was prominently used in the Intel 8086 microprocessor.

Introduction

Pipelining was brought to the forefront of computer architecture design during the 1960s by the need for faster and more efficient computing. Pipelining is the broader concept: most modern processors load instructions several clock cycles before executing them. This is achieved by pre-loading machine code from memory into a prefetch input queue.

This behavior only applies to von Neumann computers (that is, not Harvard architecture computers) that can run self-modifying code and have some sort of instruction pipelining. Nearly all modern high-performance computers fulfill these three requirements.

Usually, the prefetching behavior of the PIQ is invisible to the programming model of the CPU. However, there are some circumstances where the behavior of the PIQ is visible and needs to be taken into account by the programmer.

When an x86 processor switches from real mode to protected mode or vice versa, the PIQ has to be flushed, or else the CPU will continue to decode machine code as if it were written for the previous mode. If the PIQ is not flushed, the processor might decode the code incorrectly and generate an invalid instruction exception.
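A concrete illustration: on a 386-class or later CPU, the documented way to enter protected mode ends with a far jump precisely to force this flush. The NASM-syntax fragment below is only a minimal sketch, not complete mode-switch code; it assumes a GDT has already been loaded with lgdt, and CODE16_SEL and pm_entry are hypothetical names for a 16-bit code-segment selector and its entry label.

switch_to_protected_mode:
    cli                          ; no interrupts while the mode changes
    mov  eax, cr0
    or   eax, 1                  ; set the PE (Protection Enable) bit
    mov  cr0, eax                ; the CPU is now in protected mode, but the PIQ
                                 ; still holds bytes fetched under real-mode rules
    jmp  CODE16_SEL:pm_entry     ; far jump: flushes the PIQ and reloads CS

pm_entry:
    ; execution continues here with a freshly filled prefetch queue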

When executing self-modifying code, a change to the code immediately ahead of the current point of execution might not change how the processor interprets it, because the old bytes are already loaded into the PIQ. The processor simply executes its old copy from the PIQ instead of the new, altered version of the code in RAM and/or cache.

This behavior of the PIQ can be used to determine if code is being executed inside an emulator or directly on the hardware of a real CPU. Most emulators will probably never simulate this behavior. If the PIQ-size is zero (changes in the code always affect the state of the processor immediately), it can be deduced that either the code is being executed in an emulator or the processor invalidates the PIQ upon writes to addresses loaded in the PIQ.

Performance evaluation based on queuing theory

It was A. K. Erlang (1878-1929) who first conceived of a queue as a solution to congestion in telephone traffic. Different queueing models have been proposed to approximate real queueing systems so that they can be analysed mathematically for various performance specifications.

Queuing models can be represented using Kendall's notation:

A1/A2/A3/A4

where:

  • A1 is the distribution of time between two arrivals
  • A2 is the service time distribution
  • A3 is the total number of servers
  • A4 is the capacity of the system
  1. M/M/1 Model (Single Queue, Single Server / Markovian): In this model, elements of the queue are served on a first-come, first-served basis. The mean arrival and service rates are given, but the actual rates vary randomly around these averages and therefore have to be described by a cumulative probability distribution function.
  2. M/M/r Model: This model is a generalization of the basic M/M/1 model in which multiple servers operate in parallel. It can also model scenarios with impatient users who leave the queue immediately if they are not receiving service; this can be modeled as a Bernoulli process with only two states, success and failure. A typical example of this model is the ordinary land-line telephone system.
  3. M/G/1 Model (Takács' finite input model): This model is used to analyse more advanced cases, where the service time distribution is no longer a Markov process. It considers, for example, the case of more than one failed machine being repaired by a single repairman; the service time for any user then increases.

In applications like the prefetch input queue, the M/M/1 model is generally used because only limited queue features are needed. Mapped onto the microprocessor, the execution unit takes the role of the user and the bus interface unit takes the role of the server.
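In Kendall's notation this corresponds to the M/M/1 queue: Markovian (exponential) inter-arrival times, Markovian service times, one server, unbounded capacity. As a rough illustration rather than an exact model of any particular processor, the standard steady-state results for an M/M/1 queue can be read with \lambda as the mean rate at which fetch requests arrive and \mu as the mean rate at which the bus interface unit services them (assuming \lambda < \mu):

    \rho = \frac{\lambda}{\mu}, \qquad L = \frac{\rho}{1 - \rho}, \qquad W = \frac{1}{\mu - \lambda}

where \rho is the utilization of the bus interface unit, L the mean number of outstanding requests, and W the mean time a request spends in the system; Little's law, L = \lambda W, ties the two together.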

Instruction queue

The processor executes a program by fetching instructions from memory and executing them. Usually the processor's execution speed is much faster than its memory access speed. The instruction queue is used to prefetch the next instructions into a separate buffer while the processor is executing the current instruction.

With a four-stage pipeline, the rate at which instructions are executed can ideally approach four times that of sequential execution (see the calculation below).
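This figure is an idealized upper bound. Under the usual textbook assumptions (each stage takes one clock cycle, one instruction enters the pipeline per cycle, and there are no stalls), executing n instructions on a k-stage pipeline takes k + n - 1 cycles instead of the n k cycles needed without pipelining, so the speedup is

    S = \frac{n k}{k + n - 1} \longrightarrow k \quad \text{as } n \to \infty

A four-stage pipeline therefore approaches, but never quite reaches, a factor of four; branches, memory waits and other hazards reduce it further.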

The processor usually has two separate units for fetching the instructions and for executing the instructions.

The implementation of a pipeline architecture is possible only if the bus interface unit and the execution unit are independent. While the execution unit is decoding or executing an instruction which does not require the use of the data and address buses, the bus interface unit fetches instruction opcodes from the memory.

This process is much faster than sending out an address, reading the opcode and then decoding and executing it. Fetching the next instruction while the current instruction is being decoded or executed is called pipelining.

The 8086 architecture has a six-byte prefetch instruction queue. While the execution unit is executing the current instruction, the bus interface unit reads up to six bytes of opcodes in advance from memory. A six-byte queue was chosen because the longest 8086 instruction is six bytes.

An exception to this scheme occurs when the execution unit encounters a branch instruction, i.e. a jump or a call. In that case the entire queue must be discarded and the contents pointed to by the instruction pointer must be fetched from memory.

Drawbacks

Processors implementing the instruction queue prefetch algorithm are technically more advanced. The design complexity of such processors is much higher than that of regular processors, primarily because of the need to implement two separate units, the BIU and the EU, operating in parallel.

As the complexity of these chips increases, so does the cost. These processors are relatively costlier than their counterparts without the prefetch input queue.

However, these disadvantages are greatly offset by the improvement in processor execution time. After the introduction of the prefetch instruction queue in the 8086 processor, all subsequent x86 processors have incorporated this feature.

x86 example code

code_starts_here:
    mov  bx, ahead
    mov  word ptr cs:[bx], 9090h

ahead:
    jmp  near to_the_end

    ; Some other code

to_the_end:

This self-modifying program overwrites the jmp near to_the_end with two NOPs (encoded as the word 9090h). The jump jmp near to_the_end is assembled into two bytes of machine code, so the two NOPs overwrite exactly this jump and nothing else; that is, the jump is replaced with do-nothing code.

Because the machine code of the jump has already been read into the PIQ, and probably also already executed by the processor (superscalar processors execute several instructions at once, but they "pretend" that they don't because of the need for backward compatibility), the change to the code will not alter the execution flow.

Example program to detect size

This is an example NASM-syntax self-modifying x86-assembly language algorithm that determines the size of the PIQ:

code_starts_here:
    xor  bx, bx                     ; zero register bx
    xor  ax, ax                     ; zero register ax

    mov  dx, cs
    mov  [code_segment], dx         ; "calculate" the code segment for the far jump below

around:
    cmp  ax, 1                      ; check if ax has been altered
    je   found_size

    mov  byte [nop_field+bx], 0x90  ; 0x90 = opcode "nop" (NO oPeration)
    inc  bx

    db   0xEA                       ; 0xEA = opcode "far jump"
    dw   flush_queue                ; should be followed by the offset (rm = "dw", pm = "dd")
code_segment:
    dw   0                          ; and then the code segment (calculated above)

flush_queue:
    mov  byte [nop_field+bx], 0x40  ; 0x40 = opcode "inc ax" (INCrease ax)

nop_field:
    times 256 nop

    jmp  around

found_size:
    ; register bx now contains the size of the PIQ.
    ; This code is written for real mode and 16-bit protected mode, but it could
    ; easily be changed to run in 32-bit protected mode as well: change the "dw"
    ; for the offset to "dd", and change dx to edx at the top.
    ; (dw and dx = 16-bit addressing, dd and edx = 32-bit addressing)

What this code does, essentially, is change the execution flow and determine by brute force how large the PIQ is: "How far away do I have to change the code in front of me for it to affect me?" If the change is too near (already in the PIQ), the update has no effect. If it is far enough away, the change affects the program, and the program has then found the size of the processor's PIQ. Note that if this code is executed under a multitasking OS, a context switch may lead to a wrong value.
