Pipelining

First, RISC designers were concerned above all with producing the fastest possible chip, so they adopted a number of techniques, including pipelining. Pipelining is a hardware technique that lets the computer work on several instructions at once, rather than waiting for one instruction to complete before starting the next. Remember our four-stage model of the CISC machine? The fetch, decode, execute, and write-back stages also exist on a RISC machine, but the stages operate in parallel: as soon as one stage finishes with an instruction, it passes its result to the next stage and begins work on the next instruction. As the figure shows, the performance of a pipelined system depends on the time any single stage takes to complete, not on the sum of all the stages as in a non-pipelined design. In a typical pipelined RISC design, each instruction spends one clock cycle in each stage, so the processor can accept a new instruction every cycle. Pipelining does not improve instruction latency (the time a single instruction takes from start to finish), but it does improve overall throughput.

As with the CISC computer, the ideal cannot always be achieved. Sometimes a pipelined stage takes more than one clock cycle to complete. When that happens, the processor stalls: it stops accepting new instructions until the slow instruction has moved on to the next stage. Since the processor sits idle during a stall, system designers and programmers make a conscious effort to avoid stalls.
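The throughput argument above can be made concrete with a little arithmetic. The sketch below assumes the four-stage model (fetch, decode, execute, write-back) and an ideal pipeline with no stalls; it is an illustration, not a model of any particular processor.

```python
STAGES = 4  # fetch, decode, execute, write-back

def cycles_unpipelined(n_instructions: int) -> int:
    # Each instruction passes through every stage before the next one starts.
    return n_instructions * STAGES

def cycles_pipelined(n_instructions: int) -> int:
    # The first instruction fills the pipeline (STAGES cycles); after that,
    # one instruction completes every cycle.
    return STAGES + (n_instructions - 1)

if __name__ == "__main__":
    for n in (1, 4, 100):
        print(n, cycles_unpipelined(n), cycles_pipelined(n))
```

Note that the latency of a single instruction is unchanged: with `n = 1` both functions return 4 cycles. Only the aggregate throughput improves, and for large instruction counts the pipelined total approaches one cycle per instruction.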
To do this, designers use a number of techniques, described in the sections that follow.

Performance problems in pipelined systems

A pipelined processor can stall for a number of reasons, including delays in reading instructions or data from memory, poorly designed instructions, and dependencies between instructions. Let us consider how system and chip designers address each of these problems.

Memory speed

The memory-speed problem is usually solved with a cache. A cache is a section of fast memory placed between the processor and slower main memory. When the processor reads a location in main memory, that location is also copied into the cache; subsequent references to the same location can then be answered from the cache, much faster than from main memory. The cache does present one problem to system designers and programmers: the coherency problem, which arises when the processor writes a value and the result goes into the cache rather than directly into main memory. Special hardware (often part of the processor itself) must therefore write the cached information back to main memory before some other device reads that location, or before that part of the cache is reused for different information.

Instruction latency

Poorly designed instructions can also cause a pipelined processor to stall frequently. Some of the more common problem areas are:

- Complex instruction encodings, such as the CISC practice of encoding operands in memory, which require extra decoding work.
- Variable-length instructions, which force the processor to read several words of memory before it knows the length of the entire instruction.
- Instructions that access memory (rather than only registers), because memory can be slow.
- Complex instructions that require multiple clock cycles to execute (floating-point multiply operations, for example).
- Instructions that need to read and write the same register. For example, "add 3 to register 5" must read the value of register 5, add 3 to it, and then write the result back to register 5. (A register can also be "busy" from an earlier instruction that has not yet written its result, causing the processor to stall until the register becomes available.)
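The benefit of the cache described above comes from repeated references to the same location. The toy model below is a sketch under assumed costs (1 cycle for a cache hit, 100 cycles for a main-memory access) with an unbounded, fully associative cache; real caches have fixed sizes, line granularity, and replacement policies.

```python
HIT_COST, MISS_COST = 1, 100  # assumed access costs in cycles

class Cache:
    """Toy read-only cache sitting between the processor and slow memory."""

    def __init__(self, memory):
        self.memory = memory  # backing store: address -> value
        self.lines = {}       # cached copies: address -> value
        self.cycles = 0       # accumulated access cost

    def read(self, addr):
        if addr in self.lines:
            # Hit: answer from the fast cache.
            self.cycles += HIT_COST
        else:
            # Miss: fetch from slow main memory and fill the cache line.
            self.cycles += MISS_COST
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]
```

Reading the same address twice costs 100 + 1 = 101 cycles instead of 200; the second reference is served entirely from the cache.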
- Dependence on single-point resources, such as a condition code register.
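The "busy register" case in the list above is a read-after-write hazard, and a simple pass over an instruction stream can count the stalls it causes. This is a minimal sketch, assuming hypothetical three-operand instruction tuples `(dest, src1, src2)` and an assumed two-cycle delay before a written register becomes readable.

```python
def stalls(program, write_latency=2):
    """Count stall cycles caused by reading a register whose pending
    write (from an earlier instruction) has not yet completed."""
    pending = {}  # register -> cycle at which its new value is available
    cycle = total = 0
    for dest, src1, src2 in program:
        # The instruction cannot issue until both source registers are ready.
        ready = max(pending.get(src1, 0), pending.get(src2, 0))
        if ready > cycle:
            total += ready - cycle  # processor sits idle: a stall
            cycle = ready
        pending[dest] = cycle + write_latency
        cycle += 1  # one instruction issues per cycle when not stalled
    return total
```

A back-to-back dependent pair such as `[(1, 0, 0), (2, 1, 0)]` (the second instruction reads register 1, which the first is still writing) stalls for one cycle, while two independent instructions incur no stall; this is why compilers for pipelined machines try to schedule unrelated instructions between a write and the read that depends on it.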
