Most modern desktop and server CPUs have at least three independent caches: an instruction cache to speed up executable instruction fetch, a data cache to speed up data fetch and store, and a translation lookaside buffer (TLB) used to speed up virtual-to-physical address translation for both executable instructions and data.
Cache Entries
Data is transferred between main memory and the cache in fixed-size blocks called cache "lines". When a line is copied from memory into the cache, a cache entry is created. The entry includes:
- the requested memory location (now called a tag)
- a copy of the data
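As a minimal sketch, a cache entry can be modeled as a tag, a copy of the line's data, and a valid bit (the field names and the valid flag are illustrative conventions, not from any particular hardware):

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    valid: bool = False   # does this entry currently hold real data?
    tag: int = 0          # high-order address bits identifying the memory block
    data: bytes = b""     # copy of one cache line's worth of memory
```

Real hardware also stores status bits (for example a dirty bit for write-back caches), omitted here for brevity.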
When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache. The cache checks the contents of any cache lines that might contain that address. If the processor finds that the memory location is in the cache, a cache hit has occurred; otherwise, a cache miss.
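The hit/miss check can be sketched for a direct-mapped cache, where each address maps to exactly one line. The line size, line count, and index/tag split below are illustrative assumptions:

```python
LINE_SIZE = 64      # bytes per cache line (assumed)
NUM_LINES = 256     # number of lines in the cache (assumed)

# Each line holds (valid, tag); everything is invalid at start.
lines = [(False, 0)] * NUM_LINES

def lookup(address):
    """Return True on a cache hit, False on a miss (filling the line)."""
    block = address // LINE_SIZE       # which memory block is this?
    index = block % NUM_LINES          # the one line that can hold it
    tag = block // NUM_LINES           # remaining bits identify the block
    valid, stored_tag = lines[index]
    if valid and stored_tag == tag:
        return True                    # cache hit
    lines[index] = (True, tag)         # miss: fetch from memory, fill line
    return False
```

A second access to the same line then hits: `lookup(0x1000)` is False the first time and True the second, and any address within the same 64-byte line also hits.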
A cache miss refers to a failed attempt to read or write a piece of data in the cache, which results in a main memory access with much longer latency.
There are three kinds of cache misses: instruction read miss, data read miss, and data write miss.
A cache read miss from an instruction cache generally causes the most delay, because the processor, or at least the thread of execution, has to wait (stall) until the instruction is fetched from main memory.
A cache read miss from a data cache usually causes less delay, because instructions that do not depend on the cache read can be issued and continue executing until the data is returned from main memory, after which the dependent instructions can resume execution.
A cache write miss to a data cache generally causes the least delay, because the write can be queued and there are few limitations on the execution of subsequent instructions. The processor can continue until the queue is full.
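The three miss kinds can be illustrated with a toy simulator that runs a short access trace against deliberately tiny direct-mapped instruction and data caches. The cache geometry, the trace, and the write-allocate policy (a write miss also fills the line) are all illustrative assumptions:

```python
LINE_SIZE, NUM_LINES = 64, 4          # deliberately tiny cache (assumed sizes)
icache = [None] * NUM_LINES           # instruction cache: stored tag per line
dcache = [None] * NUM_LINES           # data cache: stored tag per line
misses = {"inst_read": 0, "data_read": 0, "data_write": 0}

def access(kind, address):
    """Record a hit or miss for one access; a miss fills the line."""
    cache = icache if kind == "inst_read" else dcache
    block = address // LINE_SIZE
    index, tag = block % NUM_LINES, block // NUM_LINES
    if cache[index] != tag:           # miss: count it and fetch the line
        misses[kind] += 1
        cache[index] = tag

# A toy trace: fetch two instructions, read data, write data, twice over.
for _ in range(2):
    access("inst_read", 0x0)
    access("inst_read", 0x40)
    access("data_read", 0x1000)
    access("data_write", 0x1004)
```

Only the first touch of each line misses: the write to 0x1004 hits because the earlier read of 0x1000 already brought that line into the data cache, and the second pass over the trace hits everywhere.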
Reference:
http://en.wikipedia.org/wiki/CPU_cache#Cache_miss