4 research outputs found

    SAMIE-LSQ: set-associative multiple-instruction entry load/store queue

    Get PDF
    The load/store queue (LSQ) is one of the most complex parts of contemporary processors. Its latency is critical for processor performance and it is usually one of the processor hotspots. This paper presents a highly banked, set-associative, multiple-instruction entry LSQ (SAMIE-LSQ) that achieves high performance with small energy requirements. The SAMIE-LSQ classifies the memory instructions (loads and stores) based on the address to be accessed, and groups those instructions accessing the same cache line in the same entry. Our approach relies on the fact that many in-flight memory instructions access the same cache lines. Each SAMIE-LSQ entry has space for several memory instructions accessing the same cache line. This arrangement has a number of advantages. First, it significantly reduces the address comparison activity needed for memory disambiguation since there are fewer addresses to be compared. It also reduces the activity in the data TLB, the cache tag and cache data arrays. This is achieved by caching the cache line location and address translation in the corresponding SAMIE-LSQ entry once the access of one of the instructions in an entry is performed, so instructions that share an entry can reuse the translation, avoid the tag check and get the data directly from the concrete cache way without checking the others. Besides, the delay of the proposed scheme is lower than that required by a conventional LSQ. We show that the SAMIE-LSQ saves 82% dynamic energy for the load/store queue, 42% for the L1 data cache and 73% for the data TLB, with a negligible impact on performance (0.6%). Peer Reviewed. Postprint (published version)
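The grouping idea above can be illustrated with a small behavioral model. This is a sketch, not the paper's hardware design: it only shows how binning in-flight memory operations by cache-line address shrinks the number of address comparisons needed for disambiguation. The names (`LINE_BYTES`, `SamieEntry`, `SamieLSQ`) and the line size are illustrative assumptions.

```python
LINE_BYTES = 64  # assumed cache-line size for the sketch


class SamieEntry:
    """One SAMIE-LSQ-style entry: all in-flight loads/stores to one cache line."""

    def __init__(self, line_addr):
        self.line_addr = line_addr   # shared address tag for the whole entry
        self.ops = []                # (kind, addr) pairs that share this line
        self.translation = None      # cached TLB result, filled on first access


class SamieLSQ:
    def __init__(self):
        self.entries = {}            # line_addr -> SamieEntry

    def insert(self, kind, addr):
        line = addr // LINE_BYTES
        entry = self.entries.setdefault(line, SamieEntry(line))
        entry.ops.append((kind, addr))
        return entry

    def disambiguation_compares(self):
        # A conventional LSQ compares one address per instruction;
        # here only one comparison per entry (per cache line) is needed.
        return len(self.entries)


lsq = SamieLSQ()
for a in (0x100, 0x108, 0x110, 0x200):  # the first three share one cache line
    lsq.insert("load", a)
print(lsq.disambiguation_compares())     # 2 entries instead of 4 comparisons
```

The point of the sketch is the ratio: four in-flight loads collapse into two entries, so disambiguation activity scales with distinct cache lines rather than instructions.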

    Reducing Data Cache Energy Consumption via Cached Load/Store Queue

    Get PDF
    High-performance processors use a large set-associative L1 data cache with multiple ports. As clock speeds and size increase, such a cache consumes a significant percentage of the total processor energy. This paper proposes a method of saving energy by reducing the number of data cache accesses. It does so by modifying the Load/Store Queue design to allow "caching" of previously accessed data values on both loads and stores after the corresponding memory access instruction has been committed. It is shown that a 32-entry modified LSQ design allows an average of 38.5% of the loads in the SpecINT95 benchmarks and 18.9% in the SpecFP95 benchmarks to get their data from the LSQ. The reduction in the number of L1 cache accesses results in up to a 40% reduction in the L1 data cache energy consumption and in up to a 16% improvement in the energy-delay product, while requiring almost no additional hardware or complex control logic.
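The mechanism described above can be sketched behaviorally: committed loads and stores leave their value in the queue, and a later load to the same address is served from the queue instead of the L1 data cache. This is a minimal illustrative model with assumed names (`CachedLSQ`, a FIFO eviction policy), not the paper's actual design.

```python
class CachedLSQ:
    """Toy model of an LSQ that keeps committed values to filter L1 accesses."""

    def __init__(self, capacity=32):
        self.capacity = capacity
        self.entries = {}        # addr -> value, retained after commit
        self.l1_accesses = 0     # counts accesses the LSQ could not absorb

    def commit(self, addr, value):
        # "Cache" the value of a committed load or store in its entry.
        if addr not in self.entries and len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # simple FIFO eviction
        self.entries[addr] = value

    def load(self, addr, l1):
        if addr in self.entries:     # hit: no L1 data-cache access needed
            return self.entries[addr]
        self.l1_accesses += 1        # miss: go to the L1 data cache
        value = l1[addr]
        self.commit(addr, value)
        return value


l1 = {0x40: 7, 0x44: 9}
lsq = CachedLSQ()
lsq.commit(0x40, 7)                  # a committed store leaves its value behind
print(lsq.load(0x40, l1), lsq.l1_accesses)  # served from the LSQ: 7 0
```

In the paper's terms, every load that hits in `entries` is an L1 data-cache access (and its energy) avoided; the reported 38.5%/18.9% hit fractions come from exactly this kind of filtering.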

    Architectural Improvements for Low-power Processor Cache

    Get PDF
    PhD thesis -- Seoul National University Graduate School, Dept. of Electrical and Computer Engineering, 2013. 8. Advisor: ์ „์ฃผ์‹.
    Microprocessor research aims to increase execution performance while reducing energy consumption. In most cases there is a trade-off between the two, so reducing energy consumption lowers execution performance. This thesis proposes two low-power methods based on architectural improvements to the processor cache: one that reduces dynamic energy without affecting execution performance, and one that combines several energy-reduction techniques, each of which individually has a significant performance impact, while minimizing their overhead.
    First, I propose the Selective Word Reading (SWR) technique, which reduces the dynamic energy of the processor cache with no performance loss. Motivated by the different storage-unit sizes at each level of the memory hierarchy, SWR uses part of the address during a cache access to activate and transfer only the necessary portion of a block. For a 32 kB four-way set-associative L1 cache with a 32 B block size and four mats per sub-bank, the SWR cache saves 67.5% of dynamic energy (56.75% when leakage energy is also considered) with no performance degradation and a negligible area change. For a 1 MB 16-way set-associative L2 cache with a 64 B block size and eight mats per sub-bank, it saves 27.1% of dynamic energy.
    Second, I propose the Sequential-SWR-Drowsy cache with a Word Filter (SSDF), which reduces the total energy of the processor cache by combining a sequential cache, selective word reading, a filter cache, and a drowsy cache while minimizing their combined overhead. The filter cache reduces dynamic energy by placing a small storage structure between the processor registers and the L1 cache. Unlike when the filter cache was first proposed, rising clock frequencies have increased the L1 cache access time in cycles, so a filter cache now yields a performance gain as well as an energy saving. This in turn makes it possible to add techniques such as the sequential cache and the drowsy cache, which were previously avoided because of their performance cost. The sequential cache delays the data-array access until the tag array has determined whether the access hits: the cache access time grows by the tag-array access time, but only the hit way needs to be driven, reducing the dynamic energy of the data array. When combined with the filter cache, accessing the relatively low-power tag array in parallel with the filter cache hides the tag-array access time that penalizes a conventional sequential cache. The drowsy cache supplies SRAM cells with two operating voltages, a normal mode (high voltage) and a drowsy mode (low voltage), and lowers the supply voltage of rarely accessed cells to reduce the cache's static energy; accessing a cell in drowsy mode requires raising its voltage back to the normal level, which adds access time. This thesis issues the wake-up signal in parallel with the filter-cache and L1-tag-array accesses, preventing the performance loss that a conventional drowsy cache incurs.
    Simulating the drowsy cache, filter cache, sequential cache, and selective word reading together, the SSDF cache saves 73.4% of dynamic energy, 83.2% of static energy, and 71.7% of the total processor-cache energy. In summary, the extra latency introduced by the drowsy cache for static-energy reduction is hidden efficiently by the filter cache and the sequential cache, and the selective word reading technique, which exploits the difference in storage-unit sizes, is implemented on top, yielding a low-power processor design.

    Computer Science & Technology Series : XVI Argentine Congress of Computer Science - Selected papers

    Get PDF
    CACIC'10 was the sixteenth Congress in the CACIC series. It was organized by the School of Computer Science of the University of Moron. The Congress included 10 Workshops with 104 accepted papers, 1 main Conference, 4 invited tutorials, different meetings related with Computer Science Education (Professors, PhD students, Curricula) and an International School with 5 courses. (http://www.cacic2010.edu.ar/). CACIC 2010 was organized following the traditional Congress format, with 10 Workshops covering a diversity of dimensions of Computer Science Research. Each topic was supervised by a committee of three chairs of different Universities. The call for papers attracted a total of 195 submissions. An average of 2.6 review reports were collected for each paper, for a grand total of 507 review reports that involved about 300 different reviewers. A total of 104 full papers were accepted and 20 of them were selected for this book. Red de Universidades con Carreras en Informática (RedUNCI)