
    Scheduling in multiprocessor system using genetic algorithms

    Get PDF
    Multiprocessors have emerged as a powerful computing means for running real-time applications, especially where a uniprocessor system would not be sufficient to execute all the tasks. The high performance and reliability of multiprocessors have made them a powerful computing resource. Such a computing environment requires an efficient algorithm to determine when and on which processor a given task should execute. This paper investigates dynamic scheduling of real-time tasks in a multiprocessor system to obtain a feasible solution using genetic algorithms combined with well-known heuristics, such as 'Earliest Deadline First' and 'Shortest Computation Time First'. A comparative study of simulation results shows that genetic algorithms can be used to schedule tasks so that deadlines are met and, in turn, high processor utilization is obtained.
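
    As a hedged illustration (a sketch under assumptions of ours, not the paper's implementation), the two heuristics amount to orderings of the task list, and a natural genetic-algorithm fitness is the number of deadline misses when a chromosome's ordering is dispatched greedily onto the earliest-available of m processors:

        import heapq

        def missed_deadlines(schedule, m):
            """GA fitness sketch: dispatch tasks in chromosome order onto the
            earliest-available of m processors; count missed deadlines."""
            free = [0.0] * m              # next-free time of each processor
            heapq.heapify(free)
            misses = 0
            for task in schedule:
                start = heapq.heappop(free)        # earliest-available processor
                finish = start + task["wcet"]
                if finish > task["deadline"]:
                    misses += 1
                heapq.heappush(free, finish)
            return misses

        tasks = [{"wcet": 2.0, "deadline": 4.0},
                 {"wcet": 1.0, "deadline": 2.0},
                 {"wcet": 3.0, "deadline": 9.0}]
        edf = sorted(tasks, key=lambda t: t["deadline"])   # 'Earliest Deadline First' seed
        sctf = sorted(tasks, key=lambda t: t["wcet"])      # 'Shortest Computation Time First' seed
        print(missed_deadlines(edf, m=2), missed_deadlines(sctf, m=2))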

    Optimal rate-based scheduling on multiprocessors

    Get PDF
    The PD2 Pfair/ERfair scheduling algorithm is the most efficient known algorithm for optimally scheduling periodic tasks on multiprocessors. In this paper, we prove that PD2 is also optimal for scheduling "rate-based" tasks whose processing steps may be highly jittered. The rate-based task model we consider generalizes the widely-studied sporadic task model.
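
    For context, Pfair optimality results are usually stated in terms of subtask windows: the i-th subtask (i = 1, 2, ...) of a task with rational weight w = execution/period must execute within the slot window [floor((i-1)/w), ceil(i/w)). A minimal sketch of that window computation (PD2's extra tie-breaking information is omitted):

        from fractions import Fraction
        from math import ceil, floor

        def pfair_window(i, w):
            """Slot window of the i-th subtask (i >= 1) of a task whose
            rational weight is w = execution / period, with 0 < w <= 1."""
            release = floor((i - 1) / w)      # first slot the subtask may use
            deadline = ceil(i / w)            # slot by which it must finish
            return release, deadline

        w = Fraction(3, 7)                    # a task needing 3 slots out of every 7
        for i in range(1, 4):
            print(i, pfair_window(i, w))      # (0, 3), (2, 5), (4, 7)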

    Techniques Optimizing the Number of Processors to Schedule Multi-threaded Tasks

    Full text link
    These last years, we have witnessed a dramatic increase in the number of cores available in computational platforms. Concurrently, a new coding paradigm dividing tasks into smaller execution instances called threads was developed to take advantage of the inherent parallelism of multiprocessor platforms. However, only a few methods have been proposed to efficiently schedule hard real-time multi-threaded tasks on multiprocessors. In this paper, we propose techniques optimizing the number of processors needed to schedule such sporadic parallel tasks with constrained deadlines. We first define an optimization problem determining, for each thread, an intermediate (artificial) deadline minimizing the number of processors needed to schedule the whole task set. The scheduling algorithm can then schedule threads as if they were independent sequential sporadic tasks. The second contribution is an efficient and nevertheless optimal algorithm that can be executed online to determine the threads' deadlines. Hence, it can be used in dynamic systems where all tasks and their characteristics are not known a priori. We finally prove that our techniques achieve a resource augmentation bound of 2 when the threads are scheduled with algorithms such as U-EDF, PD2, LLREF, DP-Wrap, etc. © 2012 IEEE.
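
    The paper's optimal deadline-assignment algorithm is not reproduced here, but a hedged sketch of the simple proportional rule such optimizations are typically compared against shows what an intermediate (artificial) deadline is: each sequential piece of a task receives a slice of the task's constrained deadline proportional to its share of the work, after which each thread can be scheduled as an independent sporadic task.

        def proportional_deadlines(segment_wcets, task_deadline):
            """Illustrative only (not the paper's optimal algorithm): give each
            sequential segment an intermediate deadline proportional to its
            share of the total work."""
            total = sum(segment_wcets)
            deadlines, elapsed = [], 0.0
            for c in segment_wcets:
                elapsed += task_deadline * c / total
                deadlines.append(elapsed)     # offset from the task's release
            return deadlines

        # three segments of a task with constrained deadline 12
        print(proportional_deadlines([2.0, 1.0, 3.0], task_deadline=12.0))
        # [4.0, 6.0, 12.0]; each thread then runs as its own sporadic task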

    Fair-share and Energy-efficient Scheduling in Performance-asymmetric Multicore Architecture

    Get PDF
    Dissertation (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, 2016. 2. Seongsoo Hong. Asymmetric multicore architectures are increasingly used in embedded systems such as smartphones and tablets to deliver a superior user experience and qualitatively better services, because they offer hardware advantages under the tight area and power budgets of embedded systems. An asymmetric multicore architecture for embedded systems combines two core types with different characteristics: one type offers high performance but low energy efficiency, and the other offers high energy efficiency at the cost of lower performance. Such architectures are operated in one of two modes: (1) a core-type selection mode, in which one high-performance core and one energy-efficient core form a pair and only one core of each pair runs at a time, and (2) a full-core-use mode, in which all cores in the system can be used simultaneously. The Linux kernel is the most widely used operating system for asymmetric multicore architectures and schedules tasks with CFS (the completely fair scheduler), which provides scheduling frameworks for both modes. However, the current frameworks have the following problems. First, the framework for the core-type selection mode balances load across the asymmetric cores considering only task weights. As a result, it raises core operating frequencies more than necessary and runs tasks that an energy-efficient core could handle on high-performance cores, greatly increasing power consumption. Second, CFS grants tasks CPU time in proportion to their weights via virtual runtime, but when computing a task's virtual runtime it ignores the state of the core the task runs on (core type, operating frequency, and so on). Consequently, tasks do not receive a fair share of CPU time. Moreover, CFS tries to equalize the tasks' virtual runtimes in order to keep their relative progress similar; this holds on each individual core but not system-wide, so over time the virtual-runtime differences between tasks grow and the gap in their relative progress widens. This dissertation proposes scheduling techniques optimized for the two modes supported by asymmetric multicore architectures. First, it proposes a scheduling technique that meets the low-power goal of the core-type selection mode. To this end, the DVFS (Dynamic Voltage and Frequency Scaling) policy that the Linux kernel provides for asymmetric multicore architectures is analyzed precisely, its exact behavior is modeled, and the model is incorporated into the proposed technique. The technique balances load considering not only task weights but also core utilization, thereby suppressing unnecessary frequency increases while minimizing performance degradation and using the high-performance cores as little as possible. Second, the dissertation proposes a fair-share scheduling scheme suited to the full-core-use mode. It computes a scaled CPU time that reflects the core's state, incorporates it into CFS's virtual runtime, and extends the result into SVR (scaled virtual runtime). Within each cluster of high-performance cores and each cluster of energy-efficient cores, the SVR difference between any two tasks is bounded by a constant, keeping the relative progress of all tasks in a cluster similar and yielding system-wide fair-share scheduling. To demonstrate the effectiveness of the two techniques, they were implemented on commercial products based on ARM's big.LITTLE architecture, the most representative asymmetric multicore architecture in embedded systems. The low-power scheduling technique was implemented on a Galaxy S4 Android smartphone, which supports the core-type selection mode: with CPU-intensive workloads it reduced energy consumption by up to 11.35%, and with Android applications it reduced energy consumption by 7.35% while maintaining the same QoS as the baseline, demonstrating the energy efficiency of the proposed technique on asymmetric multicores. The fair-share scheduling technique was implemented on ARM's Versatile Express TC2 board, which supports the full-core-use mode: experiments confirmed that the SVR difference between tasks within a cluster stays within a constant bound, and, to see how a small SVR difference affects fair-share scheduling in practice, multiple instances of the same task were created and the standard deviation of their completion times measured, which fell by 56% compared to the baseline. These results show intuitively that the proposed scheduling technique grants tasks a fairer share of CPU time.
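
    A minimal sketch of the SVR idea, under our own simplifying assumption that core type and operating frequency can be folded into a single relative capacity: the CFS-style weight-scaled increment is additionally scaled by the core's capacity, so time spent on a slow or down-clocked core counts as proportionally less progress:

        NICE_0_WEIGHT = 1024                  # CFS reference weight

        def svr_delta(exec_ns, task_weight, core_capacity, max_capacity):
            """Scaled-virtual-runtime increment for exec_ns of wall-clock
            execution: CFS weight scaling times a core-state scaling factor
            (core type and operating frequency folded into capacity)."""
            cfs_scale = NICE_0_WEIGHT / task_weight
            core_scale = core_capacity / max_capacity
            return exec_ns * cfs_scale * core_scale

        # 10 ms on a LITTLE core at ~42% of big-core capacity is credited as
        # ~4.2 ms of progress, so the task keeps its fair system-wide share
        print(svr_delta(10_000_000, task_weight=1024,
                        core_capacity=430, max_capacity=1024))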

    System-wide Time vs. Density Tradeoff in Real-Time Multicore Fluid Scheduling

    Get PDF
    Dissertation (Ph.D.) -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, 2017. 8. Chang-Gun Lee. Recent parallel programming frameworks such as OpenCL and OpenMP allow us to enjoy the parallelization freedom for real-time tasks. The parallelization freedom creates the time vs. density tradeoff problem in fluid scheduling, i.e., more parallelization reduces thread execution times but increases the density. By system-widely exercising this tradeoff, this dissertation proposes a parameter tuning of real-time tasks aiming at maximizing the schedulability of multicore fluid scheduling. The experimental study by both simulation and actual implementation shows that the proposed approach well balances the time and the density, and results in up to 80% improvement of the schedulability.
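
    To make the tradeoff concrete, here is a hedged arithmetic sketch (an illustrative model with a per-thread overhead term, not the dissertation's task model): splitting work C across k threads shortens each thread's execution time toward C/k, while the task's density, the summed demand over its relative deadline D, grows with k:

        def thread_time(C, k, o):
            """Per-thread execution time when work C is split across k threads,
            with a per-thread parallelization overhead o (illustrative model)."""
            return C / k + o

        def density(C, k, o, D):
            """Total density of the parallelized task: k threads of length
            thread_time each, all due within the relative deadline D."""
            return k * thread_time(C, k, o) / D

        C, o, D = 8.0, 0.5, 4.0
        for k in (1, 2, 4, 8):
            print(k, thread_time(C, k, o), density(C, k, o, D))
        # execution time falls (8.5, 4.5, 2.5, 1.5)
        # while density rises (2.125, 2.25, 2.5, 3.0)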

    Task reweighting under global scheduling on multiprocessors

    Get PDF
    We consider schemes for enacting task share changes - a process called reweighting - on real-time multiprocessor platforms. Our particular focus is reweighting schemes that are deployed in environments in which tasks may frequently request significant share changes. Prior work has shown that fair scheduling algorithms are capable of reweighting tasks with minimal allocation error and that partitioning-based scheduling algorithms can reweight tasks with better average-case performance, but greater error. However, preemption and migration overheads can be high in fair schemes. In this paper, we consider the question of whether non-fair, earliest-deadline-first (EDF) global scheduling techniques can improve the accuracy of reweighting relative to partitioning-based schemes and provide improved average-case performance relative to fair-scheduled systems. Our conclusion is that, for soft real-time systems, global EDF schemes provide a good mix of accuracy and average-case performance.
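
    The allocation error mentioned above is conventionally measured in fair scheduling as lag: the ideal fluid allocation, obtained by integrating the task's time-varying share, minus the time the task actually received. A small sketch assuming piecewise-constant shares:

        def fluid_allocation(share_changes, t_end):
            """Ideal allocation up to t_end for a task whose share (utilization)
            is piecewise constant; share_changes is [(time, new_share), ...]
            starting at the task's join time."""
            alloc, (t_prev, share) = 0.0, share_changes[0]
            for t, new_share in share_changes[1:] + [(t_end, None)]:
                alloc += share * (t - t_prev)
                t_prev, share = t, new_share
            return alloc

        # a task runs at share 0.5, is reweighted to 0.2 at t = 4; by t = 10
        # the fluid schedule owes it 0.5 * 4 + 0.2 * 6 = 3.2 time units
        ideal = fluid_allocation([(0.0, 0.5), (4.0, 0.2)], t_end=10.0)
        actual = 3.0                          # time the real scheduler gave it
        print(ideal, ideal - actual)          # lag (allocation error) = 0.2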

    Tardiness bounds under global EDF scheduling on a multiprocessor

    Get PDF
    We consider the scheduling of a sporadic real-time task system on an identical multiprocessor. Though Pfair algorithms are theoretically optimal for such task systems, in practice, their runtime overheads can significantly reduce the amount of useful work that is accomplished. On the other hand, if all deadlines need to be met, then every known non-Pfair algorithm requires restrictions on total system utilization that can approach approximately 50% of the available processing capacity. This may be overkill for soft real-time systems, which can tolerate occasional or bounded deadline misses (i.e., bounded tardiness). In this paper we derive tardiness bounds under preemptive and non-preemptive global EDF when the total system utilization is not restricted, except that it may not exceed the available processing capacity. Hence, processor utilization can be improved for soft real-time systems on multiprocessors. Our tardiness bounds depend on the total system utilization and per-task utilizations and execution costs: the lower these values, the lower the tardiness bounds. As a final remark, we note that global EDF may be superior to partitioned EDF for multiprocessor-based soft real-time systems in that the latter does not offer any scope to improve system utilization even if bounded tardiness can be tolerated.

    Energy Efficient Scheduling for Real-Time Systems

    Get PDF
    The goal of this dissertation is to extend the state of the art in real-time scheduling algorithms to achieve energy efficiency. Currently, Pfair scheduling is one of the few scheduling frameworks which can optimally schedule a periodic real-time taskset on a multiprocessor platform. Despite the theoretical optimality, there are significant concerns about the efficiency and applicability of Pfair scheduling in practical situations. This dissertation studies and proposes solutions to such efficiency and applicability concerns. This dissertation also explores temperature-aware energy management in the domain of real-time scheduling. The thesis of this dissertation is: the implementation efficiency of Pfair scheduling algorithms can be improved. Further, temperature awareness of a real-time system can be improved while considering variation of task execution times to reduce energy consumption. This thesis is established through research in a number of directions. First, we explore the applicability of the Dynamic Voltage and Frequency Scaling (DVFS) feature in the underlying platform, within Pfair-scheduled systems. We propose techniques to reduce energy consumption in Pfair scheduling by using DVFS. Next, we explore the problem of quantum size selection in Pfair-scheduled systems so that runtime overheads are minimized. We also propose a hardware design for a central Pfair scheduler core in a multiprocessor system to minimize the overheads and energy consumption of Pfair scheduling. Finally, we propose a temperature-aware energy management scheme for tasks with varying execution times.
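
    As a hedged sketch of how DVFS composes with a utilization-based test (one standard approach, not necessarily the dissertation's technique): since Pfair can schedule any periodic task set whose total utilization does not exceed the processor count m, running all processors at a normalized frequency f inflates each utilization by 1/f, so the lowest safe discrete frequency step is the smallest f with U_sum / f <= m:

        def lowest_safe_frequency(utilizations, m, freq_steps):
            """Smallest available normalized frequency f such that the task set,
            with every utilization inflated by 1/f, still passes Pfair's
            U_sum <= m schedulability test."""
            u_sum = sum(utilizations)
            assert u_sum <= m, "infeasible even at full speed"
            for f in sorted(freq_steps):
                if u_sum / f <= m:            # still schedulable at this speed
                    return f
            return max(freq_steps)

        # hypothetical platform with normalized frequency steps 0.4 .. 1.0
        print(lowest_safe_frequency([0.5, 0.4, 0.3], m=2,
                                    freq_steps=[0.4, 0.6, 0.8, 1.0]))
        # 0.6: at 60% speed the inflated utilization is 1.2 / 0.6 = 2.0 <= 2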

    Fair scheduling of dynamic task systems on multiprocessors

    No full text
    In dynamic real-time task systems, tasks that are subject to deadlines are allowed to join and leave the system. In previous work, Stoica et al. and Baruah et al. presented conditions under which such joins and leaves may occur in fair-scheduled uniprocessor systems without causing missed deadlines. In this paper, we extend their work by considering fair-scheduled multiprocessors. We show that their conditions are sufficient on M processors, under any deadline-based Pfair scheduling algorithm, if the utilization of every subset of M − 1 tasks is at most one. Further, for the general case in which task utilizations are not restricted in this way, we derive sufficient join/leave conditions for the PD2 Pfair algorithm. We also show that, in general, these conditions cannot be improved upon without causing missed deadlines.
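
    The sufficient condition above is cheap to check: every subset of M − 1 tasks has utilization at most one exactly when the M − 1 largest utilizations sum to at most one. A small sketch of that test:

        def joins_leaves_safe(utilizations, m):
            """Sufficient condition from the paper: the utilization of every
            subset of m - 1 tasks is at most one, which holds exactly when the
            m - 1 largest utilizations sum to at most one."""
            largest = sorted(utilizations, reverse=True)[:m - 1]
            return sum(largest) <= 1.0

        print(joins_leaves_safe([0.5, 0.4, 0.3, 0.2], m=4))  # 1.2 > 1 -> False
        print(joins_leaves_safe([0.3, 0.3, 0.3, 0.2], m=4))  # 0.9     -> True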