18 research outputs found

    Decomposing the High School Timetable Problem

    Timetable construction is a common and repetitive task for high schools worldwide. This paper presents a generic approach for Greek high schools, organized around solving a large number of tractable Integer Programming problems. The variables of the underlying mathematical model correspond to daily teacher schedules, while a number of hard and soft constraints allow the model to capture practical requirements that arise in Greek high schools. By repeatedly selecting better teacher schedules found in the subproblems, the quality of the overall solution gradually improves. The results, obtained within reasonable time, are most promising: the approach matched the best known results for two public instances included in the Benchmarking Project for High School Timetabling (XHSTT-2012).
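
    As a rough illustration of the kind of subproblem described here, the sketch below builds a tiny integer program in which binary variables select one daily schedule per teacher, a hard constraint forbids clashing schedules, and soft-constraint penalties form the objective. The data, the PuLP modelling library, and all names (`schedules`, `clashes`, the penalty weights) are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch (assumed toy data, PuLP modelling library), not the paper's model:
# pick one daily schedule per teacher so that no two chosen schedules clash,
# while minimizing the total soft-constraint penalty of the chosen schedules.
from itertools import combinations
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

# Candidate daily schedules per teacher: (schedule_id, soft_penalty).
schedules = {
    "teacherA": [("A1", 2), ("A2", 5)],
    "teacherB": [("B1", 1), ("B2", 3)],
}
# Pairs of schedules that use the same class in the same period (hard conflict).
clashes = {("A1", "B1")}

prob = LpProblem("daily_schedules", LpMinimize)
x = {
    (t, sid): LpVariable(f"x_{t}_{sid}", cat=LpBinary)
    for t, cands in schedules.items()
    for sid, _ in cands
}

# Exactly one daily schedule per teacher.
for t, cands in schedules.items():
    prob += lpSum(x[t, sid] for sid, _ in cands) == 1

# Hard constraint: clashing schedules cannot both be selected.
all_pairs = [(t, sid) for t, cands in schedules.items() for sid, _ in cands]
for (ta, ca), (tb, cb) in combinations(all_pairs, 2):
    if (ca, cb) in clashes or (cb, ca) in clashes:
        prob += x[ta, ca] + x[tb, cb] <= 1

# Soft constraints enter the objective as penalties on the chosen schedules.
prob += lpSum(pen * x[t, sid] for t, cands in schedules.items() for sid, pen in cands)

prob.solve()
chosen = [(t, sid) for (t, sid), var in x.items() if value(var) > 0.5]
print(chosen)
```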

    Model-based Development of Enhanced Ground Proximity Warning System for Heterogeneous Multi-Core Architectures

    The aerospace domain, much like other cyber-physical systems domains such as automotive or automation, demands new methodologies and approaches for increasing performance and reducing cost while maintaining safety levels and programmability. Heterogeneous multi-core architectures seem promising, but apart from certification issues, exploiting their full potential requires complex toolchains and programming processes. The ARGO (WCET-Aware PaRallelization of Model-Based Applications for HeteroGeneOus Parallel Systems) project addresses this challenge by providing an integrated toolchain that realizes a holistic approach for programming heterogeneous multi-core systems in a model-based workflow. Model-based design elevates systems modeling and promotes simulation by executing these models for verification and validation of design decisions. As a case study, the ARGO toolchain and workflow will be applied to the model-based development of an Enhanced Ground Proximity Warning System (EGPWS). The EGPWS is a readily available system in current aircraft that provides alerts and warnings for obstacles and terrain along the flight path, using high-resolution terrain databases, the Global Positioning System, and other sensors. After a gentle introduction to the model-based development approach of the ARGO project for heterogeneous multi-core architectures, the EGPWS and its system model will be presented.
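
    As context for the case study, the fragment below sketches the basic look-ahead idea behind a terrain warning function: project the aircraft state along its track and compare the predicted altitude against terrain elevations from a database. All names (`terrain_elevation_ft`, the thresholds, the crude flat-earth projection) are assumptions for illustration; the actual EGPWS logic and the ARGO-generated code are far more elaborate.

```python
# Illustrative look-ahead terrain check (assumed simplified geometry and data),
# not the actual EGPWS algorithm modelled in the ARGO case study.
import math
from dataclasses import dataclass

@dataclass
class State:
    lat: float            # degrees
    lon: float            # degrees
    alt_ft: float         # altitude in feet
    track_deg: float      # ground track
    groundspeed_kts: float
    vert_speed_fpm: float

def terrain_elevation_ft(lat: float, lon: float) -> float:
    """Placeholder for a lookup in a high-resolution terrain database."""
    return 1200.0

def lookahead_alert(state: State, horizon_s: float = 60.0, step_s: float = 5.0,
                    clearance_ft: float = 500.0) -> bool:
    """Return True if predicted clearance over terrain drops below the margin."""
    deg_per_nm = 1.0 / 60.0  # coarse flat-earth approximation, for illustration only
    t = step_s
    while t <= horizon_s:
        dist_nm = state.groundspeed_kts * t / 3600.0
        lat = state.lat + dist_nm * deg_per_nm * math.cos(math.radians(state.track_deg))
        lon = state.lon + dist_nm * deg_per_nm * math.sin(math.radians(state.track_deg))
        alt = state.alt_ft + state.vert_speed_fpm * t / 60.0
        if alt - terrain_elevation_ft(lat, lon) < clearance_ft:
            return True
        t += step_s
    return False
```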

    A Genetic Algorithm-Enhanced Sensor Marks Selection Algorithm for Wavefront Aberration Modeling in Extreme-UV (EUV) Photolithography

    In photolithographic processes, wavefront-aberration models with nanometer-level precision enable the machine to meet its overlay (OVL) drift and critical dimension (CD) specifications. Software control algorithms take these models as input and correct expected wavefront imperfections before they reach the wafer, so that a near-optimal image is exposed on the wafer surface. Optimizing the parameters of these models, however, involves several time-costly sensor measurements, which reduce the throughput of the machine in terms of exposed wafers per hour. Photolithography machines therefore face a trade-off between throughput and quality. One of the most common optimal experimental design (OED) problems in photolithography machines (and beyond) is thus how to choose the minimum number of sensor measurements that provides the maximum amount of information. In addition, each sensor measurement corresponds to a point on the wafer surface, so the measurements must also be spread uniformly across the wafer. To solve this problem, we propose a sensor mark selection algorithm that exploits genetic algorithms. The proposed solution first selects a pool of candidate points that satisfy the uniformity constraint. Then the point that provides the maximum amount of information, quantified by the Fisher-based criteria of G-, D-, and A-optimality, is selected and added to the measurement scheme. This process is “greedy”, and for this reason genetic algorithms (GA) are exploited to further improve the solution: by repeating the greedy step several times in parallel, we obtain an initial population that serves as input to the GA. This meta-heuristic approach significantly outperforms the greedy approach. The proposed solution is applied to a real-life semiconductor industry use case and achieves results of both industrial and academic interest.
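
    To make the greedy stage concrete, the sketch below scores each candidate measurement point by its D-optimality gain (log-determinant of the Fisher information matrix of the selected rows) and adds the best one until a budget is reached; the GA described in the abstract would then recombine several such greedy selections. The design matrix, candidate pool, and budget are made-up placeholders, not the industrial data, and the uniformity constraint is omitted.

```python
# Greedy D-optimal selection sketch (assumed toy design matrix), illustrating the
# "greedy" stage that the paper's genetic algorithm then improves upon.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_params = 50, 6
X = rng.normal(size=(n_candidates, n_params))   # row i: model regressors at mark i

def logdet_information(rows, ridge=1e-6):
    """log-det of the (regularized) Fisher information matrix of the selected rows."""
    A = X[rows]
    info = A.T @ A + ridge * np.eye(n_params)
    sign, logdet = np.linalg.slogdet(info)
    return logdet if sign > 0 else -np.inf

def greedy_select(budget=10):
    """Repeatedly add the candidate point that most increases the D-criterion."""
    selected, remaining = [], set(range(n_candidates))
    while len(selected) < budget:
        best = max(remaining, key=lambda i: logdet_information(selected + [i]))
        selected.append(best)
        remaining.remove(best)
    return selected

scheme = greedy_select()
print(scheme, logdet_information(scheme))
```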

    Parallel crew scheduling in PAROS

    We give an overview of the parallelization work done in PAROS (Parallel large-scale automatic scheduling). The specific parallelization objective has been to improve the speed of airline crew scheduling on a network of workstations. The work is based on the Carmen System, which is used by most European airlines for this task. We give a brief background to the problem. The two most time-critical parts of this system are the pairing generator and the optimizer. We present a pairing generator that distributes the enumeration of pairings over the processors; this works efficiently on a large number of loosely coupled workstations. The optimizer, which is based on an iterative Lagrangian heuristic, allows only rather fine-grained parallelization. On low-latency machines, parallelizing the two innermost loops at once works well. A new "active-set" strategy makes more coarse-grained communication possible and even improves the sequential algorithm.
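
    A minimal sketch of the idea of distributing pairing enumeration: each worker enumerates the pairings rooted at a disjoint subset of start legs, so the processes stay loosely coupled and need little communication. The flight data, the enumeration rule, and the limits are invented placeholders; the Carmen System's rule-driven generator is far more involved.

```python
# Sketch of distributing pairing enumeration over workers (toy data and rules),
# mirroring the loosely coupled scheme described for the PAROS pairing generator.
from multiprocessing import Pool
from itertools import permutations

FLIGHTS = ["F1", "F2", "F3", "F4", "F5", "F6"]
MAX_LEGS = 3

def enumerate_from(start: str):
    """Enumerate all short candidate pairings that begin with a given flight leg."""
    others = [f for f in FLIGHTS if f != start]
    pairings = [(start,)]
    for n in range(1, MAX_LEGS):
        pairings += [(start, *rest) for rest in permutations(others, n)]
    return pairings

if __name__ == "__main__":
    # Each worker owns a disjoint slice of start legs, so little communication is needed.
    with Pool(processes=3) as pool:
        chunks = pool.map(enumerate_from, FLIGHTS)
    all_pairings = [p for chunk in chunks for p in chunk]
    print(len(all_pairings))
```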

    Coarse-Grain Optimization and Code Generation for Embedded Multicore Systems

    As processors and systems-on-chip increasingly become multicore, parallel programming remains a difficult, time-consuming and complicated task. End users who are not parallel programming experts need to exploit such processors and architectures using state-of-the-art fourth-generation high-level programming languages such as Scilab or MATLAB. The ALMA toolset addresses this problem by taking Scilab code as input and producing parallel code for embedded multiprocessor systems-on-chip, using platform quasi-agnostic optimisations. In this paper, coarse-grain parallelism extraction and optimization issues, as well as parallel code generation in the ALMA toolset, are discussed.

    Worst-Case Execution-Time-Aware Parallelization of Model-Based Avionics Applications

    Multicore processing systems are the solution of choice for high embedded computing performance, but drawbacks in timing predictability and programmability limit their adoption in safety-critical aerospace applications. This work presents a compiler tool flow for automated parallelization of model-based real-time software, which addresses the shortcomings of multicore architectures in real-time systems. The flow is demonstrated using a model-based terrain awareness and warning system (TAWS) and an edge-detection algorithm from the image-processing domain. Model-based applications are first transformed into real-time C code and, from there, into a well-predictable parallel C program. Tight bounds for the worst-case execution time (WCET) of the parallelized program can be determined using an integrated multicore WCET analysis. Thanks to the use of an architecture description language, the general approach is applicable to a wide range of target platforms. An experimental evaluation on a research architecture with network-on-chip interconnect shows that the parallel WCET of the TAWS application can be improved by a factor of 1.77 using the presented compiler tools.
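
    As a hedged illustration of how a bound on the parallel WCET might be composed from per-task bounds, the snippet below takes assumed per-task WCETs, a static assignment of tasks to cores, and a fixed communication overhead, and reports the resulting bound and its speedup over the sequential WCET. The numbers and the simple max-plus composition are invented for illustration; the paper's integrated multicore WCET analysis accounts for the actual interconnect and interference.

```python
# Toy composition of a parallel WCET bound from per-task bounds (invented numbers),
# illustrating the kind of figure behind a reported WCET speedup factor.
task_wcet_us = {"t1": 400, "t2": 300, "t3": 250, "t4": 200}    # assumed per-task WCETs
core_assignment = {"core0": ["t1", "t4"], "core1": ["t2", "t3"]}
comm_overhead_us = 80                                          # assumed NoC cost per core

sequential_wcet = sum(task_wcet_us.values())
parallel_wcet = max(
    sum(task_wcet_us[t] for t in tasks) + comm_overhead_us
    for tasks in core_assignment.values()
)
print(f"sequential bound: {sequential_wcet} us")
print(f"parallel bound:   {parallel_wcet} us "
      f"(speedup {sequential_wcet / parallel_wcet:.2f}x)")
```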

    Compiling Scilab to high performance embedded multicore systems

    The process of mapping high-performance embedded applications to today's multiprocessor system-on-chip devices suffers from complex toolchains and programming processes. The problem is the expression of parallelism in a purely imperative programming language, commonly C. This traditional approach limits the mapping, partitioning and generation of optimized parallel code, and consequently the achievable performance and power consumption of applications from different domains. The Architecture oriented paraLlelization for high performance embedded Multicore systems using scilAb (ALMA) European project aims to overcome these hurdles through the introduction and exploitation of a Scilab-based toolchain that enables the efficient mapping of applications onto multiprocessor platforms from a high level of abstraction. The holistic approach of the ALMA toolchain hides the complexity of both the application and the architecture, which leads to better acceptance, reduced development cost, and shorter time-to-market. Driven by technology restrictions in chip design, the end of exponential growth in clock speeds and an unavoidable increase in the demand for computing performance, ALMA is a fundamental step forward in the necessary introduction of novel computing paradigms and methodologies.