47 research outputs found

    35th Symposium on Theoretical Aspects of Computer Science: STACS 2018, February 28-March 3, 2018, Caen, France


    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the "sublinear computation paradigm," which was proposed in the large multiyear academic research project "Foundations of Innovative Algorithms for Big Data," which ran in Japan from October 2014 to March 2020. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as "fast," but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, it runs into serious problems in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear-, sublinear-, and constant-time algorithms are required. The sublinear computation paradigm is proposed here to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has produced high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I is a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; and Part V presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
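
    To make the paradigm concrete, here is a minimal sketch of a constant-time algorithm in the above sense: estimating, by random sampling, the fraction of records in a huge data set that satisfy a predicate. The sketch is illustrative and not taken from the book; the sample-size bound is the standard Hoeffding bound, and all names (estimate_fraction, predicate) are ours.

        import math
        import random

        def estimate_fraction(data, predicate, epsilon=0.05, delta=0.01):
            # Illustrative sketch (not from the book): estimate the fraction
            # of items satisfying `predicate` by random sampling. The sample
            # size depends only on the accuracy parameters (epsilon, delta),
            # not on len(data), so the running time is constant in the input
            # size -- the hallmark of a sublinear algorithm.
            # Hoeffding bound: m >= ln(2/delta) / (2*epsilon^2) samples give
            # an additive-epsilon estimate with probability >= 1 - delta.
            m = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
            hits = sum(predicate(random.choice(data)) for _ in range(m))
            return hits / m

        # Example: estimate the share of even entries in a large array
        # (about 1,060 samples regardless of the array's size).
        big = list(range(10_000_000))
        print(estimate_fraction(big, lambda x: x % 2 == 0))   # roughly 0.5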

    Multi-resolution fault diagnosis in discrete-event systems

    In this thesis, a framework for multi-resolution fault diagnosis in discrete-event systems (DES) is introduced. A sequence of plant models of increasing resolution is used in fault diagnosis, and the range of possible diagnoses is narrowed down step by step until the failure node is isolated. In this way, the original problem of fault diagnosis is replaced by a sequence of smaller problems. The plant models used at each step of diagnosis are abstractions of the original plant model. We propose to obtain these abstractions through model reduction based on solutions of the Relational Coarsest Partition problem. For each diagnosis step, minimal sensor sets are chosen to obtain a coarser output map and, hence, to improve the efficiency of model reduction. In this thesis, a polynomial algorithm is proposed that verifies failure diagnosability by examining the distinguishability of two plant (normal/faulty) conditions at a time. A procedure is presented that finds minimal sensor sets, referred to as minimal distinguishers, for distinguishing one condition from another. A polynomial procedure is introduced that combines minimal distinguishers to obtain a minimal sensor set for fault diagnosis. The proposed method reduces the computational complexity of sensor selection. A benefit of using minimal distinguishers is that their computation may be sped up using expert knowledge. The proposed method for sensor selection is particularly suitable for multi-resolution diagnosis, since it permits some of the results of computations performed for sensor selection at the lowest (finest) level of multi-resolution diagnosis to be reused at higher levels. This feature is particularly useful in reducing the computations necessary for online reconfiguration of the multi-resolution diagnosis system. An important procedure used in sensor selection is testing diagnosability. In this thesis, a new procedure for testing diagnosability in timed DES is introduced, based on the relative timing of the plant output sequence. It is shown through an example that the proposed test may be executed with significantly fewer computations than tests developed for untimed models and adapted to timed systems. Furthermore, two new sets of sufficient conditions are provided under which diagnoser design and diagnosability tests based on the relative timing of the output sequence can be performed efficiently.
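
    To illustrate the model-reduction step, the sketch below is a naive partition-refinement solution to the Relational Coarsest Partition problem (Kanellakis-Smolka style; the Paige-Tarjan algorithm solves it in O(m log n) time). It is our own minimal reconstruction, not the thesis's code: the blocks of an initial partition induced by the output map are split until all states in a block reach the same blocks under every event, and the surviving blocks become the states of the abstracted plant model.

        def coarsest_partition(trans, init_partition):
            # Naive refinement for the Relational Coarsest Partition problem.
            # `trans` maps (state, event) -> state (a partial function);
            # `init_partition` is the partition of the state set induced by
            # the output map (a coarser output map means fewer initial
            # blocks, hence cheaper reduction).
            events = {e for (_, e) in trans}
            partition = [frozenset(b) for b in init_partition]
            while True:
                block_of = {s: i for i, b in enumerate(partition) for s in b}

                def signature(s):
                    # Which block does each defined event lead to from s?
                    return tuple(sorted((e, block_of[trans[(s, e)]])
                                        for e in events if (s, e) in trans))

                refined = []
                for block in partition:
                    groups = {}
                    for s in block:
                        groups.setdefault(signature(s), set()).add(s)
                    refined.extend(frozenset(g) for g in groups.values())
                if len(refined) == len(partition):   # no block was split
                    return refined
                partition = refined

        # Toy plant: states 0 and 1 behave identically and are merged into
        # one abstract state; state 3 (no outgoing event) is separated.
        trans = {(0, 'a'): 1, (1, 'a'): 0, (2, 'a'): 3}
        print(coarsest_partition(trans, [{0, 1, 2, 3}]))
        # e.g. [frozenset({0, 1}), frozenset({2}), frozenset({3})]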

    Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS'09)

    The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008).

    The 1991 3rd NASA Symposium on VLSI Design

    Papers from the symposium are presented from the following sessions: (1) featured presentations 1; (2) very large scale integration (VLSI) circuit design; (3) VLSI architectures 1; (4) featured presentations 2; (5) neural networks; (6) VLSI architectures 2; (7) featured presentations 3; (8) verification 1; (9) analog design; (10) verification 2; (11) design innovations 1; (12) asynchronous design; and (13) design innovations 2.

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Constraint-Based Testing and Tail Tests for Measurement-Based Probabilistic Timing Analysis

    The Worst-Case Execution Time (WCET) of tasks is important data for establishing confidence that real-time systems will meet their timing requirements. Unfortunately, due to its intractability, this data is generally unknown. Measurement-Based Timing Analysis (MBTA), which relies on observing execution times driven by test data, has become a promising approach in recent years. Some current testing techniques may take a relatively long time to trigger decisions because they require very specific data from the input space. Conversely, Constraint-Based Testing (CBT) can cope better with these decisions while also being more efficient. State-of-the-art approaches integrating CBT with MBTA have applied code coverage metrics designed for functional testing rather than for WCET. Consequently, important capabilities, such as generating test data for a path potentially leading to a large execution time, are omitted. A central contribution of this work embraces CBT; its objective is to meet the code coverage needs of WCET analysis. The evaluation compares this approach to state-of-the-art Search-Based Testing (SBT) in MBTA and to Random Testing (RT) methods, and shows that, in most cases, CBT not only does not underestimate the largest observed execution time but also reaches it earlier. A downside of MBTA is that it normally underestimates the WCET. To address this issue, execution-time data has recently been combined with probabilistic models. The current probabilistic protocol is hard to automate, while other probabilistic approaches have found alternative ways to achieve similar results automatically. The second main contribution aims at integrating this latter protocol and evaluating its applicability. The evaluation, which uses execution-time data from test generators, shows that SBT and RT are more likely to generate data that enables this new approach. The resulting WCET predictions are more accurate by construction.
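
    For intuition, the sketch below shows MBTA in its simplest, random-testing form: run the task on randomly generated inputs and keep the high-water mark (HWM) of observed execution times. The rare branch in the example is exactly the kind of decision that random testing is unlikely to trigger and that constraint-based test generation targets. This is our own illustration, not the thesis's tooling; all names (measure_hwm, task) are hypothetical.

        import random
        import time

        def measure_hwm(task, gen_input, runs=10_000):
            # Random-testing MBTA sketch: the HWM underestimates the true
            # WCET whenever the worst-case path is never exercised -- the
            # gap that CBT and probabilistic tail models aim to close.
            hwm_ns = 0
            for _ in range(runs):
                x = gen_input()
                t0 = time.perf_counter_ns()
                task(x)
                hwm_ns = max(hwm_ns, time.perf_counter_ns() - t0)
            return hwm_ns

        # A task whose slow path needs one specific input value, so random
        # testing observes the worst case only with probability ~1e-6/run.
        def task(x):
            if x == 123_456:              # rare, long-running branch
                sum(range(100_000))
            return 2 * x

        print(measure_hwm(task, lambda: random.randrange(1_000_000)), "ns")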

    Embedded System Design

    A unique feature of this open access textbook is that it provides a comprehensive introduction to the fundamentals of embedded systems, with applications in cyber-physical systems and the Internet of Things. It starts with an introduction to the field and a survey of specification models and languages for embedded and cyber-physical systems. It provides a brief overview of hardware devices used for such systems and presents the essentials of system software for embedded systems, including real-time operating systems. The author also discusses evaluation and validation techniques for embedded systems and provides an overview of techniques for mapping applications to execution platforms, including multi-core platforms. Embedded systems have to operate under tight constraints and, hence, the book also contains a selected set of optimization techniques, including software optimization techniques. The book closes with a brief survey on testing. This fourth edition has been updated and revised to reflect new trends and technologies, such as the importance of cyber-physical systems (CPS) and the Internet of Things (IoT), the evolution from single-core to multi-core processors, and the increased importance of energy efficiency and thermal issues.