    Enabling Scalability: Graph Hierarchies and Fault Tolerance

    In this dissertation, we explore two techniques for building scalable algorithms. First, we look at different graph problems and show how to exploit the input graph's inherent hierarchy for scalable graph algorithms. The second technique takes a step back from concrete algorithmic problems: we consider node failures in large distributed systems and present techniques to recover from them quickly. In the first part of the dissertation, we investigate how hierarchies in graphs can be used to scale algorithms to large inputs. We develop algorithms for three graph problems based on two approaches to building hierarchies. The first approach reduces instance sizes for NP-hard problems by applying so-called reduction rules. These rules can be applied in polynomial time. They either find parts of the input that can be solved in polynomial time, or they identify structures that can be contracted (reduced) into smaller structures without loss of information for the specific problem. After solving the reduced instance with an exponential-time algorithm, the previously contracted structures can be uncontracted to obtain an exact solution for the original input. Beyond simple preprocessing, reduction rules can also be used in branch-and-reduce algorithms, where they are applied after each branching step to build a hierarchy of problem kernels of increasing computational hardness. We develop reduction-based algorithms for the classical NP-hard problems Maximum Independent Set and Maximum Cut. The second approach is used for route planning in road networks, where we build a hierarchy of road segments based on their importance for long-distance shortest paths. By considering only important road segments when far from the source and destination, we can substantially speed up shortest-path queries. In the second part of this dissertation, we step back from concrete graph problems and look at more general problems in high performance computing (HPC). Due to the ever increasing size and complexity of HPC clusters, we expect hardware and software failures to become more common in massively parallel computations. We present two techniques that let applications recover from failures and resume computation. Both are based on in-memory storage of redundant information and a data distribution that enables fast recovery. The first technique applies to general-purpose distributed processing frameworks: we identify data that is redundantly available on multiple machines and introduce additional work only for the remaining data that is available on a single machine. The second technique is a checkpointing library engineered for fast recovery, using a data distribution method that achieves balanced communication loads. Both techniques work in settings where computation after a failure continues with fewer machines than before, in contrast to many previous approaches, particularly for checkpointing, that keep spare resources available to replace failed machines. Overall, we present different techniques that enable scalable algorithms. While some of these techniques are specific to graph problems, we also present tools for fault-tolerant algorithms and applications in a distributed setting. To show that these can be helpful across many domains, we evaluate them on graph problems and other applications such as phylogenetic tree inference. A minimal sketch of one such reduction rule follows below.
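    To make the reduction-rule idea concrete, here is a minimal sketch of one classical rule for Maximum Independent Set, the degree-one rule: a vertex with exactly one neighbor can always be taken into the solution and its neighbor removed. This is an illustrative example of the technique, not code from the dissertation; the graph representation and names are assumptions.

    def degree_one_reduction(graph):
        """Repeatedly apply the degree-1 rule for Maximum Independent Set.

        graph: dict mapping each vertex to the set of its neighbors.
        Returns (reduced_graph, forced), where forced lists vertices that
        belong to some optimal solution of the original graph.
        """
        graph = {v: set(nbrs) for v, nbrs in graph.items()}   # defensive copy
        forced = []
        queue = [v for v, nbrs in graph.items() if len(nbrs) == 1]
        while queue:
            v = queue.pop()
            if v not in graph or len(graph[v]) != 1:
                continue                      # rule no longer applies to v
            (u,) = graph[v]                   # v's unique neighbor
            forced.append(v)                  # take v into the solution...
            for w in graph[u] - {v}:          # ...and delete its neighbor u
                graph[w].discard(u)
                if len(graph[w]) == 1:
                    queue.append(w)           # rule may now fire at w
            del graph[u], graph[v]
        return graph, forced

    Applying such rules exhaustively yields the reduced instance (the kernel) that an exact exponential-time solver then handles; the forced vertices are added back to the solver's solution to recover an exact answer for the original graph.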

    APPLICATION OF SENSOR FUSION FOR SI ENGINE DIAGNOSTICS AND COMBUSTION FEEDBACK

    Shifting consumer mindsets and evolving government norms are forcing automotive manufacturers worldwide to improve vehicle performance while also reducing greenhouse gas emissions. A critical aspect of achieving future fuel economy and emission targets is improved powertrain control and diagnostics. This study uses a sensor-fusion-based approach to improve control and diagnostics in a gasoline engine. A four-cylinder turbocharged engine was instrumented with a suite of sensors including ion sensors, exhaust pressure sensors, crank position sensors, and accelerometers. The diagnostic potential of these sensors was studied in detail, including their ability to detect knock and misfires and to correlate with cylinder pressure and combustion metrics. Lastly, a neural-network-based approach to combining information from the individual sensor signals was developed. The neural network was used to estimate mean effective pressure and the location of fifty percent mass fraction burned, and the influence of various neural network architectures was studied. Results showed that under pseudo-transient conditions a recursive neural network could use information from the low-cost sensors to estimate mean effective pressure within an error of 0.1 bar and combustion phasing within 2.5 crank-angle degrees.
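    As a hedged illustration of the sensor-fusion step, the sketch below shows how a small recurrent network could map per-cycle features from the low-cost sensors to mean effective pressure and combustion phasing. The architecture, feature count, and layer sizes are assumptions for illustration, not the network used in the study.

    import torch
    import torch.nn as nn

    class CombustionEstimator(nn.Module):
        """Recurrent estimator: per-cycle sensor features -> (MEP, CA50)."""
        def __init__(self, n_features=12, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)   # outputs: MEP (bar), CA50 (deg)

        def forward(self, x):                  # x: (batch, cycles, n_features)
            out, _ = self.rnn(x)               # carry state across engine cycles
            return self.head(out)              # per-cycle estimates

    model = CombustionEstimator()
    features = torch.randn(8, 100, 12)         # 8 runs of 100 engine cycles each
    estimates = model(features)                # shape (8, 100, 2)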

    Socio-Cognitive and Affective Computing

    Social cognition focuses on how people process, store, and apply information about other people and social situations, and on the role that cognitive processes play in social interactions. The term cognitive computing, on the other hand, generally refers to new hardware and/or software that mimics the functioning of the human brain and helps improve human decision-making; it is a type of computing that aims to discover more accurate models of how the human brain/mind senses, reasons, and responds to stimuli. Socio-Cognitive Computing should be understood as a set of interdisciplinary theoretical frameworks, methodologies, methods, and hardware/software tools for modeling how the human brain mediates social interactions. In addition, Affective Computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects, a fundamental aspect of socio-cognitive neuroscience; it is an interdisciplinary field spanning computer science, electrical engineering, psychology, and cognitive science. Physiological Computing is a category of technology in which electrophysiological data recorded directly from human activity are used to interface with a computing device, and it becomes even more relevant when computing can be integrated pervasively into everyday environments. Socio-Cognitive and Affective Computing systems should therefore be able to adapt their behavior according to the Physiological Computing paradigm. This book integrates proposals from researchers who use signals from the brain and/or body to infer people's intentions and psychological state in smart computing systems. The design of such systems combines knowledge and methods of ubiquitous and pervasive computing, as well as physiological data measurement and processing, with those of socio-cognitive and affective computing.

    Reducing Detailed Vehicle Energy Dynamics to Physics-Like Models

    The energy demand of vehicles, particularly in unsteady drive cycles, is affected by complex dynamics internal to the engine and other powertrain components. Yet in many applications, particularly macroscopic traffic flow modeling and optimization, structurally simple approximations to the complex vehicle dynamics are needed that nevertheless reproduce the correct effective energy behavior. This work presents a systematic model reduction pipeline that starts from complex vehicle models based on the Autonomie software and derives a hierarchy of simplified models that are fast to evaluate, easy to disseminate in open-source frameworks, and compatible with optimization frameworks. The pipeline, based on a virtual chassis dynamometer and subsequent approximation strategies, is reproducible and is applied to six different vehicle classes to produce concrete explicit energy models that represent an average vehicle in each class and leverage the accuracy and validation work of the Autonomie software.
    Comment: 40 pages, 9 figures
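    To illustrate the final fitting step of such a pipeline, the sketch below fits a simple physics-like fuel-rate model, polynomial in speed with a speed-acceleration cross term, to samples from a (here synthetic) virtual dynamometer run via least squares. The coefficient structure and variable names are illustrative assumptions, not the paper's exact model form.

    import numpy as np

    def design_matrix(v, a):
        """Feature matrix of a simple fuel-rate model f(v, a)."""
        return np.column_stack([np.ones_like(v), v, v**2, v**3, v * a])

    def fit_energy_model(v, a, fuel_rate):
        """Least-squares fit; v, a, fuel_rate are 1-D sample arrays."""
        coeffs, *_ = np.linalg.lstsq(design_matrix(v, a), fuel_rate, rcond=None)
        return coeffs

    def predict_fuel_rate(coeffs, v, a):
        return np.maximum(design_matrix(v, a) @ coeffs, 0.0)  # rate >= 0

    # Synthetic stand-in for virtual chassis dynamometer output:
    rng = np.random.default_rng(0)
    v = rng.uniform(0.0, 30.0, 1000)           # speed (m/s)
    a = rng.uniform(-2.0, 2.0, 1000)           # acceleration (m/s^2)
    fuel = 0.1 + 0.02 * v + 1e-4 * v**3 + 0.05 * np.maximum(v * a, 0.0)
    coeffs = fit_energy_model(v, a, fuel)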

    Studies on SI engine simulation and air/fuel ratio control systems design

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The more stringent Euro 6 and LEV III emission standards come into force in 2014 and 2015 respectively. Accurate air/fuel ratio control can effectively reduce vehicle emissions, and simulation of the engine dynamic system is a powerful method for developing and analysing engines and engine controllers. Currently, most engine air/fuel ratio control uses look-up tables combined with proportional and integral (PI) control, which is not robust to system uncertainty and time-varying effects. This thesis first develops a simulation package for a port-injection spark-ignition engine; the package includes engine dynamics, vehicle dynamics, and a driving-cycle selection module. The simulation results are very close to data obtained from laboratory experiments. New controllers have been proposed to control the air/fuel ratio in spark-ignition engines so as to maximize fuel economy while minimizing exhaust emissions. The PID and fuzzy control methods have been combined into a fuzzy PID controller, and the effectiveness of this new controller has been demonstrated by simulation tests. A new neural-network-based predictive controller is then designed for further performance improvements, based on a combination of inverse control and predictive control methods. The network is trained offline, with the control output modified to compensate for control errors. The simulation evaluations have shown that the new neural controller can greatly improve air/fuel ratio control performance. The tests also revealed that the improved AFR control can effectively restrict the engine's harmful emissions into the atmosphere; these emission reductions are important for satisfying the more stringent emission standards.
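    As a minimal sketch of the kind of fixed-gain loop the thesis improves on, the snippet below implements a discrete-time PI trim of the injected fuel mass toward stoichiometry. Gains, sample time, and names are illustrative assumptions; the fuzzy PID and neural predictive controllers described above replace exactly this sort of baseline.

    STOICH_AFR = 14.7                 # stoichiometric air/fuel ratio, gasoline

    class PIAfrController:
        """Discrete PI controller trimming fuel mass so measured AFR tracks 14.7."""
        def __init__(self, kp=0.05, ki=0.2, dt=0.01):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.integral = 0.0

        def step(self, measured_afr, base_fuel_mass):
            error = measured_afr - STOICH_AFR     # lean -> positive -> add fuel
            self.integral += error * self.dt
            trim = self.kp * error + self.ki * self.integral
            return base_fuel_mass * (1.0 + trim)  # corrected injection mass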