34 research outputs found

    Analysis of a Splitting Estimator for Rare Event Probabilities in Jackson Networks

    Full text link
    We consider a standard splitting algorithm for the rare-event simulation of overflow probabilities in any subset of stations in a Jackson network at level n, starting at a fixed initial position. It was shown in DeanDup09 that a subsolution to the Isaacs equation guarantees that a subexponential number of function evaluations (in n) suffices to estimate such overflow probabilities within a given relative accuracy. Our analysis here shows that in fact O(n^{2β+1}) function evaluations suffice to achieve a given relative precision, where β is the number of bottleneck stations in the network. This is the first rigorous analysis that allows splitting to be compared favorably against directly computing the overflow probability of interest, which can be evaluated by solving a linear system of equations with O(n^d) variables. Comment: 23 pages
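The mechanics of splitting can be illustrated on a one-dimensional toy model, a hypothetical birth-death walk rather than the Jackson-network setting of the paper: trajectories are restarted at each level, and the product of per-level conditional hitting fractions estimates the overflow probability. A minimal fixed-effort sketch:

```python
import random

def splitting_estimate(p_up, n_levels, trials_per_level, rng):
    """Fixed-effort splitting: estimate P(reach n_levels before 0 | start at 1)
    for a birth-death walk that steps up with probability p_up."""
    states = [1] * trials_per_level   # trajectories entering level 1
    estimate = 1.0
    for level in range(1, n_levels):
        successes = []                # states that hit level+1 before 0
        for s in states:
            x = s
            while 0 < x < level + 1:
                x += 1 if rng.random() < p_up else -1
            if x == level + 1:
                successes.append(x)
        frac = len(successes) / len(states)
        if frac == 0.0:
            return 0.0                # all trajectories died; estimate is 0
        estimate *= frac
        # resample entry states for the next level (fixed effort per level)
        states = [rng.choice(successes) for _ in range(trials_per_level)]
    return estimate
```

For p_up = 0.3 and n = 5 the gambler's-ruin formula gives roughly 0.0196, and the splitting estimate concentrates around that value with far fewer trajectories than naive Monte Carlo would need at larger n.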

    Sustainable Adaptive Grid Supercomputing: Multiscale Simulation of Semiconductor Processing across the Pacific

    Full text link

    An Optimization Approach for Energy Efficient Coordination Control of Vehicles in Merging Highways

    Get PDF
    Environmental concerns, along with stronger governmental regulations regarding automotive fuel economy and greenhouse-gas emissions, are contributing to the push for more sustainable transportation technologies. Furthermore, the widespread use of the automobile gives rise to other issues such as traffic congestion and an increasing number of traffic accidents. Consequently, two main goals of new technologies are the reduction of vehicle fuel consumption and emissions and the reduction of traffic congestion. While an extensive body of published work addresses the problem of fuel-consumption reduction by optimizing vehicle powertrain operation, particularly in the case of hybrid electric vehicles (HEVs), approaches such as eco-driving and traffic coordination have been studied more recently as alternative methods that can, in addition, address traffic congestion and the reduction of traffic accidents. This dissertation builds on some of those approaches, with particular emphasis on autonomous vehicle coordination control. In this direction, the objective is to derive an optimization approach for energy-efficient and safe coordination control of vehicles in merging highways. Most current optimization-based centralized approaches to this problem are solved numerically, at the expense of a high computational load that limits their potential for real-time implementation. In addition, closed-form solutions, which are desirable because they facilitate traffic analysis and the development of approaches for interconnected merging/intersection points and further traffic improvements at the road-network level, are very limited in the literature. In this dissertation, through the application of Pontryagin's minimum principle, a closed-form solution is obtained that allows the implementation of real-time centralized optimal control for fleets of vehicles.
The results of applying the proposed framework show that the system can reduce fuel consumption by up to 50% and travel time by an average of 6.9% with respect to a scenario with no coordination strategy. By integrating the traffic-coordination scheme with in-vehicle energy management, a two-level optimization system is obtained that allows assessing the benefits of integrating hybrid electric vehicles into the road network. Regarding in-vehicle energy optimization, four methods are developed to improve the tuning process of the equivalent consumption minimization strategy (ECMS). First, two model predictive control (MPC)-based strategies are implemented, and the results show improvements over the efficiency obtained with the standard ECMS implementation. Second, the research effort focuses on analyzing the engine and electric-motor operating points, which can lead to optimal tuning of the ECMS with fewer iterations. Two approaches are evaluated; even though their fuel-economy results are slightly worse than those of the standard ECMS, they show potential to significantly reduce the ECMS tuning time. Additionally, the benefits of less aggressive driving profiles for different powertrain technologies, such as conventional, plug-in hybrid, and electric vehicles, are studied.
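The flavor of such a closed-form solution can be sketched with a minimal example that is not the dissertation's actual fuel model: a double integrator (position/velocity, acceleration as input) with a quadratic control cost. Pontryagin's minimum principle makes the costate, and hence the optimal acceleration, linear in time, so two boundary conditions pin it down in closed form:

```python
def min_energy_profile(x0, v0, xT, vT, T):
    """Closed-form minimum-energy control u(t) = a + b*t for a double
    integrator x'' = u, minimizing the integral of u^2/2 over [0, T].
    Pontryagin's minimum principle: the costate is linear in t, so the
    optimal input is too; (a, b) follow from the boundary conditions."""
    dv = vT - v0               # required velocity change
    dx = xT - x0 - v0 * T      # required position change beyond coasting
    a = (6.0 * dx - 2.0 * dv * T) / T**2
    b = (6.0 * dv * T - 12.0 * dx) / T**3

    def u(t):                  # optimal acceleration profile
        return a + b * t

    def state(t):              # closed-form position and velocity under u
        v = v0 + a * t + b * t**2 / 2.0
        x = x0 + v0 * t + a * t**2 / 2.0 + b * t**3 / 6.0
        return x, v

    return u, state
```

Because the whole trajectory is an explicit polynomial in t, evaluating it for a fleet of vehicles is cheap enough for real-time coordination, which is the computational advantage the dissertation exploits (its actual formulation adds fuel cost and safety constraints).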

    Who wrote this scientific text?

    No full text
    The IEEE bibliographic database contains a number of proven duplications with indication of the original paper(s) copied. This corpus is used to test a method for the detection of hidden intertextuality (commonly called "plagiarism"). The intertextual distance, combined with a sliding window and various classification techniques, identifies these duplications with a very low risk of error. These experiments also show that several factors blur the identity of the scientific author, including variable group authorship and the high level of intertextuality accepted, and sometimes desired, in scientific papers on the same topic.
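The core quantity can be sketched in simplified form; the version below is an L1 distance between relative word-frequency profiles standing in for Labbé's intertextual distance, paired with a sliding window over the candidate text (names and normalization are illustrative, not the paper's exact formulas):

```python
from collections import Counter

def freq_profile(tokens):
    """Relative word frequencies of a token list."""
    n = len(tokens)
    return {w: k / n for w, k in Counter(tokens).items()}

def intertextual_distance(tokens_a, tokens_b):
    """L1 distance between relative frequencies, normalized to [0, 1]:
    0 for identical vocabularies/frequencies, 1 for disjoint vocabularies."""
    fa, fb = freq_profile(tokens_a), freq_profile(tokens_b)
    return sum(abs(fa.get(w, 0.0) - fb.get(w, 0.0))
               for w in set(fa) | set(fb)) / 2.0

def sliding_windows(tokens, size, step):
    """Overlapping windows used to localize a suspect passage."""
    for i in range(0, max(1, len(tokens) - size + 1), step):
        yield tokens[i:i + size]
```

Comparing each window of a suspect text against a reference profile flags the spans whose distance drops anomalously low, which is the signal the duplication experiments rely on.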

    Instrumenting and analyzing platform-independent communication in applications

    Get PDF
    The performance of microprocessors is limited by communication. This limitation, sometimes alluded to as the memory wall, refers to the hardware-level cost of communicating with memory. Recent studies have found that the promised speedup from transistor scaling, or from employing heterogeneous processors such as GPUs, is diminished when such hardware communication costs are included. Based on the insight that hardware communication at run time is a manifestation of communication in software, this dissertation proposes that automatically capturing and classifying software-level communication is the first step in performing fast, early-stage design space exploration of future multicore systems. Software-level communication refers to the exchange of data between software entities such as functions, threads, or basic blocks. Communication classification helps differentiate the first-time use from the reuse of communicated data, and distinguishes between communication external to a software entity and local communication within a software entity. We present Sigil, a novel tool that automatically captures and classifies software-level communication in an efficient way. Due to its platform-independent nature, software-level communication can be useful during the early-stage design of future multicore systems. Using the two different representations of output data that Sigil produces, we show that measurements of software-level communication can be used to analyze i) function-level interaction in single-threaded programs, to determine which specialized logic can be included in future heterogeneous multicore systems, and ii) thread-level interaction in multi-threaded programs, to aid in chip multiprocessor (CMP) design space exploration. Ph.D., Electrical Engineering -- Drexel University, 201
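The classification idea can be sketched as a toy trace consumer (a model of the concept only, not Sigil's actual instrumentation): record the last function to write each address, then classify each read as local or external to the reading function, and as a unique first use or a reuse.

```python
from collections import defaultdict

class CommTracker:
    """Toy classifier for software-level communication events."""
    def __init__(self):
        self.last_writer = {}               # addr -> producing function
        self.readers = {}                   # addr -> functions that read it
        self.counts = defaultdict(int)      # (kind, use) -> event count

    def write(self, func, addr):
        self.last_writer[addr] = func
        self.readers[addr] = set()          # a new value resets reuse info

    def read(self, func, addr):
        producer = self.last_writer.get(addr)
        if producer is None:
            kind = "input"                  # value never produced in trace
        elif producer == func:
            kind = "local"                  # consumed where it was produced
        else:
            kind = "external"               # crossed a function boundary
        seen = self.readers.setdefault(addr, set())
        use = "unique" if func not in seen else "reuse"
        seen.add(func)
        self.counts[(kind, use)] += 1
```

Feeding such a tracker from a memory trace yields exactly the external-vs-local and unique-vs-reuse breakdown the abstract describes, which is what makes the profile platform-independent.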

    Requirement-based Root Cause Analysis Using Log Data

    Get PDF
    Root Cause Analysis for software systems is a challenging diagnostic task due to the complexity emanating from the interactions between system components. Furthermore, the sheer size of the logged data often makes it difficult for human operators and administrators to perform problem diagnosis and root cause analysis. The diagnostic task is further complicated by the lack of models that could be used to support the diagnostic process. Traditionally, this task is conducted by human experts who create mental models of systems in order to generate hypotheses and conduct the analysis, even in the presence of incomplete logged data. A challenge in this area is to provide the concepts, tools, and techniques operators need to focus their attention on specific parts of the logged data, and ultimately to automate the diagnostic process. The work described in this thesis proposes a framework of techniques, formalisms, and algorithms for automating the process of root cause analysis. In particular, this work uses annotated requirement goal models to represent the monitored systems' requirements and runtime behavior. The goal models are used in combination with log data to generate a ranked set of diagnostics that represent the combinations of tasks whose failure led to the observed failure. In addition, the framework uses a combination of word-based and topic-based information retrieval techniques to reduce the size of the log data by filtering out a subset of it to facilitate the diagnostic process. The filtering and reduction of log data is based on goal-model annotations and generates a sequence of logical literals that represent the possible system observations. A second level of investigation consists of looking for evidence of any malicious activity (i.e., intentionally caused by a third party) leading to task failures.
This analysis uses annotated anti-goal models that denote possible actions an external user can take to threaten a given system task. The framework uses a novel probabilistic approach based on Markov Logic Networks. Our experiments show that our approach improves over existing proposals by handling uncertainty in observations, using natively generated log data, and providing ranked diagnoses. The proposed framework has been evaluated using a test environment based on commercial off-the-shelf software components, a publicly available Java-based ATM machine, and the large, publicly available DARPA 2000 dataset.
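The goal-model side of the diagnosis can be sketched with a small AND/OR evaluation; the task names below are hypothetical, and the framework's actual ranking uses Markov Logic Networks rather than this deterministic pass:

```python
def evaluate(goal, status):
    """Evaluate an AND/OR-decomposed goal against observed leaf-task
    outcomes (True means log evidence indicates the task succeeded).
    A goal is either a leaf name or ("AND"|"OR", [subgoals])."""
    if isinstance(goal, str):
        return status[goal]
    op, children = goal
    results = [evaluate(c, status) for c in children]
    return all(results) if op == "AND" else any(results)

def diagnoses(goal, status):
    """Failed leaf tasks that can explain the failure of `goal`."""
    if isinstance(goal, str):
        return set() if status[goal] else {goal}
    if evaluate(goal, status):
        return set()                    # a satisfied goal needs no explanation
    _, children = goal
    out = set()
    for c in children:
        out |= diagnoses(c, status)     # failed AND: any failed child;
    return out                          # failed OR: all children failed
```

Running this over leaf outcomes recovered from filtered log data yields the candidate task-failure sets that a probabilistic layer would then rank.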

    Intertextuality in Scientific Publications

    No full text
    The IEEE bibliographic database contains a number of proven duplications with indication of the originals copied. This corpus is used to test an authorship-attribution method. Combining intertextual distance with a sliding window and various classification techniques makes it possible to identify these duplications with a very low risk of error. This experiment also shows that several factors blur the identity of the scientific author, notably research groups of variable composition and a high degree of intertextuality that is accepted, or even sought after.

    Planning and Routing Algorithms for Multi-Skill Contact Centers

    Get PDF
    Koole, G.M. [Promotor]

    Practical Strategic Reasoning with Applications in Market Games.

    Full text link
    Strategic reasoning is part of our everyday lives: we negotiate prices, bid in auctions, write contracts, and play games. We choose actions in these scenarios based on our preferences and our beliefs about the preferences of the other participants. Game theory provides a rich mathematical framework through which we can reason about the influence of these preferences. Clever abstractions allow us to predict the outcome of complex agent interactions; however, as the scenarios we model increase in complexity, the abstractions we use to enable classical game-theoretic analysis lose fidelity. In empirical game-theoretic analysis, we construct game models using empirical sources of knowledge, such as high-fidelity simulation. However, utilizing empirical knowledge introduces a host of different computational and statistical problems. I investigate five main research problems that focus on efficient selection, estimation, and analysis of empirical game models. I introduce a flexible modeling approach in which multiple game-theoretic models may be constructed from the same set of observations. I propose a principled methodology for comparing empirical game models and a family of algorithms that select a model from a set of candidates. I develop algorithms for normal-form games that efficiently identify formations: sets of strategies that are closed under a (correlated) best-response correspondence. This aids in problems, such as finding Nash equilibria, that are key to analysis but hard to solve. I investigate policies for sequentially determining which profiles to simulate when constrained by a simulation budget. Efficient policies allow modelers to analyze complex scenarios by evaluating only a subset of the profiles, and the policies I introduce outperform existing policies in experiments. I establish a principled methodology for evaluating strategies given an empirical game model.
I employ this methodology in two case studies of market scenarios: first, a case study in supply chain management from the perspective of a strategy designer; then, a case study in Internet ad auctions from the perspective of a mechanism designer. As part of the latter analysis, I develop an ad-auctions scenario that captures several key strategic issues in this domain for the first time. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/75848/1/prjordan_1.pd
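One building block of such analysis, finding the pure-strategy Nash equilibria of an estimated normal-form game, can be sketched as a brute-force check over a payoff table (the profile encoding below is illustrative; the dissertation's formation and model-selection machinery is far more general):

```python
import itertools

def pure_nash(payoffs, n_strategies):
    """Enumerate pure-strategy Nash equilibria of a normal-form game.
    payoffs maps a strategy profile (tuple of strategy indices) to a
    tuple of player utilities, e.g. averages over repeated simulation.
    n_strategies gives each player's number of strategies."""
    equilibria = []
    for profile in itertools.product(*(range(k) for k in n_strategies)):
        # stable iff no player gains by a unilateral deviation
        stable = all(
            payoffs[profile][i] >=
            payoffs[profile[:i] + (dev,) + profile[i + 1:]][i]
            for i in range(len(n_strategies))
            for dev in range(n_strategies[i])
        )
        if stable:
            equilibria.append(profile)
    return equilibria
```

For the prisoner's dilemma (strategy 0 = cooperate, 1 = defect) this returns only mutual defection, illustrating how an empirically estimated payoff table feeds directly into equilibrium analysis.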