Using formal methods to support testing
Formal methods and testing are two important approaches that assist in the development of high quality software. While traditionally these approaches have been seen as rivals, in recent
years a new consensus has developed in which they are seen as complementary. This article reviews the state of the art regarding ways in which the presence of a formal specification can be used to assist testing.
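The complementarity described above can be made concrete: a formal pre/postcondition doubles as a test oracle. The sketch below is an invented example (not from the article) that checks a hypothetical `integer_sqrt` function against its formal postcondition `r*r <= n < (r+1)*(r+1)`.

```python
# Specification-based testing sketch: the formal postcondition of a
# hypothetical integer_sqrt is used directly as the test oracle.

def integer_sqrt(n: int) -> int:
    """Return the floor of the square root of n (requires n >= 0)."""
    r = int(n ** 0.5)
    # Guard against floating-point rounding for large n.
    while r * r > n:
        r -= 1
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def check_spec(n: int) -> None:
    # Postcondition taken from the (hypothetical) formal specification:
    #   r*r <= n < (r+1)*(r+1)
    r = integer_sqrt(n)
    assert r * r <= n < (r + 1) * (r + 1), f"spec violated for n={n}"

for n in [0, 1, 2, 99, 10**12, 10**12 + 1]:
    check_spec(n)
```

The test cases exercise the specification rather than hand-computed expected values, which is the essence of using a formal specification to drive testing.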
Optimizations of Cisco’s Embedded Logic Analyzer Module
Cisco’s embedded logic analyzer module (ELAM) is a debugging device used in many of Cisco’s application-specific integrated circuits (ASICs). The ELAM captures data of interest to the user and stores it for analysis. The user enters a trigger expression containing data fields of interest in the form of a logical equation. The data fields associated with the trigger expression are stored in a set of Match and Mask (MM) registers. Incoming data packets are matched against these registers, and if the user-specified data pattern is detected, the ELAM triggers and begins a countdown sequence to stop data capture. The current ELAM implementation is restricted in the form of trigger expressions that are allowed and in the allocation of resources. Currently, data fields in the trigger expression can only be logically ANDed together, Match and Mask registers are inefficiently utilized, and a static state machine exists in the ELAM trigger logic. To optimize the usage of the ELAM, a trigger expression is first treated as a Boolean expression so that minimization algorithms can be run. Next, the data stored in the Match and Mask registers is analyzed for redundancies. Finally, a dynamic state machine is programmed with a distinct set of states generated from the trigger expression; this set of states is further minimized. A feasibility study is done to analyze the validity of the results.
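As a rough illustration of the Match/Mask mechanism the abstract describes (a hedged sketch, not Cisco's actual ELAM implementation), a packet word "hits" an MM register pair when the bits selected by the mask equal the stored match value, and the current AND-only trigger fires when every pair hits:

```python
# Illustrative Match/Mask trigger matching (invented example, not Cisco code).

def mm_hit(word: int, match: int, mask: int) -> bool:
    """True when every bit of `word` selected by `mask` equals `match`."""
    return (word & mask) == (match & mask)

def trigger(word: int, mm_pairs) -> bool:
    # The current ELAM only ANDs data fields together, so the trigger
    # fires only when all MM register pairs hit.
    return all(mm_hit(word, m, k) for m, k in mm_pairs)

# Example: trigger on a destination field of 0xAB (bits 8-15) AND flag bit 0 set.
pairs = [(0xAB00, 0xFF00), (0x0001, 0x0001)]
print(trigger(0xAB01, pairs))  # True: both fields match
print(trigger(0xAB00, pairs))  # False: flag bit is clear
```

Treating such an expression as a Boolean function is what lets standard minimization algorithms reduce the number of MM pairs needed, which is the optimization the thesis pursues.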
Non-determinism in the narrative structure of video games
PhD Thesis. At the present time, computer games represent a finite interactive system. Even in their more experimental forms, the number of possible interactions between player and NPCs (non-player characters), and among NPCs and the game world, is finite and governed by a deterministic system in which events can therefore be predicted. This implies that the story itself, seen as the series of events that unfold during gameplay, is a closed system that can be predicted a priori. This study looks beyond this limitation and identifies the elements needed for the emergence of a non-finite, emergent narrative structure. Two major contributions are offered through this research. The first comes in the form of a clear categorization of the narrative structures embracing all video game production since the inception of the medium. In order to look for ways to generate a non-deterministic narrative in games, it is necessary to first gain a clear understanding of the narrative structures currently implemented and how they impact users' experience of the story. While many studies have observed the storytelling aspect, no attempt has been made to systematically distinguish among the different ways designers decide how stories are told in games. The second contribution is guided by the following research question: is it possible to incorporate non-determinism into the narrative structure of computer games? The hypothesis offered is that non-determinism can be incorporated by means of nonlinear dynamical systems in general and Cellular Automata in particular.
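To make the Cellular Automata formalism concrete, here is a minimal elementary CA (Wolfram's rule 30, a standard textbook example, not taken from the thesis): each cell's next state is fixed by its three-cell neighborhood, yet the global evolution is effectively unpredictable, which hints at why CA are a candidate substrate for emergent narrative.

```python
# Elementary cellular automaton sketch (rule 30), periodic boundary.

def step(cells, rule=30):
    """Advance one generation: each cell looks at (left, self, right)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # neighborhood as a 3-bit index
        out.append((rule >> idx) & 1)              # look up next state in the rule
    return out

row = [0] * 11
row[5] = 1  # single seed cell
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The rule number encodes the whole transition table in 8 bits, so swapping `rule=30` for another value changes the dynamics without changing the code.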
Flight Safety Assessment and Management.
This dissertation develops a Flight Safety Assessment and Management (FSAM) system to mitigate aircraft loss of control risk. FSAM enables switching between the pilot/nominal autopilot system and a complex flight control system that can potentially recover from high risk situations but can be hard to certify. FSAM monitors flight conditions for high risk situations and selects the appropriate control authority to prevent or recover from loss of control. The pilot/nominal autopilot system is overridden only when necessary to avoid loss of control. FSAM development is pursued using two approaches. First, finite state machines are manually prescribed to manage control mode switching. Constructing finite state machines for FSAM requires careful consideration of possible exception events, but provides a computationally-tractable and verifiable means of realizing FSAM. The second approach poses FSAM as an uncertain-reasoning-based decision-theoretic problem using Markov Decision Processes (MDPs), offering a less tedious knowledge engineering process at the cost of computational overhead. Traditional and constrained MDP formulations are presented. Sparse sampling approaches are also explored to obtain suboptimal solutions to FSAM MDPs. MDPs for takeoff and icing-related loss of control events are developed and evaluated. Finally, this dissertation applies verification techniques to ensure that finite state machine or MDP policies satisfy system requirements. Counterexamples obtained from verification techniques aid in FSAM refinement. Real world aviation accidents are used as case studies to evaluate FSAM formulations. This thesis contributes decision making and verification frameworks to realize flight safety assessment and management capabilities. Novel flight envelopes and state abstractions are prescribed to aid decision making.
PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133348/1/swee_1.pd
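The MDP framing above can be illustrated with a toy value-iteration example. All states, transition probabilities, and rewards below are invented for illustration; they are not the dissertation's models.

```python
# Toy mode-switching MDP: states abstract the flight condition, actions
# choose the control authority, value iteration yields a switching policy.

states = ["nominal", "high_risk", "loss_of_control"]
actions = ["keep_pilot", "engage_recovery"]

# P[s][a] = list of (next_state, probability). All numbers are invented.
P = {
    "nominal": {
        "keep_pilot":      [("nominal", 0.95), ("high_risk", 0.05)],
        "engage_recovery": [("nominal", 1.0)],
    },
    "high_risk": {
        "keep_pilot":      [("loss_of_control", 0.4), ("nominal", 0.6)],
        "engage_recovery": [("nominal", 0.9), ("high_risk", 0.1)],
    },
    "loss_of_control": {
        "keep_pilot":      [("loss_of_control", 1.0)],
        "engage_recovery": [("loss_of_control", 0.8), ("high_risk", 0.2)],
    },
}
R_action = {"keep_pilot": 0.0, "engage_recovery": -0.1}  # small override cost
R_state = {"nominal": 0.0, "high_risk": -1.0, "loss_of_control": -10.0}

gamma = 0.95
V = {s: 0.0 for s in states}
for _ in range(200):  # value iteration to near-convergence
    V = {s: max(R_action[a] + R_state[s]
                + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in actions)
         for s in states}

# Greedy policy with respect to the converged values.
policy = {s: max(actions,
                 key=lambda a: R_action[a] + R_state[s]
                 + gamma * sum(p * V[t] for t, p in P[s][a]))
          for s in states}
print(policy)
```

With these numbers the policy overrides the pilot only in the risky states, mirroring the FSAM principle that the nominal authority is kept unless loss of control threatens.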
Is there a Moore's law for quantum computing?
There is a common wisdom according to which many technologies can progress according to some exponential law, like the empirical Moore's law that was validated for over half a century by the growth of transistor counts in chipsets. As a technology still in the making, with a lot of potential promises, quantum computing is supposed to follow the pack and grow inexorably to maturity. The Holy Grail in that domain is a large quantum computer with thousands of error-corrected logical qubits, themselves made of thousands, if not more, of physical qubits. These would enable molecular simulations as well as factoring 2048-bit RSA keys, among other use cases taken from the book of intractable classical computing problems. How far are we from this? Less than 15 years, according to many predictions. We will see in this paper that Moore's empirical law cannot easily be translated to an equivalent in quantum computing. Qubits have various figures of merit that will not progress magically thanks to some new manufacturing technique. However, some equivalents of Moore's law may be at play inside and outside the quantum realm, as with quantum computing's enabling technologies, cryogenics and control electronics. Algorithms, software tools and engineering also play a key role as enablers of quantum computing progress. While much of quantum computing's future depends on qubit fidelities, these are progressing rather slowly, particularly at scale. We will finally see that other figures of merit may come into play and potentially change the landscape, such as the quality of computed results and the energetics of quantum computing. Although scientific and technological in nature, this inventory has broad business implications for investment, education and cybersecurity-related decision-making processes.
Comment: 32 pages, 24 figures
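The scale argument can be made concrete with a back-of-the-envelope doubling-law projection. All numbers below are illustrative assumptions, not the paper's estimates.

```python
# If physical qubit counts followed a Moore-style doubling law, how long
# until the scale needed for error-corrected computing? Numbers are invented.
import math

current_qubits = 1_000       # assumed current device scale
target_qubits = 20_000_000   # assumed: ~20k physical per logical x 1k logical qubits
doubling_years = 2.0         # assumed Moore-like doubling period

doublings = math.log2(target_qubits / current_qubits)
print(f"{doublings:.1f} doublings ~ {doublings * doubling_years:.0f} years")
```

Even under this optimistic assumption of steady doubling, the arithmetic spans decades, which is the paper's point that raw count growth alone does not deliver a useful machine: fidelities and error correction overheads move on their own, slower, curves.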
Enhancing the performance of energy harvesting wireless communications using optimization and machine learning
The motivation behind this thesis is to provide efficient solutions for energy harvesting communications. Firstly, an energy harvesting underlay cognitive radio relaying network is investigated. In this context, the secondary network is an energy harvesting network. Closed-form expressions are derived for the transmission powers of the secondary source and relay that maximize the secondary network's throughput. Secondly, a practical scenario in terms of information availability about the environment is investigated. We consider a communications system with a source capable of harvesting solar energy. Two cases are considered, based on the availability of knowledge about the underlying processes. When this knowledge is available, an algorithm using it is designed to maximize the expected throughput while reducing the complexity of traditional methods. For the second case, when knowledge about the underlying processes is unavailable, reinforcement learning is used. Thirdly, a number of learning architectures for reinforcement learning are introduced: the selector-actor-critic, the tuner-actor-critic, and the estimator-selector-actor-critic. The goal of the selector-actor-critic architecture is to increase the speed and efficiency of learning an optimal policy by approximating the most promising action at the current state. The tuner-actor-critic aims at improving the learning process by providing the actor with a more accurate estimate of the value function. The estimator-selector-actor-critic is introduced to support intelligent agents; this architecture mimics rational humans in the way they analyze available information and make decisions. Then, a harvesting communications system working in an unknown environment is evaluated when supported by the proposed architectures. Fourthly, a realistic energy harvesting communications system is investigated. The state and action spaces of the underlying Markov decision process are continuous.
Actor-critic is used to optimize the system performance. The critic uses a neural network to approximate the action-value function. The actor uses policy gradient to optimize the policy's parameters to maximize the throughput.
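A minimal tabular actor-critic on a toy energy-harvesting chain (invented dynamics; the thesis uses neural-network critics and continuous state/action spaces) shows the critic's TD error driving the actor's policy-gradient update:

```python
# Toy actor-critic: battery level in {0..3}, action 0 = idle, 1 = transmit.
# Transmitting with stored energy earns reward 1 and drains one unit;
# each slot harvests one unit with probability 0.5. All dynamics invented.
import math, random

random.seed(0)
n_states, n_actions = 4, 2
theta = [[0.0] * n_actions for _ in range(n_states)]  # actor: softmax preferences
V = [0.0] * n_states                                  # critic: state values
alpha, beta, gamma = 0.1, 0.1, 0.95

def softmax_pick(prefs):
    """Sample an action from softmax(prefs); also return the probabilities."""
    z = [math.exp(p - max(prefs)) for p in prefs]
    s = sum(z)
    probs = [w / s for w in z]
    r = random.random()
    for a, p in enumerate(probs):
        r -= p
        if r <= 0:
            return a, probs
    return len(probs) - 1, probs

s = 0
for _ in range(20000):
    a, probs = softmax_pick(theta[s])
    reward = 1.0 if (a == 1 and s > 0) else 0.0        # transmit needs energy
    s2 = max(s - 1, 0) if a == 1 else s                # drain on transmit
    s2 = min(s2 + (1 if random.random() < 0.5 else 0), n_states - 1)  # harvest
    td = reward + gamma * V[s2] - V[s]                 # critic's TD error
    V[s] += beta * td                                  # critic update
    for b in range(n_actions):                         # actor update (policy gradient)
        grad = (1.0 if b == a else 0.0) - probs[b]
        theta[s][b] += alpha * td * grad
    s = s2

# With energy in the battery, the learned policy should prefer transmitting.
print(theta[3][1] > theta[3][0])
```

The selector/tuner/estimator variants described above modify how the actor is informed (which action looks most promising, how accurate the value estimate is), but the TD-error-driven loop sketched here is the common core.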