24 research outputs found

    Nifty Data Structures Projects

    For computer science, and many other technical fields, it is recognized that projects with real-world applicability play a significant role in what students get out of a course. Creating applicable projects for upper-division courses such as our data structures classes is difficult and time consuming. We have taken the Nifty Assignments concept and applied it locally to an upper-division data structures course. Our primary goal is to provide a forum for the sharing of data structure project ideas and materials (as applicable).

    Simulating the Effect of Pay Table Standard Deviation on Pulls Per Losing Player at the Single-visit Level

    While holding par constant, changes in the standard deviation of the pay table produced an inverse effect on pulls per losing player (PPLP) across six different virtual slot machines. This result establishes the standard deviation of a game as a crucial determinant of a slot player's experience. Three different single-trip scenarios were examined via computer simulation, with 50,000 players engaging each game. For example, virtual players began with 100 units, terminating play at bankruptcy or 200 units. As players focus on the outcome of single visits, understanding the determinants of PPLP (or time on device) will help management engineer desirable customer experiences at the trip level. In part, this can be achieved by altering the product mix to better match the expectations of the clientele. Given the remarkable bankruptcy rate of the trip simulations, proxies for value such as PPLP serve as crucial evaluation standards in the satisfaction process.
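    A minimal sketch of this kind of trip-level simulation follows. The pay tables, 8% hold, and player count below are illustrative assumptions, not the study's actual parameters; only the bankroll bounds (start at 100 units, quit at bankruptcy or 200 units) come from the abstract.

    import random

    def pplp(pay_table, n_players=5_000, start=100, goal=200, seed=1):
        """Mean pulls per losing player (PPLP) for one single-visit scenario.

        pay_table: list of (payout per 1-unit wager, probability) pairs.
        The study simulated 50,000 players; 5,000 keeps this sketch fast.
        """
        rng = random.Random(seed)
        payouts, probs = zip(*pay_table)
        losing_pulls = []
        for _ in range(n_players):
            bankroll, pulls = start, 0
            while 0 < bankroll < goal:
                bankroll += rng.choices(payouts, probs)[0] - 1  # net of 1-unit wager
                pulls += 1
            if bankroll <= 0:
                losing_pulls.append(pulls)
        return sum(losing_pulls) / len(losing_pulls)

    # Two hypothetical pay tables with identical par (8% hold) but different SDs.
    low_sd  = [(2, 0.46), (0, 0.54)]         # per-pull SD ~ 1.0 units
    high_sd = [(50, 0.0184), (0, 0.9816)]    # per-pull SD ~ 6.7 units

    print("low-SD PPLP :", pplp(low_sd))     # more pulls before ruin
    print("high-SD PPLP:", pplp(high_sd))    # fewer pulls: the inverse effect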

    A Fast and Simple Algorithm for Computing M Shortest Paths in Stage Graph

    We consider the problem of computing the m shortest paths between a source node s and a target node t in a stage graph. The polynomial-time algorithms known to solve this problem use complicated data structures. This paper proposes a very simple algorithm for efficiently computing all m shortest paths in a stage graph. The proposed algorithm does not use any complicated data structure and can be implemented in a straightforward way using only arrays. This problem appears as a sub-problem in planning risk-reduced multiple k-legged trajectories for aerial vehicles.
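    The paper's exact algorithm is not reproduced here, but the array-only idea can be sketched as a stage-by-stage dynamic program that keeps, for every node, a plain sorted array of its m best distances from s. The graph representation below (stage lists plus per-node adjacency arrays) is an assumption for illustration.

    def m_shortest_paths(num_nodes, stages, adj, m, s, t):
        """Return the m smallest s->t path lengths in a stage graph.

        stages : list of node-id lists, one per stage (s in stages[0], t in stages[-1])
        adj    : adj[u] = list of (v, w) edges from u into the next stage
        """
        best = [[] for _ in range(num_nodes)]   # up to m best distances per node
        best[s] = [0]
        for stage in stages[:-1]:
            for u in stage:
                for v, w in adj[u]:
                    # merge candidate distances and keep only the m smallest
                    best[v] = sorted(best[v] + [d + w for d in best[u]])[:m]
        return best[t]

    # Tiny example: two s->t paths of lengths 3 and 5.
    stages = [[0], [1, 2], [3]]
    adj = [[(1, 1), (2, 4)], [(3, 2)], [(3, 1)], []]
    print(m_shortest_paths(4, stages, adj, 2, 0, 3))   # -> [3, 5]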

    Free Regions of Sensor Nodes

    We introduce the notion of the free region of a node in a sensor network. Intuitively, a free region of a node is the connected set of points R in its neighborhood such that the connectivity of the network remains the same when the node is moved to any point in R. We characterize several properties of free regions and develop an efficient algorithm for computing them. We capture the free region in terms of the related notions of in-free region and out-free region. We present an O(n^2) algorithm for constructing the free region of a node, where n is the number of nodes in the network.
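    The abstract does not detail the O(n^2) construction, but the defining test is easy to state. Under an assumed unit-disk communication model (a hypothetical choice here; the paper's connectivity model may differ), a point p lies in node i's free region exactly when relocating i to p leaves every neighbor set unchanged, which for a single moved node reduces to checking i's own adjacency:

    import math

    def neighbor_set(positions, i, radius):
        """Indices of nodes within communication radius of node i (unit-disk model)."""
        return {j for j, p in enumerate(positions)
                if j != i and math.dist(positions[i], p) <= radius}

    def in_free_region(positions, i, candidate, radius):
        """True if moving node i to `candidate` leaves network connectivity unchanged."""
        before = neighbor_set(positions, i, radius)
        moved = list(positions)
        moved[i] = candidate
        return neighbor_set(moved, i, radius) == before

    # Example: a small move of node 0 keeps the same links; a large move does not.
    pts = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
    print(in_free_region(pts, 0, (0.2, 0.1), radius=1.5))  # True
    print(in_free_region(pts, 0, (2.5, 0.0), radius=1.5))  # False (gains node 2)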

    Beyond Cumulative Sum Charting in Non-Stationarity Detection and Estimation

    In computer science, stochastic processes, and industrial engineering, stationarity is often taken to imply a stable, predictable flow of events, and non-stationarity, consequently, a departure from such a flow. Efficient detection and accurate estimation of non-stationarity are crucial in understanding the evolution of the governing dynamics. Pragmatic considerations include protecting human lives and property in the context of devastating processes such as earthquakes or hurricanes. Cumulative Sum (CUSUM) charting, the prevalent technique for weeding out such non-stationarities, suffers from assumptions of a priori knowledge of the pre- and post-change process parameters and from constructs such as time discretization. In this paper, we propose two new ways in which non-stationarity may enter an evolving system: an easily detectable way, which we term strong corruption, where the post-change probability distribution is deterministically governed, and an imperceptible way, which we term hard detection, where the post-change distribution is a probabilistic mixture of several densities. In addition, by combining the ordinary and switched trends of incoming observations, we develop a new trend ratio statistic to detect whether a stationary environment has changed. Surveying a variety of distance metrics, we examine several parametric and non-parametric options in addition to the established CUSUM and find that the trend ratio statistic performs better under the especially difficult scenarios of hard detection. Simulations (from both deterministic and mixed inter-event time densities), sensitivity-specificity type analyses, and estimated time-of-change distributions enable us to track the ideal detection candidate under various non-stationarities. Applications to two real data sets drawn from volcanology and weather science demonstrate that the estimated change points agree with those obtained in some of our previous works using different methods. Incidentally, this study sheds light on the inverse dependence between the Hawaiian volcanoes Kilauea and Mauna Loa and demonstrates how inhabitants of the now-restless Kilauea may be relocated to Mauna Loa to minimize loss of life and moving costs.
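    For context, the classical CUSUM baseline criticized above needs both the pre- and post-change parameters in advance. A minimal sketch of Page's one-sided CUSUM for a Gaussian mean shift (all parameters and the threshold here are hypothetical) makes that dependence explicit:

    import random

    def cusum_alarm(xs, mu0, mu1, sigma, h):
        """Return the index where Page's CUSUM first exceeds threshold h, else None.

        Note that mu0 (pre-change mean) and mu1 (post-change mean) must both be
        known in advance -- exactly the assumption the trend ratio statistic avoids.
        """
        s = 0.0
        for n, x in enumerate(xs):
            # Gaussian log-likelihood-ratio increment for N(mu1, sigma) vs N(mu0, sigma)
            s = max(0.0, s + (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2)
            if s > h:
                return n
        return None

    rng = random.Random(0)
    data = [rng.gauss(0.0, 1.0) for _ in range(100)] + [rng.gauss(1.5, 1.0) for _ in range(100)]
    print(cusum_alarm(data, mu0=0.0, mu1=1.5, sigma=1.0, h=5.0))  # alarm soon after index 100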

    Resolving Intravoxel White Matter Structures in the Human Brain Using Regularized Regression and Clustering

    The human brain is a complex system of neural tissue that varies significantly between individuals. Although technology that directly delineates these neural pathways does not currently exist, medical imaging modalities such as diffusion magnetic resonance imaging (dMRI) can be leveraged for mathematical identification. The purpose of this work is to develop a novel method employing machine learning techniques to determine intravoxel nerve number and direction from dMRI data. The method was tested on multiple synthetic datasets and showed promising estimation accuracy and robustness for multi-nerve systems under a variety of conditions, including highly noisy data and imprecision in parameter assumptions.
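    The abstract does not specify the model, but one common way to cast intravoxel fiber estimation as regularized regression (an assumption for illustration, not necessarily the paper's formulation) is sparse non-negative regression of the voxel's dMRI signal against a dictionary of candidate single-fiber responses; the surviving nonzero weights then indicate the nerve count and directions:

    import numpy as np

    def sparse_fiber_weights(D, y, lam=0.1, step=1e-3, iters=20_000):
        """Non-negative L1-regularized least squares via projected gradient descent.

        D : (n_measurements, n_orientations) dictionary of single-fiber signals
        y : (n_measurements,) observed dMRI signal for one voxel
        Returns w >= 0 minimizing ||D @ w - y||^2 + lam * w.sum(); nonzero
        entries estimate the number and directions of intravoxel fibers.
        The fixed step size is a simplification; it must be small enough for D.
        """
        w = np.zeros(D.shape[1])
        for _ in range(iters):
            grad = 2 * D.T @ (D @ w - y) + lam    # gradient of the penalized objective
            w = np.maximum(0.0, w - step * grad)  # project onto the non-negative orthant
        return w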

    Reconfiguring Polygonal Linkage to Maximize Area
