
Computer Science Technical Reports @Virginia Tech
    996 research outputs found

    GreenVis: Energy-Saving Color Schemes for Sequential Data Visualization on OLED Displays

    Organic light-emitting diode (OLED) displays have recently become popular in the consumer electronics market. Unlike current LCD technology, OLED pixels emit light themselves and require no external backlight as the illumination source. In this paper, we offer an approach to reducing power consumption on OLED displays for sequential data visualization. First, we create a multi-objective optimization approach that finds the most energy-saving color scheme for a given visual perception difference level. Second, we apply the model in two situations: pre-designed color schemes and auto-generated color schemes. Third, our experimental results show that the energy-saving sequential color scheme reduces power consumption by 17.2% for pre-designed color schemes; for auto-generated color schemes, it saves 21.9% of energy compared with the reference sequential color scheme.
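The core trade-off — modeled OLED power versus a perceptual-difference constraint between adjacent scheme colors — can be sketched as below. The power weights, candidate schemes, and the greedy selection are illustrative assumptions, not the paper's actual model or coefficients.

```python
# Sketch of energy-aware color-scheme selection on OLED displays.
# OLED power is roughly linear in subpixel drive levels; the weights
# below are illustrative (blue subpixels often cost more than green).
W_R, W_G, W_B = 0.6, 0.4, 1.0

def pixel_power(rgb):
    """Modeled OLED power of one pixel (linear power model)."""
    r, g, b = rgb
    return W_R * r + W_G * g + W_B * b

def scheme_power(scheme):
    """Average modeled power over the colors of a sequential scheme."""
    return sum(pixel_power(c) for c in scheme) / len(scheme)

def min_step_difference(scheme):
    """Smallest Euclidean RGB distance between adjacent scheme colors
    (a crude stand-in for a perceptual-difference metric)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(a, b) for a, b in zip(scheme, scheme[1:]))

def pick_scheme(candidates, min_diff):
    """Among candidates meeting the perception constraint, pick the
    lowest-power one -- a greedy stand-in for the paper's
    multi-objective optimization."""
    feasible = [s for s in candidates if min_step_difference(s) >= min_diff]
    return min(feasible, key=scheme_power) if feasible else None

# Two toy 3-color sequential schemes (RGB components in [0, 1]):
bright = [(1.0, 1.0, 1.0), (0.6, 0.6, 1.0), (0.2, 0.2, 1.0)]  # white-to-blue
dark = [(0.0, 0.0, 0.0), (0.3, 0.5, 0.2), (0.6, 1.0, 0.4)]    # black-to-green

best = pick_scheme([bright, dark], min_diff=0.3)  # darker scheme wins
```

Both toy schemes satisfy the perceptual constraint here, so the selection comes down purely to the modeled power, which favors the darker scheme.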

    A Practical Blended Analysis for Dynamic Features in JavaScript

    JavaScript is widely used in Web applications; however, its dynamism renders static analysis ineffective. Our JavaScript Blended Analysis Framework is designed to handle JavaScript dynamic features. It performs a flexible combined static/dynamic analysis. The blended analysis focuses static analysis on a dynamic calling structure collected at runtime in a lightweight manner, and refines the static analysis using dynamic information. The framework is instantiated for points-to analysis with statement-level MOD analysis and tainted input analysis. Using JavaScript code from actual webpages as benchmarks, we show that blended points-to analysis for JavaScript obtains good coverage (86.6% on average per website) of the pure static analysis solution and finds additional points-to pairs (7.0% on average per website) contributed by dynamically generated/loaded code. Blended tainted input analysis reports all 6 true positives reported by static analysis, but without false alarms, and finds three additional true positives.
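The blended idea — focusing a static over-approximation on the calling structure actually observed at runtime — can be illustrated with a toy call-graph intersection. The function names and edges below are invented for illustration; they are not from the paper's benchmarks.

```python
# Toy sketch of "blended" analysis: a static analyzer over-approximates the
# call graph; a lightweight instrumented run records which calls actually
# happen (including run-time generated code); static reasoning is then
# focused on the observed calling structure.

# Static over-approximation: every call edge the analyzer considers possible.
static_call_edges = {
    ("main", "render"), ("main", "handler"),
    ("handler", "eval_generated"),  # static analysis cannot see inside eval
    ("render", "draw"), ("render", "log"),
}

# Edges observed in one lightweight instrumented run.
dynamic_call_edges = {
    ("main", "render"), ("render", "draw"),
    ("handler", "eval_generated"),
    ("eval_generated", "send"),  # edge contributed by generated code only
}

# Focus static reasoning on dynamically exercised edges:
focused = static_call_edges & dynamic_call_edges

# Edges contributed only by run-time generated/loaded code:
dynamic_only = dynamic_call_edges - static_call_edges

# Fraction of the static solution exercised by this run:
coverage = len(focused) / len(static_call_edges)
```

In this toy run, three of the five statically possible edges are exercised, and one extra edge comes from dynamically generated code — the same two quantities (coverage and additional pairs) the abstract reports for the real analysis.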

    User Intention-Based Traffic Dependence Analysis For Anomaly Detection

    This paper describes an approach for enforcing dependencies between network traffic and user activities for anomaly detection. We present a framework and algorithms that analyze user actions and network events on a host according to their dependencies. Discovering these relations is useful in identifying anomalous events on a host that are caused by software flaws or malicious code. To demonstrate the feasibility of user intention-based traffic dependence analysis, we implement a prototype called CR-Miner and perform extensive experimental evaluation of the accuracy, security, and efficiency of our algorithm. The results show that our algorithm can identify user intention-based traffic dependence with high accuracy (99.6% on average for 20 users) and few false alarms. Our prototype can successfully detect several pieces of HTTP-based real-world spyware. Our dependence analysis is fast with a minimal storage requirement. We give a thorough analysis of the security and robustness of the user intention-based traffic dependence approach.
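A minimal sketch of the dependence idea: an outbound request is considered legitimate only if some user action occurred shortly before it that could have triggered it. The timestamps, event names, and time window below are invented for illustration and are not CR-Miner's actual mechanism.

```python
# Sketch of user intention-based traffic dependence: flag outbound network
# events that no recent user action can explain. Values are illustrative.

WINDOW = 2.0  # seconds within which a user action may trigger traffic

user_actions = [  # (time, action)
    (1.0, "click link example.com"),
    (5.0, "type url news.org"),
]

network_events = [  # (time, destination host)
    (1.3, "example.com"),    # triggered by the click
    (5.4, "news.org"),       # triggered by the typed URL
    (9.0, "spyware-c2.net"), # no preceding user action -> anomalous
]

def classify(events, actions, window=WINDOW):
    """Return hosts contacted with no user action in the preceding window."""
    flagged = []
    for t_net, host in events:
        triggered = any(0 <= t_net - t_act <= window for t_act, _ in actions)
        if not triggered:
            flagged.append(host)
    return flagged

anomalies = classify(network_events, user_actions)
```

Here the spyware beacon at t = 9.0 has no user action within the window and is flagged, while the two user-triggered requests pass.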

    A Framework to Analyze the Performance of Load Balancing Schemes for Ensembles of Stochastic Simulations

    Ensembles of simulations are employed to estimate the statistics of possible future states of a system, and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, a simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalances and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor, and where the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. It is especially significant that the local rebalancing algorithms provably decrease global load imbalance, since the global rebalancing algorithms raise scalability concerns. The overall simulation time is reduced by up to 25%, and the total processor idle time by 85%.
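The core problem can be demonstrated with a small simulation: static equal assignment versus a dynamic "pull the next task when idle" scheme (a simple stand-in for strategies such as random polling, not any of the paper's four algorithms). The task times are invented.

```python
# Why dynamic balancing helps when per-task CPU times vary widely.
import heapq

task_times = [9, 8, 1, 1, 1, 1, 1, 2]  # highly variable task times
P = 2  # processors

# Static: assign an equal number of tasks per processor, in order.
chunk = len(task_times) // P
static_makespan = max(sum(task_times[i * chunk:(i + 1) * chunk])
                      for i in range(P))

# Dynamic: each idle processor pulls the next task from a shared queue.
heap = [(0.0, p) for p in range(P)]  # (time processor becomes idle, id)
heapq.heapify(heap)
for t in task_times:
    busy_until, p = heapq.heappop(heap)  # earliest-idle processor
    heapq.heappush(heap, (busy_until + t, p))
dynamic_makespan = max(t for t, _ in heap)
```

With these task times, the static split finishes at time 19 (one processor gets both long tasks) while the dynamic scheme finishes at time 13, close to the ideal of 12 for 24 units of work on 2 processors.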

    The Poset Cover Problem

    A partial order or poset P = (X,<) on a (finite) base set X determines the set L(P) of linear extensions of P. The problem of computing, for a poset P, the cardinality of L(P) is #P-complete. A set {P1, P2, ..., Pk} of posets on X covers the set of linear orders that is the union of the L(Pi). Given linear orders L1, L2, ..., Lm on X, the Poset Cover problem is to determine the smallest number of posets that cover {L1, L2, ..., Lm}. Here, we show that the decision version of this problem is NP-complete. On the positive side, we explore the use of cover relations for finding posets that cover a set of linear orders and present a polynomial-time algorithm to find a partial poset cover.
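The objects in play — a poset's set of linear extensions and the cover condition — can be made concrete with a brute-force sketch over a tiny base set. This enumerates all permutations, so it only illustrates the definitions, not a practical algorithm.

```python
# The basic objects of the Poset Cover problem, by brute force.
from itertools import permutations

def linear_extensions(base, relations):
    """All total orders of `base` consistent with every (a, b) pair,
    where (a, b) means a < b in the poset."""
    exts = set()
    for perm in permutations(base):
        pos = {x: i for i, x in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            exts.add(perm)
    return exts

def covers(posets, orders, base):
    """Do the posets' linear extensions jointly equal the given orders?"""
    union = set()
    for relations in posets:
        union |= linear_extensions(base, relations)
    return union == set(orders)

base = ("a", "b", "c")
P1 = {("a", "b"), ("a", "c")}       # the poset with a below both b and c
exts = linear_extensions(base, P1)  # its two linear extensions
```

Here P1 has exactly the two linear extensions that keep `a` first, so the single poset {P1} covers that pair of linear orders; the Poset Cover problem asks for the fewest such posets for an arbitrary input set of linear orders.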

    A Practical Blended Analysis for Dynamic Features in JavaScript

    The JavaScript Blended Analysis Framework is designed to perform a general-purpose, practical combined static/dynamic analysis of JavaScript programs, while handling dynamic features such as run-time generated code and variadic functions. The idea of blended analysis is to focus static analysis on a dynamic calling structure collected at runtime in a lightweight manner, and to refine the static analysis using additional dynamic information. We perform blended points-to analysis of JavaScript with our framework and compare results with those computed by a pure static points-to analysis. Using JavaScript code from actual webpages as benchmarks, we show that optimized blended analysis for JavaScript obtains good coverage (86.6% on average per website) of the pure static analysis solution and finds additional points-to pairs (7.0% on average per website) contributed by dynamically generated/loaded code.

    A Practical Method to Estimate Information Content in the Context of 4D-Var Data Assimilation. II: Application to Global Ozone Assimilation

    Data assimilation obtains improved estimates of the state of a physical system by combining imperfect model results with sparse and noisy observations of reality. Not all observations used in data assimilation are equally valuable. The ability to characterize the usefulness of different data points is important for analyzing the effectiveness of the assimilation system, for data pruning, and for the design of future sensor systems. In the companion paper (Sandu et al., 2012) we derive an ensemble-based computational procedure to estimate the information content of various observations in the context of 4D-Var. Here we apply this methodology to quantify the signal and degrees-of-freedom-for-signal information metrics of satellite observations used in a global chemical data assimilation problem with the GEOS-Chem chemical transport model. The assimilation of a subset of data points characterized by the highest information content yields an analysis comparable in quality to the one obtained using the entire data set.
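The notion of an observation's information content can be illustrated in a toy scalar setting: take an observation's degrees of freedom for signal (DFS) as the relative reduction in error variance it produces, then keep only the highest-scoring observations. The variances below are invented; the paper's ensemble-based 4D-Var procedure is far more general than this scalar sketch.

```python
# Toy ranking of observations by information content. For a scalar state
# with prior variance p and an observation with noise variance r, the
# posterior variance is 1 / (1/p + 1/r), so the DFS-like score is
# 1 - post/prior = p / (p + r).

prior_var = 4.0
obs_error_vars = [0.5, 16.0, 1.0, 100.0]  # per-observation noise variances

def dfs(prior_var, r):
    """Relative variance reduction from one scalar observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / r)
    return 1.0 - post_var / prior_var

scores = [dfs(prior_var, r) for r in obs_error_vars]

# Keep only the most informative half of the observations:
ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
selected = ranked[: len(ranked) // 2]
```

Accurate observations (small r relative to the prior variance) score near 1 and are kept; very noisy ones score near 0 and are pruned, mirroring the abstract's point that a high-information subset can match the full data set.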

    Performance Analysis of a Novel GPU Computation-to-core Mapping Scheme for Robust Facet Image Modeling

    Though the GPGPU concept is well known in image processing, much more work remains to be done to fully exploit GPUs as an alternative computation engine. This paper investigates computation-to-core mapping strategies to probe the efficiency and scalability of the robust facet image modeling algorithm on GPUs. Our fine-grained computation-to-core mapping scheme shows a significant performance gain over the standard pixel-wise mapping scheme. With in-depth performance comparisons across the two mapping schemes, we analyze the impact of the level of parallelism on GPU computation and suggest two principles for optimizing future image processing applications on the GPU platform.
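The underlying facet model fits a small polynomial surface to each pixel's neighborhood; the mapping question is whether one GPU thread computes a whole per-pixel fit (pixel-wise) or the fit's inner sums are split across threads (fine-grained). The plane-fit sketch below is an assumption about the simplest facet form, written in plain Python purely to show the arithmetic being mapped.

```python
# A first-order facet: approximate each pixel's 3x3 neighborhood by a
# least-squares plane I(x, y) ~ a + b*x + c*y with x, y in {-1, 0, 1}.
# On this symmetric grid the least-squares solution is closed-form:
# a = mean, b = sum(x * I) / 6, c = sum(y * I) / 6.

def facet_fit(window):
    """window: 3x3 list of intensities; returns plane coefficients (a, b, c)."""
    coords = [-1, 0, 1]
    a = sum(sum(row) for row in window) / 9.0
    b = sum(window[j][i] * coords[i] for j in range(3) for i in range(3)) / 6.0
    c = sum(window[j][i] * coords[j] for j in range(3) for i in range(3)) / 6.0
    return a, b, c

# A perfect ramp I = 10 + 2*x + 3*y is recovered exactly:
ramp = [[10 + 2 * x + 3 * y for x in (-1, 0, 1)] for y in (-1, 0, 1)]
a, b, c = facet_fit(ramp)
```

Each coefficient is an independent weighted sum over the window, which is why a finer-grained mapping (threads cooperating on these sums) is a natural alternative to one thread per pixel.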

    CoreTSAR: Task Scheduling for Accelerator-aware Runtimes

    Heterogeneous supercomputers that incorporate computational accelerators such as GPUs are increasingly popular due to their high peak performance, energy efficiency, and comparatively low cost. Unfortunately, the programming models and frameworks designed to extract performance from all computational units still lack the flexibility of their CPU-only counterparts. Accelerated OpenMP improves this situation by supporting natural migration of OpenMP code from CPUs to a GPU. However, these implementations currently lose one of OpenMP's best features, its flexibility: typical OpenMP applications can run on any number of CPUs, whereas GPU implementations do not transparently employ multiple GPUs on a node or a mix of GPUs and CPUs. To address these shortcomings, we present CoreTSAR, our runtime library for dynamically scheduling tasks across heterogeneous resources, and propose straightforward extensions that incorporate this functionality into Accelerated OpenMP. We show that our approach can provide nearly linear speedup on four GPUs over using only CPUs or a single GPU, while increasing the overall flexibility of Accelerated OpenMP.
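One common way such a runtime divides work is a ratio-based split: give each device a share of the iterations proportional to its recently measured throughput. The sketch below is a hedged illustration of that general idea only; the device names, rates, and function are invented and are not CoreTSAR's API or its actual scheduling policy.

```python
# Proportional split of a parallel region's iterations across devices,
# based on throughput measured in a previous pass (illustrative values).

def split_iterations(n, rates):
    """Assign n iterations across devices proportionally to their rates
    (iterations/second), giving rounding leftovers to the fastest device."""
    total = sum(rates.values())
    shares = {d: int(n * r / total) for d, r in rates.items()}
    leftover = n - sum(shares.values())
    fastest = max(rates, key=rates.get)
    shares[fastest] += leftover
    return shares

rates = {"cpu": 100.0, "gpu0": 400.0, "gpu1": 500.0}  # measured throughput
shares = split_iterations(10_000, rates)
```

Re-measuring rates each pass lets the split adapt when device performance shifts, which is what makes a heterogeneous CPU+multi-GPU node usable from a single loop.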

    The Green500 List: Escapades to Exascale

    Energy efficiency is now a top priority. The first four years of the Green500 have seen the importance of energy efficiency in supercomputing grow from an afterthought to the forefront of innovation, as we near a point where systems will be forced to stop drawing more power. Even so, the landscape of efficiency in supercomputing continues to shift, with new trends emerging and unexpected shifts in previous predictions. This paper offers an in-depth analysis of the new and shifting trends in the Green500. In addition, the analysis offers early indications of the track we are taking toward exascale, and what an exascale machine in 2018 is likely to look like. Lastly, we discuss the new efforts and collaborations toward designing and establishing better metrics, methodologies, and workloads for the measurement and analysis of energy-efficient supercomputing.
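The Green500's ranking metric is performance per watt rather than raw performance. A minimal sketch of that ranking, with invented system names and numbers:

```python
# Rank systems by energy efficiency (performance / power), most
# efficient first. All entries below are illustrative, not real list data.

systems = [  # (name, performance in MFLOPS, power in watts)
    ("SystemA", 8_000_000_000, 8_000_000),
    ("SystemB", 2_500_000_000, 1_000_000),
    ("SystemC", 500_000_000, 250_000),
]

def green500_rank(systems):
    """Sort by MFLOPS per watt, descending."""
    return sorted(systems, key=lambda s: s[1] / s[2], reverse=True)

ranking = green500_rank(systems)
```

Note how the fastest system (SystemA) ranks last here: high peak performance does not imply high efficiency, which is exactly the shift in perspective the Green500 tracks.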

838 full texts · 996 metadata records
Computer Science Technical Reports @Virginia Tech is based in the United States.