
    A Queuing Network Model Based on Ad Hoc Routing Networks for Multimedia

    Abstract: In real-time multimedia applications, the delivery of multimedia information over ad hoc wireless networks presents difficult challenges that have required considerable research effort to overcome. To analyze the delivery of multimedia packets between mobile nodes with low end-to-end delay and low bandwidth overhead while ensuring high throughput, we propose a queuing network model based on our adaptive-gossip algorithm with probability p_n, which conserves network bandwidth at each node by reducing the routing overhead. We also analyze the queuing delay with respect to the number of nodes, the transmission range of a node, the routing behavior, and the MAC protocol. We present both analytical and experimental results to thoroughly evaluate the proposed queuing network model, which demonstrate the advantages of adaptive-gossip routing over flooding-based routing.
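    The core idea of gossip-based routing versus flooding can be sketched as a small simulation (a minimal illustration, not the paper's model: the ring topology, the single fixed forwarding probability standing in for the per-node probability p_n, and the transmission counter are all hypothetical):

    ```python
    import random

    def gossip_reach(adj, source, p, rng):
        """Propagate a packet from `source`: each node that receives it
        forwards to all neighbours with probability p (p=1.0 is flooding).
        Returns (set of nodes reached, number of transmissions)."""
        reached = {source}
        frontier = [source]
        transmissions = 0
        while frontier:
            nxt = []
            for node in frontier:
                if rng.random() <= p:          # gossip: forward with probability p
                    for nbr in adj[node]:
                        transmissions += 1
                        if nbr not in reached:
                            reached.add(nbr)
                            nxt.append(nbr)
            frontier = nxt
        return reached, transmissions

    # Hypothetical 10-node ring-with-chords topology.
    adj = {i: [(i - 1) % 10, (i + 1) % 10, (i + 5) % 10] for i in range(10)}
    rng = random.Random(42)
    flood_reached, flood_tx = gossip_reach(adj, 0, 1.0, rng)    # flooding
    gossip_reached, gossip_tx = gossip_reach(adj, 0, 0.7, rng)  # gossiping
    ```

    Flooding makes every node retransmit once, whereas gossiping suppresses a fraction of retransmissions, which is the bandwidth saving the abstract refers to.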

    Estimation of Real-Time Flood Risk on Roads Based on Rainfall Calculated by the Revised Method of Missing Rainfall

    Flood damage caused by frequent localized downpours in cities has recently been increasing, owing to abnormal climate phenomena and the growth of impermeable areas due to urbanization. This study suggests a method to estimate real-time flood risk on roads for drivers based on accumulated rainfall. Because rainfall is not measured directly on roads, the rainfall on a road link, an intensive quantity, is calculated using the revised method of missing rainfall from meteorology. To process the data in real time, we use the inverse distance weighting (IDW) method, which suits the computing system and is commonly applied to precipitation because of its simplicity. With the real-time accumulated rainfall, the flooding history, the rainfall range that caused flooding in previous rainfall records, and the frequency probability of precipitation are used to determine the flood risk on roads. A simulation using the suggested algorithms shows a high concordance rate between areas actually flooded in the past and the flooded areas derived from the simulation for the research region in Busan, Korea.
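    The IDW interpolation step can be sketched as follows (a minimal illustration; the gauge coordinates, rainfall values, and the power-of-2 distance exponent are hypothetical assumptions, not the paper's calibration):

    ```python
    def idw(points, target, power=2):
        """Inverse distance weighting: estimate the value at `target`
        from (x, y, value) gauge readings; weights decay as 1/distance**power."""
        num = den = 0.0
        for x, y, v in points:
            d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
            if d2 == 0:
                return v                     # target coincides with a gauge
            w = 1.0 / d2 ** (power / 2)      # w = 1 / distance**power
            num += w * v
            den += w
        return num / den

    # Hypothetical gauge readings: (x, y, accumulated rainfall in mm).
    gauges = [(0, 0, 10.0), (4, 0, 20.0), (0, 4, 30.0)]
    estimate = idw(gauges, (1, 1))           # rainfall at an unmeasured road link
    ```

    Nearby gauges dominate the weighted average, so the estimate always stays within the range of the observed values.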

    Machine Learning Optimization of Parameters for Noise Estimation

    In this paper, a fast and effective method of parameter optimization for noise estimation is proposed for various types of noise. The proposed method is based on gradient descent, one of the optimization methods used in machine learning. The learning rate of gradient descent was set to a negative value to optimize parameters for a speech-quality improvement problem. Speech quality was evaluated using a suite of measures. After parameter optimization by gradient descent, the values were re-checked over a wider range to prevent convergence to a local minimum. To optimize the problem's five parameters, the overall number of operations using the proposed method was 99.99958% smaller than with the conventional method. The extracted optimal values increased speech quality by 1.1307%, 3.097%, 3.742%, and 3.861% on average for signal-to-noise ratios of 0, 5, 10, and 15 dB, respectively.
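    The negative-learning-rate trick can be sketched with finite-difference gradient descent (an illustrative sketch only: a toy two-parameter quality function stands in for the paper's five noise-estimation parameters, and the step sizes are arbitrary):

    ```python
    def grad_descent(f, x0, lr=0.1, steps=200, h=1e-5):
        """Minimise f by finite-difference gradient descent.
        Passing a negative lr flips the update into gradient ascent,
        i.e. it maximises f instead of minimising it."""
        x = list(x0)
        for _ in range(steps):
            for i in range(len(x)):
                xp = x[:]
                xp[i] += h
                g = (f(xp) - f(x)) / h       # one-sided numeric gradient
                x[i] -= lr * g               # lr < 0  =>  x[i] += |lr| * g
        return x

    # Hypothetical quality score with a single maximum at (2, -1).
    quality = lambda p: -((p[0] - 2) ** 2 + (p[1] + 1) ** 2)
    best = grad_descent(quality, [0.0, 0.0], lr=-0.1)  # negative lr: maximise quality
    ```

    Because speech quality is something to maximise, negating the learning rate turns the standard descent update into ascent without changing any other code.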

    Modeling Ion Channel Kinetics with HPC

    Abstract — Performance improvements for computational sciences such as biology, physics, and chemistry depend critically on advances in multicore and manycore hardware. However, these emerging systems require substantial investment in software development time to migrate, optimize, and validate existing science models. The focus of our study is to examine the step-by-step process of adapting new and existing computational biology models to multicore and distributed-memory architectures. We analyze different strategies that may be more efficient in multicore vs. manycore environments. Our target application, Kingen, was developed to simulate AMPAR ion channel activity and to fit kinetic model rate constants to biological data. Kingen uses a genetic algorithm to stochastically search the parameter space for global optima. Because each individual in the population describes a rate-constant parameter set in the kinetic model, and the model is evaluated for each individual, there is significant computational complexity and parallelism in even a simple model run. Keywords — multicore; cluster; workload characterization; application profiling; kinetic modeling; scientific application; high performance computation; ion channel kinetics
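    The genetic-algorithm search over rate-constant parameter sets can be sketched roughly as follows (a generic GA, not Kingen's implementation: the tournament-free elitist selection, blend crossover, gaussian mutation, population size, and the squared-error objective are all illustrative assumptions):

    ```python
    import random

    def ga_optimize(fitness, bounds, pop_size=30, gens=60, rng=None):
        """Minimal elitist genetic algorithm. Each individual is a vector of
        rate constants; lower fitness is better. Since every individual is
        evaluated independently each generation, the fitness calls are the
        natural unit of parallelism mentioned in the abstract."""
        rng = rng or random.Random(0)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        for _ in range(gens):
            elite = sorted(pop, key=fitness)[: pop_size // 2]  # keep better half
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rng.sample(elite, 2)
                child = [(x + y) / 2 for x, y in zip(a, b)]    # blend crossover
                i = rng.randrange(len(child))                  # mutate one gene
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
                children.append(child)
            pop = elite + children
        return min(pop, key=fitness)

    # Hypothetical objective: squared error against "true" rate constants
    # that the biological data would imply.
    true_k = [1.5, 0.3]
    err = lambda k: sum((a - b) ** 2 for a, b in zip(k, true_k))
    best = ga_optimize(err, [(0.0, 5.0), (0.0, 1.0)])
    ```

    Elitism guarantees the best parameter set is never lost between generations, while crossover and mutation keep exploring the bounded search space.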

    A Grid-enabled Adaptive Problem-Solving Environment

    Abstract. As the complexity of computational applications and their environments has grown, owing to resource heterogeneity, continuous changes in application and resource states, and the large number of resources involved, problem-solving environments (PSEs) have become increasingly important. As a PSE for metacomputing, the Adaptive Distributed Computing Environment (ADViCE) was developed before the emergence of Grid computing services. Current runtime systems mainly execute applications with a static resource configuration and do not adequately reconfigure application execution environments dynamically to optimize application performance. In this paper, we present an architectural overview of ADViCE and discuss how it is evolving to incorporate Grid computing services, extending its range of services and decreasing the cost of developing, deploying, executing, and maintaining an application. We show that ADViCE adaptively optimizes application execution at runtime, based on application requirements, in both non-Grid and Grid environments by selecting optimal execution options. We have implemented an ADViCE prototype and are currently evaluating it and its adaptive services on a larger set of Grid applications.

    The Design and Evaluation of a Virtual Distributed Computing Environment

    In this paper we present the Virtual Distributed Computing Environment (VDCE), a metacomputing environment currently being developed at Syracuse University. VDCE provides an efficient web-based approach for developing, evaluating, and visualizing large-scale distributed applications that are based on predefined task libraries on diverse platforms. The VDCE task libraries relieve end-users of tedious task implementations and also support reusability. The VDCE software architecture is described in terms of three modules: a) the Application Editor, a user-friendly application development environment that generates the Application Flow Graph (AFG) of an application; b) the Application Scheduler, which provides an efficient task-to-resource mapping of the AFG; and c) the VDCE Runtime System, which is responsible for running and managing application execution and for monitoring the VDCE resources. We present experimental results of an application execution on the VDCE prototype, evaluating the performance of different machine and network configurations. We also show how VDCE can be used as a problem-solving environment on which large-scale, network-centric applications can be developed by a novice programmer rather than by an expert in the low-level details of parallel programming languages.
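    The Application Scheduler's task-to-resource mapping can be illustrated with a simple greedy list scheduler over a flow graph (an illustrative sketch only, not VDCE's actual algorithm; the task costs, dependencies, and resource speed factors are hypothetical):

    ```python
    def schedule_afg(tasks, deps, resources):
        """Greedy list scheduler for an Application Flow Graph (AFG).
        `tasks` maps task -> cost, `deps` maps task -> prerequisite tasks,
        `resources` maps resource -> speed factor. A task becomes ready when
        all its prerequisites finish; each ready task is placed on the
        resource that can finish it earliest."""
        finish = {}                              # task -> finish time
        free_at = {r: 0.0 for r in resources}    # resource -> next free time
        assignment = {}
        remaining = set(tasks)
        while remaining:
            ready = [t for t in remaining
                     if all(d in finish for d in deps.get(t, []))]
            for t in sorted(ready):
                start_lb = max((finish[d] for d in deps.get(t, [])), default=0.0)
                # choose the resource giving the earliest finish time for t
                r = min(resources,
                        key=lambda r: max(free_at[r], start_lb) + tasks[t] / resources[r])
                start = max(free_at[r], start_lb)
                finish[t] = start + tasks[t] / resources[r]
                free_at[r] = finish[t]
                assignment[t] = r
                remaining.discard(t)
        return assignment, finish

    # Hypothetical 4-task AFG: A feeds B and C, which both feed D.
    tasks = {"A": 2.0, "B": 4.0, "C": 4.0, "D": 2.0}
    deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
    resources = {"fast": 2.0, "slow": 1.0}
    assignment, finish = schedule_afg(tasks, deps, resources)
    ```

    The dependency structure of the AFG bounds each task's start time, and the mapper trades off resource speed against queueing delay, which is the essence of any task-to-resource mapping step.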