315 research outputs found

    Developing Real-Time Emergency Management Applications: Methodology for a Novel Programming Model Approach

    Get PDF
    Recent years have been characterized by the rise of highly distributed computing platforms composed of a heterogeneous mix of computing and communication resources, including centralized high-performance computing architectures (e.g. clusters or large shared-memory machines) as well as multi-/many-core components integrated into mobile nodes and network facilities. The emergence of computational paradigms such as Grid and Cloud Computing provides potential solutions to integrate such platforms with data systems, natural phenomena simulations, knowledge discovery and decision support systems, responding to a dynamic demand for remote computing and communication resources and services. In this context time-critical applications, notably emergency management systems, are composed of complex sets of application components specialized for executing specific computations, which cooperate to achieve a global goal in a distributed manner. In recent years the scientific community has addressed the programming issues of distributed systems, aiming at the definition of applications of increasing complexity in the number of distributed components, in the spatial distribution and cooperation between interested parties, and in their degree of heterogeneity. Over the last decade the research trend in distributed computing has focused on a crucial objective. The wide-ranging composition of distributed platforms in terms of different classes of computing nodes and network technologies, together with the strong diffusion of applications that require real-time elaboration and online compute-intensive processing, as in the case of emergency management systems, leads to a pronounced tendency of systems towards properties like self-management, self-organization, self-control and, strictly speaking, adaptivity. Adaptivity implies the development, deployment, execution and management of applications that are, in general, dynamic in nature. Dynamicity concerns the number and the specific identification of cooperating components, and the deployment and composition of the most suitable versions of software components on processing and networking resources and services, i.e., both the quantity and the quality of the application components needed to achieve the required Quality of Service (QoS). In time-critical applications the QoS specification can vary dynamically during the execution, according to the user intentions and the information produced by sensors and services, as well as according to the monitored state and performance of networks and nodes. The general reference point for this kind of system is the Grid paradigm which, by definition, aims to enable the access, selection and aggregation of a variety of distributed and heterogeneous resources and services. However, though notable advancements have been achieved in recent years, current Grid technology is not yet able to supply the needed software tools featuring the high adaptivity, ubiquity, proactivity, self-organization, scalability and performance, interoperability, fault tolerance and security required by the emerging applications.
    For this reason, in this chapter we will study a methodology for designing high-performance computations able to exploit the heterogeneity and dynamicity of distributed environments by expressing adaptivity and QoS-awareness directly at the application level. An effective approach needs to address issues such as the QoS predictability of different application configurations as well as the predictability of reconfiguration costs. Moreover, adaptation strategies need to be developed that assure properties like the stability degree of a reconfiguration decision and execution optimality (i.e. selecting reconfigurations according to proper trade-offs among different QoS objectives). In this chapter we will present the basic points of a novel approach that lays the foundations for future programming model environments for time-critical applications such as emergency management systems. The organization of this chapter is the following. In Section 2 we will compare the existing research works for developing adaptive systems in critical environments, highlighting their drawbacks and inefficiencies. In Section 3, in order to clarify the application scenarios that we are considering, we will present an emergency management system in which the run-time selection of proper application configuration parameters is of great importance for meeting the desired QoS constraints. In Section 4 we will describe the basic points of our approach in terms of how compute-intensive operations can be programmed, how they can be dynamically modified and how adaptation strategies can be expressed. In Section 5 our approach will be contextualized through the definition of an adaptive parallel module, which is a building block for composing complex and distributed adaptive computations. Finally, in Section 6 we will describe a set of experimental results that show the viability of our approach, and in Section 7 we will give the concluding remarks of this chapter.
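    As a rough, hypothetical illustration of the kind of run-time adaptation this chapter discusses (not the authors' actual framework), the sketch below shows a control loop that adjusts the parallelism degree of a parallel module when its monitored service time violates a QoS target, with a hysteresis margin standing in for the stability of reconfiguration decisions. The names (ParallelModule, adapt) and all constants are invented for the example.

# Illustrative sketch only: QoS-driven adaptation of a hypothetical parallel module.

class ParallelModule:
    """Hypothetical compute-intensive component whose parallelism can change at run time."""
    def __init__(self, workers=4):
        self.workers = workers

    def measured_service_time(self):
        # In a real system this would come from run-time monitoring;
        # here it is faked as inversely proportional to the number of workers.
        base_time_per_task = 8.0
        return base_time_per_task / self.workers

def adapt(module, qos_target, margin=0.15, max_workers=64):
    """One adaptation step: reconfigure only when the QoS violation (or slack)
    exceeds the hysteresis margin, to limit back-and-forth reconfigurations."""
    t = module.measured_service_time()
    if t > qos_target * (1 + margin) and module.workers < max_workers:
        module.workers += 1          # scale out: the QoS constraint is being violated
    elif t < qos_target * (1 - margin) and module.workers > 1:
        module.workers -= 1          # scale in: release resources while staying within QoS
    return module.workers

if __name__ == "__main__":
    m = ParallelModule(workers=2)
    for step in range(10):
        workers = adapt(m, qos_target=1.0)
        print(f"step {step}: workers={workers}, service_time={m.measured_service_time():.2f}")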

    Transmit and Receive Signal Processing for MIMO Terrestrial Broadcast Systems

    Full text link
    Multiple-Input Multiple-Output (MIMO) technology in Digital Terrestrial Television (DTT) networks has the potential to increase the spectral efficiency and improve network coverage to cope with the competition for limited spectrum (e.g., assignment of the digital dividend and spectrum demands of mobile broadband), the appearance of new high data rate services (e.g., ultra-high definition TV - UHDTV), and the ubiquity of the content (e.g., fixed, portable, and mobile). It is widely recognised that MIMO can provide multiple benefits such as additional receive power due to array gain, higher resilience against signal outages due to spatial diversity, and higher data rates due to the spatial multiplexing gain of the MIMO channel. These benefits can be achieved without additional transmit power or additional bandwidth, but normally come at the expense of higher system complexity at the transmitter and receiver ends. The final system performance gains due to the use of MIMO depend directly on physical characteristics of the propagation environment such as spatial correlation, antenna orientation, and/or power imbalances experienced at the transmit aerials. Additionally, due to complexity constraints and finite-precision arithmetic at the receivers, it is crucial for the overall system performance to carefully design specific signal processing algorithms. This dissertation focuses on transmit and receive signal processing for DTT systems using MIMO-BICM (Bit-Interleaved Coded Modulation) without a feedback channel from the receiver terminals to the transmitter. At the transmitter side, this thesis presents investigations on MIMO precoding in DTT systems to overcome system degradations due to different channel conditions. At the receiver side, the focus is on the design and evaluation of practical MIMO-BICM receivers based on quantized information and its impact on both the in-chip memory size and the system performance. These investigations are carried out within the standardization processes of DVB-NGH (Digital Video Broadcasting - Next Generation Handheld), the handheld evolution of DVB-T2 (Terrestrial - Second Generation), and ATSC 3.0 (Advanced Television Systems Committee - Third Generation), which incorporate MIMO-BICM as a key technology to overcome the Shannon limit of single-antenna communications. Nonetheless, this dissertation employs a generic approach in the design, analysis and evaluation, hence the results and ideas can be applied to other wireless broadcast communication systems using MIMO-BICM.
    Vargas Paredero, DE. (2016). Transmit and Receive Signal Processing for MIMO Terrestrial Broadcast Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/66081
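    As a loose illustration of the transmit-side processing discussed above (not the DVB-NGH or ATSC 3.0 precoding schemes themselves), the following Python/NumPy sketch applies a 2x2 rotation precoder to a pair of spatially multiplexed QPSK symbols and recovers them with a simple zero-forcing receiver; the rotation angle, channel model, and noise level are arbitrary assumptions.

# Toy 2x2 MIMO precoding example; all parameters are illustrative.
import numpy as np

def rotation_precoder(theta):
    """2x2 real rotation applied to the pair of spatially multiplexed symbols."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

rng = np.random.default_rng(0)
# Two QPSK symbols to be sent from two transmit antennas.
s = (rng.choice([-1, 1], 2) + 1j * rng.choice([-1, 1], 2)) / np.sqrt(2)
W = rotation_precoder(np.pi / 8)           # illustrative rotation angle
x = W @ s                                  # precoded transmit vector
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = H @ x + noise                          # received vector on two receive antennas
# Zero-forcing estimate of the transmitted symbols (simplest possible receiver).
s_hat = np.linalg.solve(H @ W, y)
print(np.round(s, 3), np.round(s_hat, 3))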

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    Get PDF
    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of the algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
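    One core step described above, recovering the target position from the sensor position and orientation vectors, can be illustrated with a simple two-ray triangulation; the following Python sketch (one possible implementation under assumed inputs, not the dissertation's method) computes the least-squares closest point between two lines of sight.

# Triangulate a target from two sensor positions and pointing directions (illustrative).
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest approach between rays p1 + t*d1 and p2 + s*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for the ray parameters t, s that minimize the distance between the rays.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t, s = np.linalg.solve(A, b)
    return ((p1 + t * d1) + (p2 + s * d2)) / 2.0

# Two simulated orbital sensors looking at a target at (10, 5, 0).
target = np.array([10.0, 5.0, 0.0])
p1, p2 = np.array([0.0, 0.0, 100.0]), np.array([50.0, 0.0, 100.0])
estimate = triangulate(p1, target - p1, p2, target - p2)
print(estimate)   # ~[10, 5, 0] in this noiseless case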

    Recent Progress in Image Deblurring

    Full text link
    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must also derive an accurate blur kernel. Considering the critical role of image restoration in modern imaging systems to provide high-quality images under complex environments such as motion, undesirable lighting conditions, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how to handle the ill-posedness which is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. In spite of achieving a certain level of development, image deblurring, especially the blind case, is limited in its success by complex application conditions which make the blur kernel hard to obtain and often spatially variant. We provide a holistic understanding and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, as well as a discussion of promising future directions are also presented. Comment: 53 pages, 17 figures
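    For readers unfamiliar with the non-blind case, the following Python sketch shows one of the simplest frequency-domain approaches, Wiener deconvolution with a known kernel; the kernel, image, and regularization constant are illustrative assumptions, and the blind methods surveyed in the paper would additionally have to estimate the kernel.

# Minimal non-blind Wiener deconvolution sketch; parameters are illustrative.
import numpy as np

def wiener_deconvolve(blurred, kernel, k=0.01):
    """Wiener filter: conj(F(kernel)) / (|F(kernel)|^2 + k), applied in the Fourier domain."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    restored = np.fft.ifft2(B * np.conj(K) / (np.abs(K) ** 2 + k))
    return np.real(restored)

# Toy example: blur a synthetic image with a 5x5 box kernel, then invert.
rng = np.random.default_rng(1)
image = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel, s=image.shape)))
print(np.mean((wiener_deconvolve(blurred, kernel) - image) ** 2))  # small residual error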

    Generalized asset integrity games

    Get PDF
    Generalized assets represent a class of multi-scale adaptive state-transition systems with domain-oblivious performance criteria. The governance of such assets must proceed without exact specifications, objectives, or constraints. Decision making must rapidly scale in the presence of uncertainty, complexity, and intelligent adversaries. This thesis formulates an architecture for generalized asset planning. Assets are modelled as dynamical graph structures which admit topological performance indicators, such as dependability, resilience, and efficiency. These metrics are used to construct robust model configurations. A normalized compression distance (NCD) is computed between a given active/live asset model and a reference configuration to produce an integrity score. The utility derived from the asset is monotonically proportional to this integrity score, which represents the proximity to ideal conditions. The present work considers the situation between an asset manager and an intelligent adversary, who act within a stochastic environment to control the integrity state of the asset. A generalized asset integrity game engine (GAIGE) is developed, which implements anytime algorithms to solve a stochastically perturbed two-player zero-sum game. The resulting planning strategies seek to stabilize deviations from minimax trajectories of the integrity score. Results demonstrate the performance and scalability of the GAIGE. This approach represents a first step towards domain-oblivious architectures for complex asset governance and anytime planning.
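    The integrity score described above can be illustrated with a minimal normalized compression distance computation; the following Python sketch uses zlib as the compressor and a made-up edge-list serialization of the asset graph, which is an assumption rather than the thesis's actual model format.

# Illustrative NCD-based integrity score; the serialization format is invented.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with zlib as the compressor."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def integrity_score(live_model: bytes, reference_model: bytes) -> float:
    """Higher when the live model is close to the reference configuration (NCD near 0)."""
    return 1.0 - ncd(live_model, reference_model)

reference = b"A-B;B-C;C-D;D-A;A-C"          # toy edge-list serialization of the asset graph
degraded = b"A-B;B-C"                        # asset after losing several edges
print(integrity_score(reference, reference), integrity_score(degraded, reference))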

    Multi-Level Multi-Objective Programming and Optimization for Integrated Air Defense System Disruption

    Get PDF
    The U.S. military's ability to project military force is being challenged. This research develops and demonstrates the application of three respective sensor location, relocation, and network intrusion models to provide the mathematical basis for the strategic engagement of emerging technologically advanced, highly-mobile, Integrated Air Defense Systems. First, we propose a bilevel mathematical programming model for locating a heterogeneous set of sensors to maximize the minimum exposure of an intruder's penetration path through a defended region. Next, we formulate a multi-objective, bilevel optimization model to relocate surviving sensors to maximize an intruder's minimal expected exposure to traverse a defended border region, minimize the maximum sensor relocation time, and minimize the total number of sensors requiring relocation. Lastly, we present a trilevel, attacker-defender-attacker formulation for the heterogeneous sensor network intrusion problem to optimally incapacitate a subset of the defender's sensors and degrade a subset of the defender's network to ultimately determine the attacker's optimal penetration path through a defended network.
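    The max-min structure of the first (sensor location) model can be illustrated with a toy bilevel computation: the inner level finds the intruder's minimum-exposure crossing of a small grid, and the outer level enumerates sensor placements to maximize that value. The grid, exposure model, and brute-force outer search in the Python sketch below are illustrative assumptions, not the paper's mathematical programming formulation.

# Toy max-min sensor location: defender places sensors, intruder takes the cheapest path.
import heapq
from itertools import combinations

N = 6  # grid is N x N; the intruder crosses from the left edge to the right edge

def exposure(cell, sensors):
    """Exposure accumulated in a cell: inverse-square falloff summed over all sensors."""
    r, c = cell
    return sum(1.0 / (1.0 + (r - sr) ** 2 + (c - sc) ** 2) for sr, sc in sensors)

def min_exposure_path(sensors):
    """Inner (intruder) problem: cheapest left-to-right crossing under the exposure field."""
    dist = {(r, 0): exposure((r, 0), sensors) for r in range(N)}
    pq = [(d, cell) for cell, d in dist.items()]
    heapq.heapify(pq)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue
        if c == N - 1:
            return d    # first right-edge cell popped has the minimum total exposure
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < N and 0 <= nc < N:
                nd = d + exposure((nr, nc), sensors)
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# Outer (defender) problem: place 2 sensors to maximize the intruder's best response.
candidates = [(r, c) for r in range(N) for c in range(N)]
best = max(combinations(candidates, 2), key=min_exposure_path)
print(best, round(min_exposure_path(best), 3))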

    QUANTITATIVE SAFETY ASSESSMENT OF AIR TRAFFIC CONTROL SYSTEMS THROUGH SYSTEM CONTROL CAPACITY

    Get PDF
    Quantitative Safety Assessments (QSAs) are essential to the verification of safety benefits and the regulation of developmental changes in safety-critical systems like Air Traffic Control (ATC) systems. The effectiveness of these assessments is particularly desirable today for the safe implementation of revolutionary ATC overhauls like NextGen and SESAR. QSAs of ATC systems are, however, challenged by system complexity and a lack of accident data.

    Image Restoration

    Get PDF
    This book represents a sample of recent contributions of researchers from all around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, but this book is also an occasion to highlight new topics of research related to the emergence of original imaging devices. From these arise some challenging problems related to image reconstruction/restoration that open the way to new fundamental scientific questions closely related to the world we interact with.

    QoS-aware fine-grained power management in networked computing systems

    Get PDF
    Power is a major design concern of today's networked computing systems, from low-power battery-powered mobile and embedded systems to high-power enterprise servers. Embedded systems are required to be power efficient because most of them are powered by batteries with limited capacity. A similar concern about power expenditure arises in enterprise server environments due to cooling requirements, power delivery limits, electricity costs, and environmental pollution. The power consumption of networked computing systems includes that on the circuit board and that for communication. In the context of networked real-time systems, the power dissipated on wireless communication is more significant than that on the circuit board. We focus on packet scheduling for wireless real-time systems with renewable energy resources. In such a scenario, data with a higher level of importance must be transmitted periodically. We formulate this packet scheduling problem as an NP-hard reward maximization problem with time and energy constraints. An optimal solution with pseudo-polynomial time complexity is presented. In addition, we propose a sub-optimal solution with polynomial time complexity. Circuit board, and especially processor, power consumption is still the major source of system power consumption. We provide a general-purpose, practical and comprehensive power management middleware for networked computing systems to manage circuit board power consumption and thus affect system-level power consumption. It has the functionalities of power and performance monitoring, power management (PM) policy selection and PM control, as well as energy efficiency analysis. This middleware includes an extensible PM policy library. We implemented a prototype of this middleware on Base Band Units (BBUs) with three PM policies enclosed. These policies have been validated on different platforms, such as enterprise servers, virtual environments and BBUs. In enterprise environments, the power dissipation on the circuit board dominates. Regulating the computing resources on board has a significant impact on power consumption. Dynamic Voltage and Frequency Scaling (DVFS) is an effective technique to reduce energy consumption. We investigate system-level power management in order to avoid system failures due to power capacity overload or overheating. This management needs to control the power consumption in an accurate and responsive manner, which cannot be achieved by existing black-box feedback control. Thus we present a model-predictive feedback controller to regulate processor frequency so that the power budget can be satisfied without significant loss of performance. In addition to power guarantees, performance with respect to service-level agreements (SLAs) must be guaranteed as well. The proliferation of virtualization technology imposes new challenges on power management due to resource sharing. It is hard to achieve optimization of both power and performance on shared infrastructures due to system dynamics. We propose vPnP, a feedback control based coordination approach providing guarantees on application-level performance and underlying physical host power consumption in virtualized environments. This system can adapt gracefully to workload changes. The preliminary results show its flexibility to achieve different levels of trade-offs between power and performance as well as its robustness over a variety of workloads.
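    The pseudo-polynomial optimal solution mentioned above can be illustrated, under simplifying assumptions, with a knapsack-style dynamic program over an integer energy budget; the sketch below ignores the timing constraints of the real problem, and the packet rewards and energy costs are invented.

# Pseudo-polynomial reward maximization under an energy budget (knapsack-style sketch).

def max_reward(packets, energy_budget):
    """packets: list of (reward, integer energy cost).
    Returns the best total reward achievable within the energy budget."""
    best = [0] * (energy_budget + 1)
    for reward, cost in packets:
        # Iterate energy levels downward so each packet is transmitted at most once.
        for e in range(energy_budget, cost - 1, -1):
            best[e] = max(best[e], best[e - cost] + reward)
    return best[energy_budget]

packets = [(10, 3), (7, 2), (4, 1), (9, 4)]    # (importance-based reward, energy units)
print(max_reward(packets, energy_budget=6))     # 21: the packets with rewards 10, 7 and 4 fit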
    It is desirable to improve the energy efficiency of systems, such as BBUs, hosting soft real-time applications. We propose a power management strategy for controlling delay and minimizing power consumption using DVFS. We use the Robbins-Monro (RM) stochastic approximation method to estimate the delay quantile. We couple a fuzzy controller with the RM algorithm to scale the CPU frequency so as to maintain performance within the specified QoS.
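    A minimal sketch of the Robbins-Monro quantile-tracking idea is given below; the delay model, step sizes, and the plain proportional frequency adjustment (standing in for the fuzzy controller) are all illustrative assumptions.

# Robbins-Monro delay-quantile tracking driving a simple frequency-scaling rule (sketch).
import random

def run(target_quantile=0.95, delay_bound=10.0, steps=5000):
    q_est = 0.0            # running estimate of the target delay quantile
    freq = 1.0             # normalized CPU frequency, kept within [0.2, 1.0]
    for n in range(1, steps + 1):
        delay = random.expovariate(freq / 2.0)   # lower frequency -> longer delays
        # Robbins-Monro update: move the estimate toward the target quantile of the delays;
        # the step size is floored so the estimate keeps tracking as the frequency changes.
        step = max(1.0 / n, 0.01)
        q_est += step * (target_quantile - (1.0 if delay <= q_est else 0.0))
        # Crude stand-in for the fuzzy controller: raise the frequency when the estimated
        # quantile violates the delay bound, lower it (to save power) when there is slack.
        freq += 0.001 * (1.0 if q_est > delay_bound else -1.0)
        freq = min(max(freq, 0.2), 1.0)
    return q_est, freq

print(run())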