10 research outputs found
The Case for an Adaptive Integration Framework for Data Aggregation/Dissemination in Service-Oriented Architectures
Abstract: The migration to Service-Oriented Architectures (SOA) …
A Network Architecture for Using the Material Exchange Format in Live Productions in the Professional Television Studio
Live production in the television studio is characterized by demanding requirements on quality, timing, and reliability in the creation of audio and video material for distribution over broadcast channels. In the past, these requirements could only be met with specialized and therefore costly equipment. With developments in the electronics sector, a multitude of additional distribution channels must now be supplied with content; at the same time, powerful devices based on standard IT technologies are available that can be used on the broadcaster's side to produce material and additionally perform data processing that makes production workflows more efficient. Against this background, this dissertation addresses the application of standard IT technologies in the real-time-critical domain of television studio production, with the particular goal of integrating metadata processing. To this end, the work combines standard IT technologies and extends them with concepts that account for the special requirements of live production in a television studio. Within this work, a transmission technology for data exchange in the studio is modeled from standard components. Parameters for evaluating network performance and strategies for resource sharing are discussed. Subsequently, processors for handling essence data are compared and integrated via the PC platform into a universal data-processing unit. The analysis of components and workflows leads to a fine-grained latency assessment that provides a basis for optimization strategies aimed at a low-latency implementation. The goal of metadata integration is achieved by incorporating the Material Exchange Format, which allows the synchronized transmission of essence and metadata. The work further identifies application scenarios in which metadata can also be used in real-time-critical live productions. Finally, a prototype implementation provides the basis for verifying the claims made.
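The fine-grained latency view described above can be illustrated with a minimal sketch; the stage names, millisecond figures, and budget below are hypothetical illustrations, not values from the dissertation:

```python
def end_to_end_latency(stages):
    """Sum per-component latencies of a studio processing chain to
    check the result against a live-production latency budget."""
    return sum(stages.values())

# Hypothetical chain: capture, IP transport, processing, playout (ms).
chain = {"capture": 20.0, "network": 0.5, "processing": 16.7, "playout": 20.0}
budget_ms = 80.0
within_budget = end_to_end_latency(chain) <= budget_ms
```

Breaking latency down per component in this way is what makes the targeted, low-latency optimizations possible: the dominant stage can be identified and attacked first.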
Energy Saving in QoS Fog-supported Data Centers
One of the most important challenges that cloud providers face amid the explosive growth of data is reducing the energy consumption of their modern data centers. The majority of current research focuses on energy-efficient resource management in the Infrastructure-as-a-Service (IaaS) model through "resource virtualization", i.e., consolidation of virtual machines onto physical machines. However, current virtualized data centers do not support communication- and computing-intensive real-time applications such as big-data stream computing (info-mobility applications, real-time video co-decoding). Indeed, imposing hard limits on the overall per-job computing-plus-communication delay forces the networked computing infrastructure to quickly adapt its resource utilization to the (possibly unpredictable and abrupt) time fluctuations of the offered workload.
Recently, fog computing centers have emerged as promising commodities in the Internet virtual computing landscape, but they raise energy consumption and create critical issues on such platforms. It is therefore desirable to devise green solutions (i.e., energy-aware provisioning) for fog-supported, delay-sensitive web applications. Moreover, traffic-engineering-based methods can dynamically scale the number of active servers to match the current workload. Hence, a flexible, reliable technological paradigm and resource allocation algorithm are needed that pay attention to the energy consumed. Such algorithms should automatically adapt to time-varying workloads and support joint reconfiguration and orchestration of the virtualized computing-plus-communication resources available at the computing nodes. In addition, these methods should allow things devices to operate under real-time constraints on the allowed computing-plus-communication delay and service latency.
The purpose of this thesis is: i) to propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, detailing the main building blocks and services of the corresponding technological platform and protocol stack; ii) to propose a dynamic and adaptive energy-aware algorithm that models and manages the Fog Nodes (FNs) of virtualized networked data centers, so as to minimize the resulting networking-plus-computing average energy consumption; and iii) to propose a novel Software-as-a-Service (SaaS) fog computing platform that integrates user applications over the FoE. The emerging use of SaaS fog computing centers as an Internet virtual computing commodity is intended to support delay-sensitive applications.
The virtualized fog node operates at the middleware layer of the underlying protocol stack and comprises: i) admission control of the offered input traffic; ii) balanced control and dispatching of the admitted workload; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled Virtual Machines (VMs) instantiated on the parallel computing platform; and iv) rate control of the traffic injected into the TCP/IP connection.
The salient features of this algorithm are that: i) it is adaptive and admits a distributed, scalable implementation; ii) it can provide hard QoS guarantees in terms of the minimum/maximum instantaneous rate of the traffic delivered to the client, instantaneous goodput, and total processing delay; and iii) it explicitly accounts for the dynamic interaction between computing and networking resources in order to maximize the resulting energy efficiency. The actual performance of the proposed scheduler in the presence of: i) client mobility; ii) wireless fading; iii) reconfiguration and two-threshold consolidation costs of the underlying networked computing platform; and iv) abrupt changes in the transport quality of the available TCP/IP mobile connection, is numerically tested and compared against some state-of-the-art static schedulers, under both synthetically generated and measured real-world workload traces.
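One piece of the reconfiguration logic above can be illustrated with a hedged sketch of DVFS frequency selection under a delay constraint; the cubic dynamic-power model, the constant k, and the function name are illustrative assumptions, not the thesis's actual algorithm:

```python
def min_energy_frequency(cycles, deadline_s, freqs_hz, k=1e-27):
    """Pick the slowest DVFS frequency that still meets the per-job
    deadline. With dynamic power ~ k*f^3, energy per job is roughly
    k*f^2*cycles, so energy is minimized at the lowest feasible speed."""
    feasible = [f for f in freqs_hz if cycles / f <= deadline_s]
    if not feasible:
        return None  # deadline infeasible even at the maximum frequency
    f_best = min(feasible)  # convex energy => slowest feasible frequency
    return f_best, k * f_best ** 2 * cycles
```

This captures the core tension the scheduler manages: hard computing-plus-communication delay limits pull frequencies up, while energy minimization pulls them down to the slowest feasible setting.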
Unified Role Assignment Framework For Wireless Sensor Networks
Wireless sensor networks are made possible by the continuing improvements in embedded sensor, VLSI, and wireless radio technologies. Currently, one of the important challenges in sensor networks is the design of a systematic network management framework that allows localized and collaborative resource control uniformly across all application services such as sensing, monitoring, tracking, data aggregation, and routing.
Research in wireless sensor networks is currently oriented toward a cross-layer network abstraction that supports appropriate fine- or coarse-grained resource controls for energy efficiency. In that regard, we have designed a unified role-based service paradigm for wireless sensor networks. We pursue this by first developing a Role-Based Hierarchical Self-Organization (RBHSO) protocol that organizes a connected dominating set (CDS) of nodes called dominators. This is done by hierarchically selecting nodes that possess cumulatively high energy, connectivity, and sensing capabilities in their local neighborhood. The RBHSO protocol then assigns specific tasks such as sensing, coordination, and routing to appropriate dominators, which end up playing a certain role in the network.
Roles, though abstract and implicit, expose role-specific resource controls by way of role assignment and scheduling. Based on this concept, we have designed a Unified Role-Assignment Framework (URAF) to model application services as roles played by local in-network sensor nodes, with sensor capabilities used as rules for role identification. The URAF abstracts domain-specific role attributes through three models: the role energy model, the role execution-time model, and the role service-utility model. The framework then generalizes resource management for services by providing abstractions for controlling the composition of a service in terms of roles and their assignment, reassignment, and scheduling. To the best of our knowledge, a generic role-based framework that provides a simple and unified network management solution for wireless sensor networks has not been proposed previously.
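The dominator election described above (selecting nodes with cumulatively high energy, connectivity, and sensing capability in their local neighborhood) can be sketched as a local weighted ranking; the attribute names and weights below are hypothetical illustrations, not the protocol's actual metric:

```python
def dominator_score(node, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of a node's normalized residual energy,
    connectivity degree, and sensing capability (each in [0, 1])."""
    w_e, w_c, w_s = weights
    return (w_e * node["energy"]
            + w_c * node["connectivity"]
            + w_s * node["sensing"])

def elect_dominators(neighborhood, k=1):
    """Locally elect the k highest-scoring nodes as dominators."""
    return sorted(neighborhood, key=dominator_score, reverse=True)[:k]
```

Because the ranking uses only locally observable attributes, each neighborhood can elect its dominators without global coordination, which is what makes the organization hierarchical and self-contained.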
Network Performance Management Using Application-centric Key Performance Indicators
The Internet and intranets are viewed as capable of supplying Anything, Anywhere, Anytime, and e-commerce, e-government, e-community, and military C4I now deploy many and varied applications to serve their needs. Network management is currently centralized in operations centers. To assure customer satisfaction with network performance, operators typically plan, configure, and monitor the network devices to ensure an excess of bandwidth; that is, they overprovision. If this proves uneconomical, or if complex and poorly understood interactions of equipment, protocols, and application traffic degrade performance and create customer dissatisfaction, another, more application-centric way of managing the network will be needed. This research investigates a new qualitative class of network performance measures derived from the current quantitative metrics known as quality of service (QoS) parameters. The proposed class of qualitative indicators focuses on using current network performance measures (QoS values) to derive abstract quality of experience (QoE) indicators by application class. These measures may provide a more user- or application-centric means of assessing network performance, even when some individual QoS parameters approach or exceed specified levels. The mathematics of functional analysis suggests treating the QoS performance values as a vector and, by mapping the degradation of application performance to a characteristic lp-norm curve, calculating a qualitative QoE value (good/poor) for each application class. A similar procedure could calculate a QoE node value (satisfactory/unsatisfactory) to represent the service level of a switch or router for the current mix of application traffic. To demonstrate the utility of this approach, a discrete event simulation (DES) test bed was created in the OPNET telecommunications simulation environment, modeling the topology and traffic of three semi-autonomous networks connected by a backbone.
Scenarios designed to degrade performance by under-provisioning links or nodes are run to evaluate QoE for an access network; the application classes and traffic load are held constant. Future research would include refinement of the mathematics and many additional simulations and scenarios varying other independent variables. Finally, collaboration with researchers in areas as diverse as human-computer interaction (HCI), software engineering, teletraffic engineering, and network management will enhance the concepts modeled.
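The lp-norm mapping from a QoS vector to a qualitative QoE label can be sketched as follows; the weights, exponent p, and threshold are illustrative assumptions rather than values from the study:

```python
def qoe_indicator(qos_degradations, weights, p=2.0, threshold=1.0):
    """Collapse a vector of per-metric QoS degradations (each normalized
    so that 1.0 means 'at its specified limit') into a qualitative QoE
    label via a weighted lp-norm."""
    norm = sum(w * abs(v) ** p
               for w, v in zip(weights, qos_degradations)) ** (1.0 / p)
    return "good" if norm < threshold else "poor"
```

Because the norm aggregates all metrics, a single QoS parameter slightly exceeding its limit need not flip the label, which reflects the paper's point that per-parameter thresholds alone are a poor proxy for user experience.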
Implementation of Host-Based Overlay Multicast to Support Web-Based Services for RT-DVS
Growing demand for the use of Internet/Web-based services in real-time distributed virtual simulation (RT-DVS) and other real-time applications is fueling extensive interest in overlay multicast protocols. These applications demand Quality of Service (QoS) and many-to-many multicast services that are not available in today's underlying Internet services. This paper describes an early implementation of an overlay multicast protocol designed to support many-to-many multicast for RT-DVS applications, called the Extensible Modeling and Simulation Framework Overlay Multicast (XOM). We first describe the architecture and key design considerations of XOM. We then provide preliminary results from lab experiments with our prototype. Our results indicate that we can achieve the performance objectives for supporting web-based services for RT-DVS.
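The core idea of host-based overlay multicast (group members relay messages via unicast along an overlay tree, so no network-layer multicast support is required) can be sketched as a toy illustration; this is not the XOM protocol itself, and the class and method names are invented for the example:

```python
class OverlayNode:
    """A group member that relays each message to its overlay children."""
    def __init__(self, name):
        self.name = name
        self.children = []
        self.received = []

    def deliver(self, msg):
        self.received.append(msg)       # local delivery to the application
        for child in self.children:     # unicast relay down the overlay tree
            child.deliver(msg)

# A small overlay tree: root fans out to a and b; a relays on to leaf.
root = OverlayNode("root")
a, b, leaf = OverlayNode("a"), OverlayNode("b"), OverlayNode("leaf")
root.children = [a, b]
a.children = [leaf]
root.deliver("state-update")
```

In a real protocol each `deliver` hop would be a unicast UDP/TCP send, and tree construction and repair (the hard parts XOM addresses) would run as a separate control plane.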
Wireless Sensor Networks
The aim of this book is to present a few important issues in WSNs from the application, design, and technology points of view. The book highlights power-efficient design issues related to wireless sensor networks and existing WSN applications, and discusses the research efforts being undertaken in this field, putting readers in a good position to understand more advanced research and make contributions of their own. It is believed that this book will serve as a comprehensive reference for graduate students and senior undergraduates who seek to learn about the latest developments in wireless sensor networks.
Calibration of a Maritime Anomaly Detection Algorithm Based on Satellite Data Fusion
Fusing different data sources provides significant support to the decision-making process. This article describes the development of a platform that detects maritime anomalies by fusing data from the Automatic Identification System (AIS) for vessel tracking with satellite images from Synthetic Aperture Radar (SAR). These anomalies are presented to the operator as a set of detections that must be monitored to establish their nature. The detection process first identifies objects within the SAR images by applying CFAR algorithms, and then correlates the detected objects with the data reported through the AIS system.
In this work we report tests performed with different parameter configurations for the detection and association algorithms, analyze the platform's response, and report the parameter combination that yields the best results for the images used.
This is a first step toward our future goal of developing a system that adjusts the parameters dynamically depending on the available images. XVI Workshop Computación Gráfica, Imágenes y Visualización (WCGIV), Red de Universidades con Carreras en Informática (RedUNCI).
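The CFAR object-detection step can be illustrated with a standard one-dimensional cell-averaging CFAR sketch; the guard/training window sizes and false-alarm probability are exactly the kind of parameters the calibration study tunes, and the platform's actual CFAR variant may differ:

```python
def ca_cfar_1d(power, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR: flag cells whose power exceeds a threshold
    scaled from the mean of the surrounding training cells, keeping the
    false-alarm probability (pfa) constant as the noise level varies."""
    n = 2 * train                            # training cells per test cell
    alpha = n * (pfa ** (-1.0 / n) - 1.0)    # scale factor for desired pfa
    half = guard + train
    hits = []
    for i in range(half, len(power) - half):
        leading = power[i - half:i - guard]          # cells before the guard
        trailing = power[i + guard + 1:i + half + 1]  # cells after the guard
        window = leading + trailing
        noise = sum(window) / len(window)
        if power[i] > alpha * noise:
            hits.append(i)
    return hits
```

Raising `pfa` or shrinking the training window makes the detector more permissive, which is why these parameters must be calibrated jointly with the AIS association step.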
WICC 2017 : XIX Workshop de Investigadores en Ciencias de la Computación
Proceedings of the XIX Workshop de Investigadores en Ciencias de la Computación (WICC 2017), held at the Instituto Tecnológico de Buenos Aires (ITBA) on April 27–28, 2017. Red de Universidades con Carreras en Informática (RedUNCI).