Leveraging Semantic Web Technologies for Managing Resources in a Multi-Domain Infrastructure-as-a-Service Environment
This paper reports on experience with using semantically-enabled network
resource models to construct an operational multi-domain networked
infrastructure-as-a-service (NIaaS) testbed called ExoGENI, recently funded
through NSF's GENI project. A defining property of NIaaS is the deep
integration of network provisioning functions alongside the more common storage
and computation provisioning functions. Resource provider topologies and user
requests can be described using network resource models with common base
classes for fundamental cyber-resources (links, nodes, interfaces) specialized
via virtualization and adaptations between networking layers to specific
technologies.
This problem space gives rise to a number of application areas where semantic
web technologies become highly useful: common information models and resource
class hierarchies simplify resource descriptions from multiple providers, and
pathfinding and topology-embedding algorithms rely on query abstractions as
building blocks.
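The idea of common base classes for cyber-resources plus a pathfinding query abstraction can be sketched as follows. This is only an illustration under assumed names (ExoGENI's actual models are richer RDF/OWL ontologies, not Python classes): a minimal class hierarchy for nodes, links and interfaces, and a breadth-first path query over the topology they induce.

```python
from dataclasses import dataclass, field
from collections import deque

# Hypothetical base classes mirroring the fundamental cyber-resources
# (nodes, links, interfaces) described above; class and field names are
# assumptions for illustration, not ExoGENI's actual information model.

@dataclass(frozen=True)
class Interface:
    name: str

@dataclass
class Node:
    name: str
    interfaces: list = field(default_factory=list)

@dataclass
class Link:
    a: str  # interface name at one end
    b: str  # interface name at the other end

def find_path(nodes, links, src, dst):
    """BFS over the node graph induced by links: the kind of query
    abstraction a topology-embedding algorithm uses as a building block."""
    owner = {i.name: n.name for n in nodes for i in n.interfaces}
    adj = {}
    for l in links:
        na, nb = owner[l.a], owner[l.b]
        adj.setdefault(na, set()).add(nb)
        adj.setdefault(nb, set()).add(na)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

For example, a virtual machine attached through a switch to a second virtual machine yields the path `["vm1", "sw", "vm2"]`, which an embedder could then map onto provisioned circuits.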
The paper describes how the semantic resource description models enable
ExoGENI to autonomously instantiate on-demand virtual topologies of virtual
machines provisioned from cloud providers and linked by on-demand virtual
connections acquired from multiple autonomous network providers, serving a
variety of applications ranging from distributed-system experiments to
high-performance computing.
An Enhanced Architecture for LARIISA: An Intelligent System for Decision Making and Service Provision for e-Health using the cloud
Health care services can be scarce and expensive in some countries, especially in isolated regions. A lack of information can degrade health care services, for example through ineffective resource allocation or failures in epidemiological prediction. This paper proposes an architecture for a decision-making and service-provisioning system in the health care context. It encompasses and integrates data produced by environmental sensors installed in assisted homes, medical data sets, domain-specific and semantically enriched data sets, and all data generated and collected by applications installed on mobile phones, wearable devices, desktops, web servers, and smart televisions. The LARIISA architecture is presented as a platform to manage, provide, and launch services that monitor and analyze data to supply relevant information to decision makers and health care actors participating in the health care supply chain.
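The integration step described above (fusing readings from home sensors, mobile applications and medical data sets into decision-support signals) can be sketched minimally. The source names, reading format and alert rule below are assumptions for illustration, not components specified by LARIISA.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and the threshold rule are
# hypothetical, chosen to show heterogeneous-source fusion feeding a
# decision-support step such as prioritizing resource allocation.

@dataclass
class Reading:
    source: str  # e.g. "home_sensor", "mobile_app", "medical_db"
    region: str
    metric: str  # e.g. "reported_fever_cases"
    value: float

def regional_totals(readings, metric):
    """Fuse readings from heterogeneous sources into per-region totals."""
    totals = {}
    for r in readings:
        if r.metric == metric:
            totals[r.region] = totals.get(r.region, 0.0) + r.value
    return totals

def flag_regions(totals, threshold):
    """Flag regions whose fused signal exceeds a threshold, e.g. for
    epidemiological follow-up or resource reallocation."""
    return sorted(region for region, v in totals.items() if v > threshold)
```

A real deployment would of course replace the threshold rule with the platform's analysis services; the point is only that all sources feed one common reading shape before any decision logic runs.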
RESTful framework for collaborative internet of things based on IEC 61850
The content of chapters 2 and 3 is subject to confidentiality.
161 p. In 1991, Mark Weiser formulated the Ubiquitous Computing paradigm, defining the concept of a Smart Environment as a physical space full of devices, deeply integrated into the environment, with identification, sensing and actuation capabilities. The Internet of Things (IoT) expands the location scope of these ubiquitous devices and services, represented as things, from a local environment to the internet as a global network. For the implementation of these application scenarios, collaboration among things is one of the main research challenges. The goal of this collaboration is to satisfy global needs by combining individual services. This thesis proposes a collaborative architecture among things deployed on the internet. The technologies around SOAP/XML Web Services, suitable for IoT, support key aspects of a collaborative system such as publication, discovery, control, and event management of devices. As an alternative, REST has gained ground in this area, being considered a lighter, simpler and more natural option for communication on the internet. However, there are no protocols for discovery and event management for REST resources. This thesis addresses that gap by proposing a specification of these protocols for REST architectures. Another important aspect is the application-level representation of distributed things. Among the proposals for standardizing information and communication models in this domain that could be applied, in a similar way, to IoT, the IEC 61850 standard stands out. However, the communication protocols defined by the standard are not suitable for IoT.
This thesis analyzes the suitability of IEC 61850 for IoT scenarios and proposes a REST communication protocol for its services. Finally, it addresses the dependability requirements that an IoT architecture must meet in application domains related to health or functional safety systems.
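Since the thesis observes that REST lacks standard discovery and eventing protocols, the gap can be illustrated with a minimal registry that would back a webhook-style eventing endpoint. The endpoint shapes and callback mechanism here are assumptions for illustration, not the protocol specification the thesis actually proposes.

```python
import uuid

# Sketch of the state behind a hypothetical REST eventing interface,
# e.g. POST /resources/<id>/subscriptions with a callback URL in the
# body. Names and semantics are assumptions, not the thesis's design.

class RestEventBroker:
    def __init__(self):
        self._subs = {}  # resource id -> {subscription id: callback URL}

    def subscribe(self, resource, callback_url):
        """Register a callback; the returned id would name the
        subscription resource (DELETE on it unsubscribes)."""
        sid = str(uuid.uuid4())
        self._subs.setdefault(resource, {})[sid] = callback_url
        return sid

    def unsubscribe(self, resource, sid):
        return self._subs.get(resource, {}).pop(sid, None) is not None

    def targets(self, resource):
        """Callback URLs an event representation would be POSTed to."""
        return list(self._subs.get(resource, {}).values())
```

Modeling the subscription itself as a REST resource (created, addressed and deleted by URI) keeps the eventing mechanism inside REST's uniform interface, which is the property SOAP-era protocols like WS-Eventing provided and plain REST lacks.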
Unified radio and network control across heterogeneous hardware platforms
Experimentation is an important step in the investigation of techniques for handling spectrum scarcity or the development of new waveforms in future wireless networks. However, it is impractical and not cost-effective to construct custom platforms for each future network scenario to be investigated. This problem is addressed by defining Unified Programming Interfaces that allow common access to several platforms for experimentation-based prototyping, research, and development purposes. The design of these interfaces is driven by a diverse set of scenarios that capture the functionality relevant to future network implementations while keeping them as generic as possible. Herein, the definition of this set of scenarios is presented, as well as the architecture for supporting experimentation-based wireless research over multiple hardware platforms. The proposed architecture for experimentation incorporates both local and global unified interfaces to control any aspect of a wireless system while being completely agnostic to the actual technology incorporated. Control is feasible from the low-level features of individual radios to the entire network stack, including hierarchical control combinations. A testbed enabling the use of the above architecture employs a backbone network to extract measurements and observe the overall behaviour of the system under test without imposing further communication overhead on the actual experiment. Based on the aforementioned architecture, a system is proposed that is able to support the advancement of intelligent techniques for future networks through experimentation while decoupling promising algorithms and techniques from the capabilities of a specific hardware platform.
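The pattern of a unified, platform-agnostic control interface can be sketched as one abstract control surface with per-platform adapters behind it. The class and parameter names below are illustrative assumptions, not the actual Unified Programming Interfaces defined by this architecture.

```python
from abc import ABC, abstractmethod

# Sketch of a unified control surface: a global controller drives any
# number of radios through the same abstract interface, agnostic to the
# hardware behind each adapter. All names here are hypothetical.

class RadioControl(ABC):
    @abstractmethod
    def set_parameter(self, name, value): ...

    @abstractmethod
    def get_parameter(self, name): ...

class MockSdrPlatform(RadioControl):
    """Stand-in for a concrete platform adapter (e.g. an SDR driver)."""
    def __init__(self):
        self._params = {"center_freq_hz": 2.412e9, "tx_gain_db": 10}

    def set_parameter(self, name, value):
        if name not in self._params:
            raise KeyError(f"unsupported parameter: {name}")
        self._params[name] = value

    def get_parameter(self, name):
        return self._params[name]

class NetworkController:
    """Global controller steering many radios through the same
    interface, enabling hierarchical control combinations."""
    def __init__(self, radios):
        self.radios = radios

    def retune_all(self, freq_hz):
        for radio in self.radios:
            radio.set_parameter("center_freq_hz", freq_hz)
```

The payoff of the abstraction is that `NetworkController` never mentions a platform type: swapping `MockSdrPlatform` for a different adapter changes nothing above the interface, which is exactly the decoupling of algorithms from hardware capabilities the abstract argues for.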
Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future of the Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been
confronted with a bewildering range of media, services and applications, and of technological
innovations concerning media formats, wireless networks, terminal types and capabilities. There is
little evidence that the pace of this innovation is slowing. Today, over one billion users access the
Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media
file, and over 47 million of them do so regularly, searching in more than 160 Exabytes of content. In
the near future these numbers are expected to rise exponentially. Internet content is expected to
increase by at least a factor of 6, rising to more than 990 Exabytes before 2012, fuelled mainly by the
users themselves. Moreover, it is envisaged that in the near- to mid-term future, the Internet will
provide the means to share and distribute (new) multimedia content and services with superior quality
and striking flexibility, in a trusted and personalized way, improving citizens' quality of life,
working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications “on the move”, like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed in this white paper aiming to describe the status, the state-of-the art, the challenges and the way
ahead in the area of Content Aware media delivery platforms
Towards continuously programmable networks
While programmability has been a feature of network devices for a long time, the past decade has seen significant enhancement of programming capability for network functions and nodes, spearheaded by the ongoing trend towards softwarization and cloudification. In this context, new design principles and technology enablers are introduced (Section 7.2) which reside at: (i) the service/application provisioning level, (ii) the network and resource management level, and (iii) the network deployment and connectivity level.