Designing Traceability into Big Data Systems
Providing an appropriate level of accessibility and traceability to data or
process elements (so-called Items) in large volumes of data, often
Cloud-resident, is an essential requirement in the Big Data era.
Enterprise-wide data systems need to be designed from the outset to support
usage of such Items across the spectrum of business use rather than from any
specific application view. The design philosophy advocated in this paper is to
drive the design process using a so-called description-driven approach which
enriches models with meta-data and description and focuses the design process
on Item re-use, thereby promoting traceability. Details are given of the
description-driven design of big data systems at CERN, in health informatics
and in business process management. Evidence is presented that the approach
leads to design simplicity and consequent ease of management thanks to loose
typing and the adoption of a unified approach to Item management and usage.
Comment: 10 pages; 6 figures in Proceedings of the 5th Annual International
Conference on ICT: Big Data, Cloud and Security (ICT-BDCS 2015), Singapore,
July 2015. arXiv admin note: text overlap with arXiv:1402.5764,
arXiv:1402.575
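The description-driven idea above can be made concrete with a minimal sketch: each Item carries its own meta-data "description", so applications manage heterogeneous Items uniformly (loose typing) and every access is logged for traceability. All class and attribute names here are illustrative assumptions, not taken from the CERN systems the paper describes.

```python
# Hedged sketch of a description-driven Item model: the description is data,
# not code, so new Item kinds need no new classes (loose typing), and a usage
# log gives the traceability the paper advocates. Names are invented.

class ItemDescription:
    """Meta-data describing an Item: its type, provenance and schema."""
    def __init__(self, item_type, provenance, attributes):
        self.item_type = item_type    # loosely-typed label, e.g. "DetectorRun"
        self.provenance = provenance  # where the Item came from
        self.attributes = attributes  # attribute-name -> type-name mapping

class Item:
    """A data or process element whose handling is driven by its description."""
    def __init__(self, item_id, description, payload):
        self.item_id = item_id
        self.description = description
        self.payload = payload
        self.usage_log = []           # traceability: record every access

    def get(self, attribute):
        self.usage_log.append(("read", attribute))
        return self.payload[attribute]

# Any application interrogates an Item through its description rather than a
# hard-coded class, which is what enables re-use across business uses.
desc = ItemDescription("DetectorRun", "test-beam", {"energy_gev": "float"})
run = Item("run-42", desc, {"energy_gev": 13.0})
assert run.get("energy_gev") == 13.0
assert run.usage_log == [("read", "energy_gev")]
```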
Proposal for an IMLS Collection Registry and Metadata Repository
The University of Illinois at Urbana-Champaign proposes to design, implement, and research a collection-level registry and item-level metadata repository service that will aggregate information about digital collections and items of digital content created using funds from Institute of Museum and Library Services (IMLS) National Leadership Grants. This work will be a collaboration by the University Library and the Graduate School of Library and Information Science. All extant digital collections initiated or augmented under IMLS aegis from 1998 through September 30, 2005 will be included in the proposed collection registry. Item-level metadata will be harvested from collections making such content available using the Open Archives Initiative Protocol for Metadata Harvesting (OAI PMH). As part of this work, project personnel, in cooperation with IMLS staff and grantees, will define and document appropriate metadata schemas, help create and maintain collection-level metadata records, assist in implementing OAI compliant metadata provider services for dissemination of item-level metadata records, and research potential benefits and issues associated with these activities. The immediate outcomes of this work will be the practical demonstration of technologies that have the potential to enhance the visibility of IMLS funded online exhibits and digital library collections and improve discoverability of items contained in these resources. Experience gained and research conducted during this project will make clearer both the costs and the potential benefits associated with such services. Metadata provider and harvesting service implementations will be appropriately instrumented (e.g., customized anonymous transaction logs, online questionnaires for targeted user groups, performance monitors). 
At the conclusion of this project we will submit a final report that discusses tasks performed and lessons learned, presents business plans for sustaining registry and repository services, enumerates and summarizes potential benefits of these services, and makes recommendations regarding future implementations of these and related intermediary and end user interoperability services by IMLS projects.
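The harvesting step in the proposal uses the OAI-PMH protocol, whose ListRecords request is a plain HTTP GET with required `verb` and `metadataPrefix` parameters. The sketch below builds such a request URL and parses Dublin Core titles from a canned sample response; the endpoint URL is a placeholder and no live harvest is performed.

```python
# Minimal OAI-PMH harvesting sketch: build a ListRecords request URL, then
# extract dc:title elements from the XML reply. The base URL is illustrative.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords URL (verb and metadataPrefix are required
    by the protocol; 'set' is optional, for selective harvesting)."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return base_url + "?" + urlencode(params)

url = list_records_url("http://example.org/oai", set_spec="imls")

# Parsing a truncated sample response: item-level Dublin Core metadata lives
# under the oai_dc and dc namespaces.
SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Sample digital collection item</dc:title>
    </oai_dc:dc>
  </metadata></record></ListRecords>
</OAI-PMH>"""
root = ET.fromstring(SAMPLE)
titles = [e.text for e in root.iter("{http://purl.org/dc/elements/1.1/}title")]
assert titles == ["Sample digital collection item"]
```

A real harvester would additionally follow the protocol's `resumptionToken` elements to page through large result sets.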
Internet of robotic things: converging sensing/actuating, hyperconnectivity, artificial intelligence and IoT platforms
The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which are advancing through the use of IoT technology. The IoT influence presents new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing, naming in IoT), dynamic things discoverability and many others. The IoRT presents new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating the intelligent "devices", collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new "cognitive devices" become active participants in IoT applications. This chapter aims to provide an overview of the IoRT concept, technologies, architectures and applications, and to give comprehensive coverage of future challenges, developments and applications.
Survey on Additive Manufacturing, Cloud 3D Printing and Services
Cloud Manufacturing (CM) is the concept of using manufacturing resources in a
service oriented way over the Internet. Recent developments in Additive
Manufacturing (AM) are making it possible to utilise resources ad-hoc as
replacement for traditional manufacturing resources in case of spontaneous
problems in the established manufacturing processes. In order to be of use in
these scenarios the AM resources must adhere to a strict principle of
transparency and service composition in adherence to the Cloud Computing (CC)
paradigm. With this review we provide an overview of CM, AM and relevant
domains, as well as present the historical development of scientific research in
these fields, starting from 2002. Part of this work is also a meta-review of
the domain to further detail its development and structure.
Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure
Big data research has attracted great attention in science, technology,
industry and society. It is developing with the evolving scientific paradigm,
the fourth industrial revolution, and the transformational innovation of
technologies. However, its nature and fundamental challenge have not been
recognized, and its own methodology has not been formed. This paper explores
and answers the following questions: What is big data? What are the basic
methods for representing, managing and analyzing big data? What is the
relationship between big data and knowledge? Can we find a mapping from big
data into knowledge space? What kind of infrastructure is required to support
not only big data management and analysis but also knowledge discovery, sharing
and management? What is the relationship between big data and science paradigm?
What is the nature and fundamental challenge of big data computing? A
multi-dimensional perspective is presented toward a methodology of big data
computing.
Comment: 59 pages
Inter-Domain Integration of Services and Service Management
The evolution of the global telecommunications industry into an open services market presents developers of telecommunication service and management systems with many new challenges. Increased competition, complex service provision chains and integrated service offerings require effective techniques for the rapid integration of service and management systems over multiple organisational domains. These integration issues have been examined in the ACTS project Prospect by developing a working set of integrated, managed telecommunications services for a user trial. This paper presents the initial results of this work, detailing the technologies and standards used, the architectural approach taken and the application of this approach to specific services.
Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"
According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient.
The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself.
Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
• The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
• The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
• The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion. The behaviour of the entities may vary over time.
• The systems operate with incomplete information about the environment.
For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems.
This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. After a year from the start of the projects, the goal of the workshop is to assess the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, as well as to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative.
We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for the valuable collaboration.
London SynEx Demonstrator Site: Impact Assessment Report
The key ingredients of the SynEx-UCL software components are:
1. A comprehensive and federated electronic healthcare record that can be used to
reference or to store all of the necessary healthcare information acquired from a
diverse range of clinical databases and patient-held devices.
2. A directory service component that provides a core demographic database of
persons, used to search for and authenticate staff users of the system and to
anchor patient identification and connection to their federated healthcare
record.
3. A clinical record schema management tool (Object Dictionary Client) that enables
clinicians or engineers to define and export the data sets mapping to individual
feeder systems.
4. An expansible set of clinical management algorithms that provide prompts to the
patient or clinician to assist in the management of patient care.
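The role of the directory service above can be illustrated with a small sketch: the demographic entry stores only a reference to the federated record, which itself stays distributed across the feeder systems. All names, identifiers and the record-reference scheme below are invented for illustration and are not taken from the SynEx-UCL components.

```python
# Illustrative sketch of a directory service that authenticates staff users
# and anchors patient identification to a federated healthcare record.
# The record reference is a URI-like string; the record data stays in the
# feeder systems, so the directory holds demographics plus a pointer only.

class DirectoryService:
    """Core demographic database: authenticates staff, resolves patients."""
    def __init__(self):
        self._staff = {}      # username -> credential token
        self._patients = {}   # patient_id -> (demographics, record_reference)

    def register_staff(self, username, token):
        self._staff[username] = token

    def register_patient(self, patient_id, demographics, record_ref):
        self._patients[patient_id] = (demographics, record_ref)

    def authenticate(self, username, token):
        return self._staff.get(username) == token

    def record_reference(self, patient_id):
        """Resolve a patient to the reference of their federated record."""
        return self._patients[patient_id][1]

directory = DirectoryService()
directory.register_staff("dr.jones", "s3cret")
directory.register_patient("pat-123", {"name": "A. Patient"},
                           "ehcr://federation/records/pat-123")
assert directory.authenticate("dr.jones", "s3cret")
assert directory.record_reference("pat-123") == "ehcr://federation/records/pat-123"
```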
CHIME has built up over a decade of experience within Europe on the requirements
and information models that are needed to underpin comprehensive multiprofessional
electronic healthcare records. The resulting architecture models have
influenced new European standards in this area, and CHIME has designed and built
prototype EHCR components based on these models. The demonstrator systems
described here utilise a directory service and object-oriented engineering approach,
and support the secure, mobile and distributed access to federated healthcare
records via web-based services.
The design and implementation of these software components has been founded on
a thorough analysis of the clinical, technical and ethico-legal requirements for
comprehensive EHCR systems, published through previous project deliverables and
in future planned papers.
The clinical demonstrator site described in this report has provided the solid basis
from which to establish "proof of concept" verification of the design approach, and a
valuable opportunity to install, test and evaluate the results of the component
engineering undertaken during the EC-funded project. Inevitably, a number of
practical implementation and deployment obstacles had to be overcome along
the way, each of which contributed to the time taken to deliver the
components but also to the richness of the end products.
UCL is fortunate that the Whittington Hospital, and the department of cardiovascular
medicine in particular, is committed to a long-term vision built around this work. That
vision, outlined within this report, is shared by the Camden and Islington Health
Authority and by many other purchaser and provider organisations in the area, and
by a number of industrial parties. They are collectively determined to support the
Demonstrator Site as an ongoing project well beyond the life of the EC SynEx
Project.
This report, although a final report as far as the EC project is concerned, is really a
description of the first phase in establishing a centre of healthcare excellence. New
EC Fifth Framework project funding has already been approved to enable new and
innovative technology solutions to be added to the work already established in north
London.
Analysis domain model for shared virtual environments
The field of shared virtual environments, which also
encompasses online games and social 3D environments, has a
system landscape consisting of multiple solutions that share great functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first contribution is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than the part(s). The second contribution is a reference domain model for discussing and describing solutions - the Analysis Domain Model.
Towards a new generation of transport services adapted to multimedia applications
A partial order and partial reliability connection (POC, partial order connection) is a transport connection that is permitted to lose some objects and also to deliver them in an order possibly different from the order of emission. The POC approach establishes a conceptual link between best-effort connectionless protocols and reliable connection-oriented protocols. The POC concept is motivated by the fact that in heterogeneous connectionless networks such as the Internet, transmitted packets may be lost and arrive out of order, degrading the performance of the usual protocols. Moreover, it is shown that a protocol tailored to the transport of a multimedia stream allows a very significant reduction in the use of communication and storage resources, as well as a decrease in the average transit time. In this article, a timed extension of POC, named TPOC (timed POC), is introduced. It constitutes a conceptual framework for taking into account the quality-of-service requirements of distributed multimedia applications. An architecture offering a TPOC service is also introduced and evaluated in the context of MPEG video transport. It is thus shown that POC connections not only bridge the conceptual gap between connectionless and connection-oriented protocols, but also outperform the latter when multimedia data (such as MPEG video) are transported.
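The partial-order delivery idea can be sketched as follows: the connection delivers an object as soon as all of its predecessors in a declared partial order have either been delivered or been declared lost, instead of enforcing a total order. The API and the frame names below are invented for illustration; this is not the TPOC interface.

```python
# Hedged sketch of partial-order, partial-reliability delivery: objects may
# be lost, and the receiver releases a buffered object once every predecessor
# required by the partial order is delivered or known to be lost.

class PartialOrderReceiver:
    def __init__(self, predecessors):
        # predecessors[x] = set of objects that must precede x (unless lost)
        self.predecessors = predecessors
        self.delivered, self.lost, self.buffer = set(), set(), set()

    def _flush(self):
        """Deliver every buffered object whose constraints are satisfied."""
        changed = True
        while changed:
            ready = {o for o in self.buffer
                     if self.predecessors[o] <= (self.delivered | self.lost)}
            changed = bool(ready)
            self.delivered |= ready
            self.buffer -= ready

    def receive(self, obj):
        self.buffer.add(obj)
        self._flush()

    def declare_lost(self, obj):
        """The connection tolerates this loss; successors are unblocked."""
        self.lost.add(obj)
        self._flush()

# An MPEG-like stream: frames B2 and B3 both depend only on I1, so B3 can be
# delivered before B2 arrives -- unlike a totally ordered connection.
rx = PartialOrderReceiver({"I1": set(), "B2": {"I1"}, "B3": {"I1"}})
rx.receive("B3")              # buffered: I1 not yet delivered
assert rx.delivered == set()
rx.receive("I1")              # unblocks B3 as well
assert rx.delivered == {"I1", "B3"}
```

A totally ordered connection would be the special case where each object's predecessor set contains all earlier objects, which is exactly the gap between connectionless and connection-oriented service the abstract describes.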