Interval tree clocks: a logical clock for dynamic systems
Lecture Notes in Computer Science 5401, 2008

Causality tracking mechanisms, such as vector clocks and version vectors, rely on mappings from globally unique identifiers to integer counters. In a system with a well-known set of entities these ids can be preconfigured and given distinct positions in a vector or distinct names in a mapping. Id management is more problematic in dynamic systems with a large and highly variable number of entities, and is worsened when network partitions occur. Present solutions for causality tracking are not appropriate for these increasingly common scenarios. In this paper we introduce Interval Tree Clocks, a novel causality tracking mechanism that can be used in scenarios with a dynamic number of entities, allowing completely decentralized creation of processes/replicas without the need for global identifiers or global coordination. The mechanism has a variable-size representation that adapts automatically to the number of existing entities, growing or shrinking appropriately. The representation is so compact that the mechanism can even be considered for scenarios with a fixed number of entities, which makes it a general substitute for vector clocks and version vectors.
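For context, the baseline mechanism the paper generalises can be sketched as follows. This is a plain vector clock — the id-to-counter mapping whose id management the abstract identifies as problematic in dynamic systems — not the interval-tree representation itself; node ids like "n1" are placeholders.

```python
# Minimal vector-clock sketch (illustrative, not the paper's ITC mechanism):
# each clock maps a globally unique node id to an integer event counter.

def increment(clock, node_id):
    """Record a local event at node_id, returning a new clock."""
    clock = dict(clock)
    clock[node_id] = clock.get(node_id, 0) + 1
    return clock

def merge(a, b):
    """Join two clocks, e.g. on message receipt: pointwise maximum."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in set(a) | set(b)}

def happened_before(a, b):
    """True if clock a causally precedes clock b."""
    dominated = all(a.get(k, 0) <= b.get(k, 0) for k in set(a) | set(b))
    return dominated and a != b

x = increment({}, "n1")            # event at node n1: {"n1": 1}
y = increment(merge(x, {}), "n2")  # n2 receives x, then has an event
assert happened_before(x, y)
```

Note that every participating node must appear as a key, which is exactly why decentralized entity creation and retirement (the problem ITC addresses) is awkward in this representation.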
Tuchola County Broadband Network (TCBN)
Abstract
This paper presents the design (plan) of a broadband IP optical network for the town of Tuchola. The extended version of the network plan constitutes the technical part of the network Feasibility Study; the network is expected to be implemented in Tuchola and financed from European Regional Development Funds. The plan presented in the paper covers both the topological structure of the fiber-optic network and the active equipment for the network. The project proposes a Modular Cable System (MCS) for the passive infrastructure and Metro Ethernet technology for the active equipment. The presented solution provides a low cost of construction (CAPEX), ease of network implementation, and a low operating cost (OPEX). Moreover, the parameters of the Metro Ethernet switches installed in the network guarantee the scalability of the network for at least 10 years.
Context Aware Middleware Architectures: Survey and Challenges
Abstract: Context aware applications, which can adapt their behavior to changing environments, are attracting increasing attention. To reduce the complexity of developing such applications, context aware middleware, which introduces context awareness into traditional middleware, provides a homogeneous interface with generic context management solutions. This paper surveys state-of-the-art context aware middleware architectures proposed from 2009 through 2015. First, preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features are explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that no existing context aware middleware architecture complies with all requirements. Finally, challenges are pointed out as open issues for future work.
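The "homogeneous interface" idea the survey describes can be illustrated with a minimal sketch: a context manager that stores context attributes and notifies subscribed components when they change. The names (`ContextManager`, `subscribe`, `update`) are illustrative assumptions, not the API of any surveyed middleware.

```python
# Hypothetical sketch of a generic context management interface: applications
# subscribe to context attributes and are notified on changes, instead of
# each application sensing and modelling context on its own.
from collections import defaultdict

class ContextManager:
    def __init__(self):
        self._context = {}
        self._subscribers = defaultdict(list)

    def subscribe(self, attribute, callback):
        """Register interest in one context attribute."""
        self._subscribers[attribute].append(callback)

    def update(self, attribute, value):
        """Acquire a new context value and push it to interested parties."""
        old = self._context.get(attribute)
        self._context[attribute] = value
        if value != old:  # only notify on actual context change
            for cb in self._subscribers[attribute]:
                cb(attribute, value)

events = []
cm = ContextManager()
cm.subscribe("location", lambda k, v: events.append((k, v)))
cm.update("location", "office")
cm.update("location", "office")   # unchanged value: no second notification
assert events == [("location", "office")]
```

Real middleware layers add the context modelling and reasoning steps discussed in the survey between acquisition and notification; this sketch shows only the interface shape.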
Interim research assessment 2003-2005 - Computer Science
This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.
A heterogeneous network management approach to wireless sensor networks in personal healthcare environments
University of Technology, Sydney. Faculty of Science.

Many countries are facing problems caused by a rapid surge in the number of people over sixty-five. This aging population cohort will place a strain on existing health systems, because the elderly are prone to falls, chronic illness, dementia and general frailty. At the same time, governments are struggling to attract more people into the health systems, and there are already shortages of qualified nurses and care givers.

This thesis represents a multidisciplinary approach to solving some of the above issues. The researcher first established the validity of the health crisis and then examined ways in which Information Technology could help to alleviate some of the issues. The nascent technology of Wireless Sensor Networks was examined as a way of providing remote health monitoring for the elderly, the infirm and the ill. The researcher postulated that the Network Management models and tools used to monitor huge networks of computers could be adapted to monitor the health of persons in their own homes, in aged care facilities and in hospitals.

Wireless Sensor Network (WSN) personal healthcare can monitor such vital signs as a patient's temperature, heart rate and blood oxygen level. WSN nodes (often referred to as motes) use wireless transceivers to perform remote sensing. The researcher aimed to assist all stakeholders in the personal healthcare arena to use WSNs to improve monitoring, and provided a solution architecture and framework for healthcare sensor monitoring systems based on network management techniques. This architecture generalises to heterogeneous and autonomous data acquisition systems.

Future directions from this research point towards new areas of knowledge, from the development of new technologies to support the exponential growth of ubiquitous, just-in-time WSN health information services to the preventive and proactive personal healthcare management built around them. Affordable and ubiquitous distributed access to remote personal healthcare technologies could have an important impact on society by allowing individuals to take immediate preventive action over their overall health condition. These systems could potentially prevent deaths and improve national health budgets by limiting costly medical interventions that could have been avoided through easy, individual early prevention.
Wireless remote patient monitoring on general hospital wards.
A novel approach with the potential to improve the quality of patient care on general hospital wards is proposed. Patient care is a labour-intensive task that requires a high input of human resources. A Remote Patient Monitoring (RPM) system is proposed which can go some way towards improving patient monitoring on general hospital wards. In this system, vital signs are gathered from patients and sent to a control unit for centralized monitoring. The RPM system can complement the role of nurses in monitoring patients' vital signs, allowing them to focus on the holistic needs of patients and thereby provide better personal care. Wireless network technologies, ZigBee and Wi-Fi, are utilized for the transmission of vital signs in the proposed RPM system; they provide flexibility and mobility to patients. A prototype RPM system is designed and simulated. The results illustrate the capability, suitability and limitations of the chosen technologies.
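The centralized-monitoring step described above can be sketched as a threshold check over incoming vital-sign readings. This is a hedged illustration only: the field names, patient ids and normal ranges below are assumptions for the sketch, not values from the paper, and the ZigBee/Wi-Fi transport is abstracted away as an already-delivered reading.

```python
# Hypothetical control-unit logic for an RPM system: compare each received
# vital-sign reading against a normal range and raise alerts for deviations.
# Ranges and field names are illustrative assumptions, not clinical guidance.

NORMAL_RANGES = {
    "heart_rate": (50, 120),      # beats per minute
    "temperature": (35.0, 38.5),  # degrees Celsius
    "spo2": (92, 100),            # blood oxygen saturation, percent
}

def check_reading(reading):
    """Return a list of alert strings for any out-of-range vital sign."""
    alerts = []
    for sign, (lo, hi) in NORMAL_RANGES.items():
        value = reading.get(sign)
        if value is not None and not (lo <= value <= hi):
            alerts.append(f"{reading['patient_id']}: {sign}={value} outside [{lo}, {hi}]")
    return alerts

ok = {"patient_id": "bed-12", "heart_rate": 72, "temperature": 36.8, "spo2": 97}
bad = {"patient_id": "bed-07", "heart_rate": 140, "temperature": 36.9, "spo2": 90}
assert check_reading(ok) == []
assert len(check_reading(bad)) == 2   # heart_rate high, spo2 low
```

In the proposed system this logic would run at the control unit, freeing nurses from continuous manual observation while still surfacing deviations promptly.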
Workflow models for heterogeneous distributed systems
The role of data in modern scientific workflows is becoming more and more crucial. The unprecedented amount of data available in the digital era, combined with recent advancements in Machine Learning and High-Performance Computing (HPC), has let computers surpass human performance in a wide range of fields, such as Computer Vision, Natural Language Processing and Bioinformatics. However, a solid data management strategy is crucial for key aspects like performance optimisation, privacy preservation and security.
Most modern programming paradigms for Big Data analysis adhere to the principle of data locality: moving computation closer to the data to remove transfer-related overheads and risks. Still, there are scenarios in which it is worthwhile, or even unavoidable, to transfer data between different steps of a complex workflow.
The contribution of this dissertation is twofold. First, it defines a novel methodology for distributed modular applications, allowing topology-aware scheduling and data management while separating business logic, data dependencies, parallel patterns and execution environments. In addition, it introduces computational notebooks as a high-level and user-friendly interface to this new kind of workflow, aiming to flatten the learning curve and improve the adoption of the methodology.
Each of these contributions is accompanied by a full-fledged, open-source implementation, which has been used for evaluation purposes and allows the interested reader to experience the related methodology first-hand. The validity of the proposed approaches has been demonstrated on five real scientific applications in the domains of Deep Learning, Bioinformatics and Molecular Dynamics Simulation, executed on large-scale mixed cloud-HPC infrastructures.
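The data-locality principle the dissertation builds on can be sketched as a tiny placement rule: run each task at the location that already holds most of its input data, transferring only the remainder. The location and dataset names below are illustrative assumptions, not the dissertation's actual scheduler.

```python
# Hypothetical data-locality placement sketch: given where each input dataset
# resides and how large it is, pick the execution site that minimises the
# number of bytes that must be moved before the task can run.

def place_task(inputs, locations, sizes):
    """Pick the site holding the largest input volume; return (site, bytes_moved).

    inputs:    dataset names the task reads
    locations: dataset name -> site where it currently resides
    sizes:     dataset name -> size in bytes
    """
    local_bytes = {}
    for ds in inputs:
        site = locations[ds]
        local_bytes[site] = local_bytes.get(site, 0) + sizes[ds]
    best = max(local_bytes, key=local_bytes.get)
    to_move = sum(sizes[ds] for ds in inputs if locations[ds] != best)
    return best, to_move

# A task reading a large genome on the HPC site and small model files in the cloud
locations = {"genome": "hpc", "weights": "cloud", "labels": "cloud"}
sizes = {"genome": 500, "weights": 40, "labels": 10}
site, moved = place_task(["genome", "weights", "labels"], locations, sizes)
assert site == "hpc" and moved == 50   # move 50 bytes instead of 500
```

A topology-aware scheduler, as described above, extends this idea with link costs between sites and with the cases where transfers are unavoidable across workflow steps.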