
    A Community-Based Event Delivery Protocol in Publish/Subscribe Systems for Delay Tolerant Sensor Networks

    The basic operation of a Delay Tolerant Sensor Network (DTSN) is to perform pervasive data gathering in networks with intermittent connectivity, while the publish/subscribe (Pub/Sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extending Pub/Sub systems to DTSNs has become a promising research topic. However, due to the frequent partitioning that is characteristic of DTSNs, extending a Pub/Sub system to a DTSN is a considerably difficult and challenging problem, and no good solutions exist in published work. To adapt Pub/Sub systems to DTSNs, we propose CED, a community-based event delivery protocol. In our design, event delivery is based on several static communities, which are formed by sensor nodes in the network according to their connectivity. CED consists of two components: event delivery and queue management. In event delivery, events held in a community are delivered to mobile subscribers as soon as a subscriber enters the community, improving the data delivery ratio. Queue management uses both the event's expected successful delivery time and its survival time to decide whether an event should be kept for delivery or dropped, minimizing transmission overhead. The effectiveness of CED is demonstrated through comprehensive simulation studies.
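
    To make the queue-management idea concrete, here is a minimal Python sketch of the kind of keep-or-drop decision the abstract describes; the class, field names and the specific rule (drop an event once its estimated delivery time no longer fits within its remaining survival time) are illustrative assumptions, not taken from the CED paper.

        # Illustrative sketch (assumed names and rule, not from the paper): keep an event
        # only while its remaining survival time still leaves room for the estimated time
        # needed to deliver it successfully; otherwise drop it to save transmissions.
        from dataclasses import dataclass

        @dataclass
        class Event:
            payload: bytes
            created_at: float          # time the event was published
            survival_time: float       # how long the event remains useful (its lifetime)
            est_delivery_time: float   # estimated time to reach a subscriber via the community

        def should_keep(event: Event, now: float) -> bool:
            remaining = event.survival_time - (now - event.created_at)
            if remaining <= 0:
                return False                                 # expired: drop
            return event.est_delivery_time <= remaining      # still deliverable in time: keep

        def manage_queue(queue: list[Event], now: float) -> list[Event]:
            # Drop events that can no longer be delivered before they expire,
            # which keeps the transmission overhead low.
            return [e for e in queue if should_keep(e, now)]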

    Achieving bounded delay on message delivery in publish/subscribe systems


    AMUSE: autonomic management of ubiquitous e-Health systems

    Future e-Health systems will consist of low-power on-body wireless sensors attached to mobile users that interact with a ubiquitous computing environment to monitor the health and well-being of patients in hospitals or at home. Patients and health practitioners have very little technical computing expertise, so these systems need to be self-configuring and self-managing with little or no user input. More importantly, they should adapt autonomously to changes resulting from user activity, device failure, and the addition or loss of services. We propose the Self-Managed Cell (SMC) as an architectural pattern for such ubiquitous computing applications and use an e-Health application, in which on-body sensors monitor a patient living at home, as an exemplar. We describe the services comprising the SMC and discuss cross-SMC interactions as well as the composition of SMCs into larger structures.
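
    As a rough illustration of the Self-Managed Cell idea, the Python sketch below shows a cell that registers devices and reacts to join and failure events through simple adaptation actions; the class, event names and policy mechanism are assumptions for illustration, not the AMUSE implementation.

        # Illustrative Self-Managed Cell (SMC) sketch; service names, event types and the
        # policy mechanism are assumptions for illustration, not the AMUSE implementation.
        from typing import Callable, Dict, List

        class SelfManagedCell:
            def __init__(self, name: str):
                self.name = name
                self.services: Dict[str, dict] = {}            # discovered components, e.g. sensors
                self.policies: Dict[str, List[Callable]] = {}  # event type -> adaptation actions

            def on(self, event_type: str, action: Callable[[dict], None]) -> None:
                self.policies.setdefault(event_type, []).append(action)

            def raise_event(self, event_type: str, source: dict) -> None:
                # Policies adapt the cell without user input when devices join, fail or leave.
                for action in self.policies.get(event_type, []):
                    action(source)

        # Usage: auto-configure a heart-rate sensor when it appears on the body network.
        cell = SelfManagedCell("patient-home")
        cell.on("device.joined", lambda dev: cell.services.update({dev["id"]: dev}))
        cell.on("device.failed", lambda dev: cell.services.pop(dev["id"], None))
        cell.raise_event("device.joined", {"id": "hr-sensor-1", "type": "heart-rate"})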

    Relational subscription middleware for Internet-scale publish-subscribe

    The nonlinear inverse problem of electromagnetic induction to recover electrical conductivity is examined. As this is an ill-posed problem based on inaccurate data, there is a strong need to identify the reliable features of models of electrical conductivity. By using optimization theory for an all-at-once approach to inverting frequency-domain electromagnetic data, we draw conclusions about Earth structure under assumptions of one-dimensional and two-dimensional structure. The forward modeling equations are constraints in an optimization problem that solves for the fields and the conductivity simultaneously. The computational framework easily allows additional inequality constraints to be imposed. Under the one-dimensional assumption, we develop the optimization approach for use on the magnetotelluric inverse problem. After verifying its accuracy, we use our method to obtain bounds on Earth's average conductivity that all conductivity profiles must obey. No regularization is required to solve the problem. With the imposition of additional inequality constraints, we further narrow the bounds. We draw conclusions from a global geomagnetic depth sounding data set and compare with laboratory results, inferring temperature and water content through published Boltzmann-Arrhenius conductivity models. We take the lessons from the 1-D inverse problem and apply them to the 2-D inverse problem. The difficulty of the 2-D inverse problem requires that we first examine our ability to solve the forward problem, where the conductivity structure is known and the fields are unknown. Our forward problem is designed so that it can be transferred directly into the optimization approach used for the inversion. With the successful 2-D forward problem as the constraints, a one-dimensional version of the 2-D inverse problem is first solved and then extended to a fully 2-D inverse problem for testing purposes. The computational machinery is incrementally modified to meet the challenge of the realistic two-dimensional magnetotelluric inverse problem. We then use two shallow-Earth data sets from different conductivity regimes and invert them for bounded and regularized structure.
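
    For readers unfamiliar with the all-at-once approach, the LaTeX fragment below sketches a generic formulation in which the forward-modelling equations act as equality constraints and conductivity bounds as inequality constraints; the notation (A, Q, W, d^obs, q) is illustrative and not taken from the thesis.

        % Schematic all-at-once formulation (illustrative notation): solve for the fields e
        % and the conductivity sigma jointly, with the discretized forward operator A(sigma)
        % as an equality constraint and bounds on the conductivity as inequality constraints.
        \begin{align*}
          \min_{e,\,\sigma}\quad & \tfrac{1}{2}\,\bigl\lVert W \bigl(d^{\mathrm{obs}} - Q\,e\bigr) \bigr\rVert^{2} \\
          \text{subject to}\quad & A(\sigma)\,e = q, \\
                                 & \sigma_{\min} \le \sigma \le \sigma_{\max}.
        \end{align*}
        % Here Q samples the fields at the receivers, W weights the residuals by the data
        % errors, and q is the source term.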

    Exactly-once Delivery in a Content-based Publish-Subscribe System

    This paper presents a general knowledge model for propagating information in a content-based publish-subscribe system. The model is used to derive an efficient and scalable protocol for exactly-once delivery to large numbers (tens of thousands per broker) of content-based subscribers in either publisher order or uniform total order. Our protocol allows intermediate content filtering at each hop, but requires persistent storage only at the publishing site. It is tolerant of message drops, message reorderings, node failures, and link failures, and maintains only “soft” state at intermediate nodes. We evaluate the performance of our implementation both under failure-free conditions and with fault injection.
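
    The following Python sketch illustrates one common way to realize exactly-once, publisher-order delivery at a subscriber using per-publisher sequence numbers, buffering of out-of-order messages and gap retransmission from the publisher's persistent log; the names and the retransmission call are assumptions, not the protocol defined in the paper.

        # Illustrative subscriber-side sketch of exactly-once, publisher-order delivery using
        # per-publisher sequence numbers; assumed names, not the paper's protocol.
        from typing import Callable, Dict, Tuple

        class ExactlyOnceSubscriber:
            def __init__(self, deliver: Callable[[bytes], None],
                         request_retransmit: Callable[[str, int], None]):
                self.deliver = deliver                            # hand a message to the application
                self.request_retransmit = request_retransmit      # ask the publisher's log for a gap
                self.next_seq: Dict[str, int] = {}                # next expected seq per publisher
                self.pending: Dict[Tuple[str, int], bytes] = {}   # buffered out-of-order messages

            def on_message(self, publisher: str, seq: int, payload: bytes) -> None:
                expected = self.next_seq.get(publisher, 0)
                if seq < expected:
                    return                                        # duplicate: already delivered once
                self.pending[(publisher, seq)] = payload
                if seq > expected:
                    self.request_retransmit(publisher, expected)  # fill the gap before delivering
                # Deliver any contiguous run starting at the expected sequence number.
                while (publisher, expected) in self.pending:
                    self.deliver(self.pending.pop((publisher, expected)))
                    expected += 1
                self.next_seq[publisher] = expected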

    Ordering, timeliness and reliability for publish/subscribe systems over WAN

    In the last few years, the increasing use of the Internet and the geopolitical, sociological and financial changes induced by globalization have been paving the way for a connected world where information is always available at the right place and the right time. As a result, applications previously deployed in "closed" environments are now federating into geographically distributed systems connected through a Wide Area Network (WAN). Following this evolution, in the near future no system will be isolated: every system will be composed of interconnected systems, i.e., it will be a System of Systems (SoS). Examples of SoS are the Large-scale Complex Critical Infrastructures (LCCIs), such as power grids, transport infrastructures (airports and seaports), financial infrastructures and next-generation intelligence platforms, to cite a few. In these systems, multiple sources of information generate a high volume of events that need to be delivered to all intended destinations while respecting several Quality of Service (QoS) constraints imposed by the critical nature of LCCIs. As such, particular attention is devoted to the middleware used to disseminate information in the SoS. Due to the inherent scalability provided by its space, time and synchronization decoupling properties, the publish/subscribe paradigm is becoming attractive for implementing a middleware service for LCCIs. However, scalability is not the only requirement exhibited by SoS. Several services need to control a broader set of QoS requirements, such as timeliness, ordering and reliability. Unfortunately, current middleware solutions do not address the QoS constraints required by SoS. Current publish/subscribe middleware for the WAN environment offers only best-effort event dissemination, with no additional control on QoS. Just a few implementations try to address some isolated QoS policy, making them unsuitable for a SoS scenario. The contribution of this thesis is a QoS layer that can be placed on top of a generic publish/subscribe middleware and enriches its service by addressing (i) ordering, (ii) reliability and (iii) timeliness in event dissemination in SoS over WAN. Specifically, we first analyze several real case studies, highlighting their QoS requirements in terms of ordering, reliability and timeliness, and compare these requirements with both current research prototypes and commercial systems. Then, we fill the gap by proposing novel algorithms to address those requirements. The proposed protocols can also be combined to provide the QoS level required by a particular application. In this way, QoS issues do not need to be addressed at the application level, leaving applications to implement just their native functionality.
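
    As an illustration of how such QoS policies can be layered on top of a generic publish/subscribe callback, the Python sketch below stacks an ordering wrapper and a timeliness wrapper around an application handler; the wrapper names, interfaces and rules are assumptions for illustration, not the algorithms proposed in the thesis.

        # Illustrative sketch of stacking QoS wrappers on top of a generic publish/subscribe
        # callback; wrapper names, interfaces and rules are assumptions, not the thesis' algorithms.
        import time
        from typing import Any, Callable, Dict

        Handler = Callable[[Dict[str, Any]], None]

        def with_ordering(handler: Handler) -> Handler:
            buffer: Dict[int, Dict[str, Any]] = {}
            state = {"next": 0}
            def ordered(event: Dict[str, Any]) -> None:
                buffer[event["seq"]] = event
                while state["next"] in buffer:                    # release events strictly in sequence order
                    handler(buffer.pop(state["next"]))
                    state["next"] += 1
            return ordered

        def with_timeliness(handler: Handler, deadline_s: float) -> Handler:
            def timely(event: Dict[str, Any]) -> None:
                if time.time() - event["sent_at"] <= deadline_s:  # discard events past their deadline
                    handler(event)
            return timely

        def application_handler(event: Dict[str, Any]) -> None:
            print("delivered:", event["payload"])

        # Compose the layers without touching the application handler itself.
        subscriber_callback = with_ordering(with_timeliness(application_handler, deadline_s=2.0))
        subscriber_callback({"seq": 0, "sent_at": time.time(), "payload": "grid-alarm"})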
