
    Low-latency privacy-enabled Context Distribution Architecture

    As personal information and context-sharing applications gain traction, more attention is drawn to the associated privacy issues. These applications address privacy using an unsatisfactory "whitelist" approach [1] [2], similar to the "friends" lists of social networks. Some of them also link location publishing to user interaction, which is itself a form of privacy control: the user has to say explicitly where he or she is. A few automatic location-based services (LBS) track the user [3], but without more adequate privacy protection mechanisms they expose the user to even greater threats. In previous work, an XMPP-based Context Distribution Architecture was defined [4]; because it is based on the publish-subscribe pattern, it is better suited to distributing frequently changing context than other systems. In this paper the authors present an extension to this architecture that introduces fine-grained access control into context distribution. The devised changes enable the system to enforce a number of interesting context privacy settings [1] when controlling context distribution. This control must also be enforced in a way that does not interfere with the real-time nature of the distribution process. After describing the enhancements to the architecture, a prototype of the system is presented. Finally, the delivery latency and additional processing introduced by the access control components are estimated by testing the prototype against the existing system.
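The per-subscriber filtering the abstract describes can be illustrated with a small sketch. The policy rules (a deny flag and location coarsening) and all names here are hypothetical illustrations of the cited privacy settings, not the architecture's actual API:

```python
# Sketch of per-subscriber access control in a publish-subscribe context
# broker. Policy fields ("deny", "coarsen_location") are illustrative
# assumptions, not taken from the paper.

def apply_policy(context, policy):
    """Return the context copy a subscriber may see, or None if denied."""
    if policy.get("deny"):
        return None
    item = dict(context)
    # Coarsen location before delivery instead of a binary whitelist.
    if policy.get("coarsen_location") and "location" in item:
        lat, lon = item["location"]
        item["location"] = (round(lat, 1), round(lon, 1))  # ~11 km grid
    return item

def publish(context, subscribers):
    """Fan out one context update, filtering each copy per subscriber."""
    deliveries = {}
    for sub_id, policy in subscribers.items():
        filtered = apply_policy(context, policy)
        if filtered is not None:
            deliveries[sub_id] = filtered
    return deliveries

update = {"user": "alice", "location": (38.7369, -9.1427)}
subs = {
    "friend":   {},                           # full precision
    "coworker": {"coarsen_location": True},   # coarsened location
    "stranger": {"deny": True},               # no delivery
}
out = publish(update, subs)
```

Because filtering is a constant-time transformation per subscriber, such a check can run inside the publish path without breaking the real-time character of distribution, which is the property the paper sets out to measure.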

    Community-based care models for arterial hypertension management in non-pregnant adults in sub-Saharan Africa: a literature scoping review and framework for designing chronic services

    BACKGROUND: Arterial hypertension (aHT) is the leading cardiovascular disease (CVD) risk factor in sub-Saharan Africa; it remains, however, underdiagnosed and undertreated. Community-based care services could potentially expand access to aHT diagnosis and treatment in underserved communities. In this scoping review, we catalogued, described, and appraised community-based care models for aHT in sub-Saharan Africa, considering their acceptability, engagement in care and clinical outcomes. Additionally, we developed a framework to design and describe service delivery models for long-term aHT care. METHODS: We searched relevant references in Embase (Elsevier), MEDLINE (Ovid), CINAHL (EBSCOhost) and Scopus. Included studies described models where substantial care occurred outside a formal health facility and reported on acceptability, blood pressure (BP) control, engagement in care, or end-organ damage. We summarized the interventions' characteristics and effectiveness, and evaluated the quality of the included studies. Considering the common integrating elements of aHT care services, we conceptualized a general framework to guide the design of service models for aHT. RESULTS: We identified 18,695 records, screened 4,954 and included twelve studies. Four types of aHT care models were identified: services provided at community pharmacies, out-of-facility services, household services, and aHT treatment groups. Two studies reported on acceptability, eleven on BP control, ten on engagement in care and one on end-organ damage. Most studies reported significant reductions in BP values and improved access to comprehensive CVD services through task-sharing. Major reported shortcomings included high attrition rates and their nature as parallel, non-integrated models of care. The overall quality of the studies was low, with high risk of bias, and most of the studies did not include comparisons with routine facility-based care.
CONCLUSIONS: The overall quality of the available evidence on community-based aHT care is low. Published models of care are very heterogeneous, and the available evidence is insufficient to recommend or refute further scale-up in sub-Saharan Africa. We propose that future projects and studies implementing and assessing community-based models of aHT care be designed and described according to six building blocks: providers, target groups, components, location, time of service delivery, and use of information systems.

    An integrated bandwidth allocation and admission control framework for the support of heterogeneous real-time traffic in class-based IP networks

    The support of real-time traffic in class-based IP networks requires the reservation of resources on all the links along the end-to-end paths, through appropriate queuing and forwarding mechanisms. This resource allocation should be accompanied by appropriate admission control procedures in order to guarantee that newly admitted real-time traffic flows do not violate the Quality of Service (QoS) experienced by already established flows. In this paper we first highlight certain issues in bandwidth allocation and admission control for the support of real-time traffic in class-based IP networks. We investigate the implications of the topological placement of both the bandwidth allocation and admission control schemes. We show that the performance of these schemes depends strongly on where the procedures are employed relative to the end-users requesting the services and the various network boundaries (access, metro, core, etc.). Based on our results we conclude that the strategies for applying these schemes should be location-aware, because the performance of bandwidth allocation and admission control at different points in a class-based IP network, for the same traffic load, can differ considerably and can deviate greatly from the expected performance. Through simulations we also provide a quantitative view of these deviations. Taking the implications of this “location-awareness” into account, we then present a new Measurement-based Admission Control (MBAC) scheme for real-time traffic that uses measurements of aggregate bandwidth only, without keeping any per-flow state. The scheme makes no assumption about the traffic characteristics of the real-time flows, which can be heterogeneous.
Through simulations we show that the admission control scheme is robust to traffic heterogeneity and measurement errors. We also show that our scheme compares favorably with other admission control schemes in the literature.
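The core of an MBAC scheme of this kind can be sketched in a few lines. This is the classic measured-sum test, shown here as an illustration of the general idea of admitting on an aggregate measurement alone; the function name, the declared peak rate, and the 0.9 utilization target are assumptions, not parameters from the paper:

```python
# Measured-sum admission control sketch: admit a new flow only if the
# measured aggregate rate plus the flow's declared peak rate stays below
# a utilization target of link capacity. No per-flow state is kept.

def admit(measured_aggregate_bps, new_flow_peak_bps, link_capacity_bps,
          utilization_target=0.9):
    """Stateless admission decision from an aggregate measurement only."""
    return (measured_aggregate_bps + new_flow_peak_bps
            <= utilization_target * link_capacity_bps)

# 100 Mbit/s link with 85 Mbit/s measured load:
# a 4 Mbit/s flow fits under the 90 Mbit/s target, a 10 Mbit/s flow does not.
ok = admit(85e6, 4e6, 100e6)
too_big = admit(85e6, 10e6, 100e6)
```

The utilization margin absorbs measurement error and burstiness, which is why such schemes can tolerate heterogeneous flows without per-flow traffic descriptors.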

    Net Neutrality Value Pack using Network Data Analytics

    The advent of the mobile Internet and the phenomenal growth of smartphone use have brought data to the forefront, creating new revenue streams for operators. The data/Internet connection now needs to cater to diverse traffic, just as a city must manage the flow of various vehicles and pedestrians on its streets. In the data world, usage ranges across applications such as streaming video, real-time gaming, and B2B and M2M applications. Such diverse customers often blame their operators for throttling data flows to their phones or computers, causing significant delays and losses in data transmission. Any lapse in providing network connectivity and continuity will create a large number of dissatisfied customers and an unwarranted reduction of the customer base. Network neutrality is the idea that all operators should treat all data that travel over their networks fairly, without improper discrimination in favor of particular applications, sites or services. It is a complex, controversial topic and an important part of a free and open Internet. It aims at enabling access, choice, and transparency of Internet offerings, thereby empowering users to benefit from full access to the services, applications, and content available on the Internet. Implementing network neutrality legitimately, without discrimination in favor of particular applications, sites or services, has been a challenge faced by operators globally. This paper describes a Net Neutrality value pack using the Smart Profile Server (SPS). SPS is an enterprise application that forms the middleware to collect and analyze network data in order to build and expose a data model containing network traffic information, such as session throughput, speed classification and page reloads, for a given customer/subscriber at a given time and location, using the analytic database (DB).
This data model can be exposed either as a REST-based [1] interface providing a smart profile view with fine-grained access control, or tied to third-party dashboard tools to act as a window for subscribers and regulatory agencies to determine whether the operator is truly net neutral.
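The shape of such a per-subscriber profile can be sketched as a simple aggregation over session records. All field names, the speed-class thresholds, and the record layout below are illustrative assumptions, not the SPS schema:

```python
# Hypothetical sketch of the per-subscriber traffic profile the abstract
# describes (session throughput, speed classification, page reloads).

def build_profile(subscriber_id, sessions):
    """Aggregate raw session records into one profile record."""
    total_bytes = sum(s["bytes"] for s in sessions)
    total_secs = sum(s["duration_s"] for s in sessions)
    throughput_bps = 8 * total_bytes / total_secs if total_secs else 0.0
    # Illustrative speed classification thresholds (bits per second).
    if throughput_bps >= 5e6:
        speed_class = "fast"
    elif throughput_bps >= 1e6:
        speed_class = "medium"
    else:
        speed_class = "slow"
    return {
        "subscriber": subscriber_id,
        "throughput_bps": throughput_bps,
        "speed_class": speed_class,
        "page_reloads": sum(s.get("reloads", 0) for s in sessions),
    }

profile = build_profile("sub-42", [
    {"bytes": 30_000_000, "duration_s": 60, "reloads": 2},
    {"bytes": 15_000_000, "duration_s": 30, "reloads": 1},
])
```

A record like this, computed per subscriber and time window, is the kind of object a REST view or dashboard could serve to compare traffic classes for neutrality analysis.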

    Using Social Media for Crisis Response: The ATHENA System

    Social media is now prevalent in all aspects of society. Any major news event is now accompanied by a stream of real-time social media posts. The ATHENA system turns this stream of information into a vital resource in crisis and disaster response for Law Enforcement Agencies (LEAs). The ATHENA system scans the social media environment during a crisis, recognises and collects information relevant to the crisis, and synthesises that information into credible and actionable reports. Via an automated process of classification, these reports are delivered by ATHENA to the stakeholders that most need the information: from the LEA Command and Control Centre managing the crisis, to the first responders on the ground, and to the citizens themselves via a mobile application. The automatic extraction of location data from social media posts allows ATHENA to pinpoint crisis activity and resources on a map-based user interface. The citizen, via a mobile device, is provided with fast and reliable alerts of danger, the location of medical help and vital supplies, and direct communication with emergency services. The first responder is given the same intelligence along with additional information pertinent to their search and rescue actions. Command and Control have ultimate access to all information being processed by the system, where their decision making is supported by computer-generated estimates of priority and credibility. Command and Control have the responsibility of validating crisis information before it is disseminated to the public. Social media is also key to the dissemination of crisis information. Dedicated social media entities on the most popular sites are maintained by Command and Control to provide a focal, trusted presence for broadcasting information, advice and instructions.
These social media presences are designed to encourage collaboration between the public and first responders and to provide a channel for communication among all the crisis stakeholders. Thus ATHENA empowers the LEA and the public with a collective intelligence, enabling both to safeguard themselves and others during a crisis.
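The priority and credibility estimates mentioned above can be pictured with a toy scoring function. The keywords, weights, and features below are purely illustrative assumptions, not ATHENA's actual classification model:

```python
# Illustrative sketch of scoring a crisis post for priority and credibility
# before human validation by Command and Control. All weights and feature
# names are hypothetical.

def score_post(post):
    """Return (priority, credibility), each clamped to [0, 1]."""
    priority = 0.0
    text = post["text"].lower()
    # Simple keyword evidence for urgency (assumed weights).
    for keyword, weight in (("trapped", 0.5), ("injured", 0.4), ("help", 0.2)):
        if keyword in text:
            priority += weight
    # Credibility from metadata: geotag and account status (assumed weights).
    credibility = 0.3
    if post.get("has_location"):
        credibility += 0.3
    if post.get("verified_account"):
        credibility += 0.4
    return min(priority, 1.0), min(credibility, 1.0)

p, c = score_post({"text": "People trapped and injured, send help!",
                   "has_location": True, "verified_account": False})
```

In a real system these scores would come from trained classifiers, but the two-score structure, urgency for routing and credibility for validation, matches the workflow the abstract describes.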

    Virtualization services: scalable methods for virtualizing multicore systems

    Multi-core technology is bringing parallel processing capabilities from servers to laptops and even handheld devices. At the same time, platform support for system virtualization is making it easier to consolidate server and client resources, when and as needed by applications. This consolidation is achieved by dynamically mapping the virtual machines on which applications run to underlying physical machines and their processing cores. Low-cost processor and I/O virtualization methods that scale efficiently to different numbers of processing cores and I/O devices are key enablers of such consolidation. This dissertation develops and evaluates new methods for scaling virtualization functionality to multi-core and future many-core systems. Specifically, it re-architects virtualization functionality to improve scalability and better exploit multi-core system resources. Results from this work include a self-virtualized I/O abstraction, which virtualizes I/O so as to flexibly use different platforms' processing and I/O resources. This flexibility affords improved performance and resource usage and, most importantly, better scalability than that offered by current I/O virtualization solutions. Further, by describing system virtualization as a service provided to virtual machines and the underlying computing platform, this service can be enhanced to provide new and innovative functionality. For example, a virtual device may provide obfuscated data to guest operating systems to maintain data privacy; it could mask differences in device APIs or properties to deal with heterogeneous underlying resources; or it could control access to data based on the "trust" properties of the guest VM. This thesis demonstrates that extended virtualization services are superior to existing operating system or user-level implementations of such functionality, for multiple reasons.
First, this solution technique makes more efficient use of the key performance-limiting resources in multi-core systems, which are memory and I/O bandwidth. Second, it better exploits the parallelism inherent in multi-core architectures and exhibits good scalability properties, in part because at the hypervisor level there is greater control over precisely which resources are used, and how, to realize extended virtualization services. Improved control over resource usage makes it possible to provide value-added functionality for both guest VMs and the platform. Specific instances of virtualization services described in this thesis are a network virtualization service that exploits heterogeneous processing cores, a storage virtualization service that provides location-transparent access to block devices by extending the functionality of the network virtualization service, a multimedia virtualization service that allows efficient media device sharing based on semantic information, and an object-based storage service with enhanced access control.
Ph.D. Committee Chair: Schwan, Karsten; Committee Members: Ahamad, Mustaq; Fujimoto, Richard; Gavrilovska, Ada; Owen, Henry; Xenidis, Jim

    Constructing Dynamic Ad-hoc Emergency Networks using Software-Defined Wireless Mesh Networks

    Natural disasters and other emergencies can destroy the entire network infrastructure needed for communication critical to emergency rescue, evacuation, and initial rehabilitation. Hence, the research community has begun to focus on rapid network reconstruction in such emergencies; however, research to date has tried to create or improve emergency response systems using traditional radio and satellite communications, which face high operating costs and frequent disruptions. This thesis proposes a centralized monitoring and control system that reconstructs ad-hoc networks in emergencies using software-defined wireless mesh networks (SDWMN). The proposed framework combines wireless mesh networking and software-defined networking to provide real-time network monitoring services that restore Internet access in a targeted disaster zone. It dispatches mobile devices, including unmanned aerial vehicles and self-driving cars, to the most effective locations to recover impaired network connections using a new GPS position finder (GPS-PF) algorithm. The algorithm is based on density-based spatial clustering and calculates the best position at which to deploy one of the mobile devices. The proposed system is evaluated using the Common Open Research Emulator (CORE) to demonstrate its efficiency and high accessibility in emergency situations. The results show that the performance of the emergency communication system improves considerably with the incorporation of the framework.
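The density-based idea behind such a position finder can be sketched simply: place the relay at the centroid of the densest neighborhood of stranded-node positions. This is an illustrative simplification of density-based spatial clustering, not the GPS-PF algorithm from the thesis:

```python
# Sketch: deploy a mobile relay at the centroid of the largest
# eps-neighborhood among known node positions (density-based placement).
import math

def densest_centroid(points, eps):
    """Centroid of the largest eps-neighborhood among the input points."""
    best = []
    for p in points:
        cluster = [q for q in points if math.dist(p, q) <= eps]
        if len(cluster) > len(best):
            best = cluster
    n = len(best)
    return (sum(x for x, _ in best) / n, sum(y for _, y in best) / n)

# Four nodes clustered near (1, 1) and one outlier at (5, 5):
# the deployment point should land inside the cluster, not near the outlier.
nodes = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (1.0, 1.2), (5.0, 5.0)]
site = densest_centroid(nodes, eps=0.5)
```

Serving the densest cluster first maximizes the number of reconnected nodes per dispatched device, which is the goal when relays such as UAVs are scarce.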

    Patterns of Land Use Change Around a Large Reservoir

    Reservoirs are built to control floods, provide water for irrigation and municipal supply, generate electric power, augment low flows for navigation and water quality control, and provide improved fishing and recreation opportunities. A reservoir is justified if the benefit it provides to society exceeds the cost to develop it. Much research has been done to determine the benefit of a water resources development to society as a whole. Some research has explored the benefit of such a facility to a region. Very little research exists on the effects of a reservoir on the immediately surrounding area. It seems reasonable that effects caused by the proximity of a reservoir intensify as one draws closer to the lake. Demand for land shifts from uses unrelated to the project to project-oriented uses. Property values change, and some landowners are able to reap large profits. Others, forced to sell all their land for construction of the reservoir, are not so fortunate. Simultaneously, land use changes affect the environmental quality experienced by third parties: adjacent landowners and visitors to the area. By examining the spatial patterns of land use changes around a reservoir, this study aims to help planners anticipate windfall profits to landowners, improve environmental quality control, guide the land use planning of surrounding communities, and project future demands for increased services placed on local governments. The general hypothesis of this study is that the spatial patterns of land use change are influenced by economic and geographic characteristics of the reservoir and the reservoir area. Several hypotheses concerning the effects of relative location around the reservoir, the effects of relative location on a peninsula, the effects of the characteristics of an individual site, and the effects of road access are tested using analysis of variance and multiple regression.
The data used for the analysis are based on Lake Cumberland, a reservoir in southern Kentucky. The area immediately surrounding the lake is divided into 19 peninsulas, and each of these is subdivided into 100 quadrilaterals. For each quadrilateral, data such as slope, water frontage, and land use changes are obtained. This method of subdivision allows comparison of the patterns of land use change on individual peninsulas as well as around the lake. Land use in four years, 1938, 1951, 1960, and 1967, provides the basis for computing the land use changes. All areas at each date are classified as residential, commercial, public, or agricultural. Any location shifting among these categories is defined as a land use change. The analysis reveals patterns of land use change surrounding the lake. Factors such as road access, slope, view, and location on a peninsula proved to be significantly associated with different patterns of land use change. Both the patterns and their degree of association with other variables have shifted over time. The probability of experiencing land use change for each observed combination of the significant factors is calculated for three periods in project time. From such information, it is possible to simulate land use change around other reservoirs.
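The per-combination probabilities described above amount to a cross-tabulation of observed changes over factor combinations. The sketch below illustrates that computation with made-up records and only two of the factors (road access and lake frontage); the field names and data are assumptions, not the study's dataset:

```python
# Empirical probability of land use change per combination of factors,
# computed as (changed quadrilaterals) / (total quadrilaterals) per combo.
from collections import defaultdict

def change_probabilities(quadrilaterals):
    """Map (road_access, lakefront) -> empirical P(change)."""
    counts = defaultdict(lambda: [0, 0])  # combo -> [changed, total]
    for q in quadrilaterals:
        key = (q["road_access"], q["lakefront"])
        counts[key][0] += q["changed"]
        counts[key][1] += 1
    return {k: changed / total for k, (changed, total) in counts.items()}

# Illustrative records: 1 = land use changed between two survey dates.
plots = [
    {"road_access": True,  "lakefront": True,  "changed": 1},
    {"road_access": True,  "lakefront": True,  "changed": 1},
    {"road_access": True,  "lakefront": True,  "changed": 0},
    {"road_access": False, "lakefront": False, "changed": 0},
    {"road_access": False, "lakefront": False, "changed": 0},
]
probs = change_probabilities(plots)
```

Computed for each of the three project-time periods, a table like this is exactly what would feed a simulation of land use change around other reservoirs.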