14 research outputs found

    Assessment of acute myocardial infarction: current status and recommendations from the North American society for cardiovascular imaging and the European society of cardiac radiology

    Several imaging tests are used in the setting of acute myocardial infarction and acute coronary syndrome, each with its own strengths and limitations. Experts from the European Society of Cardiac Radiology and the North American Society for Cardiovascular Imaging, together with other prominent imagers, reviewed the literature. It is clear that imaging has a definite role in these patients. While comparative accuracy, convenience, and cost have largely guided test decisions in the past, newer tests are being held to a higher standard, one that compares patient outcomes. Multicenter randomized comparative effectiveness trials with outcome measures are required.

    B-Tracker: Improving load balancing and efficiency in distributed P2P trackers

    Trackers are used in peer-to-peer (P2P) networks for provider discovery, that is, mapping resources to potential providers. Centralized trackers, e.g., as in the original BitTorrent protocol, do not benefit from P2P properties such as the absence of a single point of failure, scalability, and load balancing. Decentralized mechanisms have thus been proposed, based on distributed hash tables (DHTs) and gossiping, such as BitTorrent's Peer Exchange (PEX). While DHT-based trackers suffer from load-balancing problems, gossip-based ones cannot deliver new mappings quickly. This paper presents B-Tracker, a fully distributed, pull-based tracker. B-Tracker extends DHT functionality by distributing the tracker load among all providers in a swarm, and Bloom filters are used to avoid transmitting redundant mappings. This yields load balancing and scalability while allowing peers to fetch new mappings instantly. Simulations show that B-Tracker achieves better load balancing and higher efficiency than pure DHTs and PEX.
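
    The paper itself contains no code, but the Bloom-filter idea can be sketched: a requesting peer summarizes the provider mappings it already knows in a Bloom filter and attaches that filter to its pull request, so that the responding peer returns only mappings the requester probably lacks. The minimal Python sketch below illustrates this under stated assumptions; the BloomFilter class and the peer identifiers are hypothetical, not B-Tracker's actual implementation.

    ```python
    import hashlib

    class BloomFilter:
        """Minimal Bloom filter: 'size' bits probed by 'k' hash functions."""
        def __init__(self, size=1024, k=3):
            self.size, self.k = size, k
            self.bits = 0  # bit vector packed into one Python integer

        def _positions(self, item):
            # Derive k bit positions from salted SHA-256 digests of the item.
            for salt in range(self.k):
                digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
                yield int.from_bytes(digest[:4], "big") % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def might_contain(self, item):
            return all(self.bits & (1 << pos) for pos in self._positions(item))

    # Requesting peer: advertise already-known provider mappings.
    known = BloomFilter()
    for provider in ("peer-a:6881", "peer-b:6881"):
        known.add(provider)

    # Responding peer: transmit only mappings the filter does not report,
    # avoiding redundant transfers at the cost of rare false positives.
    candidates = ["peer-a:6881", "peer-c:6881", "peer-d:6881"]
    fresh = [p for p in candidates if not known.might_contain(p)]
    print(fresh)  # likely ['peer-c:6881', 'peer-d:6881']
    ```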

    Bypassing cloud providers' data validation to store arbitrary data

    A fundamental characteristic of Software-as-a-Service (SaaS) in Cloud Computing is that it is application-specific; depending on the application, Cloud Providers (CPs) restrict the data formats and attributes allowed onto their servers via a data validation process. An ill-defined data validation process may directly impact both security (e.g., application failure, legal issues) and accounting and charging (e.g., trusting metadata in file headers). Therefore, this paper investigates, evaluates (by means of tests), and discusses the data validation processes of popular CPs. A proof-of-concept system was built, implementing encoders carefully crafted to circumvent these data validation processes, ultimately demonstrating how large amounts of unaccounted, arbitrary data can be stored with CPs.
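
    The paper's actual encoders are not reproduced here; as a minimal sketch of the underlying idea, the hypothetical encoder below prepends the magic bytes of an accepted format (GIF) to an arbitrary payload, which suffices against a validator that inspects only a file's header. The function names and the header-only validation assumption are illustrative.

    ```python
    GIF_HEADER = b"GIF89a"  # magic bytes of a format a CP might accept

    def encode(payload: bytes) -> bytes:
        # Wrap arbitrary data so that a naive magic-byte check sees a "GIF".
        return GIF_HEADER + payload

    def decode(stored: bytes) -> bytes:
        # Recover the original payload from the stored blob.
        assert stored.startswith(GIF_HEADER)
        return stored[len(GIF_HEADER):]

    blob = encode(b"arbitrary, unaccounted data")
    assert blob.startswith(b"GIF89a")  # would pass a header-only validator
    assert decode(blob) == b"arbitrary, unaccounted data"
    ```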

    Playback policies for live and on-demand P2P video streaming

    Peer-to-peer (P2P) has become a popular mechanism for video distribution over the Internet, allowing users to collaborate on locating and exchanging video blocks. The LiveShift approach supports further collaboration by enabling storage and later redistribution of received blocks, thus providing time shifting and video-on-demand in an integrated manner. Video blocks, however, are not always downloaded quickly enough to be played back without interruption. In such situations, the playback policy defines whether peers (a) stall the playback, waiting for blocks to be found and downloaded, or (b) skip them, losing information. This paper is the first to investigate playback policies for P2P video streaming systems in a reproducible manner. A survey of currently used playback policies shows that existing policies, though required by any streaming system, have been defined almost arbitrarily, with minimal scientific methodology applied. Based on this survey and on major characteristics of video streaming, a set of five distinct playback policies is formalized and implemented in LiveShift. Comparative evaluations outline the behavior of these policies in both under- and over-provisioned networks with respect to the playback lag experienced by users, the share of skipped blocks, and the share of failed sessions. Finally, the playback policies most suitable for either live or on-demand scenarios are derived.
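
    Schematically, and independent of LiveShift's actual code, a playback policy reduces to a per-block decision between stalling and skipping. The toy simulation below contrasts the two extremes; the block-availability probability and the tick-based time model are assumptions made for illustration.

    ```python
    import random

    def simulate(policy, n_blocks=100, p_available=0.9, seed=1):
        """Play n_blocks; when a block is missing, stall or skip per the policy."""
        rng = random.Random(seed)
        lag = skipped = 0
        for _ in range(n_blocks):
            while rng.random() > p_available:  # block not yet downloaded
                if policy == "skip":
                    skipped += 1               # play on, losing information
                    break
                lag += 1                       # "stall": wait one playback tick
        return {"playback lag": lag, "skipped blocks": skipped}

    # Skipping bounds the playback lag; stalling preserves every block.
    print("stall:", simulate("stall"))
    print("skip: ", simulate("skip"))
    ```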

    Attacks on internet names

    The Domain Name System (DNS) constitutes a major component of today's Internet, as it maps memorable names, such as www.uzh.ch, onto routable Internet Protocol addresses, such as 136.105.200.244. Since the early days of trusted hosts on the Internet have passed, the risk of severe attacks on DNS, e.g., DNS spoofing or cache poisoning, has grown to the point that work on the DNS Security Extensions (DNSSEC) commenced. However, DNSSEC deployment has not yet received the attention needed to fully safeguard future Internet communications for all services.
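
    For illustration only (not drawn from the text), the sketch below resolves a name and then checks for DNSKEY records, whose presence indicates a DNSSEC-signed zone. It assumes the third-party dnspython package; the queried domain is merely an example, and real deployments may answer differently.

    ```python
    import dns.resolver  # third-party 'dnspython' package (assumed available)

    resolver = dns.resolver.Resolver()

    # Ordinary DNS: map a memorable name onto routable IP addresses.
    for record in resolver.resolve("www.uzh.ch", "A"):
        print("A record:", record.address)

    # DNSSEC-signed zones publish DNSKEY records that validating
    # resolvers use to verify the authenticity of responses.
    try:
        keys = resolver.resolve("uzh.ch", "DNSKEY")
        print(len(keys), "DNSKEY record(s): zone appears DNSSEC-signed")
    except dns.resolver.NoAnswer:
        print("no DNSKEY records: responses unprotected by DNSSEC")
    ```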

    Economic Traffic Management for Overlay Networks

    Economic Traffic Management (ETM) constitutes an innovative approach to managing application traffic flows in overlay networks by combining traditional network management mechanisms with economic incentives. To develop a suitable theoretical understanding of this approach, relevant ETM approaches are studied and classified. The key outcome of these investigations is that a large potential exists for applying ETM in Internet Service Provider (ISP) networks. Therefore, a dedicated architecture for integrating ETM into ISP networks has been developed by the EU FP7 SmoothIT project (Simple Economic Management Approaches of Overlay Traffic in Heterogeneous Internet Topologies). Owing to its flexible and modular nature, this architecture accommodates all identified ETM approaches.

    Slowing down to speed up: Mitigating collusion attacks in content distribution systems

    Content Distribution Systems (CDS) are designed to deliver a variety of contents efficiently to interested parties. CDS may be classified into two groups: the first (moderated) comprises systems in which contents are checked against their descriptions before being published; the second (non-moderated) comprises systems without any kind of moderation. Since descriptions are of paramount importance for enabling users to find contents, non-moderated CDS are clearly vulnerable to malicious interference and susceptible to content pollution. Furthermore, colluding attackers may flood the system with imprecise metadata and turn it into a useless content distribution platform. To protect the system from massive malicious behavior and provide better Quality-of-Experience (QoE) to users, this paper presents a novel conservative strategy to mitigate collusion attacks in non-moderated CDS. The rationale behind this simple, yet very effective, strategy is to delay users' actions and authorize them at random. Results indicate that this “artificial delay” reduces the effect of attackers on the system and, hence, increases users' QoE.
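
    As a minimal sketch of the conservative strategy described above (parameter names and values are illustrative assumptions, not taken from the paper), every action is first delayed and then authorized at random: honest users pay a small latency cost, while colluders, who must issue many actions to pollute metadata, are slowed and thinned out.

    ```python
    import random
    import time

    def gated(action, base_delay=1.0, p_authorize=0.5, rng=random.Random()):
        """Delay an action, then authorize it at random (conservative gate)."""
        time.sleep(base_delay)            # the "artificial delay"
        if rng.random() < p_authorize:    # random authorization
            return action()
        return None                       # denied this time; the user may retry

    # Hypothetical usage: gating the publication of a content description.
    result = gated(lambda: "description published", base_delay=0.01)
    print(result or "action denied, try again later")
    ```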