307 research outputs found

    The appearance of a compact jet in the soft-intermediate state of 4U 1543-47

    Recent advancements in the understanding of jet-disc coupling in black hole candidate X-ray binaries (BHXBs) have provided close links between radio jet emission and X-ray spectral and variability behaviour. In 'soft' X-ray states the jets are suppressed, but the current picture lacks an understanding of the X-ray features associated with the quenching or recovering of these jets. Here we show that a brief, ~4 day infrared (IR) brightening during a predominantly soft X-ray state of the BHXB 4U 1543-47 is contemporaneous with a strong X-ray Type B quasi-periodic oscillation (QPO), a slight spectral hardening and an increase in the rms variability, indicating an excursion to the soft-intermediate state (SIMS). This IR 'flare' has a spectral index consistent with optically thin synchrotron emission and most likely originates from the steady, compact jet. This core jet emitting in the IR is usually only associated with the hard state, and its appearance during the SIMS places the 'jet line' between the SIMS and the soft state in the hardness-intensity diagram for this source. IR emission is produced in a small region of the jets close to where they are launched (~ 0.1 light-seconds), and the timescale of the IR flare in 4U 1543-47 is far too long to be caused by a single, discrete ejection. We also present a summary of the evolution of the jet and X-ray spectral/variability properties throughout the whole outburst, constraining the jet contribution to the X-ray flux during the decay. Comment: Accepted to MNRAS. 11 pages, 6 figures.

    Programming Protocol-Independent Packet Processors

    P4 is a high-level language for programming protocol-independent packet processors. P4 works in conjunction with SDN control protocols like OpenFlow. In its current form, OpenFlow explicitly specifies protocol headers on which it operates. This set has grown from 12 to 41 fields in a few years, increasing the complexity of the specification while still not providing the flexibility to add new headers. In this paper we propose P4 as a strawman proposal for how OpenFlow should evolve in the future. We have three goals: (1) Reconfigurability in the field: Programmers should be able to change the way switches process packets once they are deployed. (2) Protocol independence: Switches should not be tied to any specific network protocols. (3) Target independence: Programmers should be able to describe packet-processing functionality independently of the specifics of the underlying hardware. As an example, we describe how to use P4 to configure a switch to add a new hierarchical label.
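    The match-action abstraction the abstract describes can be illustrated with a toy sketch. The Python below is a hypothetical analogue of a protocol-independent match-action table, not actual P4 code; all names (MatchActionTable, push_label) are illustrative assumptions, including the "push a hierarchical label" action echoing the paper's example.

    ```python
    # Toy analogue of a protocol-independent match-action table.
    # Header fields are plain strings, so supporting a new protocol
    # needs no change to the "switch" itself, only new rules.

    class MatchActionTable:
        def __init__(self):
            self.rules = []  # (match_dict, action) pairs, in priority order

        def add_rule(self, match, action):
            self.rules.append((match, action))

        def apply(self, packet):
            for match, action in self.rules:
                if all(packet.get(f) == v for f, v in match.items()):
                    return action(packet)
            return packet  # default: pass through unmodified

    # A hypothetical "push hierarchical label" action, in the spirit of
    # the paper's example.
    def push_label(label):
        def action(packet):
            packet.setdefault("labels", []).insert(0, label)
            return packet
        return action

    table = MatchActionTable()
    table.add_rule({"dst": "10.0.0.1"}, push_label(42))
    pkt = table.apply({"dst": "10.0.0.1", "payload": "hello"})
    print(pkt["labels"])  # [42]
    ```

    The point of the sketch is target independence: the table logic never names a specific protocol, so adding a label header is a rule change, not a firmware change.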

    Association of serum uric acid with high-sensitivity C-reactive protein in postmenopausal women.

    OBJECTIVES: To explore the independent correlation between serum uric acid and low-grade inflammation (measured by high-sensitivity C-reactive protein, hs-CRP) in postmenopausal women. METHODS: A total of 378 healthy Iranian postmenopausal women were randomly selected in a population-based study. Circulating hs-CRP levels were measured by a highly specific enzyme-linked immunosorbent assay, and an enzymatic colorimetric method was used to measure serum levels of uric acid. Pearson correlation coefficients, multiple linear regression and logistic regression models were used to analyze the association between uric acid and hs-CRP levels. RESULTS: A statistically significant correlation was seen between serum levels of uric acid and log-transformed circulating hs-CRP (r = 0.25, p < 0.001). After adjustment for age and cardiovascular risk factors (according to NCEP ATP III criteria), circulating hs-CRP levels were significantly associated with serum uric acid levels (β = 0.20, p < 0.001). After adjustment for age and cardiovascular risk factors, hs-CRP levels ≥3 mg/l were significantly associated with higher uric acid levels (odds ratio = 1.52, 95% confidence interval 1.18-1.96). CONCLUSION: Higher serum uric acid levels were positively and independently associated with circulating hs-CRP in healthy postmenopausal women. KEYWORDS: C-reactive protein; uric acid; inflammation; postmenopause

    Hyperbolic Geometry of Complex Networks

    We develop a geometric framework to study the structure and function of complex networks. We assume that hyperbolic geometry underlies these networks, and we show that with this assumption, heterogeneous degree distributions and strong clustering in complex networks emerge naturally as simple reflections of the negative curvature and metric property of the underlying hyperbolic geometry. Conversely, we show that if a network has some metric structure, and if the network degree distribution is heterogeneous, then the network has an effective hyperbolic geometry underneath. We then establish a mapping between our geometric framework and statistical mechanics of complex networks. This mapping interprets edges in a network as non-interacting fermions whose energies are hyperbolic distances between nodes, while the auxiliary fields coupled to edges are linear functions of these energies or distances. The geometric network ensemble subsumes the standard configuration model and classical random graphs as two limiting cases with degenerate geometric structures. Finally, we show that targeted transport processes without global topology knowledge, made possible by our geometric framework, are maximally efficient, according to all efficiency measures, in networks with strongest heterogeneity and clustering, and that this efficiency is remarkably robust with respect to even catastrophic disturbances and damages to the network structure.
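    The edges-as-fermions picture can be made concrete with a small numeric sketch. Following common conventions for hyperbolic random-graph models (an assumption; parameter names R and T and the exact functional form are not taken from this abstract), nodes sit at polar coordinates (r, θ) on a hyperbolic disc, the hyperbolic distance between two nodes plays the role of an edge "energy", and the connection probability is the Fermi-Dirac function of that energy:

    ```python
    import math

    def hyperbolic_distance(r1, t1, r2, t2):
        """Distance between (r1, theta1) and (r2, theta2) on the hyperbolic
        plane of curvature -1, via the hyperbolic law of cosines."""
        dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
        cosh_x = (math.cosh(r1) * math.cosh(r2)
                  - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
        return math.acosh(max(cosh_x, 1.0))  # clamp guards rounding error

    def connection_probability(x, R, T):
        """Fermi-Dirac form: the edge is a 'fermion' with energy x (the
        hyperbolic distance); R acts like a chemical potential and T like a
        temperature that controls clustering strength."""
        return 1.0 / (1.0 + math.exp((x - R) / (2.0 * T)))

    x = hyperbolic_distance(5.0, 0.0, 5.0, math.pi / 3)
    p = connection_probability(x, R=10.0, T=0.5)
    ```

    In this picture, nodes closer than roughly R (low "energy" edges) connect with high probability, which is what simultaneously produces strong clustering and, with radial coordinates distributed exponentially, heterogeneous degrees.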

    Opus: an overlay peer utility service

    Today, an increasing number of important network services, such as content distribution, replicated services, and storage systems, are deploying overlays across multiple Internet sites to deliver better performance, reliability and adaptability. Currently, however, such network services must individually reimplement substantially similar functionality. For example, applications must configure the overlay to meet their specific demands for scale, service quality and reliability. Further, they must dynamically map data and functions onto network resources, including servers, storage, and network paths, to adapt to changes in load or network conditions. In this paper, we present Opus, a large-scale overlay utility service that provides a common platform and the necessary abstractions for simultaneously hosting multiple distributed applications. In our utility model, wide-area resource mapping is guided by an application's specification of performance and availability targets. Opus then allocates available nodes to meet the requirements of competing applications based on dynamically changing system characteristics. Specifically, we describe issues and initial results associated with: i) developing a general architecture that enables a broad range of applications to push their functionality across the network, ii) constructing overlays that match both the performance and reliability characteristics of individual applications and scale to thousands of participating nodes, iii) using Service Level Agreements to dynamically allocate utility resources among competing applications, and iv) developing decentralized techniques for tracking global system characteristics through the use of hierarchy, aggregation, and approximation.

    MACEDON: methodology for automatically creating, evaluating, and designing overlay networks

    Currently, researchers designing and implementing large-scale overlay services employ disparate techniques at each stage in the production cycle: design, implementation, experimentation, and evaluation. As a result, complex and tedious tasks are often duplicated, leading to ineffective resource use and difficulty in fairly comparing competing algorithms. In this paper, we present MACEDON, an infrastructure that provides facilities to: i) specify distributed algorithms in a concise domain-specific language; ii) generate code that executes in popular evaluation infrastructures and in live networks; iii) leverage an overlay-generic API to simplify the interoperability of algorithm implementations and applications; and iv) enable consistent experimental evaluation. We have used MACEDON to implement and evaluate a number of algorithms, including AMMO, Bullet, Chord, NICE, Overcast, Pastry, Scribe, and SplitStream, typically with only a few hundred lines of MACEDON code. Using our infrastructure, we are able to accurately reproduce or exceed published results and behavior demonstrated by current publicly available implementations.

    Self-Organizing Subsets: From Each According to His Abilities, To Each According to His Needs

    The key principles behind current peer-to-peer research include fully distributing service functionality among all nodes participating in the system and routing individual requests based on a small amount of locally maintained state. The goals extend much further than just improving raw system performance: such systems must survive massive concurrent failures, denial of service attacks, etc. These efforts are uncovering fundamental issues in the design and deployment of distributed services. However, the work ignores a number of practical issues with the deployment of general peer-to-peer systems, including i) the overhead of maintaining consistency among peers replicating mutable data and ii) the resource waste incurred by the replication necessary to counteract the loss in locality that results from random content distribution. This position paper argues that the key challenge in peer-to-peer research is not to distribute service functions among all participants, but rather to distribute functions to meet target levels of availability, survivability, and performance. In many cases, only a subset of participating hosts should take on server roles. The benefit of peer-to-peer architectures then comes from massive diversity rather than massive decentralization: with high probability, there is always some node available to provide the required functionality should the need arise.

    Using Random Subsets to Build Scalable Network Services

    In this paper, we argue that a broad range of large-scale network services would benefit from a scalable mechanism for delivering state about a random subset of global participants. Key to this approach is ensuring that membership in the subset changes periodically and with uniform representation over all participants. Random subsets could help overcome inherent scaling limitations of services that maintain global state and perform global network probing. They could further improve the routing performance of peer-to-peer distributed hash tables by locating topologically-close nodes. This paper presents the design, implementation, and evaluation of RanSub, a scalable protocol for delivering such state. As a first demonstration of RanSub's utility, we construct SARO, a scalable and adaptive application-layer overlay tree. SARO uses RanSub state information to locate appropriate peers for meeting application-specific delay and bandwidth targets and to dynamically adapt to changing network conditions. A large-scale evaluation of 1000 overlay nodes participating in an emulated 20,000-node wide-area network topology demonstrates both the adaptivity and scalability (in terms of per-node state and network overhead) of both RanSub and SARO. Finally, we use an existing streaming media server to distribute content through SARO running on top of the PlanetLab Internet testbed.
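    The service RanSub provides can be pictured with a tiny sketch. The code below is a centralized stand-in for what RanSub achieves in a decentralized way over a tree (every participant periodically learns a fresh, uniformly drawn subset of peers); it is not RanSub's actual protocol, and all names here are illustrative assumptions.

    ```python
    import random

    def random_subset(members, k, epoch):
        """Uniform random subset of size k, re-drawn each epoch.

        Seeding with the epoch number makes every round's subset
        deterministic yet different across rounds, mimicking the
        'changes periodically, uniform over all participants' property.
        """
        rng = random.Random(epoch)
        return rng.sample(members, min(k, len(members)))

    members = [f"node{i}" for i in range(1000)]
    epoch1 = random_subset(members, 5, epoch=1)
    epoch2 = random_subset(members, 5, epoch=2)
    ```

    A service built on this would, for instance, probe only the five sampled peers per round instead of all 1000, sidestepping the global-probing scaling limit the abstract mentions while still, over many rounds, giving every participant uniform representation.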

    Scalability and accuracy in a large-scale network emulator

    This paper presents ModelNet, a scalable Internet emulation environment that enables researchers to deploy unmodified software prototypes in a configurable Internet-like environment and subject them to faults and varying network conditions. Edge nodes running user-specified OS (operating system) and application software are configured to route their packets through a set of ModelNet core nodes, which cooperate to subject the traffic to the bandwidth, congestion constraints, latency, and loss profile of a target network topology. This paper describes and evaluates the ModelNet architecture and its implementation, including novel techniques to balance emulation accuracy against scalability. The current ModelNet prototype is able to accurately subject thousands of instances of a distributed application to Internet-like conditions with gigabits of bisection bandwidth. Experiments with several large-scale distributed services demonstrate the generality and effectiveness of the infrastructure.
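    The per-link emulation a core node performs can be sketched in a few lines: delay each packet by the link latency plus the queueing time implied by the link bandwidth, and drop packets according to a loss probability. This is a minimal illustrative model only; the class and parameter names are assumptions, not ModelNet's actual pipe implementation.

    ```python
    import random

    class EmulatedLink:
        """Toy model of one emulated pipe: bandwidth, latency, and loss."""

        def __init__(self, latency_s, bandwidth_bps, loss_rate, rng=None):
            self.latency_s = latency_s
            self.bandwidth_bps = bandwidth_bps
            self.loss_rate = loss_rate
            self.busy_until = 0.0  # when the link finishes its current queue
            self.rng = rng or random.Random(0)

        def send(self, now_s, size_bytes):
            """Return the packet's delivery time in seconds, or None if dropped."""
            if self.rng.random() < self.loss_rate:
                return None  # random loss per the configured profile
            transmit_s = size_bytes * 8 / self.bandwidth_bps
            start = max(now_s, self.busy_until)  # queue behind earlier packets
            self.busy_until = start + transmit_s
            return self.busy_until + self.latency_s

    link = EmulatedLink(latency_s=0.05, bandwidth_bps=1_000_000, loss_rate=0.0)
    t1 = link.send(0.0, 1500)  # 1500 B at 1 Mb/s: 12 ms serialization + 50 ms latency
    t2 = link.send(0.0, 1500)  # queues behind the first packet
    ```

    Serializing a second packet sent at the same instant is what models a congested bottleneck: its delivery time includes the first packet's transmission time, which is the kind of accuracy-versus-scalability detail a real emulator must handle for every hop in the target topology.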

    Phenol removal from industrial wastewater using chitosan-immobilized Pseudomonas putida

    The present study deals with the degradation of phenol in industrial wastewater using Pseudomonas putida. The biodegradation process was evaluated at initial phenol concentrations ranging from 50 to 200 mg/l under different conditions. With phenol as the sole carbon source, removal at an initial concentration of 200 mg/l took place within 22 days. A phenol/glucose mixture was used as a dual-substrate system to improve phenol degradation: with glucose as a supplementary substrate, phenol at an initial concentration of 200 mg/l was degraded within 19 days. Acclimated Pseudomonas putida was able to degrade phenol at an initial concentration of 200 mg/l within 15 days. It was also revealed that phenol degradation using acclimated Pseudomonas putida immobilized on chitosan proceeded in the shortest period of time compared with the other conditions. The results showed that the microorganism was able to consume phenol as a substrate.