463 research outputs found

    Adrenocortical status in infants and children with sepsis and septic shock

    Get PDF
    Abstract: Background: The benefit from corticosteroids remains controversial in sepsis and septic shock, and the presence of adrenal insufficiency (AI) has been proposed to justify steroid use. Aim: To determine adrenal state and its relation to outcome in critically ill children admitted with sepsis to the PICU of Cairo University Children's Hospital. Methods: Thirty cases with sepsis and septic shock were studied. Cortisol levels (CL) were estimated at baseline and after high-dose short ACTH stimulation in these patients and in 30 matched controls. Absolute AI was defined as basal CL < 7 μg/dl and peak CL < 18 μg/dl; relative AI (RAI) was diagnosed if the cortisol increment after stimulation was < 9 μg/dl. Results: Overall mortality of cases was 50%. Mean baseline CL in cases was higher than in controls (51.39 μg/dl vs. 12.83 μg/dl, p < 0.001), as was mean CL 60 min after ACTH stimulation (73.38 μg/dl vs. 32.80 μg/dl, p < 0.001). The median percentage rise in cases was lower than in controls (45.3% vs. 151.7%). Basal and post-stimulation cortisol correlated positively with the number of failing organ systems, duration of inotropic support, days of mechanical ventilation, and blood CO2 level, and negatively with blood pH and HCO3. Conclusion: RAI is common in severe sepsis/septic shock; it is associated with greater inotropic support and higher mortality. Studies are warranted to determine whether corticosteroid therapy has a survival benefit in children with RAI and catecholamine-resistant septic shock.
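
    The diagnostic thresholds stated in the abstract amount to a simple decision rule; a minimal sketch (the function name is ours, and the units are μg/dl as in the abstract):

```python
def classify_adrenal_status(basal: float, peak: float) -> str:
    """Classify adrenal status from basal and post-ACTH peak cortisol (ug/dl),
    using the thresholds given in the abstract:
      absolute AI: basal < 7 and peak < 18
      relative AI: increment (peak - basal) < 9
    """
    if basal < 7 and peak < 18:
        return "absolute AI"
    if peak - basal < 9:
        return "relative AI"
    return "adequate response"

# Example using the control-group means from the abstract: basal 12.83,
# post-stimulation 32.80 -> increment ~19.97, an adequate response.
print(classify_adrenal_status(12.83, 32.80))
```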

    Privacy-Preserving Outsourcing of Large-Scale Nonlinear Programming to the Cloud

    Full text link
    The massive amounts of data generated by various sources have given rise to big data analytics. Solving large-scale nonlinear programming problems (NLPs) is one important big data analytics task with applications in many domains such as transport and logistics. However, NLPs are usually too computationally expensive for resource-constrained users. Fortunately, cloud computing provides an alternative and economical service that lets resource-constrained users outsource their computation tasks to the cloud. One major concern with outsourcing NLPs, however, is the leakage of the user's private information contained in NLP formulations and results. Although much work has been done on privacy-preserving outsourcing of computation tasks, little attention has been paid to NLPs. In this paper, we for the first time investigate secure outsourcing of general large-scale NLPs with nonlinear constraints. A secure and efficient transformation scheme at the user side is proposed to protect the user's private information; at the cloud side, the generalized reduced gradient method is applied to effectively solve the transformed large-scale NLPs. The proposed protocol is implemented on a cloud computing testbed. Experimental evaluations demonstrate that significant time can be saved for users and that the proposed mechanism has the potential for practical use. Comment: Ang Li and Wei Du equally contributed to this work. This work was done when Wei Du was at the University of Arkansas. 2018 EAI International Conference on Security and Privacy in Communication Networks (SecureComm)
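
    As a toy illustration of the disguise-then-outsource idea (not the paper's actual transformation scheme or the generalized reduced gradient solver), an affine change of variables x = Qy + r with a secret invertible Q hides both the problem data and the true solution from the cloud. The sketch below uses a linear least-squares problem as a stand-in for an NLP:

```python
import numpy as np

rng = np.random.default_rng(0)

# Private problem: minimize ||A x - b||^2 (a toy quadratic stand-in for an NLP).
A = rng.normal(size=(5, 3))
b = rng.normal(size=5)

# User-side masking (hypothetical affine scheme): substitute x = Q y + r with a
# secret invertible matrix Q and a random shift r. The cloud then sees only the
# transformed data A' = A Q and b' = b - A r, and solves for y.
Q = rng.normal(size=(3, 3))
r = rng.normal(size=3)
A_masked = A @ Q
b_masked = b - A @ r

# "Cloud side": solve the masked problem.
y, *_ = np.linalg.lstsq(A_masked, b_masked, rcond=None)

# "User side": recover the true solution locally with the secret (Q, r).
x_recovered = Q @ y + r

# Sanity check against solving the private problem directly.
x_direct, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_recovered, x_direct)
```

    Because ||A'y - b'||^2 = ||A(Qy + r) - b||^2, the masked minimizer maps back exactly to the private one; the cloud never sees A, b, or x.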

    Passive decoy state quantum key distribution with practical light sources

    Full text link
    Decoy states have been proven to be a very useful method for significantly enhancing the performance of quantum key distribution systems with practical light sources. While active modulation of the intensity of the laser pulses is an effective way of preparing decoy states in principle, in practice passive preparation might be desirable in some scenarios. Typical passive schemes involve parametric down-conversion. More recently, it has been shown that phase-randomized weak coherent pulses (WCP) can also be used for the same purpose [M. Curty {\it et al.}, Opt. Lett. {\bf 34}, 3238 (2009)]. This proposal requires only linear optics together with a simple threshold photon detector, which shows the practical feasibility of the method. Most importantly, the resulting secret key rate is comparable to the one delivered by an active decoy state setup with an infinite number of decoy settings. In this paper we extend these results, now showing specifically the analysis for other practical scenarios with different light sources and photo-detectors. In particular, we consider sources emitting thermal states, phase-randomized WCP, and strong coherent light in combination with several types of photo-detectors, like, for instance, threshold photon detectors, photon number resolving detectors, and classical photo-detectors. Our analysis also includes the effect that detection inefficiencies and noise in the form of dark counts shown by current threshold detectors might have on the final secret key rate. Moreover, we provide estimations of the effects that statistical fluctuations due to a finite data size can have in practical implementations. Comment: 17 pages, 14 figures
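
    Phase-randomized WCP have Poissonian photon-number statistics, which is the starting point of such decoy-state analyses: multi-photon pulses are the fraction that decoy methods must bound. A minimal sketch (the intensity value is illustrative, not from the paper):

```python
import math

def photon_number_prob(mu: float, n: int) -> float:
    """P(n) for a phase-randomized weak coherent pulse of mean photon
    number mu: a Poisson distribution, e^{-mu} * mu^n / n!."""
    return math.exp(-mu) * mu**n / math.factorial(n)

# For a typical signal intensity mu = 0.5, most pulses carry 0 or 1 photons;
# the small multi-photon tail is what the decoy analysis bounds.
mu = 0.5
p0 = photon_number_prob(mu, 0)
p1 = photon_number_prob(mu, 1)
p_multi = 1 - p0 - p1
```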

    Implementation of two-party protocols in the noisy-storage model

    Get PDF
    The noisy-storage model allows the implementation of secure two-party protocols under the sole assumption that no large-scale reliable quantum storage is available to the cheating party. No quantum storage is thereby required for the honest parties. Examples of such protocols include bit commitment, oblivious transfer and secure identification. Here, we provide a guideline for the practical implementation of such protocols. In particular, we analyze security in a practical setting where the honest parties themselves are unable to perform perfect operations and need to deal with practical problems such as errors during transmission and detector inefficiencies. We provide explicit security parameters for two different experimental setups using weak coherent and parametric down-conversion sources. In addition, we analyze a modification of the protocols based on decoy states. Comment: 41 pages, 33 figures; this is a companion paper to arXiv:0906.1030 considering practical aspects. v2: published version, title changed in accordance with PRA guidelines

    Optimal Fleet Composition via Dynamic Programming and Golden Section Search

    Get PDF
    In this paper, we consider an optimization problem arising in vehicle fleet management. The problem is to construct a heterogeneous vehicle fleet in such a way that cost is minimized subject to a constraint on the overall fleet size. The cost function incorporates fixed and variable costs associated with the fleet, as well as hiring costs that are incurred when vehicle requirements exceed fleet capacity. We first consider the simple case when there is only one type of vehicle. We show that in this case the cost function is convex, and thus the problem can be solved efficiently using the well-known golden section method. We then devise an algorithm, based on dynamic programming and the golden section method, for solving the general problem in which there are multiple vehicle types. We conclude the paper with some simulation results.
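
    For the single-vehicle-type case, the convexity result means a standard golden section search suffices. A minimal sketch with a hypothetical cost curve (the cost function, coefficients, and bracket are illustrative, not from the paper):

```python
import math

def golden_section_min(f, a: float, b: float, tol: float = 1e-8) -> float:
    """Minimize a unimodal (e.g. convex) function f on [a, b] by golden
    section search, shrinking the bracket by 1/phi per iteration."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):        # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                  # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Toy convex fleet-cost curve: owning costs grow with fleet size n while
# hiring costs (incurred when demand exceeds capacity) shrink with it.
def cost(n: float) -> float:
    return 50.0 * n + 4000.0 / (n + 1)

n_star = golden_section_min(cost, 0.0, 100.0)  # optimum near n ~ 7.94
```

    In the paper's general algorithm, a search like this would be invoked inside a dynamic program over the vehicle types; the sketch shows only the inner one-dimensional step.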

    A Regularized Graph Layout Framework for Dynamic Network Visualization

    Full text link
    Many real-world networks, including social and information networks, are dynamic structures that evolve over time. Such dynamic networks are typically visualized using a sequence of static graph layouts. In addition to providing a visual representation of the network structure at each time step, the sequence should preserve the mental map between layouts of consecutive time steps to allow a human to interpret the temporal evolution of the network. In this paper, we propose a framework for dynamic network visualization in the on-line setting, where only present and past graph snapshots are available to create the present layout. The proposed framework creates regularized graph layouts by augmenting the cost function of a static graph layout algorithm with a grouping penalty, which discourages nodes from deviating too far from other nodes belonging to the same group, and a temporal penalty, which discourages large node movements between consecutive time steps. The penalties increase the stability of the layout sequence, thus preserving the mental map. We introduce two dynamic layout algorithms within the proposed framework, namely dynamic multidimensional scaling (DMDS) and dynamic graph Laplacian layout (DGLL). We apply these algorithms on several data sets to illustrate the importance of both grouping and temporal regularization for producing interpretable visualizations of dynamic networks. Comment: To appear in Data Mining and Knowledge Discovery, supporting material (animations and MATLAB toolbox) available at http://tbayes.eecs.umich.edu/xukevin/visualization_dmkd_201