
    Ethical Challenges of Preexposure Prophylaxis for HIV

    On July 16, 2012, emtricitabine/tenofovir (Truvada) became the first drug approved by the US Food and Drug Administration for preexposure prophylaxis (PrEP) of human immunodeficiency virus (HIV) for adults at high risk. While PrEP appears highly effective with consistent adherence, effective implementation poses ethical challenges for the medical and public health community. For PrEP users, it is necessary to maintain adherence, safe sex practices, and routine HIV testing and medical monitoring to maximize benefits and reduce risks. On a population level, comparative cost-effectiveness should guide priority-setting, while safety measures must address drug resistance concerns without burdening patients' access. Equitable distribution will require addressing the needs of underserved populations, women (for whom efficacy data are mixed), and people living in developing countries with high HIV incidence; meanwhile, it is necessary to consider the fair use of drugs for treatment vs. prevention and the appropriate design of new HIV prevention studies.

    The Long-term Coercive Effect of State Community Benefit Laws on Hospital Community Health Orientation

    This study examines the long-term coercive effect of state community benefit laws (CB Laws) on the provision of community health activities in U.S. acute care hospitals. The sample included all the not-for-profit and investor-owned acute care hospitals for which 1994 and 2006 AHA Annual Survey data were available. A panel design was used to longitudinally examine the effect that state CB Laws had on hospital community health orientation activities and the provision of health promotion services, after controlling for the influence of other organizational and environmental variables that might affect these activities and services. The authors found that hospitals in both CB Law and non-CB Law states increased their number of orientation activities and promotion services from 1994 to 2006. However, there was no significant difference in the gains in these activities and services between these two groups of hospitals. These results suggest that other environmental and organizational factors may mediate the effect of the state CB Laws over time.

    Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients

    While neuroevolution (evolving neural networks) has a successful track record across a variety of domains from reinforcement learning to artificial life, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights is likely to break existing functionality, providing no learning signal even if some individual weight changes were beneficial. This paper proposes a solution by introducing a family of safe mutation (SM) operators that aim within the mutation operator itself to find a degree of change that does not alter network behavior too much, but still facilitates exploration. Importantly, these SM operators do not require any additional interactions with the environment. The most effective SM variant capitalizes on the intriguing opportunity to scale the degree of mutation of each individual weight according to the sensitivity of the network's outputs to that weight, which requires computing the gradient of outputs with respect to the weights (instead of the gradient of error, as in conventional deep learning). This safe mutation through gradients (SM-G) operator dramatically increases the ability of a simple genetic algorithm-based neuroevolution method to find solutions in high-dimensional domains that require deep and/or recurrent neural networks (which tend to be particularly brittle to mutation), including domains that require processing raw pixels. By improving our ability to evolve deep neural networks, this new safer approach to mutation expands the scope of domains amenable to neuroevolution.
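    A minimal sketch of the output-gradient idea behind SM-G, written in PyTorch for a toy policy network; the network, the batch of states, and the exact per-weight scaling rule are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of safe mutation through output gradients (SM-G): perturb each
# weight less where the network's outputs are more sensitive to it.
import torch
import torch.nn as nn

def output_sensitivity(policy, states):
    """Per-weight sensitivity: summed |d output_k / d weight| over output dims."""
    outputs = policy(states)
    sens = [torch.zeros_like(p) for p in policy.parameters()]
    for k in range(outputs.shape[1]):
        grads = torch.autograd.grad(outputs[:, k].sum(),
                                    list(policy.parameters()),
                                    retain_graph=True)
        for s, g in zip(sens, grads):
            s += g.abs()
    return sens

def safe_mutate(policy, states, sigma=0.1):
    """Add Gaussian noise to the weights, damped by output sensitivity."""
    sens = output_sensitivity(policy, states)
    with torch.no_grad():
        for p, s in zip(policy.parameters(), sens):
            p.add_(sigma * torch.randn_like(p) / (1.0 + s))  # simple damping choice

# Illustrative usage on a small feedforward policy and random states.
policy = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))
states = torch.randn(64, 8)
safe_mutate(policy, states)
```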

    ES Is More Than Just a Traditional Finite-Difference Approximator

    An evolution strategy (ES) variant based on a simplification of a natural evolution strategy recently attracted attention because it performs surprisingly well in challenging deep reinforcement learning domains. It searches for neural network parameters by generating perturbations to the current set of parameters, checking their performance, and moving in the aggregate direction of higher reward. Because it resembles a traditional finite-difference approximation of the reward gradient, it can naturally be confused with one. However, this ES optimizes for a different gradient than just reward: It optimizes for the average reward of the entire population, thereby seeking parameters that are robust to perturbation. This difference can channel ES into distinct areas of the search space relative to gradient descent, and, consequently, to networks with distinct properties. This unique robustness-seeking property, and its consequences for optimization, are demonstrated in several domains. They include humanoid locomotion, where networks from policy gradient-based reinforcement learning are significantly less robust to parameter perturbation than ES-based policies solving the same task. While the implications of such robustness and robustness-seeking remain open to further study, this work's main contribution is to highlight such differences and their potential importance.
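    The update loop the abstract describes can be summarized in a few lines; the sketch below assumes a placeholder `evaluate` function standing in for an environment rollout, and the population size, noise scale, and learning rate are illustrative choices.

```python
# Sketch of the population-based ES update: sample perturbations of the
# current parameters, evaluate each, and step in the reward-weighted
# average direction of the noise.
import numpy as np

def es_step(theta, evaluate, pop_size=50, sigma=0.1, lr=0.01):
    noise = np.random.randn(pop_size, theta.size)        # one perturbation per member
    rewards = np.array([evaluate(theta + sigma * eps) for eps in noise])
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    step = noise.T @ advantages / (pop_size * sigma)     # aggregate direction of higher reward
    return theta + lr * step

# Toy usage with a purely illustrative quadratic objective.
theta = np.random.randn(10)
for _ in range(200):
    theta = es_step(theta, lambda p: -np.sum(p ** 2))
```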

    You Could Have Told Me That in the First Place: Five Tips That Might Have Saved a Young Lawyer a Lot of Trouble

    I will open with a confession: I have very, very little to contribute to legal scholarship. My day-to-day work as a lawyer and a parent keeps me busy. My career to date as a generalist has not led me to develop any great substantive expertise in a particular area of the law. Even my war stories are boring because they cluster around briefs, procedural defaults, and oral arguments. But I do have one thing to offer. I have been lucky in my career to work in “Biglaw,” then at a medium-sized firm of about fifty lawyers, and most recently at a small firm of just three lawyers. I made my share of mistakes at each stop—some routine, some painful, and almost all avoidable. For the most part, I have been paying attention along the way. And so what I have to share with you is a set of five tips, in no particular order, that could have prevented about eighty percent of my missteps as a young lawyer.

    Universal quantum computation on a semiconductor quantum wire network

    Universal quantum computation (UQC) using Majorana fermions on a 2D topological superconducting (TS) medium remains an outstanding open problem. This is because the quantum gate set that can be generated by braiding of the Majorana fermions does not include any two-qubit gate, nor the single-qubit π/8 phase gate. In principle, it is possible to create these crucial extra gates using quantum interference of Majorana fermion currents. However, it is not clear if the motion of the various order parameter defects (vortices, domain walls, etc.), to which the Majorana fermions are bound in a TS medium, can be quantum coherent. We show that these obstacles can be overcome using a semiconductor quantum wire network in the vicinity of an s-wave superconductor, by constructing topologically protected two-qubit gates and any arbitrary single-qubit phase gate in a topologically unprotected manner, which can be error corrected using magic state distillation. Thus our strategy, using a judicious combination of topologically protected and unprotected gate operations, realizes UQC on a quantum wire network with a remarkably high error threshold of 0.14, as compared to 10^{-3} to 10^{-4} in ordinary unprotected quantum computation. Comment: 7 pages, 2 figures.
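    For orientation only, the single-qubit π/8 phase gate mentioned above is conventionally written (up to a global phase) as:

```latex
% The pi/8 (T) phase gate, up to a global phase.
T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}
```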

    On mining complex sequential data by means of FCA and pattern structures

    Nowadays data sets are available in very complex and heterogeneous ways. Mining of such data collections is essential to support many real-world applications ranging from healthcare to marketing. In this work, we focus on the analysis of "complex" sequential data by means of interesting sequential patterns. We approach the problem using the elegant mathematical framework of Formal Concept Analysis (FCA) and its extension based on "pattern structures". Pattern structures are used for mining complex data (such as sequences or graphs) and are based on a subsumption operation, which in our case is defined with respect to the partial order on sequences. We show how pattern structures, along with projections (i.e., a data reduction of sequential structures), are able to enumerate more meaningful patterns and increase the computing efficiency of the approach. Finally, we show the applicability of the presented method for discovering and analyzing interesting patient patterns from a French healthcare data set on cancer. The quantitative and qualitative results (with annotations and analysis from a physician) are reported in this use case, which is the main motivation for this work. Keywords: data mining; formal concept analysis; pattern structures; projections; sequences; sequential data. Comment: An accepted publication in the International Journal of General Systems. The paper was created in the wake of the conference on Concept Lattices and Their Applications (CLA'2013). 27 pages, 9 figures, 3 tables.
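    As a small illustration of the subsumption operation the abstract relies on, the sketch below implements the usual containment test for sequences of itemsets; representing sequences as lists of Python sets is an assumption made for the example, not the paper's data structures.

```python
# Sketch of sequence subsumption: `small` is subsumed by `big` if each of
# its itemsets is contained in a distinct itemset of `big`, in order.
def is_subsumed(small, big):
    j = 0
    for itemset in small:
        while j < len(big) and not set(itemset) <= set(big[j]):
            j += 1
        if j == len(big):
            return False
        j += 1
    return True

# Example: the pattern <{a}{c}> occurs in the sequence <{a,b}{d}{c}>.
print(is_subsumed([{"a"}, {"c"}], [{"a", "b"}, {"d"}, {"c"}]))  # True
```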

    HSCT materials and structures: An MDC perspective

    The key High Speed Civil Transport (HSCT) features which control the materials selection are discussed. Materials are selected based on weight and production economics. The top-down and bottom-up approaches to material selection are compared for the Mach 2.4 study baseline aircraft. The key materials and structures related tasks which remain to be accomplished prior to proceeding with the building of the HSCT aircraft are examined.