
    Aerodynamic Design of the Hybrid Wing Body Propulsion-Airframe Integration

    A hybrid wing body (HWB) concept is being considered by NASA as a potential subsonic transport aircraft that meets aerodynamic, fuel, emission, and noise goals in the 2030s time frame. While the concept promises advantages over conventional tube-and-wing aircraft, it also poses unknowns and risks that require in-depth and broad assessment. In particular, the configuration entails a tight integration of the airframe and propulsion geometries, so the aerodynamic impact has to be carefully evaluated. With the propulsion nacelle installed on the (upper) body, lift and drag are affected by mutual interference between the airframe and nacelle, and the static margin for longitudinal stability is adversely changed. We develop a design approach in which the integrated airframe (HWB) and propulsion geometry is accounted for simultaneously in a simple algebraic manner, via parameterization of the planform and of airfoils at control sections of the wing body. In this paper, we present the design of a 300-passenger transport that employs distributed electric fans for propulsion. Trim for stability is achieved through the wingtip twist angle. The geometric shape variables are determined by adjoint-based optimization, minimizing drag subject to lift, pitch-moment, and geometry constraints. The design results clearly show the influence of the installed nacelle and of trimming for stability on the aerodynamic characteristics. Drag minimization with the trim constraint yields a reduction of 10 counts in the drag coefficient.
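    As a rough illustration of the kind of constrained drag minimization this abstract describes, the sketch below sets up a toy problem: minimize a surrogate drag coefficient over a twist angle and a few notional shape variables, subject to a lift target and a pitch-moment (trim) constraint. The surrogate aerodynamic model and all coefficients are made up for illustration; the paper itself uses an adjoint method with a full aerodynamic analysis, not this simplified gradient-based stand-in.

```python
# Toy constrained drag minimization: drag objective with lift and
# pitch-moment (trim) constraints over twist and notional shape variables.
# The algebraic surrogate below is NOT the paper's CFD/adjoint model; all
# coefficients are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

def aero_coeffs(x):
    """Toy surrogate returning (CL, CD, CM) for design variables x.

    x[0]: wingtip twist angle (deg); x[1:]: notional section shape variables.
    """
    twist, shape = x[0], x[1:]
    cl = 0.45 + 0.01 * twist + 0.02 * shape.sum()
    cd = 0.0200 + 0.0004 * twist**2 + 0.001 * np.sum(shape**2)
    cm = -0.02 + 0.004 * twist - 0.01 * shape.sum()
    return cl, cd, cm

x0 = np.zeros(4)                      # initial twist + 3 shape variables
cons = [
    {"type": "ineq", "fun": lambda x: aero_coeffs(x)[0] - 0.45},  # CL >= target
    {"type": "eq",   "fun": lambda x: aero_coeffs(x)[2]},         # CM = 0 (trim)
]
bounds = [(-5.0, 5.0)] * len(x0)      # keep geometry changes modest

res = minimize(lambda x: aero_coeffs(x)[1], x0,
               method="SLSQP", bounds=bounds, constraints=cons)
print("optimal variables:", res.x, "drag:", res.fun)
```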

    Statistical consideration when adding new arms to ongoing clinical trials: the potentials and the caveats.

    BACKGROUND: Platform trials improve the efficiency of the drug development process through flexible features such as adding and dropping arms as evidence emerges. The benefits and practical challenges of implementing novel trial designs have been discussed widely in the literature, yet less consideration has been given to the statistical implications of adding arms. MAIN: We explain the different statistical considerations that arise from allowing new research interventions to be added to ongoing studies. We present recent methodological developments that address these issues and illustrate design and analysis approaches that might be enhanced to provide robust inference from platform trials. We also discuss the implications of changing the control arm, how patient eligibility for different arms may complicate the trial design and analysis, and how operational bias may arise when some results of the trial are revealed. Lastly, we comment on the appropriateness and application of platform trials in phase II and phase III settings, as well as in publicly funded versus industry-funded trials. CONCLUSION: Platform trials provide great opportunities for improving the efficiency of evaluating interventions. Although several statistical issues are present, a range of methods is available that allows robust and efficient design and analysis of these trials.

    Structural and functional basis for inhibition of erythrocyte invasion by antibodies that target Plasmodium falciparum EBA-175

    Disrupting erythrocyte invasion by Plasmodium falciparum is an attractive approach to combating malaria. P. falciparum EBA-175 (PfEBA-175) engages the host receptor Glycophorin A (GpA) during invasion and is a leading vaccine candidate. Antibodies that recognize PfEBA-175 can prevent parasite growth, although not all antibodies are inhibitory. Here, using X-ray crystallography, small-angle X-ray scattering, and functional studies, we report the structural basis and mechanism of inhibition by two PfEBA-175 antibodies. Structures of each antibody in complex with the PfEBA-175 receptor-binding domain reveal that the most potent inhibitory antibody, R217, engages critical GpA-binding residues and the proposed dimer interface of PfEBA-175. A second, weakly inhibitory antibody, R218, binds to an asparagine-rich surface loop. We show that the epitopes identified by the structural studies are critical for antibody binding. Together, the structural and mapping studies reveal distinct mechanisms of action: using a direct receptor-binding assay, we show that R217 directly blocks GpA engagement while R218 does not. Our studies elaborate on the complex interaction between PfEBA-175 and GpA and highlight new approaches to targeting the molecular mechanism of P. falciparum invasion of erythrocytes. The results suggest that studies aiming to improve the efficacy of blood-stage vaccines, whether by selecting single antigens or combining multiple parasite antigens, should assess the antibody response to defined inhibitory epitopes as well as the response to the whole protein antigen. Finally, this work demonstrates the importance of identifying inhibitory epitopes, and of avoiding decoy epitopes, in antibody-based therapies, vaccines, and diagnostics.

    The Personalised Randomized Controlled Trial: Evaluation of a new trial design

    In some clinical scenarios, for example severe sepsis caused by extensively drug-resistant bacteria, there is uncertainty between many common treatments, but a conventional multi-arm randomized trial is not possible because individual participants may not be eligible to receive certain treatments. The Personalised Randomized Controlled Trial design allows each participant to be randomized between a "personalised randomization list" of treatments that are suitable for them. The primary aim is to produce treatment rankings that can guide the choice of treatment, rather than to focus on estimates of relative treatment effects. Here we use simulation to assess several novel analysis approaches for this innovative trial design. One approach resembles a network meta-analysis, in which participants with the same personalised randomization list are treated as a trial and both direct and indirect evidence are used. We evaluate this proposed analysis and compare it with analyses that make less use of indirect evidence. We also propose new performance measures, including the expected improvement in outcome if the trial's rankings are used to inform future treatment rather than choosing treatment at random. We conclude that analysis of a personalised randomized controlled trial can be performed by pooling data from different types of participants and, for the parameters of our simulation, is robust to moderate subgroup-by-intervention interactions. The proposed approach performs well with respect to estimation bias and coverage. It provides an overall treatment ranking with reasonable precision and is likely to improve outcomes on average if used to determine intervention policies and guide individual clinical decisions.
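    The sketch below is a toy simulation, under assumed effect sizes and a simple continuous outcome, of the basic mechanic this abstract describes: each participant is randomized only within a personalised randomization list, data are pooled across participants, and treatments are ranked by their estimated effects. The per-arm-mean analysis is a crude stand-in for the model-based approaches evaluated in the paper.

```python
# Toy simulation of a personalised randomized controlled trial: each
# participant gets a random "personalised randomization list", is randomized
# within it, and pooled data are used to rank treatments. Effect sizes,
# outcome model, and the simple per-arm analysis are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_treat, n_pat = 5, 2000
true_effect = np.array([0.0, 0.1, 0.2, 0.3, 0.4])   # hypothetical mean benefit

records = []
for _ in range(n_pat):
    k = rng.integers(2, n_treat + 1)                 # size of personal list
    personal_list = rng.choice(n_treat, size=k, replace=False)
    t = rng.choice(personal_list)                    # randomize within the list
    y = true_effect[t] + rng.normal(scale=1.0)       # continuous outcome
    records.append((t, y))

treatments = np.array([t for t, _ in records])
outcomes = np.array([y for _, y in records])

# Pooled per-arm means (a crude stand-in for the model-based pooled analysis).
means = np.array([outcomes[treatments == t].mean() for t in range(n_treat)])
ranking = np.argsort(-means)                         # best treatment first
print("estimated ranking:", ranking, "true best arm:", true_effect.argmax())
```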

    A Review of Bayesian Perspectives on Sample Size Derivation for Confirmatory Trials.

    Funder: Biometrika Trust
    Sample size derivation is a crucial element of planning any confirmatory trial. The required sample size is typically derived from constraints on the maximal acceptable Type I error rate and the minimal desired power. Power depends on the unknown true effect and tends to be calculated either for the smallest relevant effect or for a likely point alternative. The former can be problematic if the minimal relevant effect is close to the null, since it then requires an excessively large sample size, while the latter is dubious since it does not account for the a priori uncertainty about the likely alternative effect. A Bayesian perspective on sample size derivation for a frequentist trial can reconcile arguments about the relative a priori plausibility of alternative effects with ideas based on the relevance of effect sizes. Many suggestions as to how such "hybrid" approaches could be implemented in practice have been put forward, but key quantities are often defined in subtly different ways in the literature. Starting from the traditional, entirely frequentist approach to sample size derivation, we derive consistent definitions for the most commonly used hybrid quantities and highlight connections, before discussing and demonstrating their use in sample size derivation for clinical trials.

    A review of Bayesian perspectives on sample size derivation for confirmatory trials

    Sample size derivation is a crucial element of the planning phase of any confirmatory trial. A sample size is typically derived from constraints on the maximal acceptable type I error rate and a minimal desired power. Here, power depends on the unknown true effect size. In practice, power is typically calculated either for the smallest relevant effect size or for a likely point alternative. The former might be problematic if the minimal relevant effect is close to the null, thus requiring an excessively large sample size. The latter is dubious since it does not account for the a priori uncertainty about the likely alternative effect size. A Bayesian perspective on sample size derivation for a frequentist trial naturally emerges as a way of reconciling arguments about the relative a priori plausibility of alternative effect sizes with ideas based on the relevance of effect sizes. Many suggestions as to how such 'hybrid' approaches could be implemented in practice have been put forward in the literature. However, key quantities such as assurance, probability of success, or expected power are often defined in subtly different ways. Starting from the traditional, entirely frequentist approach to sample size derivation, we derive consistent definitions for the most commonly used 'hybrid' quantities and highlight connections, before discussing and demonstrating their use in the context of sample size derivation for clinical trials.
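    To make the 'hybrid' quantities concrete, the sketch below computes power at a point alternative and assurance (prior-averaged, or 'expected', power) for a one-sided two-sample z-test with known variance. The design values and the normal prior on the standardized effect are hypothetical and only illustrate how the two notions can differ numerically.

```python
# Minimal numerical sketch of 'hybrid' sample size quantities for a one-sided
# two-sample z-test with known unit variance. Alpha, the per-group sample
# size, and the prior mean/sd are hypothetical design values.
import numpy as np
from scipy import stats
from scipy.integrate import quad

alpha, n = 0.025, 252            # one-sided level, per-group sample size
z_crit = stats.norm.ppf(1 - alpha)

def power(delta):
    """Frequentist power at standardized effect size delta."""
    return stats.norm.cdf(delta * np.sqrt(n / 2) - z_crit)

prior = stats.norm(loc=0.25, scale=0.1)   # prior on the true standardized effect

# Assurance / expected power: power(delta) averaged over the prior.
assurance, _ = quad(lambda d: power(d) * prior.pdf(d), -1.0, 1.5)

print(f"power at the prior mean (0.25): {power(0.25):.3f}")   # ~0.80
print(f"assurance (prior-averaged power): {assurance:.3f}")   # noticeably lower
```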

    Analyzing Norm Violations in Live-Stream Chat

    Toxic language, such as hate speech, can deter users from participating in online communities and enjoying popular platforms. Previous approaches to detecting toxic language and norm violations have been primarily concerned with conversations from online forums and social media, such as Reddit and Twitter. These approaches are less effective when applied to conversations on live-streaming platforms, such as Twitch and YouTube Live, where each comment is visible only for a limited time and lacks a thread structure establishing its relationship to other comments. In this work, we share the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms. We define norm-violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch. We articulate several facets of live-stream data that differ from other forums, and demonstrate that existing models perform poorly in this setting. By conducting a user study, we identify the informational context humans use in live-stream moderation, and we train models that leverage context to identify norm violations. Our results show that appropriate contextual information can boost moderation performance by 35%.
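    A minimal sketch of the general idea of conditioning a moderation classifier on preceding chat messages is shown below. The encoder name, separator strategy, and two-label setup are assumptions for illustration; this is not the authors' released model or dataset, and the classification head here is untrained, so it would need fine-tuning on annotated chat before the scores are meaningful.

```python
# Sketch of a context-conditioned moderation classifier for live-stream chat:
# the last few chat messages are flattened into a context string and encoded
# as a (context, target comment) pair. The encoder and label set are
# placeholder assumptions; the head below is untrained and must be fine-tuned
# on annotated chat (e.g. moderated Twitch comments) to produce useful scores.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-uncased"                     # placeholder encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def score_comment(comment: str, preceding: list[str], k: int = 5) -> float:
    """Return P(norm violation) for a comment given the last k chat messages."""
    context = " ".join(preceding[-k:])          # flatten recent chat as context
    inputs = tokenizer(context, comment,        # encode as a (context, target) pair
                       truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Example: whether a short comment violates channel norms often depends on
# the surrounding chat, which the plain comment text alone does not capture.
history = ["gg that play was insane", "streamer pls do the speedrun", "LUL"]
print(score_comment("ratio + L + fell off", history))
```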