
    A Wake-Up Call: Lessons from Ebola for the World's Health Systems

    The report ranks the world's poorest countries on the state of their public health systems, finding that 28 have weaker defenses in place than Sierra Leone, where, alongside Liberia and Guinea, the current Ebola crisis has already claimed more than 9,500 lives. The report also advises that prevention is better than cure, finding that the international Ebola relief effort in West Africa has cost $4.3bn, whereas strengthening the health systems of those countries in the first place would have cost just $1.58bn. Ahead of an Ebola summit attended by world leaders in Brussels today, the charity warns that alongside immediate, much-needed support to Sierra Leone, Liberia and Guinea, lessons need to be learned and applied to other vulnerable countries around the world.

    Trends and tradition: Negotiating different cultural models in relation to sustainable craft and artisan production

    If the identity of ‘design’ as a practice is contested, then the relationship of design and designers to craft and craft practices can be hugely confused. This lack of clarity can encourage non-design-based organisations to promote ‘trend forecasting’ as a panacea for the design dilemma associated with craft production for non-traditional markets. Consequently, fashion-sensitive trends become perceived as the driving force of design-led consumption. In this context, how do we understand what ‘trend forecasting’ is, and what it becomes, when used in this manner? How does it contribute, or fail to contribute, to the sustainability of local design cultures? This paper examines how these challenges have been interrogated and experienced through practice at Masters level at Central Saint Martins College of Art and Design. It seeks sustainable strategies for design and craft, drawing on contemporary artefacts realised from a diverse range of projects, sources and geographical locations.

    Surrogate Accelerated Bayesian Inversion for the Determination of the Thermal Diffusivity of a Material

    Determination of the thermal properties of a material is an important task in many scientific and engineering applications. How a material behaves when subjected to high or fluctuating temperatures can be critical to the safety and longevity of a system's essential components. The laser flash experiment is a well-established technique for indirectly measuring the thermal diffusivity, and hence the thermal conductivity, of a material. In previous works, optimization schemes have been used to find estimates of the thermal conductivity and other quantities of interest that best fit a given model to experimental data. Adopting a Bayesian approach allows prior beliefs about uncertain model inputs to be conditioned on experimental data to determine a posterior distribution, but probing this distribution using sampling techniques such as Markov chain Monte Carlo methods can be incredibly computationally intensive. This difficulty is especially acute for forward models consisting of time-dependent partial differential equations. We pose the problem of determining the thermal conductivity of a material via the laser flash experiment as a Bayesian inverse problem in which the laser intensity is also treated as uncertain. We introduce a parametric surrogate model that takes the form of a stochastic Galerkin finite element approximation, also known as a generalized polynomial chaos expansion, and show how it can be used to sample efficiently from the approximate posterior distribution. This approach gives access not only to the sought-after estimate of the thermal conductivity but also to important information about its relationship to the laser intensity, and to information for uncertainty quantification. We also investigate the effects of the spatial profile of the laser on the estimated posterior distribution for the thermal conductivity.
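
    The general pattern behind this kind of surrogate-accelerated inference can be illustrated with a minimal sketch (not the authors' code): an inexpensive surrogate stands in for the PDE forward solve inside a random-walk Metropolis loop. The exponential-decay surrogate, synthetic data and priors below are hypothetical placeholders; in the paper the surrogate is a stochastic Galerkin (polynomial chaos) approximation of the laser flash model.

    import numpy as np

    # Minimal sketch of surrogate-accelerated random-walk Metropolis sampling.
    # "surrogate_model" is a hypothetical stand-in for an expensive PDE forward
    # solve; the observed data are synthetic and for illustration only.
    rng = np.random.default_rng(0)
    times = np.linspace(0.0, 1.0, 50)

    def surrogate_model(theta):
        """Cheap approximation of the forward model for theta = (k, q),
        i.e. (thermal conductivity, laser intensity)."""
        k, q = theta
        return q * np.exp(-k * times)                  # placeholder response curve

    observed = surrogate_model((2.0, 1.5)) + 0.01 * rng.normal(size=times.size)
    noise_var = 0.01 ** 2

    def log_posterior(theta):
        if np.any(np.asarray(theta) <= 0):             # simple positivity prior
            return -np.inf
        residual = observed - surrogate_model(theta)
        return -0.5 * np.sum(residual ** 2) / noise_var

    def metropolis(n_samples, theta0, step=0.05):
        theta = np.asarray(theta0, dtype=float)
        lp = log_posterior(theta)
        samples = []
        for _ in range(n_samples):
            proposal = theta + step * rng.normal(size=theta.size)
            lp_prop = log_posterior(proposal)
            if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
                theta, lp = proposal, lp_prop
            samples.append(theta.copy())
        return np.array(samples)

    chain = metropolis(5000, theta0=[1.0, 1.0])
    print("posterior mean (k, q):", chain[2500:].mean(axis=0))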

    Ending Newborn Deaths: Ensuring Every Baby Survives

    The first 24 hours of a child's life are the most dangerous, with more than one million babies dying each year on their first and only day of life. Half of all first-day deaths around the world could be prevented if the mother and baby had access to free health care and a skilled midwife.

    Neural-Attention-Based Deep Learning Architectures for Modeling Traffic Dynamics on Lane Graphs

    Deep neural networks can be powerful tools, but require careful application-specific design to ensure that the most informative relationships in the data are learnable. In this paper, we apply deep neural networks to the nonlinear spatiotemporal physics problem of vehicle traffic dynamics. We consider problems of estimating macroscopic quantities (e.g., the queue at an intersection) at a lane level. First-principles modeling at the lane scale has been a challenge due to complexities in modeling social behaviors such as lane changes, and those behaviors' resultant macro-scale effects. Following domain knowledge that upstream/downstream lanes and neighboring lanes affect each other's traffic flows in distinct ways, we apply a form of neural attention that allows the neural network layers to aggregate information from different lanes in different manners. Using a microscopic traffic simulator as a testbed, we obtain results showing that an attentional neural network model can use information from nearby lanes to improve predictions, and that explicitly encoding the lane-to-lane relationship types significantly improves performance. We also demonstrate the transfer of our learned neural network to a more complex road network, discuss how its performance degradation may be attributable to new traffic behaviors induced by increased topological complexity, and motivate learning dynamics models from many road network topologies. Comment: To appear at the 2019 IEEE Conference on Intelligent Transportation Systems.
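
    A minimal sketch of the relation-typed attention idea described above (not the authors' implementation): each lane aggregates features from its upstream, downstream and adjacent lanes using a separate projection and attention vector per relation type. All parameters, dimensions and names below are hypothetical.

    import numpy as np

    # Sketch of relation-typed attention over a lane graph (illustrative only).
    # Each lane aggregates neighbour features with a separate projection and
    # attention vector per relation type, so the different lane-to-lane
    # relationships are treated distinctly.
    rng = np.random.default_rng(1)
    D = 8                                    # feature size per lane
    RELATIONS = ("upstream", "downstream", "adjacent")

    # Hypothetical parameters (these would be learned in practice).
    W = {r: rng.normal(scale=0.1, size=(D, D)) for r in RELATIONS}
    a = {r: rng.normal(scale=0.1, size=D) for r in RELATIONS}

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attend(lane_feat, neighbours):
        """neighbours: dict mapping relation type -> list of feature vectors."""
        out = lane_feat.copy()
        for rel, feats in neighbours.items():
            if not feats:
                continue
            projected = np.stack([W[rel] @ f for f in feats])        # (N, D)
            scores = softmax(projected @ a[rel] + lane_feat @ a[rel])
            out += scores @ projected                                # weighted sum
        return out

    # Toy usage: one lane with two upstream lanes and one adjacent lane.
    lane = rng.normal(size=D)
    nbrs = {"upstream": [rng.normal(size=D), rng.normal(size=D)],
            "downstream": [],
            "adjacent": [rng.normal(size=D)]}
    print(attend(lane, nbrs))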

    On the acceleration of wavefront applications using distributed many-core architectures

    In this paper we investigate the use of distributed graphics processing unit (GPU)-based architectures to accelerate pipelined wavefront applications—a ubiquitous class of parallel algorithms used in the solution of a number of scientific and engineering problems. Specifically, we employ a recently developed port of the LU solver (from the NAS Parallel Benchmark suite) to investigate the performance of these algorithms on high-performance computing solutions from NVIDIA (Tesla C1060 and C2050) as well as on traditional clusters (AMD/InfiniBand and IBM BlueGene/P). Benchmark results are presented for problem classes A to C, and a recently developed performance model is used to provide projections for problem classes D and E, the latter of which represents a billion-cell problem. Our results demonstrate that while the theoretical performance of GPU solutions far exceeds that of many traditional technologies, the sustained application performance is currently comparable for scientific wavefront applications. Finally, a breakdown of the GPU solution is conducted, exposing PCIe overheads and decomposition constraints. A new k-blocking strategy is proposed to improve the future performance of this class of algorithm on GPU-based architectures.
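
    The dependency pattern that defines these wavefront codes can be sketched in a few lines of plain Python (illustrative only, not the LU benchmark itself): each cell depends on its west and south neighbours, so all cells on the same anti-diagonal are independent and can be processed in parallel. Blocking such sweeps along one axis is the idea behind the proposed k-blocking strategy.

    import numpy as np

    # Sketch of a 2D pipelined-wavefront sweep (illustrative only).
    # Each cell depends on its west and south neighbours, so all cells on the
    # same anti-diagonal (i + j = const) are independent and could be computed
    # in parallel -- the property GPU wavefront codes exploit.
    def wavefront_sweep(nx, ny):
        grid = np.zeros((nx, ny))
        for diag in range(nx + ny - 1):              # sweep diagonal by diagonal
            cells = [(i, diag - i) for i in range(nx) if 0 <= diag - i < ny]
            for i, j in cells:                       # independent within a diagonal
                west = grid[i - 1, j] if i > 0 else 1.0
                south = grid[i, j - 1] if j > 0 else 1.0
                grid[i, j] = 0.5 * (west + south)    # placeholder stencil update
        return grid

    print(wavefront_sweep(4, 4))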

    Participatory healthcare service design and innovation

    This paper describes the use of Experience Based Design (EBD), a participatory methodology for healthcare service design, to improve the outpatient service for older people at Sheffield Teaching Hospitals. The paper discusses the challenges of moving from stories to designing improvements and of co-designing for wicked problems, as well as the effects of participants' limited scope for action. It concludes by proposing that such problems are common to participatory service design in large institutions and recommends that future versions of EBD incorporate more tools to promote divergent thinking.

    Lyophilisation of lentiviral pseudotypes for the development and distribution of virus neutralisation assay kits for rabies, Marburg and influenza viruses

    Purpose: Some conventional serological assays can accurately quantify neutralising antibody responses raised against epitopes on virus glycoproteins, enabling mass vaccine evaluation and serosurveillance studies to take place. However, these assays often necessitate the handling of wild-type virus in expensive high-biosafety laboratories, which restricts the scope of their application, particularly in resource-deprived areas. A solution to this issue is the use of lentiviral pseudotype viruses (PVs)—chimeric, replication-deficient virions that imitate the binding and entry mechanisms of their wild-type equivalents. Pseudotype virus neutralisation assays (PVNAs) bypass high-biosafety requirements and yield comparable results to established assays. This study explores the potential for using lyophilisation of pseudotypes as a cost-effective, alternative means for production, distribution and storage of a PVNA-based diagnostic kit. Methods & Materials: Rabies, Marburg and H5 subtype influenza virus pseudotypes were each suspended in cryoprotectant solutions of various molarities and subjected to freeze-drying before incubation at a variety of temperatures, humidities and time periods. Samples were then employed in antibody neutralisation assays using specific sera. Results: High levels of PV titre were retained post-lyophilisation, with acceptable levels of virus activity maintained even after medium-term storage in tropical conditions. Also, the performance of PVs in neutralisation assays was not affected by the lyophilisation process. Conclusion: These results confirm the viability of a freeze-dried PVNA-based diagnostic kit, which could considerably facilitate in-field serology for a number of clinically important viruses.

    An investigation of the performance portability of OpenCL

    This paper reports on the development of an MPI/OpenCL implementation of LU, an application-level benchmark from the NAS Parallel Benchmark suite. An account of the design decisions addressed during the development of this code is presented, demonstrating the importance of memory arrangement and work-item/work-group distribution strategies when applications are deployed on different device types. The resulting platform-agnostic, single-source application is benchmarked on a number of different architectures, and is shown to be 1.3–1.5× slower than native FORTRAN 77 or CUDA implementations on a single node, and 1.3–3.1× slower on multiple nodes. We also explore the potential performance gains of OpenCL’s device-fission capability, demonstrating up to a 3× speed-up over our original OpenCL implementation.
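
    As a small illustration of the work-item/work-group distribution decisions discussed above, the sketch below (plain Python, illustrative only, not the paper's code) rounds a problem's global NDRange dimensions up to a multiple of the chosen work-group size. Pre-2.0 OpenCL requires each global dimension to divide evenly by the corresponding local dimension, so the padded work-items are typically masked off inside the kernel; the grid and work-group sizes here are hypothetical.

    # Sketch of a work-item/work-group sizing helper (illustrative only).
    # Pre-2.0 OpenCL requires each global dimension to be a multiple of the
    # corresponding work-group (local) dimension, so a common strategy is to
    # round the problem size up and mask off the padded work-items in the kernel.
    def padded_ndrange(problem_size, work_group):
        """Round each global dimension up to a multiple of the local dimension."""
        return tuple(
            ((n + wg - 1) // wg) * wg
            for n, wg in zip(problem_size, work_group)
        )

    # Example: a 102 x 102 x 5 grid of cells with 16 x 16 x 1 work-groups.
    local_size = (16, 16, 1)
    global_size = padded_ndrange((102, 102, 5), local_size)
    print(global_size)          # (112, 112, 5)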