Status of HPV vaccine introduction and barriers to country uptake.
During the last 12 years, over 80 countries have introduced national HPV vaccination programs. The majority of these countries are high-income or upper-middle-income countries. The barriers to HPV vaccine introduction remain greatest in those countries with the highest burden of cervical cancer and the greatest need for vaccination. Innovation and global leadership are required to increase and sustain introductions in low-income and lower-middle-income countries.
A new design of friction test rig and determination of friction coefficient when warm forming an aluminium alloy
Demand for lightweight vehicles has grown in recent years, driven by the need to reduce fuel consumption and environmental impact, and with it interest in hot or warm forming of sheet aluminium alloys for use in vehicle body structures. For better understanding and optimisation of these forming processes, knowledge of the friction coefficient between tooling and workpiece at elevated temperature is critical. However, because of the difficulties of measurement at elevated temperature, most studies of friction are limited to room temperature. In this study, a friction rig was designed for isothermal tests at elevated temperature. The test rig enables pure sliding between pins (made of a tool steel) and a metal sheet. Using the rig, the friction behaviour of Forge Ease 278, a water-based solid lubricant pre-applied to aluminium alloy AA5754, was investigated under isothermal warm forming conditions. The effects of testing temperature, sliding speed and applied pressure on the friction coefficient were studied. It was found that Forge Ease produced a low friction coefficient of around 0.05 at temperatures above room temperature and below 250 °C. The lubricant performance degrades at 350 °C and the friction coefficient increases markedly. Neither sliding speed (up to 150 mm s⁻¹) nor applied pressure (up to 12.8 MPa) had a significant effect on the friction coefficient of Forge Ease.
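As a back-of-the-envelope illustration of the quantity being measured (this is not the authors' rig software, and the force values are hypothetical), the Coulomb friction coefficient is simply the ratio of the tangential pull force to the normal clamping force:

```python
# Illustrative sketch: computing mu = F_tangential / F_normal from
# hypothetical readings in a pure-sliding pin-on-sheet friction test.

def friction_coefficient(pull_force_n: float, normal_force_n: float) -> float:
    """Return the Coulomb friction coefficient mu = F_pull / F_normal."""
    if normal_force_n <= 0:
        raise ValueError("normal force must be positive")
    return pull_force_n / normal_force_n

# Hypothetical readings: a 25 N pull force under a 500 N clamping load
mu = friction_coefficient(25.0, 500.0)
print(round(mu, 3))  # 0.05, the order of magnitude reported for Forge Ease 278
```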
Interpretable machine learning for genomics
High-throughput technologies such as next-generation sequencing allow biologists to observe cell function with unprecedented resolution, but the resulting datasets are too large and complicated for humans to understand without the aid of advanced statistical methods. Machine learning (ML) algorithms, which are designed to automatically find patterns in data, are well suited to this task. Yet these models are often so complex as to be opaque, leaving researchers with few clues about underlying mechanisms. Interpretable machine learning (iML) is a burgeoning subdiscipline of computational statistics devoted to making the predictions of ML models more intelligible to end users. This article is a gentle and critical introduction to iML, with an emphasis on genomic applications. I define relevant concepts, motivate leading methodologies, and provide a simple typology of existing approaches. I survey recent examples of iML in genomics, demonstrating how such techniques are increasingly integrated into research workflows. I argue that iML solutions are required to realize the promise of precision medicine. However, several open challenges remain. I examine the limitations of current state-of-the-art tools and propose a number of directions for future research. While the horizon for iML in genomics is wide and bright, continued progress requires close collaboration across disciplines.
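One of the simplest post-hoc techniques in the iML toolbox is permutation importance: shuffle one feature at a time and measure how much the model's fit degrades. The sketch below uses synthetic data and a plain least-squares model invented for illustration; it is not taken from the article itself.

```python
import numpy as np

# Minimal permutation-importance sketch on synthetic data: the target depends
# on feature 0 only, so permuting feature 0 should hurt the fit far more than
# permuting feature 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit a linear model

def mse(X, y, coef):
    return float(np.mean((X @ coef - y) ** 2))

baseline = mse(X, y, coef)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])       # break the feature-target link
    importance.append(mse(Xp, y, coef) - baseline)

print(importance[0] > importance[1])  # feature 0 dominates
```

The same recipe applies unchanged to an opaque model: only the fitted predictor inside `mse` needs to be swapped out.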
No explanation without inference
Complex algorithms are increasingly used to automate high-stakes decisions in sensitive areas like healthcare and finance. However, the opacity of such models raises problems of intelligibility and trust. Researchers in interpretable machine learning (iML) have proposed a number of solutions, including local linear approximations, rule lists, and counterfactuals. I argue that all three methods share the same fundamental flaw, namely a disregard for severe testing. Techniques for quantifying uncertainty and error are central to scientific explanation, yet iML has largely ignored this methodological imperative. I consider examples that illustrate the dangers of such negligence, with an emphasis on issues of scoping and confounding. Drawing on recent work in philosophy of science, I conclude that there can be no explanation, algorithmic or otherwise, without inference. I propose several ways to severely test existing iML methods and evaluate the resulting trade-offs.
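One simple way to bring uncertainty quantification into a local linear explanation, sketched here on an invented black-box function (this is an illustration of the general idea, not the paper's proposal), is to bootstrap the surrogate fit and report an interval for the local slope rather than a single number:

```python
import numpy as np

# Attach uncertainty to a local linear explanation by bootstrapping it.
rng = np.random.default_rng(1)

def black_box(x):                           # stand-in for an opaque model
    return np.sin(3 * x) + 0.5 * x

x0 = 0.2                                    # point to be explained
xs = x0 + rng.normal(scale=0.1, size=300)   # samples from a local neighbourhood
ys = black_box(xs)

slopes = []
for _ in range(500):                        # bootstrap the local linear fit
    idx = rng.integers(0, len(xs), len(xs))
    a, b = np.polyfit(xs[idx], ys[idx], 1)  # slope and intercept
    slopes.append(a)

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"local slope 95% interval: [{lo:.2f}, {hi:.2f}]")
```

A wide interval is itself informative: it warns the user that the "explanation" is not stable enough to act on.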
Lessons learnt from human papillomavirus (HPV) vaccine demonstration projects and national programmes in low- and middle-income countries
The value of demonstration projects for new interventions: The case of human papillomavirus vaccine introduction in low- and middle-income countries.
Demonstration projects or pilots of new public health interventions aim to build learning and capacity to inform country-wide implementation. The authors examined the value of HPV vaccination demonstration projects and initial national programmes in low-income and lower-middle-income countries, including potential drawbacks and how value for national scale-up might be increased. Data from a systematic review and key informant interviews, analyzed thematically, included 55 demonstration projects and 8 national programmes implemented between 2007 and 2015 (89 years of combined experience). Initial demonstration projects quickly provided consistent lessons. Value would increase if projects were designed to inform sustainable national scale-up. Well-designed projects can test multiple delivery strategies, implementation for challenging areas and populations, and integration with national systems. Introduction of vaccines or other health interventions, particularly those involving new target groups or delivery strategies, needs flexible funding approaches to address specific questions of scalability and sustainability, including learning lessons through phased national expansion.
On the Complexity of Case-Based Planning
We analyze the computational complexity of problems related to case-based planning: planning when a plan for a similar instance is known, and planning from a library of plans. We prove that planning from a single case has the same complexity as generative planning (i.e., planning "from scratch"); under an extended definition of cases, the complexity is reduced if the domain stored in the case is similar to the one in which plans are sought. Planning from a library of cases is shown to have the same complexity. In both cases, the complexity of planning remains, in the worst case, PSPACE-complete.
Evolutionary connectionism: algorithmic principles underlying the evolution of biological organisation in evo-devo, evo-eco and evolutionary transitions
The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation. However, current evolutionary theory is poorly equipped to describe how these organisations change over evolutionary time and especially how that results in adaptive complexes at successive scales of organisation (the key problem is that evolution is self-referential, i.e. the products of evolution change the parameters of the evolutionary process). Here we first reinterpret the central open questions in these domains from a perspective that emphasises the common underlying themes. We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by converting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. 
We use the term “evolutionary connectionism” to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary systems and modify the adaptive capabilities of natural selection over time. We review the evidence supporting the functional equivalences between the domains of learning and of evolution, and discuss the potential for this to resolve conceptual problems in our understanding of the evolution of developmental, ecological and reproductive organisations and, in particular, the major evolutionary transitions.
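The connectionist mechanism invoked above can be made concrete with a toy Hopfield-style associative memory (the pattern and network size are invented for illustration, not drawn from the paper): a Hebbian rule incrementally adjusts pairwise connection weights so that a stored pattern becomes an attractor, recoverable even from a corrupted cue.

```python
import numpy as np

# Toy Hopfield network: Hebbian learning stores one +/-1 pattern as an
# attractor; repeated threshold updates recall it from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

W = np.outer(pattern, pattern).astype(float)  # Hebbian rule: w_ij = s_i * s_j
np.fill_diagonal(W, 0.0)                      # no self-connections

cue = pattern.copy()
cue[0] = -cue[0]                              # corrupt two units
cue[3] = -cue[3]

state = cue.astype(float)
for _ in range(5):                            # synchronous threshold updates
    state = np.sign(W @ state)
    state[state == 0] = 1.0

print(bool(np.array_equal(state, pattern)))   # the stored pattern is recalled
```

The point of the analogy is that no unit "knows" the pattern: system-level recall emerges solely from incremental adjustment of pairwise relationships, which is the behaviour the authors map onto selection acting on developmental, ecological and reproductive organisations.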
