Evolutionary contingency as non-trivial objective probability: Biological evitability and evolutionary trajectories.
Contingency-theorists have put forth differing accounts of evolutionary contingency. The bulk of these accounts abstractly refer to certain causal structures in which an evolutionarily contingent outcome is supposedly embedded. For example, an outcome is evolutionarily contingent if it is at the end of a 'path-dependent' or 'causally dependent' causal chain. However, this paper argues that many of these proposals fail to include a desideratum - the notion of biological evitability, or that evolutionary outcomes could have been otherwise - that for good theoretical reasons ought to be part of an account of evolutionary contingency. Although the inclusion of this desideratum might seem obvious enough, under some existing accounts, an outcome can be contingent yet inevitable all the same. In my diagnosis of this issue, I develop the idea of trajectory propensity to highlight the fact that there are plausible biological scenarios in which causal structures alone fail to exhaustively determine the biological evitability of evolutionary forms. In the second half of the paper, I present two additional desiderata of an account of evolutionary contingency and, subsequently, proffer a novel account of evolutionary contingency as non-trivial objective probability, which overcomes the shortcomings of some previous proposals. According to this outcome-based account, contingency claims are probabilistic statements about an evolutionary outcome's objective probability of evolution within a specifically defined modal range: an outcome, O, is evolutionarily contingent in modal range, R, to the degree of objective probability, P (where P is strictly between 0 and 1).
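The closing definition can be written compactly. A possible formalization (not the author's own notation) is:

```latex
O \text{ is evolutionarily contingent in modal range } R
\text{ to degree } P
\;\Longleftrightarrow\;
\Pr\nolimits_{R}(O \text{ evolves}) = P, \qquad 0 < P < 1,
```

where the strict inequalities encode the evitability desideratum: an outcome with probability 1 (inevitable) or 0 (impossible) within R does not count as contingent.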
Anti-vascular endothelial growth factor treatment for eye diseases
Hand hygiene promotion in long-term care facilities (LTCF) – a cluster randomized controlled trial
Code coverage of adaptive random testing
Random testing is a basic software testing technique that can be used to assess software reliability as well as to detect software failures. Adaptive random testing has been proposed to enhance the failure-detection capability of random testing. Previous studies have shown that adaptive random testing can use fewer test cases than random testing to detect the first software failure. In this paper, we evaluate and compare the performance of adaptive random testing and random testing from another perspective, that of code coverage. As shown in various investigations, higher code coverage not only brings a higher failure-detection capability, but also improves the effectiveness of software reliability estimation. We conduct a series of experiments based on two categories of code coverage criteria: structure-based coverage and fault-based coverage. Adaptive random testing can achieve higher code coverage than random testing with the same number of test cases. Our experimental results imply that, in addition to having a better failure-detection capability than random testing, adaptive random testing also delivers a higher effectiveness in assessing software reliability, and a higher confidence in the reliability of the software under test even when no failure is detected.
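The idea behind adaptive random testing is to spread test cases more evenly across the input domain than pure random testing does. A minimal sketch of one common variant, fixed-size-candidate-set ART on a one-dimensional numeric domain (the failure predicate and parameter values below are illustrative, not from the paper):

```python
import random

def fscs_art(is_failure, domain=(0.0, 1.0), k=10, max_tests=1000, seed=0):
    """Fixed-Size-Candidate-Set ART: each new test case is the random
    candidate farthest (by distance to its nearest previously executed
    test) from the set of tests already run."""
    rng = random.Random(seed)
    lo, hi = domain
    executed = [rng.uniform(lo, hi)]  # the first test case is purely random
    if is_failure(executed[0]):
        return executed[0], 1
    for n in range(2, max_tests + 1):
        candidates = [rng.uniform(lo, hi) for _ in range(k)]
        # Pick the candidate that maximizes its distance to the nearest
        # already-executed test case, so tests spread across the domain.
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
        if is_failure(best):
            return best, n
    return None, max_tests

# Toy program whose (contiguous) failure region is the interval [0.42, 0.45]
failing_input, tests_used = fscs_art(lambda x: 0.42 <= x <= 0.45)
```

Because consecutive tests are pushed apart, the executed set tends to cover more of the input space (and hence more code) for the same number of test cases, which is the effect the paper measures via coverage criteria.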
Avian Influenza: a global threat needing a global solution
There have been three influenza pandemics since the 1900s, of which the 1918–1919 flu pandemic had the highest mortality rates. The influenza virus infects both humans and birds, and mutates using two mechanisms: antigenic drift and antigenic shift. Currently, the H5N1 avian flu virus is limited to outbreaks among poultry and persons in direct contact with infected poultry, but the mortality rate among infected humans is high. Avian influenza (AI) is endemic in Asia as a result of unregulated poultry rearing in rural areas. Such birds often live in close proximity to humans, and this increases the chance of genetic re-assortment between avian and human influenza viruses, which may produce a mutant strain that is easily transmitted between humans. Once this happens, a global pandemic is likely. Unlike SARS, a person with influenza infection is contagious before the onset of case-defining symptoms, which limits the effectiveness of case isolation as a control strategy. Researchers have shown that carefully orchestrated public health measures could potentially limit the spread of an AI pandemic if implemented soon after the first cases appear. To successfully contain and control an AI pandemic, both national and global strategies are needed. National strategies include source surveillance and control, adequate stockpiles of anti-viral agents, timely production of flu vaccines and healthcare system readiness. Global strategies such as early integrated response, curbing the disease outbreak at source, utilization of global resources, continuing research and open communication are also critical.
An assessment of systems and software engineering scholars and institutions (2002-2006)
This paper summarizes a survey of publications in the field of systems and software engineering from 2002 to 2006. The survey is an ongoing, annual event that identifies the top 15 scholars and institutions over a 5-year period. The rankings are calculated based on the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea, and the top-ranked scholar is Magne Jørgensen of Simula Research Laboratory, Norway. © 2009 Elsevier Inc. All rights reserved.
An assessment of systems and software engineering scholars and institutions (2001-2005)
This paper presents the findings of a five-year study of the top scholars and institutions in the systems and software engineering field, as measured by the quantity of papers published in the journals of the field in 2001-2005. The top scholar is Magne Jørgensen of Simula Research Laboratory, Norway, and the top institution is Korea Advanced Institute of Science and Technology, Korea. This paper is part of an ongoing study, conducted annually, that identifies the top 15 scholars and institutions in the most recent five-year period. © 2007 Elsevier Inc. All rights reserved.
Similarity regularized sparse group lasso for cup to disc ratio computation
© 2017 Optical Society of America. Automatic cup to disc ratio (CDR) computation from color fundus images has been shown to be promising for glaucoma detection. Over the past decade, many algorithms have been proposed. In this paper, we first review the recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs the testing disc image based on a set of reference disc images by integrating the similarity between the testing and reference disc images with the sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The proposed method has been validated using 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under the curve in the diagnostic test reach 0.843 and 0.837 when manually and automatically segmented discs are used, respectively, better than other methods as well.
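The reconstruct-then-estimate pipeline the abstract describes can be sketched as follows. This is a generic sparse group lasso solved by proximal gradient descent on synthetic stand-in data, without the paper's similarity regularization term; the feature dimensions, group assignments, and CDR aggregation rule are all illustrative assumptions:

```python
import numpy as np

def sparse_group_lasso(X, y, groups, lam1=0.01, lam2=0.01, lr=0.01, iters=500):
    """Proximal-gradient sketch: least-squares reconstruction loss with
    an l1 penalty (weight lam1) plus a group-wise l2 penalty (lam2)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n                   # gradient step
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam1, 0.0)  # l1 prox
        for g in np.unique(groups):                    # group-l2 prox
            idx = np.where(groups == g)[0]
            norm = np.linalg.norm(w[idx])
            if norm > 0:
                w[idx] *= max(0.0, 1.0 - lr * lam2 / norm)
    return w

# Synthetic stand-in: 20 reference disc images as 50-dim feature vectors,
# split into 4 groups of similar references, each with a known CDR.
rng = np.random.default_rng(0)
refs = rng.random((50, 20))
ref_cdrs = rng.uniform(0.2, 0.9, 20)
groups = np.arange(20) // 5
test_img = refs @ rng.dirichlet(np.ones(20))  # a mixture of the references

coef = sparse_group_lasso(refs, test_img, groups)
weights = np.abs(coef)
cdr_est = float(ref_cdrs @ weights / (weights.sum() + 1e-12))
```

The group penalty encourages the reconstruction to draw on whole groups of similar reference discs, while the l1 term keeps the coefficient vector sparse; the test image's CDR is then read off as a coefficient-weighted combination of the reference CDRs.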