
    Experimental hut comparisons of nets treated with carbamate or pyrethroid insecticides, washed or unwashed, against pyrethroid-resistant mosquitoes.

    The efficacy against mosquitoes (Diptera: Culicidae) of a bednet treated with carbamate insecticide [carbosulfan capsule suspension (CS) 200 mg/m²] was compared with four types of pyrethroid-treated nets in veranda-trap huts at Yaokoffikro near Bouaké, Côte d'Ivoire, where the malaria vector Anopheles gambiae Giles carries the kdr gene (conferring pyrethroid resistance) at high frequency and Culex quinquefasciatus Say is also pyrethroid resistant. The pyrethroids compared were lambdacyhalothrin CS 18 mg/m², alphacypermethrin water dispersible granules (WG) 20 mg/m², deltamethrin 50 mg/m² (PermaNet) and permethrin emulsifiable concentrate (EC) 500 mg/m². Insecticidal power and personal protection from mosquito bites were assessed before and after the nets were used for 8 months and hand washed five times in cold soapy water. Before washing, all treatments except permethrin significantly reduced blood-feeding and all had significant insecticidal activity against An. gambiae. The carbosulfan net gave significantly higher killing of An. gambiae than all pyrethroid treatments except the PermaNet. Against Culex spp., carbosulfan was more insecticidal and gave a significantly better protective effect than any of the pyrethroid treatments. After washing, treated nets retained varying degrees of efficacy against both mosquito genera, though least for the carbosulfan net. Washed nets with three types of pyrethroid treatment (alphacypermethrin, lambdacyhalothrin, permethrin) gave significantly higher mortality rates of Culex than huts with the same pyrethroid-treated nets before washing. After five washes, the PermaNet, which is sold as a long-lasting insecticidal product, performed no better than the other nets under our experimental conditions.

    Population dynamics of Herves transposable element in Anopheles gambiae


    Sources of Relativistic Jets in the Galaxy

    Black holes of stellar mass and neutron stars in binary systems are first detected as hard X-ray sources using high-energy space telescopes. Relativistic jets in some of these compact sources are found by means of multiwavelength observations with ground-based telescopes. The X-ray emission probes the inner accretion disk and the immediate surroundings of the compact object, whereas the synchrotron emission from the jets is observed in the radio and infrared bands and could in future be detected at even shorter wavelengths. Black-hole X-ray binaries with relativistic jets mimic, on a much smaller scale, many of the phenomena seen in quasars and are thus called microquasars. Because of their proximity, their study opens the way to a better understanding of the relativistic jets seen elsewhere in the Universe. From the observation of two-sided moving jets it is inferred that the ejecta in microquasars move at relativistic speeds similar to those believed to be present in quasars. The simultaneous multiwavelength approach to microquasars reveals, on short timescales, the close connection between instabilities in the accretion disk seen in the X-rays and the ejection of relativistic clouds of plasma observed as synchrotron emission at longer wavelengths. Besides contributing to a deeper comprehension of accretion disks and jets, microquasars may serve in the future to determine the distances of jet sources using constraints from special relativity, and the spin of black holes using general relativity. (Comment: 39 pages, TeX, 8 figures; to appear in vol. 37 (1999) of Annual Reviews of Astronomy and Astrophysics.)
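
    As context for the distance constraint mentioned above, a minimal sketch of the standard special-relativity argument (a textbook relation, not quoted from this abstract): if the approaching and receding ejecta of a two-sided jet show proper motions \mu_a and \mu_r, then

        \mu_{a,r} = \frac{\beta c \sin\theta}{d\,(1 \mp \beta\cos\theta)}, \qquad
        \beta\cos\theta = \frac{\mu_a - \mu_r}{\mu_a + \mu_r}, \qquad
        d \le \frac{c}{\sqrt{\mu_a \mu_r}},

    where \beta c is the ejection speed, \theta the angle to the line of sight and d the source distance; the distance limit follows from requiring \beta \le 1.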

    The role of Comprehension in Requirements and Implications for Use Case Descriptions

    Within requirements engineering it is generally accepted that in writing specifications (or indeed any requirements-phase document) one attempts to produce an artefact that will be simple for the reader to comprehend. That is, whether the document is intended for customers to validate requirements, or for engineers to understand what the design must deliver, comprehension is an important goal for the author. Indeed, advice on producing ‘readable’ or ‘understandable’ documents is often included in courses on requirements engineering. However, few researchers, particularly within the software engineering domain, have attempted either to define or to understand the nature of comprehension and its implications for guidance on the production of quality requirements. This paper therefore examines the nature of textual comprehension in depth, drawing heavily on research in discourse processing, and suggests some implications for requirements (and other) software documentation. In essence, we find that the guidance on writing requirements that is prevalent within software engineering may be based on assumptions which oversimplify the nature of comprehension. Hence, the paper examines guidelines which have been proposed, in this case for use case descriptions, and the extent to which they agree with discourse process theory, before suggesting refinements to the guidelines which attempt to apply lessons learned from a richer understanding of the underlying discourse process theory. For example, we suggest subtly different sets of writing guidelines for the different tasks of requirements, specification and design.

    Comparison of algorithms that detect drug side effects using electronic healthcare databases

    Electronic healthcare databases are becoming more readily available and are thought to have excellent potential for generating adverse drug reaction (ADR) signals. The Health Improvement Network (THIN) database is an electronic healthcare database containing medical information on over 11 million patients, and it has excellent potential for detecting ADRs. In this paper we apply four existing electronic healthcare database signal-detection algorithms (MUTARA, HUNT, Temporal Pattern Discovery and a modified ROR) to the THIN database for a selection of drugs from six chosen drug families. This is the first comparison of ADR signalling algorithms that includes MUTARA and HUNT, and it enabled us to set a benchmark for the adverse drug reaction signalling ability of the THIN database. The drugs were chosen selectively to enable comparison with previous work and to provide variety. We found that no algorithm was generally superior and that the algorithms’ natural thresholds act at variable stringencies. Furthermore, none of the algorithms performed well at detecting rare ADRs.
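
    As an illustration of the simplest of the signalling approaches named above, the sketch below computes a standard reporting odds ratio (ROR) from a 2x2 drug/event contingency table and flags a signal when the lower bound of its 95% confidence interval exceeds 1. The function name, counts and threshold are illustrative assumptions; the paper itself uses a modified ROR whose details are not reproduced here.

        import math

        def ror_signal(a, b, c, d, z=1.96):
            """Standard reporting odds ratio from a 2x2 drug/event table.

            a: records with drug and event    b: with drug, without event
            c: without drug, with event       d: without drug or event
            Returns (ROR, lower bound of the 95% CI); a common rule flags a
            potential ADR signal when the lower bound exceeds 1.
            """
            ror = (a * d) / (b * c)
            se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of ln(ROR)
            lower = math.exp(math.log(ror) - z * se)  # lower confidence bound
            return ror, lower

        # Hypothetical counts, for illustration only
        ror, lower = ror_signal(a=40, b=960, c=200, d=98800)
        print(f"ROR={ror:.1f}, 95% CI lower bound={lower:.1f}")
        if lower > 1:
            print("Flag as a potential ADR signal")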

    Comparative Field Evaluation of Combinations of Long-Lasting Insecticide-Treated Nets and Indoor Residual Spraying, Relative to Either Method Alone, for Malaria Prevention in an Area Where the Main Vector is Anopheles arabiensis.

    Long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS) are commonly used together in the same households to improve malaria control, despite inconsistent evidence on whether such combinations actually offer better protection than nets alone or IRS alone. Comparative tests were conducted using experimental huts fitted with LLINs, untreated nets, IRS plus untreated nets, or combinations of LLINs and IRS, in an area where Anopheles arabiensis is the predominant malaria vector species. Three LLIN types, Olyset®, PermaNet 2.0® and Icon Life® nets, and three IRS treatments, pirimiphos-methyl, DDT and lambda-cyhalothrin, were used singly or in combination. We compared the number of mosquitoes entering huts, the proportion and number killed, the proportion prevented from blood-feeding, the time at which mosquitoes exited the huts, and the proportion caught exiting. The tests were run for four months in the dry season and another six months in the wet season, each time using new intact nets. All the net types, used with or without IRS, prevented >99% of indoor mosquito bites. Adding PermaNet 2.0® or Icon Life® nets, but not Olyset® nets, to huts with any IRS increased mortality of malaria vectors relative to IRS alone. However, of all IRS treatments, only pirimiphos-methyl significantly increased vector mortality relative to LLINs alone, and this increase was modest. Overall, median mortality of An. arabiensis caught in huts with any of the treatments did not exceed 29%. No treatment reduced entry of the vectors into huts, except for marginal reductions due to PermaNet 2.0® nets and DDT. More than 95% of all mosquitoes were caught in exit traps rather than inside huts. Where the main malaria vector is An. arabiensis, adding IRS to houses with intact pyrethroid LLINs does not enhance household-level protection, except where the IRS uses non-pyrethroid insecticides such as pirimiphos-methyl, which can confer modest enhancements. In contrast, adding intact bednets on top of IRS enhances protection by preventing mosquito blood-feeding (even if the nets are non-insecticidal) and by slightly increasing mosquito mortality (in the case of LLINs). The primary mode of action of intact LLINs against An. arabiensis is clearly bite prevention rather than insecticidal activity. Therefore, where resources are limited, the priority should be to ensure that everyone at risk consistently uses LLINs and that the nets are regularly replaced before becoming excessively torn. Measures that maximize bite prevention (e.g. proper net sizes to effectively cover sleeping spaces, stronger net fibres that resist tears and burns, and net-use practices that preserve net longevity) should be emphasized.

    The Hubble Constant

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows their distance to be determined by geometric means. The second category uses the all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72-74 km/s/Mpc, with typical errors of 2-3 km/s/Mpc. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km/s/Mpc with typical errors of 1-2 km/s/Mpc. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods. (Comment: extensively revised and updated since the 2007 version; accepted by Living Reviews in Relativity as a major (2014) update of LRR 10, 4, 2007.)
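
    As a worked illustration of the object-based route described above (standard low-redshift relations, not specific to this review): an object of known intrinsic luminosity L observed with flux F gives a distance, which together with its recession velocity yields the Hubble constant,

        v = H_0 d, \qquad d_L = \sqrt{\frac{L}{4\pi F}}, \qquad
        H_0 \simeq \frac{c z}{d_L} \quad (z \ll 1).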

    Discovering study-specific gene regulatory networks

    Microarrays are commonly used in biology because of their ability to measure thousands of genes simultaneously under different conditions. Because such data typically contain a large number of variables but far fewer samples, scalable network analysis techniques are often employed. In particular, consensus approaches have recently been used that combine multiple microarray studies in order to find networks that are more robust. The purpose of this paper, however, is to combine multiple microarray studies to automatically identify subnetworks that are distinctive to specific experimental conditions rather than common to them all. To better understand key regulatory mechanisms and how they change under different conditions, we derive unique networks from multiple independent networks built using the graphical lasso (glasso), which goes beyond standard correlations. This involves calculating cluster prediction accuracies to detect the most predictive genes for a specific set of conditions. We differentiate between accuracies calculated using cross-validation within a selected cluster of studies (the intra prediction accuracy) and those calculated on a set of independent studies belonging to different study clusters (the inter prediction accuracy). Finally, we compare our method's results to related state-of-the-art techniques. We explore how the proposed pipeline performs on both synthetic data and real data (wheat and Fusarium). Our results show that subnetworks specific to subsets of studies can be identified reliably and that these networks reflect key mechanisms that are fundamental to the experimental conditions in each of those subsets.
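
    As a rough sketch of the per-study network estimation described above (not the paper's actual pipeline), the code below fits one sparse gene network per study with scikit-learn's GraphicalLasso and keeps edges present in every study of one cluster but absent from all studies of another. The cluster sizes, regularisation strength and edge threshold are illustrative assumptions.

        import numpy as np
        from sklearn.covariance import GraphicalLasso
        from sklearn.preprocessing import StandardScaler

        def study_network(expr, alpha=0.05):
            """expr: samples x genes matrix; returns a boolean adjacency matrix
            derived from the estimated sparse precision (inverse covariance)."""
            X = StandardScaler().fit_transform(expr)
            prec = GraphicalLasso(alpha=alpha, max_iter=200).fit(X).precision_
            adj = np.abs(prec) > 1e-3   # illustrative edge threshold
            np.fill_diagonal(adj, False)
            return adj

        # Synthetic stand-ins for two clusters of studies (30 samples x 20 genes)
        rng = np.random.default_rng(0)
        cluster_a = [rng.normal(size=(30, 20)) for _ in range(3)]
        cluster_b = [rng.normal(size=(30, 20)) for _ in range(3)]

        # Edges found in every study of cluster A but in no study of cluster B
        # approximate a subnetwork specific to cluster A's conditions.
        in_a = np.logical_and.reduce([study_network(e) for e in cluster_a])
        in_b = np.logical_or.reduce([study_network(e) for e in cluster_b])
        specific_to_a = in_a & ~in_b
        print(specific_to_a.sum() // 2, "edges unique to cluster A")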