
    Electron microscopy at the Marine Biological Association: 1961-2006. Occasional Publication of the Marine Biological Association 23

    This memoir recalls the instruments in the Electron Microscope Unit and the staff, students and visitors who used them. Accessory equipment is also described because much of it was innovative and built in the laboratory; moreover, much of the science would not have been possible without it. This publication includes 33 figures, 4 plates and 7 appendices. The appendices record that 54 MBA staff and 196 students and visitors have used the microscopes and that 413 titles have been published (to the end of 2006).

    Deconstructing Weight Management Interventions for Young Adults: Looking Inside the Black Box of the EARLY Consortium Trials.

    Objective: The goal of the present study was to deconstruct the 17 treatment arms used in the Early Adult Reduction of weight through LifestYle (EARLY) weight management trials. Methods: Intervention materials were coded to reflect behavioral domains and behavior change techniques (BCTs) within those domains planned for each treatment arm. The analytical hierarchy process was employed to determine an emphasis profile of domains in each intervention. Results: The intervention arms used BCTs from all 16 domains, with an average of 29.3 BCTs per intervention arm. All 12 of the interventions included BCTs from the six domains of Goals and Planning, Feedback and Monitoring, Social Support, Shaping Knowledge, Natural Consequences, and Comparison of Outcomes; 11 of the 12 interventions shared 15 BCTs in common across those six domains. Conclusions: Weight management interventions are complex. The shared set of BCTs used in the EARLY trials may represent a core intervention that could be studied to determine the required emphases of BCTs and whether additional BCTs add to or detract from efficacy. Deconstructing interventions will aid in reproducibility and understanding of active ingredients.
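The analytical hierarchy process mentioned above derives priority weights as the principal eigenvector of a pairwise-comparison matrix. A minimal generic sketch via power iteration (illustrative only; the comparison values below are hypothetical and not the study's actual domain judgements):

```python
def ahp_weights(M, iters=100):
    """Approximate AHP priority weights as the principal eigenvector
    of a positive pairwise-comparison matrix M, via power iteration."""
    n = len(M)
    w = [1.0 / n] * n  # start from uniform weights
    for _ in range(iters):
        # Multiply M by the current weight vector, then renormalize
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical example: domain A judged 3x as important as B,
# 5x as important as C (matrix is reciprocal, diagonal of ones)
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
```

With this matrix, `ahp_weights(M)` puts most of the emphasis on the first domain, matching the intuition behind an "emphasis profile".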

    Prevalence, safety and effectiveness of oral anticoagulant use in people with and without dementia or cognitive impairment: a systematic review and meta-analysis

    BACKGROUND: Differences in management and outcomes of oral anticoagulant (OAC) use may exist for people with and without dementia or cognitive impairment (CI). OBJECTIVE: To systematically review the prevalence and the safety and effectiveness outcomes of OAC use in people with and without dementia or CI. METHODS: MEDLINE, EMBASE and CINAHL were searched for studies reporting prevalence or safety and effectiveness outcomes of OAC use for people with and without dementia, published between 2000 and September 2017. Study selection, data extraction and quality assessment were performed by two reviewers. RESULTS: Twenty-seven studies met pre-specified inclusion criteria (21 prevalence studies, six outcomes studies). People with dementia had 52% lower odds of receiving OAC compared with people without dementia. Mean OAC prevalence was 32% for people with dementia, compared with 48% for people without dementia. There was no difference in the composite outcome of embolic events, myocardial infarction, and all-cause death between the dementia and non-dementia groups (adjusted hazard ratio (HR) 0.72, 95% CI 0.45-1.14, p=0.155). The bleeding rate was lower for people without dementia (HR 0.56, 95% CI 0.37-0.85). Adverse warfarin events were more common for residents of long-term care with dementia (adjusted incidence rate ratio 1.48, 95% CI 1.20-1.82). Community-dwelling people with dementia treated with warfarin had poorer anticoagulation control than those without dementia (mean time in therapeutic range (TTR) % ± SD: 38±26 (dementia) vs. 61±27 (no dementia), p<0.0001). CONCLUSION: A lower proportion of people with dementia received oral anticoagulation compared with people without dementia. People with dementia had higher bleeding risk and poorer anticoagulation control when treated with warfarin.

    How do people respond to self-test results? A cross-sectional survey

    Background: Self-tests, tests on medical conditions that can be performed by consumers without consulting a doctor first, are frequently used. Nevertheless, there are concerns about the safety of self-testing, as it may delay diagnosis and appropriate treatment in the case of inappropriate use of the test or false-negative results. It is unclear whether self-tests stimulate appropriate follow-up behaviour. Our aim was to examine the frequency of self-test use and consumers' response to self-test results in terms of their confidence in the result, reassurance by the test result, and follow-up behaviour. Methods: A two-step cross-sectional survey was designed. A random sample of 6700 Internet users in an existing Internet panel received an online questionnaire on the use of self-tests. Self-tests were defined as tests on body materials, initiated by consumers with the aim of diagnosing a disease or risk factor. A second questionnaire on consumers' response to self-test results was sent to the respondents who were identified as self-testers in the first questionnaire (n = 703). Results: 18.1% (799/4416) of the respondents had ever performed a self-test, the most frequently used tests being those for diabetes (5.3%), kidney disease (4.9%), cholesterol (4.5%), urinary tract infection (1.9%), and HIV/AIDS and Chlamydia (both 1.6%). A total of 78.1% of the testers with a normal test result and 81.4% of those with an abnormal result reported confidence in this result. Almost all (95.6%) of the testers with a normal result felt reassured. After a normal result, 78.1% did not take any further action and 5.8% consulted a doctor; after an abnormal result, the corresponding figures were 9.3% and 72.2%, respectively. Conclusions: Respondents who had performed a self-test seemed to base their follow-up behaviour on the result of the test. They had confidence in the test result and were often reassured by a normal result. After an abnormal result, most self-testers sought medical care. Because consumers seem to trust self-test results, further research should focus on the development of consumer information addressing indications for performing a self-test, the validity of self-tests, and appropriate interpretation of and management after a test.

    Optimization of Fuzzy System Inference Model on Mini Batch Gradient Descent

    Optimization is one of the factors in machine learning that helps model training during backpropagation. It is conducted by adjusting the weights to minimize the loss function and to overcome dimensionality problems. The gradient descent method is a simple approach in the backpropagation model for solving minimum problems. Mini-batch gradient descent (MBGD) is one of the methods proven to be powerful for large-scale learning. The addition of several approaches to MBGD, such as AB, BN, and UR, can accelerate the convergence process, making the algorithm faster and more effective. These added methods perform an optimization process on the results of the data rule that has been processed as the objective function. The processing results showed that the MBGD-ABBN-UR method has a more stable computational time on the three data sets than the other methods. For model evaluation, this research used RMSE, MAE, and MAP.
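The abstract names mini-batch gradient descent without spelling out the update rule. As a minimal sketch (a plain least-squares fit, not the paper's fuzzy-inference objective or its AB/BN/UR additions): shuffle the data each epoch, then update the parameters from the gradient of each small batch:

```python
import random

def mini_batch_gd(X, y, lr=0.01, batch_size=2, epochs=200):
    """Fit y ~ w*x + b by mini-batch gradient descent on squared error."""
    w, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        idx = list(range(n))
        random.shuffle(idx)  # new random batch order each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            # Gradient of the mean squared error over this mini-batch
            gw = sum((w * X[i] + b - y[i]) * X[i] for i in batch) / len(batch)
            gb = sum((w * X[i] + b - y[i]) for i in batch) / len(batch)
            w -= lr * gw
            b -= lr * gb
    return w, b
```

Each epoch costs the same as one full-batch pass but makes several noisy updates, which is what gives MBGD its large-scale advantage.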

    NIA-AA Research Framework: Toward a Biological Definition of Alzheimer's Disease

    In 2011, the National Institute on Aging and Alzheimer's Association created separate diagnostic recommendations for the preclinical, mild cognitive impairment, and dementia stages of Alzheimer's disease. Scientific progress in the interim led to an initiative by the National Institute on Aging and Alzheimer's Association to update and unify the 2011 guidelines. This unifying update is labeled a “research framework” because its intended use is for observational and interventional research, not routine clinical care. In the National Institute on Aging and Alzheimer's Association Research Framework, Alzheimer's disease (AD) is defined by its underlying pathologic processes that can be documented by postmortem examination or in vivo by biomarkers. The diagnosis is not based on the clinical consequences of the disease (i.e., symptoms/signs) in this research framework, which shifts the definition of AD in living people from a syndromal to a biological construct. The research framework focuses on the diagnosis of AD with biomarkers in living persons. Biomarkers are grouped into those of β amyloid deposition, pathologic tau, and neurodegeneration [AT(N)]. This AT(N) classification system groups different biomarkers (imaging and biofluids) by the pathologic process each measures. The AT(N) system is flexible in that new biomarkers can be added to the three existing AT(N) groups, and new biomarker groups beyond AT(N) can be added when they become available. We focus on AD as a continuum, and cognitive staging may be accomplished using continuous measures. However, we also outline two different categorical cognitive schemes for staging the severity of cognitive impairment: a scheme using three traditional syndromal categories and a six-stage numeric scheme.
    It is important to stress that this framework seeks to create a common language with which investigators can generate and test hypotheses about the interactions among different pathologic processes (denoted by biomarkers) and cognitive symptoms. We appreciate the concern that this biomarker-based research framework has the potential to be misused. Therefore, we emphasize that, first, it is premature and inappropriate to use this research framework in general medical practice. Second, this research framework should not be used to restrict alternative approaches to hypothesis testing that do not use biomarkers. There will be situations where biomarkers are not available or requiring them would be counterproductive to the specific research goals (discussed in more detail later in the document). Thus, biomarker-based research should not be considered a template for all research into age-related cognitive impairment and dementia; rather, it should be applied when it is fit for the purpose of the specific research goals of a study. Importantly, this framework should be examined in diverse populations. Although it is possible that β-amyloid plaques and neurofibrillary tau deposits are not causal in AD pathogenesis, it is these abnormal protein deposits that define AD as a unique neurodegenerative disease among different disorders that can lead to dementia. We envision that defining AD as a biological construct will enable a more accurate characterization and understanding of the sequence of events that lead to cognitive impairment that is associated with AD, as well as the multifactorial etiology of dementia. This approach also will enable a more precise approach to interventional trials where specific pathways can be targeted in the disease process and in the appropriate people.
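The AT(N) grouping described above maps naturally onto a small lookup structure. A minimal illustrative sketch: the biomarker lists follow the framework's A/T/(N) groups, but the exact names and the binary abnormal/normal judgement (which in practice requires validated cut-points) are simplifications:

```python
# Biomarker groups of the AT(N) system; names are simplified labels,
# not operational definitions or cut-points.
ATN_GROUPS = {
    "A": ["amyloid PET", "CSF Abeta42"],           # beta-amyloid deposition
    "T": ["tau PET", "CSF p-tau"],                 # pathologic tau
    "N": ["FDG PET", "MRI atrophy", "CSF t-tau"],  # neurodegeneration
}

def atn_profile(abnormal_markers):
    """Return an AT(N) profile string such as 'A+T+(N)-', given the
    set of biomarkers judged abnormal for an individual."""
    flags = {g: any(m in abnormal_markers for m in markers)
             for g, markers in ATN_GROUPS.items()}
    return "A{}T{}(N){}".format(*("+" if flags[g] else "-" for g in "ATN"))
```

The dict form also reflects the framework's stated flexibility: a newly validated biomarker is just another entry appended to its group's list.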

    Design of Experiments for Screening

    The aim of this paper is to review methods of designing screening experiments, ranging from designs originally developed for physical experiments to those especially tailored to experiments on numerical models. The strengths and weaknesses of the various designs for screening variables in numerical models are discussed. First, classes of factorial designs for experiments to estimate main effects and interactions through a linear statistical model are described, specifically regular and nonregular fractional factorial designs, supersaturated designs and systematic fractional replicate designs. Generic issues of aliasing, bias and cancellation of factorial effects are discussed. Second, group screening experiments are considered, including factorial group screening and sequential bifurcation. Third, random sampling plans are discussed, including Latin hypercube sampling and sampling plans to estimate elementary effects. Fourth, a variety of modelling methods commonly employed with screening designs are briefly described. Finally, a novel study demonstrates six screening methods on two frequently used exemplars, and their performances are compared.
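Of the random sampling plans mentioned, Latin hypercube sampling is easy to sketch: each of the n samples occupies a distinct equal-width stratum in every dimension, so each input variable is covered evenly even with few runs. A minimal pure-Python version (illustrative, not any particular package's implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Draw n_samples points in the unit hypercube [0, 1)^n_dims so that,
    in every dimension, each of the n_samples equal-width bins
    contains exactly one point."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        # Shuffle the bin order independently for each dimension
        bins = list(range(n_samples))
        rng.shuffle(bins)
        # One uniform draw inside each assigned bin
        columns.append([(b + rng.random()) / n_samples for b in bins])
    # Transpose so each inner list is one design point
    return [list(pt) for pt in zip(*columns)]
```

The stratification is what distinguishes this from plain Monte Carlo sampling: projecting the design onto any single axis always yields one point per bin.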

    Quantifying the effects of temperature on mosquito and parasite traits that determine the transmission potential of human malaria

    Malaria transmission is known to be strongly impacted by temperature. The current understanding of how temperature affects mosquito and parasite life history traits derives from a limited number of empirical studies. These studies, some dating back to the early part of the last century, are often poorly controlled, have limited replication, explore a narrow range of temperatures, and use a mixture of parasite and mosquito species. Here, we use a single pairing of the Asian mosquito vector An. stephensi and the human malaria parasite P. falciparum to conduct a comprehensive evaluation of the thermal performance curves of a range of mosquito and parasite traits relevant to transmission. We show that biting rate, adult mortality rate, parasite development rate, and vector competence are temperature sensitive. Importantly, we find qualitative and quantitative differences from the assumed temperature-dependent relationships. To explore the overall implications of temperature for transmission, we first use a standard model of relative vectorial capacity. This approach suggests a temperature optimum for transmission of 29°C, with minimum and maximum temperatures of 12°C and 38°C, respectively. However, the robustness of the vectorial capacity approach is challenged by the fact that the empirical data violate several of the model's simplifying assumptions. Accordingly, we present an alternative model of relative force of infection that better captures the observed biology of the vector-parasite interaction. This model suggests a temperature optimum for transmission of 26°C, with a minimum and maximum of 17°C and 35°C, respectively. The differences between the models lead to divergent predictions of the potential impacts of current and future climate change on malaria transmission. The study provides a framework for more detailed, system-specific studies that are essential to develop an improved understanding of the effects of temperature on malaria transmission.
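The "standard model of relative vectorial capacity" invoked above is, in its classic Macdonald-style form, V = m a² b pⁿ / (−ln p); the paper's exact parameterization (with each trait a function of temperature) may differ, so treat this as a hedged sketch of the underlying formula:

```python
import math

def vectorial_capacity(m, a, p, n, b=1.0):
    """Classic Macdonald-style vectorial capacity:
        V = m * a**2 * b * p**n / (-ln p)
    m: mosquito density per human
    a: daily biting rate on humans
    p: daily mosquito survival probability (1/(-ln p) ~ expected lifespan)
    n: extrinsic incubation period of the parasite, in days
    b: vector competence (transmission probability per bite)"""
    return m * a ** 2 * b * p ** n / (-math.log(p))
```

Because a and p enter nonlinearly (a squared, p raised to the power n), modest temperature-driven shifts in biting rate, survival, or parasite development rate can move the transmission optimum substantially, which is why the choice of model matters for the climate projections discussed in the abstract.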

    Reviewing evidence of marine ecosystem change off South Africa

    Recent changes have been observed in South African marine ecosystems. The main pressures on these ecosystems are fishing, climate change, pollution, ocean acidification and mining. The best long-term datasets are for trends in fishing pressures, but there are many gaps, especially for non-commercial species. Fishing pressures have varied over time, depending on the species being caught. Little information exists for trends in other anthropogenic pressures. Field observations of environmental variables are limited in time and space. Remotely sensed satellite data have improved spatial and temporal coverage, but the time-series are still too short to distinguish long-term trends from interannual and decadal variability. There are indications of recent cooling on the West and South coasts and warming on the East Coast over a period of 20-30 years. Oxygen concentrations on the West Coast have decreased over this period. Observed changes in offshore marine communities include southward and eastward changes in species distributions, changes in abundance of species, and probable alterations in foodweb dynamics. Causes of observed changes are difficult to attribute. Full understanding of marine ecosystem change requires ongoing and effective data collection, management and archiving, and coordination in carrying out ecosystem research.

    Articular cartilage mineralization in osteoarthritis of the hip

    Background: The aim of this study was to examine the frequency of articular cartilage calcification in patients with end-stage hip OA, and to analyze its impact on the clinical situation and OA severity. Methods: Eighty patients with OA of the hip who consecutively underwent total hip replacement were prospectively evaluated, and 10 controls were included. The patients' X-rays were analyzed for the presence of articular cartilage mineralization. A Harris Hip Score (HHS) was calculated preoperatively for every patient. Slab specimens of bone and cartilage from the femoral head, and an additional square centimeter of articular cartilage from the main chondral defect, were obtained from each patient for analysis of mineralization by digital contact radiography (DCR). Histological grading was also performed. In a subset of 20 patients, minerals were characterized with a field emission scanning electron microscope (FE-SEM). Results: Calcifications were seen in all OA cartilage and slab specimens using DCR, while preoperative X-rays revealed calcification in only 17.5%. None of the control cartilage specimens showed mineralization. There was a highly significant inverse correlation between articular cartilage calcification and preoperative HHS. Histological OA grade correlated positively with the amount of matrix calcification. FE-SEM analysis revealed basic calcium phosphate (BCP) as the predominant mineral; CPPD crystals were found in only two patients. Conclusions: Articular cartilage calcification is a common event in osteoarthritis of the hip. The amount of calcification correlates with clinical symptoms and histological OA grade.