Potential reduction of carbon emissions by performance improvement: A cement industry case study
The cement industry is generally considered responsible for upwards of 5% of anthropogenic greenhouse gas emissions. This is a result of the high energy intensity of the process, significant CO₂ release from the raw materials used, and large global consumption. It is also a high-growth sector as emerging economies develop their infrastructure. This paper outlines an investigation into day-to-day performance variation that, if scaled to the global level, represents a potential for improvement of up to 100 Mt CO₂ equivalent per year. Optimising this operational variation is not included in current roadmaps for reduction of cement industry CO₂ emissions, and has the potential to be cost neutral, or even to save money for cement-producing companies. The paper analyses a case study of a plant in the UK, operating a pre-calciner type kiln commissioned in 1986. Production data was analysed to examine the day-to-day variation in the fuel-derived CO₂ emissions, in order to estimate the potential for operational improvement. Various factors were then analysed to determine what drives this potential improvement, including fuel mix, rate of production, and process airflow. The day-to-day performance of the plant, as measured by the fuel-derived CO₂ emissions per tonne of clinker produced, varied significantly. (Clinker is the material ground and mixed with ~3% gypsum to produce cement.) Improvement of the plant to 10th-percentile best observed performance levels would represent a 10% drop in CO₂ emissions and a 7% drop in energy consumption, with associated cost savings. Two mathematical models were used, first to examine the energy balance of the plant and then to predict CO₂ emissions from given input conditions. The largest source of energy consumption was the dissociation energy required to form clinker; however, the variation in this was small. Airflow and fuel type were found to dominate the variation of performance.
Optimising the factors affecting performance was predicted to reduce energy consumption by 8.5% and CO₂ emissions by 19.5%. The paper concludes that there exists significant opportunity to reduce the emissions from cement plants by operational means, and that fuel mix and excess air ratio should be the focus of future research.
Engineering and Physical Sciences Research Council (Grant ID: EP/K503009/1)
This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.jclepro.2016.06.15
Persistent threats to validity in single‐group interrupted time series analysis with a crossover design
Rationale, aims and objectives: The basic single‐group interrupted time series analysis (ITSA) design has been shown to be susceptible to the most common threat to validity, history: the possibility that some other event caused the observed effect in the time series. A single‐group ITSA with a crossover design (in which the intervention is introduced and withdrawn one or more times) should be more robust. In this paper, we describe and empirically assess the susceptibility of this design to bias from history.
Method: Time series data from two natural experiments (the effect of multiple repeals and reinstatements of Louisiana's motorcycle helmet law on motorcycle fatalities, and the association of the implementation and withdrawal of Gorbachev's antialcohol campaign with Russia's mortality crisis) are used to illustrate that history remains a threat to ITSA validity, even in a crossover design.
Results: Both empirical examples reveal that the single‐group ITSA with a crossover design may be biased because of history. In the case of motorcycle fatalities, helmet laws appeared effective in reducing mortality (while repealing the law increased mortality), but when a control group was added, the trend was shown to be similar in both groups. In the case of Gorbachev's antialcohol campaign, only when the results were contrasted against those of a control group was the withdrawal of the campaign found to be a more likely explanation for the Russian mortality crisis than the collapse of the Soviet Union.
Conclusions: Even with a robust crossover design, single‐group ITSA models remain susceptible to bias from history. Therefore, a comparable control group should be included whenever possible.
Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/136538/1/jep12668.pdf
https://deepblue.lib.umich.edu/bitstream/2027.42/136538/2/jep12668_am.pd
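The abstract's core point, that a single-group estimate can absorb a coincidental "history" event which a comparable control series removes, can be sketched with a toy segmented regression. The data below are entirely synthetic and the model is a minimal stand-in, not the paper's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly series: a common "history" shock at t = 50 hits
# BOTH the treated and the control series, mimicking the threat described.
t = np.arange(100)
post = (t >= 50).astype(float)           # intervention indicator
shock = 5.0 * post                       # secular event, not the intervention
treated = 20 + 0.1 * t + shock + rng.normal(0, 1, 100)
control = 15 + 0.1 * t + shock + rng.normal(0, 1, 100)

def level_change(y, post):
    """Segmented regression: intercept, linear trend, post-period level shift."""
    X = np.column_stack([np.ones_like(y), np.arange(len(y)), post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]                       # estimated level change at interruption

single = level_change(treated, post)              # biased: absorbs the shock
adjusted = level_change(treated - control, post)  # differencing removes history

print(f"single-group estimate: {single:+.2f}")
print(f"control-adjusted:      {adjusted:+.2f}")  # near zero
```

The single-group fit attributes the shock to the intervention, while the control-adjusted contrast correctly finds (almost) no effect.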
The role of input noise in transcriptional regulation
Even under constant external conditions, the expression levels of genes
fluctuate. Much emphasis has been placed on the components of this noise that
are due to randomness in transcription and translation; here we analyze the
role of noise associated with the inputs to transcriptional regulation, the
random arrival and binding of transcription factors to their target sites along
the genome. This noise sets a fundamental physical limit to the reliability of
genetic control, and has clear signatures, but we show that these are easily
obscured by experimental limitations and even by conventional methods for
plotting the variance vs. mean expression level. We argue that simple, global
models of noise dominated by transcription and translation are inconsistent
with the embedding of gene expression in a network of regulatory interactions.
Analysis of recent experiments on transcriptional control in the early
Drosophila embryo shows that these results are quantitatively consistent with
the predicted signatures of input noise, and we discuss the experiments needed
to test the importance of input noise more generally.
Comment: 11 pages, 5 figures; minor correction
Turbulent flow at 190 m height above London during 2006-2008: A climatology and the applicability of similarity theory
Flow and turbulence above urban terrain is more complex than above rural terrain, due to the different momentum and heat transfer characteristics that are affected by the presence of buildings (e.g. pressure variations around buildings). The applicability of similarity theory (as developed over rural terrain) is tested against about 6500 h of observations from a sonic anemometer located at 190.3 m height in London, UK. Turbulence statistics—dimensionless wind speed and temperature, standard deviations and correlation coefficients for momentum and heat transfer—were analysed in three ways. First, turbulence statistics were plotted as a function only of a local stability parameter z/Λ (where Λ is the local Obukhov length and z is the height above ground); the σ_i/u_* values (i = u, v, w) for neutral conditions are 2.3, 1.85 and 1.35 respectively, similar to canonical values. Second, analysis of urban mixed-layer formulations during daytime convective conditions over London was undertaken, showing that atmospheric turbulence at high altitude over large cities might not behave dissimilarly from that over rural terrain. Third, correlation coefficients for heat and momentum were analysed with respect to local stability. The results give confidence in using the framework of local similarity for turbulence measured over London, and perhaps other cities. However, the following caveats for our data are worth noting: (i) the terrain is reasonably flat, (ii) building heights vary little over a large area, and (iii) the sensor height is above the mean roughness sublayer depth.
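The statistics tested above (u_*, z/Λ, σ_i/u_*) are computed from raw sonic-anemometer records. A minimal sketch of that computation follows; the 20 Hz record is synthetic and every value except the 190.3 m sensor height is an illustrative assumption, so the resulting ratios are not expected to match the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, g, z = 0.4, 9.81, 190.3   # von Karman constant, gravity, sensor height (m)

# Synthetic 20 Hz sonic-anemometer record (30 min); a stand-in for real data.
n = 20 * 60 * 30
u = 8.0 + rng.normal(0, 1.2, n)                 # streamwise wind (m/s)
v = rng.normal(0, 1.0, n)                       # crosswind (m/s)
w = rng.normal(0, 0.7, n) - 0.05 * (u - 8.0)    # vertical wind, correlated with u
theta = 290.0 + rng.normal(0, 0.3, n)           # sonic temperature (K)

def fluct(x):
    """Deviation from the averaging-period mean (Reynolds decomposition)."""
    return x - x.mean()

up, vp, wp, thp = map(fluct, (u, v, w, theta))

# Friction velocity from both components of the momentum flux
u_star = ((up * wp).mean() ** 2 + (vp * wp).mean() ** 2) ** 0.25

# Local Obukhov length and stability parameter z/Lambda
Lam = -u_star ** 3 * theta.mean() / (kappa * g * (wp * thp).mean())
zeta = z / Lam

# Dimensionless standard deviations, to compare with the neutral-limit values
ratios = {c: s.std() / u_star for c, s in zip("uvw", (up, vp, wp))}
print(f"z/Lambda = {zeta:+.4f}")
for c, ref in zip("uvw", (2.3, 1.85, 1.35)):
    print(f"sigma_{c}/u* = {ratios[c]:.2f}  (paper, neutral: {ref})")
```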
Information transmission in oscillatory neural activity
Periodic neural activity not locked to the stimulus or to motor responses is
usually ignored. Here, we present new tools for modeling and quantifying the
information transmission based on periodic neural activity that occurs with
quasi-random phase relative to the stimulus. We propose a model to reproduce
characteristic features of oscillatory spike trains, such as histograms of
inter-spike intervals and phase locking of spikes to an oscillatory influence.
The proposed model is based on an inhomogeneous Gamma process governed by a
density function that is a product of the usual stimulus-dependent rate and a
quasi-periodic function. Further, we present an analysis method generalizing
the direct method (Rieke et al, 1999; Brenner et al, 2000) to assess the
information content in such data. We demonstrate these tools on recordings from
relay cells in the lateral geniculate nucleus of the cat.
Comment: 18 pages, 8 figures; to appear in Biological Cybernetics
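A minimal simulation of such an inhomogeneous Gamma process can be built by time rescaling: draw Gamma-distributed intervals in rescaled time and map them back through the cumulative intensity. The sketch below follows that idea; the envelope, oscillation frequency, and shape parameter are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Density = stimulus-dependent rate times a quasi-periodic modulation,
# in the spirit of the model described in the abstract (values invented).
dt = 1e-4                                                          # s
t = np.arange(0, 5.0, dt)
stimulus_rate = 30.0 * (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t))     # slow envelope (Hz)
oscillation = 1 + 0.8 * np.cos(2 * np.pi * 40.0 * t)               # 40 Hz modulation
rate = stimulus_rate * oscillation

# Time rescaling: in rescaled time s = integral of rate, inter-spike
# intervals are i.i.d. Gamma(k, 1/k) with mean 1.
k = 4.0                                  # shape > 1 gives regular spiking
Lam = np.cumsum(rate) * dt               # cumulative intensity
spikes, s = [], 0.0
while True:
    s += rng.gamma(k, 1.0 / k)           # next rescaled interval
    idx = np.searchsorted(Lam, s)        # map back to real time
    if idx >= len(t):
        break
    spikes.append(t[idx])
spikes = np.asarray(spikes)

isi = np.diff(spikes)
print(f"{len(spikes)} spikes, mean rate {len(spikes) / 5.0:.1f} Hz")
print(f"ISI CV = {isi.std() / isi.mean():.2f}")
```

The resulting train shows both phase locking to the 40 Hz modulation and the stimulus-driven envelope, the two features the model is meant to reproduce.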
What’s in a Name? Parents’ and Healthcare Professionals’ Preferred Terminology for Pathogenic Variants in Childhood Cancer Predisposition Genes
Current literature/guidelines regarding the most appropriate term to communicate a cancer-related disease-causing germline variant in childhood cancer lack consensus. Guidelines also rarely address preferences of patients/families. We aimed to assess preferences of parents of children with cancer, genetics professionals, and pediatric oncologists towards terminology to describe a disease-causing germline variant in childhood cancer. Using semi-structured interviews we asked participants their most/least preferred terms from: 'faulty gene,' 'altered gene,' 'gene change,' and 'genetic variant,' analyzing responses with directed content analysis. Twenty-five parents, 6 genetics professionals, and 29 oncologists participated. An equal number of parents most preferred 'gene change,' 'altered gene,' or 'genetic variant' (n = 8/25). Parents least preferred 'faulty gene' (n = 18/25). Half the genetics professionals most preferred 'faulty gene' (n = 3/6); however, this was least preferred by the remaining genetics professionals (n = 3/6). Many oncologists most preferred 'genetic variant' (n = 11/29) and least preferred 'faulty gene' (n = 19/29). Participants across all groups perceived 'faulty gene' as having negative connotations, potentially placing blame/guilt on parents/children. Health professionals described challenges selecting a term that was scientifically accurate, easily understood, and not distressing to families. Lack of consensus highlights the need to be guided by families' preferred terminology, while providing accurate explanations regarding implications of genetic findings.
Selective serotonin reuptake inhibitors in the treatment of generalized anxiety disorder
Selective serotonin reuptake inhibitors have proven efficacy in the treatment of panic disorder, obsessive–compulsive disorder, post-traumatic stress disorder and social anxiety disorder. Accumulating data shows that selective serotonin reuptake inhibitor treatment can also be efficacious in patients with generalized anxiety disorder. This review summarizes the findings of randomized controlled trials of selective serotonin reuptake inhibitor treatment for generalized anxiety disorder, examines the strengths and weaknesses of other therapeutic approaches and considers potential new treatments for patients with this chronic and disabling anxiety disorder
Randomized controlled trial of a good practice approach to treatment of childhood obesity in Malaysia: Malaysian childhood obesity treatment trial (MASCOT)
Context. Few randomized controlled trials (RCTs) of interventions for the treatment of childhood obesity have taken place outside the Western world. Aim. To test whether a good practice intervention for the treatment of childhood obesity would have a greater impact on weight status and other outcomes than a control condition in Kuala Lumpur, Malaysia. Methods. Assessor-blinded RCT of a treatment intervention in 107 obese 7- to 11-year-olds. The intervention was relatively low intensity (8 hours contact over 26 weeks, group based), aiming to change child sedentary behavior, physical activity, and diet using behavior change counselling. Outcomes were measured at baseline and six months after the start of the intervention. The primary outcome was BMI z-score; other outcomes were weight change, health-related quality of life (PedsQL), and objectively measured physical activity and sedentary behavior (Actigraph accelerometry over 5 days). Results. The intervention had no significant effect on BMI z-score relative to control. Weight gain was reduced significantly in the intervention group compared to the control group (+1.5 kg vs. +3.5 kg, respectively, t-test p < 0.01). Changes in health-related quality of life and objectively measured physical activity and sedentary behavior favored the intervention group. Conclusions. Treatment was associated with a reduced rate of weight gain, and with improvements in physical activity and quality of life. More substantial benefits may require longer-term and more intensive interventions which aim for more substantive lifestyle changes.
Classification of Neuroblastoma Histopathological Images Using Machine Learning
Neuroblastoma is the most common cancer in young children, accounting for over 15% of childhood cancer deaths. Identification of the class of neuroblastoma depends on histopathological classification performed by pathologists, which is considered the gold standard. However, due to the heterogeneous nature of neuroblast tumours, the human eye can miss critical visual features in histopathology. Hence, computer-based models can assist pathologists in classification through mathematical analysis. There is no publicly available dataset containing neuroblastoma histopathological images, so this study uses a dataset gathered from The Tumour Bank at Kids Research at The Children's Hospital at Westmead, which has been used in previous research. Previous work on this dataset has shown a maximum accuracy of 84%. One main issue that previous research fails to address is the class imbalance problem that exists in the dataset, as one class represents over 50% of the samples. This study explores a range of feature extraction and data under-sampling and over-sampling techniques to improve classification accuracy. Using these methods, this study was able to achieve an accuracy of over 90% on the dataset. Moreover, the most significant improvements observed in this study were in the minority classes, where previous work had failed to achieve a high level of classification accuracy. In doing so, this study shows the importance of effective management of available data for any application of machine learning.
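One of the simplest re-sampling strategies of the kind the study explores is random over-sampling of minority classes. A minimal sketch on a toy dataset follows; real pipelines often generate synthetic samples (e.g. SMOTE-style interpolation) rather than the plain duplication shown here:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_oversample(X, y):
    """Duplicate minority-class rows (with replacement) until every class
    matches the majority-class count."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        members = np.flatnonzero(y == c)
        idx.append(members)                                  # keep originals
        if n < target:
            idx.append(rng.choice(members, target - n, replace=True))
    idx = np.concatenate(idx)
    rng.shuffle(idx)                                         # avoid class-ordered batches
    return X[idx], y[idx]

# Toy imbalanced dataset: one class holds over 50% of samples, as in the abstract.
X = rng.normal(size=(100, 8))
y = np.array([0] * 60 + [1] * 25 + [2] * 15)
Xb, yb = random_oversample(X, y)
print(np.unique(yb, return_counts=True)[1])   # -> [60 60 60]
```

Over-sampling must be applied only to the training split, never before the train/test split, or duplicated rows leak into the evaluation set.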
Pediatricians' weight assessment and obesity management practices
Background: Clinician adherence to obesity screening guidelines from United States health agencies remains suboptimal. This study explored how personal and career demographics influence pediatricians' weight assessment and management practices.
Methods: A web-based survey was distributed to U.S. pediatricians. Respondents were asked to identify the weight status of photographed children and about their weight assessment and management practices. Associations between career and personal demographic variables and pediatricians' weight perceptions, weight assessment, and management practices were evaluated using univariate and multivariate modeling.
Results: 3,633 pediatric medical providers correctly identified the weight status of children at a median rate of 58%. The majority of pediatric clinicians were white, female, and of normal weight status, with more than 10 years of clinical experience. Experienced pediatric medical providers were less likely than younger colleagues to correctly identify the weight status of pictured children and were also less likely to know and use BMI criteria for assessing weight status. General pediatricians were more likely than subspecialty practitioners to provide diverse interventions for weight management. Non-white and Hispanic general practitioners were more likely than their counterparts to consider cultural approaches to weight management.
Conclusion: Pediatricians' perceptions of children's weight and their weight assessment and management practices are influenced by career and personal characteristics. Objective criteria and clinical guidelines should be uniformly applied by pediatricians to screen for and manage pediatric obesity.