
    Interfacing a Stepper Motor with ARM Controller LPC2148

    Another useful machine interfaced to the computer system is the stepper motor. A stepper motor is a digital motor: each input pulse produces a discrete step as the shaft traverses 360°, meaning the shaft rotates by a definite angle called the step angle. Stepper motors are DC motors that move in discrete steps. They possess multiple coils organized in groups referred to as phases; energizing each phase in sequence rotates the motor one step at a time. Stepper motors are available in various sizes, styles, and electrical characteristics. Nowadays, the use of ARM controllers is in the limelight. ARM controllers are designed as 32-bit microcontrollers; they provide excellent performance, are available with the latest and enhanced features, and are well suited to 32-bit embedded applications. The state of the art presented in this paper is the interfacing of a stepper motor with the ARM controller LPC2148.
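    The phase-sequencing idea in this abstract can be sketched in plain code. This is a hedged, hardware-agnostic illustration only (no LPC2148 register access is shown; the function and constant names are hypothetical): each pulse advances a 4-phase full-step sequence by one step angle.

    ```python
    # Illustrative sketch of full-step drive for a 4-phase stepper motor.
    # Each tuple lists which coil is energized; stepping through the
    # sequence in order rotates the shaft one step angle per pulse.
    FULL_STEP_SEQUENCE = [
        (1, 0, 0, 0),  # phase A
        (0, 1, 0, 0),  # phase B
        (0, 0, 1, 0),  # phase C
        (0, 0, 0, 1),  # phase D
    ]

    def steps_for_angle(step_angle_deg: float, target_deg: float) -> int:
        """Number of input pulses needed to rotate the shaft by target_deg."""
        return round(target_deg / step_angle_deg)

    def phase_pattern(step_index: int):
        """Coil pattern to drive on a given step (wraps around the sequence)."""
        return FULL_STEP_SEQUENCE[step_index % len(FULL_STEP_SEQUENCE)]

    # A common 1.8-degree motor needs 200 pulses for one full rotation.
    assert steps_for_angle(1.8, 360.0) == 200
    ```

    On a real LPC2148, `phase_pattern` would map to GPIO writes through a driver IC, with a delay between steps to respect the motor's maximum pulse rate.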

    Searching for the scale of homogeneity

    We introduce a statistical quantity, known as the $K$ function, related to the integral of the two-point correlation function. It gives us straightforward information about the scale where clustering dominates and the scale at which homogeneity is reached. We evaluate the correlation dimension, $D_2$, as the local slope of the log-log plot of the $K$ function. We apply this statistic to several stochastic point fields, to three numerical simulations describing the distribution of clusters, and finally to real galaxy redshift surveys. Four different galaxy catalogues have been analysed using this technique: the Center for Astrophysics I and the Perseus-Pisces redshift surveys (these two lying in our local neighbourhood), and the Stromlo-APM and the 1.2 Jy {\it IRAS} redshift surveys (these two encompassing a larger volume). In all cases, this cumulant quantity shows the fingerprint of the transition to homogeneity. The reliability of the estimates is clearly demonstrated by the results from controllable point sets, such as the segment Cox processes. In the cluster distribution models, as well as in the real galaxy catalogues, we never see long plateaus when plotting $D_2$ as a function of the scale, leaving no hope for unbounded fractal distributions. (9 pages, 11 figures; MNRAS, in press; minor revision and added reference)
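    The definition of $D_2$ as the local slope of the log-log plot of the $K$ function can be sketched numerically. This is an illustrative reconstruction, not the paper's estimator: for a homogeneous 3-D point process $K(r) \propto r^3$, so the local slope plateaus at 3.

    ```python
    import numpy as np

    # Sketch: estimate the correlation dimension D2 as the local slope
    # of log K(r) versus log r. For a homogeneous (Poisson-like) 3-D
    # distribution, K(r) = (4/3) * pi * r**3, so D2 -> 3 at all scales.
    def correlation_dimension(r, K):
        """Local slope of the log-log plot of the K function."""
        log_r, log_K = np.log(r), np.log(K)
        return np.gradient(log_K, log_r)  # finite-difference slope

    r = np.linspace(1.0, 100.0, 50)
    K_homogeneous = (4.0 / 3.0) * np.pi * r**3
    D2 = correlation_dimension(r, K_homogeneous)
    # A long plateau at D2 = 3 is the signature of homogeneity; a
    # fractal distribution would plateau at D2 < 3 instead.
    ```

    With measured $K$ values from a galaxy catalogue, the same slope computation would reveal the scale at which $D_2$ approaches 3, i.e. the transition to homogeneity.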

    Stochastic approximation of the MLE for a spatial point pattern


    Linking genotoxicity and cytotoxicity with membrane fluidity: A comparative study in ovarian cancer cell lines following exposure to auranofin

    Publisher: Elsevier. Journal: Mutation Research/Genetic Toxicology and Environmental Mutagenesis. DOI: http://dx.doi.org/10.1016/j.mrgentox.2016.09.003. © 2016 Elsevier B.V. All rights reserved.

    Blood biomarker-based classification study for neurodegenerative diseases

    © 2023, Springer Nature Limited. As the population ages, neurodegenerative diseases are becoming more prevalent, making it crucial to comprehend the underlying disease mechanisms and identify biomarkers to allow for early diagnosis and effective screening for clinical trials. Thanks to advancements in gene expression profiling, it is now possible to search for disease biomarkers on an unprecedented scale. Here we applied a selection of five machine learning (ML) approaches to identify blood-based biomarkers for Alzheimer's (AD) and Parkinson's disease (PD) with the application of multiple feature selection methods. Based on ROC AUC performance, one optimal random forest (RF) model was discovered for AD with 159 gene markers (ROC AUC = 0.886), while one optimal RF model was discovered for PD (ROC AUC = 0.743). Additionally, in comparison to traditional ML approaches, deep learning approaches were applied to evaluate their potential applications in future works. We demonstrated that convolutional neural networks perform consistently well across both the Alzheimer's (ROC AUC = 0.810) and Parkinson's (ROC AUC = 0.715) datasets, suggesting their potential in gene expression biomarker detection with increased tuning of their architecture.
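    The RF-plus-ROC-AUC pattern described above can be sketched generically. This is not the paper's pipeline (its feature selection, datasets, and tuning are not reproduced); it only shows, on synthetic stand-in data with 159 "gene" features, how a random forest is scored by ROC AUC.

    ```python
    # Illustrative sketch, assuming scikit-learn; the dataset is synthetic.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for expression data: 200 samples, 159 features.
    X, y = make_classification(n_samples=200, n_features=159,
                               n_informative=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    # ROC AUC uses the predicted probability of the positive class.
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    ```

    In practice the reported AUCs would come from proper cross-validation and feature selection, not a single train/test split as sketched here.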

    Understanding predictive uncertainty in hydrologic modeling: The challenge of identifying input and structural errors

    Meaningful quantification of data and structural uncertainties in conceptual rainfall-runoff modeling is a major scientific and engineering challenge. This paper focuses on the total predictive uncertainty and its decomposition into input and structural components under different inference scenarios. Several Bayesian inference schemes are investigated, differing in the treatment of rainfall and structural uncertainties, and in the precision of the priors describing rainfall uncertainty. Compared with traditional lumped additive error approaches, the quantification of the total predictive uncertainty in the runoff is improved when rainfall and/or structural errors are characterized explicitly. However, the decomposition of the total uncertainty into individual sources is more challenging. In particular, poor identifiability may arise when the inference scheme represents rainfall and structural errors using separate probabilistic models. The inference becomes ill‐posed unless sufficiently precise prior knowledge of data uncertainty is supplied; this ill‐posedness can often be detected from the behavior of the Monte Carlo sampling algorithm. Moreover, the priors on the data quality must also be sufficiently accurate if the inference is to be reliable and support meaningful uncertainty decomposition. Our findings highlight the inherent limitations of inferring inaccurate hydrologic models using rainfall‐runoff data with large unknown errors. Bayesian total error analysis can overcome these problems using independent prior information. The need for deriving independent descriptions of the uncertainties in the input and output data is clearly demonstrated.
    Benjamin Renard, Dmitri Kavetski, George Kuczera, Mark Thyer, and Stewart W. Frank
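    The "traditional lumped additive error" baseline that the paper contrasts against can be written down in a few lines. This is a hedged sketch of the generic idea, not the paper's inference scheme: all input and structural errors are lumped into a single Gaussian residual on the simulated runoff.

    ```python
    import math

    # Lumped additive error model: observed = simulated + N(0, sigma^2),
    # so all rainfall and structural errors collapse into one residual term.
    def lumped_gaussian_loglik(observed, simulated, sigma):
        """Gaussian log-likelihood of runoff residuals under a lumped error model."""
        n = len(observed)
        sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        return (-0.5 * n * math.log(2 * math.pi * sigma ** 2)
                - sse / (2 * sigma ** 2))
    ```

    The schemes studied in the paper go beyond this by giving rainfall and structural errors their own probabilistic models, which is precisely where the identifiability problems described above arise.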

    Spatial variation of Anopheles-transmitted Wuchereria bancrofti and Plasmodium falciparum infection densities in Papua New Guinea.

    RIGHTS: This article is licensed under the BioMed Central licence (http://www.biomedcentral.com/about/license), which is similar to the Creative Commons Attribution Licence: the work may be copied, distributed, displayed, adapted, and used commercially, provided the original author is credited and the licence terms are made clear on any reuse or distribution.
    The spatial variation of Wuchereria bancrofti and Plasmodium falciparum infection densities was measured in a rural area of Papua New Guinea where they share anopheline vectors. The spatial correlation of W. bancrofti was found to fall by half over an estimated distance of 1.7 km, much smaller than the 50 km grid used by the World Health Organization rapid mapping method. For P. falciparum, negligible spatial correlation was found. After mass treatment with anti-filarial drugs, there was negligible correlation between the changes in the densities of the two parasites.
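    The halving distance reported above can be related to a correlogram range parameter. This is an illustrative assumption, not the paper's model: if spatial correlation decays exponentially, rho(d) = exp(-d/phi), then the distance at which correlation halves is phi·ln 2.

    ```python
    import math

    # Assumed exponential correlogram rho(d) = exp(-d / phi); both
    # function names are hypothetical, introduced only for illustration.
    def half_correlation_distance(phi_km: float) -> float:
        """Distance at which exp(-d/phi) falls to 0.5."""
        return phi_km * math.log(2.0)

    def phi_from_half_distance(d_half_km: float) -> float:
        """Range parameter implied by an observed halving distance."""
        return d_half_km / math.log(2.0)
    ```

    Under this assumption, the 1.7 km halving distance estimated for W. bancrofti would correspond to a range parameter of roughly 2.5 km, far below the 50 km WHO mapping grid.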

    Identification of temporal consistency in rating curve data : Bidirectional Reach (BReach)

    In this paper, a methodology is developed to identify the consistency of rating curve data based on a quality analysis of model results. This methodology, called Bidirectional Reach (BReach), evaluates the results of a rating curve model with randomly sampled parameter sets at each observation. The combination of a parameter set and an observation is classified as nonacceptable if the deviation between the accompanying model result and the measurement exceeds the observational uncertainty. Based on this classification, conditions for satisfactory behavior of a model in a sequence of observations are defined. Subsequently, a parameter set is evaluated at a data point by assessing the span for which it behaves satisfactorily in the direction of the previous (or following) chronologically sorted observations. This is repeated for all sampled parameter sets, and the results are aggregated by indicating the endpoint of the largest span, called the maximum left (right) reach. This temporal reach should not be confused with a spatial reach (indicating a part of a river). The same procedure is followed for each data point and for different definitions of satisfactory behavior. The results of this analysis enable the detection of changes in data consistency. The methodology is validated with observed data and various synthetic stage-discharge data sets and proves to be a robust technique to investigate the temporal consistency of rating curve data. It provides satisfying results despite low data availability, errors in the estimated observational uncertainty, and a rating curve model that is known to cover only a limited part of the observations.
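    The classification and reach steps described above can be sketched for a single parameter set. This is a hedged simplification (the paper's aggregation over many sampled parameter sets and its satisfactory-behavior definitions are omitted; all names are illustrative): a model result is acceptable when it deviates from the measurement by no more than the observational uncertainty, and the right reach from a data point is the last chronologically following observation up to which the parameter set stays acceptable.

    ```python
    # Sketch of the BReach acceptability test and (simplified) right reach
    # for one parameter set; the real method aggregates over many sets.
    def acceptable(model_value: float, observed: float, uncertainty: float) -> bool:
        """Nonacceptable when the model deviation exceeds observational uncertainty."""
        return abs(model_value - observed) <= uncertainty

    def right_reach(start: int, model_values, observations,
                    uncertainty: float) -> int:
        """Index of the last consecutive acceptable observation from `start`."""
        end = start
        for i in range(start, len(observations)):
            if not acceptable(model_values[i], observations[i], uncertainty):
                break
            end = i
        return end
    ```

    Repeating this for every sampled parameter set and taking the largest span yields the maximum right reach at a data point; an abrupt drop in reach across the record signals a change in data consistency.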