

    Polynomial Bounds for Learning Noisy Optical Physical Unclonable Functions and Connections to Learning With Errors

    It is shown that a class of optical physical unclonable functions (PUFs) can be learned to arbitrary precision with arbitrarily high probability, even in the presence of noise, given access to polynomially many challenge-response pairs and polynomially bounded computational power, under mild assumptions about the distributions of the noise and challenge vectors. This extends the results of Rührmair et al. (2013), who showed a subset of this class of PUFs to be learnable in polynomial time in the absence of noise, under the assumption that the optics of the PUF were either linear or had negligible nonlinear effects. We derive polynomial bounds for the required number of samples and the computational complexity of a linear regression algorithm, based on size parameters of the PUF, the distributions of the challenge and noise vectors, and the probability and accuracy of the regression algorithm, with an analysis similar to the one carried out by Bootle et al. (2018), who demonstrated a learning attack on a poorly implemented version of the Learning With Errors problem.
    Comment: 10 pages, 2 figures, submitted to IEEE Transactions on Information Forensics and Security
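
    Under the linearity assumption discussed above, the learning attack amounts to fitting a linear map from challenge vectors to noisy responses. The following is a minimal illustrative sketch, not the paper's construction or its bound derivation: it simulates a hypothetical linear optical PUF with Gaussian read-out noise and recovers the secret weights by ordinary least squares from polynomially many challenge-response pairs. The challenge length, sample size, and noise level are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear PUF model: response = <w, challenge> + noise.
# (Illustration only; the paper treats a broader class of optical PUFs.)
n = 64                 # challenge length (assumed)
m = 20 * n             # polynomially many CRPs (assumed sample size)
sigma = 0.05           # noise standard deviation (assumed)

w_true = rng.normal(size=n)                         # secret PUF parameters
C = rng.normal(size=(m, n))                         # random challenge vectors
r = C @ w_true + rng.normal(scale=sigma, size=m)    # noisy responses

# Learning attack: ordinary least squares on the observed CRPs.
w_hat, *_ = np.linalg.lstsq(C, r, rcond=None)

# Predictive accuracy on fresh challenges.
C_test = rng.normal(size=(1000, n))
pred_err = np.sqrt(np.mean((C_test @ w_hat - C_test @ w_true) ** 2))
print(f"parameter error: {np.linalg.norm(w_hat - w_true):.4f}")
print(f"RMS prediction error on fresh challenges: {pred_err:.4f}")
```

    With the sample size growing polynomially in the challenge length, the recovered model predicts fresh responses with an error well below the per-measurement noise level in this toy setting, which is the qualitative behaviour the paper's bounds formalise.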

    Handling missing values in trait data

    Aim: Trait data are widely used in ecological and evolutionary phylogenetic comparative studies, but often values are not available for all species of interest. Traditionally, researchers have excluded species without data from analyses, but estimation of missing values using imputation has been proposed as a better approach. However, imputation methods have largely been designed for randomly missing data, whereas trait data are often not missing at random (e.g., more data for bigger species). Here, we evaluate the performance of approaches for handling missing values when considering biased datasets. Location: Any. Time period: Any. Major taxa studied: Any. Methods: We simulated continuous traits and separate response variables to test the performance of nine imputation methods and complete-case analysis (excluding missing values from the dataset) under biased missing data scenarios. We characterized performance by estimating the error in imputed trait values (deviation from the true value) and inferred trait–response relationships (deviation from the true relationship between a trait and response). Results: Generally, Rphylopars imputation produced the most accurate estimate of missing values and best preserved the response–trait slope. However, estimates of missing data were still inaccurate, even with only 5% of values missing. Under severe biases, errors were high with every approach. Imputation was not always the best option, with complete-case analysis frequently outperforming Mice imputation and, to a lesser degree, BHPMF imputation. Mice, a popular approach, performed poorly when the response variable was excluded from the imputation model. Main conclusions: Imputation can handle missing data effectively in some conditions but is not always the best solution. None of the methods we tested could deal effectively with severe biases, which can be common in trait datasets. We recommend rigorous data checking for biases before and after imputation and propose variables that can assist researchers working with incomplete datasets to detect data biases and minimize errors.
    Authors: Johnson, Thomas F. (University of Reading, United Kingdom); Isaac, Nick J. B. (Centre for Ecology and Hydrology, United Kingdom); Paviolo, Agustin Javier (Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico CONICET - Nordeste; Instituto de Biología Subtropical - Nodo Puerto Iguazú, Universidad Nacional de Misiones; Centro de Investigaciones del Bosque Atlántico; Argentina); González Suárez, Manuela (University of Reading, United Kingdom)
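
    The contrast the study draws between complete-case analysis and imputation under value-dependent ("biased") missingness can be reproduced in miniature. The sketch below is not the study's R pipeline (Rphylopars, Mice and BHPMF are R packages); it is a rough Python analogue that uses scikit-learn's IterativeImputer as a MICE-like imputer on simulated traits in which species with smaller trait values are more likely to lack data. All simulation settings are assumptions made for illustration.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Toy setup (not the paper's simulation design): two correlated traits
# and a response variable with a known true slope of 0.5 on trait1.
n = 500
trait1 = rng.normal(size=n)
trait2 = 0.7 * trait1 + rng.normal(scale=0.5, size=n)
response = 0.5 * trait1 + rng.normal(scale=0.3, size=n)

# Biased (not-at-random) missingness: small-valued species lack data more often.
p_miss = 1.0 / (1.0 + np.exp(3.0 * trait1))
missing = rng.random(n) < 0.3 * p_miss / p_miss.mean()
trait1_obs = trait1.copy()
trait1_obs[missing] = np.nan

# Complete-case analysis: drop species with missing trait values.
keep = ~missing
slope_cc = LinearRegression().fit(trait1[keep].reshape(-1, 1),
                                  response[keep]).coef_[0]

# MICE-like imputation (response variable included in the imputation model).
X = np.column_stack([trait1_obs, trait2, response])
X_imp = IterativeImputer(random_state=0).fit_transform(X)
slope_imp = LinearRegression().fit(X_imp[:, [0]], response).coef_[0]
rmse_imp = np.sqrt(np.mean((X_imp[missing, 0] - trait1[missing]) ** 2))

print(f"true slope 0.50 | complete-case {slope_cc:.2f} | imputed {slope_imp:.2f}")
print(f"RMSE of imputed trait values: {rmse_imp:.3f}")
```

    Comparing the imputed-versus-true error and the recovered slope under different missingness mechanisms, as done here on a toy scale, mirrors the kind of data checking the authors recommend before and after imputation.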

    Simplicity versus accuracy trade-off in estimating seismic fragility of existing reinforced concrete buildings

    This paper investigates the trade-off between simplicity (modelling effort and computational time) and result accuracy in seismic fragility analysis of reinforced concrete (RC) frames. For many applications, simplified methods focusing on “archetype” structural models are often the state of practice. These simplified approaches may provide a rapid-yet-accurate estimation of seismic fragility, requiring a relatively small amount of input data and computational resources. However, such approaches often fail to capture specific structural deficiencies and/or failure mechanisms that might significantly affect the final assessment outcomes (e.g. shear failure in beam-column joints, in-plane and out-of-plane failure of infill walls, among others). To overcome these shortcomings, the alternative response analysis methods considered in this paper are all characterised by a mechanics-based approach and the explicit consideration of record-to-record variability in modelling seismic input/demands. Specifically, this paper compares three different seismic response analysis approaches, each characterised by a different refinement: 1) low refinement: non-linear static analysis (either analytical SLaMA or pushover analysis), coupled with the capacity spectrum method; 2) medium refinement: non-linear time-history analysis of equivalent single degree of freedom (SDoF) systems calibrated on either the SLaMA-based or the pushover-based force-displacement curves; 3) high refinement: non-linear time-history analysis of multi-degree of freedom (MDoF) numerical models. In all cases, fragility curves are derived through a cloud-based approach employing unscaled real (i.e. recorded) ground motions. Fourteen four- or eight-storey RC frames showing different plastic mechanisms and distributions of the infills are analysed using each method. The results show that non-linear time-history analysis of equivalent SDoF systems is not substantially superior to a non-linear static analysis coupled with the capacity spectrum method. The estimated median fragility (for different damage states) of the simplified methods generally falls within ±20% (generally as an under-estimation) of the corresponding estimates from the MDoF non-linear time-history analysis, with slightly higher errors for the uniformly infilled frames; in these latter cases, the error range increases to ±32%. The fragility dispersion is generally over-estimated by up to 30%. Although such bias levels are generally non-negligible, their rigorous characterisation can potentially guide an analyst to select/use a specific fragility derivation approach, depending on their needs and context, or to calibrate appropriate correction factors for the more simplified methods.
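
    All three refinement levels above feed a cloud-based fragility derivation with unscaled recorded ground motions. In its standard form, that calculation regresses the logarithm of an engineering demand parameter (EDP) on the logarithm of an intensity measure (IM) and folds the residual record-to-record dispersion into a lognormal fragility. The sketch below illustrates that generic calculation on synthetic data; it is not the paper's structural models, damage-state thresholds, or results, and every numerical value is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic "cloud" of (IM, EDP) pairs, e.g. spectral acceleration vs peak drift,
# one point per unscaled record (illustrative data only).
n_rec = 100
ln_im = rng.normal(np.log(0.3), 0.5, n_rec)
ln_edp = 1.0 * ln_im + (np.log(0.02) - np.log(0.3)) + rng.normal(0.0, 0.35, n_rec)

# Power-law demand model: ln(EDP) = b0 + b1 * ln(IM) + error.
b1, b0, *_ = stats.linregress(ln_im, ln_edp)
resid = ln_edp - (b0 + b1 * ln_im)
beta_d = resid.std(ddof=2)          # record-to-record dispersion of the demand

# Lognormal fragility for an assumed damage-state threshold edp_ds:
# P(EDP > edp_ds | IM) = Phi((b0 + b1*ln(IM) - ln(edp_ds)) / beta_d)
edp_ds = 0.02
im_grid = np.linspace(0.05, 1.5, 50)
p_exceed = stats.norm.cdf((b0 + b1 * np.log(im_grid) - np.log(edp_ds)) / beta_d)

# Median fragility (IM at 50% exceedance) and dispersion in IM terms.
im_median = np.exp((np.log(edp_ds) - b0) / b1)
beta_frag = beta_d / b1
print(f"median IM = {im_median:.3f}, dispersion = {beta_frag:.3f}")
print(f"P(exceedance) at IM = 0.3: {np.interp(0.3, im_grid, p_exceed):.2f}")
```

    The ±20% and ±30% biases reported above refer to exactly these two quantities, the median and the dispersion of the fitted fragility, when the simplified response analyses replace the MDoF time-history results.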

    Development of a mathematical model for predicting electrically elicited quadriceps femoris muscle forces during isovelocity knee joint motion

    Background: Direct electrical activation of skeletal muscles of patients with upper motor neuron lesions can restore functional movements, such as standing or walking. Because responses to electrical stimulation are highly nonlinear and time varying, accurate control of muscles to produce functional movements is very difficult. Accurate and predictive mathematical models can facilitate the design of stimulation patterns and control strategies that will produce the desired force and motion. In the present study, we build upon our previous isometric model to capture the effects of constant angular velocity on the forces produced during electrically elicited concentric contractions of healthy human quadriceps femoris muscle. Modelling the isovelocity condition is important because it will enable us to understand how our model behaves under the relatively simple condition of constant velocity and to better understand the interactions of muscle length, limb velocity, and stimulation pattern on the force produced by the muscle.
    Methods: An additional term was introduced into our previous isometric model to predict the force responses during constant-velocity limb motion. Ten healthy subjects were recruited for the study. Using a KinCom dynamometer, isometric and isovelocity force data were collected from the human quadriceps femoris muscle in response to a wide range of stimulation frequencies and patterns. Percentage error, linear regression trend lines, and paired t-tests were used to test how well the model predicted the experimental forces. In addition, a sensitivity analysis was performed using the Fourier Amplitude Sensitivity Test to obtain a measure of the sensitivity of our model's output to changes in model parameters.
    Results: Percentage RMS errors between modelled and experimental forces, determined for each subject at each stimulation pattern and velocity, were in general less than 20%. The coefficients of determination between the measured and predicted forces show that the model accounted for ~86% and ~85% of the variances in the measured force-time integrals and peak forces, respectively.
    Conclusion: The range of predictive abilities of the isovelocity model in response to changes in muscle length, velocity, and stimulation frequency for each individual makes it ideal for dynamic applications like FES cycling.
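
    The Methods and Results above rely on percentage RMS error and the coefficient of determination to quantify how well the predicted forces match the measurements. The sketch below shows one common way to compute these summary metrics for a force trace, its peak, and its force-time integral. The synthetic traces, the parameter values, and the normalisation used for the %RMS error are assumptions for illustration, not the study's data or exact definitions.

```python
import numpy as np

def percent_rms_error(measured, predicted):
    """Percentage RMS error between measured and predicted force traces,
    normalised here by the RMS of the measured force (one common convention)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rms_diff = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rms_diff / np.sqrt(np.mean(measured ** 2))

def r_squared(measured, predicted):
    """Coefficient of determination between measured and predicted values."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative use with synthetic force traces (not experimental data).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)                     # time (s)
measured = 300.0 * np.sin(np.pi * t) ** 2          # hypothetical measured force (N)
predicted = 0.9 * measured + rng.normal(scale=5.0, size=t.size)

dt = t[1] - t[0]
print(f"% RMS error: {percent_rms_error(measured, predicted):.1f}%")
print(f"R^2 over the trace: {r_squared(measured, predicted):.3f}")
print(f"peak force: measured {measured.max():.0f} N, predicted {predicted.max():.0f} N")
print(f"force-time integral: measured {np.sum(measured) * dt:.1f}, "
      f"predicted {np.sum(predicted) * dt:.1f} N*s")
```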

    Design, control and error analysis of a fast tool positioning system for ultra-precision machining of freeform surfaces

    This thesis was previously held under moratorium from 03/12/19 to 03/12/21.
    Freeform surfaces are widely found in advanced imaging and illumination systems, orthopaedic implants, high-power beam shaping applications, and other high-end scientific instruments. They give designers greater ability to cope with the performance limitations commonly encountered in simple-shape designs. However, the stringent requirements for surface roughness and form accuracy of freeform components pose significant challenges for current machining techniques—especially in the optical and display market, where large surfaces with tens of thousands of micro features are to be machined. Such highly wavy surfaces require the machine tool cutter to move rapidly while keeping following errors small. Manufacturing efficiency has been a bottleneck in these applications. The rapidly changing cutting forces and inertial forces also contribute a great deal to the machining errors. The difficulty in maintaining good surface quality under conditions of high operational frequency suggests the need for an error analysis approach that can predict the dynamic errors. The machining requirements also impose great challenges on machine tool design and the control process. There has been a knowledge gap on how the mechanical structural design affects the achievable positioning stability. The goal of this study was to develop a tool positioning system capable of delivering fast motion with the required positioning accuracy and stiffness for ultra-precision freeform manufacturing. This goal is achieved through deterministic structural design, detailed error analysis, and novel control algorithms.
    Firstly, a novel stiff-support design was proposed to eliminate the structural and bearing compliances in the structural loop. To implement the concept, a fast positioning device was developed based on a new type of flat voice coil motor. Flexure bearing, magnet track, and motor coil parameters were designed and calculated in detail. A high-performance digital controller and a power amplifier were also built to meet the servo rate requirement of the closed-loop system. A thorough understanding was established of how signals propagate within the control system, which is fundamentally important in determining the loop performance of high-speed control.
    A systematic error analysis approach based on a detailed model of the system was proposed and verified for the first time that could reveal how disturbances contribute to the tool positioning errors. Each source of disturbance was treated as a stochastic process, and these disturbances were synthesised in the frequency domain. The differences between following error and real positioning error were discussed and clarified. The predicted spectrum of following errors agreed with the measured spectrum across the frequency range. It was found that the following errors read from the control software underestimated the real positioning errors at low frequencies and overestimated them at high frequencies. The error analysis approach thus successfully revealed the real tool positioning errors that are mingled with sensor noise. Approaches to suppress disturbances were discussed from the perspectives of both system design and control. A deterministic controller design approach was developed to preclude the uncertainty associated with controller tuning, resulting in a control law that can minimize positioning errors.
    The influences of mechanical parameters such as mass, damping, and stiffness were investigated within the closed-loop framework. Under a given disturbance condition, the optimal bearing stiffness and optimal damping coefficients were found. Experimental positioning tests showed that a larger moving mass helped to combat all disturbances but sensor noise. Because of power limits, the inertia of the fast tool positioning system could not be high. A control algorithm with an additional acceleration-feedback loop was then studied to enhance the dynamic stiffness of the cutting system without any need for large inertia. An analytical model of the dynamic stiffness of the system with acceleration feedback was established. The dynamic stiffness was tested by frequency response tests as well as by intermittent diamond-turning experiments. The following errors and the form errors of the machined surfaces were compared with the estimates provided by the model. It was found that the dynamic stiffness within the acceleration sensor bandwidth was proportionally improved. The additional acceleration sensor brought a new error source into the loop, and its error contribution increased with a larger acceleration gain. At a certain point, the error caused by the increased acceleration gain surpassed the other disturbances and started to dominate, representing the practical upper limit of the acceleration gain.
    Finally, the developed positioning system was used to cut some typical freeform surfaces. A surface roughness of 1.2 nm (Ra) was achieved on a NiP alloy substrate in flat cutting experiments. Freeform surfaces—including a beam integrator surface, a sinusoidal surface, and an arbitrary freeform surface—were successfully machined with optical-grade quality. Ideas for future improvements are proposed at the end of the thesis.
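
    The acceleration-feedback result summarised above can be illustrated with a minimal frequency-domain sketch: for a single-axis mass-spring-damper stage under PID position control, feeding back measured acceleration acts like added virtual mass, so the dynamic stiffness seen by a force disturbance rises roughly in proportion within the sensor bandwidth. The model structure, controller, and all numerical values below are illustrative assumptions, not the thesis's analytical model or hardware parameters.

```python
import numpy as np

# Single-axis stage model (illustrative values, not the thesis hardware):
m, c, k = 0.5, 20.0, 2.0e5          # moving mass [kg], damping, bearing stiffness
kp, ki, kd = 5.0e6, 2.0e9, 1.0e3    # assumed PID position-loop gains

def dynamic_stiffness(freq_hz, ka):
    """|F_d / X| of the closed loop with acceleration-feedback gain ka.

    Assumed model: (m + ka)*x'' + c*x' + k*x + C(s)*x = F_d, i.e. the
    acceleration loop behaves as virtual mass within the sensor bandwidth.
    """
    s = 1j * 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    C = kp + ki / s + kd * s                       # PID position controller
    return np.abs((m + ka) * s**2 + c * s + k + C)

f = np.array([10.0, 100.0, 1000.0])                # frequencies of interest [Hz]
for ka in (0.0, 2.0, 5.0):                         # virtual mass added by the loop [kg]
    stiff = dynamic_stiffness(f, ka)
    print(f"ka={ka:>3}: " + ", ".join(f"{fi:.0f} Hz -> {s_:.2e} N/m"
                                      for fi, s_ in zip(f, stiff)))
```

    In this toy model the gain shows up mainly where the inertial term dominates; in practice the improvement is bounded by the accelerometer's bandwidth and noise, which is the new error source and practical gain limit identified above.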