    Polynomial Bounds for Learning Noisy Optical Physical Unclonable Functions and Connections to Learning With Errors

    It is shown that a class of optical physical unclonable functions (PUFs) can be learned to arbitrary precision with arbitrarily high probability, even in the presence of noise, given access to polynomially many challenge-response pairs and polynomially bounded computational power, under mild assumptions about the distributions of the noise and challenge vectors. This extends the results of Rührmair et al. (2013), who showed a subset of this class of PUFs to be learnable in polynomial time in the absence of noise, under the assumption that the optics of the PUF were either linear or had negligible nonlinear effects. We derive polynomial bounds on the required number of samples and on the computational complexity of a linear regression algorithm, based on the size parameters of the PUF, the distributions of the challenge and noise vectors, and the desired probability and accuracy of the regression algorithm, following an analysis similar to that of Bootle et al. (2018), who demonstrated a learning attack on a poorly implemented version of the Learning With Errors problem.
    Comment: 10 pages, 2 figures, submitted to IEEE Transactions on Information Forensics and Security.
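
As a rough illustration of the kind of attack the abstract describes, the sketch below fits a linear model to noisy challenge-response pairs from a simulated linear-optics PUF using ordinary least squares. The feature dimension, noise level, and Gaussian challenge distribution are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch: learning a simulated linear-optics PUF from noisy
# challenge-response pairs with ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)

n_features = 64     # size parameter of the simulated PUF (assumed)
n_samples = 5000    # polynomially many challenge-response pairs
noise_std = 0.05    # additive measurement noise (assumed)

# Hidden "physical" parameters of the PUF, unknown to the attacker.
w_true = rng.normal(size=n_features)

# Challenges drawn from a benign distribution; responses are linear plus noise.
X = rng.normal(size=(n_samples, n_features))
y = X @ w_true + rng.normal(scale=noise_std, size=n_samples)

# Ordinary least squares recovers the hidden parameters to high precision.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("max parameter error:", np.max(np.abs(w_hat - w_true)))
```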

    Handling missing values in trait data

    Aim: Trait data are widely used in ecological and evolutionary phylogenetic comparative studies, but values are often not available for all species of interest. Traditionally, researchers have excluded species without data from analyses, but estimation of missing values using imputation has been proposed as a better approach. However, imputation methods have largely been designed for randomly missing data, whereas trait data are often not missing at random (e.g., more data are available for bigger species). Here, we evaluate the performance of approaches for handling missing values in biased datasets.
    Location: Any.
    Time period: Any.
    Major taxa studied: Any.
    Methods: We simulated continuous traits and separate response variables to test the performance of nine imputation methods and complete-case analysis (excluding missing values from the dataset) under biased missing-data scenarios. We characterized performance by estimating the error in imputed trait values (deviation from the true value) and in inferred trait–response relationships (deviation from the true relationship between a trait and a response).
    Results: Generally, Rphylopars imputation produced the most accurate estimates of missing values and best preserved the response–trait slope. However, estimates of missing data were still inaccurate, even with only 5% of values missing. Under severe biases, errors were high with every approach. Imputation was not always the best option, with complete-case analysis frequently outperforming Mice imputation and, to a lesser degree, BHPMF imputation. Mice, a popular approach, performed poorly when the response variable was excluded from the imputation model.
    Main conclusions: Imputation can handle missing data effectively in some conditions but is not always the best solution. None of the methods we tested could deal effectively with severe biases, which can be common in trait datasets. We recommend rigorous checking of data for biases before and after imputation and propose variables that can help researchers working with incomplete datasets to detect data biases and minimize errors.
    Authors: Thomas F. Johnson (University of Reading, United Kingdom); Nick J. B. Isaac (Centre for Ecology and Hydrology, United Kingdom); Agustín Javier Paviolo (Instituto de Biología Subtropical, CONICET - Universidad Nacional de Misiones, Nodo Puerto Iguazú, Argentina; Centro de Investigaciones del Bosque Atlántico, Argentina); Manuela González Suárez (University of Reading, United Kingdom).
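
The toy sketch below mirrors the evaluation protocol described in the Methods: simulate a trait and a response, delete trait values under a size-biased (missing-not-at-random) pattern, then compare complete-case analysis against a naive mean imputation on both imputation error and the recovered trait-response slope. The simulation settings and the mean-imputation baseline are assumptions for illustration only; the study itself evaluates Rphylopars, Mice, BHPMF, and other methods.

```python
# Toy version of the biased missing-data evaluation: complete-case analysis
# versus naive mean imputation under size-biased missingness.
import numpy as np

rng = np.random.default_rng(1)
n = 500
trait = rng.normal(size=n)                              # e.g. log body size
response = 0.8 * trait + rng.normal(scale=0.5, size=n)
true_slope = 0.8

# Biased missingness: larger trait values are more likely to be missing.
p_missing = 0.4 / (1 + np.exp(-2 * trait))
missing = rng.random(n) < p_missing

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.polyfit(x, y, 1)[0]

# Complete-case analysis: drop species with missing trait values.
cc_slope = slope(trait[~missing], response[~missing])

# Naive imputation: replace missing values with the observed mean.
imputed = trait.copy()
imputed[missing] = trait[~missing].mean()
imp_slope = slope(imputed, response)

print("complete-case slope error:", abs(cc_slope - true_slope))
print("mean-imputation slope error:", abs(imp_slope - true_slope))
print("imputation RMSE:", np.sqrt(np.mean((imputed[missing] - trait[missing]) ** 2)))
```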

    Simplicity versus accuracy trade-off in estimating seismic fragility of existing reinforced concrete buildings

    This paper investigates the trade-off between simplicity (modelling effort and computational time) and result accuracy in seismic fragility analysis of reinforced concrete (RC) frames. For many applications, simplified methods focusing on “archetype” structural models are the state of practice. These simplified approaches may provide a rapid yet accurate estimation of seismic fragility, requiring a relatively small amount of input data and computational resources. However, such approaches often fail to capture specific structural deficiencies and/or failure mechanisms that might significantly affect the final assessment outcomes (e.g. shear failure in beam-column joints, or in-plane and out-of-plane failure of infill walls, among others). To overcome these shortcomings, the alternative response analysis methods considered in this paper are all characterised by a mechanics-based approach and by the explicit consideration of record-to-record variability in modelling seismic input/demands.
    Specifically, this paper compares three seismic response analysis approaches, each characterised by a different level of refinement: 1) low refinement - non-linear static analysis (either analytical SLaMA or pushover analysis) coupled with the capacity spectrum method; 2) medium refinement - non-linear time-history analysis of equivalent single-degree-of-freedom (SDoF) systems calibrated on either the SLaMA-based or the pushover-based force-displacement curves; 3) high refinement - non-linear time-history analysis of multi-degree-of-freedom (MDoF) numerical models. In all cases, fragility curves are derived through a cloud-based approach employing unscaled real (i.e. recorded) ground motions. Fourteen four- or eight-storey RC frames with different plastic mechanisms and infill distributions are analysed with each method.
    The results show that non-linear time-history analysis of equivalent SDoF systems is not substantially superior to non-linear static analysis coupled with the capacity spectrum method. The estimated median fragility (for different damage states) of the simplified methods generally falls within ±20% (typically as an under-estimation) of the corresponding estimates from the MDoF non-linear time-history analysis, with slightly higher errors for the uniformly-infilled frames, for which the error range increases up to ±32%. The fragility dispersion is generally over-estimated, by up to 30%. Although such bias levels are generally non-negligible, their rigorous characterisation can guide an analyst in selecting a specific fragility derivation approach, depending on their needs and context, or in calibrating appropriate correction factors for the more simplified methods.
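
For readers unfamiliar with the cloud-based fragility derivation mentioned above, the sketch below fits a lognormal fragility curve from a synthetic "cloud" of intensity-measure/demand pairs by regression in log space. The synthetic data, the drift threshold, and the record count are assumptions for illustration, not values from the paper.

```python
# Minimal cloud-analysis sketch: lognormal fragility from unscaled-record
# intensity measures (IM) and peak demands (EDP).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Synthetic "cloud": intensity measure (e.g. spectral acceleration) paired
# with peak drift demand from non-linear time-history analyses.
im = rng.lognormal(mean=-1.0, sigma=0.6, size=60)
edp = np.exp(0.3 + 1.0 * np.log(im) + rng.normal(scale=0.35, size=60))

# Cloud analysis: regress ln(EDP) on ln(IM).
b, a = np.polyfit(np.log(im), np.log(edp), 1)
resid = np.log(edp) - (a + b * np.log(im))
beta_edp = resid.std(ddof=2)              # record-to-record dispersion

# Lognormal fragility for an assumed drift threshold defining the damage state.
edp_threshold = 0.02
im_median = np.exp((np.log(edp_threshold) - a) / b)   # median IM capacity
beta_im = beta_edp / b                                 # dispersion in IM terms

im_grid = np.linspace(0.01, 2.0, 50)
p_exceed = norm.cdf(np.log(im_grid / im_median) / beta_im)
print(f"median IM capacity: {im_median:.3f}, dispersion: {beta_im:.2f}")
```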

    Development of a mathematical model for predicting electrically elicited quadriceps femoris muscle forces during isovelocity knee joint motion

    Background: Direct electrical activation of skeletal muscles of patients with upper motor neuron lesions can restore functional movements, such as standing or walking. Because responses to electrical stimulation are highly nonlinear and time varying, accurate control of muscles to produce functional movements is very difficult. Accurate and predictive mathematical models can facilitate the design of stimulation patterns and control strategies that will produce the desired force and motion. In the present study, we build upon our previous isometric model to capture the effects of constant angular velocity on the forces produced during electrically elicited concentric contractions of healthy human quadriceps femoris muscle. Modelling the isovelocity condition is important because it enables us to understand how the model behaves under the relatively simple condition of constant velocity and to better understand the interactions of muscle length, limb velocity, and stimulation pattern on the force produced by the muscle.
    Methods: An additional term was introduced into our previous isometric model to predict the force responses during constant-velocity limb motion. Ten healthy subjects were recruited for the study. Using a KinCom dynamometer, isometric and isovelocity force data were collected from the human quadriceps femoris muscle in response to a wide range of stimulation frequencies and patterns. Percentage error, linear regression trend lines, and paired t-tests were used to test how well the model predicted the experimental forces. In addition, a sensitivity analysis was performed using the Fourier Amplitude Sensitivity Test to obtain a measure of the sensitivity of the model's output to changes in model parameters.
    Results: Percentage RMS errors between modelled and experimental forces, determined for each subject at each stimulation pattern and velocity, were in general less than 20%. The coefficients of determination between the measured and predicted forces show that the model accounted for ~86% and ~85% of the variance in the measured force-time integrals and peak forces, respectively.
    Conclusion: The range of predictive abilities of the isovelocity model in response to changes in muscle length, velocity, and stimulation frequency for each individual makes it well suited to dynamic applications such as FES cycling.
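
The sketch below implements the two goodness-of-fit measures cited in the Results (percentage RMS error and the coefficient of determination) for a modelled versus measured force trace. The traces here are synthetic placeholders, not the study's experimental data.

```python
# Goodness-of-fit metrics for modelled vs. measured force traces.
import numpy as np

def pct_rms_error(measured, predicted):
    """Percentage RMS error of the prediction relative to the measured peak."""
    rms = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rms / np.max(np.abs(measured))

def r_squared(measured, predicted):
    """Coefficient of determination between measured and predicted values."""
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - np.mean(measured)) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic example: an idealised force transient and an imperfect model output.
t = np.linspace(0.0, 1.0, 200)
measured = 300.0 * t * np.exp(-3.0 * t)   # placeholder force-time curve (N)
predicted = 0.9 * measured + np.random.default_rng(3).normal(scale=5.0, size=t.size)

print(f"%RMS error: {pct_rms_error(measured, predicted):.1f}%")
print(f"R^2: {r_squared(measured, predicted):.3f}")
```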