1,459 research outputs found

    PMS5: Persistence with Bisphosphonate Therapy and Risk of Hip Fracture

    Close-up view of a simple residential building from the beginning of the 19th century. It comprises a ground floor and four upper floors, with each floor separated by string courses. It is structured around vertical axes of symmetry.

    Contextual Object Detection with a Few Relevant Neighbors

    A natural way to improve the detection of objects is to consider the contextual constraints imposed by the detection of additional objects in a given scene. In this work, we exploit the spatial relations between objects in order to improve detection capacity, as well as analyze various properties of the contextual object detection problem. To precisely calculate context-based probabilities of objects, we developed a model that examines the interactions between objects in an exact probabilistic setting, in contrast to previous methods that typically utilize approximations based on pairwise interactions. Such a scheme is facilitated by the realistic assumption that the existence of an object in any given location is influenced by only a few informative locations in space. Based on this assumption, we suggest a method for identifying these relevant locations and integrating them into a mostly exact calculation of probability based on their raw detector responses. This scheme is shown to improve detection results and provides unique insights into the process of contextual inference for object detection. We show that it is generally difficult to learn that a particular object reduces the probability of another, and that in cases where the context and detector strongly disagree, this learning becomes virtually impossible for the purposes of improving the results of an object detector. Finally, we demonstrate improved detection results through the use of our approach as applied to the PASCAL VOC and COCO datasets.
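    As a rough illustration of the kind of exact calculation the abstract describes, the sketch below marginalizes over the states of a handful of relevant neighbor locations for a single target detection. It is a simplified star-shaped toy model with made-up detector scores and compatibility tables, not the authors' model; `contextual_probability` and its arguments are hypothetical names.

```python
import itertools
import numpy as np

def contextual_probability(target_score, neighbor_scores, compat, prior=0.5):
    """Toy exact inference for one target location given a few relevant neighbors.

    target_score, neighbor_scores: raw detector responses in [0, 1], treated here
    as simple observation likelihoods (a simplification, not the paper's model).
    compat[k][t][s]: compatibility weight between target state t and neighbor k state s.
    Returns P(target object present | all detector responses).
    """
    n = len(neighbor_scores)
    joint = np.zeros(2)                           # unnormalised P(target state, evidence)
    for t in (0, 1):                              # target absent / present
        p_t = (prior if t else 1 - prior) * (target_score if t else 1 - target_score)
        total = 0.0
        # exact sum over all configurations of the few relevant neighbors
        for states in itertools.product((0, 1), repeat=n):
            p = 1.0
            for k, s in enumerate(states):
                obs = neighbor_scores[k] if s else 1 - neighbor_scores[k]
                p *= obs * compat[k][t][s]
            total += p
        joint[t] = p_t * total
    return joint[1] / joint.sum()

# Example: a target with a weak raw score but two supportive neighbors.
compat = [np.array([[0.6, 0.4], [0.3, 0.7]])] * 2   # neighbor presence favours target presence
print(contextual_probability(0.45, [0.9, 0.8], compat))
```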

    Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier features

    The one-class support vector machine (OC-SVM) has long been one of the most effective anomaly detection methods and is extensively adopted in both research and industrial applications. The biggest issue for OC-SVM is its limited capability to operate on large, high-dimensional datasets due to optimization complexity. These problems might be mitigated via dimensionality reduction techniques such as manifold learning or autoencoders. However, previous work often treats representation learning and anomaly prediction separately. In this paper, we propose the autoencoder-based one-class support vector machine (AE-1SVM), which brings OC-SVM, with the aid of random Fourier features to approximate the radial basis kernel, into a deep learning context by combining it with a representation learning architecture and jointly exploiting stochastic gradient descent to obtain end-to-end training. Interestingly, this also opens up the possible use of gradient-based attribution methods to explain the decision making for anomaly detection, which has long been challenging as a result of the implicit mappings between the input space and the kernel space. To the best of our knowledge, this is the first work to study the interpretability of deep learning in anomaly detection. We evaluate our method on a wide range of unsupervised anomaly detection tasks, in which our end-to-end training architecture achieves performance significantly better than previous work using separate training. Comment: Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD) 201
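    The random Fourier feature component can be illustrated in isolation: the minimal sketch below builds the standard cosine feature map and checks it against the exact RBF kernel on synthetic data. It does not reproduce the AE-1SVM architecture or its end-to-end training; `rff_features`, the bandwidth `gamma`, and the data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, n_features=512, gamma=0.5, rng=rng):
    """Random Fourier features z(x) such that z(x) @ z(y) ~ exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))   # frequencies for the RBF kernel
    b = rng.uniform(0, 2 * np.pi, size=n_features)                   # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = rff_features(X)
approx = Z @ Z.T                                       # approximate kernel matrix
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
exact = np.exp(-0.5 * sq_dists)                        # exact RBF kernel with gamma = 0.5
print(np.abs(approx - exact).max())                    # error shrinks as n_features grows
```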

    A new anisotropic poroelasticity model to describe damage accumulation during cyclic triaxial loading of rock

    Acknowledgments: The paper benefited from useful comments by two referees, Manolis Veveakis and Klaus Regenauer-Lieb, and the editor, Alexis Maineult. The contributions by Lyakhovsky and Shalev were supported by a grant from the Israel Science Foundation, ISF 363/20. The contributions by Browning, Meredith, Healy and Mitchell were supported by UKRI NERC awards NE/N003063/1, NE/N002938/1, NE/T007826/1, NE/T00780X/1. The contribution by Browning was also supported by FONDECYT grant number 11190143. The contribution by Panteleev was supported by the Russian Science Foundation (project N 19-77-30008). Peer reviewed. Postprint.

    Private Incremental Regression

    Data is continuously generated by modern data sources, and a recent challenge in machine learning has been to develop techniques that perform well in an incremental (streaming) setting. In this paper, we investigate the problem of private machine learning where, as is common in practice, the data is not given at once but rather arrives incrementally over time. We introduce the problems of private incremental ERM and private incremental regression, where the general goal is to always maintain a good empirical risk minimizer for the history observed under differential privacy. Our first contribution is a generic transformation of private batch ERM mechanisms into private incremental ERM mechanisms, based on the simple idea of invoking the private batch ERM procedure at regular time intervals. We take this construction as a baseline for comparison. We then provide two mechanisms for the private incremental regression problem. Our first mechanism is based on privately constructing a noisy incremental gradient function, which is then used in a modified projected gradient procedure at every timestep. This mechanism has an excess empirical risk of approximately $\sqrt{d}$, where $d$ is the dimensionality of the data. While the results of [Bassily et al. 2014] show that this bound is tight in the worst case, we show that certain geometric properties of the input and constraint set can be used to derive significantly better results for certain interesting regression problems. Comment: To appear in PODS 201
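    A toy version of the noisy projected-gradient idea, applied to streaming least squares, might look like the sketch below. The noise scale is a placeholder rather than noise calibrated to a gradient-sensitivity bound and a target (epsilon, delta), so this is not a differentially private implementation of the paper's mechanism; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def project_l2(w, radius=1.0):
    """Project w onto an L2 ball of the given radius (the constraint set)."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def noisy_incremental_regression(stream, dim, lr=0.05, noise_scale=0.5):
    """Toy noisy projected gradient for streaming least squares.

    noise_scale is a placeholder; a real DP mechanism would calibrate the noise
    to the gradient sensitivity and a target (epsilon, delta), omitted here.
    """
    w = np.zeros(dim)
    for x, y in stream:
        grad = (w @ x - y) * x                            # squared-loss gradient on the new point
        grad += rng.normal(scale=noise_scale, size=dim)   # un-calibrated "privacy" noise
        w = project_l2(w - lr * grad)                     # projected gradient step
        yield w

# Simulated stream: y = <w_true, x> + observation noise.
w_true = np.array([0.6, -0.3, 0.2])
stream = ((x, x @ w_true + 0.1 * rng.normal()) for x in rng.normal(size=(500, 3)))
for w in noisy_incremental_regression(stream, dim=3):
    pass
print(w)   # final iterate, roughly tracking w_true despite the added noise
```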

    Childhood Sexual Abuse and Early Timing of Puberty

    Purpose: The purpose was to examine whether the timing of puberty, indexed by breast development and pubic hair development, was earlier for sexually abused females compared with a matched comparison group of nonabused females, controlling for key alternative confounds. Methods: A cohort of sexually abused females and matched comparisons was followed longitudinally at mean ages 11 through 20 years. Sexually abused participants (N = 84) were referred by protective services. Comparison participants (N = 89) were recruited to be comparable in terms of age, ethnicity, income level, family constellation, zip codes, and nonsexual trauma histories. Stage of puberty was indexed at each assessment by nurse and participant ratings of breast and pubic hair development using Tanner staging, the gold standard for assessing pubertal onset and development. Cumulative logit mixed models were used to estimate the association between sexual abuse status and the likelihood of transitioning from earlier to later Tanner stage categories, controlling for covariates and potential confounds. Results: Sexual abuse was associated with earlier pubertal onset: 8 months earlier for breasts (odds ratio: 3.06, 95% CI: 1.11–8.49) and 12 months earlier for pubic hair (odds ratio: 3.49, 95% CI: 1.34–9.12). Alternative explanations including ethnicity, obesity, and biological father absence did not eradicate these findings. Conclusions: This study confirms an association between exposure to childhood sexual abuse and earlier pubertal onset. Results highlight the possibility that, due to this early onset, sexual abuse survivors may be at increased risk for psychosocial difficulties, menstrual and fertility problems, and even reproductive cancers due to prolonged exposure to sex hormones.
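    For readers unfamiliar with cumulative logit models, the sketch below fits a plain proportional-odds model on simulated ordinal stage data with statsmodels and reads off an odds ratio for the exposure term. It omits the random effects of the mixed models used in the study, and the variables and data are purely illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 400

# Synthetic data for illustration only: exposure indicator, age, and an ordinal stage.
exposed = rng.integers(0, 2, n)                     # 1 = exposed group (hypothetical)
age = rng.uniform(10, 14, n)
latent = 1.2 * (age - 10) + 0.8 * exposed + rng.logistic(size=n)
stage = pd.Series(pd.cut(latent, bins=[-np.inf, 1, 2.5, 4, np.inf],
                         labels=[1, 2, 3, 4], ordered=True))

X = pd.DataFrame({"exposed": exposed, "age": age})
res = OrderedModel(stage, X, distr="logit").fit(method="bfgs", disp=False)
print(np.exp(res.params["exposed"]))                # odds ratio for being in a later stage
```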

    Radiographic Image Enhancement by Wiener Decorrelation

    The primary focus of the application of image processing to radiography is the problem of segmentation. The general segmentation problem has been attacked on a broad front [1, 2], and thresholding, in particular, is a popular method [1, 3-6]. Unfortunately, geometric unsharpness destroys the crisp edges needed for unambiguous decisions, and this difficulty can be treated as a filtering problem in which the objective is to devise a high-pass (sharpening) filter. This approach has been studied for more than 20 years [7-13].
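    A generic version of the denoise-then-sharpen idea can be sketched with SciPy: Wiener filtering for noise suppression followed by an unsharp-mask high-pass step. This is not the Wiener decorrelation procedure of the paper, just an assumed minimal pipeline run on a synthetic blurred disc.

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)

# Synthetic "radiograph": a bright disc with geometrically unsharp edges plus noise.
y, x = np.mgrid[:128, :128]
image = (np.hypot(x - 64, y - 64) < 30).astype(float)
blurred = gaussian_filter(image, sigma=3) + 0.05 * rng.normal(size=image.shape)

denoised = wiener(blurred, mysize=5)                      # adaptive noise suppression
highpass = denoised - gaussian_filter(denoised, sigma=3)  # high-pass component
sharpened = denoised + 1.5 * highpass                     # unsharp masking restores edge contrast
print(sharpened.shape)
```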

    Measurement of corrosion content of archaeological lead artifacts by their Meissner response in the superconducting state; a new dating method

    The Meissner fraction in the superconducting state of lead archaeological artifacts is used to evaluate the mass of the uncorroded metal in the sample. Knowing the total mass of the sample, the mass of all corrosion products is established. It is shown that this mass correlates with the archaeological age of the lead artifacts over a time span of ~2500 years. Well-dated untreated lead samples from Tel-Dor (Persian period) and Caesarea (Byzantine and Crusader periods), as well as contemporary data, were used to establish the dating correlation. This new chemical dating method is apparently applicable to lead artifacts buried in soils with pH > 6.5. In such soils the corrosion process is very slow and the corrosion products, mainly PbO and PbCO3, accumulate over hundreds of years. The method presented is in principle non-destructive. Comment: File ARCH_4.pdf, 14 pages including 1 table and 5 figures
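    The mass balance implied by the abstract reduces to a one-line calculation once the Meissner signal has been normalized against an all-metal reference; the exact normalization used in the paper is not given in the abstract, so the sketch below is an assumption-laden illustration only, with hypothetical names and numbers.

```python
def corrosion_mass(total_mass_g, meissner_fraction, reference_fraction=1.0):
    """Estimate corrosion-product mass from the sample's Meissner response.

    meissner_fraction: measured Meissner signal of the sample relative to that of a
    fully metallic lead reference of equal mass (reference_fraction). The paper's
    actual normalisation is not stated in the abstract; this is a mass-balance
    illustration only.
    """
    metal_mass = total_mass_g * meissner_fraction / reference_fraction
    return total_mass_g - metal_mass

# A 10 g artifact whose Meissner signal is 88% of the all-metal reference:
print(corrosion_mass(10.0, 0.88))   # ~1.2 g of corrosion products, to compare against the age calibration
```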

    Dynamic sustained attention markers differentiate atypical development: the case of Williams syndrome and Down's syndrome

    Impaired sustained attention is considered an important factor in determining poor functional outcomes across multiple cognitive and behavioural disorders. Sustained attention is compromised both for children with Williams syndrome (WS) and for children with Down's syndrome (DS), but the specific difficulties remain poorly understood because of limitations in how sustained attention has been assessed thus far. In the current study, we compared the performance of typically developing children (N = 99), children with WS (N = 25), and children with DS (N = 18) on a Continuous Performance Task, a standard tool for measuring sustained attention. In contrast to previous studies, which primarily focused on overall differences in mean performance, we estimated the extent to which performance changed over time on task, thus focusing directly on the sustained element of performance. Children with WS and children with DS performed more poorly overall compared to typically developing children. Importantly, measures specific to changes over time differentiated between children with the two syndromes. Children with WS showed a decrement in performance, whereas children with Down's syndrome demonstrated non-specific poor performance. In addition, our measure of change in performance predicted teacher-rated attention deficit symptoms across the full sample. An approach that captures dynamic changes in performance over assessments may be fruitful for investigating similarities and differences in sustained attention in other atypically developing populations. [Abstract copyright: Copyright © 2019. Published by Elsevier Ltd.]
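    One simple way to capture the "change over time on task" the authors emphasize is a per-child slope of accuracy across task blocks; the study's actual estimation approach is not detailed in the abstract, so the sketch below is only an assumed illustration of how a performance decrement differs from uniformly poor performance.

```python
import numpy as np

rng = np.random.default_rng(4)

def time_on_task_slope(block_accuracy):
    """Slope of accuracy across consecutive task blocks (negative = performance decrement)."""
    blocks = np.arange(len(block_accuracy))
    return np.polyfit(blocks, block_accuracy, 1)[0]

# Synthetic per-block accuracies for two hypothetical children over 6 blocks.
declining = np.clip(0.9 - 0.05 * np.arange(6) + 0.02 * rng.normal(size=6), 0, 1)
uniformly_low = np.clip(0.65 + 0.02 * rng.normal(size=6), 0, 1)

print(time_on_task_slope(declining))      # clearly negative slope: a decrement over time
print(time_on_task_slope(uniformly_low))  # near-zero slope: poor but stable performance
```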