
    Estimating snow cover from publicly available images

    In this paper we study the problem of estimating snow cover in mountainous regions, that is, the spatial extent of the Earth's surface covered by snow. We argue that publicly available visual content, in the form of user-generated photographs and image feeds from outdoor webcams, can both be leveraged as additional measurement sources, complementing existing ground, satellite, and airborne sensor data. To this end, we describe two content acquisition and processing pipelines that are tailored to such sources, addressing the specific challenges posed by each of them, e.g., identifying the mountain peaks, filtering out images taken in bad weather conditions, and handling varying illumination conditions. The final outcome is summarized in a snow cover index, which indicates, for a specific mountain and day of the year, the fraction of visible area covered by snow, possibly at different elevations. We created a manually labelled dataset to assess the accuracy of the image snow-covered-area estimation, achieving 90.0% precision at 91.1% recall. In addition, we show that seasonal trends related to air temperature are captured by the snow cover index.
    Comment: submitted to IEEE Transactions on Multimedia
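    At its core, a snow cover index of this kind is a masked pixel fraction. The sketch below is a minimal illustration of that idea; the brightness threshold and function name are assumptions for this example, not the paper's trained classifier or pipeline.

```python
import numpy as np

def snow_cover_index(image, mountain_mask, brightness_threshold=200):
    """Fraction of the visible mountain area classified as snow.

    image: (H, W) grayscale array; mountain_mask: boolean (H, W) array
    marking pixels that belong to the visible mountain area.
    Bright pixels inside the mask are treated as snow -- a crude
    stand-in for a learned snow classifier.
    """
    visible = mountain_mask.sum()
    if visible == 0:
        return 0.0
    snow = np.logical_and(image >= brightness_threshold, mountain_mask).sum()
    return snow / visible

# Toy 4x4 image: left half bright (snow), right half dark (rock)
img = np.array([[255, 255, 50, 50]] * 4)
mask = np.ones((4, 4), dtype=bool)
print(snow_cover_index(img, mask))  # -> 0.5
```

    Restricting the index to pixels above a given elevation contour would only change the mask, which is why the paper can report the index "possibly at different elevations".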

    Detecting confounding in multivariate linear models via spectral analysis

    We study a model where one target variable Y is correlated with a vector X := (X_1, ..., X_d) of predictor variables that are potential causes of Y. We describe a method that infers to what extent the statistical dependences between X and Y are due to the influence of X on Y and to what extent due to a hidden common cause (confounder) of X and Y. The method relies on concentration-of-measure results for large dimension d and an independence assumption stating that, in the absence of confounding, the vector of regression coefficients describing the influence of each X_j on Y typically has 'generic orientation' relative to the eigenspaces of the covariance matrix of X. For the special case of a scalar confounder, we show that confounding typically spoils this generic orientation in a characteristic way that can be used to quantitatively estimate the amount of confounding.
    Comment: 27 pages, 16 figures
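    The 'generic orientation' assumption can be illustrated with a toy numpy sketch (all names and the diagnostic below are illustrative assumptions, not the paper's estimator): in an unconfounded model, the regression coefficient vector shows no preference for any particular eigendirection of Cov(X).

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 5000

# Unconfounded model: Y = a^T X + noise, with a drawn independently
# of the covariance structure of X
X = rng.standard_normal((n, d)) @ np.diag(np.linspace(0.5, 3.0, d))
a = rng.standard_normal(d)
Y = X @ a + rng.standard_normal(n)

# Ordinary least squares estimate of the coefficient vector
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Express coef in the eigenbasis of Cov(X): "generic orientation"
# means no single eigendirection carries most of the squared weight
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
weights = (eigvecs.T @ coef) ** 2
weights /= weights.sum()
print(weights.max())  # small: the weight is spread over many eigendirections
```

    A confounder acting through a few directions of X would concentrate this weight profile, which is the signature the paper's method exploits.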

    Attribution of intentional causation influences the perception of observed movements: behavioral evidence and neural correlates

    Recent research on human agency suggests that intentional causation is associated with a subjective compression of the temporal interval between actions and their effects. That is, intentional movements and their causal effects are perceived as closer together in time than equivalent unintentional movements and their causal effects. This so-called intentional binding effect is consistently found for one's own self-generated actions. It has also been suggested that intentional binding occurs when observing intentional movements of others. However, this evidence is undermined by limitations of the paradigm used. In the current study, we aimed to overcome these limitations using a more rigorous design in combination with functional magnetic resonance imaging (fMRI) to explore the neural underpinnings of intentional binding of observed movements. In particular, we aimed to identify brain areas sensitive to the interaction between intentionality and causality attributed to the observed action. Our behavioral results confirmed the occurrence of intentional binding for observed movements using this more rigorous paradigm. Our fMRI results highlighted a collection of brain regions whose activity was sensitive to the interaction between intentionality and causation. Intriguingly, these brain regions have previously been implicated in the sense of agency over one's own movements. We discuss the implications of these results for intentional binding specifically, and for the sense of agency more generally.

    Optical mouse acting as biospeckle sensor

    In this work we propose experiments using an optical computer mouse, combined with low-cost lasers, to perform several measurements with applications in industry and in human health monitoring. The mouse was used to capture the movements produced by speckle pattern changes and to extract information through the adaptation of its structure. We measured displacements in wood samples under strain, variations in the diameter of an artery due to the heartbeat and, through a hardware simulation, the movement of an eye, an experiment that could serve as a low-cost communication aid for patients with severe motor disabilities. These measurements were made despite the fact that the CCD sensor of the mouse is monolithically integrated into a single chip, so the raw image cannot be accessed. If, as was the case with early optical mice, that signal could be accessed, the quality and usefulness of the measurements could be significantly increased. Since this was not possible, a webcam sensor was used to measure the drying of paint, a standard phenomenon for testing biospeckle techniques, in order to validate the usefulness of the mouse design. The results showed that a mouse combined with a laser pointer can be used to extract metrological information from many phenomena involving whole-field spatial displacement, while a mouse of the earlier design also made it possible to capture and analyze images of the speckle patterns.
    Centro de Investigaciones Óptica

    Influence of the digital space on suicidal behavior of adolescents

    With the Internet now firmly established as the main medium of communication in today’s world, studying the effect of its various aspects on the behavior of minors is more relevant than ever. This article provides arguments in favor of the need to study the phenomenon of cybersuicide among adolescents in light of the rising number of suicides among children in many countries in recent years, including Ukraine. The aim of this article is to study the role of the digital space, namely the Internet, in the reinforcement of suicidal ideation and intentions among children and, ultimately, in driving them to suicide. To achieve this goal, a number of general and special research methods for understanding social realities were used to ensure the objectivity and accuracy of the obtained data, which was all the more important given the nature of the subject. The dangers of the pre-suicidal state (pre-suicide) were examined, including from a medical perspective. Particular attention was paid to behavioral tendencies common among adolescents. The authors arrived at the conclusion that the digital space can both trigger suicidal thoughts and intentions in adolescents with its content and facilitate their committing suicide through “support” or even encouragement from online friends. The authors stress that related content children post on social media can help reveal whether they have been experiencing suicidal ideation. Arguments are given in favor of the need for parents, teachers, and psychologists to monitor said content to be able to provide timely psychological help, including via the digital space.

    An Improved Algorithm for Learning to Perform Exception-Tolerant Abduction

    Inference from an observed or hypothesized condition to a plausible cause or explanation for this condition is known as abduction. For many tasks, the acquisition of the necessary knowledge by machine learning has been widely found to be highly effective. However, the semantics of learned knowledge are weaker than the usual classical semantics, and this necessitates new formulations of many tasks. We focus on a recently introduced formulation of the abductive inference task that is thus adapted to the semantics of machine learning. A key problem is that we cannot expect our causes or explanations to be perfect; they must tolerate some error due to the world being more complicated than our formalization allows. This is a version of the qualification problem, and in machine learning it is known as agnostic learning. In the work by Juba that introduced the task of learning to make abductive inferences, an algorithm is given for producing k-DNF explanations that tolerate such exceptions: if the best possible k-DNF explanation fails to justify the condition with probability , then the algorithm is promised to find a k-DNF explanation that fails to justify the condition with probability at most , where n is the number of propositional attributes used to describe the domain. Here, we present an improved algorithm for this task. When the best k-DNF fails with probability , our algorithm finds a k-DNF that fails with probability at most (i.e., suppressing logarithmic factors in n and ). We examine the empirical advantage of this new algorithm over the previous algorithm in two test domains, one of explaining conditions generated by a “noisy” k-DNF rule, and another of explaining conditions that are actually generated by a linear threshold rule. We also apply the algorithm to the real-world application of anomaly explanation.
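    To make the error notion concrete, here is a toy Python encoding of a k-DNF and the rate at which it fails to justify an observed condition. The signed-literal encoding and function names are illustrative assumptions for this sketch, not the paper's learning algorithm.

```python
import numpy as np

def eval_dnf(X, terms):
    """Evaluate a DNF over boolean data X (n_samples x n_attrs).

    terms: list of terms, each a list of signed 1-based attribute
    indices; +i means attribute i must be True, -i means it must
    be False. The DNF is the OR of the AND of each term's literals.
    """
    out = np.zeros(len(X), dtype=bool)
    for term in terms:
        t = np.ones(len(X), dtype=bool)
        for lit in term:
            col = abs(lit) - 1
            t &= X[:, col] if lit > 0 else ~X[:, col]
        out |= t
    return out

def failure_rate(X, condition, terms):
    """Fraction of examples where the explanation holds but the
    condition does not, i.e. Pr[not condition | explanation] -- the
    error an exception-tolerant abduction algorithm bounds."""
    h = eval_dnf(X, terms)
    return np.mean(h & ~condition) / max(np.mean(h), 1e-12)

rng = np.random.default_rng(2)
X = rng.random((1000, 5)) < 0.5
# Condition generated exactly by the 2-DNF (x1 AND x2) OR (NOT x3 AND x4)
c = (X[:, 0] & X[:, 1]) | (~X[:, 2] & X[:, 3])
print(failure_rate(X, c, [[1, 2], [-3, 4]]))  # -> 0.0, explanation never fails
```

    In the agnostic setting, even the best k-DNF has a nonzero failure rate, and the guarantee above bounds how much worse the learned explanation can be.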
In this work, as opposed to anomaly detection, we are interested in finding possible descriptions of what may be causing anomalies in visual data. We use PCA to perform anomaly detection. The task is to attach semantics drawn from the image meta-data to a portion of the anomalous images from some source such as webcams. Such a partial description of the anomalous images in terms of the meta-data is useful both because it may help to explain what causes the identified anomalies, and because it may help to identify the truly unusual images that defy such simple categorization. We find that our approximation algorithm is a good match for this task. Our algorithm successfully finds plausible explanations of the anomalies. It yields a low error rate when the data set is large (>80,000 inputs) and also works well when the data set is not very large (<50,000 examples). It finds small 2-DNFs that are easy to interpret and capture a non-negligible
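The PCA step can be sketched as reconstruction-error scoring: points far from the principal subspace are flagged as anomalies. This is a minimal illustration under assumed names; the paper's pipeline operates on image features and meta-data rather than raw vectors.

```python
import numpy as np

def pca_anomaly_scores(X, n_components=2):
    """Score each row of X by its reconstruction error after projecting
    onto the top principal components; large errors flag anomalies."""
    Xc = X - X.mean(axis=0)
    # Principal directions come from the SVD of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T
    recon = Xc @ V @ V.T
    return np.linalg.norm(Xc - recon, axis=1)

rng = np.random.default_rng(1)
basis = rng.standard_normal((2, 10))
data = rng.standard_normal((100, 2)) @ basis          # lies in a 2-D subspace
data = np.vstack([data, 5 * rng.standard_normal(10)]) # one off-subspace point
scores = pca_anomaly_scores(data, n_components=2)
print(int(scores.argmax()))  # -> 100, the injected anomaly
```

The anomaly explanation step then searches for a small DNF over the meta-data attributes that covers these high-score images.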