725 research outputs found

    Fraternity/Sorority Membership: Good News About First-Year Impact

    Much has been written about the importance of student involvement for building a sense of belonging on college campuses. Fraternity/sorority membership, as a form of undergraduate involvement, more often evokes perceptions of misbehavior than of positive outcomes. This study examined the impact of fraternity/sorority membership on the academic performance of more than 45,000 first-year students from 17 different institutions. Quantitative analysis covered grades, credit hours earned, and retention. Findings offer a comprehensive basis for judging the efficacy of maintaining fraternal organizations on college campuses, and they encourage individual institutions to use this methodology to inform institutional policy, particularly regarding the potential benefits of deferring recruitment.

    The basophil activation test by flow cytometry: recent developments in clinical studies, standardization and emerging perspectives

    The diagnosis of immediate allergy is mainly based upon an evocative clinical history, positive skin tests (the gold standard) and, if available, detection of specific IgE. In some complicated cases, functional in vitro tests are necessary. The general concept of these tests is to mimic in vitro the contact between allergens and circulating basophils. The first approach to basophil functional responses was the histamine release test, but this has remained controversial owing to insufficient sensitivity and specificity. In recent years an increasing number of studies have demonstrated that flow cytometry is a reliable tool for monitoring basophil activation upon allergen challenge by detecting surface expression of degranulation/activation markers (CD63 or CD203c). This article reviews the recent improvements to the basophil activation test made possible by flow cytometry, focusing on the use of anti-CRTH2/DP2 antibodies for basophil recognition. On the basis of a new triple-staining protocol, the basophil activation test has become a standardized tool for in vitro diagnosis of immediate allergy. It is also suitable for pharmacological studies on non-purified human basophils. Multicenter studies are now required for its clinical assessment in large patient populations and to define the cut-off values for clinical decision-making.

    Learning to Recognize Touch Gestures: Recurrent vs. Convolutional Features and Dynamic Sampling

    We propose a fully automatic method for learning gestures on large touch devices in a potentially multi-user context. The goal is to learn general models capable of adapting to different gestures, user styles and hardware variations (e.g. device sizes, sampling frequencies and regularities). Based on deep neural networks, our method features a novel dynamic sampling and temporal normalization component, transforming variable-length gestures into fixed-length representations while preserving finger/surface contact transitions, that is, the topology of the signal. This sequential representation is then processed with a convolutional model capable, unlike recurrent networks, of learning hierarchical representations with different levels of abstraction. To demonstrate the interest of the proposed method, we introduce a new touch gesture dataset with 6591 gestures performed by 27 people, which is, to our knowledge, the first of its kind: a publicly available multi-touch gesture dataset for interaction. We also tested our method on a standard dataset of symbolic touch gesture recognition, the MMG dataset, outperforming the state of the art and reporting close to perfect performance.
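The dynamic sampling and temporal normalization component described above could be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: a variable-length gesture (sequence of x, y, contact samples) is resampled to a fixed length, with positions linearly interpolated and the binary contact channel taken by nearest neighbour so that touch/lift transitions survive.

```python
def normalize_gesture(points, target_len=32):
    """Resample a variable-length gesture to a fixed length.

    `points` is a list of (x, y, contact) samples, where `contact` is 1
    while the finger touches the surface and 0 otherwise. Positions are
    linearly interpolated at uniform timestamps; the discrete contact
    channel uses nearest-neighbour sampling so touch/lift transitions
    are preserved. Illustrative sketch only (assumes target_len >= 2).
    """
    n = len(points)
    out = []
    for k in range(target_len):
        t = k / (target_len - 1) * (n - 1)  # fractional source index
        i = int(t)
        j = min(i + 1, n - 1)
        frac = t - i
        x = points[i][0] * (1 - frac) + points[j][0] * frac
        y = points[i][1] * (1 - frac) + points[j][1] * frac
        contact = points[round(t)][2]  # nearest neighbour keeps transitions
        out.append((x, y, contact))
    return out
```

Any downstream convolutional model then sees a fixed (target_len, 3) input regardless of the device's sampling frequency or the gesture's duration.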

    Learning to recognize touch gestures: recurrent vs. convolutional features and dynamic sampling

    We propose a fully automatic method for learning gestures on large touch devices in a potentially multi-user context. The goal is to learn general models capable of adapting to different gestures, user styles and hardware variations (e.g. device sizes, sampling frequencies and regularities). Based on deep neural networks, our method features a novel dynamic sampling and temporal normalization component, transforming variable-length gestures into fixed-length representations while preserving finger/surface contact transitions, that is, the topology of the signal. This sequential representation is then processed with a convolutional model capable, unlike recurrent networks, of learning hierarchical representations with different levels of abstraction. To demonstrate the interest of the proposed method, we introduce a new touch gesture dataset with 6591 gestures performed by 27 people, which is, to our knowledge, the first of its kind: a publicly available multi-touch gesture dataset for interaction. We also tested our method on a standard dataset of symbolic touch gesture recognition, the MMG dataset, outperforming the state of the art and reporting close to perfect performance. (9 pages, 4 figures; accepted at the 13th IEEE Conference on Automatic Face and Gesture Recognition, FG 2018. Dataset available at http://itekube7.itekube.co)

    Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning

    Using touch devices to navigate in virtual 3D environments such as computer-assisted design (CAD) models or geographical information systems (GIS) is inherently difficult for humans, as the 3D operations have to be performed by the user on a 2D touch surface. This ill-posed problem is classically solved with a fixed and handcrafted interaction protocol, which must be learned by the user. We propose to automatically learn a new interaction protocol that maps a 2D user input to 3D actions in virtual environments using reinforcement learning (RL). A fundamental problem of RL methods is the vast amount of interactions often required, which are difficult to come by when humans are involved. To overcome this limitation, we make use of two collaborative agents. The first agent models the human by learning to perform the 2D finger trajectories. The second agent acts as the interaction protocol, interpreting and translating to 3D operations the 2D finger trajectories from the first agent. We restrict the learned 2D trajectories to be similar to a training set of collected human gestures by first performing state representation learning, prior to reinforcement learning. This state representation learning is addressed by projecting the gestures into a latent space learned by a variational auto-encoder (VAE).

    Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning

    Using touch devices to navigate in virtual 3D environments such as computer-assisted design (CAD) models or geographical information systems (GIS) is inherently difficult for humans, as the 3D operations have to be performed by the user on a 2D touch surface. This ill-posed problem is classically solved with a fixed and handcrafted interaction protocol, which must be learned by the user. We propose to automatically learn a new interaction protocol that maps a 2D user input to 3D actions in virtual environments using reinforcement learning (RL). A fundamental problem of RL methods is the vast amount of interactions often required, which are difficult to come by when humans are involved. To overcome this limitation, we make use of two collaborative agents. The first agent models the human by learning to perform the 2D finger trajectories. The second agent acts as the interaction protocol, interpreting and translating to 3D operations the 2D finger trajectories from the first agent. We restrict the learned 2D trajectories to be similar to a training set of collected human gestures by first performing state representation learning, prior to reinforcement learning. This state representation learning is addressed by projecting the gestures into a latent space learned by a variational auto-encoder (VAE).
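The cooperative two-agent loop described above has a simple structure that can be sketched in code. Everything here is an assumption for illustration (the agent/decoder/environment names and the toy classes are hypothetical, not the paper's implementation): the user agent acts in the VAE latent space, a frozen decoder maps its latent code to a 2D finger trajectory, the protocol agent translates that trajectory into a 3D operation, and both agents learn from the same shared reward.

```python
class ToyAgent:
    """Stand-in for an RL policy; a real system would use e.g. policy gradients."""
    def __init__(self, action):
        self.action = action
        self.rewards = []

    def act(self, observation=None):
        return self.action

    def update(self, reward):
        # placeholder for a learning update; here we just record the reward
        self.rewards.append(reward)


def train_step(user_agent, protocol_agent, decoder, env_step):
    """One cooperative step of the two-agent scheme (structural sketch)."""
    z = user_agent.act()                     # latent code near the human-gesture manifold
    trajectory = decoder(z)                  # frozen VAE decoder -> 2D finger trajectory
    action = protocol_agent.act(trajectory)  # interpreted 3D operation
    reward = env_step(action)                # reward from the 3D environment
    user_agent.update(reward)                # both agents share the same signal
    protocol_agent.update(reward)
    return reward
```

Acting in the VAE latent space, rather than in raw trajectory space, is what keeps the first agent's proposals close to the distribution of collected human gestures.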

    Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors

    The proliferation of DeepFake technology is a rising challenge in today's society, owing to more powerful and accessible generation methods. To counter this, the research community has developed detectors of ever-increasing accuracy. However, the ability to explain the decisions of such models to users is lagging behind and is treated as an accessory in large-scale benchmarks, despite being a crucial requirement for the correct deployment of automated tools for content moderation. We attribute the issue to the reliance on qualitative comparisons and the lack of established metrics. We describe a simple set of metrics to evaluate the visual quality and informativeness of explanations of video DeepFake classifiers from a human-centric perspective. With these metrics, we compare common approaches to improve explanation quality and discuss their effect on both classification and explanation performance on the recent DFDC and DFD datasets. (Accepted at BMVC 2022; code repository at https://github.com/baldassarreFe/deepfake-detectio)
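One widely used family of quantitative explanation metrics, which such evaluations often build on, is deletion-style faithfulness: progressively remove the pixels an explanation marks as most important and watch how fast the classifier's score drops. The sketch below is illustrative only (a generic metric, not necessarily the paper's exact definition), operating on a flattened frame for simplicity.

```python
def deletion_score(classifier, frame, saliency, steps=10):
    """Faithfulness proxy for an explanation of `classifier`'s decision.

    `frame` is a flat list of pixel values and `saliency` assigns each
    pixel an importance score. Pixels are zeroed out in order of
    decreasing saliency, `len(frame) // steps` at a time, and the
    classifier is re-queried after each chunk. Returns the mean of the
    score curve (an area-under-curve proxy): a lower value means the
    explanation highlights regions the model truly relies on.
    """
    order = sorted(range(len(frame)), key=lambda i: -saliency[i])
    masked = list(frame)                 # work on a copy of the frame
    scores = [classifier(masked)]        # score before any deletion
    chunk = max(1, len(frame) // steps)
    for start in range(0, len(order), chunk):
        for i in order[start:start + chunk]:
            masked[i] = 0.0              # delete the next most-salient pixels
        scores.append(classifier(masked))
    return sum(scores) / len(scores)
```

Comparing this value for two explanation methods on the same detector gives a quantitative, rather than qualitative, ranking of how informative each explanation is about the model's actual evidence.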

    Toward a watershed-scale stormwater management

    Get PDF