25 research outputs found

    The instrument suite of the European Spallation Source

    An overview is provided of the 15 neutron beam instruments making up the initial instrument suite of the European Spallation Source (ESS), which is being made available to the neutron user community. The ESS neutron source consists of a high-power accelerator and target station, providing a unique long-pulse time structure of slow neutrons. The design considerations behind the time structure, moderator geometry and instrument layout are presented. The 15-instrument suite consists of two small-angle instruments, two reflectometers, an imaging beamline, two single-crystal diffractometers (one for macromolecular crystallography and one for magnetism), two powder diffractometers and an engineering diffractometer, as well as an array of five inelastic instruments comprising two chopper spectrometers, an inverse-geometry single-crystal excitations spectrometer, an instrument for vibrational spectroscopy and a high-resolution backscattering spectrometer. The conceptual design, performance and scientific drivers of each of these instruments are described. All of the instruments are designed to provide breakthrough new scientific capability not currently available at existing facilities, building on the inherent strengths of the ESS long-pulse neutron source: high flux, flexible resolution and large bandwidth. Each of them is predicted to provide world-leading performance at an accelerator power of 2 MW. This technical capability translates into a very broad range of scientific capabilities. The composition of the instrument suite has been chosen to maximise the breadth and depth of its scientific impact.

    Self-supervised pain intensity estimation from facial videos via statistical spatiotemporal distillation

    No full text
    Abstract Recently, automatic pain assessment technology, in particular the automatic detection of pain from facial expressions, has been developed to improve the quality of pain management and has attracted increasing attention. In this paper, we propose self-supervised learning for automatic yet efficient pain assessment, in order to reduce the cost of collecting large amounts of labeled data. To achieve this, we introduce a novel similarity function to learn generalized representations using a Siamese network in the pretext task. The learned representations are fine-tuned in the downstream task of pain intensity estimation. To make the method computationally efficient, we propose Statistical Spatiotemporal Distillation (SSD) to encode the spatiotemporal variations underlying a facial video into a single RGB image, enabling the use of less complex 2D deep models for video representation. Experiments on two publicly available pain datasets and a cross-dataset evaluation demonstrate promising results, showing the good generalization ability of the learned representations.
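    The core idea of collapsing a video into a single multi-channel image via temporal statistics can be illustrated with a minimal numpy sketch. This is a hypothetical example, not the paper's actual SSD formulation: the choice of statistics (mean, standard deviation, range) and the per-channel normalisation are assumptions made here for illustration.

```python
import numpy as np

def statistical_distillation(frames: np.ndarray) -> np.ndarray:
    """Collapse a grayscale video of shape (T, H, W) into one 3-channel image.

    Illustrative sketch only: each output channel holds a different
    temporal statistic, so a cheaper 2D model can see a summary of the
    spatiotemporal variation instead of the full frame sequence.
    """
    mean = frames.mean(axis=0)                  # average appearance
    std = frames.std(axis=0)                    # magnitude of variation
    rng = frames.max(axis=0) - frames.min(axis=0)  # extremes of variation
    img = np.stack([mean, std, rng], axis=-1)   # (H, W, 3)
    # Normalise each channel independently to [0, 255] for an RGB image.
    mins = img.min(axis=(0, 1), keepdims=True)
    maxs = img.max(axis=(0, 1), keepdims=True)
    img = (img - mins) / np.maximum(maxs - mins, 1e-8) * 255.0
    return img.astype(np.uint8)

video = np.random.rand(16, 64, 64)  # 16 synthetic grayscale frames
image = statistical_distillation(video)
```

The resulting single image can then be fed to any off-the-shelf 2D CNN, which is what makes the approach computationally cheaper than 3D video models.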

    MDN: a deep maximization-differentiation network for spatio-temporal depression detection

    No full text
    Abstract Deep learning (DL) models have been successfully applied in video-based affective computing, making it possible to recognize emotions and mood, or to estimate the intensity of pain or stress, from facial expressions. Despite the advances with state-of-the-art DL models for spatio-temporal recognition of facial expressions associated with depression, some challenges remain in the cost-effective application of 3D-CNNs: (1) 3D convolutions employ structures with a fixed temporal depth, which limits their ability to extract discriminative representations because the spatio-temporal variations across different depression levels are usually small; and (2) the computational complexity of these models makes them susceptible to overfitting. To address these challenges, we propose a novel DL architecture called the Maximization and Differentiation Network (MDN), designed to effectively represent facial expression variations that are relevant for depression assessment. The MDN operates without 3D convolutions, exploring multiscale temporal information using a maximization block that captures smooth facial variations and a difference block that encodes sudden facial variations. Extensive experiments with the proposed MDN show improved performance while reducing the number of parameters by a factor of more than 3 compared with 3D-ResNet models. Our model also outperforms other 3D models and achieves state-of-the-art results for depression detection. Code is available at: https://github.com/wheidima/MDN
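    The two temporal operations the abstract describes can be sketched in a few lines of numpy. This is a simplified illustration of the general idea, not the MDN implementation from the linked repository: the window size, stride, and feature shapes here are invented for the example.

```python
import numpy as np

def maximization_block(feats: np.ndarray, window: int = 4) -> np.ndarray:
    """Elementwise max over non-overlapping temporal windows.

    Sketch of the maximization idea: keeping the strongest response in
    each window summarises smooth, sustained facial variations.
    """
    t = feats.shape[0] - feats.shape[0] % window   # drop incomplete tail
    chunks = feats[:t].reshape(-1, window, *feats.shape[1:])
    return chunks.max(axis=1)

def difference_block(feats: np.ndarray, step: int = 1) -> np.ndarray:
    """Absolute frame-to-frame differences, highlighting sudden variations."""
    return np.abs(feats[step:] - feats[:-step])

feats = np.random.rand(16, 8)        # 16 time steps of 8-dim features
smooth = maximization_block(feats)   # (4, 8): one summary per window
sudden = difference_block(feats)     # (15, 8): change between frames
```

Because both blocks reduce or difference along the time axis with plain tensor operations, no 3D convolution kernels (and their parameters) are needed, which matches the parameter-reduction argument in the abstract.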

    Kinship verification from facial images and videos: human versus machine

    No full text
    Abstract Automatic kinship verification from facial images is a relatively new and challenging research problem in computer vision. It consists of automatically determining whether two persons have a biological kin relation by examining their facial attributes. In this work, we compare the performance of humans and machines in kinship verification tasks. We investigate the state-of-the-art methods in automatic kinship verification from facial images, comparing their performance with that obtained by asking humans to complete an equivalent task through a crowdsourcing system. Our results show that machines can consistently beat humans in kinship classification tasks on both images and videos. In addition, we study the limitations of currently available kinship databases and analyze their possible impact on kinship verification experiments and on this type of comparison.

    Learning human-blockage direction prediction from indoor mmWave radio measurements

    No full text
    Abstract Millimeter wave (mmWave) beamforming is a vital component of fifth generation (5G) new radio (NR) and beyond wireless communication systems. The use of narrow mmWave beams encounters frequent signal attenuation due to random human blockages in indoor environments. Human blockage predictions can jointly improve the signal quality and passively sense human activities during mmWave communication. Human sensing with wireless fidelity (WiFi) systems has previously been studied using received signal strength indicator (RSSI) fluctuations based on distance measurements. Other conventional approaches, using cameras, lidars, radars, etc., require additional hardware deployments. Current device-free WiFi sensing approaches use vendor-specific channel state information to obtain fine-grained human blockage predictions. The novelty of this work is to obtain fine-grained human-blockage direction predictions in the mmWave spectrum by using a time series of RSSI measurements to build fingerprints. We perform experiments to construct a Human Millimetre-wave Radio Blockage Detection (HuMRaBD) dataset and observe human influence on different radio beam directions during each radio initial-access procedure. We design a multilayer perceptron (MLP) framework to analyze the HuMRaBD dataset over coarse-grained and fine-grained mmWave blockage directions from static and dynamic human movements. The results show that our trained MLP models can simultaneously sense multiple indoor human radio-blockage directions with an average F1 score of 0.84 and an area under the curve (AUC) score of 0.95 during mmWave communication.
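    The fingerprint-plus-MLP pipeline the abstract outlines can be sketched with a small numpy example. Everything below is illustrative and assumed, not taken from the HuMRaBD dataset or the paper's framework: the number of beams, time steps, classes, samples, and all hyperparameters are invented, and synthetic random data stands in for real RSSI measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each sample is a fingerprint built by flattening a
# short time series of RSSI values per beam direction; the label is the
# beam direction blocked by a human.
n_beams, n_steps, n_classes = 8, 10, 8
X = rng.normal(size=(400, n_beams * n_steps))   # synthetic RSSI fingerprints
y = rng.integers(0, n_classes, size=400)        # blocked-direction labels

# Minimal one-hidden-layer MLP with softmax output.
W1 = rng.normal(scale=0.1, size=(X.shape[1], 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, n_classes)); b2 = np.zeros(n_classes)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)            # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)  # softmax probabilities

# Full-batch gradient descent on the cross-entropy loss.
for _ in range(300):
    h, p = forward(X)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0           # dL/dlogits for cross-entropy
    grad /= len(y)
    gh = (grad @ W2.T) * (h > 0)                # backprop through ReLU first
    W2 -= 0.1 * h.T @ grad; b2 -= 0.1 * grad.sum(axis=0)
    W1 -= 0.1 * X.T @ gh;  b1 -= 0.1 * gh.sum(axis=0)

_, probs = forward(X)
accuracy = (probs.argmax(axis=1) == y).mean()
```

On real fingerprints the labels correlate with systematic RSSI drops in the blocked beam's direction, which is what lets a simple MLP reach the reported F1 and AUC scores; the random data here only demonstrates the mechanics of the pipeline.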