
    Learning morphological phenomena of Modern Greek: an exploratory approach

    This paper presents a computational model for the description of concatenative morphological phenomena of Modern Greek (such as inflection, derivation and compounding) that allows learners, trainers and developers to explore linguistic processes through their own constructions in an interactive, open-ended multimedia environment. The proposed model introduces a new language metaphor, the ‘puzzle-metaphor’ (analogous to the existing ‘turtle-metaphor’ for concepts from mathematics and physics), based on a visualized unification-like mechanism for pattern matching. The computational implementation of the model can be used for creating environments for learning through design and learning by teaching.
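    A minimal sketch of the kind of unification-like pattern matching the model visualizes, assuming toy feature structures; the feature names and the Modern Greek stem/suffix below are illustrative assumptions, not taken from the paper:

```python
# Sketch of unification-like matching of morpheme "puzzle pieces" (not the
# paper's code). Each piece carries a feature dict; two pieces snap together
# only if their shared features agree.

def unify(a, b):
    """Return the merged feature set, or None if any shared key conflicts."""
    merged = dict(a)
    for key, value in b.items():
        if key in merged and merged[key] != value:
            return None  # conflicting features: the pieces do not fit
        merged[key] = value
    return merged

def combine(stem, suffix):
    """Concatenate a stem and a suffix if their features unify."""
    features = unify(stem["features"], suffix["features"])
    if features is None:
        return None
    return {"form": stem["form"] + suffix["form"], "features": features}

# Hypothetical example: verb stem 'graf-' ("write") + ending '-o'
# (1st person singular) -> 'grafo' ("I write").
stem = {"form": "graf", "features": {"cat": "verb"}}
suffix = {"form": "o", "features": {"cat": "verb", "person": 1, "number": "sg"}}
print(combine(stem, suffix))
```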

    Replay: multi-modal multi-view acted videos for casual holography

    We introduce Replay, a collection of multi-view, multi-modal videos of humans interacting socially. Each scene is filmed in high production quality, from different viewpoints with several static cameras, as well as wearable action cameras, and recorded with a large array of microphones at different positions in the room. Overall, the dataset contains over 4,000 minutes of footage and over 7 million timestamped high-resolution frames annotated with camera poses and partially with foreground masks. The Replay dataset has many potential applications, such as novel-view synthesis, 3D reconstruction, novel-view acoustic synthesis, human body and face analysis, and training generative models. We provide a benchmark for training and evaluating novel-view synthesis, with two scenarios of different difficulty. Finally, we evaluate several baseline state-of-the-art methods on the new benchmark.

    Text-to-4D dynamic scene generation

    We present MAV3D (Make-A-Video3D), a method for generating three-dimensional dynamic scenes from text descriptions. Our approach uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency by querying a Text-to-Video (T2V) diffusion-based model. The dynamic video output generated from the provided text can be viewed from any camera location and angle, and can be composited into any 3D environment. MAV3D does not require any 3D or 4D data, and the T2V model is trained only on text-image pairs and unlabeled videos. We demonstrate the effectiveness of our approach using comprehensive quantitative and qualitative experiments and show an improvement over previously established internal baselines. To the best of our knowledge, our method is the first to generate 3D dynamic scenes from a text description. Generated samples can be viewed at make-a-video3d.github.io.
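    As a rough illustration of the optimization scheme the abstract outlines, the toy loop below renders a candidate video from a learnable dynamic scene, scores it with a stand-in for the frozen T2V model, and backpropagates; every component is a simplified assumption, not MAV3D's actual architecture or loss:

```python
# Toy sketch of "optimize a dynamic scene by querying a frozen T2V model".
# The real method distills guidance from a text-to-video diffusion model
# into a 4D NeRF; everything here is a deliberately simplified stand-in.

import torch

class ToyDynamicScene(torch.nn.Module):
    """Stand-in for a 4D NeRF: produces video frames given a camera."""
    def __init__(self, frames=8, size=16):
        super().__init__()
        self.video = torch.nn.Parameter(torch.randn(frames, 3, size, size))

    def render(self, camera):
        # A real model would volume-render rays for this camera pose and
        # time; here the "video" is just a learnable tensor.
        return torch.sigmoid(self.video + 0.01 * camera)

def t2v_critic(video, prompt):
    """Stand-in for the frozen T2V guidance: lower = better prompt match."""
    return ((video - 0.5) ** 2).mean()  # toy objective, not a diffusion loss

scene = ToyDynamicScene()
optimizer = torch.optim.Adam(scene.parameters(), lr=1e-2)
for step in range(100):
    camera = torch.randn(())                 # sample a random viewpoint
    video = scene.render(camera)             # render the candidate video
    loss = t2v_critic(video, "a dog runs")   # query the (frozen) T2V model
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```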

    Can disordered mobile phone use be considered a behavioral addiction? An update on current evidence and a comprehensive model for future research

    Despite its many positive outcomes, excessive mobile phone use is now often associated with potentially harmful and/or disturbing behaviors (e.g., symptoms of deregulated use, and negative impact on various aspects of daily life such as relationship problems and work intrusion). Problematic mobile phone use (PMPU) has generally been considered a behavioral addiction that shares many features with more established drug addictions. In light of the most recent data, the current paper reviews the validity of the behavioral addiction model when applied to PMPU. On the whole, it is argued that the evidence supporting PMPU as an addictive behavior is scarce. In particular, the field lacks studies that definitively show behavioral and neurobiological similarities between mobile phone addiction and other types of legitimate addictive behaviors. Given this context, an integrative pathway model is proposed that aims to provide a theoretical framework to guide future research on PMPU. This model highlights that PMPU is a heterogeneous and multi-faceted condition.

    Negotiated economic grid brokering for quality of service

    We demonstrate a Grid broker's job submission system and its selection process for finding the provider that is most likely to complete work on time and on budget. We compare several traditional site selection mechanisms with an economic and Quality of Service (QoS) oriented approach. We show how greater profit and QoS can be achieved when jobs are accepted by the most appropriate provider. We particularly focus on the benefits of a negotiation process for QoS that enables our selection process to take place.
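    A small sketch of what such an economically oriented selection step might look like; the offer fields, the QoS threshold, and the tie-breaking rule are assumptions for illustration, not the demonstrated broker's logic:

```python
# Sketch: choose the provider most likely to finish on time and on budget.
# Fields and thresholds are hypothetical, not the paper's broker.

from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    price: float        # negotiated cost of running the job
    p_on_time: float    # estimated probability of meeting the deadline

def select_provider(offers, budget, min_p_on_time=0.9):
    """Return the best offer that fits the budget and the QoS target."""
    feasible = [o for o in offers
                if o.price <= budget and o.p_on_time >= min_p_on_time]
    if not feasible:
        return None     # reject the job rather than risk missing the QoS
    # Among feasible offers, prefer reliability first, then lower price.
    return max(feasible, key=lambda o: (o.p_on_time, -o.price))

offers = [Offer("site-A", 80.0, 0.95),
          Offer("site-B", 60.0, 0.85),
          Offer("site-C", 70.0, 0.97)]
print(select_provider(offers, budget=75.0))  # -> site-C
```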

    Fully Trainable and Interpretable Non-Local Sparse Models for Image Restoration

    Non-local self-similarity and sparsity principles have proven to be powerful priors for natural image modeling. We propose a novel differentiable relaxation of joint sparsity that exploits both principles and leads to a general framework for image restoration which is (1) trainable end to end, (2) fully interpretable, and (3) much more compact than competing deep learning architectures. We apply this approach to denoising, JPEG deblocking, and demosaicking, and show that, with as few as 100K parameters, its performance on several standard benchmarks is on par with, or better than, state-of-the-art methods that may have an order of magnitude more parameters.
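    One classical instance of the joint-sparsity ingredient named here is group soft-thresholding across a stack of similar patches, with a threshold that can be learned end to end; the sketch below shows that operator only and is not the paper's exact relaxation:

```python
# Group soft-thresholding: the proximal operator of an l1/l2 ("joint
# sparsity") penalty, applied across patches grouped by non-local
# self-similarity. Illustrative only; not the paper's exact operator.

import torch

def group_soft_threshold(codes, lam):
    """codes: (group, dim) sparse codes of similar patches; each code
    dimension is kept or zeroed jointly across the whole group.
    lam: threshold; as a learnable parameter it stays differentiable."""
    norms = codes.norm(dim=0, keepdim=True)              # per-dimension group norm
    scale = torch.clamp(1.0 - lam / (norms + 1e-12), min=0.0)
    return codes * scale                                 # shrink each column jointly

codes = torch.randn(8, 64)                 # 8 similar patches, 64-atom dictionary
lam = torch.nn.Parameter(torch.tensor(2.5))
shrunk = group_soft_threshold(codes, lam)
print((shrunk.norm(dim=0) == 0).sum().item(), "atoms zeroed across the group")
```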

    Learning to Recognize 3D Human Action from a New Skeleton-based Representation Using Deep Convolutional Neural Networks

    Recognizing human actions in untrimmed videos is an important and challenging task. An effective 3D motion representation and a powerful learning model are two key factors influencing recognition performance. In this paper, we introduce a new skeleton-based representation for 3D action recognition in videos. The key idea of the proposed representation is to transform the 3D joint coordinates of the human body carried in skeleton sequences into RGB images via a color encoding process. By normalizing the 3D joint coordinates and dividing each skeleton frame into five parts, where the joints are concatenated according to the order of their physical connections, the color-coded representation is able to represent spatio-temporal evolutions of complex 3D motions, independently of the length of each sequence. We then design and train different Deep Convolutional Neural Networks (D-CNNs) based on the Residual Network architecture (ResNet) on the obtained image-based representations to learn 3D motion features and classify them into classes. Our method is evaluated on two widely used action recognition benchmarks: MSR Action3D and NTU-RGB+D, a very large-scale dataset for 3D human action recognition. The experimental results demonstrate that the proposed method outperforms previous state-of-the-art approaches whilst requiring less computation for training and prediction.

    This research was carried out at the Cerema Research Center (CEREMA) and the Toulouse Institute of Computer Science Research (IRIT), Toulouse, France. Sergio A. Velastin is grateful for funding received from the Universidad Carlos III de Madrid, the European Union’s Seventh Framework Programme for Research, Technological Development and Demonstration under grant agreement No. 600371, el Ministerio de Economía, Industria y Competitividad (COFUND2013-51509), el Ministerio de Educación, Cultura y Deporte (CEI-15-17), and Banco Santander.
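    The color-encoding step can be sketched along these lines; the joint grouping into five parts, the per-sequence normalization, and the array sizes below are assumptions for illustration, not the authors' implementation:

```python
# Sketch of the abstract's encoding idea: 3D joint coordinates become RGB
# pixels, with joints (grouped into five body parts) as rows and frames as
# columns. The part layout below is hypothetical.

import numpy as np

# Hypothetical five-part grouping of a 15-joint skeleton, each part listing
# its joints in the order of their physical connections.
PARTS = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11], [12, 13, 14]]

def skeleton_to_image(seq):
    """seq: (frames, joints, 3) array of 3D joint coordinates.
    Returns a uint8 image of shape (joints, frames, 3)."""
    lo, hi = seq.min(axis=(0, 1)), seq.max(axis=(0, 1))
    norm = (seq - lo) / (hi - lo + 1e-12)         # normalize each coordinate to [0, 1]
    order = [j for part in PARTS for j in part]   # concatenate joints part by part
    image = norm[:, order, :].transpose(1, 0, 2)  # rows = joints, cols = frames
    return (image * 255).astype(np.uint8)         # map (x, y, z) to (R, G, B)

seq = np.random.rand(40, 15, 3)                   # a 40-frame, 15-joint sequence
print(skeleton_to_image(seq).shape)               # -> (15, 40, 3)
```

    A real pipeline would additionally resize the time axis to a fixed width so the image size is independent of sequence length, matching the length-independence the abstract claims.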

    Associations between clinical canine leishmaniosis and multiple vector-borne co-infections: a case-control serological study

    Dogs that have clinical leishmaniosis (ClinL), caused by the parasite Leishmania infantum, are commonly co-infected with other pathogens, especially vector-borne pathogens (VBP). A recent PCR-based study found that ClinL dogs are more likely to be additionally infected with the rickettsial bacterium Ehrlichia canis. Further information on co-infection of ClinL cases with VBP, as assessed by serology, is required. The research described in this report determined whether dogs with ClinL are at higher risk of exposure to VBP than healthy control dogs, using a case-control serology study.
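    The case-control comparison such a study performs typically reduces to an odds ratio over a 2x2 exposure table; the counts below are invented purely to illustrate the arithmetic and are not the study's data:

```python
# Odds ratio of VBP seropositivity in ClinL cases vs. healthy controls,
# with a 95% confidence interval (Woolf's method). Counts are invented.

import math

exposed_cases, unexposed_cases = 30, 20        # ClinL dogs: seropositive / seronegative
exposed_controls, unexposed_controls = 15, 35  # controls:  seropositive / seronegative

odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Standard error of the log odds ratio: sqrt of the summed reciprocal counts.
se = math.sqrt(sum(1.0 / n for n in (exposed_cases, unexposed_cases,
                                     exposed_controls, unexposed_controls)))
low = math.exp(math.log(odds_ratio) - 1.96 * se)
high = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI ({low:.2f}, {high:.2f})")  # OR = 3.50
```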