
    Automated Artemia length measurement using U-shaped fully convolutional networks and second-order anisotropic Gaussian kernels

    The brine shrimp Artemia, a small crustacean zooplankton organism, is universally used as live prey for larval fish and shrimps in aquaculture. In Artemia studies, it would be highly desirable to have automated techniques for obtaining length information from Artemia images. However, this problem has so far not been addressed in the literature. Moreover, conventional image-based length measurement approaches cannot be readily transferred to measuring Artemia length, due to the distortion of their non-rigid bodies, the variation across growth stages and the interference from the antennae and other appendages. To address this problem, we compile a dataset containing 250 images together with the corresponding label maps of length measuring lines. We propose an automated Artemia length measurement method using U-shaped fully convolutional networks (UNet) and second-order anisotropic Gaussian kernels. For a given Artemia image, the designed UNet model is used to extract a length measuring line structure, and, subsequently, the second-order anisotropic Gaussian kernels are employed to transform the length measuring line structure into a thin measuring line. For comparison, we also follow conventional fish length measurement approaches and develop a non-learning-based method using mathematical morphology and polynomial curve fitting. We evaluate the proposed method and the competing methods on 100 test images taken from the compiled dataset. Experimental results show that the proposed method can accurately measure the length of Artemia objects in images, obtaining a mean absolute percentage error of 1.16%.
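    To make the reported metric concrete, the following is a minimal sketch (not the authors' code) of the final measurement step: a binary mask of the predicted measuring-line structure is thinned to a one-pixel-wide curve, its length is approximated by summing the distances between consecutive pixels, and accuracy is scored with the mean absolute percentage error quoted above. The function names, the use of scikit-image's skeletonize, and the crude left-to-right pixel ordering are all illustrative assumptions.

        import numpy as np
        from skimage.morphology import skeletonize

        def measuring_line_length(line_mask: np.ndarray) -> float:
            """Approximate the length (in pixels) of a predicted measuring line."""
            skeleton = skeletonize(line_mask > 0)      # thin to a 1-px-wide curve
            ys, xs = np.nonzero(skeleton)
            order = np.argsort(xs)                     # crude ordering along x (sketch only)
            pts = np.stack([xs[order], ys[order]], axis=1).astype(float)
            steps = np.diff(pts, axis=0)
            return float(np.sqrt((steps ** 2).sum(axis=1)).sum())

        def mape(pred_lengths, true_lengths) -> float:
            """Mean absolute percentage error, the metric reported in the abstract."""
            pred = np.asarray(pred_lengths, dtype=float)
            true = np.asarray(true_lengths, dtype=float)
            return float(np.mean(np.abs(pred - true) / true) * 100.0)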

    Universal Dependencies Parsing for Colloquial Singaporean English

    Singlish can be interesting to the ACL community both linguistically, as a major creole based on English, and computationally, for information extraction and sentiment analysis of regional social media. We investigate dependency parsing of Singlish by constructing a dependency treebank under the Universal Dependencies scheme, and then training a neural network model that integrates English syntactic knowledge into a state-of-the-art parser trained on the Singlish treebank. Results show that English knowledge leads to a 25% relative error reduction, yielding a parser with 84.47% accuracy. To the best of our knowledge, we are the first to use neural stacking to improve cross-lingual dependency parsing for low-resource languages. We make both our annotation and our parser available for further research. Comment: Accepted by ACL 2017.
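    As a rough illustration of the neural-stacking idea mentioned above (an assumption-laden sketch, not the released parser), hidden representations from an encoder standing in for the English-trained parser are frozen and concatenated with the Singlish-side word embeddings before the target encoder; module names and dimensions are made up for the example.

        import torch
        import torch.nn as nn

        class StackedEncoder(nn.Module):
            def __init__(self, vocab_size, emb_dim=100, src_hidden=200, tgt_hidden=200):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb_dim)
                # Source encoder: stands in for the parser pre-trained on English; kept frozen.
                self.src_encoder = nn.LSTM(emb_dim, src_hidden, batch_first=True,
                                           bidirectional=True)
                for p in self.src_encoder.parameters():
                    p.requires_grad = False
                # Target encoder sees the word embeddings plus the stacked source features.
                self.tgt_encoder = nn.LSTM(emb_dim + 2 * src_hidden, tgt_hidden,
                                           batch_first=True, bidirectional=True)

            def forward(self, token_ids):
                emb = self.embed(token_ids)                    # (batch, seq, emb_dim)
                src_feats, _ = self.src_encoder(emb)           # English-side representations
                stacked = torch.cat([emb, src_feats], dim=-1)  # neural stacking: concatenate
                out, _ = self.tgt_encoder(stacked)
                return out                                     # passed on to an arc/label scorer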

    Incremental multi-domain learning with network latent tensor factorization

    The prominence of deep learning, large amounts of annotated data and increasingly powerful hardware have made it possible to reach remarkable performance on supervised classification tasks, in many cases saturating the training sets. However, the resulting models are specialized to a single, very specific task and domain. Adapting the learned classifier to new domains is a hard problem for at least three reasons: (1) the new domains and tasks might be drastically different; (2) there might be a very limited amount of annotated data for the new domain; and (3) fully training a new model for each new task is prohibitive in terms of computation and memory, due to the sheer number of parameters of deep CNNs. In this paper, we present a method to learn new domains and tasks incrementally, building on prior knowledge from already learned tasks and without catastrophic forgetting. We do so by jointly parametrizing weights across layers using a low-rank Tucker structure. The core tensor is task-agnostic, while a set of task-specific factors is learned for each new domain. We show that leveraging this tensor structure enables better performance than simply using matrix operations, and joint tensor modelling also naturally exploits correlations across different layers. Compared with previous methods, which have focused on adapting each layer separately, our approach results in more compact representations for each new task/domain. We apply the proposed method to the 10 datasets of the Visual Decathlon Challenge and show that it offers, on average, about a 7.5x reduction in the number of parameters and competitive performance in terms of both classification accuracy and Decathlon score. Comment: Accepted at AAAI 2020.
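    The central mechanism, a shared task-agnostic Tucker core combined with small task-specific factors, can be sketched as follows. This is a hedged, simplified illustration for a single convolution kernel, not the authors' implementation; shapes, ranks and class names are assumptions. For a new domain, only the corresponding factor matrices (plus a classifier head) would be trained while the core stays fixed.

        import torch
        import torch.nn as nn

        class TuckerConvKernel(nn.Module):
            def __init__(self, out_ch=64, in_ch=64, k=3, ranks=(16, 16, 3, 3), n_tasks=2):
                super().__init__()
                # Shared core tensor: task-agnostic, reused by every new domain.
                self.core = nn.Parameter(torch.randn(*ranks) * 0.01)
                # One small set of factor matrices per task/domain (learned incrementally).
                self.factors = nn.ModuleList(
                    nn.ParameterList(nn.Parameter(torch.randn(dim, r) * 0.01)
                                     for dim, r in zip((out_ch, in_ch, k, k), ranks))
                    for _ in range(n_tasks))

            def weight(self, task: int) -> torch.Tensor:
                f = self.factors[task]
                # Tucker reconstruction: multiply the core by one factor matrix along each mode.
                return torch.einsum('abcd,ia,jb,kc,ld->ijkl',
                                    self.core, f[0], f[1], f[2], f[3])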

    Corrective Focus Detection in Italian Speech Using Neural Networks

    Corrective focus is a particular kind of prosodic prominence by which the speaker intends to correct or emphasize a concept. This work develops an Artificial Cognitive System (ACS) based on Recurrent Neural Networks that analyzes suitable features of the audio channel in order to automatically identify corrective focus in speech signals. Two different approaches to building the ACS have been developed: the first addresses the detection of focused syllables within a given Intonational Unit (IU), whereas the second identifies a whole IU as focused or not. Experimental evaluation on an Italian corpus has shown the ability of the Artificial Cognitive System to identify the focus in the speaker's IUs, an ability that can lead to further important improvements in human-machine communication. The addressed problem is a good example of the synergies between humans and Artificial Cognitive Systems. The research leading to the results in this paper was conducted in the project EMPATHIC (Grant No. 769872), which received funding from the European Union's Horizon 2020 research and innovation programme. Additionally, this work has been partially funded by the Spanish Ministry of Science under grants TIN2014-54288-C4-4-R and TIN2017-85854-C4-3-R, by the Basque Government under grant PRE_2017_1_0357, and by the University of the Basque Country UPV/EHU under grant PIF17/310.
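    A minimal sketch of the two setups described above (an illustrative assumption, not the paper's exact system): a bidirectional recurrent network over per-syllable prosodic features, such as f0, energy and duration, that either emits one focus decision per syllable or a single decision for the whole Intonational Unit. Feature choice, sizes and names are placeholders.

        import torch
        import torch.nn as nn

        class FocusRNN(nn.Module):
            def __init__(self, n_feats=6, hidden=64, per_syllable=True):
                super().__init__()
                self.per_syllable = per_syllable
                self.rnn = nn.GRU(n_feats, hidden, batch_first=True, bidirectional=True)
                self.clf = nn.Linear(2 * hidden, 1)

            def forward(self, feats):                         # feats: (batch, n_syllables, n_feats)
                out, _ = self.rnn(feats)
                if self.per_syllable:                         # approach 1: focused-syllable detection
                    return self.clf(out).squeeze(-1)          # one logit per syllable
                return self.clf(out.mean(dim=1)).squeeze(-1)  # approach 2: whole-IU classification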

    Statistical parsing of morphologically rich languages (SPMRL): what, how and whither

    The term Morphologically Rich Languages (MRLs) refers to languages in which significant information concerning syntactic units and relations is expressed at the word level. There is ample evidence that the application of readily available statistical parsing models to such languages is susceptible to serious performance degradation. The first workshop on statistical parsing of MRLs hosts a variety of contributions which show that, despite language-specific idiosyncrasies, the problems associated with parsing MRLs cut across languages and parsing frameworks. In this paper we review the current state of affairs in parsing MRLs and point out central challenges. We synthesize the contributions of researchers working on parsing Arabic, Basque, French, German, Hebrew, Hindi and Korean to point out shared solutions across languages. The overarching analysis suggests itself as a source of directions for future investigations.

    Computational Sociolinguistics: A Survey

    Language is a social phenomenon and variation is inherent to its social nature. Recently, there has been a surge of interest within the computational linguistics (CL) community in the social dimension of language. In this article we present a survey of the emerging field of "Computational Sociolinguistics" that reflects this increased interest. We aim to provide a comprehensive overview of CL research on sociolinguistic themes, featuring topics such as the relation between language and social identity, language use in social interaction and multilingual communication. Moreover, we demonstrate the potential for synergy between the research communities involved, by showing how the large-scale data-driven methods that are widely used in CL can complement existing sociolinguistic studies, and how sociolinguistics can inform and challenge the methods and assumptions employed in CL studies. We hope to convey the possible benefits of a closer collaboration between the two communities and conclude with a discussion of open challenges. Comment: To appear in Computational Linguistics. Accepted for publication: 18th February, 2016.

    Personalized modeling for real-time pressure ulcer prevention in sitting posture

    Ischial pressure ulcers are an important risk for every paraplegic person and a major public health issue. Pressure ulcers appear following excessive compression of the buttock's soft tissues by bony structures, particularly the ischial and sacral bones. Current prevention techniques are mainly based on daily skin inspection to spot red patches or injuries. Nevertheless, most pressure ulcers occur internally and are difficult to detect early. Estimating internal strains within soft tissues could help to evaluate the risk of pressure ulcer. A subject-specific biomechanical model could be used to assess internal strains from measured skin surface pressures. However, a realistic 3D non-linear finite element buttock model, with different layers of tissue materials for skin, fat and muscles, requires somewhere between minutes and hours to compute, thus precluding its use in a real-time daily prevention context. In this article, we propose to speed up these computations by using a reduced-order modelling (ROM) technique based on proper orthogonal decompositions of the pressure and strain fields, coupled with a machine learning method. The ROM allows strains to be evaluated inside the model interactively (i.e. in less than a second) for any pressure field measured below the buttocks. In our case, with only 19 modes of variation of the pressure patterns, an error of one percent is observed compared to the full-scale simulation when evaluating the strain field. This reduced model could therefore be a first step towards interactive pressure ulcer prevention in a daily setup. Highlights: buttock biomechanical modelling; reduced-order model; daily pressure ulcer prevention.
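    The offline/online split behind such a reduced-order model can be sketched in a few lines. This is a hedged illustration of the general POD-plus-regression recipe described in the abstract, not the authors' pipeline: snapshot pressure and strain fields from finite element runs are compressed with an SVD basis (19 pressure modes, as mentioned above), and a regressor maps pressure coefficients to strain coefficients so that a full strain field can be reconstructed interactively. Array sizes, the ridge regressor and the random stand-in data are assumptions.

        import numpy as np
        from sklearn.linear_model import Ridge

        def pod_basis(snapshots: np.ndarray, n_modes: int) -> np.ndarray:
            """snapshots: (n_dofs, n_samples); returns the first n_modes POD modes."""
            U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
            return U[:, :n_modes]

        # Offline stage: paired pressure/strain snapshots from finite element simulations.
        P = np.random.rand(5000, 200)          # stand-in pressure snapshots (columns = samples)
        S = np.random.rand(20000, 200)         # stand-in strain snapshots
        Up, Us = pod_basis(P, 19), pod_basis(S, 19)
        reg = Ridge().fit(P.T @ Up, S.T @ Us)  # map pressure coefficients to strain coefficients

        # Online stage: project a measured pressure map, predict strain coefficients,
        # and reconstruct the full strain field in well under a second.
        p_measured = np.random.rand(5000)
        strain_field = Us @ reg.predict((p_measured @ Up)[None, :])[0]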