Cellular tracking in time-lapse phase contrast images
The quantitative analysis of live cells is a key issue in evaluating biological processes. Current clinical practice involves a tedious and time-consuming manual tracking procedure applied to large amounts of data. As a result, automatic tracking systems are being developed and evaluated. However, cellular division, agglomeration, Brownian motion and topology changes are difficult issues that automatic tracking techniques have to accommodate. In this paper, we detail the development of a fully automated multi-target tracking system that is able to deal with Brownian motion and cellular division. During the tracking process, our approach includes neighbourhood relationships and motion history to enforce cellular tracking continuity in the spatial and temporal domains. The experimental results reported in this paper indicate that our method is able to accurately track cellular structures in time-lapse data.
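The frame-to-frame linking step of such a tracker can be sketched as a greedy nearest-neighbour matcher with a constant-velocity motion prediction. This is an illustrative simplification, not the paper's full method (it omits the neighbourhood-relationship term and division handling), and all names are hypothetical:

```python
import math

def link_cells(prev, curr, velocities, max_dist=15.0):
    """Greedy frame-to-frame linking: each tracked cell's next position is
    predicted from its last known velocity (motion history), then matched
    to the nearest unused detection within max_dist.

    prev: {track_id: (x, y)} positions in the previous frame
    curr: list of (x, y) detections in the current frame
    velocities: {track_id: (vx, vy)}, updated in place
    """
    assignments = {}
    used = set()
    for tid, (x, y) in prev.items():
        vx, vy = velocities.get(tid, (0.0, 0.0))
        px, py = x + vx, y + vy          # constant-velocity prediction
        best, best_d = None, max_dist
        for i, (cx, cy) in enumerate(curr):
            if i in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            assignments[tid] = curr[best]
            # refresh the motion history for the next frame
            velocities[tid] = (curr[best][0] - x, curr[best][1] - y)
    return assignments
```

A real system would add division handling (one track splitting into two) and a spatial-neighbourhood consistency term on top of this matching core.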
Bimodal network architectures for automatic generation of image annotation from text
Medical image analysis practitioners have embraced big data methodologies. This has created a need for large annotated datasets. The source of big data is typically large image collections and clinical reports recorded for these images. In many cases, however, building algorithms aimed at segmentation and detection of disease requires a training dataset with markings of the areas of interest on the image that match the described anomalies. This process of annotation is expensive and requires the involvement of clinicians. In this work we propose two separate deep neural network architectures for automatically marking a region of interest (ROI) on the image that best represents a finding location, given a textual report or a set of keywords. One architecture consists of LSTM and CNN components and is trained end to end with images, matching text, and markings of ROIs for those images. The output layer estimates the coordinates of the vertices of a polygonal region. The second architecture uses a network pre-trained on a large dataset of the same image types for learning feature representations of the findings of interest. We show that for a variety of findings from chest X-ray images, both proposed architectures learn to estimate the ROI, as validated by clinical annotations. There is a clear advantage obtained from the architecture with the pre-trained imaging network: the centroids of the ROIs marked by this network were on average at a distance equivalent to 5.1% of the image width from the centroids of the ground-truth ROIs.

Comment: Accepted to MICCAI 2018, LNCS 1107
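The evaluation metric quoted above (centroid distance as a fraction of image width) can be computed straightforwardly. A minimal sketch, assuming polygonal ROIs given as vertex lists and using the vertex mean as a simple centroid proxy (function names are hypothetical):

```python
import math

def centroid(poly):
    """Mean of the polygon's vertex coordinates (a simple centroid proxy)."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def centroid_error_frac(pred_poly, gt_poly, image_width):
    """Euclidean distance between predicted and ground-truth ROI centroids,
    normalised by image width (the paper reports 5.1% on average)."""
    (px, py), (gx, gy) = centroid(pred_poly), centroid(gt_poly)
    return math.hypot(px - gx, py - gy) / image_width
```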
Using the beat histogram for speech rhythm description and language identification
In this paper we present a novel approach to describing speech rhythm and extracting rhythm-related features for automatic language identification (LID). Previous methods have extracted speech rhythm by calculating features based on salient elements of speech such as consonants, vowels and syllables. We show how an automatic rhythm extraction method borrowed from music information retrieval, the beat histogram, can be adapted for the analysis of speech rhythm by defining the most relevant novelty functions in the speech signal and extracting features describing their periodicities. We evaluated those features in a rhythm-based LID task on two multilingual speech corpora using support vector machines, including feature selection methods to identify the most informative descriptors. The results suggest that the method is successful in describing speech rhythm and provides LID classification accuracy comparable to or better than that of other approaches, without the need for a preceding segmentation or annotation of the speech signal. Concerning rhythm typology, the rhythm class hypothesis in its original form seems to be only partly confirmed by our results.
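The core idea of a beat histogram — collecting the periodicity strengths of a novelty function — can be sketched via autocorrelation. This is an illustrative simplification of the technique, not the paper's exact implementation, and the function names are hypothetical:

```python
def autocorrelation(x):
    """Normalised autocorrelation of a sequence (lag 0 maps to 1.0)."""
    n = len(x)
    mean = sum(x) / n
    c = [sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
         for lag in range(n)]
    return [v / c[0] if c[0] else 0.0 for v in c]

def beat_histogram(novelty, min_lag=2):
    """Histogram of periodicity strengths: local maxima of the
    autocorrelation of a novelty function, keyed by lag (period).

    novelty: a novelty function sampled over time, e.g. an onset-
    strength curve; for speech, the paper derives analogous novelty
    functions from the signal.
    """
    ac = autocorrelation(novelty)
    peaks = {}
    for lag in range(min_lag, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            peaks[lag] = ac[lag]
    return peaks
```

Features such as the location and relative strength of the dominant peaks of this histogram can then feed an SVM classifier, as in the LID experiments described above.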
Tupleware: Redefining Modern Analytics
There is a fundamental discrepancy between the targeted and actual users of current analytics frameworks. Most systems are designed for the data and infrastructure of the Googles and Facebooks of the world: petabytes of data distributed across large cloud deployments consisting of thousands of cheap commodity machines. Yet the vast majority of users operate clusters ranging from a few to a few dozen nodes, analyze relatively small datasets of up to a few terabytes, and perform primarily compute-intensive operations. Targeting these users fundamentally changes the way we should build analytics systems. This paper describes the design of Tupleware, a new system specifically aimed at the challenges faced by the typical user. Tupleware's architecture brings together ideas from the database, compiler, and programming languages communities to create a powerful end-to-end solution for data analysis. We propose novel techniques that consider the data, computations, and hardware together to achieve maximum performance on a case-by-case basis. Our experimental evaluation quantifies the impact of our novel techniques and shows orders-of-magnitude performance improvements over alternative systems.
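One compiler-style idea in this space is operator fusion: executing a chain of per-element operators in a single pass instead of materialising an intermediate collection after each stage. A minimal sketch of the concept (illustrative only, not Tupleware's actual code-generation machinery):

```python
def fused_pipeline(data, funcs):
    """Apply a chain of per-element functions in one pass over the data,
    avoiding the intermediate collections an operator-at-a-time engine
    would materialise between stages."""
    out = []
    for x in data:
        for f in funcs:      # all stages applied while x is hot in cache
            x = f(x)
        out.append(x)
    return out
```

For compute-intensive workloads on small clusters, eliminating such intermediates (and compiling the fused loop to native code, as systems in this line of work do) is where much of the performance gain comes from.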