Weakly Supervised Learning with Automated Labels from Radiology Reports for Glioma Change Detection
Gliomas are the most frequent primary brain tumors in adults. Glioma change
detection aims at finding the relevant parts of the image that change over
time. Although Deep Learning (DL) shows promising performance in similar
change detection tasks, the creation of large annotated datasets represents a
major bottleneck for supervised DL applications in radiology. To overcome this,
we propose a combined use of weak labels (imprecise, but fast-to-create
annotations) and Transfer Learning (TL). Specifically, we explore inductive TL,
where source and target domains are identical, but tasks are different due to a
label shift: our target labels are created manually by three radiologists,
whereas our source weak labels are generated automatically from radiology
reports via NLP. We frame knowledge transfer as hyperparameter optimization,
thus avoiding heuristic choices that are frequent in related works. We
investigate the relationship between model size and TL, comparing a
low-capacity VGG with a higher-capacity ResNeXt model. We evaluate our models
on 1693 T2-weighted magnetic resonance imaging difference maps created from 183
patients, by classifying them into stable or unstable according to tumor
evolution. The weak labels extracted from radiology reports allowed us to
increase dataset size more than 3-fold, and improve VGG classification results
from 75% to 82% AUC. Mixed training from scratch led to higher performance than
fine-tuning or feature extraction. To assess generalizability, we ran inference
on an open dataset (BraTS-2015: 15 patients, 51 difference maps), reaching up
to 76% AUC. Overall, results suggest that medical imaging problems may benefit
from smaller models and different TL strategies than those common for computer
vision datasets, and that report-generated weak labels are effective in improving
model performance. Code, the in-house dataset, and the BraTS labels are released.
Comment: This work has been submitted as an Original Paper to a Journal
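The report-to-label step can be illustrated with a minimal keyword-rule sketch. The terms, example reports, and discard policy below are illustrative assumptions, not the paper's actual NLP pipeline:

```python
# Hypothetical keyword rules: progression/growth terms weakly label a
# report "unstable"; stability terms label it "stable".
UNSTABLE_TERMS = ("progression", "increase", "growth", "new enhancement")
STABLE_TERMS = ("stable", "no change", "unchanged")

def weak_label(report):
    """Return a weak label for a radiology report, or None if ambiguous."""
    text = report.lower()
    unstable = any(t in text for t in UNSTABLE_TERMS)
    stable = any(t in text for t in STABLE_TERMS)
    if unstable and not stable:
        return "unstable"
    if stable and not unstable:
        return "stable"
    return None  # conflicting or no evidence: exclude from the weak set

reports = [
    "Interval increase in T2 signal consistent with tumor progression.",
    "Findings are stable compared with the prior examination.",
]
print([weak_label(r) for r in reports])  # ['unstable', 'stable']
```

Discarding ambiguous reports rather than guessing keeps the weak labels imprecise but not systematically wrong, which is what makes them usable as a source task for transfer.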
Neuroevolution of Self-Interpretable Agents
Inattentional blindness is the psychological phenomenon that causes one to
miss things in plain sight. It is a consequence of the selective attention in
perception that lets us remain focused on important parts of our world without
distraction from irrelevant details. Motivated by selective attention, we study
the properties of artificial agents that perceive the world through the lens of
a self-attention bottleneck. By constraining access to only a small fraction of
the visual input, we show that their policies are directly interpretable in
pixel space. We find neuroevolution ideal for training self-attention
architectures for vision-based reinforcement learning (RL) tasks, allowing us
to incorporate modules that can include discrete, non-differentiable operations
which are useful for our agent. We argue that self-attention has similar
properties as indirect encoding, in the sense that large implicit weight
matrices are generated from a small number of key-query parameters, thus
enabling our agent to solve challenging vision-based tasks with at least 1000x
fewer parameters than existing methods. Since our agent attends only to
task-critical visual hints, it is able to generalize to environments where
task-irrelevant elements are modified, while conventional methods fail. Videos
of our results and source code are available at https://attentionagent.github.io/
Comment: To appear at the Genetic and Evolutionary Computation Conference (GECCO 2020) as a full paper
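The indirect-encoding argument can be sketched numerically: a small pair of key and query projection matrices implicitly generates a much larger patch-to-patch attention matrix, from which the agent keeps only the top-k patches. All sizes below are hypothetical, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64 image patches, each flattened to d=48 values,
# projected into a small d_k=4 dimensional key/query space.
n_patches, d, d_k, top_k = 64, 48, 4, 10

W_q = rng.normal(size=(d, d_k))  # few explicit parameters ...
W_k = rng.normal(size=(d, d_k))

patches = rng.normal(size=(n_patches, d))  # flattened image patches

# ... generate a large implicit (n_patches x n_patches) attention matrix.
scores = (patches @ W_q) @ (patches @ W_k).T / np.sqrt(d_k)
exp = np.exp(scores - scores.max(axis=1, keepdims=True))
attn = exp / exp.sum(axis=1, keepdims=True)  # row-wise softmax

# Importance vote per patch; the agent attends only to the top-k patches.
importance = attn.sum(axis=0)
selected = np.argsort(importance)[::-1][:top_k]
print(sorted(selected.tolist()))
```

Here 2 x 48 x 4 = 384 explicit parameters induce a 64 x 64 attention matrix; since only the patch ranking is passed on, the selection step is discrete and non-differentiable, which is why a gradient-free method such as neuroevolution is a natural fit.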
Quantitative Assessment of Eye Phenotypes for Functional Genetic Studies Using Drosophila melanogaster
About two-thirds of the vital genes in the Drosophila genome are involved in eye development, making the fly eye an excellent genetic system for studying cellular function and development, neurodevelopment/degeneration, and complex diseases such as cancer and diabetes. We developed a novel computational method, implemented as the Flynotyper software (http://flynotyper.sourceforge.net), to quantitatively assess the morphological defects in the Drosophila eye resulting from genetic alterations affecting basic cellular and developmental processes. Flynotyper uses a series of image-processing operations to automatically detect the fly eye and the individual ommatidia, and calculates a phenotypic score as a measure of the disorderliness of the ommatidial arrangement. As a proof of principle, we tested the method by analyzing the defects due to eye-specific knockdown of Drosophila orthologs of 12 neurodevelopmental genes, accurately documenting the differential sensitivities of these genes to dosage alteration. We also evaluated eye images from six independent studies assessing the effects on eye morphology of overexpressed repeats, candidates from peptide library screens, and modifiers of neurotoxicity and developmental processes, and show strong concordance with the original assessments. We further demonstrate the utility of the method by analyzing 16 modifiers of sine oculis obtained from two genome-wide deficiency screens of Drosophila, accurately quantifying the effect of its enhancers and suppressors during eye development. Our method will complement existing assays for eye phenotypes and increase the accuracy of studies that use fly eyes for the functional evaluation of genes and genetic interactions.
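A disorderliness-style score can be illustrated with a toy proxy: the coefficient of variation of nearest-neighbour distances between detected ommatidium centres. This proxy is an assumption for illustration only, not Flynotyper's published formula:

```python
import math

def disorderliness(centers):
    """Toy disorderliness score: coefficient of variation of
    nearest-neighbour distances between ommatidium centres.
    (Illustrative proxy; Flynotyper's actual score differs.)"""
    nn = []
    for i, (x1, y1) in enumerate(centers):
        d = min(math.hypot(x1 - x2, y1 - y2)
                for j, (x2, y2) in enumerate(centers) if j != i)
        nn.append(d)
    mean = sum(nn) / len(nn)
    var = sum((d - mean) ** 2 for d in nn) / len(nn)
    return math.sqrt(var) / mean  # 0 for a perfectly regular lattice

# A regular grid (wild-type-like lattice) scores lower than a perturbed one.
grid = [(x, y) for x in range(5) for y in range(5)]
perturbed = [(x + 0.3 * ((x * 7 + y * 3) % 5 - 2) / 2, y) for x, y in grid]
print(disorderliness(grid) < disorderliness(perturbed))  # True
```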
Probabilistic Models for Joint Segmentation, Detection and Tracking
Migration of cells and subcellular particles plays a crucial role in many processes in living organisms. Despite its importance, systematic research into cell motility has only become possible in the last two decades, owing to the rapid development of non-invasive imaging techniques and digital cameras. Modern imaging systems allow the study of large populations comprising thousands of cells. Manual analysis of the acquired data is infeasible, because gaining insight into the underlying biochemical processes sometimes requires determining the shape, velocity, and other characteristics of individual cells. There is therefore high demand for automatic methods.
Internet of Things-based Traffic Management System for Maseru, Lesotho.
Published Thesis
The number of vehicles in Maseru has been steadily increasing, leading to heightened congestion and more frequent traffic incidents. This is further exacerbated by the ineffective solutions currently in place, as well as the absence of tools that facilitate the dispersal of information to motorists.
Traffic lights have been put in place to manage the flow of traffic, but they are becoming increasingly inefficient due to their design. Their preset timing cycles between green, amber, and red disregard prevailing conditions, leading, inter alia, to increased wait times, additional fuel consumption, and air pollution. In addition, the lack of equipment able to provide motorists with information about prevailing road conditions further increases the likelihood of being stuck in traffic.
To make traffic management more efficient at signaled junctions, this work applies the Internet of Things (IoT) paradigm to create an intelligent traffic management system, combining Wireless Sensor Networks (WSN) with fuzzy algorithms that intelligently decide the phases of traffic lights. Road density and vehicle speeds are collected from the road infrastructure using cameras and passed to a fuzzy algorithm to determine how congested a road is. Based on these parameters, the algorithm also determines which roads should be given the highest priority while maintaining a degree of fairness, thus optimizing traffic flow. In addition, road-condition information is provided ubiquitously to motorists in various formats, such as text and audio. This feature allows the latest road status to be acquired, making it possible to find alternative routes. The unique feature of this project is the ability to collect road parameters from the road infrastructure itself using WSN, as well as to crowdsource data from road users via mobile devices.
A study conducted in this research revealed a relationship between the number of cars on a road and the concentration of carbon dioxide (CO2); the results showed that as the number of cars increases, so does the measured CO2. Questionnaire-based surveys showed that Maseru citizens have noticed an increase in congestion, which they attributed to growth in the number of vehicles on the road that has not been met by an increase in, or improvement of, road infrastructure. The respondents also noted the limited mechanisms available for informing them of road conditions, and highlighted that such tools might alleviate congestion. The performance of the intelligent traffic lights was evaluated via simulations and compared with fixed-cycle traffic lights. The simulations showed that the IoT-based traffic management system reduced the wait times of vehicles at signaled junctions, which would also reduce CO2 pollution. It is envisaged that a future implementation will include the ability to manage a network of junctions and to predict abnormal traffic flows.
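The fuzzy phase decision can be sketched as follows. The membership functions, rule, and timings below are hypothetical placeholders, not the system's actual parameters:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function with support (a, c), peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_time(density, speed):
    """Fuzzy green-phase duration (seconds) from road density
    (vehicles per 100 m) and mean speed (km/h). Hypothetical thresholds."""
    low_d, high_d = tri(density, -1, 0, 15), tri(density, 10, 25, 100)
    slow, fast = tri(speed, -1, 0, 30), tri(speed, 20, 60, 200)
    # Rule: a congested road (high density AND slow traffic) gets a
    # longer green phase; a free-flowing road keeps the base time.
    congested = min(high_d, slow)
    free = max(low_d, fast)
    base, extension = 20.0, 40.0
    total = congested + free
    if total == 0:
        return base
    # Weighted (Sugeno-style) defuzzification between base and base+extension.
    return base + extension * congested / total

print(green_time(density=30, speed=10))  # congested: extended green
print(green_time(density=3, speed=50))   # free-flowing: near base time
```

A fairness term (e.g. capping how long any approach waits) would be layered on top of this rule in a full controller.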
Injecting Inductive Biases into Distributed Representations of Text
Distributed real-valued vector representations of text (a.k.a. embeddings), learned by neural networks, encode various kinds of (linguistic) knowledge. The common approach to encoding this knowledge into embeddings is to train a large neural network on large corpora. There is, however, growing concern regarding the sustainability and rationality of pursuing this approach further. We depart from this mainstream trend and instead use inductive biases to incorporate the desired properties into embeddings.
First, we use Knowledge Graphs (KGs) as a data-based inductive bias to derive semantic representations of words and sentences. The explicit semantics encoded in the structure of a KG allows us to acquire these representations without employing large amounts of text. We use graph embedding techniques to learn the semantic representations of words, and a sequence-to-sequence model to learn the semantic representations of sentences. We demonstrate the efficacy of this inductive bias for learning embeddings of rare words, and the ability of the sentence embeddings to encode topological dependencies that exist between entities of a KG.
Then, we explore the amount of information and sparsity as two key (data-agnostic) inductive biases for regulating the utilisation of the representation space. We impose these properties with Variational Autoencoders (VAEs). First, we regulate the amount of information encoded in a sentence embedding via constrained optimisation of a VAE objective function, and show that increasing the amount of information allows sentences to be discriminated more accurately. Then, to impose distributed sparsity, we design a state-of-the-art Hierarchical Sparse VAE with a flexible posterior that captures the statistical characteristics of text effectively. While sparsity in general has desirable computational and statistical representational properties, it is known to compromise task performance. We illustrate that, with distributed sparsity, task performance can be maintained or even improved.
The findings of this thesis advocate further development of inductive biases that could mitigate the dependence of representation-learning quality on large data and model sizes.
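The information-regulating idea can be sketched as a penalty that pulls the posterior KL term toward a target capacity, in the style of capacity-constrained VAE objectives. The function name, penalty form, and all numbers below are illustrative assumptions, not the thesis's exact formulation:

```python
import math

def constrained_vae_loss(recon_nll, mu, logvar, capacity, beta=10.0):
    """Reconstruction NLL plus a penalty pulling the KL term of a
    diagonal-Gaussian posterior toward a target capacity C (in nats).
    Raising C permits the embedding to encode more information.
    (Illustrative sketch, not the thesis's exact objective.)"""
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions.
    kl = 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                   for m, lv in zip(mu, logvar))
    return recon_nll + beta * abs(kl - capacity), kl

loss, kl = constrained_vae_loss(
    recon_nll=42.0, mu=[0.5, -0.2], logvar=[-0.1, 0.3], capacity=1.0)
print(round(kl, 3), round(loss, 3))
```

During training, the capacity target would typically be annealed upward so the model gradually uses more of the representation space.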