A Convolutional Neural Network for the Automatic Diagnosis of Collagen VI related Muscular Dystrophies
The development of machine learning systems for the diagnosis of rare
diseases is challenging, mainly due to the lack of data to study them. Despite
this challenge, this paper proposes a system for the Computer Aided Diagnosis
(CAD) of low-prevalence, congenital muscular dystrophies from confocal
microscopy images. The proposed CAD system relies on a Convolutional Neural
Network (CNN) which performs an independent classification for non-overlapping
patches tiling the input image, and generates an overall decision summarizing
the individual decisions for the patches on the query image. This decision
scheme points to possibly problematic areas in the input image and provides a
global quantitative evaluation of the patient's state, which is fundamental
for diagnosis and for monitoring the efficacy of therapies.
Comment: Submitted for review to Expert Systems With Applications
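The patch-wise decision scheme described in the abstract can be sketched roughly as follows. This is an illustrative outline, not the paper's implementation: `classify_patch` is a hypothetical stand-in for the CNN (here just mean intensity), and the patch size and threshold are arbitrary.

```python
# Sketch of a patch-based decision scheme: tile the image into
# non-overlapping patches, classify each patch independently, then
# summarize the per-patch decisions into one image-level score.

def tile(image, patch_size):
    """Yield non-overlapping patch_size x patch_size patches of a 2-D image."""
    rows, cols = len(image), len(image[0])
    for r in range(0, rows - patch_size + 1, patch_size):
        for c in range(0, cols - patch_size + 1, patch_size):
            yield [row[c:c + patch_size] for row in image[r:r + patch_size]]

def classify_patch(patch):
    """Hypothetical stand-in for the CNN: returns a score in [0, 1].
    Here: mean intensity, purely for illustration."""
    flat = [v for row in patch for v in row]
    return sum(flat) / len(flat)

def diagnose(image, patch_size=2, threshold=0.5):
    """Aggregate patch scores into an overall decision plus a per-patch map."""
    scores = [classify_patch(p) for p in tile(image, patch_size)]
    overall = sum(scores) / len(scores)          # global quantitative evaluation
    flagged = [i for i, s in enumerate(scores) if s > threshold]  # problem areas
    return overall, flagged
```

The per-patch `flagged` list is what lets such a scheme localize possibly problematic regions while still emitting a single global score.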
Socializing the Semantic Gap: A Comparative Survey on Image Tag Assignment, Refinement and Retrieval
Where previous reviews on content-based image retrieval emphasize what can
be seen in an image to bridge the semantic gap, this survey considers what
people tag about an image. A comprehensive treatise of three closely linked
problems, i.e., image tag assignment, refinement, and tag-based image retrieval,
is presented. While existing works vary in terms of their targeted tasks and
methodology, they rely on the key functionality of tag relevance, i.e.
estimating the relevance of a specific tag with respect to the visual content
of a given image and its social context. By analyzing what information a
specific method exploits to construct its tag relevance function and how such
information is exploited, this paper introduces a taxonomy to structure the
growing literature, understand the ingredients of the main works, clarify their
connections and differences, and recognize their merits and limitations. For a
head-to-head comparison among state-of-the-art methods, a new experimental
protocol is presented, with training sets containing 10k, 100k and 1M images
and an evaluation on three test sets contributed by various research groups.
Eleven representative works are implemented and evaluated. Putting all this
together, the survey aims to provide an overview of the past and foster
progress for the near future.
Comment: to appear in ACM Computing Surveys
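One classic form of the tag relevance function this survey centers on is neighbor voting: a tag is deemed relevant to an image if it occurs among the tags of the image's visual neighbors more often than its overall frequency would predict. The sketch below is a minimal, illustrative version; the toy squared-distance metric and the function names are assumptions, not taken from the survey.

```python
# Minimal neighbor-voting tag relevance: count occurrences of `tag`
# among the k visually nearest images, minus the count expected from
# the tag's corpus-wide frequency (its prior).

def neighbor_vote(query_feature, corpus, tag, k=3):
    """corpus: list of (feature_vector, set_of_tags) pairs.
    Returns votes for `tag` among the k nearest neighbors of the query,
    corrected for the tag's prior frequency in the corpus."""
    def dist(a, b):
        # toy visual distance: squared Euclidean
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbors = sorted(corpus, key=lambda item: dist(query_feature, item[0]))[:k]
    votes = sum(1 for _, tags in neighbors if tag in tags)
    prior = sum(1 for _, tags in corpus if tag in tags) / len(corpus)
    return votes - k * prior
```

A positive score means the tag clusters around the query's visual neighborhood; a score near zero means it is no more common there than anywhere else.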
Fault tolerant architectures for integrated aircraft electronics systems, task 2
The architectural basis for an advanced fault tolerant on-board computer to succeed the current generation of fault tolerant computers is examined. The network error tolerant system architecture is studied, with particular attention to intercluster configurations and communication protocols, and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made, is discussed. The analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand driven data-flow architecture, which appears to have possible application in fault tolerant systems, is described, and work investigating the feasibility of automatic generation of aircraft flight control programs from abstract specifications is reported.
GeoSay: A Geometric Saliency for Extracting Buildings in Remote Sensing Images
Automatic extraction of buildings in remote sensing images is an important
but challenging task with many applications in fields such as urban planning
and navigation. This paper addresses the problem of building extraction in
very high-spatial-resolution (VHSR) remote sensing (RS) images, whose spatial
resolution is often up to half a meter and provides rich
information about buildings. Based on the observation that buildings in VHSR-RS
images are always more distinguishable in geometry than in texture or spectral
domain, this paper proposes a geometric building index (GBI) for accurate
building extraction, by computing the geometric saliency from VHSR-RS images.
More precisely, given an image, the geometric saliency is derived from a
mid-level geometric representation based on meaningful junctions that can
locally describe geometrical structures of images. The resulting GBI is finally
measured by integrating the derived geometric saliency of buildings.
Experiments on three public and commonly used datasets demonstrate that the
proposed GBI achieves the state-of-the-art performance and shows impressive
generalization capability. Additionally, GBI preserves both the exact position
and accurate shape of single buildings compared to existing methods.
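A highly simplified illustration of the junction-to-saliency idea: junctions vote into a saliency map, and a region's building index is the integrated saliency inside it. The Gaussian deposition and the sin(angle) weighting (favoring near-right-angle junctions, typical of buildings) are assumptions for illustration, not GeoSay's exact formulation.

```python
import math

def junction_saliency(junctions, shape, sigma=2.0):
    """junctions: list of (x, y, angle_rad) detected junction points.
    Each junction deposits a Gaussian bump scaled by |sin(angle)|, so
    near-right-angle junctions contribute most to the saliency map."""
    h, w = shape
    sal = [[0.0] * w for _ in range(h)]
    for x, y, ang in junctions:
        weight = abs(math.sin(ang))
        for r in range(h):
            for c in range(w):
                d2 = (r - y) ** 2 + (c - x) ** 2
                sal[r][c] += weight * math.exp(-d2 / (2 * sigma ** 2))
    return sal

def building_index(sal, box):
    """Integrate saliency over a bounding box (r0, c0, r1, c1), exclusive end."""
    r0, c0, r1, c1 = box
    return sum(sal[r][c] for r in range(r0, r1) for c in range(c0, c1))
```

Regions whose integrated saliency exceeds a threshold would be kept as building candidates in such a scheme.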
Towards automatic Markov reliability modeling of computer architectures
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
Computational fact checking from knowledge networks
Traditional fact checking by expert journalists cannot keep up with the
enormous volume of information that is now generated online. Computational fact
checking may significantly enhance our ability to evaluate the veracity of
dubious information. Here we show that the complexities of human fact checking
can be approximated quite well by finding the shortest path between concept
nodes under properly defined semantic proximity metrics on knowledge graphs.
Framed as a network problem, this approach is feasible with efficient
computational techniques. We evaluate this approach by examining tens of
thousands of claims related to history, entertainment, geography, and
biographical information using a public knowledge graph extracted from
Wikipedia. Statements independently known to be true consistently receive
higher support via our method than do false ones. These findings represent a
significant step toward scalable computational fact-checking methods that may
one day mitigate the spread of harmful misinformation.
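The shortest-path idea above can be sketched on a toy knowledge graph. Following the paper's intuition that paths through generic, highly connected hub nodes should count for less, each intermediate node here costs `log(degree)`; the toy graph, the exact cost function and the `1/(1 + cost)` scoring are simplified illustrations, not the paper's precise metric.

```python
import heapq
import math

def support(graph, src, dst):
    """graph: dict node -> set of neighbor nodes (undirected).
    Dijkstra search where passing *through* a node costs log(degree).
    Returns a truth score in (0, 1]: 1.0 for directly connected
    statements, lower when the cheapest path crosses big hubs,
    and 0.0 when dst is unreachable."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return 1.0 / (1.0 + d)
        if d > dist.get(node, math.inf):
            continue                       # stale heap entry
        for nxt in graph[node]:
            cost = 0.0 if node == src else math.log(len(graph[node]))
            if d + cost < dist.get(nxt, math.inf):
                dist[nxt] = d + cost
                heapq.heappush(heap, (d + cost, nxt))
    return 0.0
```

A true statement whose subject and object are directly linked (or linked via specific, low-degree entities) thus scores higher than one that can only be connected through generic hubs.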
Space shuttle main engine fault detection using neural networks
A method for on-line Space Shuttle Main Engine (SSME) anomaly detection and fault typing using a feedback neural network is described. The method involves the computation of features representing time-variance of SSME sensor parameters, using historical test case data. The network is trained, using backpropagation, to recognize a set of fault cases. The network is then able to diagnose new fault cases correctly. An essential element of the training technique is the inclusion of randomly generated data along with the real data, in order to span the entire input space of potential non-nominal data.
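The training-set construction step described above can be sketched as follows: features computed from sensor time series are mixed with randomly generated feature vectors so the classifier sees the whole input space rather than only recorded cases. The specific features (per-window mean and variance) and the "non-nominal" label for random vectors are illustrative assumptions, not the report's exact design.

```python
import random
import statistics

def features(window):
    """Simple time-variance features for one sensor window: (mean, variance)."""
    return (statistics.mean(window), statistics.pvariance(window))

def build_training_set(real_cases, n_random, low, high, seed=0):
    """real_cases: list of (feature_vector, fault_label) from historical tests.
    Appends n_random uniformly sampled feature vectors labeled 'non-nominal'
    so the network learns to reject inputs unlike any known case."""
    rng = random.Random(seed)
    synthetic = [((rng.uniform(low, high), rng.uniform(low, high)),
                  "non-nominal") for _ in range(n_random)]
    return real_cases + synthetic
```

Training the network on this mixture is what prevents it from confidently assigning a known fault type to data far from anything it has seen.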