The malaria system microApp: A new, mobile device-based tool for malaria diagnosis
Background: Malaria is a public health problem that affects remote areas worldwide. Climate change has exacerbated the problem by allowing Anopheles mosquitoes to survive in previously uninhabited areas. As such, several groups have made developing new systems for the automated diagnosis of malaria a priority.
Objective: The objective of this study was to develop a new, automated, mobile device-based diagnostic system for malaria. The system uses Giemsa-stained peripheral blood samples combined with light microscopy to identify the Plasmodium falciparum species in the ring stage of development.
Methods: The system uses image processing and artificial intelligence techniques as well as a known face detection algorithm to identify Plasmodium parasites. The algorithm is based on integral image and haar-like features concepts, and makes use of weak classifiers with adaptive boosting learning. The search scope of the learning algorithm is reduced in the preprocessing step by removing the background around blood cells.
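The integral-image and Haar-like-feature machinery the method builds on (popularized by the Viola-Jones face detector) can be sketched briefly. This is a minimal illustration with hypothetical function names, not the authors' implementation:

```python
import numpy as np

def integral_image(img):
    """Summed-area table, padded with a zero row/column so that
    rectangle sums need no edge-case handling."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of any h-by-w rectangle in O(1): four table lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half.
    AdaBoost turns thresholds on such features into weak classifiers."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

Features of this kind are evaluated over many sliding windows; restricting the search to foreground blood cells, as the preprocessing step above does, directly shrinks the number of windows the boosted classifier must score.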
Results: As a proof of concept experiment, the tool was used on 555 malaria-positive and 777 malaria-negative previously-made slides. The accuracy of the system was, on average, 91%, meaning that for every 100 parasite-infected samples, 91 were identified correctly.
Conclusions: Accessibility barriers of low-resource countries can be addressed with low-cost diagnostic tools. Our system, developed for mobile devices (mobile phones and tablets), addresses this by enabling access to health centers in remote communities, and importantly, not depending on extensive malaria expertise or expensive diagnostic detection equipment.
A Better Alternative to Piecewise Linear Time Series Segmentation
Time series are difficult to monitor, summarize and predict. Segmentation
organizes time series into few intervals having uniform characteristics
(flatness, linearity, modality, monotonicity and so on). For scalability, we
require fast linear time algorithms. The popular piecewise linear model can
determine where the data goes up or down and at what rate. Unfortunately, when
the data does not follow a linear model, the computation of the local slope
creates overfitting. We propose an adaptive time series model where the
polynomial degree of each interval varies (constant, linear, and so on). Given a
number of regressors, the cost of each interval is its polynomial degree:
constant intervals cost 1 regressor, linear intervals cost 2 regressors, and so
on. Our goal is to minimize the Euclidean (l_2) error for a given model
complexity. Experimentally, we investigate the model where intervals can be
either constant or linear. Over synthetic random walks, historical stock market
prices, and electrocardiograms, the adaptive model provides a more accurate
segmentation than the piecewise linear model without increasing the
cross-validation error or the running time, while providing a richer vocabulary
to applications. Implementation issues, such as numerical stability and
real-world performance, are discussed.
Comment: to appear in SIAM Data Mining 200
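The adaptive model described above — intervals that are individually constant (one regressor) or linear (two regressors), chosen to minimize total l_2 error under a regressor budget — can be realized with a simple dynamic program. The sketch below is an O(n^2 k) illustration under those stated assumptions; the function names are ours, not the authors':

```python
import numpy as np

def seg_error(y, degree):
    """Residual sum of squares of the best degree-0 or degree-1 fit on y."""
    if degree == 0:
        return float(((y - y.mean()) ** 2).sum())
    if len(y) < 2:
        return 0.0  # a line fits a single point exactly
    x = np.arange(len(y), dtype=float)
    coef = np.polyfit(x, y, 1)
    return float(((y - np.polyval(coef, x)) ** 2).sum())

def adaptive_segmentation(y, k):
    """Minimize total l_2 error using at most k regressors, where a constant
    interval costs 1 regressor and a linear interval costs 2."""
    n = len(y)
    INF = float("inf")
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    choice = [[None] * (k + 1) for _ in range(n + 1)]
    for b in range(k + 1):
        best[0][b] = 0.0  # empty prefix costs nothing
    for i in range(1, n + 1):
        for b in range(1, k + 1):
            for j in range(i):
                for deg, cost in ((0, 1), (1, 2)):
                    if cost > b or best[j][b - cost] == INF:
                        continue
                    e = best[j][b - cost] + seg_error(y[j:i], deg)
                    if e < best[i][b]:
                        best[i][b] = e
                        choice[i][b] = (j, deg, cost)
    # Backtrack interval boundaries and per-interval degrees.
    segs, i, b = [], n, k
    while i > 0:
        j, deg, cost = choice[i][b]
        segs.append((j, i, deg))
        i, b = j, b - cost
    return best[n][k], segs[::-1]
```

For example, a flat run followed by a steady ramp is fit exactly with one constant interval plus one linear interval, spending three regressors in total.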
Regression modeling for digital test of ΣΔ modulators
The cost of Analogue and Mixed-Signal circuit
testing is an important bottleneck in the industry, due to time-consuming
verification of specifications that require state-of-the-art
Automatic Test Equipment. In this paper, we apply
the concept of Alternate Test to achieve digital testing of
converters. By training an ensemble of regression models that
maps simple digital defect-oriented signatures onto Signal to
Noise and Distortion Ratio (SNDR), an average error of 1.7%
is achieved. Beyond the inference of functional metrics, we show
that the approach can provide interesting diagnosis information.
Ministerio de Educación y Ciencia TEC2007-68072/MIC
Junta de Andalucía TIC 5386, CT 30
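The Alternate Test idea — learn a regression map from cheap digital signatures to a functional metric such as SNDR — can be sketched with a small bagged-ridge ensemble. Everything below (feature count, noise level, synthetic data, model choice) is a hypothetical illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set (not the paper's data): five digital
# defect-oriented signature features per device, SNDR in dB as the target.
n_dev, n_feat = 200, 5
X = rng.normal(size=(n_dev, n_feat))
true_w = np.array([8.0, -3.0, 1.5, 0.0, 2.0])
sndr = X @ true_w + 70.0 + rng.normal(scale=0.5, size=n_dev)

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression with an intercept column."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def ensemble_fit(X, y, n_models=10):
    """Bagging: each model is trained on a bootstrap resample of devices."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))
        models.append(fit_ridge(X[idx], y[idx]))
    return models

def ensemble_predict(models, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean([Xb @ w for w in models], axis=0)

# Train on 150 devices, report relative SNDR error on the held-out 50.
models = ensemble_fit(X[:150], sndr[:150])
pred = ensemble_predict(models, X[150:])
rel_err = float(np.mean(np.abs(pred - sndr[150:]) / np.abs(sndr[150:])))
```

Averaging several bootstrap-trained regressors stabilizes the signature-to-SNDR map against outlier devices, which is one plausible reading of the "ensemble of regression models" in the abstract.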
Automatic epilepsy detection using fractal dimensions segmentation and GP-SVM classification
Objective: The most important part of signal processing for classification is feature extraction: a mapping from the original electroencephalographic (EEG) data space to a new feature space with the greatest class separability. Feature extraction is not only the most important but also the most difficult step in the classification process, as the features define the input data and hence the classification quality. An ideal set of features would make the classification problem trivial. This article presents novel methods of feature extraction and automatic epileptic seizure classification combining machine learning methods with genetic evolution algorithms.
Methods: Classification is performed on EEG data that represent electric brain activity. At first, the signal is preprocessed with digital filtration and adaptive segmentation using fractal dimensions as the only segmentation measure. In the next step, a novel method using genetic programming (GP) combined with support vector machine (SVM) confusion matrix as fitness function weight is used to extract feature vectors compressed into lower dimension space and classify the final result into ictal or interictal epochs.
Results: The final application of the GP-SVM method improves the discriminatory performance of the classifier while simultaneously reducing feature dimensionality. Members of the GP tree structure represent the features themselves, and their number is automatically decided by the compression function introduced in this paper. This novel method improves the overall performance of the SVM classification by dramatically reducing the size of the input feature vector.
Conclusion: According to the results, the accuracy of this algorithm is very high, comparable or even superior to other automatic detection algorithms. Combined with its efficiency, the algorithm can be used in real-time epilepsy detection applications. The classification results show high sensitivity and specificity, except for generalized tonic-clonic seizures (GTCS). As a next step, the compression stage and the final SVM evaluation stage will be optimized, and more GTCS data need to be obtained to improve the overall classification score for GTCS.
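The adaptive segmentation step above uses a fractal dimension as its only segmentation measure. One standard estimator of that quantity is Higuchi's algorithm, shown here as an assumed illustration of the kind of measure involved (the paper does not necessarily use this exact estimator):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal: slope of the log-log
    relationship between normalized curve length L(k) and 1/k."""
    N = len(x)
    log_L, log_inv_k = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # Normalized length of the curve sampled at offset m, step k.
            lengths.append(np.abs(np.diff(x[idx])).sum()
                           * (N - 1) / ((len(idx) - 1) * k) / k)
        if lengths:
            log_L.append(np.log(np.mean(lengths)))
            log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_L, 1)
    return float(slope)

# Reference signals (illustration only): white noise has a fractal
# dimension near 2, while a straight line is close to 1.
rng = np.random.default_rng(1)
fd_noise = higuchi_fd(rng.normal(size=2000))
fd_line = higuchi_fd(np.linspace(0.0, 1.0, 2000))
```

Because the estimate changes sharply when signal complexity changes, thresholding it over sliding windows gives a natural boundary detector for adaptive segmentation.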
Design and implementation of a multi-octave-band audio camera for realtime diagnosis
Noise pollution investigation takes advantage of two common methods of
diagnosis: measurement using a Sound Level Meter and acoustical imaging. The
former enables a detailed analysis of the surrounding noise spectrum whereas
the latter is rather used for source localization. The two approaches complement
each other, and merging them into a single system, working in realtime, would
offer new possibilities of dynamic diagnosis. This paper describes the design
of a complete system for this purpose: imaging in realtime the acoustic field
at different octave bands, with a convenient device. The acoustic field is
sampled in time and space using an array of MEMS microphones. This recent
technology enables a compact and fully digital design of the system. However,
performing realtime imaging with a resource-intensive algorithm on a large amount
of measured data poses a technical challenge. This is overcome by
executing the whole process on a Graphic Processing Unit, which has recently
become an attractive device for parallel computing.
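The acoustic imaging itself typically amounts to beamforming over the microphone array. Below is a minimal time-domain delay-and-sum sketch for a linear far-field array, with hypothetical parameters; it is a CPU illustration of the principle, whereas the paper runs the resource-intensive processing on a GPU:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def delay_and_sum(signals, mic_x, fs, angles):
    """Time-domain delay-and-sum beamformer for a linear array (far field).
    signals: (n_mics, n_samples); mic_x: mic positions (m) along the array;
    angles: steering angles in radians from broadside.
    Returns the beam output power for each steering angle."""
    n_mics, n = signals.shape
    powers = []
    for theta in angles:
        delays = mic_x * np.sin(theta) / C          # arrival delay per mic
        shifts = np.round(delays * fs).astype(int)  # nearest-sample shifts
        beam = np.zeros(n)
        for m in range(n_mics):
            beam += np.roll(signals[m], -shifts[m])
        powers.append(np.mean((beam / n_mics) ** 2))
    return np.array(powers)

# Simulated plane wave (hypothetical setup, not the paper's hardware):
# a 4 kHz tone arriving at 20 degrees on an 8-mic array, 2 cm spacing.
fs = 48000
t = np.arange(2048) / fs
mic_x = np.arange(8) * 0.02
theta0 = np.deg2rad(20.0)
src = np.sin(2 * np.pi * 4000 * t)
sim = np.stack([np.roll(src, int(round(x * np.sin(theta0) / C * fs)))
                for x in mic_x])
angles = np.deg2rad(np.arange(-60.0, 61.0, 5.0))
powers = delay_and_sum(sim, mic_x, fs, angles)
estimated = float(np.rad2deg(angles[np.argmax(powers)]))
```

Repeating the same sum-and-square over every steering direction and every octave band is exactly the kind of embarrassingly parallel workload that maps well onto a Graphics Processing Unit.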