BGrowth: an efficient approach for the segmentation of vertebral compression fractures in magnetic resonance imaging
Segmentation of medical images is a critical issue: several processes of
analysis and classification rely on it. With the growing number of people
presenting back pain and related problems, the automatic or semi-automatic
segmentation of fractured vertebral bodies has become a challenging task. In
general, these fractures present several regions with non-homogeneous
intensities, and the dark regions are quite similar to the nearby structures.
To overcome this challenge, in this paper we present a semi-automatic
segmentation method, called Balanced Growth (BGrowth). The experimental results
on a dataset with 102 crushed and 89 normal vertebrae show that our approach
significantly outperforms well-known methods from the literature. We have
achieved an accuracy of up to 95% while keeping processing time comparable to
state-of-the-art methods. Moreover, BGrowth presents the best results even
with a rough (sloppy) manual annotation (seed points).
Comment: This is a pre-print of an article published in Symposium on Applied
Computing. The final authenticated version is available online at
https://doi.org/10.1145/3297280.329972
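BGrowth itself is not specified in this abstract; as context, a minimal sketch of classic seeded region growing, the family of methods BGrowth belongs to, might look as follows (the function name, neighbourhood and tolerance parameter are illustrative assumptions, not the paper's algorithm):

```python
from collections import deque

def region_grow(image, seeds, tol=20):
    """Grow a region from seed pixels, accepting each 4-neighbour whose
    intensity lies within `tol` of the running region mean.
    image: 2-D list of intensities; seeds: list of (row, col) tuples."""
    h, w = len(image), len(image[0])
    region = set(seeds)
    queue = deque(seeds)
    total = sum(image[r][c] for r, c in seeds)
    while queue:
        r, c = queue.popleft()
        mean = total / len(region)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region
```

On a tiny image with a bright top-left block, a seed placed inside the block grows to cover exactly the pixels of similar intensity, stopping at the dark boundary.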
StateLens: A Reverse Engineering Solution for Making Existing Dynamic Touchscreens Accessible
Blind people frequently encounter inaccessible dynamic touchscreens in their
everyday lives that are difficult, frustrating, and often impossible to use
independently. Touchscreens are often the only way to control everything from
coffee machines and payment terminals, to subway ticket machines and in-flight
entertainment systems. Interacting with dynamic touchscreens is difficult
non-visually because the visual user interfaces change, interactions often
occur over multiple different screens, and it is easy to accidentally trigger
interface actions while exploring the screen. To solve these problems, we
introduce StateLens - a three-part reverse engineering solution that makes
existing dynamic touchscreens accessible. First, StateLens reverse engineers
the underlying state diagrams of existing interfaces using point-of-view videos
found online or taken by users using a hybrid crowd-computer vision pipeline.
Second, using the state diagrams, StateLens automatically generates
conversational agents to guide blind users through specifying the tasks that
the interface can perform, allowing the StateLens iOS application to provide
interactive guidance and feedback so that blind users can access the interface.
Finally, a set of 3D-printed accessories enable blind people to explore
capacitive touchscreens without the risk of triggering accidental touches on
the interface. Our technical evaluation shows that StateLens can accurately
reconstruct interfaces from stationary, hand-held, and web videos; and, a user
study of the complete system demonstrates that StateLens successfully enables
blind users to access otherwise inaccessible dynamic touchscreens.
Comment: ACM UIST 201
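As an illustration of how a recovered state diagram can drive interactive guidance, the sketch below finds the action sequence for a task by breadth-first search over a hypothetical touchscreen state graph (the states, actions and function name are assumptions; StateLens's actual pipeline is not described here):

```python
from collections import deque

def guidance_steps(state_graph, start, goal):
    """Return the shortest sequence of UI actions leading from `start`
    to `goal` in a touchscreen state diagram, via breadth-first search.
    state_graph: {state: {action: next_state}}"""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for action, nxt in state_graph.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [action]))
    return None  # goal unreachable from start
```

Given a (made-up) coffee-machine graph, the returned action list could be read out step by step to the user as each screen is recognised.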
Learnable Swendsen-Wang Cuts for Image Segmentation
We propose a framework for Bayesian unsupervised image segmentation with descriptive, learnable models. Our approach learns descriptive models for segmentation and applies Markov chain Monte Carlo to traverse the solution space. Swendsen-Wang cuts are adapted to make meaningful jumps in the solution space.
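The abstract does not spell out the sampling step; a much-simplified sketch of a Swendsen-Wang style cluster move on a graph labelling is shown below (the full method also weighs each proposal with an acceptance ratio driven by the learned models, which is omitted here; names and parameters are illustrative):

```python
import random

def sw_cluster_flip(labels, adj, n_labels, q=0.7, rng=random):
    """One simplified Swendsen-Wang style move: grow a cluster from a
    random seed through same-label neighbours (each edge kept 'on' with
    probability q), then assign the whole cluster one new label.  A
    faithful sampler would accept or reject this proposal with a
    Metropolis ratio; that step is dropped for brevity.
    labels: {node: int}; adj: {node: [neighbour, ...]}."""
    seed = rng.choice(sorted(labels))
    cluster = {seed}
    frontier = [seed]
    while frontier:
        node = frontier.pop()
        for nb in adj[node]:
            if nb not in cluster and labels[nb] == labels[seed] and rng.random() < q:
                cluster.add(nb)
                frontier.append(nb)
    new_label = rng.randrange(n_labels)
    for node in cluster:
        labels[node] = new_label
    return cluster, new_label
```

Because an entire cluster changes label at once, such moves can cross between segmentations that single-pixel flips would reach only very slowly.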
Robust and accurate eye contour extraction
This paper describes a novel algorithm for exact eye contour detection in frontal face images. The exact eye shape is a useful piece of input information for applications such as facial expression recognition, feature-based face recognition and face modelling. In contrast to well-known eye-segmentation methods, we rely neither on deformable models nor on an image luminance gradient (edge) map. The eye windows (rough eye regions) are assumed to be known. The detection algorithm works in several steps. First, the iris center and radius are estimated; then, the exact upper eyelid contour is detected by searching for luminance valley points. Finally, the lower eyelid is estimated from the eye corners' coordinates and the iris. The proposed technique has been tested on images of about fifty individuals taken under different lighting conditions with different cameras. It proved to be sufficiently robust and accurate for a wide variety of images.
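The paper's detection details are not given here; as a rough illustration of the first step only, the sketch below estimates the iris as the darkest circular region inside the eye window by brute-force search (the function name and search strategy are assumptions, not the paper's algorithm):

```python
def find_iris(gray, radii):
    """Estimate iris center and radius as the circle whose interior has
    the lowest mean intensity (the iris is dark against the sclera).
    gray: 2-D list of intensities (the eye window); radii: candidate radii."""
    h, w = len(gray), len(gray[0])
    best = None  # (mean_intensity, cx, cy, r)
    for r in radii:
        for cy in range(r, h - r):
            for cx in range(r, w - r):
                pixels = [gray[y][x]
                          for y in range(cy - r, cy + r + 1)
                          for x in range(cx - r, cx + r + 1)
                          if (x - cx) ** 2 + (y - cy) ** 2 <= r * r]
                mean = sum(pixels) / len(pixels)
                if best is None or mean < best[0]:
                    best = (mean, cx, cy, r)
    return best[1], best[2], best[3]
```

Because the eye window is small, even this exhaustive search is cheap; the estimated iris then anchors the subsequent eyelid-contour steps.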
A Survey on Pixel-Based Skin Color Detection Techniques
Skin color has proven to be a useful and robust cue for face detection, localization and tracking. Image content filtering, content-aware video compression and image color balancing applications can also benefit from automatic detection of skin in images. Numerous techniques for skin color modelling and recognition have been proposed in recent years. A few papers comparing different approaches have been published [Zarit et al. 1999], [Terrillon et al. 2000], [Brand and Mason 2000]. However, a comprehensive survey on the topic is still missing. We try to fill this gap by reviewing the most widely used methods and techniques and collecting their numerical evaluation results.
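As an example of the explicitly defined pixel-based rules such surveys review, one widely cited RGB skin classifier (a rule for uniform daylight illumination, due to Peer et al.) can be sketched as:

```python
def is_skin_rgb(r, g, b):
    """Classify a pixel as skin using an explicit RGB rule (uniform
    daylight variant): skin is reddish, moderately saturated, and
    red-dominant over green and blue."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)
```

Such explicit rules are fast and training-free, at the cost of being tuned to specific illumination conditions, which is one of the trade-offs a pixel-based survey compares against parametric and non-parametric color models.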
