Coronal mass ejections from the same active region cluster: Two different perspectives
The cluster formed by active regions (ARs) NOAA 11121 and 11123,
approximately located on the solar central meridian on 11 November 2010, is of
great scientific interest. This complex was the site of violent flux emergence
and the source of a series of Earth-directed events on the same day. The onset
of the events was nearly simultaneously observed by the Atmospheric Imaging
Assembly (AIA) telescope aboard the Solar Dynamics Observatory (SDO) and the
Extreme-Ultraviolet Imagers (EUVI) on the Sun-Earth Connection Coronal and
Heliospheric Investigation (SECCHI) suite of telescopes onboard the
Solar-Terrestrial Relations Observatory (STEREO) twin spacecraft. The
progression of these events in the low corona was tracked by the Large Angle
Spectroscopic Coronagraphs (LASCO) onboard the Solar and Heliospheric
Observatory (SOHO) and the SECCHI/COR coronagraphs on STEREO. SDO and SOHO
imagers provided data from the Earth's perspective, whilst the STEREO twin
instruments procured images from the orthogonal directions. This spatial
configuration of spacecraft allowed optimum simultaneous observations of the AR
cluster and the coronal mass ejections that originated in it. Quadrature
coronal observations provided by STEREO revealed a notably larger number of
ejective events than those detected from Earth's perspective.
Furthermore, joint observations by SDO/AIA and STEREO/SECCHI EUVI of the source
region indicate that all events classified by GOES as X-ray flares had an
ejective coronal counterpart in quadrature observations. These results have a
direct impact on current space weather forecasting because of probable missed
alarms when solar observations are lacking in a view direction perpendicular
to the Sun-Earth line.
Comment: Solar Physics - Accepted for publication 2015-Apr-25. v2: Corrected metadata.
Similarity regularized sparse group lasso for cup to disc ratio computation
© 2017 Optical Society of America. Automatic cup to disc ratio (CDR) computation from color fundus images has been shown to be promising for glaucoma detection. Over the past decade, many algorithms have been proposed. In this paper, we first review the recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs the testing disc image from a set of reference disc images by integrating the similarity between the testing and reference disc images with the sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The proposed method has been validated on 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under the curve in the diagnostic test reach 0.843 and 0.837 when manually and automatically segmented discs are used, respectively, again better than other methods.
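The reconstruction step described in the abstract can be sketched as a proximal-gradient (ISTA-style) solver for a similarity-weighted group lasso. This is a generic sketch of the technique, not the paper's exact formulation: the dictionary `A`, the group structure, the per-group similarity weights, and the reference CDR values below are all hypothetical.

```python
import numpy as np

def group_soft_threshold(v, thresh):
    """Block soft-thresholding: the proximal operator of the group-lasso penalty."""
    norm = np.linalg.norm(v)
    if norm <= thresh:
        return np.zeros_like(v)
    return (1.0 - thresh / norm) * v

def sgl_reconstruct(y, A, groups, weights, lam=0.1, n_iter=500):
    """Reconstruct the test vector y from reference columns of A, with a
    group-lasso penalty weighted per group (here, `weights` would encode
    dissimilarity between the test image and each group of references,
    so dissimilar groups are penalized more heavily)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of 0.5 * ||A x - y||^2
        z = x - step * grad
        for g, w in zip(groups, weights):    # proximal step, one group at a time
            x[g] = group_soft_threshold(z[g], step * lam * w)
    return x

# Toy example: y lies in the span of the first group of reference columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))
y = A[:, :2] @ np.array([1.0, -0.5])
groups = [np.arange(0, 3), np.arange(3, 6)]
x = sgl_reconstruct(y, A, groups, weights=[1.0, 1.0], lam=0.5)

# Estimate the test CDR as a coefficient-weighted average of (hypothetical)
# reference CDRs, mirroring the idea of reusing reconstruction coefficients.
cdrs = np.array([0.3, 0.35, 0.4, 0.6, 0.65, 0.7])
cdr_est = cdrs @ np.abs(x) / np.abs(x).sum()
```

With a group-wise penalty, whole groups of dissimilar references tend to be zeroed out together, which is the point of grouping over a plain lasso.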
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz), resulting in
reduced motion blur. Hence, event cameras have large potential for robotics and
computer vision in scenarios that are challenging for traditional cameras, such
as high-speed and high-dynamic-range scenes. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle to the actual sensors that
are available and the tasks they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
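The event stream described above (time, location, sign of brightness change) can be illustrated with a minimal sketch. The data and the `accumulate` helper below are hypothetical, not a real sensor API; summing signed event polarities over a time window is one common way to build a frame-like representation from the asynchronous stream.

```python
import numpy as np

# Each event is (timestamp_us, x, y, polarity), polarity in {+1, -1}.
# Hypothetical toy data, not output of a real sensor driver.
events = np.array([
    (10, 2, 3, +1),
    (25, 2, 3, +1),
    (40, 5, 1, -1),
    (55, 2, 3, -1),
], dtype=[("t", "i8"), ("x", "i4"), ("y", "i4"), ("p", "i4")])

def accumulate(events, width, height, t0, t1):
    """Sum event polarities per pixel over the time window [t0, t1)."""
    frame = np.zeros((height, width), dtype=np.int32)
    window = events[(events["t"] >= t0) & (events["t"] < t1)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])  # unbuffered add
    return frame

frame = accumulate(events, width=8, height=8, t0=0, t1=50)
print(frame[3, 2])  # two +1 events at (x=2, y=3) fall in [0, 50) -> 2
print(frame[1, 5])  # one -1 event at (x=5, y=1) -> -1
```

Note the asymmetry with a conventional camera: pixels with no brightness change produce no events at all, so most of `frame` stays zero.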
Digital ocular fundus imaging: a review
Ocular fundus imaging plays a key role in monitoring the health status of the human eye. Currently, a large number of imaging modalities allow the assessment and/or quantification of ocular changes from a healthy status. This review focuses on the main digital fundus imaging modality, color fundus photography, with a brief overview of complementary techniques, such as fluorescein angiography. While focusing on two-dimensional color fundus photography, the authors address the evolution from nondigital to digital imaging and its impact on diagnosis. They also compare several studies performed along the transitional path of this technology. Retinal image processing and analysis, automated disease detection and identification of the stage of diabetic retinopathy (DR) are addressed as well. The authors emphasize the problems of image segmentation, focusing on the major landmark structures of the ocular fundus: the vascular network, optic disk and the fovea. Several proposed approaches for the automatic detection of signs of disease onset and progression, such as microaneurysms, are surveyed. A thorough comparison is conducted among different studies with regard to the number of eyes/subjects, imaging modality, fundus camera used, field of view and image resolution to identify the large variation in characteristics from one study to another. Similarly, the main features of the proposed classifications and algorithms for the automatic detection of DR are compared, thereby addressing computer-aided diagnosis and computer-aided detection for use in screening programs.
Fundação para a Ciência e Tecnologia; FEDER; Programa COMPET
Smoothness assumptions in human and machine vision, and their implications for optimal surface interpolation
In this paper we shall examine what smoothness assumptions are made about object surfaces, object motion, and image intensities. We begin by looking into the physiological limits of vision and how these might influence our perception of smoothness. We then look at a sampling of the computer vision and psychology literature, inferring smoothness constraints from the mathematical assumptions tacitly presumed by researchers. This look at computer vision and the psychology of vision is not meant to be an inclusive study, but rather representative of the assumptions made, and in part representative of the mathematical models used therein. We shall conclude that the prevalent assumptions are that surfaces, motion, and intensity images are functions in C2, C1, and C2, respectively. In the latter portion of this paper we examine one use of explicit smoothness assumptions in the definition of an existing method for obtaining "optimal" surface interpolations. We briefly introduce the nomenclature of information-based complexity, originated by Traub, Wozniakowski, and their colleagues, which is the mathematical machinery used in obtaining these "optimal" surfaces. This theory requires that we know the class of functions from which our desired surface comes, and part of the definition of a class is its degree of smoothness. We then survey many possible classes for the visual interpolation problem of two-dimensional surfaces, and state formulas from which one can obtain the optimal surface interpolating given depth data.
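The surface-interpolation problem the abstract closes with can be illustrated by a generic smoothness-penalized least-squares sketch, reduced to one dimension for brevity. This is not the information-based-complexity formulation the paper uses; it only shows how a discrete C2-style smoothness penalty (squared second differences) trades off against fitting sparse depth samples. All values below are toy data.

```python
import numpy as np

# Choose grid values z minimizing  ||S z - d||^2 + lam * ||D2 z||^2,
# where S samples the grid at observed locations and D2 takes second differences.
n = 21
sample_idx = np.array([0, 10, 20])      # grid points where depth was observed
depth = np.array([0.0, 1.0, 0.0])       # toy observed depths

S = np.zeros((len(sample_idx), n))
S[np.arange(len(sample_idx)), sample_idx] = 1.0

D2 = np.zeros((n - 2, n))               # discrete second-difference operator
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]

lam = 1e-3
# Solve the normal equations of the penalized least-squares problem.
z = np.linalg.solve(S.T @ S + lam * D2.T @ D2, S.T @ depth)
```

With small `lam` the result behaves like a natural interpolating spline (it nearly passes through the data); increasing `lam` trades data fidelity for smoothness, which is exactly the modeling choice the smoothness class (C2 vs. C1, etc.) formalizes.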