
    Coronal mass ejections from the same active region cluster: Two different perspectives

    The cluster formed by active regions (ARs) NOAA 11121 and 11123, approximately located on the solar central meridian on 11 November 2010, is of great scientific interest. This complex was the site of violent flux emergence and the source of a series of Earth-directed events on the same day. The onset of the events was nearly simultaneously observed by the Atmospheric Imaging Assembly (AIA) telescope aboard the Solar Dynamics Observatory (SDO) and the Extreme-Ultraviolet Imagers (EUVI) on the Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI) suite of telescopes onboard the Solar-Terrestrial Relations Observatory (STEREO) twin spacecraft. The progression of these events in the low corona was tracked by the Large Angle Spectroscopic Coronagraphs (LASCO) onboard the Solar and Heliospheric Observatory (SOHO) and the SECCHI/COR coronagraphs on STEREO. SDO and SOHO imagers provided data from the Earth's perspective, whilst the STEREO twin instruments procured images from the orthogonal directions. This spatial configuration of spacecraft allowed optimum simultaneous observations of the AR cluster and the coronal mass ejections that originated in it. Quadrature coronal observations provided by STEREO revealed a notably large number of ejective events compared to those detected from Earth's perspective. Furthermore, joint observations by SDO/AIA and STEREO/SECCHI EUVI of the source region indicate that all events classified by GOES as X-ray flares had an ejective coronal counterpart in quadrature observations. These results have a direct impact on current space weather forecasting because of the alarms likely to be missed when there is a lack of solar observations in a view direction perpendicular to the Sun-Earth line.
    Comment: Accepted for publication in Solar Physics, 2015-Apr-25 (v2: corrected metadata)

    Similarity regularized sparse group lasso for cup to disc ratio computation

    © 2017 Optical Society of America. Automatic cup to disc ratio (CDR) computation from color fundus images has been shown to be promising for glaucoma detection. Over the past decade, many algorithms have been proposed. In this paper, we first review the recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs the testing disc image from a set of reference disc images by integrating the similarity between the testing and reference disc images with the sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The proposed method has been validated on 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under the curve in the diagnostic test reach 0.843 and 0.837 when manual and automatically segmented discs are used, respectively, again better than other methods.
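    The reconstruction-based idea in this abstract can be illustrated with a minimal sketch: express the test image's feature vector as a weighted combination of reference images, then use the (normalized) coefficients to average the known reference CDRs. Plain least squares stands in for the paper's similarity-regularized sparse group lasso penalty, and all names, shapes, and values below are illustrative assumptions, not the paper's actual data or algorithm.

```python
import numpy as np

# Hypothetical feature vectors of two reference disc images (columns of R)
# and their manually annotated CDRs.
R = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # 3 features x 2 reference images
ref_cdrs = np.array([0.3, 0.7])

y = np.array([0.5, 0.5, 1.0])       # feature vector of the test image

# Plain least-squares stand-in for the sparse group lasso reconstruction
# (the paper additionally penalizes w for sparsity and group structure).
w, *_ = np.linalg.lstsq(R, y, rcond=None)
w = np.clip(w, 0.0, None)           # keep nonnegative contributions
w = w / w.sum()                     # normalize coefficients to weights

cdr_estimate = float(w @ ref_cdrs)  # coefficient-weighted CDR
print(cdr_estimate)                 # 0.5 for this toy data
```

With a real sparse group lasso solver, most coefficients would be driven to zero, so the estimate would draw only on the most similar reference discs.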

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
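    The event stream described in this abstract can be made concrete with a small sketch: each event is a tuple (timestamp, x, y, polarity), and a common first processing step is to accumulate polarities per pixel into an "event frame". The event values and frame size below are illustrative assumptions, not from any particular sensor.

```python
import numpy as np

# Hypothetical event stream: (timestamp_us, x, y, polarity),
# where polarity is +1 for a brightness increase and -1 for a decrease.
events = [
    (10, 2, 3, +1),
    (15, 2, 3, +1),
    (20, 4, 1, -1),
]

def accumulate_events(events, width, height):
    """Sum event polarities per pixel to form a simple event frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        frame[y, x] += p
    return frame

frame = accumulate_events(events, width=5, height=5)
# Pixel (x=2, y=3) saw two positive events; pixel (x=4, y=1) one negative.
```

This fixed-window accumulation discards the microsecond timing that makes the sensor attractive; the surveyed methods instead operate on events asynchronously or use time-aware representations such as time surfaces.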

    Digital ocular fundus imaging: a review

    Ocular fundus imaging plays a key role in monitoring the health status of the human eye. Currently, a large number of imaging modalities allow the assessment and/or quantification of ocular changes from a healthy status. This review focuses on the main digital fundus imaging modality, color fundus photography, with a brief overview of complementary techniques, such as fluorescein angiography. While focusing on two-dimensional color fundus photography, the authors address the evolution from nondigital to digital imaging and its impact on diagnosis. They also compare several studies performed along the transitional path of this technology. Retinal image processing and analysis, automated disease detection and identification of the stage of diabetic retinopathy (DR) are addressed as well. The authors emphasize the problems of image segmentation, focusing on the major landmark structures of the ocular fundus: the vascular network, optic disk and the fovea. Several proposed approaches for the automatic detection of signs of disease onset and progression, such as microaneurysms, are surveyed. A thorough comparison is conducted among different studies with regard to the number of eyes/subjects, imaging modality, fundus camera used, field of view and image resolution to identify the large variation in characteristics from one study to another. Similarly, the main features of the proposed classifications and algorithms for the automatic detection of DR are compared, thereby addressing computer-aided diagnosis and computer-aided detection for use in screening programs.
    Funding: Fundação para a Ciência e Tecnologia; FEDER; Programa COMPET
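    The segmentation of landmark structures discussed in this review feeds directly into measurements such as the cup-to-disc ratio used elsewhere on this page. As a minimal illustration, the vertical CDR can be computed from binary cup and disc segmentation masks as the ratio of their vertical extents; the masks and function below are a hypothetical sketch, not an algorithm from the review.

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: ratio of the vertical extents of
    boolean cup and disc segmentation masks."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]
        return rows.max() - rows.min() + 1 if rows.size else 0
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)

# Toy masks on a 10x10 grid: disc spans 6 rows, cup spans 3 rows.
disc = np.zeros((10, 10), dtype=bool)
disc[2:8, 2:8] = True
cup = np.zeros((10, 10), dtype=bool)
cup[3:6, 3:6] = True

print(vertical_cdr(cup, disc))  # 0.5
```

In practice the accuracy of such a ratio is dominated by the quality of the underlying disc and cup segmentations, which is why the review devotes so much attention to segmentation of fundus landmarks.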

    Synchronization and calibration of a stereo vision system

    Get PDF