
    Timeline analysis and wavelet multiscale analysis of the AKARI All-Sky Survey at 90 micron

    We present a careful analysis of the point source detection limit of the AKARI All-Sky Survey in the WIDE-S 90 μm band near the North Ecliptic Pole (NEP). Timeline Analysis is used to detect IRAS sources, and a conversion factor is then derived to transform the peak timeline signal into the interpolated 90 μm flux of a source. Combined with a robust noise measurement, the point source flux detection limit at S/N > 5 for a single detector row is 1.1 ± 0.1 Jy, which corresponds to a point source detection limit of ~0.4 Jy for the survey. The wavelet transform offers a multiscale representation of the Time Series Data (TSD). We calculate the continuous wavelet transform of the TSD and then search for significant wavelet coefficients, which are treated as potential source detections. To discriminate real sources from spurious or moving objects, only sources with confirmation are selected. In our multiscale analysis, IRAS sources selected above 4σ can be identified as the only real sources at the Point Source Scales. We also investigate the correlation between the non-IRAS sources detected in Timeline Analysis and cirrus emission, using the wavelet transform and contour plots of the wavelet power spectrum. We show that the non-IRAS sources are most likely caused by excessive noise over a large range of spatial scales rather than by real extended structures such as cirrus clouds.
    Comment: 16 pages, 19 figures, 5 tables; accepted for publication in MNRAS
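    The multiscale step described above (a continuous wavelet transform of the time-series data, followed by a per-scale significance cut on the coefficients) can be sketched in pure NumPy. This is a toy illustration only, not the paper's pipeline: the Ricker wavelet, the scale grid, and the MAD-based nσ threshold are our assumptions.

    ```python
    import numpy as np

    def ricker(points, width):
        """Ricker ("Mexican hat") wavelet sampled at `points` positions."""
        t = np.arange(points) - (points - 1) / 2.0
        a = t / width
        return (1.0 - a ** 2) * np.exp(-a ** 2 / 2.0)

    def cwt_detect(signal, widths, nsigma=4.0):
        """CWT of a 1-D time series plus a per-scale significance threshold.

        Returns a dict mapping each scale to the sample indices whose wavelet
        coefficient exceeds `nsigma` times that scale's robust (MAD) scatter.
        """
        signal = np.asarray(signal, dtype=float)
        detections = {}
        for w in widths:
            kernel = ricker(min(10 * w, len(signal)), w)
            # Convolution with the wavelet kernel = one row of the CWT.
            coeffs = np.convolve(signal, kernel, mode="same")
            # Robust scatter: MAD scaled to a Gaussian sigma.
            sigma = 1.4826 * np.median(np.abs(coeffs - np.median(coeffs)))
            detections[w] = np.where(coeffs > nsigma * sigma)[0]
        return detections
    ```

    A point source appears as a significant coefficient at the small (point-source) scales across several adjacent samples, whereas broadband noise or extended emission spreads its power over a wide range of scales.
    
    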

    A Framework for Symmetric Part Detection in Cluttered Scenes

    The role of symmetry in computer vision has waxed and waned in importance during the evolution of the field from its earliest days. At first figuring prominently in support of bottom-up indexing, it fell out of favor as shape gave way to appearance and recognition gave way to detection. With a strong prior in the form of a target object, the role of the weaker priors offered by perceptual grouping was greatly diminished. However, as the field returns to the problem of recognition from a large database, the bottom-up recovery of the parts that make up the objects in a cluttered scene is critical for their recognition. The medial axis community has long exploited the ubiquitous regularity of symmetry as a basis for the decomposition of a closed contour into medial parts. However, today's recognition systems are faced with cluttered scenes, and the assumption that a closed contour exists, i.e. that figure-ground segmentation has been solved, renders much of the medial axis community's work inapplicable. In this article, we review a computational framework, previously reported in Lee et al. (2013), Levinshtein et al. (2009, 2013), that bridges the representation power of the medial axis and the need to recover and group an object's parts in a cluttered scene. Our framework is rooted in the idea that a maximally inscribed disc, the building block of a medial axis, can be modeled as a compact superpixel in the image. We evaluate the method on images of cluttered scenes.
    Comment: 10 pages, 8 figures

    S4Net: Single Stage Salient-Instance Segmentation

    We consider an interesting problem, salient instance segmentation, in this paper. Beyond producing bounding boxes, our network also outputs high-quality instance-level segments. Taking into account the category-independent property of each target, we design a single-stage salient instance segmentation framework with a novel segmentation branch. Our new branch regards not only the local context inside each detection window but also its surrounding context, enabling us to distinguish instances in the same scope even under occlusion. Our network is end-to-end trainable and runs at a fast speed (40 fps when processing an image with resolution 320x320). We evaluate our approach on a publicly available benchmark and show that it outperforms alternative solutions. We also provide a thorough analysis of the design choices to help readers better understand the function of each part of our network. The source code can be found at \url{https://github.com/RuochenFan/S4Net}

    Methods for Detection and Correction of Sudden Pixel Sensitivity Drops

    PDC 8.0 implements a new algorithm to detect and correct step discontinuities appearing in roughly one of every twenty stellar light curves during a given quarter. An example of such a discontinuity in an actual light curve is shown in fig. 1. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5% in summed apertures) in quantum efficiency, though a partial exponential recovery is often observed. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets, they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors, and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions, and their successful removal not only rectifies the flux values of affected targets but demonstrably improves the overall performance of PDC detrending.
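    The basic detect-and-correct idea (flag an abnormally large jump between adjacent smoothed flux samples, then offset everything after the step) can be sketched as follows. This is a toy illustration, not the actual PDC algorithm: the median-filter window, the 5σ MAD threshold, and the single-step assumption are ours, and the real implementation also models the partial exponential recovery noted above.

    ```python
    import numpy as np

    def detect_and_correct_spsd(flux, window=5, threshold=5.0):
        """Detect one sudden step discontinuity in a flux series and remove it.

        A step is flagged where the jump between adjacent median-smoothed
        samples exceeds `threshold` times the robust scatter of the
        point-to-point differences. Returns (corrected_flux, step_index),
        with step_index None when no step is found.
        """
        flux = np.asarray(flux, dtype=float)
        # Median-smooth to suppress single-point outliers (cosmic-ray hits).
        pad = window // 2
        padded = np.pad(flux, pad, mode="edge")
        smooth = np.array(
            [np.median(padded[i:i + window]) for i in range(len(flux))]
        )
        diffs = np.diff(smooth)
        # Robust scatter: MAD scaled to a Gaussian sigma.
        sigma = 1.4826 * np.median(np.abs(diffs - np.median(diffs)))
        candidates = np.where(np.abs(diffs) > threshold * sigma)[0]
        if len(candidates) == 0:
            return flux, None
        step_at = candidates[0] + 1            # first sample after the drop
        step_size = smooth[step_at] - smooth[step_at - 1]
        corrected = flux.copy()
        corrected[step_at:] -= step_size       # undo the offset after the step
        return corrected, step_at
    ```

    Because SPSDs are uncorrelated across targets, a per-target correction like this has to run before any ensemble-based detrending that relies on cross-target correlations.
    
    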

    A Survey on Deep Learning-based Architectures for Semantic Segmentation on 2D images

    Semantic segmentation is the pixel-wise labelling of an image. Since the problem is defined at the pixel level, determining image class labels alone is not sufficient; localising them at the original image pixel resolution is necessary. Boosted by the extraordinary ability of convolutional neural networks (CNNs) to create semantic, high-level and hierarchical image features, a large number of deep learning-based 2D semantic segmentation approaches have been proposed within the last decade. In this survey, we focus mainly on recent scientific developments in semantic segmentation, specifically on deep learning-based methods using 2D images. We start with an analysis of the public image sets and leaderboards for 2D semantic segmentation, with an overview of the techniques employed in performance evaluation. In examining the evolution of the field, we chronologically categorise the approaches into three main periods, namely the pre- and early deep learning era, the fully convolutional era, and the post-FCN era. We technically analyse the solutions put forward in terms of solving the fundamental problems of the field, such as fine-grained localisation and scale invariance. Before drawing our conclusions, we present a table of methods from all mentioned eras, with a brief summary of each approach explaining its contribution to the field. We conclude the survey by discussing the current challenges of the field and to what extent they have been solved.
    Comment: Updated with new studies