6,355 research outputs found

    Point-wise mutual information-based video segmentation with high temporal consistency

    In this paper, we tackle the problem of temporally consistent boundary detection and hierarchical segmentation in videos. While finding the best high-level reasoning of region assignments in videos is the focus of much recent research, temporal consistency in boundary detection has so far only rarely been tackled. We argue that temporally consistent boundaries are a key component of temporally consistent region assignment. The proposed method is based on the point-wise mutual information (PMI) of spatio-temporal voxels. Temporal consistency is established by an evaluation of PMI-based point affinities in the spectral domain over space and time. Thus, the proposed method is independent of any optical flow computation or previously learned motion models. The proposed low-level video segmentation method outperforms the learning-based state of the art in terms of standard region metrics.
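As a rough illustration of the core quantity (not the authors' implementation), point-wise mutual information compares the joint probability of two feature values with the product of their marginals, PMI(a, b) = log p(a, b) / (p(a) p(b)). The sketch below computes a PMI affinity matrix from hypothetical co-occurrence counts of quantized voxel features:

```python
import numpy as np

def pmi_affinity(joint_counts, eps=1e-12):
    """PMI(a, b) = log p(a, b) / (p(a) p(b)) from a co-occurrence count matrix.

    joint_counts[i, j] = number of times quantized feature values i and j
    co-occur in neighboring spatio-temporal voxels (illustrative setup).
    """
    joint = joint_counts / joint_counts.sum()        # p(a, b)
    pa = joint.sum(axis=1, keepdims=True)            # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)            # marginal p(b)
    return np.log((joint + eps) / (pa * pb + eps))   # PMI matrix

# Values that co-occur more often than chance get positive affinity.
counts = np.array([[40.0, 5.0], [5.0, 50.0]])
A = pmi_affinity(counts)
```

High positive entries of such an affinity matrix indicate feature pairs likely to lie in the same region; the paper then analyzes these affinities spectrally over space and time.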

    DISCOVERY OF RR LYRAE STARS IN THE NUCLEAR BULGE OF THE MILKY WAY

    Indexed in: Web of Science. Galactic nuclei, such as that of the Milky Way, are extreme regions with high stellar densities and, in most cases, the hosts of a supermassive black hole. One of the scenarios proposed for the formation of the Galactic nucleus is the merging of primordial globular clusters. An implication of this model is that this region should host stars that are characteristically found in old Milky Way globular clusters. RR Lyrae stars are primary distance indicators, well-known representatives of old and metal-poor stellar populations, and are therefore regularly found in globular clusters. Here we report the discovery of a dozen RR Lyrae type ab stars in the vicinity of the Galactic center, i.e., in the so-called nuclear stellar bulge of the Milky Way. This discovery provides the first direct observational evidence that the Galactic nuclear stellar bulge contains ancient stars (>10 Gyr old). Based on this, we conclude that merging globular clusters likely contributed to the build-up of the high stellar density in the nuclear stellar bulge of the Milky Way. http://iopscience.iop.org/article/10.3847/2041-8205/830/1/L14/meta

    Anveshak - A Groundtruth Generation Tool for Foreground Regions of Document Images

    In this paper, we propose a groundtruth generation tool based on a graphical user interface. Annotation of an input document image is done based on the foreground pixels. Foreground pixels are grouped together with user interaction to form labeling units. These units are then labeled by the user with user-defined labels. The output produced by the tool is an image together with an XML file containing its metadata. This annotated data can be further used in different applications of document image analysis. Comment: Accepted in DAR 201
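As an illustration of the kind of XML metadata such a tool might emit, the following sketch serializes labeled units with Python's standard library. The element and attribute names (`groundtruth`, `unit`, `bbox`) are assumptions for illustration, not the tool's actual schema:

```python
import xml.etree.ElementTree as ET

def groundtruth_xml(units):
    """Serialize labeled foreground units to an XML string.

    units: list of (unit_id, label, (x, y, w, h)) tuples — a hypothetical
    layout; the real tool's schema may differ.
    """
    root = ET.Element("groundtruth")
    for uid, label, (x, y, w, h) in units:
        unit = ET.SubElement(root, "unit", id=str(uid), label=label)
        ET.SubElement(unit, "bbox", x=str(x), y=str(y), w=str(w), h=str(h))
    return ET.tostring(root, encoding="unicode")

xml_text = groundtruth_xml([(0, "text", (10, 20, 100, 15))])
```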

    Video enhancement using adaptive spatio-temporal connective filter and piecewise mapping

    This paper presents a novel video enhancement system based on an adaptive spatio-temporal connective (ASTC) noise filter and an adaptive piecewise mapping function (APMF). For ill-exposed videos or those with much noise, we first introduce a novel local image statistic to identify impulse-noise pixels, and then incorporate it into the classical bilateral filter to form ASTC, aiming to reduce the two most common types of noise, Gaussian and impulse noise, in the spatial and temporal directions. After noise removal, we enhance the video contrast with APMF based on the statistical information of frame segmentation results. The experimental results demonstrate that, for diverse low-quality videos corrupted by mixed noise, underexposure, overexposure, or any mixture of the above, the proposed system can automatically produce satisfactory results.
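A generic local statistic for flagging impulse-noise pixels, standing in for (not reproducing) the paper's own statistic, compares each pixel with the median of its neighborhood; pixels far from the local median are likely impulses:

```python
import numpy as np

def impulse_mask(img, radius=1, thresh=40.0):
    """Flag pixels that deviate strongly from their local median.

    A generic impulse-noise detector for illustration; the paper's
    actual local statistic is not specified here.
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = abs(img[i, j] - np.median(win)) > thresh
    return out

# A flat 10-valued frame with a single 255 impulse.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0
mask = impulse_mask(img)
```

In a filter like ASTC, such a mask would decide which pixels the bilateral weights should ignore when averaging over space and time.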

    Coordination of Mobile Mules via Facility Location Strategies

    In this paper, we study the problem of wireless sensor network (WSN) maintenance using mobile entities called mules. The mules are deployed in the area of the WSN in such a way that would minimize the time it takes them to reach a failed sensor and fix it. The mules must constantly optimize their collective deployment to account for occupied mules. The objective is to define the optimal deployment and task allocation strategy for the mules, so that the sensors' downtime and the mules' traveling distance are minimized. Our solutions are inspired by research in the field of computational geometry, and the design of our algorithms is based on state-of-the-art approximation algorithms for the classical problem of facility location. Our empirical results demonstrate how cooperation enhances the team's performance, and indicate that a combination of k-Median-based deployment with closest-available task allocation provides the best results in terms of minimizing the sensors' downtime, but is inefficient in terms of the mules' travel distance. A k-Centroid-based deployment produces good results in both criteria. Comment: 12 pages, 6 figures, conference
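The closest-available task allocation rule mentioned above can be sketched as follows; the data layout (mule positions as 2D points, a set of busy mule ids) is an assumption for illustration:

```python
import math

def closest_available(mules, failed_sensor, busy):
    """Assign the nearest idle mule to a failed sensor (closest-available rule).

    mules: {mule_id: (x, y)} positions; busy: set of occupied mule ids.
    Returns the chosen mule id, or None if every mule is busy.
    """
    idle = {m: p for m, p in mules.items() if m not in busy}
    if not idle:
        return None
    # Pick the idle mule minimizing Euclidean distance to the failure.
    return min(idle, key=lambda m: math.dist(idle[m], failed_sensor))

mules = {0: (0.0, 0.0), 1: (5.0, 5.0), 2: (9.0, 0.0)}
```

The deployment half of the strategy (k-Median or k-Centroid placement of the idle mules) would periodically recompute the positions in `mules` as tasks arrive and complete.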

    Star Formation Rate Indicators in Wide-Field Infrared Survey Preliminary Release

    With the goal of investigating the degree to which the MIR luminosity in the Wide-field Infrared Survey Explorer (WISE) traces the SFR, we analyze 3.4, 4.6, 12 and 22 μm data in a sample of ~140,000 star-forming galaxies or star-forming regions covering a wide range in metallicity, 7.66 < 12 + log(O/H) < 9.46, with redshift z < 0.4. These star-forming galaxies or star-forming regions are selected by matching the WISE Preliminary Release Catalog with the star-forming galaxy catalog in SDSS DR8 provided by JHU/MPA. We study the relationship between the luminosity at 3.4, 4.6, 12 and 22 μm from WISE and the Hα luminosity in SDSS DR8. From these comparisons, we derive reference SFR indicators for use in our analysis. Linear correlations between SFR and the 3.4, 4.6, 12 and 22 μm luminosity are found, and calibrations of SFRs based on L(3.4), L(4.6), L(12) and L(22) are proposed. The calibrations hold for galaxies with verified spectral observations. The dispersion in the relation between 3.4, 4.6, 12 and 22 μm luminosity and SFR relates to the galaxy's properties, such as the 4000 Å break and galaxy color. Comment: 10 pages, 3 figures
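A linear calibration of the form log SFR = a·log L + b can be fit by least squares; the numbers below are made up for illustration and are not the paper's calibration coefficients:

```python
import numpy as np

# Hypothetical log luminosities (erg/s) and Halpha-based log SFRs;
# the real calibration comes from the WISE/SDSS sample in the paper.
logL = np.array([42.0, 42.5, 43.0, 43.5, 44.0])
logSFR = np.array([-0.5, 0.0, 0.5, 1.0, 1.5])

# Least-squares fit of log SFR = a * log L + b.
slope, intercept = np.polyfit(logL, logSFR, 1)
```

The scatter of the residuals around such a fit is what the abstract attributes to galaxy properties like the 4000 Å break and color.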

    Gauge-Higgs Unification In Spontaneously Created Fuzzy Extra Dimensions

    We propose gauge-Higgs unification in fuzzy extra dimensions as a possible solution to the Higgs naturalness problem. In our approach, the fuzzy extra dimensions are created spontaneously as a vacuum solution of a certain four-dimensional gauge theory. As an example, we construct a model which has a fuzzy torus as its vacuum. The Higgs field in our model is associated with the Wilson loop wrapped on the fuzzy torus. We show that the quadratic divergence in the mass of the Higgs field in the one-loop effective potential is absent. We then argue, based on symmetries, that the quantum corrections to the Higgs mass are suppressed, including all loop contributions. We also consider a realization on the worldvolume theory of D3-branes probing a C^3/(Z_N × Z_N) orbifold with discrete torsion. Comment: 1+38 pages, 4 figures; v2: refs added

    THE HIGH CADENCE TRANSIENT SURVEY (HITS). I. SURVEY DESIGN AND SUPERNOVA SHOCK BREAKOUT CONSTRAINTS

    Indexed in: Web of Science; Scopus. We present the first results of the High Cadence Transient Survey (HiTS), a survey whose objective is to detect and follow up optical transients with characteristic timescales from hours to days, especially the earliest hours of supernova (SN) explosions. HiTS uses the Dark Energy Camera and a custom pipeline for image subtraction, candidate filtering and candidate visualization, which runs in real time to be able to react rapidly to new transients. We discuss the survey design, the technical challenges associated with the real-time analysis of these large volumes of data, and our first results. In our 2013, 2014, and 2015 campaigns, we detected more than 120 young SN candidates, but we did not find a clear signature from the short-lived SN shock breakouts (SBOs) originating after the core collapse of red supergiant stars, which was the initial science aim of this survey. Using the empirical distribution of limiting magnitudes from our observational campaigns, we measured the expected recovery fraction of randomly injected SN light curves, which included SBO optical peaks produced with models from Tominaga et al. (2011) and Nakar & Sari (2010). From this analysis, we cannot rule out the models from Tominaga et al. (2011) under any reasonable distribution of progenitor masses, but we can marginally rule out the brighter and longer-lived SBO models from Nakar & Sari (2010) under our best-guess distribution of progenitor masses. Finally, we highlight the implications of this work for future massive data sets produced by astronomical observatories, such as LSST. http://iopscience.iop.org/article/10.3847/0004-637X/832/2/155/meta

    A novel image compression algorithm for high resolution 3D reconstruction

    This research presents a novel algorithm to compress high-resolution images for accurate structured-light 3D reconstruction. Structured-light images contain a pattern of light and shadows projected on the surface of the object, which is captured by the sensor at very high resolution. Our algorithm is concerned with compressing such images to a high degree with minimum loss, without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes an image into four sub-bands. The LL sub-band is transformed by a DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by a DCT, while the Minimize-Matrix-Size algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.
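The first step, a single-level DWT splitting the image into four sub-bands, can be sketched with a Haar wavelet; this is a simplified stand-in, since the abstract does not specify which wavelet the authors use:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT on an even-sized image.

    Returns the four sub-bands (LL, HL, LH, HH): approximation,
    horizontal, vertical, and diagonal detail.
    """
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 4.0   # approximation (LL)
    hl = (a - b + c - d) / 4.0   # horizontal detail (HL)
    lh = (a + b - c - d) / 4.0   # vertical detail (LH)
    hh = (a - b - c + d) / 4.0   # diagonal detail (HH)
    return ll, hl, lh, hh

ll, hl, lh, hh = haar_dwt2(np.full((4, 4), 8.0))
```

On a constant image all the detail bands vanish, which is why most of the compressible information ends up concentrated in the LL band that the pipeline then passes to the DCT.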

    A novel 2D image compression algorithm based on two levels DWT and DCT transforms with enhanced minimize-matrix-size algorithm for high resolution structured light 3D surface reconstruction

    Image compression techniques are widely used in 2D and 3D image and video sequences. There are many types of compression techniques, among the most popular being JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level Discrete Wavelet Transform (DWT) and a two-level Discrete Cosine Transform (DCT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: 1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-matrix and the AC-matrix, i.e., the low- and high-frequency matrices respectively; 2) apply a second-level DCT to the DC-matrix to generate two arrays, namely a nonzero-array and a zero-array; 3) apply the Minimize-Matrix-Size (MMS) algorithm to the AC-matrix and to the other high frequencies generated by the second-level DWT; 4) apply arithmetic coding to the output of the previous steps. A novel Fast-Match-Search (FMS) decompression algorithm is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed-data probabilities by using a table of data, and then uses binary search to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients into one matrix, followed by an inverse two-level DCT with a two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D RMSE following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
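The binary-search lookup at the heart of the FMS decompression step can be sketched as follows; the table layout (sorted compressed keys mapped to decoded values) is an assumed simplification of what the paper describes:

```python
import bisect

def fms_lookup(table, value):
    """Binary-search a table of (compressed key, decoded data) pairs.

    table must be sorted by key; loosely mirrors the Fast-Match-Search
    idea of locating decompressed data in a precomputed table.
    """
    keys = [k for k, _ in table]
    i = bisect.bisect_left(keys, value)
    if i < len(keys) and keys[i] == value:
        return table[i][1]
    return None  # no matching entry in the table

table = [(1, "a"), (4, "b"), (9, "c")]
```

Binary search keeps each lookup at O(log n) in the table size, which is what makes a table-driven decoder like this practical for large high-frequency matrices.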