12,384 research outputs found

    Selective Deep Convolutional Features for Image Retrieval

    Full text link
    Convolutional Neural Networks (CNNs) are a powerful approach to extracting discriminative local descriptors for effective image search. Recent work adopts fine-tuning strategies to further improve the discriminative power of the descriptors. Taking a different approach, in this paper we propose a novel framework to achieve competitive retrieval performance. First, we propose various masking schemes, namely SIFT-mask, SUM-mask, and MAX-mask, to select a representative subset of local convolutional features and remove a large number of redundant features. We demonstrate that this effectively addresses the burstiness issue and improves retrieval accuracy. Second, we propose to employ recent embedding and aggregating methods to further enhance feature discriminability. Extensive experiments demonstrate that our proposed framework achieves state-of-the-art retrieval accuracy. Comment: Accepted to ACM MM 201
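The abstract above describes selecting a subset of local convolutional features via masking. As a minimal sketch of one such scheme, the snippet below implements a plausible reading of a MAX-mask: keep only the spatial positions where some channel of the feature map attains its spatial maximum. The function name and the exact selection rule are assumptions for illustration; the paper's actual masks may differ in detail.

```python
import numpy as np

def max_mask(fmap):
    """Sketch of a MAX-mask: keep the spatial positions where at least
    one channel attains its spatial maximum; all other local descriptors
    are treated as redundant and dropped."""
    h, w, c = fmap.shape
    flat = fmap.reshape(-1, c)        # (H*W, C): one descriptor per position
    peaks = flat.argmax(axis=0)       # position of each channel's peak response
    mask = np.zeros(h * w, dtype=bool)
    mask[peaks] = True
    return flat[mask], mask.reshape(h, w)

# toy 7x7 feature map with 64 channels
rng = np.random.default_rng(0)
fmap = rng.random((7, 7, 64))
kept, mask = max_mask(fmap)
```

At most one position per channel survives (fewer when channels share a peak position), so the retained descriptor set is much smaller than the full H*W grid.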

    Masking Strategies for Image Manifolds

    Full text link
    We consider the problem of selecting an optimal mask for an image manifold, i.e., choosing a subset of the pixels of the image that preserves the manifold's geometric structure present in the original data. Such masking implements a form of compressive sensing through emerging imaging sensor platforms for which the power expense grows with the number of pixels acquired. Our goal is for the manifold learned from masked images to resemble its full image counterpart as closely as possible. More precisely, we show that one can indeed accurately learn an image manifold without having to consider a large majority of the image pixels. In doing so, we consider two masking methods that preserve the local and global geometric structure of the manifold, respectively. In each case, the process of finding the optimal masking pattern can be cast as a binary integer program, which is computationally expensive but can be approximated by a fast greedy algorithm. Numerical experiments show that the relevant manifold structure is preserved through the data-dependent masking process, even for modest mask sizes.
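The abstract above mentions approximating a binary integer program with a fast greedy algorithm. The toy sketch below shows the general shape of such a greedy selection: pixels are added one at a time, each chosen to best preserve the pairwise distances between images (a stand-in objective; the paper's actual criterion and scaling are not specified here, so the cost function is purely illustrative).

```python
import numpy as np

def greedy_pixel_mask(images, k):
    """Greedily pick k pixel indices whose values best preserve pairwise
    distances between images. Toy stand-in for the binary-integer-program
    objective described in the abstract; the cost here is illustrative."""
    n, p = images.shape
    # pairwise distances using every pixel: the structure we try to keep
    target = np.linalg.norm(images[:, None] - images[None, :], axis=2)
    chosen = []
    for _ in range(k):
        best, best_err = None, np.inf
        for j in range(p):
            if j in chosen:
                continue
            sub = images[:, chosen + [j]]
            d = np.linalg.norm(sub[:, None] - sub[None, :], axis=2)
            # rescale masked distances before comparing with the full ones
            scale = np.sqrt(p / len(chosen + [j]))
            err = np.abs(scale * d - target).sum()
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
imgs = rng.random((6, 10))      # six "images" of ten pixels each
mask = greedy_pixel_mask(imgs, 3)
```

Each greedy step costs one pass over the remaining pixels, which is what makes it tractable where the exact binary integer program is not.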

    Implementation of robust image artifact removal in SWarp through clipped mean stacking

    Full text link
    We implement an algorithm for detecting and removing artifacts from astronomical images by means of outlier rejection during stacking. Our method is capable of addressing both small, highly significant artifacts such as cosmic rays and, by applying a filtering technique to generate single frame masks, larger area but lower surface brightness features such as secondary (ghost) images of bright stars. In contrast to the common method of building a median stack, the clipped or outlier-filtered mean stacked point-spread function (PSF) is a linear combination of the single frame PSFs as long as the latter are moderately homogeneous, a property of great importance for weak lensing shape measurement or model fitting photometry. In addition, it has superior noise properties, allowing a significant reduction in exposure time compared to median stacking. We make publicly available a modified version of SWarp that implements clipped mean stacking and software to generate single frame masks from the list of outlier pixels. Comment: PASP accepted; software for download at http://www.usm.uni-muenchen.de/~dgruen
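The core idea above, an outlier-clipped mean across a stack of registered frames, can be sketched per pixel as follows. This is a simplified illustration (robust MAD scale estimate, fixed kappa threshold), not the actual SWarp implementation, which also handles weighting and the single-frame mask generation described in the abstract.

```python
import numpy as np

def clipped_mean_stack(frames, kappa=3.0):
    """Per-pixel clipped mean: flag samples deviating more than kappa
    robust sigmas from the per-pixel median across frames, then average
    the surviving samples. Simplified sketch of clipped-mean stacking."""
    frames = np.asarray(frames, dtype=float)
    med = np.median(frames, axis=0)
    # robust scale from the median absolute deviation
    sigma = 1.4826 * np.median(np.abs(frames - med), axis=0)
    keep = np.abs(frames - med) <= kappa * np.maximum(sigma, 1e-12)
    return (frames * keep).sum(axis=0) / keep.sum(axis=0)

# five identical 4x4 frames, one "cosmic ray" hit in frame 2
frames = np.ones((5, 4, 4))
frames[2, 1, 1] = 100.0
stacked = clipped_mean_stack(frames)
```

Unlike a plain mean, the hit pixel is rejected and the stack recovers the underlying value; unlike a median, the result remains a (per-pixel) linear combination of the unflagged frames.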

    Cross-Sender Bit-Mixing Coding

    Full text link
    Scheduling to avoid packet collisions is a long-standing challenge in networking, and has become even trickier in wireless networks with multiple senders and multiple receivers. In fact, researchers have proved that even {\em perfect} scheduling can only achieve $\mathbf{R} = O(\frac{1}{\ln N})$. Here $N$ is the number of nodes in the network, and $\mathbf{R}$ is the {\em medium utilization rate}. Ideally, one would hope to achieve $\mathbf{R} = \Theta(1)$, while avoiding all the complexities in scheduling. To this end, this paper proposes {\em cross-sender bit-mixing coding} ({\em BMC}), which does not rely on scheduling. Instead, users transmit simultaneously on suitably-chosen slots, and the amount of overlap in different users' slots is controlled via coding. We prove that in all possible network topologies, using BMC enables us to achieve $\mathbf{R} = \Theta(1)$. We also prove that the space and time complexities of BMC encoding/decoding are all low-order polynomials. Comment: Published in the International Conference on Information Processing in Sensor Networks (IPSN), 201

    Evidence for the accelerated expansion of the Universe from weak lensing tomography with COSMOS

    Full text link
    We present a tomographic cosmological weak lensing analysis of the HST COSMOS Survey. Applying our lensing-optimized data reduction, principal component interpolation for the ACS PSF, and improved modelling of charge-transfer inefficiency, we measure a lensing signal which is consistent with pure gravitational modes and no significant shape systematics. We carefully estimate the statistical uncertainty from simulated COSMOS-like fields obtained from ray-tracing through the Millennium Simulation. We test our pipeline on simulated space-based data, recalibrate non-linear power spectrum corrections using the ray-tracing, employ photometric redshifts to reduce potential contamination by intrinsic galaxy alignments, and marginalize over systematic uncertainties. We find that the lensing signal scales with redshift as expected from General Relativity for a concordance LCDM cosmology, including the full cross-correlations between different redshift bins. For a flat LCDM cosmology, we measure sigma_8(Omega_m/0.3)^0.51=0.75+-0.08 from lensing, in perfect agreement with WMAP-5, yielding joint constraints Omega_m=0.266+0.025-0.023, sigma_8=0.802+0.028-0.029 (all 68% conf.). Dropping the assumption of flatness and using HST Key Project and BBN priors only, we find a negative deceleration parameter q_0 at 94.3% conf. from the tomographic lensing analysis, providing independent evidence for the accelerated expansion of the Universe. For a flat wCDM cosmology and prior w in [-2,0], we obtain w<-0.41 (90% conf.). Our dark energy constraints are still relatively weak solely due to the limited area of COSMOS. However, they provide an important demonstration of the usefulness of tomographic weak lensing measurements from space. (abridged) Comment: 26 pages, 25 figures, matches version accepted for publication by Astronomy and Astrophysics

    Lightweight Cryptography Meets Threshold Implementation: A Case Study for SIMON

    Get PDF
    Securing data transmission has always been a challenge. While many cryptographic algorithms are available to solve the problem, many applications have tough area constraints while requiring high-level security. Lightweight cryptography aims at achieving high-level security with the benefit of being low cost. Since the late nineties, and with the discovery of side channel attacks, the approach towards cryptography has changed quite significantly. An attacker who can get close to a device can extract sensitive data by monitoring side channels such as power consumption, sound, or electromagnetic emanation. This means that embedded implementations of cryptographic schemes require protection against such attacks to achieve the desired level of security. In this work we combine a low-cost embedded cipher, Simon, with a state-of-the-art side channel countermeasure called Threshold Implementation (TI). We show that TI is a great match for lightweight cryptographic ciphers, especially for hardware implementation. Our implementation is the smallest TI of a block cipher on an FPGA. This implementation utilizes 96 slices of a low-cost Spartan-3 FPGA and 55 slices of a modern Kintex-7 FPGA. Moreover, we present a higher order TI which is resistant against second order attacks. This implementation utilizes 163 slices of a Spartan-3 FPGA and 95 slices of a Kintex-7 FPGA. We also present a state of the art leakage analysis and, by applying it to the designs, show that the implementations achieve the expected security. The implementations even feature significant robustness to higher order attacks, where several million observations are needed to detect leakage.

    An Enhanced Dataflow Analysis to Automatically Tailor Side Channel Attack Countermeasures to Software Block Ciphers

    Get PDF
    Protecting software implementations of block ciphers from side channel attacks is a significant concern in realizing secure embedded computation platforms. The relevance of the issue calls for the automation of the side channel vulnerability assessment of a block cipher implementation, and the automated application of provably secure defenses. The most recent methodology in the field is an application of a specialized data-flow analysis, performed by means of the LLVM compiler framework, detecting in the AES cipher the portions of the code amenable to key extraction via side channel analysis. The contribution of this work is an enhancement to the existing data-flow analysis which extends it to tackle any block cipher implemented in software. In particular, the extended strategy takes fully into account the data dependencies present in the key schedule of a block cipher, regardless of its complexity, to obtain consistently sound results. This paper details the analysis strategy and presents new results on the tailored application of power and electromagnetic emission analysis countermeasures, evaluating the performance on both the ARM Cortex-M and the MIPS ISA. The experimental evaluation reports a case study on two block ciphers: the first designed to achieve a high security margin at a non-negligible computational cost, and a lightweight one. The results show that, when side-channel-protected implementations are considered, the high-security block cipher is indeed more efficient than the lightweight one.
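The data-flow analysis sketched above boils down to propagating a "depends on key material" property through the cipher's dependency graph, including the key schedule. The toy fixpoint below illustrates that propagation; the node names, the graph encoding, and the single-round example are hypothetical and stand in for the paper's LLVM-based intermediate representation.

```python
def taint_reaches_output(edges, key_nodes, output_nodes):
    """Forward taint propagation over a dependency graph: iterate to a
    fixpoint marking every value that transitively depends on key
    material. Key-dependent instructions are the candidates for side
    channel countermeasures. Toy model, not the paper's LLVM analysis."""
    tainted = set(key_nodes)
    changed = True
    while changed:
        changed = False
        for src, dst in edges:
            if src in tainted and dst not in tainted:
                tainted.add(dst)
                changed = True
    return tainted, tainted & set(output_nodes)

# hypothetical single round: plaintext XOR round key, then an S-box lookup
edges = [("key", "xor1"), ("pt", "xor1"), ("xor1", "sbox"), ("sbox", "ct")]
tainted, leaky = taint_reaches_output(edges, {"key"}, {"ct"})
```

Handling the key schedule simply means including its dependency edges in the same graph, so derived round keys are tainted like the master key.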

    Searching for circumplanetary disks around LkCa 15

    Get PDF
    We present Karl G. Jansky Very Large Array (VLA) observations of the 7 mm continuum emission from the disk surrounding the young star LkCa 15. The observations achieve an angular resolution of 70 mas and spatially resolve the circumstellar emission on a spatial scale of 9 AU. The continuum emission traces a dusty annulus of 45 AU in radius that is consistent with the dust morphology observed at shorter wavelengths. The VLA observations also reveal a compact source at the center of the disk, possibly due to thermal emission from hot dust or ionized gas located within a few AU from the central star. No emission is observed between the star and the dusty ring, and, in particular, at the position of the candidate protoplanet LkCa 15 b. By comparing the observations with theoretical models for circumplanetary disk emission, we find that if LkCa 15 b is a massive planet (>5 M_J) accreting at a rate greater than 1.e-6 M_J yr^{-1}, then its circumplanetary disk is less massive than 0.1 M_J, or smaller than 0.4 Hill radii. Similar constraints are derived for any possible circumplanetary disk orbiting within 45 AU from the central star. The mass estimates are uncertain by at least one order of magnitude due to the uncertainties on the mass opacity. Future ALMA observations of this system might be able to detect circumplanetary disks down to a mass of 5.e-4 M_J and as small as 0.2 AU, providing crucial constraints on the presence of giant planets in the act of forming around this young star. Comment: Accepted for publication on Ap

    Real space tests of the statistical isotropy and Gaussianity of the WMAP CMB data

    Full text link
    ABRIDGED: We introduce and analyze a method for testing statistical isotropy and Gaussianity and apply it to the WMAP CMB foreground reduced, temperature maps, and cross-channel difference maps. We divide the sky into regions of varying size and shape and measure the first four moments of the one-point distribution within these regions, and using their simulated spatial distributions we test the statistical isotropy and Gaussianity hypotheses. By randomly varying orientations of these regions, we sample the underlying CMB field in a new manner that offers a richer exploration of the data content and avoids possible biasing due to a single choice of sky division. The statistical significance is assessed via comparison with realistic Monte-Carlo simulations. We find the three-year WMAP maps to agree well with the isotropic, Gaussian random field simulations as probed by regions corresponding to the angular scales ranging from 6 deg to 30 deg at 68% confidence level. We report a strong, anomalous (99.8% CL) dipole "excess" in the V band of the three-year WMAP data and also in the V band of the WMAP five-year data (99.3% CL). We notice the large scale hemispherical power asymmetry, and find that it is not highly statistically significant in the WMAP three-year data (<~ 97%) at scales l <= 40. The significance is even smaller if multipoles up to l=1024 are considered (~90% CL). We give constraints on the amplitude of the previously-proposed CMB dipole modulation field parameter. We easily detect the residual foregrounds in cross-band difference maps at rms level <~ 7 \mu K (at scales >~ 6 deg) and limit the systematic uncertainties to <~ 1.7 \mu K (at scales >~ 30 deg). Comment: 20 pages, 20 figures; more tests added; updated to match the version to be published in JCA
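The per-region statistics named above, the first four moments of the one-point distribution, are standard quantities; the sketch below computes them for the temperature samples falling in one region. Only the moment definitions come from the abstract; the function name and the Gaussian toy input are illustrative, and the actual test compares these moments against Monte-Carlo simulations.

```python
import numpy as np

def region_moments(values):
    """Mean, variance, skewness, and excess kurtosis of the one-point
    distribution of samples within a sky region. A Gaussian field gives
    skewness ~ 0 and excess kurtosis ~ 0 up to sampling noise."""
    v = np.asarray(values, dtype=float)
    mu = v.mean()
    var = v.var()
    sd = np.sqrt(var)
    skew = ((v - mu) ** 3).mean() / sd ** 3
    kurt = ((v - mu) ** 4).mean() / sd ** 4 - 3.0  # excess kurtosis
    return mu, var, skew, kurt

# toy "region": 100k samples from an isotropic Gaussian field
rng = np.random.default_rng(1)
mu, var, skew, kurt = region_moments(rng.normal(0.0, 1.0, 100_000))
```

In the test itself, these four numbers per randomly oriented region are compared against their distributions over simulated Gaussian skies to assign a confidence level.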