
    Model-Based Edge Detector for Spectral Imagery Using Sparse Spatiospectral Masks

    Two model-based algorithms for edge detection in spectral imagery are developed that specifically target capturing intrinsic features such as isoluminant edges, which are characterized by a jump in color but not in intensity. Given prior knowledge of the classes of reflectance or emittance spectra associated with candidate objects in a scene, a small set of spectral-band ratios that most profoundly identify the edge between each pair of materials is selected to define an edge signature. The bands that form the edge signature are fed into a spatial mask, producing a sparse joint spatiospectral nonlinear operator. The first algorithm achieves edge detection for every material pair by matching the response of the operator at every pixel with the edge signature for the pair of materials. The second algorithm is a classifier-enhanced extension of the first that adaptively accentuates distinctive features before applying the spatiospectral operator. Both algorithms are extensively verified using spectral imagery from an airborne hyperspectral imager and from a dots-in-a-well mid-infrared imager. In both cases, the multicolor gradient (MCG) and the hyperspectral/spatial detection of edges (HySPADE) edge detectors are used as benchmarks for comparison. The results demonstrate that the proposed algorithms outperform the MCG and HySPADE edge detectors in accuracy, especially when isoluminant edges are present. By requiring only a few bands as input to the spatiospectral operator, the algorithms enable significant levels of data compression in band selection. In the presented examples, the required operations per pixel are reduced by a factor of 71 with respect to those required by the MCG edge detector.
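    The band-ratio idea can be sketched in a few lines. The toy spectra, the pairwise-ratio contrast measure, and the selection rule below are illustrative assumptions of ours, not the paper's actual operator:

```python
import numpy as np

def edge_signature(spec_a, spec_b, n_ratios=3):
    """Pick the band pairs whose ratios differ most between two material
    spectra; these ratios would form a (hypothetical) edge signature."""
    i, j = np.triu_indices(len(spec_a), k=1)
    contrast = np.abs(spec_a[i] / spec_a[j] - spec_b[i] / spec_b[j])
    best = np.argsort(contrast)[-n_ratios:]
    return [(int(i[b]), int(j[b])) for b in best]

# Two toy 4-band spectra that are isoluminant (equal total intensity) but
# differ in shape, so an intensity-only detector would miss the edge.
grass = np.array([0.2, 0.6, 0.9, 0.3])
paint = np.array([0.9, 0.3, 0.2, 0.6])
bands = edge_signature(grass, paint, n_ratios=2)
```

    Only the selected band pairs would then need to be read out per pixel, which is where the data-compression benefit of the approach comes from.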

    Scalable Semantic Matching of Queries to Ads in Sponsored Search Advertising

    Sponsored search represents a major source of revenue for web search engines. This popular advertising model gives advertisers a unique possibility to target users' immediate intent communicated through a search query, usually by displaying their ads alongside organic search results for queries deemed relevant to their products or services. However, due to the large number of unique queries, it is challenging for advertisers to identify all such relevant queries. For this reason search engines often provide a service of advanced matching, which automatically finds additional relevant queries for advertisers to bid on. We present a novel advanced matching approach based on the idea of semantic embeddings of queries and ads. The embeddings were learned from a large data set of user search sessions, consisting of search queries, clicked ads and search links, while utilizing contextual information such as dwell time and skipped ads. To address the large-scale nature of our problem, both in terms of data and vocabulary size, we propose a novel distributed algorithm for training the embeddings. Finally, we present an approach for overcoming the cold-start problem associated with new ads and queries. We report results of editorial evaluation and online tests on actual search traffic. The results show that our approach significantly outperforms baselines in terms of relevance, coverage, and incremental revenue. Lastly, we open-source the learned query embeddings to be used by researchers in computational advertising and related fields.
    Comment: 10 pages, 4 figures, 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2016, Pisa, Italy
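    The retrieval step behind advanced matching, ranking ads by similarity to a query in a shared embedding space, can be sketched as follows. The 3-dimensional toy vectors are hypothetical stand-ins for embeddings learned from search sessions:

```python
import numpy as np

def top_matches(query_vec, ad_vecs, k=2):
    """Rank ads by cosine similarity to the query in the shared
    embedding space (the core retrieval step of advanced matching)."""
    q = query_vec / np.linalg.norm(query_vec)
    a = ad_vecs / np.linalg.norm(ad_vecs, axis=1, keepdims=True)
    sims = a @ q
    return np.argsort(sims)[::-1][:k]

# Toy 3-d embeddings: ad 0 is nearly parallel to the query vector.
query = np.array([1.0, 0.2, 0.0])
ads = np.array([[0.9, 0.1, 0.0],   # relevant
                [0.0, 1.0, 0.0],   # off-topic
                [0.5, 0.5, 0.5]])  # partially related
ranked = top_matches(query, ads, k=3)
```

    In production such a scan would be replaced by an approximate nearest-neighbour index, since the ad corpus is far too large for brute-force scoring.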

    Conditioned spin and charge dynamics of a single electron quantum dot

    In this article we describe the incoherent and coherent spin and charge dynamics of a single-electron quantum dot. We use a stochastic master equation to model the state of the system, as inferred by an observer with access to only the measurement signal. Measurements obtained during an interval of time contribute, by a past quantum state analysis, to our knowledge about the system at any time t within that interval. Such analysis permits precise estimation of physical parameters, and we propose and test a modification of the classical Baum-Welch parameter re-estimation method for systems driven by both coherent and incoherent processes.
    Comment: 9 pages, 9 figures
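    The classical analogue of the past-quantum-state idea is forward-backward smoothing of a hidden Markov model: the state estimate at time t combines a forward filter over past measurements with a backward pass over future ones. A minimal sketch, with a made-up two-state "dot" and readout fidelities chosen for illustration:

```python
import numpy as np

def smooth(T, E, obs, p0):
    """Forward-backward smoothing: estimate the hidden state at each time
    from the whole measurement record, not just the past (the classical
    counterpart of a past quantum state analysis)."""
    n, k = len(obs), len(p0)
    fwd = np.zeros((n, k))
    bwd = np.ones((n, k))
    f = p0 * E[:, obs[0]]
    fwd[0] = f / f.sum()
    for t in range(1, n):                      # filter forward
        f = (fwd[t - 1] @ T) * E[:, obs[t]]
        fwd[t] = f / f.sum()
    for t in range(n - 2, -1, -1):             # propagate effects backward
        b = T @ (E[:, obs[t + 1]] * bwd[t + 1])
        bwd[t] = b / b.sum()
    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)

# Two-state system with sticky dynamics and a 90%-faithful readout (toy numbers).
T = np.array([[0.9, 0.1], [0.1, 0.9]])   # transition probabilities
E = np.array([[0.9, 0.1], [0.1, 0.9]])   # emission probabilities
post = smooth(T, E, obs=[0, 0, 1, 1], p0=np.array([0.5, 0.5]))
```

    Baum-Welch re-estimation iterates exactly this smoothing step, then updates T and E from the smoothed statistics; the paper's contribution is a modification of that loop for coherently driven systems.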

    Space-Time Sampling for Network Observability

    Designing sparse sampling strategies is an important component of resilient estimation and control in networked systems: reduced sampling requirements make network design problems more cost-effective and less fragile with respect to where and when samples are collected. We show under what conditions a coarse set of samples from a network contains the same amount of information as a finer set of samples. Our goal is to estimate the initial condition of linear time-invariant networks from a set of noisy measurements. The observability condition is reformulated as a frame condition, in which one can easily trace the location and time stamp of each sample. We compare the estimation quality of various sampling strategies using estimation measures that depend on the spectrum of the corresponding frame operators. Using properties of the minimal polynomial of the state matrix, deterministic and randomized methods are suggested for constructing observability frames. Intrinsic tradeoffs assert that collecting samples from fewer subsystems dictates taking more samples (on average) per subsystem. Three scalable algorithms are developed to generate sparse space-time sampling strategies with explicit error bounds.
    Comment: Submitted to IEEE TAC (Revised Version)
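    The frame reformulation can be illustrated directly: each space-time sample (node i, time t) contributes the row e_i^T A^t, and the initial condition is recoverable exactly when the stacked rows have full column rank. The 3-node network and sampling pattern below are our own toy example:

```python
import numpy as np

def sample_frame(A, samples):
    """Stack one row e_i^T A^t per space-time sample (i, t); the initial
    state is recoverable iff this frame matrix has full column rank."""
    n = A.shape[0]
    rows = []
    for i, t in samples:
        e = np.zeros(n)
        e[i] = 1.0
        rows.append(e @ np.linalg.matrix_power(A, t))
    return np.array(rows)

# 3-node path network; repeatedly sampling a single node at three
# times is already enough for observability here.
A = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
F = sample_frame(A, [(0, 0), (0, 1), (0, 2)])
observable = np.linalg.matrix_rank(F) == 3
```

    This is the tradeoff the abstract mentions: fewer sampled subsystems can be compensated by more time samples per subsystem, up to the limit set by the minimal polynomial of A.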

    Thermal Modeling of Additive Manufacturing Using Graph Theory: Validation with Directed Energy Deposition

    Metal additive manufacturing (AM/3D printing) offers unparalleled advantages over conventional manufacturing, including greater design freedom and a lower lead time. However, the use of AM parts in safety-critical industries, such as aerospace and biomedical, is limited by the tendency of the process to create flaws that can lead to sudden failure during use. The root cause of flaw formation in metal AM parts, such as porosity and deformation, is linked to the temperature inside the part during the process, called the thermal history. The thermal history is a function of the process parameters and part design. Consequently, the first step towards ensuring consistent part quality in metal AM is to understand how and why the process parameters and part geometry influence the thermal history. Given the current lack of scientific insight into the causal design-process-thermal physics link that governs part quality, AM practitioners resort to expensive and time-consuming trial-and-error tests to optimize part geometry and process parameters. An approach to reduce extensive empirical testing is to identify viable process parameter and part geometry combinations through rapid thermal simulations. However, a major barrier that deters physics-based design and process optimization efforts in AM is the prohibitive computational burden of existing finite element-based thermal modeling. The objective of this thesis is to understand the causal effect of process parameters on the temperature distribution in AM parts using the theory of heat dissipation on graphs (graph theory). We develop and apply a novel graph theory-based computational thermal modeling approach for predicting the thermal history of titanium alloy parts made using the directed energy deposition metal AM process.
    As an example of the results obtained for one of the three test parts studied in this work, the temperature trends predicted by the graph theory approach had an error of ~11% compared to experimental trends. Moreover, the graph theory simulation was obtained within 9 minutes, which is less than the 25 minutes required to print the part. Advisors: Prahalada K. Rao and Kevin D. Cole
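    Heat dissipation on a graph reduces to linear dynamics driven by the graph Laplacian, which is what makes the approach so much cheaper than finite elements. A minimal sketch, with a hypothetical three-node discretization and a made-up diffusivity g:

```python
import numpy as np

def graph_heat_step(L, T0, g, t):
    """Propagate node temperatures by T(t) = exp(-g * L * t) @ T0, where L
    is the graph Laplacian linking neighbouring material points."""
    w, V = np.linalg.eigh(L)                   # L is symmetric
    return V @ (np.exp(-g * w * t) * (V.T @ T0))

# Three material points in a line: the hot end relaxes toward its neighbours.
Adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(axis=1)) - Adj             # graph Laplacian
T = graph_heat_step(L, np.array([300.0, 20.0, 20.0]), g=1.0, t=5.0)
```

    One eigendecomposition of L then gives the temperature field at any later time for the cost of a matrix-vector product, instead of time-stepping a fine finite element mesh.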

    Methods of space radiation dose analysis with applications to manned space systems

    The full potential of state-of-the-art space radiation dose analysis for manned missions has not been exploited. Point doses have been overemphasized, and the critical dose to the bone marrow has been only crudely approximated, despite the existence of detailed man models and computer codes for dose integration in complex geometries. The method presented makes it practical to account for the geometrical detail of the astronaut as well as of the vehicle. The major assumptions involved are discussed, along with the concept of applying the results of detailed proton dose analysis to the real-time interpretation of on-board dosimetric measurements.
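    The dose-integration idea can be sketched as a sectoring calculation: the dose at a body point is a solid-angle-weighted sum of a depth-dose curve evaluated at the shielding areal density seen along each ray. The exponential depth-dose curve and all numbers below are toy assumptions, not flight data:

```python
import numpy as np

def point_dose(areal_densities, solid_angles, depth_dose):
    """Sectoring estimate: dose at a body point as the solid-angle-weighted
    average of a depth-dose curve evaluated at the shielding areal density
    along each ray (vehicle plus body self-shielding)."""
    total = sum(w * depth_dose(t) for t, w in zip(areal_densities, solid_angles))
    return total / sum(solid_angles)

# Hypothetical exponential depth-dose curve (toy units and numbers).
depth_dose = lambda t: 10.0 * np.exp(-t / 20.0)
rays = [5.0, 15.0, 30.0, 60.0]     # g/cm^2 of shielding along each ray
weights = [1.0, 1.0, 1.0, 1.0]     # equal solid-angle sectors
dose = point_dose(rays, weights, depth_dose)
```

    Accounting for the astronaut's own geometry simply adds the body's areal density along each ray before evaluating the depth-dose curve.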

    The UTMOST Survey for Magnetars, Intermittent pulsars, RRATs and FRBs I: System description and overview

    We describe the ongoing 'Survey for Magnetars, Intermittent pulsars, Rotating radio transients and Fast radio bursts' (SMIRF), performed using the newly refurbished UTMOST telescope. SMIRF repeatedly sweeps the southern Galactic plane performing real-time periodicity and single-pulse searches, and is the first survey of its kind carried out with an interferometer. SMIRF is facilitated by a robotic scheduler which is capable of fully autonomous commensal operations. We report on the SMIRF observational parameters, the data analysis methods, the survey's sensitivities to pulsars, techniques to mitigate radio frequency interference, and present some early survey results. UTMOST's wide field of view permits a full sweep of the Galactic plane to be performed every fortnight, two orders of magnitude faster than previous surveys. In the six months of operations from January to June 2018, we have performed ∼10 sweeps of the Galactic plane with SMIRF. Notable blind re-detections include the magnetar PSR J1622−4950, the RRAT PSR J0941−3942 and the eclipsing pulsar PSR J1748−2446A. We also report the discovery of a new pulsar, PSR J1705−54. Our follow-up of this pulsar with the UTMOST and Parkes telescopes, at average flux limits of ≤20 mJy and ≤0.16 mJy respectively, categorizes this as an intermittent pulsar with a high nulling fraction of <0.002.
    Comment: Submitted to MNRAS, comments welcome

    Pulsar-black hole binaries: prospects for new gravity tests with future radio telescopes

    The anticipated discovery of a pulsar in orbit with a black hole is expected to provide a unique laboratory for black hole physics and gravity. In this context, the next generation of radio telescopes, like the Five-hundred-metre Aperture Spherical radio Telescope (FAST) and the Square Kilometre Array (SKA), with their unprecedented sensitivity, will play a key role. In this paper, we investigate the capability of future radio telescopes to probe the spacetime of a black hole and test gravity theories, by timing a pulsar orbiting a stellar-mass black hole (SBH). Based on mock data simulations, we show that a few years of timing observations of a sufficiently compact pulsar-SBH (PSR-SBH) system with future radio telescopes would allow precise measurements of the black hole mass and spin. A measurement precision of one per cent can be expected for the spin. Measuring the quadrupole moment of the black hole, needed to test GR's no-hair theorem, requires extreme system configurations with compact orbits and a large SBH mass. Additionally, we show that a PSR-SBH system can lead to greatly improved constraints on alternative gravity theories even if they predict black holes (practically) identical to GR's. This is demonstrated for a specific class of scalar-tensor theories. Finally, we investigate the requirements for searching for PSR-SBH systems. It is shown that the high sensitivity of the next generation of radio telescopes is key for discovering compact PSR-SBH systems, as it will allow for sufficiently short survey integration times.
    Comment: 20 pages, 11 figures, 1 table, accepted for publication in MNRAS