
    A tidal disruption flare in a massive galaxy? Implications for the fuelling mechanisms of nuclear black holes

    We argue that the `changing look' AGN recently reported by LaMassa et al. could be a luminous flare produced by the tidal disruption of a super-solar mass star passing just a few gravitational radii outside the event horizon of a ∼10^8 M_⊙ nuclear black hole. This flare occurred in a massive, star-forming galaxy at redshift z=0.312, robustly characterized thanks to repeated late-time photometric and spectroscopic observations. By taking difference photometry of the well-sampled multi-year SDSS Stripe 82 light curve, we are able to probe the evolution of the nuclear spectrum over the course of the outburst. The tidal disruption event (TDE) interpretation is consistent with the very rapid rise and the decay time of the flare, which displays an evolution consistent with the well-known t^{-5/3} behaviour (with a clear superimposed re-brightening flare). Our analysis places constraints on the physical properties of the TDE, such as the putative disrupted star's mass and orbital parameters, as well as the size and temperature of the emitting material. The properties of the broad and narrow emission lines observed in two epochs of SDSS spectra provide further constraints on the circum-nuclear structure, and could be indicative that the system hosted a moderate-luminosity AGN as recently as a few 10^4 years ago, and is likely undergoing residual accretion as late as ten years after peak, as seen from the broad Hα emission line. We discuss the complex interplay between tidal disruption events and gas accretion episodes in galactic nuclei, highlighting the implications for future TDE searches and for estimates of their intrinsic rates.
    Comment: 20 pages, 9 figures, 3 tables. Accepted for publication in MNRAS
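The canonical t^{-5/3} fallback decay used to identify the flare can be sketched as a simple light-curve model; the function and parameter names below are illustrative, not taken from the paper:

```python
import numpy as np

def tde_flare(t, t_disr, l_peak, t_0=1.0):
    """Luminosity of a tidal disruption flare after peak, following the
    canonical fallback-rate scaling L(t) ∝ ((t - t_disr)/t_0)^(-5/3)."""
    tau = (np.asarray(t, dtype=float) - t_disr) / t_0
    return l_peak * tau ** (-5.0 / 3.0)

# The decay appears as a straight line of slope -5/3 in log-log space,
# which is how such flares are typically identified in survey photometry.
t = np.array([10.0, 100.0])
slope = np.diff(np.log10(tde_flare(t, 0.0, 1.0))) / np.diff(np.log10(t))
```

The superimposed re-brightening flare described in the abstract would appear as a bump on top of this baseline decay.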

    3D-POLY: A Robot Vision System for Recognizing Objects in Occluded Environments

    The two factors that determine the time complexity associated with model-driven interpretation of range maps are: 1) the particular strategy used for the generation of object hypotheses; and 2) the manner in which both the model and the sensed data are organized, data organization being a primary determinant of the efficiency of verification of a given hypothesis. In this report, we present 3D-POLY, a working system for recognizing objects in the presence of occlusion and against cluttered backgrounds. The time complexity of this system is only O(n²) for single-object recognition, where n is the number of features on the object. The most novel aspect of this system is the manner in which the feature data are organized for the models. We use a data structure called the feature sphere for this purpose. We present efficient algorithms for assigning a feature to its proper place on a feature sphere, and for extracting the neighbors of a given feature from the feature sphere representation. For hypothesis generation, we use local feature sets, a notion similar to those used before us by Bolles, Shirai and others. The combination of the feature sphere idea for streamlining verification and the local feature sets for hypothesis generation results in a system whose time complexity has a polynomial bound. In addition to recognizing objects in occluded environments, 3D-POLY also possesses model learning capability. Model learning consists of looking at a model object from different views and integrating the resulting information. The 3D-POLY system also contains utilities for range image segmentation and classification of scene surfaces.
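A feature sphere of the kind described can be sketched as a spherical binning structure that supports constant-time neighbor lookup per cell; the cell resolution and the use of surface-normal directions as the binning key are illustrative assumptions, not details taken from the report:

```python
import math
from collections import defaultdict

class FeatureSphere:
    """Simplified sketch of a feature sphere: features are binned by the
    direction of an associated unit vector (here, a surface normal), so
    that verification can look up spatially adjacent features cheaply."""

    def __init__(self, n_lat=8, n_lon=16):
        self.n_lat, self.n_lon = n_lat, n_lon
        self.cells = defaultdict(list)

    def _cell(self, nx, ny, nz):
        # Map a unit direction to a (latitude, longitude) bin index.
        theta = math.acos(max(-1.0, min(1.0, nz)))   # polar angle in [0, pi]
        phi = math.atan2(ny, nx) % (2 * math.pi)     # azimuth in [0, 2*pi)
        i = min(int(theta / math.pi * self.n_lat), self.n_lat - 1)
        j = min(int(phi / (2 * math.pi) * self.n_lon), self.n_lon - 1)
        return i, j

    def add(self, feature, normal):
        self.cells[self._cell(*normal)].append(feature)

    def neighbors(self, normal):
        # Collect features in the cell containing `normal` and the
        # adjacent cells (longitude wraps around; latitude does not).
        i, j = self._cell(*normal)
        out = []
        for di in (-1, 0, 1):
            ii = i + di
            if 0 <= ii < self.n_lat:
                for dj in (-1, 0, 1):
                    out.extend(self.cells[(ii, (j + dj) % self.n_lon)])
        return out
```

Hypothesis verification can then restrict its search to the neighbors of a predicted feature location instead of scanning all n features.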

    The local star-formation rate density: assessing calibrations using [OII], Hα and UV luminosities

    We explore the use of simple star-formation rate (SFR) indicators (such as may be used in high-redshift galaxy surveys) in the local Universe using [OII], Hα, and u-band luminosities from the deeper 275 deg^2 Stripe 82 subsample of the Sloan Digital Sky Survey (SDSS), coupled with UV data from the Galaxy Evolution Explorer satellite (GALEX). We examine the consistency of such methods using the star-formation rate density (SFRD) as a function of stellar mass in this local volume, and quantify the accuracy of corrections for dust and metallicity on the various indicators. Rest-frame u-band promises to be a particularly good SFR estimator for high-redshift studies, since it does not require a particularly large or sensitive extinction correction, yet yields results broadly consistent with more observationally expensive methods. We suggest that the [OII]-derived SFR, commonly used at higher redshifts (z~1), can be used to reliably estimate SFRs for ensembles of galaxies, but for high-mass galaxies (log(M*/Msun)>10) a larger correction than is typically used is required to compensate for the effects of metallicity dependence and dust extinction. We provide a new empirical mass-dependent correction for the [OII] SFR.
    Comment: 22 pages, 16 figures. This version corrects typos in equations 2, 7, and 9 of the published version, as described in the MNRAS Erratum. Published results are unaffected. A simple piece of IDL code for applying the mass-dependent correction to [OII] SFRs is available from http://astro.uwaterloo.ca/~dgilbank/data/corroii.pr
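As context for the correction, the nominal [OII] SFR is conventionally obtained from the Kennicutt (1998) calibration. In the sketch below that calibration is real, but the mass-dependent correction factor is a user-supplied placeholder, since the paper's fitted coefficients are not reproduced here:

```python
def sfr_oii_nominal(l_oii):
    """Kennicutt (1998) nominal [OII] calibration:
    SFR [Msun/yr] ≈ 1.4e-41 * L([OII]) [erg/s]."""
    return 1.4e-41 * l_oii

def sfr_oii_corrected(l_oii, log_mstar, correction):
    """Divide the nominal SFR by an empirical mass-dependent factor.
    `correction` is a callable of log10(M*/Msun) standing in for the
    paper's fitted form, which is not reproduced in this sketch."""
    return sfr_oii_nominal(l_oii) / correction(log_mstar)
```

A correction that grows with stellar mass reduces the inferred SFR for the high-mass galaxies where dust and metallicity bias the nominal calibration most.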

    Flexible registration method for light-stripe sensors considering sensor misalignments

    In many application areas, such as object reconstruction or quality assurance, it is required to completely or partly measure the shape of an object, or at least the cross-section of the required object region. For complex geometries, multiple views are therefore needed to bypass undercuts and occlusions. Hence, a multi-sensor measuring system for complex geometries has to consist of multiple light-stripe sensors surrounding the measured object in order to complete the measurements in a prescribed time. The number of sensors depends on the object geometry and dimensions. In order to create a uniform 3D data set from the data of the individual sensors, a registration of each individual data set into a common global coordinate system has to be performed. State-of-the-art registration methods for light-stripe sensors use only data from the object's intersection with the respective laser plane of each sensor. At the same time, the assumption is made that all laser planes are coplanar and that there are corresponding points in two data sets. However, this assumption does not hold in practice, because it is nearly impossible to align multiple laser planes in the same plane; sensor misalignments are therefore neglected by this assumption. In this work, a new registration method for light-stripe sensors is presented that considers sensor misalignments as well as intended sensor displacements and tilts. The developed method combines 3D pose estimation and triangulated data to properly register the real sensor pose in 3D space. © 2017 SPIE
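Registering one sensor's triangulated data into a common global frame rests on estimating a rigid transform from corresponding 3D points. A standard least-squares building block for this is the Kabsch/Umeyama SVD solution, shown here as a generic sketch rather than the paper's specific method:

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) minimizing ||dst - (R @ src + t)||
    over corresponding point sets, via the Kabsch/Umeyama SVD solution."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

In a misalignment-aware pipeline, such a transform would be estimated per sensor against a common reference rather than assuming coplanar laser planes.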

    An Improved Photometric Calibration of the Sloan Digital Sky Survey Imaging Data

    We present an algorithm to photometrically calibrate wide-field optical imaging surveys that simultaneously solves for the calibration parameters and relative stellar fluxes using overlapping observations. The algorithm decouples the problem of "relative" calibrations from that of "absolute" calibrations; the absolute calibration is reduced to determining a few numbers for the entire survey. We pay special attention to the spatial structure of the calibration errors, allowing one to isolate particular error modes in downstream analyses. Applying this to the Sloan Digital Sky Survey imaging data, we achieve ~1% relative calibration errors across 8500 sq. deg. in griz; the errors are ~2% for the u band. These errors are dominated by unmodelled atmospheric variations at Apache Point Observatory. These calibrations, dubbed "ubercalibration", are now public with SDSS Data Release 6, and will be a part of subsequent SDSS data releases.
    Comment: 16 pages, 17 figures, matches version accepted in ApJ. These calibrations are available at http://www.sdss.org/dr
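The simultaneous solve for calibration parameters and relative stellar fluxes can be illustrated with a toy alternating least-squares scheme. The real algorithm fits flat-field and atmospheric terms; this sketch solves only per-field zero points, and pins one field to fix the absolute scale:

```python
import numpy as np

def relative_calibration(obs, n_stars, n_fields, n_iter=50):
    """Toy relative calibration: given overlapping observations
    obs = [(star_index, field_index, observed_mag)], alternately solve
    for true star magnitudes and per-field zero points. Only the relative
    scale is constrained, so field 0's zero point is pinned to 0."""
    zp = np.zeros(n_fields)
    for _ in range(n_iter):
        # Best-fit star magnitudes given the current zero points.
        num, den = np.zeros(n_stars), np.zeros(n_stars)
        for s, f, m in obs:
            num[s] += m - zp[f]
            den[s] += 1.0
        mag = num / den
        # Best-fit zero points given the current star magnitudes.
        num, den = np.zeros(n_fields), np.zeros(n_fields)
        for s, f, m in obs:
            num[f] += m - mag[s]
            den[f] += 1.0
        zp = num / den
        zp -= zp[0]   # absolute calibration handled separately
    return mag, zp
```

Stars observed in multiple overlapping fields are what couple the zero points together and make the relative solution well determined.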

    The characterisation and simulation of 3D vision sensors for measurement optimisation

    The use of 3D vision is becoming increasingly common in a range of industrial applications including part identification, reverse engineering, quality control and inspection. To facilitate this increased usage, especially in autonomous applications such as free-form assembly and robotic metrology, the capability to deploy a sensor to the optimum pose for a measurement task is essential to reduce cycle times and increase measurement quality. Doing so requires knowledge of the 3D sensor's capabilities on a material-specific basis, as the optical properties of a surface, object shape, pose and even the measurement itself have severe implications for the data quality. This need is not reflected in the current state of sensor characterisation standards, which commonly utilise optically compliant artefacts and therefore cannot inform the user of a 3D sensor of the realistic expected performance on non-ideal objects. This thesis presents a method of scoring candidate viewpoints for their ability to perform geometric measurements on an object of arbitrary surface finish. This is achieved by first defining a technology-independent, empirical sensor characterisation method which implements a novel variant of the commonly used point-density point-cloud quality metric, normalised to isolate the effect of surface finish on sensor performance, as well as the more conventional assessment of point standard deviation. The characterisation method generates a set of performance maps for a sensor per material, which are a function of distance and surface orientation. A sensor simulation incorporates these performance maps to estimate the statistical properties of a point cloud on objects with arbitrary shape and surface finish, provided the sensor has been characterised on the material in question. A framework for scoring measurement-specific candidate viewpoints is presented in the context of the geometric inspection of four artefacts with different surface finishes but identical geometry. Views are scored on their ability to perform each measurement based on a novel view-score metric, which incorporates the expected point density, noise and occlusion of measurement-dependent model features. The simulation is able to score the views reliably on all four surface finishes tested, which range from ideal matt white to highly polished aluminium. In 93% of measurements, a set of optimal or nearly optimal views is correctly selected.
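A view-scoring metric combining the three quantities named (expected point density, noise, and occlusion of the measured features) might be sketched as follows; the functional form and weights are hypothetical, not the thesis's actual metric:

```python
def view_score(density, noise_sd, visible_frac,
               w_density=1.0, w_noise=1.0, w_vis=1.0):
    """Hypothetical view score: reward dense, visible coverage of the
    measurement-dependent features and penalise expected point noise."""
    if visible_frac <= 0.0:
        return 0.0  # feature fully occluded: the view cannot measure it
    return (w_density * density * w_vis * visible_frac) / (1.0 + w_noise * noise_sd)

def best_view(candidates):
    """Pick the top-scoring candidate from (name, density, noise_sd,
    visible_frac) tuples, e.g. values read off per-material performance maps."""
    return max(candidates, key=lambda c: view_score(*c[1:]))[0]
```

In the thesis's pipeline, the inputs to such a score would come from the simulated point cloud rather than being supplied directly.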

    Tele-Autonomous control involving contact

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features. Feature points (point-to-point matching) and feature unit direction vectors (vector-to-vector matching) can also be used as inputs, and there is no upper limit on the number of features input. The algorithm allows the use of redundant features to find a better solution. It uses dual number quaternions to represent the position and orientation of an object, and uses the least-squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimate by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties encountered when an operator controls a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques, such as predictor displays and time desynchronization, to help overcome these difficulties is then discussed.
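The dual-number-quaternion representation mentioned above can be sketched in a few lines of quaternion algebra. This is a generic illustration of how a unit dual quaternion encodes and composes rigid motions, not the paper's full least-squares solver:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def qconj(q):
    """Quaternion conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def dq_from_rt(qr, t):
    """Unit dual quaternion (real part qr, dual part qd) for the rigid
    motion x -> R(qr) @ x + t, with qd = 0.5 * (0, t) ⊗ qr."""
    return qr, 0.5 * qmul(np.array([0.0, *t]), qr)

def dq_mul(a, b):
    """Dual-quaternion product: composes rigid motions (b applied first)."""
    (ar, ad), (br, bd) = a, b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

def dq_translation(dq):
    """Recover the translation from a unit dual quaternion: t = 2 qd ⊗ qr*."""
    qr, qd = dq
    return 2.0 * qmul(qd, qconj(qr))[1:]
```

Because one unit dual quaternion carries both rotation and translation, a single cost function over it couples the orientation and position errors, which is the advantage the abstract describes.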

    Advances in Stereo Vision

    Stereopsis is a vision process whose geometrical foundation has been known for a long time, ever since the experiments by Wheatstone in the 19th century. Nevertheless, its inner workings in biological organisms, as well as its emulation by computer systems, have proven elusive, and stereo vision remains a very active and challenging area of research. In this volume we have attempted to present a limited but relevant sample of the work being carried out in stereo vision, covering significant aspects from both the applied and the theoretical standpoints.