
    Phytoplankton Hotspot Prediction With an Unsupervised Spatial Community Model

    Many interesting natural phenomena are sparsely distributed and discrete. Locating the hotspots of such sparsely distributed phenomena is often difficult because their density gradient is likely to be very noisy. We present a novel approach to this search problem, in which we model the co-occurrence relations between a robot's observations with a Bayesian nonparametric topic model. This approach makes it possible to produce a robust estimate of the spatial distribution of the target, even in the absence of direct target observations. We apply the proposed approach to the problem of finding the spatial locations of the hotspots of a specific phytoplankton taxon in the ocean. We use classified image data from the Imaging FlowCytobot (IFCB), which automatically measures individual microscopic cells and colonies of cells. Given these individual taxon-specific observations, we learn a phytoplankton community model that characterizes the co-occurrence relations between taxa. We present experiments with simulated robot missions drawn from real observation data collected during a research cruise traversing the US Atlantic coast. Our results show that the proposed approach outperforms nearest-neighbor and k-means-based methods for predicting the spatial distribution of hotspots from in-situ observations.
    Comment: To appear in ICRA 2017, Singapore.
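    The core idea, predicting a target taxon's density from the taxa it tends to co-occur with, can be illustrated with a toy linear stand-in (this is a hypothetical sketch, not the paper's Bayesian nonparametric topic model; all data and names here are invented):

```python
import numpy as np

# Hypothetical sketch of the co-occurrence idea: estimate a target taxon's
# abundance at new sites from companion taxa alone, using weights fit on
# sites where the target was observed directly.

rng = np.random.default_rng(0)

# Toy training data: counts of 4 taxa at 200 sites; taxon 0 is the target.
latent = rng.random(200)                        # hidden "community" gradient
counts = rng.poisson(np.outer(latent, [5, 4, 3, 1]) + 0.5)

target = counts[:, 0]
companions = counts[:, 1:]

# Fit linear co-occurrence weights: least-squares target ~ companions.
X = np.column_stack([np.ones(len(target)), companions])
w, *_ = np.linalg.lstsq(X, target, rcond=None)

# Predict the target's density at sites where it was never observed directly.
new_companions = rng.poisson(np.outer(rng.random(50), [4, 3, 1]) + 0.5)
pred = np.column_stack([np.ones(50), new_companions]) @ w
hotspot = int(np.argmax(pred))                  # richest predicted site
```

    Because all taxa here share one latent gradient, companion counts alone carry information about the unobserved target, which is the property the paper's community model exploits.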

    Establishing the impact of luminous AGN with multi-wavelength observations and simulations

    Cosmological simulations fail to reproduce realistic galaxy populations without energy injection from active galactic nuclei (AGN) into the interstellar medium (ISM) and circumgalactic medium (CGM), a process called 'AGN feedback'. Consequently, observational work searches for evidence that luminous AGN impact their host galaxies. Here, we review some of this work. Multi-phase AGN outflows are common, and some have the potential for significant impact. Additionally, multiple feedback channels can be observed simultaneously; e.g., radio jets from 'radio quiet' quasars can inject turbulence on ISM scales and displace CGM-scale molecular gas. However, caution must be taken when comparing outflows to simulations (e.g., via kinetic coupling efficiencies) to infer feedback potential, due to a lack of comparable predictions. Furthermore, some work claims limited evidence for feedback because AGN live in gas-rich, star-forming galaxies. However, simulations do not predict instantaneous, global impact on molecular gas or star formation; the impact is expected to be cumulative, over multiple episodes.
    Comment: Accepted for publication in the IAU Symposium 378 Conference Proceedings, "Black Hole Winds at all Scales".

    Stability Constants of Glutamic Acid Complexes with Some Metal Ions


    Streaming Gaussian Dirichlet Random Fields for Spatial Predictions of High Dimensional Categorical Observations

    We present the Streaming Gaussian Dirichlet Random Field (S-GDRF) model, a novel approach for modeling a stream of spatiotemporally distributed, sparse, high-dimensional categorical observations. The proposed approach efficiently learns global and local patterns in spatiotemporal data, allowing for fast inference and querying with bounded time complexity. Using a high-resolution data series of plankton images classified with a neural network, we demonstrate the ability of the approach to make more accurate predictions than a Variational Gaussian Process (VGP) and to learn a predictive distribution of observations from streaming categorical data. S-GDRFs open the door to efficient informative path planning over high-dimensional categorical observations, which until now has not been feasible.
    Comment: 10 pages, 5 figures. Published in the Springer Proceedings in Advanced Robotics, ISER 2023 Conference Proceedings.
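    The streaming, bounded-time flavor of this setting can be sketched with a much simpler baseline (hypothetical, not the S-GDRF model): per-cell Dirichlet pseudo-counts over categories on a spatial grid, updated in constant time per observation and queried for a posterior mean:

```python
import numpy as np

# Minimal streaming baseline (assumed for illustration, not S-GDRF):
# Dirichlet pseudo-counts over K categories per grid cell, O(1) updates.

K, GRID = 5, 10
alpha = 1.0                                     # symmetric Dirichlet prior
counts = np.full((GRID, GRID, K), alpha)

def update(x, y, category):
    """Streaming update: one categorical observation at cell (x, y)."""
    counts[x, y, category] += 1

def predict(x, y):
    """Posterior mean over categories at cell (x, y)."""
    c = counts[x, y]
    return c / c.sum()

rng = np.random.default_rng(1)
for _ in range(500):
    x, y = rng.integers(GRID, size=2)
    # Category depends on location, so the field has spatial structure.
    update(x, y, (x + y) % K)

p = predict(3, 4)                               # normalized distribution over K
```

    Unlike this per-cell counter, the paper's model shares statistics across space, which is what lets it predict at locations with few or no observations.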

    ShapeCodes: Self-Supervised Feature Learning by Lifting Views to Viewgrids

    We introduce an unsupervised feature learning approach that embeds 3D shape information into a single-view image representation. The main idea is a self-supervised training objective that, given only a single 2D image, requires all unseen views of the object to be predictable from the learned features. We implement this idea as an encoder-decoder convolutional neural network. The network maps an input image of an unknown category and unknown viewpoint to a latent space, from which a deconvolutional decoder can best "lift" the image to its complete viewgrid showing the object from all viewing angles. Our class-agnostic training procedure encourages the representation to capture fundamental shape primitives and semantic regularities in a data-driven manner, without manual semantic labels. Our results on two widely used shape datasets show that 1) our approach successfully learns to perform "mental rotation" even for objects unseen during training, and 2) the learned latent space is a powerful representation for object recognition, outperforming several existing unsupervised feature learning methods.
    Comment: To appear at ECCV 2018.
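    The training objective, predicting every view of the viewgrid from a single view, can be sketched with a toy linear model in place of the paper's encoder-decoder CNN (all data and dimensions here are invented for illustration):

```python
import numpy as np

# Toy sketch of the viewgrid-lifting objective (hypothetical stand-in for
# the paper's encoder-decoder CNN): from one flattened view, predict the
# full viewgrid of all V views, trained with a reconstruction loss.

rng = np.random.default_rng(0)
D, V = 16, 8                                    # pixels per view, views per grid
views = rng.random((100, V, D))                 # 100 objects' toy viewgrids

X = views[:, 0, :]                              # input: a single view per object
Y = views.reshape(100, V * D)                   # target: the whole viewgrid

W = np.zeros((D, V * D))                        # linear "encoder-decoder"

def loss(W):
    return np.mean((X @ W - Y) ** 2)

l0 = loss(W)
for _ in range(200):                            # plain gradient descent
    grad = 2 * X.T @ (X @ W - Y) / len(X)
    W -= 0.05 * grad
l1 = loss(W)                                    # lower after training
```

    The supervision is free: the targets are just the other views of the same object, which is what makes the objective self-supervised.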

    Learning Shape Priors for Single-View 3D Completion and Reconstruction

    The problem of single-view 3D shape completion or reconstruction is challenging because, among the many possible shapes that explain an observation, most are implausible and do not correspond to natural objects. Recent research in the field has tackled this problem by exploiting the expressiveness of deep convolutional networks. In fact, there is another level of ambiguity that is often overlooked: among plausible shapes, there are still multiple shapes that fit the 2D image equally well; i.e., the ground-truth shape is non-deterministic given a single-view input. Existing fully supervised approaches fail to address this issue, and often produce blurry mean shapes with smooth surfaces but no fine details. In this paper, we propose ShapeHD, pushing the limit of single-view shape completion and reconstruction by integrating deep generative models with adversarially learned shape priors. The learned priors serve as a regularizer, penalizing the model only if its output is unrealistic, not if it deviates from the ground truth. Our design thus overcomes both levels of ambiguity mentioned above. Experiments demonstrate that ShapeHD outperforms the state of the art by a large margin in both shape completion and shape reconstruction on multiple real datasets.
    Comment: ECCV 2018. The first two authors contributed equally to this work. Project page: http://shapehd.csail.mit.edu
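    The key design point, that the prior penalizes only unrealistic outputs rather than deviation from the ground truth, can be written as a small loss sketch (the realism score here is a hypothetical scalar standing in for the paper's learned discriminator):

```python
import numpy as np

# Sketch of the loss design (hypothetical scoring, not the paper's learned
# discriminator): the prior term fires only when a realism score is low,
# never for merely differing from the ground truth.

def shapehd_style_loss(pred, gt, realism_score, margin=0.5, lam=1.0):
    """Reconstruction loss plus a hinge-style realism penalty."""
    recon = np.mean((pred - gt) ** 2)
    prior = max(0.0, margin - realism_score)    # zero if realistic enough
    return recon + lam * prior

# A realistic prediction pays no prior penalty even if it differs from gt.
a = shapehd_style_loss(np.ones(4), np.zeros(4), realism_score=0.9)
# An unrealistic one pays the penalty on top of its reconstruction error.
b = shapehd_style_loss(np.ones(4), np.zeros(4), realism_score=0.1)
```

    This decoupling is what lets the model commit to one sharp, plausible shape instead of averaging over all shapes consistent with the image.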

    Inspecting spectra with sound: proof-of-concept & extension to datacubes

    We present a novel approach to inspecting galaxy spectra using sound, via their direct audio representation ('spectral audification'). We discuss the potential of this as a complement to (or stand-in for) visual approaches. We surveyed 58 respondents who used the audio representation alone to rate 30 optical galaxy spectra with strong emission lines. Across three tests, each focusing on a different quantity measured from the spectra (signal-to-noise ratio, emission-line width, and flux ratios), we find that user ratings are well correlated with the measured quantities. This demonstrates that physical information can be independently gleaned from listening to spectral audifications. We note the importance of context when rating these sonifications, as the order in which examples are heard can influence responses. Finally, we adapt the method used in this promising pilot study to spectral datacubes. We suggest that audification allows efficient exploration of complex, spatially resolved spectral data.
    Comment: 6 pages, 3 figures, accepted for publication in RASTI. Supplementary data (including an animated figure) available at https://doi.org/10.25405/data.ncl.2281644
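    Direct audification is simple enough to sketch end to end: treat the 1D flux array as an audio waveform and write it out. The mapping below (sample-repetition stretching, peak normalization, a toy Gaussian emission line) is an assumed illustration, not the paper's exact pipeline:

```python
import math
import struct
import wave

# Minimal audification sketch (assumed mapping, not the paper's method):
# use a spectrum's flux values as an audio waveform, stretched by simple
# repetition so a short spectrum lasts long enough to hear, saved as WAV.

RATE = 8000
# Toy spectrum: flat continuum of zero with one Gaussian emission line.
spectrum = [math.exp(-((i - 100) ** 2) / 50.0) for i in range(300)]

peak = max(abs(f) for f in spectrum) or 1.0     # avoid dividing by zero
samples = []
for flux in spectrum:
    samples.extend([int(32767 * flux / peak)] * 40)   # stretch each bin

with wave.open("spectrum.wav", "wb") as w:
    w.setnchannels(1)                           # mono
    w.setsampwidth(2)                           # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

    With this mapping, a strong emission line becomes a brief loud transient against silence, which is the kind of feature the survey respondents rated by ear.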