
    Alkali Metal Bismuth(III) Chloride Double Salts

    Evaporative co-crystallization of MCl (M = Na, K, Rb, Cs) with BiOCl in aqueous HCl produces double salts of the form MxBiyCl(x+3y)·zH2O. The sodium salt, Na2BiCl5·5H2O (monoclinic P21/c, a = 8.6983(7) Å, b = 21.7779(17) Å, c = 7.1831(6) Å, β = 103.0540(10)°, V = 1325.54(19) Å3, Z = 4), is composed of zigzag (BiCl5)n2n– chains of cis-linked, μ2-Cl-bridged octahedra. Edge-sharing chains of NaCln(OH2)6−n octahedra (n = 0, 2, 3) are linked to Bi through μ3-Cl. The potassium salt, K7Bi3Cl16 (trigonal R−3c, a = 12.7053(9) Å, b = 12.7053(9) Å, c = 99.794(7) Å, V = 13,951(2) Å3, Z = 18), contains (Bi2Cl10)4– edge-sharing dimers of octahedra and simple (BiCl6)3– octahedra. The K+ ions are 5- to 8-coordinate and the chlorides are 3-, 4-, or 5-coordinate. The rubidium salt, Rb3BiCl6·0.5H2O (orthorhombic Pnma, a = 12.6778(10) Å, b = 25.326(2) Å, c = 8.1498(7) Å, V = 2616.8(4) Å3, Z = 8), contains (BiCl6)3– octahedra. The Rb+ ions are 6-, 8-, and 9-coordinate, and the chlorides are 4- or 5-coordinate. Two cesium salts were formed. Cs3BiCl6 (orthorhombic Pbcm, a = 8.2463(9) Å, b = 12.9980(15) Å, c = 26.481(3) Å, V = 2838.4(6) Å3, Z = 8) comprises (BiCl6)3– octahedra, 8-coordinate Cs+, and 3-, 4-, and 5-coordinate Cl−. In Cs3Bi2Cl9 (orthorhombic Pnma, a = 18.4615(15) Å, b = 7.5752(6) Å, c = 13.0807(11) Å, V = 1818.87(11) Å3, Z = 4), Bi octahedra are linked by μ2-bridging Cl into edge-sharing Bi4 squares, which form zigzag (Bi2Cl9)n3n– ladders. The 12-coordinate Cs+ ions bridge the ladders, and the Cl− ions are 5- and 6-coordinate. Four of the double salts are weakly photoluminescent at 78 K, each showing three excitation peaks near 295, 340, and 380 nm and a broad emission near 440 nm.
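
    As a quick consistency check, the reported monoclinic cell volume of Na2BiCl5·5H2O follows from the quoted lattice parameters via V = a·b·c·sin(β). A minimal Python sketch, using only values from the abstract above:

        import math

        # Lattice parameters of Na2BiCl5.5H2O (monoclinic P21/c), from the abstract
        a, b, c = 8.6983, 21.7779, 7.1831   # axis lengths in Angstrom
        beta = 103.0540                      # monoclinic angle in degrees

        # Monoclinic cell volume: V = a*b*c*sin(beta)
        V = a * b * c * math.sin(math.radians(beta))
        print(f"V = {V:.2f} A^3")            # ~1325.5, matching the reported 1325.54(19)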

    Learning Data-Driven Representations for Robust Monocular Computer Vision Applications

    For computer vision applications, one crucial step is the choice of a suitable representation of the image data. Learning such representations from observed data with machine learning methods has allowed computer vision applications to be used in a wider range of everyday scenarios. This work presents three new representations for applications using data from a single camera, together with algorithms for learning them from training data. The first two representations are applied to image sequences taken by a single camera located in a moving vehicle. Calculating optical flow and representing the resulting vector field as a point in a learned linear subspace greatly simplifies the interpretation of the flow. It allows not only estimating the vehicle's self-motion by means of a learned linear mapping, but also identifying independently moving objects and wrong flow vectors, and coping with missing vectors in homogeneous image regions. The second representation builds on work in object detection and circular statistics to estimate the orientation of observed objects. Orientation knowledge is represented as a multi-modal probability distribution over a circular space, which makes it possible to capture ambiguities in the mapping from appearance to orientation. This ambiguity can be resolved in further processing steps; the use of a particle filter for temporal integration and consistent orientation tracking is presented. When the filtering framework is extended to include object position, orientation, speed, and front-wheel angle, results show improved tracking of other vehicles observed from a moving camera. The third new representation aims at capturing the gist of an image, mimicking the first stages of human visual processing. Available after only a few hundred milliseconds, this gist forms the basis for further visual processing. By combining algorithms for surface orientation estimation, object detection, scene type classification, and viewpoint estimation with general knowledge in an iterative fashion, the proposed algorithm tries to form a consistent, general-purpose representation of a single image. Several psychophysical experiments show that the horizon is part of this visual gist in humans and that several cues are important for its estimation by both human and machine.
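
    The flow-subspace idea described above can be illustrated in a few lines of NumPy: stack training flow fields as vectors, learn a linear subspace via PCA, and represent a new field by its subspace coefficients. This is a hypothetical minimal sketch (the random data and dimensions are placeholders; the thesis's actual learning procedure additionally handles outliers and missing vectors probabilistically):

        import numpy as np

        rng = np.random.default_rng(0)
        N, H, W = 200, 12, 16                        # illustrative training set size and flow grid
        flows = rng.standard_normal((N, 2 * H * W))  # each flow field flattened to (u, v) per pixel

        # Learn a k-dimensional linear subspace via PCA (SVD of centered data).
        k = 8
        mean = flows.mean(axis=0)
        _, _, Vt = np.linalg.svd(flows - mean, full_matrices=False)
        basis = Vt[:k]                               # (k, 2*H*W) orthonormal basis

        # A new flow field becomes a point in the subspace: k coefficients.
        new_flow = rng.standard_normal(2 * H * W)
        coeffs = basis @ (new_flow - mean)

        # Reconstruction residuals flag wrong flow vectors or independent motion.
        recon = mean + basis.T @ coeffs
        residual = np.abs(new_flow - recon)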

    Reconstruction and Parameter Identification of Distributed Phenomena Using Sensor Networks


    Estimation of the Horizon in Photographed Outdoor Scenes by Human and Machine

    We present three experiments on horizon estimation. In Experiment 1 we verify the human ability to estimate the horizon in static images from visual input alone. Estimates are given without time constraints, with emphasis on precision, and serve as a baseline for evaluating horizon estimates from early visual processes. In Experiment 2, stimuli are presented only briefly and then masked to purge visual short-term memory, forcing estimates to rely on early processes only. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible. In Experiment 3 we investigate several strategies for estimating the horizon by computer and compare human with machine “behavior” for different image manipulations and image scene types.

    Experts of probabilistic flow subspaces for robust monocular odometry in urban areas

    Visual odometry has been promoted as a fundamental component for intelligent vehicles, and relying solely on monocular image cues would be desirable. Nevertheless, this is challenging, especially in dynamically varying urban areas, due to scale ambiguities, independent motions, and measurement noise. We propose to use probabilistic learning with auxiliary depth cues. Specifically, we developed an expert model that specializes monocular egomotion estimation units on typical scene structures, i.e., statistical variations of scene depth layouts. The framework adaptively selects the best-fitting expert. For on-line estimation of egomotion, we adopted a probabilistic subspace flow estimation method. Learning in our framework consists of two components: 1) partitioning datasets of video and ground-truth odometry data based on unsupervised clustering of dense stereo depth profiles, and 2) training a cascade of subspace flow expert models. A probabilistic quality measure computed from the experts' estimates provides a selection rule, overall leading to improved egomotion estimation on long test sequences.
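
    The selection step can be sketched as follows: suppose each trained expert carries a simple probabilistic model of the flow features it specializes in, and the expert assigning the highest likelihood to the current measurement is chosen. All names, shapes, and the diagonal-Gaussian quality measure here are illustrative assumptions, not the paper's exact formulation:

        import numpy as np

        rng = np.random.default_rng(1)
        n_experts, d = 3, 32                        # hypothetical expert count and feature dimension
        mus = rng.standard_normal((n_experts, d))   # per-expert feature means
        vars_ = np.full((n_experts, d), 0.5)        # per-expert diagonal variances

        def log_likelihood(x, mu, var):
            # Diagonal-Gaussian log-likelihood of feature vector x.
            return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

        flow_features = rng.standard_normal(d)      # features of the current frame
        scores = [log_likelihood(flow_features, mus[e], vars_[e]) for e in range(n_experts)]
        best_expert = int(np.argmax(scores))        # expert used for egomotion estimation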

    Horizon estimation: perceptual and computational experiments

    The human visual system is able to quickly and robustly infer a wealth of scene information -- the scene "gist" -- already after 100 milliseconds of image presentation. Here, we investigated the ability to estimate the position of the horizon in briefly shown images. Being able to judge the horizon position quickly and accurately helps in inferring viewer orientation and scene structure in general, and thus might be an important factor of scene gist. In the first, perceptual study, we investigated participants' horizon estimates after a 150-millisecond, masked presentation of typical outdoor scenes from different scene categories. All images were shown in upright, blurred, inverted, and cropped conditions to investigate the influence of different information types on the perceptual decision. We found that, despite individual variations, horizon estimates were fairly consistent across participants and conformed well to annotated data. In addition, inversion resulted in significant differences in performance, whereas blurring did not, highlighting the importance of global, low-frequency information for judgments about horizon position. In the second, computational experiment, we correlated the performance of several horizon-estimation algorithms with the human data -- the algorithms ranged from simple detection of bright-dark transitions to more sophisticated frequency-spectrum analyses motivated by previous computational modeling of scene classification. Surprisingly, the best fits to human data were obtained with one very simple gradient method and with the most complex, trained method. Overall, global frequency-spectrum analysis provided the best fit to human estimates, which, together with the perceptual data, suggests that the human visual system might use similar mechanisms to quickly judge horizon position as part of the scene gist.
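
    For illustration, a "very simple gradient method" of the kind mentioned above might estimate the horizon as the image row with the strongest bright-to-dark transition in mean row brightness. This sketch is an assumed stand-in, not the paper's exact algorithm:

        import numpy as np

        def estimate_horizon(gray):
            # gray: (H, W) grayscale image with values in [0, 1]; returns a row index.
            row_means = gray.mean(axis=1)    # average brightness per row
            grad = np.diff(row_means)        # vertical gradient of row brightness
            return int(np.argmin(grad)) + 1  # strongest bright-to-dark step

        # Toy image: bright upper half (sky) over a dark lower half (ground).
        img = np.vstack([np.full((40, 64), 0.9), np.full((40, 64), 0.2)])
        print(estimate_horizon(img))         # -> 40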

    Monocular Car Viewpoint Estimation with Circular Regression Forests


    Monocular Heading Estimation in Non-stationary Urban Environment

    Estimating heading reliably from visual cues alone is an important goal in human navigation research as well as in application areas ranging from robotics to automotive safety. The focus of expansion (FoE) is deemed important for this task, yet dynamic and unstructured environments like urban areas still pose an algorithmic challenge. We extend a robust learning framework that operates on optical flow and has at its center a continuous Latent Variable Model (LVM) [1]. It accounts for missing measurements, erroneous correspondences, and independent outlier motion in the visual field of view. The approach bypasses classical camera calibration through learning stages that only require monocular video footage and corresponding platform motion information. To estimate the FoE we present both a numerical method acting on inferred optical flow fields and a regression mapping, e.g., Gaussian process regression. We also present results for mapping to velocity, yaw, and even pitch and roll. Performance is demonstrated on car data recorded in non-stationary urban environments.
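
    One standard numerical route to the FoE, sketched below under simplifying assumptions: for a purely translating camera, each flow vector (u, v) at pixel (x, y) points away from the FoE, giving the linear constraint v*(x - fx) - u*(y - fy) = 0, which stacks into a least-squares problem for (fx, fy). The paper's probabilistic LVM approach is more robust; this only illustrates the geometry:

        import numpy as np

        def estimate_foe(xs, ys, us, vs):
            # Solve v*fx - u*fy = v*x - u*y in the least-squares sense.
            A = np.column_stack([vs, -us])
            b = vs * xs - us * ys
            foe, *_ = np.linalg.lstsq(A, b, rcond=None)
            return foe                        # (fx, fy)

        # Synthetic radial flow field expanding from (30, 20):
        xs, ys = np.meshgrid(np.arange(64.0), np.arange(48.0))
        xs, ys = xs.ravel(), ys.ravel()
        us, vs = 0.1 * (xs - 30.0), 0.1 * (ys - 20.0)
        print(estimate_foe(xs, ys, us, vs))   # ~[30. 20.]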