    FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data

    Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single-cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply an increasing cost of the computing hardware required for in-memory analysis. Here we present a MATLAB toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with a minimal memory footprint. We also present a MATLAB toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly accessible source-code repositories.
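
    FocusStack itself is MATLAB, but the disk-backed access pattern the abstract describes translates readily: memory-map the raw stack so only the frames being touched are ever read, then accumulate a stimulus-triggered average. A minimal Python sketch follows; the headerless uint16 layout, frame size, and function names are illustrative assumptions, not FocusStack's actual file format or API.

```python
import numpy as np

# Illustrative assumptions: a raw binary stack of uint16 frames,
# 512x512 pixels, written frame after frame with no header.
HEIGHT, WIDTH, DTYPE = 512, 512, np.uint16

def open_stack(path, n_frames):
    """Memory-map the stack so frames are read lazily from disk."""
    return np.memmap(path, dtype=DTYPE, mode="r",
                     shape=(n_frames, HEIGHT, WIDTH))

def peristimulus_average(stack, trigger_frames, window=30):
    """Average the `window` frames following each stimulus trigger,
    touching only the needed slices (minimal memory footprint)."""
    acc = np.zeros((window, HEIGHT, WIDTH), dtype=np.float64)
    n = 0
    for t in trigger_frames:
        if t + window <= stack.shape[0]:
            acc += stack[t:t + window]   # reads just this slice from disk
            n += 1
    return acc / max(n, 1)

def dff(psth, baseline_frames=5):
    """Delta-F/F relative to the mean of the first few frames."""
    f0 = psth[:baseline_frames].mean(axis=0)
    return (psth - f0) / np.maximum(f0, 1e-9)
```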

    Driving Scene Perception Network: Real-time Joint Detection, Depth Estimation and Semantic Segmentation

    Demand for high-level autonomous driving has increased in recent years, and visual perception is one of the critical features enabling fully autonomous driving. In this paper, we introduce an efficient approach for simultaneous object detection, depth estimation and pixel-level semantic segmentation using a shared convolutional architecture. The proposed network model, which we name the Driving Scene Perception Network (DSPNet), uses multi-level feature maps and multi-task learning to improve the accuracy and efficiency of object detection, depth estimation and image segmentation from a single input image. The resulting network model uses less than 850 MiB of GPU memory and achieves 14.0 fps on an NVIDIA GeForce GTX 1080 with a 1024x512 input image, and both precision and efficiency are improved over a combination of single-task models.
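
    The abstract's core idea, one shared backbone amortizing three task heads, can be sketched briefly. The following PyTorch snippet is a toy illustration of that structure, not the DSPNet architecture: the layer sizes, head designs, and class counts are assumptions.

```python
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """A small conv encoder whose feature maps feed all three tasks."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.features(x)

class MultiTaskNet(nn.Module):
    """One forward pass yields detection logits, a depth map, and
    per-pixel class scores, amortizing the backbone cost."""
    def __init__(self, n_det_classes=10, n_seg_classes=19):
        super().__init__()
        self.backbone = SharedBackbone()
        # Toy heads: real detectors use anchors/decoders here.
        self.det_head = nn.Conv2d(128, n_det_classes + 4, 1)  # cls + box
        self.depth_head = nn.Conv2d(128, 1, 1)                # depth map
        self.seg_head = nn.Conv2d(128, n_seg_classes, 1)      # per-pixel cls
    def forward(self, x):
        f = self.backbone(x)
        return self.det_head(f), self.depth_head(f), self.seg_head(f)

net = MultiTaskNet()
det, depth, seg = net(torch.randn(1, 3, 512, 1024))  # 1024x512 input
```

    A real training loop would minimize a weighted sum of the three task losses over the shared parameters, which is how the common features come to serve all tasks.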

    Astrometry.net: Blind astrometric calibration of arbitrary astronomical images

    We have built a reliable and robust system that takes as input an astronomical image, and returns as output the pointing, scale, and orientation of that image (the astrometric calibration, or WCS information). The system requires no first guess, and works with the information in the image pixels alone; that is, the problem is a generalization of the "lost in space" problem, in which nothing, not even the image scale, is known. After robust source detection is performed in the input image, asterisms (sets of four or five stars) are geometrically hashed and compared to pre-indexed hashes to generate hypotheses about the astrometric calibration. A hypothesis is only accepted as true if it passes a Bayesian decision-theory test against a background hypothesis. With indices built from the USNO-B Catalog and designed for uniformity of coverage and redundancy, the success rate is 99.9% for contemporary near-ultraviolet and visual imaging survey data, with no false positives. The failure rate is consistent with the incompleteness of the USNO-B Catalog; augmentation with indices built from the 2MASS Catalog brings the completeness to 100% with no false positives. We are using this system to generate consistent and standards-compliant metadata for digital and digitized imaging from plate repositories, automated observatories, individual scientific investigators, and hobbyists. This is the first step in a program of making it possible to trust calibration metadata for astronomical data of arbitrary provenance.
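
    The geometric hashing step admits a compact sketch: the two most widely separated stars of a four-star asterism define a local coordinate frame, and the positions of the remaining two stars in that frame form a code invariant to translation, rotation, and scale. The Python sketch below captures that construction but omits the symmetry-breaking and index lookup of the real system.

```python
import numpy as np

def quad_hash(stars):
    """Geometric hash of a 4-star asterism: the most distant pair
    (A, B) is mapped to (0,0) and (1,1), and the other two stars'
    coordinates in that frame form the 4-vector hash code."""
    stars = np.asarray(stars, dtype=float)       # shape (4, 2)
    pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    a, b = max(pairs, key=lambda p: np.linalg.norm(stars[p[0]] - stars[p[1]]))
    others = [k for k in range(4) if k not in (a, b)]
    za, zb = complex(*stars[a]), complex(*stars[b])
    def to_frame(p):
        # Similarity transform as complex arithmetic:
        # z' = (z - A) / (B - A) * (1 + 1j) sends A -> (0,0), B -> (1,1).
        z = (complex(*p) - za) / (zb - za) * complex(1, 1)
        return z.real, z.imag
    (xc, yc), (xd, yd) = (to_frame(stars[k]) for k in others)
    return xc, yc, xd, yd
```

    In the full pipeline, such codes are matched against pre-indexed hashes with a nearest-neighbour search, and each match becomes a calibration hypothesis to be verified against the background hypothesis.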

    Implementation strategies for hyperspectral unmixing using Bayesian source separation

    Bayesian Positive Source Separation (BPSS) is a useful unsupervised approach for hyperspectral data unmixing, where numerical non-negativity of spectra and abundances has to be ensured, such as in remote sensing. Moreover, it is sensible to impose a sum-to-one (full additivity) constraint on the estimated source abundances in each pixel. Even though non-negativity and full additivity are two necessary properties for physically interpretable results, the use of BPSS algorithms has so far been limited by high computation times and large memory requirements due to the Markov chain Monte Carlo calculations. An implementation strategy which allows one to apply these algorithms to a full hyperspectral image, as is typical in Earth and planetary science, is introduced. The effects of pixel selection, and the impact of such sampling on the relevance of the estimated component spectra and abundance maps as well as on the computation times, are discussed. For that purpose, two different datasets were used: a synthetic one and a real hyperspectral image from Mars.
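
    The Bayesian sampler itself is beyond a snippet, but the implementation strategy, estimating sources on a pixel subset and then solving constrained abundances everywhere, can be sketched. In this Python sketch, non-negativity comes from NNLS and sum-to-one from the common row-augmentation trick; it is a cheap stand-in for, not a reproduction of, the BPSS machinery.

```python
import numpy as np
from scipy.optimize import nnls

def subsample_pixels(cube, n=2000, seed=0):
    """Flatten an (H, W, B) cube and pick a random pixel subset on
    which the expensive (e.g., MCMC) source estimation would run."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=min(n, X.shape[0]), replace=False)
    return X[idx]

def abundances(pixel, endmembers, delta=1e3):
    """Non-negative, sum-to-one abundances for one pixel.
    Sum-to-one is enforced softly by appending a heavily weighted
    row of ones to the endmember matrix (the standard FCLS trick)."""
    B, P = endmembers.shape            # bands x endmembers
    M = np.vstack([endmembers, delta * np.ones((1, P))])
    y = np.append(pixel, delta)
    a, _ = nnls(M, y)                  # a >= 0, sum(a) ~= 1
    return a
```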

    Analytical modelling in Dynamo

    BIM serves as a modern database for civil engineering. Its recent development allows both the geometrical and the analytical information of a structure to be preserved. The analytical model described in this paper is derived automatically from the BIM model of a structure, but in most cases it requires manual improvement before being sent to FEM software. The Dynamo visual programming language was used to handle the analytical data. The authors developed a program which corrects the faulty analytical model obtained from the BIM geometry, thus providing better automation for preparing the FEM model. The program logic is explained and test cases are shown.
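
    One typical correction of a faulty analytical model is snapping near-coincident member endpoints onto shared nodes: bars exported from BIM often end a few millimetres apart, and FEM software then sees a disconnected structure. The sketch below is tool-agnostic Python of the kind that could sit inside a Dynamo Python node; it is an illustrative guess at one such fix, not the authors' program.

```python
TOL = 0.005  # metres; snap endpoints closer than this to one node

def snap_endpoints(bars, tol=TOL):
    """bars: list of ((x, y, z), (x, y, z)) analytical member endpoints.
    Returns bars with near-coincident endpoints merged onto shared
    nodes, so downstream FEM sees a connected model."""
    nodes = []  # representative nodes found so far
    def canonical(p):
        for q in nodes:
            if sum((a - b) ** 2 for a, b in zip(p, q)) <= tol ** 2:
                return q          # reuse the existing nearby node
        nodes.append(p)
        return p
    return [(canonical(p0), canonical(p1)) for p0, p1 in bars]
```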

    Automated archiving of archaeological aerial images

    The main purpose of any aerial photo archive is to allow quick access to images based on content and location. Therefore, next to a description of technical parameters and depicted content, georeferencing of every image is of vital importance. This can be done either by identifying the main photographed object (georeferencing of the image content) or by mapping the center point and/or the outline of the image footprint. The paper proposes a new image archiving workflow. The new pipeline is based on the parameters that are logged by a commercial but cost-effective GNSS/IMU solution and processed with in-house-developed software. Together, these components allow one to automatically geolocate and rectify the (oblique) aerial images (by a simple planar rectification using the exterior orientation parameters) and to retrieve their footprints with reasonable accuracy, which are automatically stored as a vector file. The data of three test flights were used to determine the accuracy of the device, which turned out to be better than 1° for roll and pitch (mean between 0.0° and 0.21°, with a standard deviation of 0.17–0.46°) and better than 2.5° for yaw angles (mean between 0.0° and −0.14°, with a standard deviation of 0.58–0.94°). This accuracy proved sufficient to enable fast and almost fully automatic GIS-based archiving of all of the imagery.
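
    The footprint retrieval amounts to intersecting the rays through the four sensor corners with the ground, given the logged exterior orientation. A minimal flat-terrain Python sketch follows; the axis conventions and the assumption of level ground are simplifications of what a production pipeline would do.

```python
import numpy as np

def rot(roll, pitch, yaw):
    """Camera-to-world rotation from roll/pitch/yaw in degrees
    (the axis convention here is an illustrative assumption)."""
    r, p, y = np.radians([roll, pitch, yaw])
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def footprint(cam_xyz, roll, pitch, yaw, focal_mm, sensor_mm, ground_z=0.0):
    """Intersect rays through the four sensor corners with a flat
    ground plane to get the image footprint polygon (4x2 array)."""
    cam_xyz = np.asarray(cam_xyz, dtype=float)
    R = rot(roll, pitch, yaw)
    w, h = sensor_mm
    corners = []
    for sx, sy in [(-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)]:
        d = R @ np.array([sx, sy, -focal_mm])   # corner ray, world frame
        t = (ground_z - cam_xyz[2]) / d[2]      # ray-plane intersection
        corners.append(cam_xyz[:2] + t * d[:2])
    return np.array(corners)
```

    The resulting polygons can then be written to any vector format for GIS-based archiving.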

    Developing a Computer Vision-Based Decision Support System for Intersection Safety Monitoring and Assessment of Vulnerable Road Users

    Vision-based trajectory analysis of road users enables identification of near-crash situations and proactive safety monitoring. The two most widely used surrogate safety measures (SSMs), time-to-collision (TTC) and post-encroachment time (PET), together with a recent variant of TTC, relative time-to-collision (RTTC), were investigated using real-world video data collected at ten signalized intersections in the city of San Diego, California. The performance of these SSMs was compared for the purpose of evaluating pedestrian and bicyclist safety. Potential trajectory intersection points were predicted in order to calculate TTC for every pair of interacting objects, and the average TTC for each pair in critical situations was computed. PET values were estimated by observing potential intersection points, and event frequencies were estimated at three criticality levels. Although RTTC provided useful information regarding the relative distance between objects in time, it was found that in certain conditions where objects are far from each other, the interaction between them was incorrectly flagged as critical on the basis of a small RTTC. Comparison of PET, TTC, and RTTC across the criticality classes also showed that several interactions were identified as critical by one SSM but not by another. These findings suggest that safety evaluations should not rely solely on a single SSM; instead, a combination of different SSMs should be considered to ensure the reliability of evaluations. Video data analysis was conducted to develop object detection and tracking models for automatic identification of vehicles, bicycles, and pedestrians. The outputs of the machine vision models were employed along with the SSMs to build a decision support system for safety assessment of vulnerable road users at signalized intersections. Promising results from the decision support system showed that automated safety evaluations can proactively identify critical events. The work also revealed challenges and future directions for enhancing the performance of the system.
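
    The two headline measures have compact definitions worth making concrete: TTC extrapolates current positions and velocities to the moment two road users would come within a collision envelope, while PET is simply the observed time gap at a conflict point. A minimal Python sketch, with the collision radius as an illustrative parameter standing in for vehicle and pedestrian extents:

```python
import numpy as np

def ttc(p1, v1, p2, v2, radius=2.0):
    """Time-to-collision under a constant-velocity assumption: the
    time until the two users' distance shrinks to `radius`, or inf
    if the gap never closes.  Positions in metres, speeds in m/s."""
    dp, dv = np.subtract(p2, p1), np.subtract(v2, v1)
    a = dv @ dv                        # |dp + dv*t| = radius  =>
    b = 2 * dp @ dv                    # a*t^2 + b*t + c = 0
    c = dp @ dp - radius ** 2
    if a == 0:
        return np.inf                  # no relative motion
    disc = b * b - 4 * a * c
    if disc < 0:
        return np.inf                  # trajectories never get that close
    t = (-b - np.sqrt(disc)) / (2 * a) # earliest crossing time
    return t if t >= 0 else np.inf

def pet(t_first_leaves, t_second_arrives):
    """Post-encroachment time: gap between the first user leaving
    the conflict point and the second user arriving at it."""
    return t_second_arrives - t_first_leaves
```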

    Automated tree-level forest quantification using airborne LiDAR

    Get PDF
    Traditional forest management relies on small field samples and interpretation of aerial photography, which not only are costly to execute but also yield inaccurate estimates of the forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. Overall, 90% of overstory and 47% of understory trees were detected, with false positive rates of 14% and 2%, respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually engineer features. In conclusion, the methods developed are steps toward remote, accurate quantification of large natural forests at the individual tree level.
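
    The vertical stratification step can be illustrated with a simple heuristic: look for sparse bins in the height histogram of the ground-normalized point cloud and cut canopy layers there. The Python sketch below is a deliberately simplified stand-in for the dissertation's stratification method; the bin width, sparsity factor, and gap threshold are assumptions.

```python
import numpy as np

def stratify(z, bin_m=0.5, min_layer_gap=2.0):
    """Split point heights `z` (metres above ground) into canopy
    layers by cutting at sparse height bins flanked by denser ones,
    which suggest a gap between layers.  Returns one integer layer
    label per point (0 = lowest layer)."""
    z = np.asarray(z, dtype=float)
    hist, edges = np.histogram(z, bins=np.arange(0, z.max() + bin_m, bin_m))
    cuts = []
    for i in range(1, len(hist) - 1):
        # A bin much sparser than both neighbours marks a layer boundary.
        if hist[i] < 0.3 * min(hist[i - 1], hist[i + 1]):
            if not cuts or edges[i] - cuts[-1] >= min_layer_gap:
                cuts.append(edges[i])
    labels = np.zeros(len(z), dtype=int)
    for c in cuts:
        labels += (z >= c)             # points above each cut move up a layer
    return labels
```

    Per-layer tree segmentation would then run within each stratum, which is what raises the understory detection rate at the cost of extra false positives.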