
    BuFF: Burst Feature Finder for Light-Constrained 3D Reconstruction

    Robots operating at night using conventional vision cameras face significant challenges in reconstruction due to noise-limited images. Previous work has demonstrated that burst-imaging techniques can partially overcome this issue. In this paper, we develop a novel feature detector that operates directly on image bursts and enhances vision-based reconstruction under extremely low-light conditions. Our approach finds keypoints with well-defined scale and apparent motion within each burst by jointly searching a multi-scale and multi-motion space. Because we describe these features at a stage where the images have a higher signal-to-noise ratio, the detected features are more accurate than the state-of-the-art on conventional noisy images and burst-merged images, and exhibit high precision, recall, and matching performance. We show improved feature performance and camera pose estimates, and demonstrate improved structure-from-motion performance using our feature detector in challenging light-constrained scenes. Our feature finder provides a significant step towards robots operating in low-light scenarios and applications including night-time operations.
    Comment: 7 pages, 9 figures, 2 tables; for the associated project page, see https://roboticimaging.org/Projects/BuFF
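
The core idea of the joint multi-scale, multi-motion search can be sketched in NumPy: for each motion hypothesis, the burst is aligned and merged (raising the SNR) before difference-of-Gaussian extrema are detected across scales. Every name below, and the constant-velocity motion model, is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian filter, kernel truncated at 3 sigma.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, tmp)

def burst_keypoints(burst, motions, sigmas, thresh=0.02):
    """Jointly search a multi-scale, multi-motion space: for each constant
    per-frame shift hypothesis, align and merge the burst, then detect
    difference-of-Gaussian extrema across adjacent scales."""
    found = []
    for v in motions:
        # Undo the hypothesised apparent motion of frame i, then average.
        merged = np.mean(
            [np.roll(f, (-i * v[0], -i * v[1]), axis=(0, 1))
             for i, f in enumerate(burst)], axis=0)
        pyr = [gaussian_blur(merged, s) for s in sigmas]
        for j in range(len(sigmas) - 1):
            dog = np.abs(pyr[j] - pyr[j + 1])
            # Keep pixels that dominate their 8 spatial neighbours.
            peak = np.ones_like(dog, dtype=bool)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy or dx:
                        peak &= dog >= np.roll(dog, (dy, dx), axis=(0, 1))
            for y, x in zip(*np.nonzero(peak & (dog > thresh))):
                found.append((float(dog[y, x]), (int(y), int(x)), sigmas[j], v))
    found.sort(reverse=True)  # strongest joint (scale, motion) response first
    return found
```

A keypoint detected under the correct motion hypothesis sees a sharp, high-contrast merged image, so its response dominates the misaligned hypotheses.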

    Recovering Depth from Still Images for Underwater Dehazing Using Deep Learning

    Estimating depth from a single image is a challenging problem, but it is also interesting due to the large number of applications, such as underwater image dehazing. In this paper, a new perspective is provided: by taking advantage of the underwater haze, which may provide a strong cue to the depth of the scene, a neural network can be used to estimate it. Using this approach, the depth map can be used in a dehazing method to enhance the image and recover original colors, offering a better input to image recognition algorithms and thus improving robot performance during vision-based tasks such as object detection and characterization of the seafloor. Experiments are conducted on different datasets that cover a wide variety of textures and conditions, using a dense stereo depth map as ground truth for training, validation, and testing. The results show that the neural network outperforms other alternatives, such as dark channel prior methods, and is able to accurately estimate depth from a single image after a training stage with depth information.
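
Once a per-pixel depth map is available, the dehazing step can be sketched by inverting the standard haze image-formation model I = J·t + A·(1−t) with transmission t = exp(−β·d). The attenuation coefficient β and airlight A are assumed known here (in practice both would be estimated); this is a generic sketch, not the paper's specific dehazing method:

```python
import numpy as np

def dehaze(image, depth, airlight, beta=1.0, t_min=0.1):
    """Recover the haze-free image J from an observed image I (H, W, 3)
    and a per-pixel depth map (H, W), by inverting I = J*t + A*(1 - t)
    with t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission, broadcast over channels
    t = np.maximum(t, t_min)              # floor t to avoid amplifying sensor noise
    return (image - airlight) / t + airlight
```

Clamping the transmission at `t_min` is a common safeguard: at large depths t approaches zero and the division would otherwise blow up noise in the distant, heavily attenuated regions.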

    SProtP: A Web Server to Recognize Those Short-Lived Proteins Based on Sequence-Derived Features in Human Cells

    Protein turnover metabolism plays important roles in cell cycle progression, signal transduction, and differentiation. Proteins with short half-lives are involved in various regulatory processes. To better understand the regulation of cellular processes, it is important to study the key sequence-derived factors affecting short-lived protein degradation. Until now, most protein half-lives have remained unknown because of the difficulty of measuring them in human cells with traditional experimental methods. To investigate the molecular determinants that affect short-lived proteins, a computational method is proposed in this work to recognize short-lived proteins based on sequence-derived features in human cells. In this study, we have systematically analyzed many features that may correlate with short-lived protein degradation. We find that a large fraction of proteins with signal peptides and transmembrane regions in human cells have short half-lives. Because short-lived proteins play pivotal roles in the control of various cellular processes, we have constructed an SVM-based classifier to recognize them. Applying the SVM model to a human dataset, we achieved 80.8% average sensitivity and 79.8% average specificity on ten testing datasets (TE1–TE10). We also obtained average accuracies of 89.9%, 99%, and 83.9% on the independent validation datasets iTE1, iTE2, and iTE3, respectively. The approach proposed in this paper provides a valuable alternative for recognizing short-lived proteins in human cells, and is more accurate than the traditional N-end rule. Furthermore, the web server SProtP (http://reprod.njmu.edu.cn/sprotp) has been developed and is freely available for users.
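
As an illustration of what a "sequence-derived feature" can look like, the sketch below computes amino-acid composition, one simple feature family that such a classifier could consume. The paper's full feature set (signal peptides, transmembrane regions, and so on) requires dedicated predictors; the function name here is our own:

```python
# The 20 standard amino acids, in a fixed order so that every sequence
# maps to a feature vector with consistent dimensions.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """Fraction of each standard amino acid in `seq` -- one example of a
    sequence-derived feature vector for an SVM-style classifier."""
    seq = seq.upper()
    n = sum(seq.count(a) for a in AMINO_ACIDS) or 1  # guard against empty input
    return [seq.count(a) / n for a in AMINO_ACIDS]
```

Vectors like this (concatenated with other sequence-derived descriptors) would then be fed to an SVM, e.g. an RBF-kernel classifier, for half-life class prediction.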

    A Control-Theoretic Approach to Inertial SLAM

    A team of Unmanned Aerial Vehicles (UAVs) is tasked to explore an unknown environment and to map the features it finds, but must do so without infrastructure-based localisation systems such as the Global Positioning System (GPS) or any a-priori terrain data. The UAVs navigate using a statistical estimation technique known as Simultaneous Localisation And Mapping (SLAM), which allows the simultaneous estimation of the location of the UAV as well as the locations of the features it sees. SLAM offers a unique approach to vehicle localisation, with potential applications including planetary exploration or operation when GPS is denied (for example, under intentional GPS jamming, or in applications where GPS signals cannot be received); more importantly, it can be used to augment existing systems to improve robustness to navigation failure. One key requirement of SLAM is that it must re-observe features, and this has two effects: firstly, it improves the location estimate of the feature; and secondly, it improves the location estimate of the platform because of the statistical correlations that link the platform to the feature. The UAV therefore has two options: should it explore more unknown terrain to find new features, or should it revisit known features to improve localisation quality? Additionally, it is known that the maneuvers the agent performs during feature observations affect the accuracy of the localisation estimates and hence the accuracy of the constructed map. This thesis is concerned with studying the interaction and tight coupling between the processes of SLAM and the motion planning/control of an autonomous intelligent agent. We focus on inertial-sensor-based SLAM due to its applicability to several different vehicle modalities.
Architectures for inertial SLAM are presented for both global- and local-scale environments, with estimation of inertial sensor biases, with both range/bearing and bearing-only terrain sensors, and for both single and multiple vehicles. The aim is to demonstrate a valid theoretical implementation which is used as the foundation for a study of the algorithm. We begin by studying the observability properties of the inertial SLAM algorithm, focussing on the connection between vehicle dynamic maneuvers and the observability of the equations. We then consider the problem of 'active SLAM', where the agent makes intelligent control decisions in order to exploit the coupling between SLAM accuracy and agent motion. The analysis is then extended to the multi-agent case and several control strategies are demonstrated. Simulation results of implementations of the SLAM algorithm, maneuver analysis, and both single- and multi-vehicle active SLAM architectures are presented using a six-degree-of-freedom, multi-UAV simulator.
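
The coupling described above, where re-observing a feature improves both the feature and the platform estimates through the cross-covariance terms that SLAM maintains, can be seen in a minimal EKF measurement update. This is a 1-D toy sketch, not the thesis's inertial SLAM formulation:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update. Because P carries cross-covariances
    between platform and feature states, a single feature re-observation
    shrinks the uncertainty of BOTH state blocks at once."""
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

With a state `[vehicle, feature]` and a relative-position measurement z = x_f − x_v, one update reduces both diagonal entries of P whenever the two states are only partially correlated, which is exactly the mechanism active SLAM exploits when deciding whether to revisit a feature.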

    Detection, Segmentation, and Model Fitting of Individual Tree Stems from Airborne Laser Scanning of Forests Using Deep Learning

    Accurate measurements of the structural characteristics of trees such as height, diameter, sweep, and taper are an important part of forest inventories in managed forests and commercial plantations. Both terrestrial and aerial LiDAR are currently employed to produce pointcloud data from which inventory metrics can be determined. Terrestrial/ground-based scanning typically provides pointcloud resolutions of many thousands of points per m², from which tree stems can be observed and inventory measurements made directly, whereas typical resolutions from aerial scanning (tens of points per m²) require inventory metrics to be regressed from LiDAR variables using inventory reference data collected from the ground. Recent developments in miniaturised LiDAR sensors are enabling aerial capture of pointclouds from low-flying aircraft at high resolutions (hundreds of points per m²), at which tree stem information starts to become directly visible, opening the possibility of plot-scale inventories that do not require access to the ground. In this paper, we develop new approaches to automated tree detection, segmentation, and stem reconstruction using algorithms based on deep supervised machine learning, designed for use with aerially acquired high-resolution LiDAR pointclouds. Our approach is able to isolate individual trees, determine tree stem points, and further build a segmented model of the main tree stem that encompasses tree height, diameter, taper, and sweep. Through the use of deep learning models, our approach is able to adapt to variations in pointcloud densities and partial occlusions that are particularly prevalent when data is captured from the air. We present results of our algorithms on high-resolution LiDAR pointclouds captured from a helicopter over two Radiata pine forests in NSW, Australia.
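
Once stem points in a horizontal slice have been isolated, the diameter at that height can be recovered by fitting a circle to the slice. The algebraic (Kåsa) least-squares fit below is one common choice for this final geometric step; it is a generic sketch, not the paper's deep-learning pipeline, and works even when occlusion leaves only a partial arc of returns:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to an (N, 2) array of
    points -- e.g. the LiDAR returns in one horizontal slice of a
    segmented stem -- returning centre (cx, cy) and radius r."""
    x, y = xy[:, 0], xy[:, 1]
    # Linearised model: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)
```

Repeating the fit at a series of heights yields the diameter profile from which taper and sweep can then be derived.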