Tracking a Line for Automatic Piloting System of Drone
Nowadays, the FAA is discussing opening low-altitude airspace to registered flights, so Amazon-style drone delivery may become possible; Amazon is already developing its own drone for future delivery. In this project, we implement an automatic piloting system for the AR.Drone that tracks a line with the on-board camera. We use the AR.Drone for our implementation of the line tracker because several SDKs are available for it over its Wi-Fi network and it carries two HD-quality cameras. For the SDK, we employ YADrone, developed at a university in Hamburg, Germany, and combine it with image-processing techniques so that the drone can track the line. Preliminary results show that the drone can successfully track various types of lines, such as straight lines, 90-degree turns, cranks, circles, and arbitrary curves. Using this automatic piloting system, a drone can carry an item from one room to another, and could eventually deliver items outdoors.
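For illustration, a minimal sketch of the line-detection step is shown below, written in Python with OpenCV rather than the Java-based YADrone SDK used in the project; the control calls (drone.send_yaw, drone.send_forward) and the thresholding choice (dark line on a light floor) are hypothetical placeholders, not part of the original system.

```python
# Minimal sketch of the line-tracking logic, assuming an OpenCV pipeline.
# The actual project uses the YADrone SDK (Java); the drone command names
# used in the comments below are hypothetical placeholders.
import cv2
import numpy as np

def line_offset_and_angle(frame_bgr):
    """Return the horizontal offset (pixels) and heading angle (degrees) of
    the dominant line in the lower half of the camera frame, or None."""
    h, w = frame_bgr.shape[:2]
    roi = frame_bgr[h // 2:, :]                      # look at the lower half only
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Assumes a dark line on a light floor; invert the threshold otherwise.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                  # no line visible
    cx = m["m10"] / m["m00"]                         # centroid x of the line pixels
    offset = cx - w / 2.0                            # >0 means the line is to the right
    # Fit a line to the pixels to estimate its heading relative to the drone.
    ys, xs = np.nonzero(mask)
    vx, vy, _, _ = cv2.fitLine(np.column_stack([xs, ys]).astype(np.float32),
                               cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    angle = np.degrees(np.arctan2(vy, vx))
    return offset, angle

# Hypothetical control loop: yaw proportionally to the offset and move
# forward while a line is detected, e.g.
#   offset, angle = line_offset_and_angle(frame)
#   drone.send_yaw(k_yaw * offset); drone.send_forward(speed)
```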
Understanding antecedents of TripAdvisor’s GreenLeader participation
Green concepts and green management are well-recognized strategies for promoting a hotel's competitive advantage. Based on a sample of 43,312 hotels from 328 destinations in 28 countries, we empirically examined antecedents of TripAdvisor's GreenLeader participation. Our binary logit model indicates that hotels with better online reputations, larger size, lower levels of urbanization economies, and higher reliance on business travelers are more likely to join the GreenLeader program. For GreenLeader hotels, online reputation factors and hotel size also explain their ranks. Lastly, implications are discussed.
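The reported specification can be sketched as a standard binary logit fit, for example with statsmodels; the DataFrame and column names below (green_leader, online_rating, rooms, urbanization, business_share) are illustrative stand-ins for the study's variables, not the actual dataset.

```python
# Illustrative sketch of a binary logit specification like the one described
# above, assuming a pandas DataFrame `hotels`; all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_greenleader_logit(hotels: pd.DataFrame):
    model = smf.logit(
        "green_leader ~ online_rating + rooms + urbanization + business_share",
        data=hotels,
    )
    result = model.fit()
    # Positive coefficients on online_rating, rooms, and business_share and a
    # negative coefficient on urbanization would match the reported findings.
    return result.summary()
```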
Gait based gender classification using Kinect sensor
In this project, we propose a novel method to recognize human gender based on gait. We collect samples of walking silhouettes with a Microsoft Kinect sensor and extract gait features from the Gait Energy Image (GEI). The samples are divided into two parts: a training dataset and a testing dataset. We train an SVM classifier on the training set and evaluate it on the testing set. The feature vector used in this project has a low dimension. The experimental results show that our method achieves an accuracy higher than 80%.
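A minimal sketch of such a GEI-plus-SVM pipeline is given below, assuming pre-segmented, size-aligned binary silhouette frames and using scikit-learn; the downsampled, flattened GEI is only a stand-in for the low-dimensional features used in the project.

```python
# Minimal sketch of the GEI + SVM pipeline described above; helper names and
# the feature construction are illustrative, not from the original project.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def gait_energy_image(silhouettes):
    """Average a sequence of aligned binary silhouette frames (T, H, W)
    into a single Gait Energy Image (H, W)."""
    return np.mean(np.asarray(silhouettes, dtype=np.float32), axis=0)

def train_and_evaluate(sequences, labels):
    # One low-dimensional feature vector per walking sequence: here the GEI is
    # simply downsampled and flattened as a stand-in for the paper's features.
    feats = np.stack([gait_energy_image(s)[::8, ::8].ravel() for s in sequences])
    X_train, X_test, y_train, y_test = train_test_split(
        feats, labels, test_size=0.3, random_state=0)
    clf = SVC(kernel="linear").fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))
```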
ZNF335: A Novel Regulator of Stem Cell Proliferation and Cell Fate in the Cerebral Cortex
Though development of the cerebral cortex is of singular importance to human cognition, it remains very poorly understood. Microcephaly, or "small head," is a neurodevelopmental disorder causing significantly reduced cerebral cortex size, and the disease has proved to be a useful model system for elucidating the steps essential for proper cortical development and cognitive function. Many known microcephaly gene products localize to centrosomes, regulating cell fate and proliferation; however, elucidating further microcephaly genes with different functions may shed light on previously unidentified key steps of brain development. We identify and characterize a nuclear zinc finger protein, ZNF335/NIF-1, as a causative gene for severe microcephaly, small somatic size, and neonatal death. Znf335-null mice are embryonically lethal, and conditional knockout leads to severely reduced cortical size. RNA-interference and postmortem human studies show that Znf335 is essential for neural progenitor self-renewal, neurogenesis, and neuronal differentiation. ZNF335 is a component of a vertebrate-specific, trithorax H3K4-methylation complex, directly regulating REST/NRSF, a master regulator of neural gene expression and cell fate, as well as other essential neural-specific genes. Our results reveal ZNF335 as an essential link between H3K4 complexes and REST/NRSF, and provide the first direct genetic evidence that this pathway regulates human neurogenesis and neuronal differentiation.
Analysis Precipitation to Seek a Good Place for Farm
In this poster, we present how to employ the Hadoop system, including HDFS and MapReduce, to analyze precipitation data and find good places for farming. The precipitation data are collected from the National Oceanic and Atmospheric Administration (NOAA), and formulas from the Food and Agriculture Organization of the United Nations (FAO) help identify places with suitable precipitation for a specific crop. To address this, we employ the Hadoop Distributed File System (HDFS) and MapReduce programming, with banana as an example crop. Combining the weather data with the precipitation data, we can identify places that are good for growing bananas. The implemented system uses two MapReduce programs, and Google Earth is used for visualization.
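The per-station aggregation stage can be sketched as a Hadoop Streaming job in Python; the CSV field positions and the monthly-rainfall threshold for banana below are illustrative assumptions, not values taken from the poster or from the FAO formulas.

```python
#!/usr/bin/env python3
# Sketch of one MapReduce stage as a Hadoop Streaming job: sum monthly
# precipitation per station and keep stations that meet an illustrative
# threshold. Input field positions and the 100 mm/month figure are assumptions.
import sys

def mapper():
    for line in sys.stdin:
        fields = line.strip().split(",")
        if len(fields) < 4:
            continue
        # Assumed layout: station_id, date (YYYYMMDD), element, value_mm
        station, month, precip_mm = fields[0], fields[1][:6], fields[3]
        print(f"{station}_{month}\t{precip_mm}")

def reducer(min_monthly_mm=100.0):
    current_key, total = None, 0.0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current_key:
            if current_key is not None and total >= min_monthly_mm:
                print(f"{current_key}\t{total:.1f}")
            current_key, total = key, 0.0
        total += float(value)
    if current_key is not None and total >= min_monthly_mm:
        print(f"{current_key}\t{total:.1f}")

if __name__ == "__main__":
    # Local test: cat noaa.csv | python3 precip.py map | sort | python3 precip.py reduce
    (reducer if len(sys.argv) > 1 and sys.argv[1] == "reduce" else mapper)()
```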
Look, Cast and Mold: Learning 3D Shape Manifold from Single-view Synthetic Data
Inferring the stereo structure of objects in the real world is a challenging yet practical task. Equipping deep models with this ability usually requires abundant 3D supervision, which is hard to acquire. It is promising that we can simply benefit from synthetic data, where pairwise ground truth is easy to access. Nevertheless, the domain gaps are nontrivial considering the variations in texture, shape, and context. To overcome these difficulties, we propose a Visio-Perceptual Adaptive Network for single-view 3D reconstruction, dubbed VPAN. To generalize the model to real scenarios, we propose to fulfill several aspects: (1) Look: visually incorporate spatial structure from the single view to enhance the expressiveness of the representation; (2) Cast: perceptually align the 2D image features to the 3D shape priors with cross-modal semantic contrastive mapping; (3) Mold: reconstruct the stereo shape of the target by transforming embeddings into the desired manifold. Extensive experiments on several benchmarks demonstrate the effectiveness and robustness of the proposed method in learning the 3D shape manifold from synthetic data via a single view. The proposed method outperforms state-of-the-art methods on the Pix3D dataset with IoU 0.292 and CD 0.108, and reaches IoU 0.329 and CD 0.104 on Pascal 3D+.
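The cross-modal semantic contrastive mapping in the Cast step can be illustrated with a symmetric InfoNCE objective in PyTorch; the feature shapes and temperature below are placeholders rather than the actual VPAN components.

```python
# Minimal sketch of the cross-modal contrastive alignment idea behind the
# "Cast" step; encoders producing the two feature sets are assumed to exist
# elsewhere and are not part of the original method's published code here.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(img_feats, shape_feats, temperature=0.07):
    """img_feats: (B, D) features from the single-view image encoder.
    shape_feats: (B, D) features from the paired 3D shape-prior encoder.
    Matched image/shape pairs share the same batch index."""
    img = F.normalize(img_feats, dim=-1)
    shape = F.normalize(shape_feats, dim=-1)
    logits = img @ shape.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric InfoNCE: image-to-shape and shape-to-image directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```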
Entangled View-Epipolar Information Aggregation for Generalizable Neural Radiance Fields
Generalizable NeRF can directly synthesize novel views across new scenes, eliminating the need for scene-specific retraining in vanilla NeRF. A critical enabling factor in these approaches is the extraction of a generalizable 3D representation by aggregating source-view features. In this paper, we propose an Entangled View-Epipolar Information Aggregation method dubbed EVE-NeRF. Different from existing methods that consider cross-view and along-epipolar information independently, EVE-NeRF conducts the view-epipolar feature aggregation in an entangled manner by injecting scene-invariant appearance continuity and geometry consistency priors into the aggregation process. Our approach effectively mitigates the potential lack of inherent geometric and appearance constraints resulting from one-dimensional interactions, thus further boosting the generalizability of the 3D representation. EVE-NeRF attains state-of-the-art performance across various evaluation scenarios. Extensive experiments demonstrate that, compared to prevailing single-dimensional aggregation, the entangled network excels in the accuracy of 3D scene geometry and appearance reconstruction. Our code is publicly available at https://github.com/tatakai1/EVENeRF.
Comment: Accepted by CVPR-202
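The entangled aggregation idea can be sketched as interleaved attention over the view and epipolar (sample) dimensions of the source-view feature volume; the module below is a conceptual PyTorch illustration with placeholder dimensions, not the released EVE-NeRF implementation.

```python
# Conceptual sketch of interleaving cross-view and along-epipolar attention
# over source-view features of shape (rays, samples, views, dim). This only
# illustrates the entangled-aggregation idea; see the official repository
# for the actual architecture.
import torch
import torch.nn as nn

class EntangledAggregation(nn.Module):
    def __init__(self, dim=64, heads=4, blocks=2):
        super().__init__()
        self.view_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(blocks)])
        self.epi_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(blocks)])

    def forward(self, feats):                     # feats: (R, S, V, D)
        R, S, V, D = feats.shape
        x = feats
        for v_attn, e_attn in zip(self.view_attn, self.epi_attn):
            # Cross-view attention: each ray sample attends over the V source views.
            xv = x.reshape(R * S, V, D)
            xv = xv + v_attn(xv, xv, xv, need_weights=False)[0]
            x = xv.reshape(R, S, V, D)
            # Along-epipolar attention: each view attends over the S ray samples.
            xe = x.permute(0, 2, 1, 3).reshape(R * V, S, D)
            xe = xe + e_attn(xe, xe, xe, need_weights=False)[0]
            x = xe.reshape(R, V, S, D).permute(0, 2, 1, 3)
        return x                                  # aggregated (R, S, V, D) features
```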