13 research outputs found

    Motorcycle Racer Training Device

    To date, there are no devices on the motorcycle market that provide the rider with lean information. Our product is uniquely positioned to fill this gap and provide motorcyclists with this information. Our device will assist the enthusiast rider by providing real-time feedback on rider performance. For the race rider, a more sophisticated version of the device will couple GPS tracking capability with lean information and transmit that data to software running on a PC, providing real-time information to a race crew. The crew can then analyze this information to make improvements to machine and rider and better their chances of achieving victory on the circuit.

    Fusion of multimodal imaging techniques towards autonomous navigation

    “Earth is the cradle of humanity, but one cannot live in a cradle forever.” - Konstantin E. Tsiolkovsky, an early pioneer of rocketry and astronautics. Space robotics enables humans to explore beyond our home planet. Traditional techniques for tele-operated robotic guidance make it possible for a driver to direct a rover up to 245.55 million km away. However, relying on manual terrestrial operators for guidance is a key limitation for exploration missions today, as real-time communication between rovers and operators is delayed by long distances and limited uplink opportunities. Moreover, the autonomous guidance techniques in use today are generally limited in scope and capacity: some require special markers to be applied to targets in order to enable detection, while others provide autonomous vision-based flight navigation but only at limited altitudes and in ideal visibility conditions. Improving autonomy is thus essential to expanding the scope of viable space missions. In this thesis, a fusion of monocular visible and infrared imaging cameras is employed to estimate the relative pose of a nearby target while compensating for each spectrum's shortcomings. The robustness of the algorithm was tested in a number of scenarios by simulating harsh space environments while imaging a subject with characteristics similar to those of a spacecraft in orbit. It is shown that the fusion of visual odometries from the two spectra performs well where knowledge of the target's physical characteristics is limited. The result of this thesis research is an autonomous, robust, vision-based tracking system designed for space applications. This appealing solution can be used onboard most spacecraft and adapted to the specific application of any given mission.
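    The fusion step described in this abstract can be sketched as an inverse-variance weighted combination of the two per-spectrum pose estimates. This is a minimal illustration under the assumption of independent Gaussian errors; the function name and 6-DOF state layout are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def fuse_pose_estimates(pose_vis, cov_vis, pose_ir, cov_ir):
    """Fuse visible- and infrared-band pose estimates.

    Assumes each band yields an independent Gaussian estimate of the
    same 6-DOF pose (x, y, z, roll, pitch, yaw); the fused estimate is
    the inverse-covariance (information-form) weighted combination, so
    the noisier band contributes less.
    """
    info_vis = np.linalg.inv(cov_vis)
    info_ir = np.linalg.inv(cov_ir)
    cov_fused = np.linalg.inv(info_vis + info_ir)
    pose_fused = cov_fused @ (info_vis @ pose_vis + info_ir @ pose_ir)
    return pose_fused, cov_fused

# Example: the infrared estimate is noisier, so the fused pose stays
# close to the visible-band estimate while still tightening covariance.
pose_vis = np.array([1.0, 0.0, 5.0, 0.0, 0.0, 0.1])
pose_ir = np.array([1.2, 0.1, 5.3, 0.0, 0.0, 0.2])
cov_vis = np.eye(6) * 0.01   # low-noise visible odometry
cov_ir = np.eye(6) * 0.09    # higher-noise infrared odometry
fused, cov = fuse_pose_estimates(pose_vis, cov_vis, pose_ir, cov_ir)
```

    A useful property of this form is that the fused covariance is always smaller than either input covariance, which matches the claim that fusion compensates for each spectrum's shortcomings.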

    A Comparison of prefilters in ORB-based object detection

    In this paper, we study the effects of prefiltering on Oriented FAST and Rotated BRIEF (ORB)-based object detection. Specifically, we examine the trade-off between execution runtime and the minimum Hamming distance between matched feature descriptors, since ORB uses the minimum distance to determine whether the object is present. Furthermore, we introduce a covariance-based method of choosing the Hamming distance threshold for each of the prefiltered ORB detectors, which compares the minimum Hamming distance values for positive and negative training images. We also use the same method to assess prefilter performance.
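    One plausible reading of the threshold-selection idea above is a spread-weighted split between the minimum-distance statistics of positive and negative training images. The weighting scheme below is an illustrative sketch, not the paper's exact formulation, and the sample distances are made up.

```python
import numpy as np

def choose_threshold(pos_min_dists, neg_min_dists):
    """Pick a Hamming-distance threshold separating the two classes.

    pos_min_dists: minimum match distances on images containing the object
    neg_min_dists: minimum match distances on images without it
    Uses a spread-weighted midpoint between the class means, so the
    threshold sits closer to the tighter (lower-variance) distribution.
    """
    mu_p, sd_p = np.mean(pos_min_dists), np.std(pos_min_dists)
    mu_n, sd_n = np.mean(neg_min_dists), np.std(neg_min_dists)
    # Weight each mean by the other class's spread: a widely scattered
    # negative class pushes the threshold toward the positive mean.
    return (sd_n * mu_p + sd_p * mu_n) / (sd_p + sd_n)

# Illustrative minimum Hamming distances (ORB descriptors are 256-bit,
# so distances range from 0 to 256).
pos = np.array([12, 18, 15, 20, 14])   # object present: small distances
neg = np.array([55, 60, 48, 70, 62])   # object absent: large distances
t = choose_threshold(pos, neg)
detected = pos.mean() < t              # classify by comparing to threshold
```

    A new image is then declared a detection when its minimum matched-descriptor distance falls below the learned threshold.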

    Optimization Techniques for Feature Detection of Orbital Debris

    The amount of space debris is growing exponentially and becoming a pressing concern, as demonstrated by the recent malfunctions of the European Space Agency’s Envisat and the Japanese Space Agency’s Hitomi. In this study, we propose a novel vision-based feature detection technique that would allow the next generation of satellites to perform autonomous obstacle avoidance. The study evaluates the strengths and shortcomings of thermal and visible imaging for detecting an approaching subject. For this purpose, the subject was imaged under various lighting conditions and at various temperature contrasts relative to the ambient temperature. Preliminary results suggested that, at distances greater than 2 m, thermal imaging offers higher detection accuracy than visible imaging. However, outdoor tests showed that both modalities produced similarly poor detection readings: wind fluctuations and turbulence may have degraded the thermal images, while extreme illumination affected the detection results of the visible images.

    A comparison of feature detection in low atmospheric pressure using thermal and visible images

    Due to the proliferation of orbital debris, it is now more crucial than ever for spacecraft to be equipped with autonomous obstacle avoidance capabilities. In this paper, we propose a novel optical navigation technique that detects nearby debris efficiently and accurately in real time by combining feature detection results from thermal and visible images. Furthermore, to demonstrate the robustness of the proposed approach, we compare our results against those obtained using only thermal or only visible imagery, at both sea level and high altitude, to show the accuracy gains that can be achieved. The performance of the object detection algorithm was evaluated by comparing the trade-off between object detection accuracy and overall detection runtime across different atmospheric pressures using the Oriented FAST and Rotated BRIEF (ORB) feature detector.
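    The abstract does not state how the two sensors' detection results are combined, so the sketch below shows one simple possibility: a confidence-weighted vote over normalized ORB match counts. The weights, match counts, and `min_matches` parameter are all illustrative assumptions.

```python
def fuse_detections(vis_matches, ir_matches, vis_weight=0.5, ir_weight=0.5,
                    min_matches=10):
    """Combine visible and thermal ORB match counts into one decision.

    Each sensor votes with its match count normalized against the number
    of matches considered a confident detection; the weights let the
    system favor whichever spectrum is more reliable under the current
    conditions (e.g., thermal at night, visible in daylight).
    """
    vis_score = min(vis_matches / min_matches, 1.0)
    ir_score = min(ir_matches / min_matches, 1.0)
    score = vis_weight * vis_score + ir_weight * ir_score
    return score >= 0.5, score

# At night the visible camera finds almost nothing, but thermal does,
# so a thermal-favoring weighting still yields a detection:
detected, score = fuse_detections(vis_matches=2, ir_matches=15,
                                  vis_weight=0.3, ir_weight=0.7)
```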

    Robust 3D Object Detection

    One of the major challenges of unmanned space exploration is the latency caused by communication delays, which makes tasks such as docking difficult because there is limited opportunity for human intervention. In this paper, we address this issue by proposing an image processing technique capable of real-time, low-power, robust, full 3D object and orientation detection. The Oriented FAST and Rotated BRIEF (ORB) feature detector was selected as the object detection technique for this study. ORB requires just one 2D reference image of the subject to perform robust object detection, which is desirable given the limited storage available onboard a spacecraft. Additionally, Sharif and Hölzel illustrated in a recent study ORB's robustness and invariance to orientation, rotation, and illumination variations. ORB is therefore well suited to guiding a malfunctioning satellite that has no sense of its orientation relative to its surroundings, and it remains robust when external factors are unpredictable, uncontrollable, and quickly changing. However, ORB is a 2D feature detector and cannot differentiate between the surfaces of a subject. Using Bayes' theorem, we propose a new approach to improve confidence in the detection.
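    The Bayesian confidence idea can be sketched as a per-frame belief update: each frame's ORB match evidence updates the probability that the hypothesized surface is actually in view. The likelihood values and update loop below are illustrative assumptions, not the paper's reported model.

```python
def bayes_update(prior, p_obs_given_present, p_obs_given_absent):
    """One Bayesian update of the belief that the target surface is in view.

    prior: current probability that the hypothesized surface is present
    p_obs_given_present / p_obs_given_absent: likelihood of the current
    ORB match evidence under each hypothesis.
    """
    num = p_obs_given_present * prior
    den = num + p_obs_given_absent * (1.0 - prior)
    return num / den

# Three consecutive frames in which ORB reports a strong match; the
# evidence is assumed 8x more likely if the surface really is in view,
# so belief climbs rapidly from an uninformative prior of 0.5.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, p_obs_given_present=0.8,
                          p_obs_given_absent=0.1)
```

    Because the update is multiplicative in the likelihood ratio, a few consistent frames are enough to push the belief close to certainty, while a single ambiguous frame only nudges it.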

    Analysis of Attitude Jitter on the Performance of Feature Detection

    This study explored a vision algorithm's performance during a shaker test designed to reproduce the vibrations caused by the reaction wheels of a spacecraft. In this paper, we analyze the robustness of the feature detection technique by subjecting the thermal and visible imaging cameras to sinusoidal vibrations while they simultaneously execute feature detection of the target.

    Autonomous rock classification using Bayesian image analysis for Rover-based planetary exploration

    A robust classification system is proposed to support autonomous geological mapping of rocky outcrops using grayscale digital images acquired by a planetary exploration rover. The classifier uses 13 Haralick textural parameters to describe the surface of rock samples, automatically catalogues this information into a 5-bin data structure, computes Bayesian probabilities, and outputs an identification. The system has been demonstrated using a library of 30 digital images of igneous, sedimentary, and metamorphic rocks. The images are 3.5 × 3.5 cm in size and composed of 256 × 256 pixels with 256 grayscale levels. They are first converted to gray level co-occurrence matrices, which quantify the number of times adjacent pixels of similar intensity occur. The Haralick parameters are computed from these matrices. When all 13 parameters are used, classification accuracy, defined using an empirical scoring system, is 65% due to a large number of false positives. When the number and choice of parameters are optimized, classification accuracy increases to 80%. The best results were achieved with three parameters that can be interpreted visually (angular second moment, contrast, correlation), together with two statistical parameters (sum of squares variance and difference variance) and a parameter derived from information theory (information measure of correlation II). The system has been kept simple so as not to draw excessive computational power from the rover. It could, however, be easily extended to handle additional parameters such as images acquired at different wavelengths.
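    The co-occurrence step above can be sketched compactly: build a gray level co-occurrence matrix (GLCM) for horizontally adjacent pixels, then derive Haralick parameters from it. This minimal sketch computes only two of the 13 parameters (angular second moment and contrast), and uses 8 gray levels and 16 × 16 images rather than the study's 256-level, 256 × 256 configuration.

```python
import numpy as np

def glcm(img, levels=8):
    """Gray level co-occurrence matrix for horizontally adjacent pixels.

    Counts how often a pixel of intensity i sits immediately left of a
    pixel of intensity j, then normalizes the counts to probabilities.
    """
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def haralick_asm_contrast(p):
    """Two of the 13 Haralick parameters used by the classifier."""
    idx = np.arange(p.shape[0])
    asm = np.sum(p ** 2)                                # angular second moment
    contrast = np.sum((idx[:, None] - idx[None, :]) ** 2 * p)
    return asm, contrast

# A uniform texture has maximal ASM and zero contrast; a noisy texture
# spreads probability across the GLCM, lowering ASM and raising contrast.
rng = np.random.default_rng(0)
flat = np.zeros((16, 16), dtype=int)
noisy = rng.integers(0, 8, size=(16, 16))
asm_flat, c_flat = haralick_asm_contrast(glcm(flat))
asm_noisy, c_noisy = haralick_asm_contrast(glcm(noisy))
```

    In the full system, a vector of such parameters per image would be binned and fed to the Bayesian classifier described in the abstract.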