6 research outputs found

    ZigBee(2.4G) Wireless Sensor Network Application on Indoor Intrusion Detection

    Sponsorship: IEEE. Conference type: international. Conference dates: 2015-06-06 to 2015-06-08. Format: electronic. Call for papers: yes. Location: Taipei, Taiwan (National Taiwan University of Science and Technology).

    Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots

    Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors, and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically, and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free-space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm that interpolates the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free-space detection algorithm.
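
    The resolution-matching step lends itself to a short illustration. The following is a minimal sketch, assuming sparse LiDAR returns have already been projected into the image plane, that uses scikit-learn's GaussianProcessRegressor to interpolate a dense depth map together with a per-pixel uncertainty estimate; the coordinates, kernel, and parameters are illustrative, not the authors' implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Sparse LiDAR returns projected to pixel coordinates (u, v) with depth d (m).
    uv_sparse = np.array([[40, 10], [80, 12], [120, 15], [160, 11]], dtype=float)
    d_sparse = np.array([8.2, 7.9, 9.1, 12.4])

    # A smooth RBF prior over the image plane plus a noise term for sensor jitter.
    kernel = 1.0 * RBF(length_scale=25.0) + WhiteKernel(noise_level=0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(uv_sparse, d_sparse)

    # Query the dense camera grid; the GP returns both an interpolated depth and
    # a per-pixel standard deviation that quantifies interpolation uncertainty.
    uu, vv = np.meshgrid(np.arange(0, 200, 4), np.arange(0, 20, 4))
    uv_dense = np.stack([uu.ravel(), vv.ravel()], axis=1)
    depth_mean, depth_std = gp.predict(uv_dense, return_std=True)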

    Exploiting the Nonlinear Stiffness of Origami Folding to Enhance Robotic Jumping Performance

    This research investigates the effects of using origami folding techniques to develop a nonlinear jumping mechanism with optimized dynamic performance. A previous theoretical investigation has shown the benefits of using a nonlinear spring element rather than a linear spring for improving the dynamic performance of a jumper. This study sets out to experimentally verify the effectiveness of utilizing nonlinear stiffness to achieve optimized jumping performance. The Tachi-Miura Polyhedron (TMP) origami structure is used as the nonlinear energy-storage element connecting two end-point masses. The TMP bellow exhibits a “strain-softening” nonlinear force-displacement behavior, resulting in increased energy storage compared to a linear spring. The geometric parameters of the structure are optimized to improve air-time and maximum jumping height. An additional TMP structure was designed to exhibit a close-to-linear force-displacement response, to serve as the representative linear spring element. A critical challenge in this study is to minimize the hysteresis and energy loss of the TMP during its compression stage before jumping. To this end, the plastically annealed lamina emergent origami (PALEO) concept is used to modify the creases of the structure in order to reduce hysteresis during the compression cycle. PALEO works by increasing the folding limit before plastic deformation occurs, thus improving the energy retention of the structure. Steel shim stock is secured to the facets of the TMP structure to serve as end-point masses, and the air-time and jumping height of both structures are measured and compared. The nonlinear TMP structure achieves roughly a 9% improvement in air-time and a 12% improvement in jumping height compared to the linear TMP structure. These results validate the theoretical benefits of utilizing nonlinear spring elements in jumping mechanisms, can improve the performance of dynamic systems that rely on springs for energy storage, and may lead to a new generation of more efficient jumping mechanisms.
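
    The modelling idea can be sketched in a few lines: under an assumed tanh force law standing in for the TMP's strain-softening response, with assumed masses and geometry, the following integrates the release phase of a two-mass jumper and estimates air-time from the centre-of-mass velocity at liftoff. It is an illustration of the concept, not the authors' code.

    import numpy as np
    from scipy.integrate import solve_ivp

    g = 9.81
    m1, m2 = 0.02, 0.02          # bottom and top end-point masses (kg), assumed
    L0 = 0.08                    # spring natural length (m), assumed
    F_MAX, K0 = 8.0, 400.0       # softening force-law parameters, assumed

    def spring_force(s):
        """Strain-softening force (N) at compression s (m); tanh saturates
        toward F_MAX, so stiffness falls off as deflection grows."""
        return F_MAX * np.tanh(K0 * s / F_MAX)

    def phase1(t, y):
        """Bottom mass pinned to the ground; integrate the top mass only."""
        y2, v2 = y
        return [v2, -g + spring_force(L0 - y2) / m2]

    def liftoff(t, y):
        """Bottom mass leaves the ground once spring tension exceeds its weight."""
        return spring_force(L0 - y[0]) + m1 * g
    liftoff.terminal, liftoff.direction = True, -1

    # Release from 60% compression and integrate until the liftoff event fires.
    sol = solve_ivp(phase1, (0.0, 0.5), [0.4 * L0, 0.0], events=liftoff,
                    max_step=1e-4)
    y2, v2 = sol.y[:, -1]

    # After liftoff the centre of mass is ballistic; air-time is the time for
    # the centre of mass to return to its liftoff height.
    v_com = m2 * v2 / (m1 + m2)      # bottom-mass velocity is zero at liftoff
    print(f"liftoff top-mass speed {v2:.2f} m/s, air time {2 * v_com / g:.3f} s")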

    Using Origami Folding Techniques to Study the Effect of Non-Linear Stiffness on the Performance of Jumping Mechanism

    This research uses origami patterns and folding techniques to generate non-linear force-displacement profiles and study their effect on jumping mechanisms. In this case, the jumping mechanism is comprised of two masses connected by a Tachi-Miura Polyhedron (TMP) with non-linear stiffness characteristics under tensile and compressive loads. The strain-softening behavior exhibited by the TMP enables us to optimize the design of the structure for improved jumping performance. I derive the equations of motion of the jumping process for the given mechanism and combine them with the kinematics of the TMP structure to obtain numerical solutions for the optimum design. The results identify the geometric configurations of the TMP that achieve the two optimization objectives: maximum time spent in the air and maximum clearance off the ground. I then physically manufacture the design and conduct compression tests to measure the force-displacement response and confirm it against the theoretical kinematics-based approach. Experimental data from the compression tests show a hysteresis problem: the force-displacement profile exhibits different behavior depending on whether the structure is being compressed or released. I investigate two methods to nullify this hysteresis and discuss their results. This research can lead to easily manufacturable jumping robotic mechanisms with improved energy storage and jumping performance. Additionally, I learn more about how to use origami techniques to harness unique stiffness properties and apply them to a variety of scenarios.
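
    The hysteresis problem can be quantified as the area enclosed between the loading and unloading force-displacement curves. Below is a minimal sketch using synthetic curves as stand-ins for compression-test data; the force law and the recovery factor are assumptions.

    import numpy as np
    from scipy.integrate import trapezoid

    x = np.linspace(0.0, 0.05, 200)                  # displacement (m)
    f_load = 8.0 * np.tanh(60.0 * x)                 # compression stroke (N), assumed
    f_unload = 0.85 * f_load                         # release stroke returns less force

    e_in = trapezoid(f_load, x)                      # work done compressing (J)
    e_out = trapezoid(f_unload, x)                   # work recovered on release (J)
    print(f"stored {e_in:.3f} J, recovered {e_out:.3f} J, "
          f"hysteresis loss {100 * (1 - e_out / e_in):.1f}%")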

    Flexi-WVSNP-DASH: A Wireless Video Sensor Network Platform for the Internet of Things

    Video capture, storage, and distribution in wireless video sensor networks (WVSNs) critically depend on the resources of the nodes forming the sensor networks. In the era of big data, the Internet of Things (IoT), and distributed demand and solutions, there is a need for multi-dimensional sensor network data that is easily accessible and consumable by humans as well as machines. Images and video are expected to become as ubiquitous as scalar data is in traditional sensor networks. The inception of video streaming over the Internet heralded relentless research into effective ways of distributing video in a scalable and cost-effective way. There have been novel implementation attempts across several network layers. Due to the inherent complications of backward compatibility and the need for standardization across network layers, attention has refocused on addressing video distribution at the application layer. As a result, a few video streaming solutions over the Hypertext Transfer Protocol (HTTP) have been proposed. Most notable are Apple’s HTTP Live Streaming (HLS) and the Moving Picture Experts Group’s Dynamic Adaptive Streaming over HTTP (MPEG-DASH). These frameworks do not address typical and future WVSN use cases. A highly flexible Wireless Video Sensor Network Platform and compatible DASH (WVSNP-DASH) are introduced. The platform's goal is to usher in video as a data element that can be integrated into traditional and non-Internet networks. A low-cost, scalable node is built from the ground up to be fully compatible with the Internet of Things Machine-to-Machine (M2M) concept and to be easily re-targeted to new applications in a short time. The Flexi-WVSNP design includes a multi-radio node, middleware for sensor operation and communication, a cross-platform client-facing data retriever/player framework, scalable security, and a cohesive but decoupled hardware and software design.
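
    As a rough illustration of the client-side adaptation step shared by HLS, MPEG-DASH, and a DASH-compatible retriever like the one proposed here, the sketch below parses a minimal MPD manifest and picks the highest-bitrate representation that fits a measured throughput; the manifest snippet and the throughput figure are illustrative.

    import xml.etree.ElementTree as ET

    MPD = """<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
      <Period><AdaptationSet mimeType="video/mp4">
        <Representation id="low"  bandwidth="250000"  width="640"  height="360"/>
        <Representation id="mid"  bandwidth="800000"  width="1280" height="720"/>
        <Representation id="high" bandwidth="2400000" width="1920" height="1080"/>
      </AdaptationSet></Period></MPD>"""

    NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}
    reps = ET.fromstring(MPD).findall(".//mpd:Representation", NS)

    measured_bps = 1_000_000   # throughput estimate from recent segment downloads
    viable = [r for r in reps if int(r.get("bandwidth")) <= measured_bps]
    choice = max(viable, key=lambda r: int(r.get("bandwidth")))
    print(f"selected representation {choice.get('id')} at {choice.get('bandwidth')} bps")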

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to manually program decision-making logic for every eventuality. While recent developments in data-driven solutions such as deep learning enable machines to learn effectively from large datasets, applications of such techniques within safety-critical systems such as driverless cars remain scarce.

    Autonomous vehicles need to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research focuses heavily on road and highway environments and discounts pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable. Only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility: multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive driving policy and therefore make autonomous decisions.

    To facilitate the autonomous decisions necessary for safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are specific datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in house as part of this research activity. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It utilizes an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments, enabling an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that this online learning mechanism is superior to one-off training of deep neural networks, which require large datasets to generalize to unfamiliar surroundings.
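
    A minimal sketch of this online, uncertainty-driven update idea follows, using scikit-learn's SGDClassifier with partial_fit; the feature extraction, the ultrasound pseudo-labels, and the uncertainty threshold are placeholders rather than the thesis implementation.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier(loss="log_loss")          # supports partial_fit updates
    model.partial_fit(np.random.rand(32, 8), np.random.randint(0, 2, 32),
                      classes=[0, 1])               # bootstrap on seed data

    def update_on_frame(features, ultrasound_label):
        """One online step: query the model, and learn from the ultrasound
        pseudo-label wherever the camera-based prediction is uncertain."""
        p_free = model.predict_proba(features)[:, 1]
        uncertain = np.abs(p_free - 0.5) < 0.15     # active-learning selection
        if uncertain.any():
            model.partial_fit(features[uncertain], ultrasound_label[uncertain])
        return p_free > 0.5                          # current free-space mask
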
    The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence; within the spectrum of intelligent mobility, it is imperative that an autonomous vehicle be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging sensors. The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding 3-dimensional data is converted to a Fisher Vector representation before being classified by a deep Convolutional Neural Network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%, and when compared to an alternative point-cloud classifier, PointNet [1], [2], it outperformed on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, constitutes the major contribution of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
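
    The Fisher Vector step can be outlined compactly. The sketch below is a simplified first-order encoding under assumed settings (a 16-component diagonal GMM, random stand-in points), not the thesis pipeline: it fits a Gaussian-mixture vocabulary to 3-D points and encodes a segmented region as normalized mean-gradient statistics suitable for a downstream CNN.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_vector(points, gmm):
        """Encode an (N, 3) point set as gradients of the GMM log-likelihood
        with respect to the component means (a reduced, first-order FV)."""
        q = gmm.predict_proba(points)                       # (N, K) soft assignments
        diff = points[:, None, :] - gmm.means_[None, :, :]  # (N, K, 3)
        grad = (q[:, :, None] * diff / gmm.covariances_[None, :, :]).sum(0)
        fv = (grad / (len(points) * np.sqrt(gmm.weights_)[:, None])).ravel()
        fv = np.sign(fv) * np.sqrt(np.abs(fv))              # power normalisation
        return fv / (np.linalg.norm(fv) + 1e-12)            # L2 normalisation

    # Fit the GMM vocabulary on training points, then encode each human segment.
    gmm = GaussianMixture(n_components=16, covariance_type="diag", random_state=0)
    gmm.fit(np.random.randn(2000, 3))        # stand-in for real LiDAR points
    descriptor = fisher_vector(np.random.randn(400, 3), gmm)   # length 16 * 3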