3 research outputs found

    Exploration of Deep Learning Applications on an Autonomous Embedded Platform (Bluebox 2.0)

    Indiana University-Purdue University Indianapolis (IUPUI)
    An autonomous vehicle depends on a combination of the latest technologies, namely ADAS safety features such as Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), Automatic Parking, Blind Spot Monitoring, Forward Collision Warning or Avoidance (FCW or FCA), and Lane Departure Warning. The current trend is to implement these features with artificial or deep neural networks in place of the traditionally used algorithms. Recent research in deep learning and the development of capable processors for autonomous or self-driving cars have shown ample promise, but hardware deployment remains difficult because of limited resources such as memory, computational power, and energy. Deploying the aforementioned ADAS safety features with multiple sensors and individual processors also increases integration complexity and results in a distributed system, a pivotal concern for autonomous vehicles. This thesis tackles two important ADAS safety features, Forward Collision Warning and Object Detection, using machine learning and deep neural networks, and their deployment on an autonomous embedded platform: 1. a machine learning based approach to the forward collision warning system in an autonomous vehicle; 2. 3-D object detection using lidar and camera, based primarily on lidar point clouds. The proposed forward collision warning model is based on a forward-facing automotive radar that provides sensed input values such as acceleration, velocity, and separation distance to a classifier which, using a supervised learning model, alerts the driver of a possible collision. Decision Trees, Linear Regression, Support Vector Machines, Stochastic Gradient Descent, and a Fully Connected Neural Network are used for prediction. The second proposed method uses an object detection architecture that combines 2D object detectors with contemporary 3D deep learning techniques. In this approach, a 2D object detector first proposes a 2D bounding box on the images or video frames. A 3D object detection technique then instance-segments the point clouds and, based on raw point cloud density, predicts a 3D bounding box around the previously segmented objects.
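
    As a rough illustration of the first approach, the sketch below trains the classifier families named in the abstract on synthetic radar-style features (acceleration, closing velocity, separation distance) with a toy time-to-collision label. The feature distributions, the 2-second label rule, and the scikit-learn model choices (SGDClassifier standing in for the stochastic gradient descent method, MLPClassifier for the fully connected network) are assumptions for illustration, not the thesis's actual pipeline.

    ```python
    # Hypothetical sketch of a radar-based forward collision warning classifier.
    # Data is synthetic; the real system would use sensed radar values.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.linear_model import SGDClassifier
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n = 2000
    accel = rng.normal(0.0, 2.0, n)        # m/s^2, ego acceleration (assumed range)
    rel_vel = rng.normal(-5.0, 4.0, n)     # m/s, relative velocity (negative = closing)
    distance = rng.uniform(2.0, 120.0, n)  # m, separation to the lead vehicle

    # Toy labeling rule: a time-to-collision under ~2 s counts as a warning case.
    ttc = distance / np.maximum(-rel_vel, 1e-3)
    label = (ttc < 2.0).astype(int)

    X = np.column_stack([accel, rel_vel, distance])
    X_train, X_test, y_train, y_test = train_test_split(X, label, random_state=0)

    models = {
        "decision_tree": DecisionTreeClassifier(max_depth=5),
        "svm": SVC(),
        "sgd": SGDClassifier(loss="log_loss"),
        "mlp": MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(f"{name}: held-out accuracy = {model.score(X_test, y_test):.3f}")
    ```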

    Real-Time 3-D Segmentation on An Autonomous Embedded System: using Point Cloud and Camera

    Present-day autonomous vehicles rely on several sensor technologies for their autonomous functionality. Based on their type and mounting location on the vehicle, these sensors can be categorized as line-of-sight and non-line-of-sight sensors, and they are responsible for different levels of autonomy. Line-of-sight sensors are used for actions related to localization, object detection, and overall environment understanding. Environment understanding for an autonomous vehicle can be achieved through segmentation. Several traditional and deep learning techniques providing semantic segmentation of camera input already exist; with advances in computing processors, however, the trend is toward deep learning applications that replace the traditional methods. This paper presents an approach that combines camera and lidar input for semantic segmentation. The proposed model for outdoor scene segmentation is based on Frustum PointNets and ResNet, which use the 3D point cloud and camera input to predict 3D bounding boxes around moving and non-moving objects, thus recognizing and understanding the scene at the point-cloud or pixel level. For real-time application, the model is deployed on the RTMaps framework with Bluebox (an embedded platform for autonomous vehicles). The proposed architecture is trained on the Cityscapes and KITTI datasets.
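
    For context, here is a minimal sketch of the frustum-extraction step that Frustum PointNets-style pipelines use to fuse the two modalities: lidar points are projected into the image plane with a camera projection matrix, and points landing inside a 2D detection box are kept for downstream 3D instance segmentation. The function name and the (3, 4) projection-matrix convention (as in KITTI calibration) are assumptions, not the paper's code.

    ```python
    # Sketch: keep only the lidar points that fall inside a 2D detection box
    # when projected into the image (the "frustum" for one detected object).
    import numpy as np

    def frustum_points(points_xyz, P, box_2d):
        """points_xyz: (N, 3) lidar points in the camera coordinate frame.
        P: (3, 4) camera projection matrix (e.g., from KITTI calibration).
        box_2d: (xmin, ymin, xmax, ymax) from a 2D object detector."""
        n = points_xyz.shape[0]
        pts_h = np.hstack([points_xyz, np.ones((n, 1))])  # homogeneous coords
        proj = (P @ pts_h.T).T                            # (N, 3) image-plane coords
        uv = proj[:, :2] / proj[:, 2:3]                   # perspective divide
        xmin, ymin, xmax, ymax = box_2d
        in_box = ((uv[:, 0] >= xmin) & (uv[:, 0] <= xmax) &
                  (uv[:, 1] >= ymin) & (uv[:, 1] <= ymax) &
                  (proj[:, 2] > 0))                       # in front of the camera
        return points_xyz[in_box]
    ```

    The returned frustum is what a PointNet-style segmentation network would then consume to separate the object's points from background before fitting the 3D box.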

    Adaptive approximate computing in Edge AI and IoT applications: a review

    Recent advancements in hardware and software systems have been driven by the deployment of emerging smart health and mobility applications. These developments have modernized traditional approaches by replacing conventional computing systems with cyber-physical and intelligent systems that combine the Internet of Things (IoT) with Edge Artificial Intelligence. Despite the many advantages and opportunities of these systems across various application domains, the scarcity of energy, extensive computing needs, and limited communication must be considered when orchestrating their deployment. Inducing savings along these dimensions is central to the Approximate Computing (AxC) paradigm, in which the accuracy of some operations is traded off against reductions in energy, latency, and/or communication. Unfortunately, little attention has been paid to the dynamics of the environments in which AxC-equipped IoT systems operate. We bridge this gap by surveying adaptive AxC techniques applied to three emerging application domains, namely autonomous driving, smart sensing and wearables, and positioning, paying special attention to hardware acceleration. We discuss the challenges of such applications, how adaptive AxC can aid their deployment, and which savings it can bring based on traits of the data and devices involved. The resulting insights may serve as inspiration to researchers, engineers, and students active within the considered domains.
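
    As one concrete (entirely hypothetical) illustration of an adaptive AxC knob of the kind the review surveys, the sketch below applies loop perforation to a sensor-averaging task and widens the sampling stride as the battery drains, trading estimate accuracy for fewer operations. The adaptation policy and numbers are invented for illustration.

    ```python
    # Illustrative adaptive approximate computing: loop perforation whose
    # aggressiveness adapts to a device condition (here, battery level).
    import numpy as np

    def approximate_mean(samples, stride):
        """Estimate the mean using only every `stride`-th sample (loop perforation)."""
        return float(np.mean(samples[::stride]))

    def adapt_stride(battery_level, max_stride=8):
        """Trade accuracy for energy: lower battery -> coarser, cheaper estimate."""
        return max(1, int(round(max_stride * (1.0 - battery_level))))

    signal = np.random.default_rng(1).normal(25.0, 0.5, 10_000)  # e.g., temperature
    for battery in (1.0, 0.5, 0.1):
        stride = adapt_stride(battery)
        est = approximate_mean(signal, stride)
        print(f"battery={battery:.1f} stride={stride} estimate={est:.3f}")
    ```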