
    Object Detection in Omnidirectional Images

    Nowadays, computer vision (CV) is widely used to solve real-world problems, which pose increasingly difficult challenges. In this context, the use of omnidirectional video in a growing number of applications, along with the fast development of Deep Learning (DL) algorithms for object detection, drives the need for further research to improve existing methods originally developed for conventional 2D planar images. However, the geometric distortion that common sphere-to-plane projections produce, most visible in objects near the poles, together with the lack of open-source labeled omnidirectional image datasets, has made an accurate spherical image-based object detection algorithm a hard goal to achieve. This work contributes to the development of datasets and machine learning models particularly suited for omnidirectional images, represented in planar format through the well-known Equirectangular Projection (ERP). To this aim, DL methods are explored to improve the detection of visual objects in omnidirectional images by considering the inherent distortions of ERP. An experimental study was first carried out to determine whether the error rate and type of detection errors were related to the characteristics of ERP images. This study revealed that the error rate of object detection using existing DL models with ERP images actually depends on the object's spherical location in the image. Then, based on these findings, a new object detection framework is proposed to obtain a uniform error rate across all regions of the spherical image. The results show that the pre- and post-processing stages of the implemented framework effectively contribute to reducing the performance dependency on the image region, evaluated by the above-mentioned metric.
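    The latitude-dependent distortion described in the abstract can be illustrated with a small sketch. The mapping below is the standard equirectangular projection; the stretch-factor function is not from the thesis but follows directly from the projection's geometry (a parallel of circumference proportional to cos(latitude) is stretched to the full image width):

    ```python
    import math

    def sphere_to_erp(lon_deg, lat_deg, width, height):
        """Map spherical coordinates (longitude, latitude in degrees)
        to pixel coordinates in an equirectangular (ERP) image."""
        x = (lon_deg + 180.0) / 360.0 * width
        y = (90.0 - lat_deg) / 180.0 * height
        return x, y

    def horizontal_stretch(lat_deg):
        """Horizontal stretch factor of ERP at a given latitude: a parallel
        of circumference 2*pi*R*cos(lat) is mapped onto the full image
        width, so objects widen by a factor of 1/cos(lat)."""
        return 1.0 / math.cos(math.radians(lat_deg))

    # An object at the equator keeps its width (factor 1.0); at 60 degrees
    # of latitude it is already stretched to roughly twice its width,
    # which is why detectors trained on planar images degrade near the poles.
    print(horizontal_stretch(0.0))   # 1.0
    ```
    
    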

    IMAT: A Lightweight IoT Network Intrusion Detection System based on Machine Learning techniques

    Internet of Things (IoT) is one of the fastest-expanding technologies today and promises to be revolutionary for the near future. IoT systems offer remarkable convenience through centralized, computerized control of electronic devices. This technology allows various physical devices, home appliances, vehicles, etc., to be interconnected and exposed to the Internet. On the other hand, it entails the fundamental need to protect the network from adversarial and unwanted alterations. To prevent such threats, it is necessary to employ Intrusion Detection Systems (IDSs), which can be used in information environments to monitor for identified threats or anomalies. The most recent and efficient IDS applications involve the use of Machine Learning (ML) techniques, which can automatically detect and prevent malicious attacks such as distributed denial-of-service (DDoS), a recurring threat to IoT networks in recent years. The work presented in this thesis has a twofold purpose: to build and test different lightweight Machine Learning models that achieve good performance while running on resource-constrained devices, and to present a novel network-based Intrusion Detection System, built on such devices, that can automatically detect IoT attack traffic. Our proposed system consists of deploying small low-powered devices at each component of an IoT environment, where each device performs Machine Learning-based intrusion detection at the network level. In this work we describe and train different lightweight ML models, which are tested on Raspberry Pi and FPGA boards. The performance of these classifiers in detecting benign and malicious traffic is presented and compared using response time, accuracy, precision, recall, F1-score and ROC-AUC metrics.
    The aim of this work is to test these machine learning models on recent datasets in order to find the best-performing ones, which can be used for intrusion defense over IoT environments characterized by high flexibility, easy installation and efficiency. The obtained accuracy is above 0.99 for several models, indicating that the proposed system can add a remarkable layer of security. We show how Machine Learning applied to small low-cost devices is an efficient and versatile combination with a bright future ahead.
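    The evaluation metrics listed in the abstract (accuracy, precision, recall, F1-score) all derive from the same confusion matrix. A minimal, dependency-free sketch of how such metrics are computed for a binary IDS classifier (labels and sample data below are illustrative, not from the thesis):

    ```python
    def classification_metrics(y_true, y_pred):
        """Accuracy, precision, recall and F1 for binary labels
        (1 = malicious traffic, 0 = benign), computed from the
        four confusion-matrix counts."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        accuracy = (tp + tn) / len(y_true)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return {"accuracy": accuracy, "precision": precision,
                "recall": recall, "f1": f1}

    # Toy example: 6 flows, one missed attack (false negative)
    # and one benign flow flagged as malicious (false positive).
    m = classification_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
    ```

    ROC-AUC additionally requires the classifier's scores rather than hard labels, so it is omitted from this sketch.
    
    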

    Accelerating Time Series Analysis via Processing using Non-Volatile Memories

    Time Series Analysis (TSA) is a critical workload for consumer-facing devices. Accelerating TSA is vital for many domains, as it enables the extraction of valuable information and the prediction of future events. The state-of-the-art algorithm in TSA is the subsequence Dynamic Time Warping (sDTW) algorithm. However, sDTW's computational complexity increases quadratically with the time series' length, resulting in two performance implications. First, the amount of data parallelism available is significantly higher than the small number of processing units provided by commodity systems (e.g., CPUs). Second, sDTW is bottlenecked by memory because it 1) has low arithmetic intensity and 2) incurs a large memory footprint. To tackle these two challenges, we leverage Processing-using-Memory (PuM) by performing in-situ computation where data resides, using the memory cells. PuM provides a promising solution to alleviate data movement bottlenecks and exposes immense parallelism. In this work, we present MATSA, the first MRAM-based Accelerator for Time Series Analysis. The key idea is to exploit magneto-resistive memory crossbars to enable energy-efficient and fast time series computation in memory. MATSA provides the following key benefits: 1) it leverages high levels of parallelism in the memory substrate by exploiting column-wise arithmetic operations, and 2) it significantly reduces data movement costs by performing computation using the memory cells. We evaluate three versions of MATSA to match the requirements of different environments (e.g., embedded, desktop, or HPC computing) based on MRAM technology trends. We perform a design space exploration and demonstrate that our HPC version of MATSA can improve performance by 7.35x/6.15x/6.31x and energy efficiency by 11.29x/4.21x/2.65x over server CPU, GPU and PNM architectures, respectively.
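    The quadratic cost the abstract refers to is visible in the classic sDTW dynamic program: an n-by-m cost matrix where, unlike plain DTW, the match may start and end anywhere in the reference. A minimal software sketch (the accelerator itself computes this column-wise in MRAM crossbars; this is only the reference algorithm, with an absolute-difference cost assumed for illustration):

    ```python
    def sdtw_min_cost(query, reference):
        """Subsequence DTW: minimum warping cost of aligning `query`
        against any contiguous subsequence of `reference`.
        Time is O(len(query) * len(reference)) -- the quadratic
        complexity that motivates in-memory acceleration."""
        n, m = len(query), len(reference)
        INF = float("inf")
        # Row 0 of the DP matrix is all zeros, so the aligned
        # subsequence may start at any position in the reference.
        prev = [0.0] * (m + 1)
        for i in range(1, n + 1):
            curr = [INF] * (m + 1)
            for j in range(1, m + 1):
                cost = abs(query[i - 1] - reference[j - 1])
                # Standard DTW recurrence: match, insertion, deletion.
                curr[j] = cost + min(prev[j], curr[j - 1], prev[j - 1])
            prev = curr
        # The subsequence may also end anywhere in the reference.
        return min(prev[1:])

    # The query [1, 2, 3] occurs exactly inside the reference,
    # so the minimum warping cost is 0.
    cost = sdtw_min_cost([1, 2, 3], [5, 5, 1, 2, 3, 5])
    ```
    
    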

    SYSTEM-ON-A-CHIP (SOC)-BASED HARDWARE ACCELERATION FOR HUMAN ACTION RECOGNITION WITH CORE COMPONENTS

    Today, the implementation of machine vision algorithms on embedded platforms or in portable systems is growing rapidly due to the demand for machine vision in daily human life. Among the applications of machine vision, human action and activity recognition has become an active research area, and market demand for integrated smart security systems is growing rapidly. Among the available approaches, embedded vision is in the top tier; however, current embedded platforms may not be able to fully exploit the potential performance of machine vision algorithms, especially in terms of low power consumption. Complex algorithms can impose immense computation and communication demands, especially action recognition algorithms, which require various stages of preprocessing, processing and machine learning blocks that need to operate concurrently. The market demands embedded platforms that operate with a power consumption of only a few watts. Attempts have been made to improve the performance of traditional embedded approaches by adding more powerful processors; this solution may solve the computation problem but increases the power consumption. System-on-a-chip field-programmable gate arrays (SoC-FPGAs) have emerged as a major architectural approach for improving power efficiency while increasing computational performance. In a SoC-FPGA, an embedded processor and an FPGA serving as an accelerator are fabricated on the same die to simultaneously improve power consumption and performance. Still, current SoC-FPGA-based vision implementations either shy away from supporting complex and adaptive vision algorithms or operate at very limited resolutions due to the immense communication and computation demands. The aim of this research is to develop a SoC-based hardware acceleration workflow for the realization of advanced vision algorithms. Hardware acceleration can improve performance for highly complex mathematical calculations or repeated functions.
    The performance of a SoC system can thus be improved by using hardware acceleration to speed up the element that incurs the highest performance overhead. The outcome of this research could be used for the implementation of various vision algorithms, such as face recognition, object detection or object tracking, on embedded platforms. The contributions of SoC-based hardware acceleration for hardware-software codesign platforms include the following: (1) development of frameworks for complex human action recognition in both 2D and 3D; (2) realization of a framework with four main implemented IPs, namely, foreground and background subtraction (foreground probability), human detection, 2D/3D point-of-interest detection and feature extraction, and OS-ELM as a machine learning algorithm for action identification; (3) use of an FPGA-based hardware acceleration method to resolve system bottlenecks and improve system performance; and (4) measurement and analysis of system specifications, such as the acceleration factor, power consumption, and resource utilization. Experimental results show that the proposed SoC-based hardware acceleration approach provides better performance in terms of the acceleration factor, resource utilization and power consumption than all recent works. In addition, a comparison of the accuracy of the framework running on the proposed embedded platform (SoC-FPGA) with the accuracy of other PC-based frameworks shows that the proposed approach outperforms most other approaches.
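    The foreground-probability IP listed among the contributions is, in software terms, a background-subtraction stage. A minimal per-pixel sketch of the common running-average variant is shown below; the thesis does not specify its exact method, and the `alpha` and `threshold` values here are illustrative, not taken from the work:

    ```python
    def foreground_step(background, frame, alpha=0.05, threshold=25.0):
        """One step of running-average background subtraction, a simple
        software model of a foreground-probability stage. Pixels are
        flat lists of intensities; returns the updated background and
        a per-pixel foreground mask. (alpha and threshold are
        illustrative assumptions, not values from the thesis.)"""
        # A pixel is foreground when it differs strongly from the model.
        mask = [abs(f - b) > threshold for f, b in zip(frame, background)]
        # The background model drifts slowly toward the current frame.
        new_bg = [(1 - alpha) * b + alpha * f
                  for f, b in zip(frame, background)]
        return new_bg, mask

    # A static pixel stays background; a large jump is flagged foreground.
    bg = [100.0, 100.0]
    bg, mask = foreground_step(bg, [101.0, 200.0])
    # mask -> [False, True]
    ```

    Stages like this are good offload candidates for the FPGA fabric because the per-pixel operations are independent and can be pipelined at line rate.
    
    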

    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications in the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview of the book by positioning the following chapters in terms of their contributions to technology frameworks which are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is then arranged in two parts. The first part, “Technologies and Methods”, contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, “Processes and Applications”, details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the European data community's nucleus to bring together businesses with leading researchers to harness the value of data to benefit society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI; second, practitioners and industry experts engaged in data-driven systems, software design and deployment projects who are interested in employing these advanced methods to address real-world problems.
