23 research outputs found

    LIDAR and beam steering tailored by neuromorphic metasurfaces dipped in a tunable surrounding medium

    The control of the amplitude, losses and deflection of light with the elements of an optical array is of paramount importance for realizing dynamic beam steering in light detection and ranging (LIDAR) applications. In this paper, we propose an optical beam steering device, operating at a wavelength of 1550 nm, based on a high-index material such as molybdenum disulfide (MoS2), in which the direction of the light is actively controlled by means of a liquid crystal. The metasurface has been designed with a deep machine learning algorithm coupled with an optimizer in order to obtain unambiguous optical responses. The achieved numerical results represent a promising route towards novel LIDARs with increased control and precision for future applications.
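    The abstract describes the design approach only at a high level; the sketch below is a hypothetical Python illustration of a surrogate-plus-optimizer inverse-design loop of that general kind. The toy forward_solver function, the design parameters, and the target angle are invented placeholders, not the authors' model.
        # Hypothetical sketch: fit a neural surrogate of an electromagnetic solver,
        # then let a global optimizer search the design space for a target deflection.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(0)

        def forward_solver(x):
            # Toy stand-in for a full-wave simulation:
            # (pillar width, pillar height, LC refractive index) -> deflection angle (deg).
            w, h, n_lc = x
            return 20.0 * np.sin(3.0 * w) * h + 5.0 * (n_lc - 1.5)

        # 1) Sample designs and fit the surrogate.
        X = rng.uniform([0.1, 0.2, 1.5], [0.9, 1.0, 1.8], size=(500, 3))
        y = np.array([forward_solver(x) for x in X])
        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                 random_state=0).fit(X, y)

        # 2) Ask the optimizer for a design that realizes a target deflection angle.
        target_angle = 12.0
        objective = lambda x: (surrogate.predict([x])[0] - target_angle) ** 2
        result = differential_evolution(objective,
                                        bounds=[(0.1, 0.9), (0.2, 1.0), (1.5, 1.8)],
                                        seed=0)
        print("candidate design:", result.x)
        print("predicted deflection:", surrogate.predict([result.x])[0], "deg")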

    Machine Learning Algorithms for Smart Data Analysis in Internet of Things Environment: Taxonomies and Research Trends

    Machine learning techniques will contribute towards making Internet of Things (IoT) symmetric applications among the most significant sources of new data in the future. In this context, network systems are endowed with the capacity to access varieties of experimental symmetric data across a plethora of network devices, study the data, obtain knowledge, and make informed decisions based on the dataset at their disposal. This study is limited to supervised and unsupervised machine learning (ML) techniques, regarded as the bedrock of IoT smart data analysis. It reviews and discusses substantial issues related to supervised and unsupervised machine learning techniques, highlights the advantages and limitations of each algorithm, and discusses research trends and recommendations for further study.
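    As a concrete, if toy, illustration of the supervised/unsupervised split around which the survey is organized, the Python snippet below contrasts a labelled classifier with an unlabelled clustering run on a standard scikit-learn dataset; it uses generic example data, not IoT data from the study.
        # Supervised vs. unsupervised learning on a toy dataset (illustrative only).
        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.cluster import KMeans
        from sklearn.metrics import accuracy_score, adjusted_rand_score

        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Supervised: labels are available at training time.
        clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
        print("supervised accuracy:", accuracy_score(y_test, clf.predict(X_test)))

        # Unsupervised: structure is inferred without labels.
        clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
        print("clustering vs. true labels (ARI):", adjusted_rand_score(y, clusters))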

    Vision-Based Eye Image Classification for Ophthalmic Measurement Systems

    The accuracy and the overall performance of ophthalmic instrumentation, where specific analysis of eye images is involved, can be negatively influenced by invalid or incorrect frames acquired during everyday measurements of unaware or non-collaborative human patients and non-technical operators. Therefore, in this paper, we investigate and compare several vision-based classification algorithms belonging to different fields, i.e., Machine Learning, Deep Learning, and Expert Systems, in order to improve the performance of an ophthalmic instrument designed for Pupillary Light Reflex measurement. To test the implemented solutions, we collected and publicly released PopEYE, one of the first datasets of this kind, consisting of 15k eye images of 22 different subjects acquired through the aforementioned specialized ophthalmic device. Finally, we discuss the experimental results in terms of classification accuracy of the eye status, as well as computational load, since the proposed solution is designed to be implemented on embedded boards with limited computational power and memory.
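    The abstract does not specify the models; as a purely illustrative sketch of the kind of lightweight classifier that fits embedded constraints, the PyTorch snippet below defines a tiny two-class network for valid/invalid eye frames. The architecture, input size, and class count are assumptions, and loading of the PopEYE dataset is not shown.
        # Hypothetical lightweight eye-frame classifier sized for embedded deployment.
        import torch
        import torch.nn as nn

        class TinyEyeNet(nn.Module):
            def __init__(self, num_classes: int = 2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(16, num_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = TinyEyeNet()
        dummy_frame = torch.randn(1, 1, 96, 96)   # placeholder grayscale 96x96 eye image
        print("logits:", model(dummy_frame))
        print("parameter count:", sum(p.numel() for p in model.parameters()))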

    Video Image Enhancement and Machine Learning Pipeline for Underwater Animal Detection and Classification at Cabled Observatories

    [Correction of an affiliation in Sensors 2023, 23, 16. https://doi.org/10.3390/s23010016] An understanding of marine ecosystems and their biodiversity is relevant to the sustainable use of the goods and services they offer. Since marine areas host complex ecosystems, it is important to develop spatially widespread monitoring networks capable of providing large amounts of multiparametric information, encompassing both biotic and abiotic variables, and describing the ecological dynamics of the observed species. In this context, imaging devices are valuable tools that complement other biological and oceanographic monitoring devices. Nevertheless, large amounts of images or movies cannot all be manually processed, and autonomous routines for recognizing the relevant content, classification, and tagging are urgently needed. In this work, we propose a pipeline for the analysis of visual data that integrates video/image annotation tools for the definition, training, and validation of datasets with video/image enhancement and machine and deep learning approaches. Such a pipeline is required to achieve good performance in the recognition and classification of mobile and sessile megafauna, in order to obtain integrated information on spatial distribution and temporal dynamics. A prototype implementation of the analysis pipeline is provided in the context of deep-sea videos taken by one of the fixed cameras of the LoVe Ocean Observatory network off the Lofoten Islands (Norway), at 260 m depth in the Barents Sea; it has shown good classification results on an independent test dataset, with an accuracy of 76.18% and an area under the curve (AUC) of 87.59%. This work was developed within the framework of Tecnoterra (ICM-CSIC/UPC) and the following project activities: ARIM (Autonomous Robotic Sea-Floor Infrastructure for Benthopelagic Monitoring; MarTERA ERA-Net Cofund) and RESBIO (TEC2017-87861-R; Ministerio de Ciencia, Innovación y Universidades).
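    A minimal skeleton of such an enhancement-then-classification pipeline is sketched below in Python with OpenCV; the CLAHE enhancement step is generic, the classifier is a stub standing in for the trained model, and the video file name is hypothetical.
        # Illustrative pipeline skeleton: per-frame enhancement followed by classification.
        import cv2
        import numpy as np

        def enhance(frame_bgr: np.ndarray) -> np.ndarray:
            # Contrast-limited adaptive histogram equalization on the luminance channel.
            lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
            l, a, b = cv2.split(lab)
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

        def classify(frame_bgr: np.ndarray) -> str:
            # Placeholder for the trained megafauna classifier.
            return "background"

        video = cv2.VideoCapture("love_observatory_sample.mp4")  # hypothetical file name
        while True:
            ok, frame = video.read()
            if not ok:
                break
            label = classify(enhance(frame))
        video.release()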

    Towards QoS-Based Embedded Machine Learning

    Due to various breakthroughs and advancements in machine learning and computer architectures, machine learning models are beginning to proliferate through embedded platforms. These machine learning models cover a range of applications including computer vision, speech recognition, healthcare, industrial IoT, robotics and many more. However, there is a critical limitation in implementing ML algorithms efficiently on embedded platforms: the computational and memory expense of many machine learning models can make them unsuitable in resource-constrained environments. Therefore, to efficiently implement these memory-intensive and computationally expensive algorithms in an embedded computing environment, innovative resource management techniques are required at the hardware, software and system levels. To this end, we present a novel quality-of-service-based resource allocation scheme that uses feedback control to adjust compute resources dynamically, coping with the varying and unpredictable workloads of ML applications while still maintaining an acceptable level of service to the user. To evaluate the feasibility of our approach, we implemented a feedback control scheduling simulator that was used to analyze our framework under various simulated workloads. We also implemented our framework as a Linux kernel module running on a virtual machine as well as on a Raspberry Pi 4 single-board computer. Results illustrate that our approach was able to maintain a sufficient level of service without overloading the processor, while providing energy savings of almost 20% compared to the native resource management in Linux.
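    The following is a minimal sketch, under invented parameters, of the kind of feedback-control resource allocation the abstract describes: a proportional controller adjusts the CPU share granted to an ML workload so that measured latency tracks a quality-of-service set point. It is not the authors' controller or kernel-module implementation.
        # Toy proportional feedback loop: raise the CPU share when latency misses the
        # QoS target, shrink it (saving energy) when there is headroom.
        import random

        random.seed(0)

        TARGET_LATENCY_MS = 50.0   # quality-of-service set point
        KP = 0.005                 # proportional gain (illustrative)
        cpu_share = 0.5            # fraction of CPU currently granted to the ML workload

        def measure_latency(share: float) -> float:
            # Toy plant: latency falls as the CPU share rises, plus workload noise.
            return 30.0 / max(share, 0.05) + random.uniform(-5.0, 5.0)

        for step in range(20):
            latency = measure_latency(cpu_share)
            error = latency - TARGET_LATENCY_MS   # positive when the QoS target is missed
            cpu_share = min(1.0, max(0.05, cpu_share + KP * error))
            print(f"step {step:2d}  latency {latency:6.1f} ms  cpu share {cpu_share:.2f}")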

    On-Device Deep Learning Inference for System-on-Chip (SoC) Architectures

    As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for resource-constrained target architectures at the “edge”. The realization of machine learning, and deep learning, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which provides some alleviation of these constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove to be critical as these technologies move to the edge. In order to address some of these challenges, we present a resource management framework designed to provide a dynamic on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. These types of mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified the most time-critical applications, such as the control tasks, which maintained low-latency deterministic behavior even during off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
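    As a small, hedged illustration of the schedulability analysis mentioned above, the snippet below applies the classic Liu and Layland utilization bound for rate-monotonic scheduling to a made-up task set containing a control task and a deep learning inference task; the periods and execution times are placeholders, not the paper's task sets.
        # Sufficient schedulability test for rate-monotonic scheduling (Liu & Layland bound).
        def rm_utilization_bound(n: int) -> float:
            # Least upper bound on total utilization for n periodic tasks under RM.
            return n * (2 ** (1.0 / n) - 1)

        # (execution_time_ms, period_ms) for a control task, a DL inference task, and logging.
        tasks = [(2.0, 10.0), (15.0, 50.0), (5.0, 100.0)]
        utilization = sum(c / t for c, t in tasks)
        bound = rm_utilization_bound(len(tasks))
        print(f"utilization = {utilization:.3f}, RM bound = {bound:.3f}")
        print("schedulable by the sufficient test" if utilization <= bound
              else "bound exceeded: an exact (response-time) analysis is needed")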

    Mobility management-based autonomous energy-aware framework using machine learning approach in dense mobile networks

    Get PDF
    A paramount challenge is to deliver the Fifth Generation (5G) cellular capacity and connectivity demanded by network densification without increasing CO2 emissions, while maintaining a greener, healthier and more prosperous environment. Energy consumption is a pressing consideration in the 5G era, owing to several challenges such as the reactive mode of operation, high-latency wake-up times, incorrect user association with cells, and the multiple cross-functional operations of Self-Organising Networks (SON). To address this challenge, we propose a novel Mobility-Management-Based Autonomous Energy-Aware Framework that analyses bus passenger ridership through statistical Machine Learning (ML) and achieves proactive energy savings, together with reduced CO2 emissions, in a Heterogeneous Network (HetNet) architecture using Reinforcement Learning (RL). Furthermore, we compare and report various ML algorithms using bus passenger ridership data obtained from a London Overground (LO) dataset. Extensive spatiotemporal simulations show that our proposed framework can achieve up to 98.82% prediction accuracy and CO2 reduction gains of up to 31.83%.
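    A toy sketch of the two ingredients named in the abstract, load prediction feeding a reinforcement-learning sleep policy for small cells, is given below in Python with tabular Q-learning; the states, actions, rewards, and the random stand-in for the ridership predictor are all invented for illustration.
        # Tabular Q-learning over a two-state load model: sleep cells when load is low.
        import random

        random.seed(0)

        STATES = ["low_load", "high_load"]
        ACTIONS = ["sleep", "active"]
        Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
        ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

        def reward(state: str, action: str) -> float:
            # Sleeping a small cell saves energy but is penalized under high load.
            if action == "sleep":
                return 1.0 if state == "low_load" else -2.0
            return 0.2 if state == "high_load" else -0.5

        def predicted_state() -> str:
            # Stand-in for the ridership predictor's output.
            return random.choice(STATES)

        state = predicted_state()
        for _ in range(5000):
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            r = reward(state, action)
            s_next = predicted_state()
            best_next = max(Q[(s_next, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
            state = s_next

        for (s, a), v in Q.items():
            print(f"Q[{s}, {a}] = {v:.2f}")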