
    Smartphone-based real-time object recognition architecture for portable and constrained systems

    Machine learning algorithms based on convolutional neural networks (CNNs) have recently been explored in a myriad of object detection applications. Nonetheless, many devices with limited computational resources and strict power consumption constraints are not suitable to run such algorithms, which are designed for high-performance computers. Hence, a novel smartphone-based architecture intended for portable and constrained systems is designed and implemented to run CNN-based object recognition in real time and with high efficiency. The system is designed and optimised by integrating the strongest elements of state-of-the-art machine learning platforms, including OpenCV, TensorFlow Lite, and Qualcomm Snapdragon, informed by empirical testing and evaluation of each candidate framework in a comparable scenario with a highly demanding neural network. The final system has been prototyped combining the strengths of these frameworks, leading to a new machine learning-based object recognition execution environment embedded in a smartphone with advantageous performance compared with the previous frameworks.
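
    A minimal sketch of the kind of single-frame on-device inference loop such an architecture runs, using the TensorFlow Lite Python interpreter. The model file name and latency measurement are illustrative assumptions; the paper's actual prototype and its Snapdragon/OpenCV integration are not reproduced here.

    # Hedged sketch: one CNN inference pass with TensorFlow Lite.
    import time
    import numpy as np
    import tflite_runtime.interpreter as tflite  # on desktop: tf.lite.Interpreter

    # Placeholder model file, not the network from the paper.
    interpreter = tflite.Interpreter(model_path="mobilenet_v1_quant.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Stand-in for a camera frame, matching the model's expected input shape/dtype.
    frame = np.random.randint(0, 256, size=inp["shape"]).astype(inp["dtype"])

    start = time.perf_counter()
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])
    print(f"top class: {scores.argmax()}, latency: {(time.perf_counter() - start) * 1e3:.1f} ms")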

    Efficient CNN-based low-resolution facial detection from UAVs

    Face detection in UAV imagery requires high accuracy and low execution time for real-time mission-critical operations in public safety, emergency management, disaster relief and other applications. This study presents UWS-YOLO, a new convolutional neural network (CNN)-based machine learning algorithm designed to address these demanding requirements. UWS-YOLO's key strengths lie in its exceptional speed, remarkable accuracy and ability to handle complex UAV operations. This algorithm presents a balanced and portable solution for real-time face detection in UAV applications. Evaluation and comparison with state-of-the-art algorithms using standard and UAV-specific datasets demonstrate UWS-YOLO's superiority: it achieves 59.29% accuracy, compared with 27.43% for the state-of-the-art RetinaFace and 46.59% for YOLOv7. Additionally, UWS-YOLO runs at 11 milliseconds per frame, which is 345% faster than RetinaFace and 373% faster than YOLOv7.
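
    The reported speedups follow from per-frame latencies as (baseline - ours) / ours. The sketch below back-computes baseline latencies consistent with the stated ratios; the RetinaFace and YOLOv7 figures are inferred assumptions for illustration, not measurements from the paper.

    # Relative speedup as reported: (baseline_ms - ours_ms) / ours_ms * 100.
    latencies_ms = {"UWS-YOLO": 11.0, "RetinaFace": 49.0, "YOLOv7": 52.0}  # last two inferred

    ours = latencies_ms["UWS-YOLO"]
    for name, ms in latencies_ms.items():
        if name != "UWS-YOLO":
            print(f"UWS-YOLO is {(ms - ours) / ours * 100:.0f}% faster than {name}")
    # -> 345% faster than RetinaFace, 373% faster than YOLOv7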

    Machine learning-based carbon dioxide concentration prediction for hybrid vehicles

    The current understanding of CO2 emission concentrations in hybrid vehicles (HVs) is limited, due to the complexity of the constant changes in their powertrain sources. This study aims to address this problem by examining the accuracy, speed and size of traditional and advanced machine learning (ML) models for predicting CO2 emissions in HVs. A new long short-term memory (LSTM)-based model called UWS-LSTM has been developed to overcome the deficiencies of existing models. The collected dataset includes more than 20 parameters, and extensive input-feature optimisation has been conducted to determine the most effective parameters. The results indicate that the UWS-LSTM model outperforms traditional ML and artificial neural network (ANN)-based models by achieving 97.5% accuracy. Furthermore, to demonstrate the efficiency of the proposed model, the CO2-concentration predictor has been implemented in a low-powered IoT device embedded in a commercial HV, resulting in rapid predictions with an average latency of 21.64 ms per prediction. The proposed algorithm is fast, accurate and computationally efficient, and it is anticipated that it will make a significant contribution to the field of smart vehicle applications.
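
    A minimal Keras sketch of an LSTM regressor of this general shape, assuming windows of the ~20 collected parameters per time step. The layer sizes, window length and the TensorFlow Lite conversion route are assumptions, since the abstract does not specify UWS-LSTM's architecture or deployment path.

    # Hedged sketch: LSTM regression over windows of vehicle telemetry.
    import numpy as np
    import tensorflow as tf

    TIMESTEPS, N_FEATURES = 30, 20  # assumed window length and feature count

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(TIMESTEPS, N_FEATURES)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),  # predicted CO2 concentration
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])

    # Synthetic stand-in data; real training would use logged HV telemetry.
    X = np.random.rand(256, TIMESTEPS, N_FEATURES).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)

    # One plausible route to the low-powered IoT deployment described
    # (the paper's exact path is not stated): convert to TensorFlow Lite.
    tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()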

    Smartphone-based object recognition with embedded machine learning intelligence for unmanned aerial vehicles

    Existing artificial intelligence solutions typically operate on powerful platforms with high computational resource availability. However, a growing number of emerging use cases, such as those based on unmanned aerial systems (UAS), require new solutions with embedded artificial intelligence on a highly mobile platform. This paper proposes an innovative UAS that explores machine learning (ML) capabilities in a smartphone-based mobile platform for object detection and recognition applications. A new system framework tailored to this challenging use case is designed, with a customized workflow specified. Furthermore, the design of the embedded ML leverages TensorFlow, a cutting-edge open-source ML framework. The prototype of the system integrates all the architectural components in a fully functional system, and it is suitable for real-world operational environments such as search and rescue use cases. Experimental results validate the design and prototyping of the system and demonstrate an overall improved performance compared with the state of the art in terms of a wide range of metrics.
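
    One recurring step in such a detection workflow is parsing the detector's raw output tensors into usable detections. The sketch below follows the common TFLite SSD postprocess convention (boxes, classes, scores, count); this convention is an assumption here, as the paper does not state its exact model outputs.

    # Hedged sketch: thresholding SSD-style detection outputs.
    import numpy as np

    def parse_detections(boxes, classes, scores, count, min_score=0.5):
        """Keep detections above a confidence threshold.

        boxes:   [1, N, 4] normalized [ymin, xmin, ymax, xmax]
        classes: [1, N] class indices
        scores:  [1, N] confidences
        count:   [1] number of valid detections
        """
        results = []
        for i in range(int(count[0])):
            if scores[0, i] >= min_score:
                results.append((int(classes[0, i]), float(scores[0, i]), boxes[0, i].tolist()))
        return results

    # Toy tensors standing in for interpreter.get_tensor(...) outputs.
    boxes = np.array([[[0.1, 0.2, 0.4, 0.5], [0.0, 0.0, 1.0, 1.0]]])
    classes = np.array([[0, 16]])
    scores = np.array([[0.91, 0.35]])
    count = np.array([2.0])
    print(parse_detections(boxes, classes, scores, count))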

    Illumination-aware image fusion for around-the-clock human detection in adverse environments from unmanned aerial vehicle

    This study proposes a novel illumination-aware image fusion technique and a Convolutional Neural Network (CNN) called BlendNet to significantly enhance the robustness and real-time performance of small-human-object detection from Unmanned Aerial Vehicles (UAVs) in harsh and adverse operating environments. The proposed solution is particularly useful for mission-critical public safety applications such as search and rescue operations in rural areas. Such missions operate in poor illumination conditions against complex backgrounds such as dense vegetation and undergrowth, in diverse weather, and must detect humans from UAVs at high altitude, from a moving platform and from various viewing angles. To overcome these challenges, the proposed solution registers and fuses the images using the Enhanced Correlation Coefficient (ECC) and arithmetic image addition with customized weights. The fused result feeds our new BlendNet model, which achieves 95.01% accuracy at 42.2 Frames Per Second (FPS) on a Titan X GPU with an input size of 608 pixels. The effectiveness of the proposed fusion method has been evaluated and compared with other methods using the KAIST public dataset. The experimental results show competitive performance of BlendNet in terms of both visual quality and quantitative assessment, with high detection accuracy at high speed.
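
    OpenCV exposes both building blocks named here: ECC-based registration (cv2.findTransformECC) and weighted arithmetic addition (cv2.addWeighted). A minimal sketch, assuming grayscale visible/thermal frames and placeholder 0.6/0.4 weights; the paper's illumination-aware weighting scheme is not detailed in the abstract.

    # Hedged sketch: ECC registration followed by weighted-addition fusion.
    import cv2
    import numpy as np

    visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
    thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)

    # Estimate a Euclidean warp aligning the thermal frame to the visible frame.
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(visible, thermal, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria, None, 5)
    aligned = cv2.warpAffine(thermal, warp, (visible.shape[1], visible.shape[0]),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

    # Illumination-aware blend: in the described method the weights would come
    # from the measured illumination level, not the fixed values used here.
    fused = cv2.addWeighted(visible, 0.6, aligned, 0.4, 0)
    cv2.imwrite("fused.png", fused)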