
    Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations

    Our study aims to review and analyze the most relevant studies in the image dehazing field. Several aspects are deemed necessary to provide a broad understanding of the surveyed literature: the datasets used, the challenges other researchers have faced, the motivations, and recommendations for diminishing the obstacles reported in the literature. A systematic protocol is employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search covers three online databases, namely IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), from 2008 to 2021; these indices are selected because their coverage is sufficient. After applying the inclusion and exclusion criteria, 152 articles form the final set. A total of 55 of the 152 articles focus on studies that conducted image dehazing, and 13 of the 152 are review papers based on scenarios and general overviews. Most of the included articles (84/152) center on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique; it requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets covering different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We also conducted an objective image quality assessment comparing various image dehazing algorithms experimentally. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas. We believe that the results of this study can serve as a useful guideline for practitioners looking for a comprehensive view of image dehazing.
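As an illustrative aside (not from the paper itself), the objective image quality assessment mentioned in the abstract is commonly done with full-reference metrics such as PSNR. The sketch below assumes a haze-free reference image is available; the function name `psnr` is our own:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a haze-free reference and a dehazed result."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: distortion-free restoration
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 8-bit "images": a perfect restoration scores infinitely high,
# a uniformly offset one scores lower.
ref = np.full((4, 4), 128, dtype=np.uint8)
good = ref.copy()
bad = ref + 10
print(psnr(ref, good))  # inf
print(psnr(ref, bad))   # ~28.13 dB
```

In practice PSNR is usually reported alongside SSIM, since pixel-wise error alone correlates weakly with perceived haze removal.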

    Fuzzy Logic in Surveillance Big Video Data Analysis: Comprehensive Review, Challenges, and Research Directions

    CCTV cameras installed for continuous surveillance generate enormous amounts of data daily, forging the term “Big Video Data” (BVD). The active practice of BVD includes intelligent surveillance and activity recognition, among other challenging tasks. To address these tasks efficiently, the computer vision research community has provided monitoring systems, activity recognition methods, and many other computationally complex solutions for the purposeful usage of BVD. Unfortunately, the limited capabilities of these methods, their high computational complexity, and their stringent installation requirements hinder practical implementation in real-world scenarios, which still demand human operators sitting in front of cameras to monitor activities or make actionable decisions based on BVD. Human-like logic, known as fuzzy logic, has been increasingly employed for various data science applications such as control systems, image processing, decision making, routing, and advanced safety-critical systems. This is due to its ability to handle various sources of real-world domain and data uncertainties, generating easily adaptable and explainable data-based models. Fuzzy logic can be effectively used for surveillance as a complement to huge artificial intelligence models and their tiresome training procedures. In this paper, we draw researchers’ attention towards the usage of fuzzy logic for surveillance in the context of BVD. We carry out a comprehensive literature survey of methods for vision sensory data analytics that resort to fuzzy logic concepts. Our overview highlights the advantages, downsides, and challenges in existing video analysis methods based on fuzzy logic for surveillance applications. We enumerate and discuss the datasets used by these methods, and finally provide an outlook towards future research directions derived from our critical assessment of the efforts invested so far in this exciting field.
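To make the fuzzy logic idea concrete, here is a minimal, hypothetical sketch of one Mamdani-style surveillance rule. The linguistic variables (`motion`, `crowd`), membership-function parameters, and the rule itself are illustrative assumptions, not taken from any surveyed system:

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def alert_strength(motion: float, crowd: float) -> float:
    """One fuzzy rule: IF motion is HIGH AND crowd is LOW THEN raise an alert.
    Fuzzy AND is modelled as min; the result is the rule's firing strength in [0, 1]."""
    motion_high = tri(motion, 0.4, 1.0, 1.6)
    crowd_low = tri(crowd, -0.6, 0.0, 0.6)
    return min(motion_high, crowd_low)

# Strong motion in a near-empty scene fires the rule strongly (~0.83),
# giving a graded, explainable score rather than a hard yes/no decision.
print(alert_strength(0.9, 0.1))
```

The graded output is what makes such rules easy to explain and to tune, in contrast to a hard classifier threshold.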

    An Efficient Deep Learning Framework for Intelligent Energy Management in IoT Networks

    [EN] Green energy management is an economical solution for better energy usage, but the existing literature pays little attention to the potential of edge intelligence in the controllable Internet of Things (IoT). Therefore, in this article, we focus on the requirements of today's smart grids, homes, and industries to propose a deep-learning-based framework for intelligent energy management. We predict future energy consumption over short time intervals and provide an efficient way of communication between energy distributors and consumers. The key contributions include edge-device-based real-time energy management via a common cloud-based data supervising server, optimal normalization technique selection, and a novel sequence-learning-based energy forecasting mechanism with reduced time complexity and the lowest error rates. In the proposed framework, edge devices connect to a common cloud server in an IoT network that communicates with the associated smart grids to effectively manage the energy demand and response process. We apply several preprocessing techniques to deal with the diverse nature of electricity data, followed by an efficient decision-making algorithm for short-term forecasting that we implement on resource-constrained devices. In extensive experiments, we observe reductions of 0.15 and 3.77 units in mean-square error (MSE) and root MSE (RMSE) on residential and commercial datasets, respectively.

    This work was supported in part by the National Research Foundation of Korea Grant Funded by the Korea Government (MSIT) under Grant 2019M3F2A1073179; in part by the "Ministerio de Economía y Competitividad" in the "Programa Estatal de Fomento de la Investigación Científica y Técnica de Excelencia, Subprograma Estatal de Generación de Conocimiento" within the Project under Grant TIN2017-84802-C2-1-P; and in part by the European Union through the ERANETMED (Euromediterranean Cooperation through ERANET Joint Activities and Beyond) Project ERANETMED3-227 SMARTWATIR.

    Han, T.; Muhammad, K.; Hussain, T.; Lloret, J.; Baik, S. W. (2021). An Efficient Deep Learning Framework for Intelligent Energy Management in IoT Networks. IEEE Internet of Things Journal, 8(5), 3170-3179. https://doi.org/10.1109/JIOT.2020.3013306
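As a hedged sketch of the preprocessing and short-term-forecasting steps the abstract describes (not the authors' actual model, which is sequence-learning based), the example below pairs min-max normalization with a deliberately naive moving-average predictor; the data and function names are hypothetical:

```python
import numpy as np

def min_max_normalize(series: np.ndarray):
    """Scale readings into [0, 1]; return the (lo, hi) pair needed to invert it."""
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo), (lo, hi)

def moving_average_forecast(series: np.ndarray, window: int = 3) -> float:
    """Naive short-term forecast: next value = mean of the last `window` readings."""
    return float(np.mean(series[-window:]))

load = np.array([3.1, 2.9, 3.4, 3.8, 3.6])  # hypothetical hourly kWh readings
norm, (lo, hi) = min_max_normalize(load)
print(moving_average_forecast(load))  # (3.4 + 3.8 + 3.6) / 3 = 3.6
```

A real deployment would replace the moving average with a trained sequence model, but the normalize-then-predict pipeline shape is the same.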

    Accelerating temporal action proposal generation via high performance computing

    Temporal action recognition depends on temporal action proposal generation to hypothesize candidate actions, and algorithms usually need to process very long video sequences and output the starting and ending times of each potential action in each video, which incurs a high computational cost. To address this, building on the boundary-sensitive network, we propose a new temporal convolution network called Multipath Temporal ConvNet (MTN), which consists of two parts, i.e., Multipath DenseNet and SE-ConvNet. In this work, a novel high-performance ring parallel architecture based on the Message Passing Interface (MPI), a reliable communication protocol, is further introduced into temporal action proposal generation in order to meet the requirements of large memory occupation and large numbers of videos. Remarkably, total data transmission is reduced by connecting the computing nodes to one another in the newly developed architecture. Compared to the traditional Parameter Server architecture, our parallel architecture achieves higher efficiency on the temporal action detection task with multiple GPUs, making it suitable for temporal action proposal generation, especially on large datasets of millions of videos. We conduct experiments on ActivityNet-1.3 and THUMOS14, where our method outperforms other state-of-the-art temporal action detection methods with high recall and high temporal precision. In addition, a time metric is proposed to evaluate the speed performance of the distributed training process.

    Comment: 11 pages, 12 figures
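The communication-saving idea behind a ring architecture can be sketched schematically. The pure-Python simulation below is an assumption-laden toy (no MPI, no real network): each worker passes a running partial sum to its neighbour, so every link carries one message per step instead of all workers sending gradients to a central parameter server:

```python
def ring_allreduce(values):
    """Simulate an all-reduce around a ring of workers.
    Reduce phase: a running sum travels the ring once (n-1 hops).
    Broadcast phase: the total travels the ring once more (n-1 hops).
    Each hop moves one message over one link, unlike a parameter server
    where all n workers send to (and receive from) a single node."""
    n = len(values)
    acc = values[0]
    for i in range(1, n):          # hop i: worker i adds its value and forwards
        acc = acc + values[i]
    return [acc] * n               # broadcast: every worker ends with the total

print(ring_allreduce([1.0, 2.0, 3.0, 4.0]))  # every worker ends with 10.0
```

A production implementation (e.g. via mpi4py or NCCL) would additionally chunk the gradient tensor so all links are busy simultaneously; the toy above only shows the message-flow pattern.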

    Deep Learning for Safe Autonomous Driving: Current Challenges and Future Directions

    [EN] Advances in information and signal processing technologies have a significant impact on autonomous driving (AD), improving driving safety while minimizing the efforts of human drivers with the help of advanced artificial intelligence (AI) techniques. Recently, deep learning (DL) approaches have solved several real-world problems of a complex nature. However, their strengths in terms of control processes for AD have not yet been deeply investigated and highlighted. This survey highlights the power of DL architectures in terms of reliability and efficient real-time performance and overviews state-of-the-art strategies for safe AD, with their major achievements and limitations. Furthermore, it covers major embodiments of DL along the AD pipeline, including measurement, analysis, and execution, with a focus on road, lane, vehicle, pedestrian, and drowsiness detection, collision avoidance, and traffic sign detection through sensing and vision-based DL methods. In addition, we discuss the performance of several reviewed methods using different evaluation metrics, along with critiques of their pros and cons.
    Finally, this survey highlights the current issues of safe DL-based AD and offers recommendations for future research, rounding up a reference material for newcomers and researchers willing to join this vibrant area of Intelligent Transportation Systems.

    This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) Grant funded by the Korea Government (MSIT) (2019-0-00136, Development of AI-Convergence Technologies for Smart City Industry Productivity Innovation). The work of Javier Del Ser was supported by the Basque Government through the EMAITEK and ELKARTEK Programs, as well as by the Department of Education of this institution (Consolidated Research Group MATHMODE, IT1294-19). VHCA received support from the Brazilian National Council for Research and Development (CNPq, Grant #304315/2017-6 and #430274/2018-1).

    Muhammad, K.; Ullah, A.; Lloret, J.; Del Ser, J.; De Albuquerque, V. H. C. (2021). Deep Learning for Safe Autonomous Driving: Current Challenges and Future Directions. IEEE Transactions on Intelligent Transportation Systems, 22(7), 4316-4336. https://doi.org/10.1109/TITS.2020.3032227
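One evaluation metric underlying nearly all the detection tasks this survey covers (vehicle, pedestrian, traffic sign) is intersection-over-union. As an illustrative sketch with hypothetical boxes, not a construct from the survey itself:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    A detection is typically counted as a true positive when its IoU with
    a ground-truth box exceeds a threshold such as 0.5."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1, union 7 -> 1/7 ≈ 0.143
```

Metrics such as mAP are then built on top of IoU-thresholded matches, which is why the threshold choice matters when comparing reported numbers across papers.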

    Assessing thermal imagery integration into object detection methods on ground-based and air-based collection platforms

    Object detection models commonly deployed on uncrewed aerial systems (UAS) focus on identifying objects in the visible spectrum using Red-Green-Blue (RGB) imagery. However, there is growing interest in fusing RGB with thermal long-wave infrared (LWIR) images to increase the performance of object detection machine learning (ML) models. Currently, LWIR ML models have received less research attention, especially for ground- and air-based platforms, leading to a lack of baseline performance metrics for LWIR, RGB, and LWIR-RGB fused object detection models. This research contributes such quantitative metrics to the literature. The results show that the ground-based blended RGB-LWIR model exhibits superior performance compared to the RGB-only or LWIR-only approaches, achieving a mAP of 98.4%. Additionally, the blended RGB-LWIR model is the only object detection model to work in both day and night conditions, providing superior operational capabilities. This research additionally contributes a novel labelled training dataset of 12,600 images of RGB, LWIR, and RGB-LWIR fused imagery, collected from ground-based and air-based platforms, enabling further multispectral machine-driven object detection research.

    Comment: 18 pages, 12 figures, 2 tables
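At its simplest, RGB-LWIR blending can be illustrated as a pixel-level weighted sum of co-registered frames. This is a hedged sketch of one possible fusion scheme, not the paper's actual pipeline; `blend_rgb_lwir` and the alpha weighting are our assumptions:

```python
import numpy as np

def blend_rgb_lwir(rgb: np.ndarray, lwir: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Weighted blend of an RGB frame with a co-registered single-channel
    LWIR frame; the thermal channel is broadcast across the colour channels.
    alpha weights the visible spectrum, (1 - alpha) the thermal one."""
    lwir3 = np.repeat(lwir[..., np.newaxis], 3, axis=-1)  # H x W -> H x W x 3
    fused = alpha * rgb.astype(np.float64) + (1.0 - alpha) * lwir3.astype(np.float64)
    return fused.clip(0, 255).round().astype(np.uint8)

# Toy 2x2 frames: uniform visible intensity 100, thermal intensity 200.
rgb = np.full((2, 2, 3), 100, dtype=np.uint8)
lwir = np.full((2, 2), 200, dtype=np.uint8)
print(blend_rgb_lwir(rgb, lwir)[0, 0])  # 0.6*100 + 0.4*200 -> [140 140 140]
```

Real systems must first register the two sensors (their fields of view and resolutions differ), which is usually the hard part of the fusion pipeline.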

    AI-enabled remote monitoring of vital signs for COVID-19: methods, prospects and challenges

    The COVID-19 pandemic has overwhelmed the existing healthcare infrastructure in many parts of the world. Healthcare professionals are not only over-burdened but also at a high risk of nosocomial transmission from COVID-19 patients. Screening and monitoring the health of a large number of susceptible or infected individuals is a challenging task. Although professional medical attention and hospitalization are necessary for high-risk COVID-19 patients, home isolation is an effective strategy for low- and medium-risk patients, as well as for those who are at risk of infection and have been quarantined. However, this necessitates effective techniques for remotely monitoring the patients’ symptoms. Recent advances in Machine Learning (ML) and Deep Learning (DL) have strengthened the power of imaging techniques and can be used to remotely perform several tasks that previously required the physical presence of a medical professional. In this work, we study the prospects of vital signs monitoring for COVID-19 infected as well as quarantined individuals by using DL and image/signal-processing techniques, many of which can be deployed using simple cameras and sensors available on a smartphone or a personal computer, without the need for specialized equipment. We demonstrate the potential of ML-enabled workflows for several vital signs such as heart and respiratory rates, cough, blood pressure, and oxygen saturation. We also discuss the challenges involved in implementing ML-enabled techniques.
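Camera-based heart-rate estimation (remote photoplethysmography) typically reduces to finding the dominant frequency of a skin-intensity signal. The sketch below is a hedged illustration of that core step on a synthetic signal, assuming a clean, pre-extracted intensity trace; it is not a clinical tool and not the paper's method:

```python
import numpy as np

def estimate_heart_rate(signal: np.ndarray, fps: float,
                        lo: float = 0.7, hi: float = 3.0) -> float:
    """Estimate heart rate (beats/min) as the dominant frequency of a
    camera-derived intensity signal, restricted to a plausible band
    (0.7-3.0 Hz, i.e. about 42-180 bpm)."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fps = 30.0
t = np.arange(0, 10, 1 / fps)      # 10 s of samples at 30 frames/s
sig = np.sin(2 * np.pi * 1.2 * t)  # synthetic 1.2 Hz pulse -> 72 bpm
print(estimate_heart_rate(sig, fps))  # ≈ 72.0
```

Real rPPG pipelines spend most of their effort before this step, on face tracking, skin-region selection, and motion/illumination artifact suppression.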