Realizing Video Analytic Service in the Fog-Based Infrastructure-Less Environments
Deep learning has unleashed great potential in many fields and is now the most significant facilitator for video analytics, owing to its capability to provide more intelligent services in complex scenarios. Meanwhile, the emergence of fog computing has brought unprecedented opportunities to provision intelligent services in infrastructure-less environments such as remote national parks and rural farms. However, most deep learning algorithms are computationally intensive and cannot be executed in such environments because they depend on support from the cloud. In this paper, we develop a video analytic framework tailored particularly for fog devices to realize video analytic services rapidly. Convolutional neural networks are used as the core processing unit of the framework to facilitate the image analysis process.
Engineering a QoS Provider Mechanism for Edge Computing with Deep Reinforcement Learning
With the development of new system solutions that integrate traditional cloud
computing with the edge/fog computing paradigm, dynamic optimization of service
execution has become a challenge due to the edge computing resources being more
distributed and dynamic. How to optimize the execution to provide Quality of
Service (QoS) in edge computing depends on both the system architecture and the
resource allocation algorithms in place. We design and develop a QoS provider
mechanism, as an integral component of a fog-to-cloud system, to work in
dynamic scenarios by using deep reinforcement learning. We choose reinforcement
learning since it is particularly well suited for solving problems in dynamic
and adaptive environments where the decision process needs to be frequently
updated. We specifically use a Deep Q-learning algorithm that optimizes QoS by
identifying and blocking devices that potentially cause service disruption due
to dynamicity. We compare the reinforcement-learning-based solution with
state-of-the-art heuristics that use telemetry data, and analyze the pros and cons of each approach.
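To make the decision process concrete, the sketch below shows a minimal tabular Q-learning update for the allow/block decision described above. This is a simplification, assuming coarse telemetry buckets as states: the paper uses a deep Q-network, and the state names, actions, and reward values here are hypothetical.

```python
# Hypothetical state buckets derived from device telemetry, and the two
# actions the QoS provider can take on a device.
STATES = ["stable", "jittery", "failing"]
ACTIONS = ["allow", "block"]

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Bellman update on a Q-table (the paper uses a deep network instead)."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    return q

# Initialize all Q-values to zero.
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
# Blocking a device that is failing avoids a service disruption: positive reward.
q = q_update(q, "failing", "block", reward=1.0, next_state="stable")
```

After this single update, the value of blocking a failing device rises above zero, so a greedy policy would begin preferring that action in that state.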
Adaptive Deep Learning Detection Model for Multi-Foggy Images
Fog has different features and effects in every environment. Detecting whether there is fog in an image is a challenge, and identifying the type of fog substantially informs image defogging. Foggy scenes come in different types, such as scenes based on fog density level and scenes based on fog type. Machine learning techniques have contributed significantly to the detection of foggy scenes. However, most existing detection models are based on traditional machine learning, and only a few studies have adopted deep learning models. Furthermore, most existing machine learning detection models address fog density-level scenes; to the best of our knowledge, no detection model based on multi-fog-type scenes has been presented yet. Therefore, the main goal of our study is to propose an adaptive deep learning model for the detection of multiple fog types in images. Moreover, due to the lack of a publicly available dataset for inhomogeneous, homogeneous, dark, and sky foggy scenes, a dataset for multi-fog scenes is presented in this study (https://github.com/Karrar-H-Abdulkareem/Multi-Fog-Dataset). Experiments were conducted in three stages. First, the data collection phase drew on eight resources to obtain the multi-fog scene dataset. Second, a classification experiment was conducted based on the ResNet-50 deep learning model to obtain detection results. Third, in the evaluation phase, the performance of the ResNet-50 detection model was compared against three different models. Experimental results show that the proposed model delivers a stable classification performance for different foggy images, with a 96% score for each of Classification Accuracy Rate (CAR), Recall, Precision, and F1-Score, which has both theoretical and practical significance. Our proposed model is suitable as a pre-processing step and might be considered in different real-time applications.
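The four metrics reported above (CAR, Recall, Precision, F1-Score) can be computed from raw predictions as follows; this is a generic sketch, not the paper's evaluation code, and the class labels in the usage example are illustrative.

```python
def accuracy(y_true, y_pred):
    """Classification Accuracy Rate (CAR): fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision_recall_f1(y_true, y_pred, positive):
    """Per-class precision, recall, and F1 against one 'positive' label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative labels (not from the actual Multi-Fog dataset):
y_true = ["homogeneous", "homogeneous", "dark", "homogeneous"]
y_pred = ["homogeneous", "dark", "dark", "homogeneous"]
car = accuracy(y_true, y_pred)
p, r, f = precision_recall_f1(y_true, y_pred, "homogeneous")
```

For a multi-class problem such as the four fog types here, per-class scores would typically be macro-averaged across classes.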
Multi-level Adversarial Spatio-temporal Learning for Footstep Pressure based FoG Detection
Freezing of gait (FoG) is one of the most common symptoms of Parkinson's
disease, which is a neurodegenerative disorder of the central nervous system
impacting millions of people around the world. To address the pressing need to
improve the quality of treatment for FoG, devising a computer-aided detection
and quantification tool for FoG has been increasingly important. As a
non-invasive technique for collecting motion patterns, the footstep pressure
sequences obtained from pressure sensitive gait mats provide a great
opportunity for evaluating FoG in the clinic and potentially in the home
environment. In this study, FoG detection is formulated as a sequential
modelling task and a novel deep learning architecture, namely Adversarial
Spatio-temporal Network (ASTN), is proposed to learn FoG patterns across
multiple levels. A novel adversarial training scheme is introduced with a
multi-level subject discriminator to obtain subject-independent FoG
representations, which helps to reduce the over-fitting risk due to the high
inter-subject variance. As a result, robust FoG detection can be achieved for
unseen subjects. The proposed scheme also sheds light on improving
subject-level clinical studies from other scenarios as it can be integrated
with many existing deep architectures. To the best of our knowledge, this is
one of the first studies of footstep pressure-based FoG detection and the
approach of utilizing ASTN is the first deep neural network architecture in
pursuit of subject-independent representations. Experimental results on 393
trials collected from 21 subjects demonstrate encouraging performance of the
proposed ASTN for FoG detection, with an AUC of 0.85.
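The adversarial training scheme described above pulls the feature extractor in two directions: minimize the FoG-detection loss while defeating the subject discriminators at multiple levels. A minimal sketch of that combined objective is below; the function name, weighting scheme, and single scalar `lam` are assumptions, not the paper's exact formulation.

```python
def adversarial_objective(fog_loss, subject_disc_losses, lam=0.1):
    """Sketch of a combined objective for subject-independent features:
    the feature extractor minimizes the task (FoG) loss while maximizing
    the subject discriminators' losses, here summed over multiple levels.
    (In practice this is usually implemented via gradient reversal.)"""
    return fog_loss - lam * sum(subject_disc_losses)

# Example: task loss 1.0, two discriminator levels each with loss 0.5.
total = adversarial_objective(1.0, [0.5, 0.5], lam=0.1)
```

Minimizing this quantity with respect to the feature extractor discourages features that let the discriminators identify the subject, which is what yields subject-independent representations.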
FogLearn: Leveraging Fog-based Machine Learning for Smart System Big Data Analytics
Big data analytics with cloud computing is one of the emerging areas for processing and analysis. Fog computing is the paradigm in which fog devices help reduce latency and increase throughput by assisting clients at the edge. This paper discusses the emergence of fog computing for mining analytics in big data from geospatial and medical health applications. It proposes and develops a fog-computing-based framework, FogLearn, for the application of K-means clustering to Ganga River Basin management and to real-world feature data for detecting patients suffering from diabetes mellitus. The proposed architecture employs machine learning on a deep learning framework to analyse pathological feature data obtained from smart watches worn by patients with diabetes, together with geographical parameters from the River Ganga basin geospatial database. The results show that fog computing holds immense promise for the analysis of medical and geospatial big data.
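The clustering step at the heart of FogLearn is standard K-means (Lloyd's algorithm), which is light enough to run on fog devices. The sketch below is a generic implementation, assuming points are tuples of numeric features; it is not the FogLearn code, and the sample points are illustrative.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid update."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from k distinct points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: move each centroid to its cluster's mean.
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of 2-D feature points:
centroids, clusters = kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], k=2)
```

On well-separated data like this, the centroids converge to the two group means regardless of which points are sampled as the initial centroids.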
Freezing of Gait Prediction From Accelerometer Data Using a Simple 1D-Convolutional Neural Network -- 8th Place Solution for Kaggle's Parkinson's Freezing of Gait Prediction Competition
Freezing of Gait (FOG) is a common motor symptom in patients with Parkinson's
disease (PD). During episodes of FOG, patients suddenly lose their ability to
stride as intended. Patient-worn accelerometers can capture information on the
patient's movement during these episodes and machine learning algorithms can
potentially classify this data. The combination therefore holds the potential
to detect FOG in real-time. In this work I present a simple 1-D convolutional
neural network that was trained to detect FOG events in accelerometer data.
Model performance was assessed by measuring the success of the model to
discriminate normal movement from FOG episodes and resulted in a mean average
precision of 0.356 on the private leaderboard on Kaggle. Ultimately, the model
ranked 8th out of 1379 teams in the Parkinson's Freezing of Gait Prediction
competition. The results underscore the potential of Deep Learning-based
solutions in advancing the field of FOG detection, contributing to improved
interventions and management strategies for PD patients.
Comment: 5 pages, 2 figures, competition report; for associated code see: https://github.com/janbrederecke/fo
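The core operation of a 1-D convolutional network over accelerometer data is a sliding dot product along the time axis. The sketch below shows one "valid" 1-D convolution layer followed by a ReLU in plain Python; it is a didactic simplification of such a network, not the competition model, and the kernel is a hand-picked edge detector rather than a learned filter.

```python
def conv1d_valid(signal, kernel):
    """'Valid' 1-D convolution (cross-correlation, as in most DL frameworks)
    over a single accelerometer channel."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

def relu(xs):
    """Elementwise rectified linear activation."""
    return [max(0.0, x) for x in xs]

# A sudden step change in acceleration produces a strong response from a
# difference (edge-detector) kernel:
feature = relu(conv1d_valid([0, 0, 0, 1, 1, 1], [-1.0, 1.0]))
```

A trained 1-D CNN stacks many such filters and layers so that learned kernels respond to the motion signatures of FOG episodes instead of a hand-picked step.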
Mobile learning architecture using fog computing and adaptive data streaming
With the huge development in the mobile and network fields, sensor technologies and fog computing help students learn more effectively and flexibly from anywhere. Using mobile devices for learning encourages the transition to mobile computing (cloud and fog computing), which enables the design of customized systems that support context-aware learning: user preferences are set, and appropriate methods are used to show only related subject matter. The presented study develops an e-learning system based on fog computing concepts, with deep learning approaches used to classify the data content and achieve context-aware learning. Video quality is adapted using a dedicated equation, and data is encrypted and decrypted using the 3DES algorithm to ensure the security of the operation.
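The abstract does not give its video-quality adaptation equation, so the sketch below illustrates the general idea with a common heuristic: pick the highest rendition whose bitrate fits the measured bandwidth with a safety margin. The quality ladder, margin, and function name are all assumptions for illustration.

```python
# Hypothetical (height, bitrate-kbit/s) quality ladder; not from the paper.
QUALITY_LADDER = [(240, 400), (360, 800), (480, 1500), (720, 3000)]

def pick_quality(bandwidth_kbps, margin=0.8):
    """Choose the highest rendition whose bitrate fits within a fraction
    (margin) of the measured bandwidth; fall back to the lowest rendition."""
    usable = bandwidth_kbps * margin
    chosen = QUALITY_LADDER[0]
    for height, bitrate in QUALITY_LADDER:
        if bitrate <= usable:
            chosen = (height, bitrate)
    return chosen
```

For example, at a measured 2000 kbit/s, 80% usable bandwidth (1600 kbit/s) admits the 480p/1500 kbit/s rendition but not 720p.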
Deep Reinforcement Learning-based Scheduling in Edge and Fog Computing Environments
Edge/fog computing, as a distributed computing paradigm, satisfies the
low-latency requirements of ever-increasing number of IoT applications and has
become the mainstream computing paradigm behind IoT applications. However,
because a large number of IoT applications require execution on edge/fog
resources, the servers may become overloaded. This can disrupt the edge/fog
servers and negatively affect IoT applications' response times. Moreover,
many IoT applications are composed of dependent components incurring extra
constraints for their execution. Besides, edge/fog computing environments and
IoT applications are inherently dynamic and stochastic. Thus, efficient and
adaptive scheduling of IoT applications in heterogeneous edge/fog computing
environments is of paramount importance. However, the limited computational
resources of edge/fog servers impose an extra burden on applying optimal but
computationally demanding techniques. To overcome these challenges, we propose
a Deep Reinforcement Learning-based IoT application Scheduling algorithm,
called DRLIS, to adaptively and efficiently optimize the response time of
heterogeneous IoT applications and balance the load of the edge/fog servers. We
implemented DRLIS as a practical scheduler in the FogBus2 function-as-a-service
framework for creating an edge-fog-cloud integrated serverless computing
environment. Results obtained from extensive experiments show that DRLIS
significantly reduces the execution cost of IoT applications by up to 55%, 37%,
and 50% in terms of load balancing, response time, and weighted cost,
respectively, compared with metaheuristic algorithms and other reinforcement
learning techniques.
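A scheduler like DRLIS optimizes a weighted cost that combines response time with server-load balance. The sketch below shows one plausible form of such a cost; the weights, the mean-absolute-deviation imbalance measure, and the function name are assumptions, not the paper's actual formulation.

```python
def weighted_cost(response_time, server_loads, w_time=0.5, w_load=0.5):
    """Hedged sketch of a scheduler objective: a weighted sum of an
    application's response time and the imbalance of server loads
    (mean absolute deviation from the average load)."""
    mean = sum(server_loads) / len(server_loads)
    imbalance = sum(abs(load - mean) for load in server_loads) / len(server_loads)
    return w_time * response_time + w_load * imbalance

# Perfectly balanced servers contribute no imbalance term:
balanced = weighted_cost(1.0, [0.5, 0.5])
# A skewed load distribution raises the cost for the same response time:
skewed = weighted_cost(1.0, [0.0, 1.0])
```

A reinforcement-learning scheduler would receive the negative of such a cost as its reward, so placements that cut response time and spread load earn higher returns.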