ChatGPT: Vision and Challenges
Artificial intelligence (AI) and machine learning have changed the nature of
scientific inquiry in recent years. In particular, the development of virtual
assistants has accelerated greatly in the past few years, with ChatGPT becoming
a prominent AI language model. In this study, we examine the foundations,
vision, and research challenges of ChatGPT. The article delves into the
background and development of the technology behind it, as well as its popular
applications. Moreover, we discuss the advantages of bringing everything
together through ChatGPT and Internet of Things (IoT). Further, we speculate on
the future of ChatGPT by considering various possibilities for study and
development, such as energy-efficiency, cybersecurity, enhancing its
applicability to additional technologies (Robotics and Computer Vision),
strengthening human-AI communications, and bridging the technological gap.
Finally, we discuss the important ethical considerations and current trends of ChatGPT.
ROUTER:Fog Enabled Cloud based Intelligent Resource Management Approach for Smart Home IoT Devices
There is a growing requirement for Internet of Things (IoT) infrastructure to ensure low response time to provision latency-sensitive real-time applications such as health monitoring, disaster management, and smart homes. Fog computing offers a means to provide such requirements, via a virtualized intermediate layer that provides data, computation, storage, and networking services between Cloud datacenters and end users. A key element within such Fog computing environments is resource management. While there are existing resource managers in Fog computing, they only focus on a subset of the parameters important to Fog resource management, encompassing system response time, network bandwidth, energy consumption, and latency. To date, no existing Fog resource manager considers these parameters simultaneously for decision making, which in the context of smart homes will become increasingly key. In this paper, we propose a novel resource management technique (ROUTER) for fog-enabled Cloud computing environments, which leverages Particle Swarm Optimization to optimize these parameters simultaneously. The approach is validated within an IoT-based smart home automation scenario, and evaluated within the iFogSim toolkit driven by empirical models from a small-scale smart home experiment. Results demonstrate that our approach achieves a reduction of 12% in network bandwidth, 10% in response time, 14% in latency, and 12.35% in energy consumption.
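The abstract does not give ROUTER's objective function or parameter encoding, but the core idea, using Particle Swarm Optimization to minimize a single cost over response time, bandwidth, latency, and energy simultaneously, can be sketched as follows. The cost model, weights, and two-dimensional allocation encoding below are illustrative assumptions, not the paper's:

```python
import random

# Assumed equal weights over the four parameters the paper names.
WEIGHTS = {"response_time": 0.25, "bandwidth": 0.25, "latency": 0.25, "energy": 0.25}

def cost(x):
    # Toy allocation model: x = (cpu_share, bandwidth_share) in [0, 1].
    # More CPU lowers response time/latency but raises energy; more
    # bandwidth lowers latency but raises bandwidth usage.
    cpu, bw = x
    response_time = 1.0 / (0.1 + cpu)
    latency = 1.0 / (0.1 + 0.5 * cpu + 0.5 * bw)
    energy = cpu ** 2
    return (WEIGHTS["response_time"] * response_time
            + WEIGHTS["latency"] * latency
            + WEIGHTS["bandwidth"] * bw
            + WEIGHTS["energy"] * energy)

def pso(n_particles=20, iters=100, dim=2, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard PSO over [0, 1]^dim; returns (best_position, best_cost)."""
    rng = random.Random(seed)
    pos = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best position
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

Folding the four parameters into one weighted scalar is the simplest way to "optimize simultaneously"; a real system could instead use Pareto-based multi-objective PSO.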
A Meta-learning based Stacked Regression Approach for Customer Lifetime Value Prediction
Companies across the globe are keen on targeting potential high-value
customers in an attempt to expand revenue and this could be achieved only by
understanding the customers more. Customer Lifetime Value (CLV) is the total
monetary value of transactions/purchases made by a customer with the business
over an intended period of time and is used as means to estimate future
customer interactions. CLV finds application in a number of distinct business
domains such as Banking, Insurance, Online-entertainment, Gaming, and
E-Commerce. The existing distribution-based and basic (recency, frequency &
monetary) based models face a limitation in terms of handling a wide variety of
input features. Moreover, the more advanced Deep learning approaches could be
superfluous and add an undesirable element of complexity in certain application
areas. We therefore propose a system that is both effective and
comprehensive, yet simple and interpretable. With that in mind, we develop a
meta-learning-based stacked regression model which combines the predictions
from bagging and boosting models, each of which is found to perform well
individually. Empirical tests have been carried out on an openly available
Online Retail dataset to evaluate various models and show the efficacy of the
proposed approach.
Comment: 11 pages, 7 figures.
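The two-level structure described above, base models whose predictions a meta-learner combines, can be sketched minimally. The stand-in base learners and the grid-searched meta-weight below are illustrative assumptions; the paper's actual stack uses bagging and boosting regressors and would typically train the meta-learner on out-of-fold predictions:

```python
import statistics

def fit_mean(xs, ys):
    # Trivial base learner: always predict the training mean.
    m = statistics.mean(ys)
    return lambda x: m

def fit_linear(xs, ys):
    # One-feature ordinary least squares as a second base learner.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs) or 1.0
    b = num / den
    a = my - b * mx
    return lambda x: a + b * x

def fit_stack(xs, ys, base_fitters):
    """Level 0: fit the two base learners. Level 1 (meta): pick the convex
    weight over their predictions that minimizes training squared error,
    a simple stand-in for a meta regressor."""
    bases = [f(xs, ys) for f in base_fitters]
    preds = [[b(x) for x in xs] for b in bases]
    best_w, best_err = 0.0, float("inf")
    for i in range(101):                      # grid search w in [0, 1]
        w = i / 100
        err = sum((w * p0 + (1 - w) * p1 - y) ** 2
                  for p0, p1, y in zip(preds[0], preds[1], ys))
        if err < best_err:
            best_w, best_err = w, err
    return lambda x: best_w * bases[0](x) + (1 - best_w) * bases[1](x)
```

On data that one base learner fits well, the meta step shifts the weight toward that learner, which is the interpretability advantage the abstract claims over deep models: the combination weights themselves are readable.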
ChainsFormer: A Chain Latency-aware Resource Provisioning Approach for Microservices Cluster
The trend towards transitioning from monolithic applications to microservices
has been widely embraced in modern distributed systems and applications. This
shift has resulted in the creation of lightweight, fine-grained, and
self-contained microservices. Multiple microservices can be linked together via
calls and inter-dependencies to form complex functions. One of the challenges
in managing microservices is provisioning the optimal amount of resources for
microservices in the chain to ensure application performance while improving
resource usage efficiency. This paper presents ChainsFormer, a framework that
analyzes microservice inter-dependencies to identify critical chains and nodes,
and provision resources based on reinforcement learning. To analyze chains,
ChainsFormer utilizes lightweight machine learning techniques to address the
dynamic nature of microservice chains and workloads. For resource provisioning,
a reinforcement learning approach is used that combines vertical and horizontal
scaling to determine the amount of allocated resources and the number of
replicas. We evaluate the effectiveness of ChainsFormer using realistic
applications and traces on a real testbed based on Kubernetes. Our experimental
results demonstrate that ChainsFormer can reduce response time by up to 26% and
improve processed requests per second by 8% compared with state-of-the-art
techniques.
Comment: 15 pages.
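One plausible reading of "identifying critical chains" is finding the call path with the highest total latency in the microservice dependency graph, so that provisioning effort goes where it most affects end-to-end response time. The call graph, per-service latencies, and algorithm below are assumptions for illustration, not ChainsFormer's actual method:

```python
# Hypothetical microservice call graph (a DAG) with per-service latency.
LATENCY_MS = {"gateway": 5, "auth": 12, "catalog": 8, "cart": 10, "payment": 30}
CALLS = {
    "gateway": ["auth", "catalog"],
    "auth": ["cart"],
    "catalog": ["cart"],
    "cart": ["payment"],
    "payment": [],
}

def critical_chain(entry):
    """Return (total_latency, path) of the slowest call chain from entry,
    found by recursively taking the worst downstream chain at each node."""
    best = (LATENCY_MS[entry], [entry])
    for callee in CALLS[entry]:
        sub_lat, sub_path = critical_chain(callee)
        cand = (LATENCY_MS[entry] + sub_lat, [entry] + sub_path)
        if cand[0] > best[0]:
            best = cand
    return best
```

In this toy graph the critical chain runs gateway → auth → cart → payment; an RL provisioner in the spirit of the paper would then prioritize scaling the services on that path.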
Deep Learning Based Forecasting of Indian Summer Monsoon Rainfall
Accurate short range weather forecasting has significant implications for
various sectors. Machine learning based approaches, e.g., deep learning, have
gained popularity in this domain where the existing numerical weather
prediction (NWP) models still have modest skill after a few days. Here we use a
ConvLSTM network to develop a deep learning model for precipitation
forecasting. The crux of the idea is to develop a forecasting model which
involves convolution based feature selection and uses long term memory in the
meteorological fields in conjunction with gradient based learning algorithm.
Prior to using the input data, we explore various techniques to overcome
dataset difficulties. We follow a strategic approach to deal with missing
values and discuss the model's fidelity in capturing realistic precipitation.
The model resolution used is 25 km. A comparison between 5 years of predicted
data and the corresponding observational records for a 2-day lead-time
forecast shows correlation coefficients of 0.67 and 0.42 for lead days 1 and
2, respectively.
The patterns indicate higher correlation over the Western Ghats and Monsoon
trough region (0.8 and 0.6 for lead days 1 and 2, respectively). Further, the
model performance is evaluated based on skill scores, Mean Square Error,
correlation coefficient, and ROC curves. This study demonstrates that the
adopted deep learning approach, based only on a single precipitation
variable, has reasonable skill in the short range. Incorporating
multivariable-based deep learning has the potential to match or even better
the short-range precipitation forecasts based on state-of-the-art NWP models.
Comment: 14 pages, 14 figures. The manuscript is under review with the journal 'Transactions on Geoscience and Remote Sensing'.
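The lead-day correlation coefficients reported above follow the standard Pearson definition, which can be computed directly between a forecast series and observations. This is a generic sketch of that metric, not the paper's evaluation code:

```python
import math

def pearson_r(pred, obs):
    """Pearson correlation coefficient between forecast and observed values.

    r = cov(pred, obs) / (std(pred) * std(obs)); r = 1 means perfect
    positive linear agreement, r = -1 perfect negative agreement.
    """
    n = len(pred)
    mp = sum(pred) / n
    mo = sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sp * so)
```

In the paper's setting, `pred` and `obs` would be the gridded daily rainfall values at a given lead day, so r = 0.67 (lead day 1) means the forecast explains roughly 45% of the observed variance at that lead.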