
    An Adaptive Lightweight Security Framework Suited for IoT

    Standard security systems are widely implemented in industry, but they consume considerable computational resources. Devices in the Internet of Things (IoT) are severely limited in processing capacity, memory, and storage; therefore, existing security systems are not directly applicable to IoT. To cope with this, we propose downsizing existing security processes. In this chapter, we describe three areas in which we reduce the required storage space and processing power. The first is the classification process required for ongoing anomaly detection, whereby values accepted or generated by a sensor are classified as valid or abnormal. We collect historic data and analyze it using machine learning techniques to draw a contour within which all streaming values are expected to fall. Hence, the detailed data collected from the sensors are no longer required for real-time anomaly detection. The second area involves the implementation of the Random Forest algorithm to apply distributed and parallel processing for anomaly discovery. The third area is downsizing cryptography calculations to fit IoT limitations without compromising security. For each area, we present experimental results supporting our approach and implementation.
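
    The contour idea above can be pictured with a small sketch: fit a one-class boundary on historic readings once, keep only the fitted model on the device, and classify streaming values against it. The chapter does not name the specific learner, so the use of scikit-learn's OneClassSVM and the synthetic data below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: learn a compact "contour" from historic sensor readings,
# then classify streaming values without keeping the raw history on-device.
# OneClassSVM and the synthetic readings are assumptions for illustration.
import numpy as np
from sklearn.svm import OneClassSVM

# Historic readings (e.g., temperature, humidity) collected offline.
historic = np.random.normal(loc=[22.0, 40.0], scale=[1.0, 3.0], size=(5000, 2))

# Fit the boundary once; only the fitted model needs to stay on the
# constrained IoT device, not the 5000 raw samples.
contour = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit(historic)

def is_valid(reading):
    """Return True if a streaming reading falls inside the learned contour."""
    return contour.predict(np.asarray(reading).reshape(1, -1))[0] == 1

print(is_valid([22.3, 41.0]))   # typical reading, expected inside the contour
print(is_valid([35.0, 90.0]))   # far outside the training distribution
```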

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. For future networks to overcome current limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, a comparison of the most commonly found ML algorithms in terms of certain SON metrics is performed, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work provides future research directions and new paradigms that the use of more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular network domain to fully enable the concept of SON in the near future.

    Attributes of Big Data Analytics for Data-Driven Decision Making in Cyber-Physical Power Systems

    Big data analytics is a relatively new term in power system terminology. The concept concerns the way a massive volume of data is acquired, processed, and analyzed to extract insight. In particular, big data analytics alludes to applications of artificial intelligence, machine learning, data mining, and time-series forecasting methods. Decision-makers in power systems have long been plagued by the weakness of classical methods in dealing with large-scale practical cases, owing to thousands or millions of variables, long solution times, high computational burden, divergence of results, unjustifiable errors, and poor model accuracy. Big data analytics is an active research topic that pinpoints how to extract insights from these large data sets. This article enumerates the applications of big data analytics in future power systems across several layers, from grid scale to local scale. Big data analytics has many applications in the areas of smart grid implementation, electricity markets, execution of collaborative operation schemes, enhancement of microgrid operation autonomy, management of electric vehicle operations in smart grids, active distribution network control, district hub system management, multi-agent energy systems, electricity theft detection, stability and security assessment by PMUs, and better exploitation of renewable energy sources. The employment of big data analytics entails some prerequisites, such as the proliferation of IoT-enabled devices, easily accessible cloud storage, blockchain, etc. This paper comprehensively reviews the applications of big data analytics along with the prevailing challenges and solutions.

    AmI-DEU: a semantic framework for adaptable applications in smart environments

    This thesis aims at expanding the use of the Internet of Things (IoT) by facilitating the development of applications by people who are not experts in software development. The thesis proposes a new approach to augment IoT applications' semantics and domain-expert involvement in context-aware application development. Our approach enables us to manage the changing environment context and generate applications that run in multiple smart environments to provide required actions in diverse settings. Our approach is implemented in a framework (AmI-DEU) that includes the components for IoT application development. AmI-DEU integrates environment services, promotes end-user interaction, and provides the means to represent the application domain, end-user profile, and end-user intentions. The framework enables the definition of IoT applications with a self-described activity intention that contains the required knowledge to achieve the activity. The framework then generates Intention as a Context (IaaC), which includes a self-described activity intention with compiled knowledge to be assessed for augmented adaptations in smart environments. The AmI-DEU framework semantics adopts ContextAA (Context-Aware Agents), a platform that provides context awareness in multiple environments. The framework performs knowledge compilation by rules and semantic matching to produce autonomic IoT applications that run in ContextAA. AmI-DEU also includes a visual tool for quick application development and deployment to ContextAA. The AmI-DEU GUI adopts the flow metaphor with visual aids to simplify application development by allowing step-by-step rule definitions. As part of the experimentation, AmI-DEU includes a testbed for IoT application development. Experimental results show a potential semantic optimization for dynamic IoT applications in smart homes and smart cities. Our approach promotes technology adoption to improve people's well-being and quality of life. This thesis concludes with research directions that the AmI-DEU framework uncovers to achieve pervasive smart environments providing suitable adaptations to support people's intentions.
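
    As a purely illustrative sketch of the rule-and-matching idea (not the actual AmI-DEU or ContextAA API, whose interfaces are not described in the abstract), the toy code below represents an activity intention by the context knowledge it requires and matches it against the services an environment offers. All names are hypothetical.

```python
# Illustrative only: a toy "activity intention" matched against environment
# services by simple coverage checking. Intention, Service, and
# match_intention are hypothetical names, not AmI-DEU/ContextAA identifiers.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    provides: set          # context attributes the service can supply

@dataclass
class Intention:
    activity: str
    required_context: set  # knowledge needed to achieve the activity

def match_intention(intention, services):
    """Return the services whose offerings cover the intention's needs."""
    selected, covered = [], set()
    for svc in services:
        overlap = svc.provides & intention.required_context
        if overlap:
            selected.append(svc.name)
            covered |= overlap
    return selected if covered == intention.required_context else None

wake_up = Intention("morning-routine", {"ambient_light", "room_temperature"})
home = [Service("smart-blinds", {"ambient_light"}),
        Service("thermostat", {"room_temperature", "humidity"})]
print(match_intention(wake_up, home))  # ['smart-blinds', 'thermostat']
```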

    Predictive intelligence to the edge through approximate collaborative context reasoning

    We focus on Internet of Things (IoT) environments where a network of sensing and computing devices is responsible for locally processing contextual data, reasoning, and collaboratively inferring the appearance of a specific phenomenon (event). Pushing processing and knowledge inference to the edge of the IoT network allows the complexity of the event reasoning process to be distributed into many manageable pieces and to be physically located at the source of the contextual information. This enables huge volumes of rich data streams to be processed in real time that would be prohibitively complex and costly to deliver to a traditional centralized Cloud system. We propose a lightweight, energy-efficient, distributed, adaptive, multiple-context-perspective event reasoning model under uncertainty on each IoT device (sensor/actuator). Each device senses and processes context data and infers events based on different local context perspectives: (i) expert knowledge on event representation, (ii) outlier inference, and (iii) deviation from locally predicted context. This novel approximate reasoning paradigm is achieved through a contextualized, collaborative, belief-driven clustering process, where clusters of devices are formed according to their belief in the presence of events. Our distributed and federated intelligence model efficiently identifies any localized abnormality in the contextual data by aggregating local degrees of belief, and it updates and adjusts its knowledge in response to contextual data outliers and novelty detection. We provide a comprehensive experimental and comparative assessment of our model over real contextual data against other localized and centralized event detection models and show the benefits stemming from its adoption, achieving up to three orders of magnitude less energy consumption and high quality of inference.
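
    As a rough illustration of the reasoning described above, the sketch below fuses three local evidence signals into a degree of belief per device, groups devices by that belief, and declares the event from the aggregated belief of the believing cluster. The weights, thresholds, and function names are illustrative assumptions; the paper's actual fusion and clustering rules are not reproduced here.

```python
# Hedged sketch: per-device belief fusion, belief-driven grouping, and a
# simple aggregate decision. Weighted averaging and thresholding are assumed.
def local_belief(expert_score, outlier_score, prediction_deviation,
                 weights=(0.4, 0.3, 0.3)):
    """Combine the three local context perspectives into a belief in [0, 1]."""
    signals = (expert_score, outlier_score, prediction_deviation)
    return sum(w * s for w, s in zip(weights, signals))

def cluster_by_belief(beliefs, threshold=0.5):
    """Split devices into a 'believers' cluster and the rest."""
    believers = {d for d, b in beliefs.items() if b >= threshold}
    return believers, set(beliefs) - believers

def event_detected(beliefs, quorum=0.6):
    """Declare the event if the believing cluster is large and confident."""
    believers, _ = cluster_by_belief(beliefs)
    if not believers:
        return False
    aggregate = sum(beliefs[d] for d in believers) / len(believers)
    # also require the believing cluster to hold at least half of the devices
    return aggregate >= quorum and len(believers) * 2 >= len(beliefs)

readings = {"sensor-1": local_belief(0.9, 0.8, 0.7),
            "sensor-2": local_belief(0.8, 0.9, 0.9),
            "sensor-3": local_belief(0.2, 0.1, 0.3)}
print(event_detected(readings))  # True: two of three devices strongly agree
```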

    Toward enhancement of deep learning techniques using fuzzy logic: a survey

    Deep learning has recently emerged as a type of artificial intelligence (AI) and machine learning (ML); it typically imitates the way humans acquire a particular type of knowledge. Deep learning is considered an essential element of data science, which comprises predictive modeling and statistics, and it makes the processes of collecting, interpreting, and analyzing big data easier and faster. Deep neural networks are a kind of ML model in which non-linear processing units are layered to extract particular features from the inputs. However, the training process of such networks is very expensive and depends on the optimization method used, so optimal results may not be obtained. Deep learning techniques are also vulnerable to data noise. For these reasons, fuzzy systems are used to improve the performance of deep learning algorithms, especially in combination with neural networks, and to improve the representation accuracy of deep learning models. This survey reviews deep learning based fuzzy logic models and techniques proposed in previous studies, where fuzzy logic is used to improve deep learning performance. The approaches are divided into two categories based on how the two are combined. Furthermore, the models' practicality in real-world settings is discussed.
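
    One common combination pattern covered by such surveys is feeding fuzzified inputs to a neural network. The sketch below is a minimal, hedged illustration of that pattern, not a specific model from the surveyed papers: a crisp temperature is mapped to triangular membership degrees, which then pass through a tiny untrained network.

```python
# Minimal sketch: fuzzify a crisp input into membership degrees, then feed
# those degrees to a small neural network. Membership functions, shapes,
# and weights are illustrative assumptions.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify(temp):
    """Map a crisp temperature to degrees of 'low', 'medium', 'high'."""
    return np.array([triangular(temp, -10, 0, 15),
                     triangular(temp, 5, 20, 35),
                     triangular(temp, 25, 40, 60)])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer weights

def forward(temp):
    """Forward pass: fuzzified features -> ReLU hidden layer -> sigmoid output."""
    h = np.maximum(fuzzify(temp) @ W1 + b1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

print(forward(22.0))  # untrained output; training would fit W1, b1, W2, b2
```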

    A framework to manage uncertainties in cloud manufacturing environment

    This research project aims to develop a framework to manage uncertainty in cloud manufacturing for small and medium enterprises (SMEs). The framework includes a cloud manufacturing taxonomy; guidance for dealing with uncertainty in cloud manufacturing, including a process to identify uncertainties; a detailed step-by-step approach to managing the uncertainties; a list of uncertainties; and response strategies for security and privacy uncertainties in cloud manufacturing. Additionally, an online assessment tool has been developed to implement the uncertainty management framework in a real-life context. To fulfil the aim and objectives of the research, a comprehensive literature review was performed to understand the relevant research areas. Next, an uncertainty management technique was applied to identify, assess, and control uncertainties in cloud manufacturing. Two well-known approaches were used to evaluate the uncertainties in this research: the Simple Multi-Attribute Rating Technique (SMART) to prioritise uncertainties, and a fuzzy rule-based system to quantify security and privacy uncertainties. Finally, the framework was embedded into an online assessment tool and validated through expert opinion and case studies. Results from this research are useful for both academia and industry in understanding aspects of cloud manufacturing. The main contribution is a framework that offers new insights for decision-makers on how to deal with uncertainty at the adoption and implementation stages of cloud manufacturing. The research also introduced a novel cloud manufacturing taxonomy, a list of uncertainty factors, an assessment process to prioritise uncertainties and quantify security- and privacy-related uncertainties, and a knowledge base for providing recommendations and solutions.
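
    A minimal sketch of SMART-style prioritisation, in the spirit of the step described above, is shown below: normalise attribute weights, score each uncertainty per attribute, and rank by the weighted sum. The attributes, weights, and scores are illustrative placeholders, not values from the thesis.

```python
# Hedged sketch of SMART-style prioritisation of cloud manufacturing
# uncertainties. Attributes, weights, and scores are made-up placeholders.
raw_weights = {"likelihood": 40, "impact": 35, "detectability": 25}
total = sum(raw_weights.values())
weights = {k: v / total for k, v in raw_weights.items()}   # normalise to 1.0

# Each uncertainty scored 0-100 per attribute, e.g., by domain experts.
uncertainties = {
    "data breach during service exchange": {"likelihood": 60, "impact": 90, "detectability": 40},
    "supplier capability misreported":     {"likelihood": 70, "impact": 55, "detectability": 65},
    "network outage at the shop floor":    {"likelihood": 30, "impact": 70, "detectability": 80},
}

def smart_score(scores):
    """SMART utility: weighted sum of the attribute scores."""
    return sum(weights[a] * s for a, s in scores.items())

ranked = sorted(uncertainties.items(), key=lambda kv: smart_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{smart_score(scores):5.1f}  {name}")
```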

    Trusted resource allocation in volunteer edge-cloud computing for scientific applications

    Data-intensive science applications in fields such as bioinformatics, health sciences, and materials discovery are becoming increasingly dynamic and demanding in their resource requirements. Researchers using these applications, which are based on advanced scientific workflows, frequently require a diverse set of resources that are often not available within private servers or a single Cloud Service Provider (CSP). For example, a user working with Precision Medicine applications would prefer only those CSPs who follow HIPAA (Health Insurance Portability and Accountability Act) guidelines for implementing their data services and might want services from other CSPs for economic viability. With the generation of more and more data, these workflows often require deployment and dynamic scaling of multi-cloud resources in an efficient and high-performance manner (e.g., quick setup, reduced computation time, and increased application throughput). At the same time, users seek to minimize the costs of configuring the related multi-cloud resources. While performance and cost are among the key factors in CSP resource selection, scientific workflows often process proprietary/confidential data that introduces additional security-posture constraints. Thus, users have to make an informed decision on the selection of resources that are most suited for their applications while trading off the key factors of resource selection: performance, agility, cost, and security (PACS). Furthermore, even with the most efficient resource allocation across multiple clouds, the cost to solution might not be economical for all users, which has led to the development of new computing paradigms such as volunteer computing, where users utilize volunteered cyber resources to meet their computing requirements. For economical and readily available resources, it is essential that such volunteered resources integrate well with cloud resources to provide the most efficient computing infrastructure for users. In this dissertation, individual stages in the lifecycle of resource brokering for users are tackled, including user requirement collection, users' resource preferences, resource brokering, and task scheduling. For the collection of user requirements, a novel approach based on an iterative design interface is proposed. In addition, a fuzzy inference-based approach is proposed to capture users' biases and expertise to guide resource selection for their applications. The results showed an improvement in performance, i.e., time to execute, in 98 percent of the studied applications. The data collected on users' requirements and preferences are later used by the optimizer engine and machine learning algorithms for resource brokering. For resource brokering, a new integer linear programming-based solution (OnTimeURB) is proposed, which creates multi-cloud template solutions for resource allocation while optimizing performance, agility, cost, and security. The solution was further improved by the addition of a machine learning model based on a Naive Bayes classifier, which captures the true QoS of cloud resources to guide template solution creation. The proposed solution was able to improve the time to execute for as much as 96 percent of the largest applications.
    As discussed above, to fulfill the need for economical computing resources, a new computing paradigm, Volunteer Edge Computing (VEC), is proposed, which reduces cost and improves performance and security by creating edge clusters comprising volunteered computing resources close to users. Initial results have shown improved execution time for application workflows compared with state-of-the-art solutions while utilizing only the most secure VEC resources. We have also utilized reinforcement learning-based solutions to characterize volunteered resources in terms of their availability and their flexibility toward implementing security policies. This characterization facilitates efficient resource allocation and scheduling of workflow tasks, which improves the performance and throughput of workflow executions. The VEC architecture is further validated with state-of-the-art bioinformatics and manufacturing workflows.
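
    The brokering step can be pictured with a small integer linear program in the spirit of the PACS trade-off: choose exactly one CSP offering per workflow task, minimise a weighted cost/time objective, and exclude offerings below a security threshold. This is a generic hedged sketch using PuLP, not the actual OnTimeURB formulation; the tasks, CSP names, and numbers are made up.

```python
# Hedged sketch of ILP-based multi-cloud brokering. Generic formulation for
# illustration only; not the OnTimeURB model from the dissertation.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

tasks = ["align", "assemble"]
offerings = {                      # (cost $, exec time h, security score 0-1)
    ("align", "cspA"):    (4.0, 2.0, 0.9),
    ("align", "cspB"):    (2.5, 3.5, 0.6),
    ("assemble", "cspA"): (6.0, 1.5, 0.9),
    ("assemble", "cspB"): (3.0, 2.0, 0.8),
}
w_cost, w_time, min_security = 1.0, 2.0, 0.7

prob = LpProblem("multi_cloud_brokering", LpMinimize)
x = {k: LpVariable(f"x_{k[0]}_{k[1]}", cat="Binary") for k in offerings}

# Objective: weighted sum of monetary cost and execution time.
prob += lpSum(x[k] * (w_cost * c + w_time * t) for k, (c, t, _) in offerings.items())

for task in tasks:
    # Exactly one offering per task.
    prob += lpSum(x[k] for k in offerings if k[0] == task) == 1
for k, (_, _, sec) in offerings.items():
    if sec < min_security:
        prob += x[k] == 0          # security-posture constraint

prob.solve(PULP_CBC_CMD(msg=0))
plan = [k for k, var in x.items() if var.value() == 1]
print(plan)   # e.g. [('align', 'cspA'), ('assemble', 'cspB')]
```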

    Edge/Fog Computing Technologies for IoT Infrastructure

    The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring enormous volumes of data to centralized cloud servers, fog/edge computing can significantly reduce processing delay and network traffic. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building the IoT infrastructure. Aiming to explore recent research and development on fog/edge computing technologies for building an IoT infrastructure, this book collects 10 articles. The selected articles cover diverse topics such as resource management, service provisioning, task offloading and scheduling, container orchestration, and security on edge/fog computing infrastructure, helping readers grasp recent trends as well as state-of-the-art algorithms in fog/edge computing.
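
    The traffic-reduction argument can be illustrated with a toy edge-side filter: aggregate a window of raw readings locally and forward only a summary plus any anomalous samples upstream. The window size, threshold, and function names below are illustrative assumptions.

```python
# Toy sketch of edge-side filtering: one summary message replaces a whole
# window of raw samples. Thresholds and window size are made-up values.
from statistics import mean

WINDOW, ANOMALY_THRESHOLD = 100, 80.0

def edge_process(readings):
    """Reduce a window of raw readings to one summary message for the cloud."""
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    return {"count": len(readings), "avg": mean(readings), "anomalies": anomalies}

raw_window = [20.0 + (i % 7) for i in range(WINDOW)] + [95.0]   # 101 samples
summary = edge_process(raw_window)
print(f"sent upstream: 1 summary instead of {len(raw_window)} samples -> {summary}")
```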