
    Maximizing Model Generalization for Machine Condition Monitoring with Self-Supervised Learning and Federated Learning

    Deep Learning (DL) can diagnose faults and assess machine health from raw condition-monitoring data without manually designed statistical features. However, practical manufacturing applications remain extremely difficult for existing DL methods: machine data are often unlabeled and come from very few health conditions (e.g., only normal operating data), and models often encounter domain shifts as process parameters change and new categories of faults emerge. Traditional supervised learning may struggle to learn compact, discriminative representations that generalize to these unseen target domains, since it depends on having plentiful classes to partition the feature space with decision boundaries. Transfer Learning (TL) with domain adaptation attempts to adapt such models to unlabeled target domains, but assumes a similar underlying structure that may not be present if new faults emerge. This study proposes focusing on maximizing feature generality on the source domain and applying TL via weight transfer to copy the model to the target domain. Specifically, Self-Supervised Learning (SSL) with Barlow Twins may produce more discriminative features for monitoring health conditions than supervised learning by focusing on semantic properties of the data. Furthermore, Federated Learning (FL) for distributed training may also improve generalization by efficiently expanding the effective size and diversity of the training data through information sharing across multiple client machines. Results show that Barlow Twins outperforms supervised learning in an unlabeled target domain with emerging motor faults when the source training data contain very few distinct categories. Incorporating FL may also provide a slight advantage by diffusing knowledge of health conditions between machines.
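
No implementation accompanies this abstract; as a rough illustration of the self-supervised objective it refers to, the sketch below computes a Barlow Twins-style redundancy-reduction loss on the embeddings of two augmented views of a batch of condition-monitoring signals. The standardisation step, the `lam` trade-off weight, and the tensor shapes are illustrative assumptions, not details taken from the study.

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 5e-3) -> torch.Tensor:
    """Barlow Twins-style loss (illustrative sketch, not the paper's code).

    z1, z2: embeddings of two augmented views, shape (batch, features).
    lam is a placeholder trade-off weight, not a value from the study.
    """
    # Standardise each embedding dimension across the batch.
    z1 = (z1 - z1.mean(dim=0)) / (z1.std(dim=0) + 1e-6)
    z2 = (z2 - z2.mean(dim=0)) / (z2.std(dim=0) + 1e-6)

    n = z1.shape[0]
    c = (z1.T @ z2) / n                              # cross-correlation matrix (d x d)

    diag = torch.diagonal(c)
    on_diag = (diag - 1).pow(2).sum()                # invariance: pull diagonal toward 1
    off_diag = (c - torch.diag(diag)).pow(2).sum()   # redundancy: push off-diagonal toward 0
    return on_diag + lam * off_diag
```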

    Federated and autonomic management of multimedia services


    Large-Scale Data Management and Analysis (LSDMA) - Big Data in Science


    Security in Cloud Computing: Evaluation and Integration

    Over the past decade, the Cloud Computing paradigm has revolutionized the way we envision IT services. It has provided an opportunity to respond to the ever-increasing computing needs of users by introducing the notion of service and data outsourcing. Cloud consumers usually have online, on-demand access to a large and distributed IT infrastructure providing a plethora of services. They can dynamically configure and scale Cloud resources according to the requirements of their applications without becoming part of the Cloud infrastructure, which allows them to reduce their IT investment costs and achieve optimal resource utilization.
    However, the migration of services to the Cloud increases the exposure to existing IT security threats and creates new ones that are intrinsic to the Cloud Computing architecture, hence the need for a thorough assessment of Cloud security risks during service selection and deployment. Recently, the impact of effective management of service security satisfaction has been taken more seriously by Cloud Service Providers (CSPs) and stakeholders. Nevertheless, successfully integrating the security element into Cloud resource management operations requires not only methodical research but also meticulous modeling of Cloud security requirements. To this end, we address throughout this thesis the challenges of security evaluation and integration in independent and interconnected Cloud Computing environments. We are interested in providing Cloud consumers with a set of methods that allow them to optimize the security of their services, and CSPs with a set of strategies that enable them to provide security-aware Cloud-based service hosting. The originality of this thesis lies in two aspects: 1) the innovative description of Cloud applications' security requirements, which paved the way for an effective quantification and evaluation of the security of Cloud infrastructures; and 2) the design of rigorous mathematical models that integrate the security factor into the traditional problems of application deployment, resource provisioning, and workload management within current Cloud Computing infrastructures. The work in this thesis is carried out in three phases.
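
The thesis's mathematical models are not reproduced in this abstract; purely as a toy illustration of what a security-aware deployment decision can look like, the sketch below filters candidate hosts by a minimum security score before choosing the cheapest feasible one. All names, scores, and costs are invented for the example and do not come from the thesis.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Host:
    name: str
    security_score: float   # e.g., aggregated from audit and compliance metrics
    cost_per_hour: float
    free_cpus: int

def place(app_cpus: int, required_security: float, hosts: list) -> Optional[Host]:
    """Toy security-aware placement: among hosts that meet the application's
    security requirement and have enough capacity, pick the cheapest one."""
    candidates = [h for h in hosts
                  if h.security_score >= required_security and h.free_cpus >= app_cpus]
    return min(candidates, key=lambda h: h.cost_per_hour, default=None)

hosts = [Host("host-a", 0.9, 0.12, 16), Host("host-b", 0.6, 0.08, 32)]
print(place(app_cpus=4, required_security=0.8, hosts=hosts))   # -> host-a (meets the security bar)
```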

    Lookahead Computation in G-DEVS/HLA Environment

    In this article, we present new methods to evaluate the lookahead of DEVS/G-DEVS federates participating in an HLA federation. We first propose an algorithm to compute the lookahead according to the current state of a DEVS/G-DEVS model. This solution is designed for models whose lifetime function depends on one state variable. We then extend this computation to models with lifetime functions defined over several state variables. We use Dijkstra's graph search algorithm to compute the different values of the state variables and a mathematical analysis of the lifetime function to determine the lookahead for the model states. Finally, we illustrate with an example how this solution extends the range of DEVS/G-DEVS models that can be involved in distributed simulations, and we present some simulation results.
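
The article's exact algorithm is not given in this abstract; as a generic, assumption-laden illustration of using a Dijkstra-style search to bound the time before a model can next produce an output (the kind of quantity a lookahead must lower-bound), one might sketch the following. The graph encoding and the use of minimum lifetimes as edge weights are assumptions made for the example.

```python
import heapq

def min_time_to_output(transitions, start, output_states):
    """Dijkstra-style search over a model's state graph (toy illustration).

    transitions: {state: [(next_state, min_lifetime), ...]}, lifetimes >= 0.
    Returns the minimum accumulated lifetime before any output-producing state
    is reachable from `start`; this lower bound can serve as a lookahead value.
    """
    best = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        t, state = heapq.heappop(heap)
        if state in output_states:
            return t
        if t > best.get(state, float("inf")):
            continue                      # stale heap entry
        for nxt, lifetime in transitions.get(state, []):
            nt = t + lifetime
            if nt < best.get(nxt, float("inf")):
                best[nxt] = nt
                heapq.heappush(heap, (nt, nxt))
    return float("inf")                   # no output reachable from this state
```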

    Computer-Mediated Communication

    This book is an anthology of current research trends in Computer-Mediated Communication (CMC), viewed through different application scenarios. Four scenarios are considered: telecommunication networks, smart health, education, and human-computer interaction. The possibilities of interaction introduced by CMC provide a powerful environment for collaborative, computer-mediated human-to-human interaction across the globe.

    Theoretical and Applied Foundations for Intrusion Detection in Single and Federated Clouds

    Cloud Computing systems are becoming more and more complex, dynamic, and heterogeneous. Such an environment frequently produces complex and noisy data that leave Intrusion Detection Systems (IDSs) unable to detect unknown variants of known attacks. A single intrusion or attack in such a heterogeneous system can take various forms that are logically, but not syntactically, similar. This, in turn, makes traditional IDSs unable to identify these attacks, since they are designed for specific and limited infrastructures; the accuracy of detection in the cloud is therefore strongly degraded. In addition to the problems posed by the cloud environment itself, cyber attacks are getting more sophisticated and harder to detect. It is thus becoming increasingly difficult for a single cloud-based IDS to detect all attacks, because of its limited and incomplete knowledge about attacks and their implications. The problem with existing cloud-based IDS solutions is that they overlook the dynamic and changing nature of the cloud. Moreover, they rely fundamentally on local knowledge and experience to classify attacks and normal patterns, which leaves the cloud vulnerable to "Zero-Day" attacks.
    To this end, we address throughout this thesis two challenges associated with cloud-based IDSs: the detection of cyber attacks in complex, dynamic, and heterogeneous environments, and the detection of cyber attacks under limited and/or incomplete information about intrusions and their implications. In this thesis, we are interested in making cloud-based IDSs generic, so that they identify intrusions regardless of the infrastructure used. Therefore, whenever an intrusion has been identified, an IDS should be able to recognize all the different forms of such an attack, whatever infrastructure is being used. We are also interested in allowing cloud-based IDSs to cooperate and share knowledge with each other, so that they benefit from each other's expertise to cover unknown attack patterns. The originality of this thesis lies in two aspects: 1) the design of a generic cloud-based IDS that allows detection in changing and heterogeneous environments, and 2) the design of a multi-cloud cooperative IDS that ensures trustworthiness, fairness, and sustainability. By trustworthiness, we mean that the cloud-based IDS should be able to ensure that it consults, cooperates, and shares knowledge only with trusted parties (i.e., other cloud-based IDSs). By fairness, we mean that the cloud-based IDS should be able to guarantee that mutual benefits are achieved by minimising the chance of cooperating with selfish IDSs, which gives IDSs the motivation to participate in the community.
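
The thesis's trust and fairness mechanisms are not detailed in this abstract; as a loose illustration of the bookkeeping a cooperative, trust-aware IDS community might perform, the toy sketch below keeps a per-peer trust score, updates it from the accuracy of shared verdicts, and consults only peers above a threshold. The update rule, neutral prior, and threshold are invented for illustration and are not the thesis's models.

```python
def update_trust(trust, peer, feedback_was_accurate, lr=0.1):
    """Toy trust update for a cooperative IDS community (illustrative only).

    trust: dict mapping a peer IDS identifier to a score in [0, 1].
    feedback_was_accurate: whether the peer's last shared verdict proved correct.
    """
    current = trust.get(peer, 0.5)                   # neutral prior for newcomers
    target = 1.0 if feedback_was_accurate else 0.0
    trust[peer] = (1 - lr) * current + lr * target   # exponential moving average
    return trust[peer]

def consultable_peers(trust, threshold=0.7):
    """Only consult peers whose accumulated trust exceeds the threshold."""
    return [peer for peer, score in trust.items() if score >= threshold]
```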

    Evaluation Theory for Characteristics of Cloud Identity Trust Framework

    Trust management is a prominent area of security in cloud computing because insufficient trust management hinders cloud growth. Trust management systems can help cloud users make the best decisions regarding security, privacy, Quality of Protection (QoP), and Quality of Service (QoS). A trust model acts as a security-strength evaluator and ranking service for the cloud and for cloud identity applications and services. It can be used as a benchmark to set up the security of a cloud identity service and to find inadequacies and possible enhancements in the cloud infrastructure. This chapter addresses the concerns of evaluating cloud trust management systems, data gathering, and the synthesis of theory and data. The conclusion is that the relationship between cloud identity providers and cloud identity users can greatly benefit from the evaluation and critical review of current trust models.
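
The chapter's evaluation theory is not spelled out in this abstract; as a minimal sketch of how a trust model could act as a ranking service over criteria such as security, privacy, QoP, and QoS, one could weight and sort provider scores. The criterion names, weights, and scores below are placeholders, not values from the chapter.

```python
def rank_providers(providers, weights):
    """Toy weighted scoring of cloud identity providers (illustrative only).

    providers: {name: {criterion: score in [0, 1]}}
    weights:   {criterion: weight}, assumed to sum to 1.
    """
    def total(scores):
        return sum(weights[c] * scores.get(c, 0.0) for c in weights)
    return sorted(providers.items(), key=lambda item: total(item[1]), reverse=True)

weights = {"security": 0.4, "privacy": 0.3, "qop": 0.2, "qos": 0.1}
providers = {
    "idp-a": {"security": 0.9, "privacy": 0.7, "qop": 0.8, "qos": 0.6},
    "idp-b": {"security": 0.6, "privacy": 0.9, "qop": 0.5, "qos": 0.9},
}
print([name for name, _ in rank_providers(providers, weights)])   # highest score first
```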

    A Fog Computing Approach for Cognitive, Reliable and Trusted Distributed Systems

    In the Internet of Things era, a huge volume of data is generated and gathered every second from billions of connected devices. The current network paradigm, which relies on centralised data centres (a.k.a. Cloud computing), is becoming an impractical solution for IoT data storage and processing because of the long distance between the data sources (e.g., sensors) and the designated data centres. The long distance here refers both to the physical path and to the interval between when data are generated and when they are processed: by the time the data reach a far-away data centre, their value may already have depreciated. Network topologies have therefore evolved to permit data processing and storage at the edge of the network, introducing what is called fog computing, which in turn improves quality of service by responding quickly and efficiently to a variety of data-processing requests. Although fog computing is recognised as a promising computing paradigm, it faces challenging issues: i) concrete adoption and management of fogs for decentralised data processing; ii) resource allocation in both the cloud and fog layers; iii) sustaining performance, since fogs have limited capacity compared with the cloud; and iv) providing a secure and trusted networking environment in which fogs can share resources and exchange data securely and efficiently. Hence, the thesis focuses on achieving stable performance for fog nodes by enhancing resource management and allocation, along with safety procedures, to support IoT service delivery and cloud computing in the ever-growing industry of smart things. The main aspects related to the performance stability of fog computing involve the development of cognitive fog nodes that aim to provide fast and reliable services, efficient resource management, and trusted networking, and hence ensure the best Quality of Experience, Quality of Service, and Quality of Protection for end users. In brief, the contribution of this thesis is a novel Fog Resource manAgeMEnt Scheme (FRAMES), proposed to crystallise fog distribution and resource management with an appropriate distribution and allocation of service loads based on Fog-2-Fog coordination, together with a novel COMputIng Trust manageMENT (COMITMENT) scheme, a software-based approach responsible for providing a secure and trusted environment in which fog nodes can share their resources and exchange data packets. Both FRAMES and COMITMENT are encapsulated in the proposed Cognitive Fog (CF) computing, which aims to make the fog able not only to act on the data but also to interpret the gathered data in a way that mimics the process of cognition in the human mind. FRAMES provides CF with elastic resource management for load balancing and congestion resolution, while COMITMENT employs trust and recommendation models to avoid malicious fog nodes in the Fog-2-Fog coordination environment. In experiments conducted to verify their validity and performance, covering latency, load balancing among fog nodes, fog trustworthiness, and the detection of malicious events and attacks in the Fog-2-Fog environment, the proposed FRAMES and COMITMENT algorithms outperformed the competing benchmark algorithms, Random Walks Offloading (RWO) and Nearest Fog Offloading (NFO).
    The proposed FRAMES offloading algorithms achieved the lowest run time (i.e., latency) against the benchmark algorithms (RWO and NFO) when processing equal numbers of packets, and COMITMENT's algorithms were able to classify collaboration requests as secure, malicious, or anonymous. The proposed work shows potential for achieving a sustainable fog networking paradigm and highlights significant benefits of fog computing in the computing ecosystem.
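
FRAMES and COMITMENT themselves are not specified in this abstract; to make the comparison with the RWO and NFO baselines concrete, the toy sketch below contrasts a nearest-fog choice, a random-walk choice, and a simple load-aware choice for offloading a request. The load-aware policy is only a stand-in for the idea of Fog-2-Fog coordination, not the FRAMES algorithm, and all data structures are assumptions made for the example.

```python
import random

def nearest_fog(nodes, distance_to):
    """NFO-style baseline: offload the request to the closest fog node."""
    return min(nodes, key=distance_to)

def random_walk(neighbours, start, hops=3):
    """RWO-style baseline: follow a few random hops and offload where we stop."""
    node = start
    for _ in range(hops):
        node = random.choice(neighbours[node])
    return node

def load_aware(nodes, load, capacity):
    """Load-aware selection in the spirit of Fog-2-Fog coordination:
    offload to the node with the most spare capacity.
    (Illustrative sketch only; not the FRAMES algorithm itself.)"""
    return max(nodes, key=lambda n: capacity[n] - load[n])
```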