
    An architecture for user preference-based IoT service selection in cloud computing using mobile devices for smart campus

    The Internet of Things refers to the set of objects that have identities and virtual personalities, operating in smart spaces and using intelligent interfaces to connect and communicate within social environments and user contexts. Interconnected devices communicating with each other, or with other machines on the network, have increased the number of available services. The concepts of discovery, brokerage, selection and reliability are important in such dynamic environments; together they form a field distinguished from conventional distributed computing by its focus on large-scale resource sharing, service delivery and innovative applications. The use of Internet of Things technology across different service provisioning environments has increased the challenges associated with service selection and discovery. Although a set of terms can be used to express requirements for a desired service, a more detailed and specific user interface would make it easier for users to express their requirements using high-level constructs. To address the challenge of service selection and discovery, we developed an architecture that represents user preferences and manipulates relevant descriptions of available services. To ensure that the key components of the architecture work, algorithms (content-based and collaborative filtering) derived from the architecture were proposed. The architecture was tested by selecting services using both the content-based and the collaborative algorithms. The performance of the algorithms was evaluated using response time; their effectiveness was evaluated using recall and precision. The results showed that the content-based recommender system is more effective than the collaborative filtering recommender system. Furthermore, the content-based technique is more time-efficient than the collaborative filtering technique.
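
    A minimal sketch of the two selection strategies compared above, assuming hypothetical service descriptions, preference terms and user ratings (none of these names or data come from the thesis): content-based filtering matches a user's preference terms against service descriptions, while collaborative filtering recommends services rated highly by users with similar rating histories.

```python
# Illustrative sketch only: content-based vs. collaborative filtering for
# IoT service selection. Service names, preference terms, and ratings are
# hypothetical placeholders, not data from the thesis.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term/rating vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def content_based(user_terms, services):
    """Rank services by similarity between the user's preference terms and service descriptions."""
    profile = Counter(user_terms)
    scored = [(cosine(profile, Counter(desc.lower().split())), name)
              for name, desc in services.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

def collaborative(target_user, ratings):
    """Recommend services rated highly by users whose rating vectors resemble the target's."""
    target = Counter(ratings[target_user])
    scores = Counter()
    for user, rated in ratings.items():
        if user == target_user:
            continue
        sim = cosine(target, Counter(rated))
        for service, rating in rated.items():
            if service not in ratings[target_user]:
                scores[service] += sim * rating
    return [service for service, _ in scores.most_common()]

services = {"PrintService": "campus printing document service",
            "RoomBooking": "lecture room booking service",
            "ParkingInfo": "campus parking availability service"}
ratings = {"alice": {"PrintService": 5, "RoomBooking": 3},
           "bob": {"PrintService": 4, "ParkingInfo": 5},
           "carol": {"RoomBooking": 2, "ParkingInfo": 4}}
print(content_based(["campus", "printing"], services))  # ['PrintService', 'ParkingInfo']
print(collaborative("alice", ratings))                   # ['ParkingInfo']
```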

    Securing emerging IoT systems through systematic analysis and design

    The Internet of Things (IoT) is growing very rapidly. A variety of IoT systems have been developed and deployed in many domains such as smart home, smart city and industrial control, providing great benefits to our everyday lives. However, as IoT becomes increasingly prevalent and complex, it also introduces new attack surfaces and security challenges, and we see numerous IoT attacks exploiting vulnerabilities in IoT systems every day. Security vulnerabilities may manifest at different layers of the IoT stack, and no single security solution can work for the whole ecosystem. In this dissertation, we explore the limitations of emerging IoT systems at different layers and develop techniques and systems to make them more secure. More specifically, we focus on three of the most important layers: the user rule layer, the application layer and the device layer. First, on the user rule layer, we characterize the potential vulnerabilities introduced by the interaction of user-defined automation rules. We introduce iRuler, a static analysis system that uses model checking to detect inter-rule vulnerabilities within trigger-action platforms, such as IFTTT, in an IoT deployment. Second, on the application layer, we design and build ProvThings, a system that instruments IoT apps to generate data provenance providing a holistic explanation of system activities, including malicious behaviors. Lastly, on the device layer, we develop ProvDetector and SplitBrain to detect malicious processes using kernel-level provenance tracking and analysis. ProvDetector is a centralized approach that collects all audit data from the clients and performs detection on the server. SplitBrain extends ProvDetector with collaborative learning, where the clients collaboratively build the detection model and perform detection on the client device.
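
    To illustrate the kind of inter-rule interaction the user rule layer work targets, the following hedged sketch chains hypothetical trigger-action rules and flags cases where one rule's action can transitively fire another rule's sensitive action. It is not the iRuler implementation (which uses model checking); the rules, events and "sensitive actions" list are invented for illustration.

```python
# Hedged sketch of inter-rule interaction detection in a trigger-action setup.
# Not the iRuler model-checking implementation; rules, events, and the
# sensitive-action set are hypothetical placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    trigger: str   # event that fires the rule
    action: str    # event/state change the rule produces

RULES = [
    Rule("R1", trigger="motion_detected", action="lights_on"),
    Rule("R2", trigger="lights_on", action="window_open"),
    Rule("R3", trigger="window_open", action="door_unlock"),
]
SENSITIVE_ACTIONS = {"door_unlock"}

def chains_to_sensitive(rules, start_event, max_depth=10):
    """Follow action->trigger links from an external event and report any
    chain that ends in a sensitive action (a potential inter-rule vulnerability)."""
    findings, frontier = [], [(start_event, [])]
    for _ in range(max_depth):
        next_frontier = []
        for event, path in frontier:
            for r in rules:
                if r.trigger == event:
                    new_path = path + [r.name]
                    if r.action in SENSITIVE_ACTIONS:
                        findings.append((new_path, r.action))
                    next_frontier.append((r.action, new_path))
        frontier = next_frontier
    return findings

print(chains_to_sensitive(RULES, "motion_detected"))
# [(['R1', 'R2', 'R3'], 'door_unlock')]
```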

    Research and development of accounting system in grid environment

    The Grid has been recognised as the next-generation distributed computing paradigm, seamlessly integrating heterogeneous resources across administrative domains into a single virtual system. An increasing number of scientific and business projects employ Grid computing technologies for large-scale resource sharing and collaboration. Early adopters of Grid computing implemented custom middleware to bridge the gaps between heterogeneous computing backbones. These custom solutions form the basis of the emerging Open Grid Services Architecture (OGSA), which aims to address common concerns of Grid systems by defining a set of interoperable and reusable Grid services. One of the common concerns defined in OGSA is the Grid accounting service, whose main objective is to ensure that resources are shared within a Grid environment in an accountable manner, by metering and logging accurate resource usage information. This thesis discusses the origins and fundamentals of Grid computing and of the accounting service in the context of the OGSA profile. A prototype was developed and evaluated based on OGSA accounting-related standards, enabling accounting data to be shared in a multi-Grid environment, the Worldwide LHC Computing Grid (WLCG). Based on this prototype and the lessons learned, a generic middleware solution was also implemented as a toolkit that eases the migration of existing accounting systems towards standards compliance.
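
    As a rough illustration of the metering and logging described above, here is a hedged sketch of a per-job usage record that a Grid accounting service might publish to a central repository. The field names are illustrative and only loosely inspired by accounting-record standards; they are not the schema of the thesis prototype.

```python
# Hedged sketch of a per-job Grid accounting usage record. Field names are
# illustrative placeholders, not the exact standard or prototype schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class UsageRecord:
    record_id: str
    site: str            # resource provider (e.g. a WLCG site)
    user_dn: str         # identity the usage is charged to
    start_time: str      # ISO 8601 timestamps
    end_time: str
    wall_seconds: int    # metered wall-clock time
    cpu_seconds: int     # metered CPU time

    def to_json(self) -> str:
        """Serialize the record for publication to a central accounting repository."""
        return json.dumps(asdict(self))

record = UsageRecord(
    record_id="job-00042",
    site="EXAMPLE-SITE",
    user_dn="/DC=org/DC=example/CN=alice",
    start_time=datetime(2009, 1, 1, 12, 0, tzinfo=timezone.utc).isoformat(),
    end_time=datetime(2009, 1, 1, 13, 30, tzinfo=timezone.utc).isoformat(),
    wall_seconds=5400,
    cpu_seconds=4900,
)
print(record.to_json())
```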

    Building the Infrastructure for Cloud Security

    Computer science

    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for Cloud Service Providers, offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key quality properties at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose in this thesis is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider before any selection decision is taken. It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers based on three trust levels: (1) the presence of up-to-date, verified cloud resource capabilities, (2) reputational evidence measured by neighboring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation or cloud service interruption. Our workflow orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures different dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection, workflow monitoring and adaptation schemes we have implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating the automated workflow orchestration further show that our model is self-adapting and self-configuring, and that it reacts efficiently to changes and adapts accordingly while enforcing the QoS of workflows.
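
    A minimal sketch of how the three trust levels described above (verified capabilities, neighbour reputation, personal history) might be combined into a single provider score before selection. The weights, scales, and provider data below are hypothetical assumptions, not the thesis's actual trust model.

```python
# Hedged sketch: combining three trust sources into one provider score.
# Weights and inputs are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ProviderEvidence:
    name: str
    capability_score: float   # 0..1, from up-to-date verified capabilities
    reputation: float         # 0..1, aggregated from neighbouring users
    history: float            # 0..1, from the consumer's own past experiences

def trust_score(p: ProviderEvidence,
                w_cap: float = 0.4, w_rep: float = 0.3, w_hist: float = 0.3) -> float:
    """Weighted aggregation of the three trust levels (weights are assumptions)."""
    return w_cap * p.capability_score + w_rep * p.reputation + w_hist * p.history

providers = [
    ProviderEvidence("CloudA", capability_score=0.9, reputation=0.7, history=0.8),
    ProviderEvidence("CloudB", capability_score=0.8, reputation=0.9, history=0.6),
]
best = max(providers, key=trust_score)
print(best.name, round(trust_score(best), 3))   # CloudA 0.81
```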

    Contextualization in the Enterprise (Putting Users and Developers First)

    Context-aware applications must manage a continuous stream of context according to dedicated business logic. Previous research has largely been limited to proposing frameworks and platforms with predefined adaptation behavior. This thesis extends that work by proposing new concepts that serve as the foundation of a flexible approach for building context-aware applications. It examines the state of the art of context-aware computing, then adopts well-established software design principles and a functional decomposition to design a reference model for context management that enables seamless integration of context-awareness into applications, including applications not initially designed for it. The thesis also studies the use of context in common applications and proposes a context-centric modelling approach which produces a graph-based representation where entities are connected to each other through links representing context. This context graph decouples the representation of context from its semantics, leaving each application to manage the appropriate interpretation of its context data. Case studies evaluate the proposed system in terms of its support for creating applications enhanced with context-awareness, and a simulation study analyzes its performance properties. The result of this thesis is a novel approach for supporting the creation of context-aware applications that also supports the integration of context-awareness into existing applications; it empowers developers as well as users to participate in the creation process, thereby reducing usability issues.
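
    To make the context-centric modelling concrete, here is a hedged sketch of a graph in which entities are nodes and context is carried on labelled links, with interpretation left to each application. The entities, link labels and interpretation function are illustrative assumptions, not the thesis's actual model.

```python
# Hedged sketch of a context graph: entities as nodes, context as labelled
# links, with semantics (interpretation) left to each application.
from collections import defaultdict

class ContextGraph:
    def __init__(self):
        # entity -> list of (link_label, other_entity, raw_context_value)
        self.links = defaultdict(list)

    def add_context(self, entity, label, other, value):
        """Store a raw context observation linking two entities; no semantics yet."""
        self.links[entity].append((label, other, value))

    def query(self, entity, label):
        """Return raw context values for one link type; interpretation is the caller's job."""
        return [(other, value) for l, other, value in self.links[entity] if l == label]

graph = ContextGraph()
graph.add_context("alice", "located_in", "room_42", {"confidence": 0.9})
graph.add_context("room_42", "temperature", "sensor_7", {"celsius": 27.5})

# Application-specific interpretation of the same raw context:
def is_room_hot(graph, room, threshold=26.0):
    readings = graph.query(room, "temperature")
    return any(v["celsius"] > threshold for _, v in readings)

print(graph.query("alice", "located_in"))   # [('room_42', {'confidence': 0.9})]
print(is_room_hot(graph, "room_42"))        # True
```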

    Building Blocks for IoT Analytics: Internet-of-Things Analytics

    Internet-of-Things (IoT) analytics is an integral element of most IoT applications, as it provides the means to extract knowledge, drive actuation services and optimize decision making. IoT analytics will be a major contributor to IoT business value in the coming years, as it will enable organizations to process and fully leverage large amounts of IoT data, which are nowadays largely underutilized. Building Blocks for IoT Analytics is devoted to presenting the main technology building blocks that comprise advanced IoT analytics systems. It introduces IoT analytics as a special case of Big Data analytics and accordingly presents leading-edge technologies that can be deployed to successfully confront the main challenges of IoT analytics applications. Special emphasis is placed on technologies for IoT streaming and semantic interoperability across diverse IoT streams. Furthermore, the role of cloud computing and Big Data technologies in IoT analytics is presented, along with practical tools for implementing, deploying and operating non-trivial IoT applications. Alongside the main building blocks of IoT analytics systems and applications, the book presents a series of practical applications which illustrate the use of these technologies in pragmatic settings. Technical topics discussed in the book include: cloud computing and Big Data for IoT analytics; searching the Internet of Things; development tools for IoT analytics applications; IoT analytics-as-a-service; semantic modelling and reasoning for IoT analytics; IoT analytics for smart buildings; IoT analytics for smart cities; operationalization of IoT analytics; and ethical aspects of IoT analytics. The book contains both research-oriented and applied articles on IoT analytics, including several articles reflecting work undertaken in recent European Commission funded projects under the FP7 and H2020 programmes; these articles present results of those projects on IoT analytics platforms and applications. Although the articles were contributed by different authors, they are structured in a well-thought-out order that allows the reader either to follow the book from start to finish or to focus on specific topics depending on his or her background and interest in IoT and IoT analytics technologies. The compilation of these articles in this edited volume has been largely motivated by the close collaboration of the co-authors in working groups and IoT events organized by the Internet-of-Things Research Cluster (IERC), which is currently part of the EU's Alliance for Internet of Things Innovation (AIOTI).
