    An architecture for user preference-based IoT service selection in cloud computing using mobile devices for smart campus

    The Internet of Things refers to the set of objects that have identities and virtual personalities, operating in smart spaces and using intelligent interfaces to connect and communicate within social environments and user contexts. Interconnected devices communicating with each other, or with other machines on the network, have increased the number of available services. The concepts of discovery, brokerage, selection, and reliability are important in dynamic environments; they have emerged as an important field distinguished from conventional distributed computing by its focus on large-scale resource sharing, delivery, and innovative applications. The use of Internet of Things technology across different service-provisioning environments has increased the challenges associated with service selection and discovery. Although a set of terms can be used to express requirements for the desired service, a more detailed and specific user interface would make it easier for users to express their requirements using high-level constructs. To address the challenge of service selection and discovery, we developed an architecture that represents user preferences and manipulates relevant descriptions of available services. To ensure that the key components of the architecture work, we proposed algorithms (content-based and collaborative filtering) derived from the architecture. The architecture was tested by selecting services using both the content-based and the collaborative algorithms. The performance of the algorithms was evaluated using response time, and their effectiveness was evaluated using recall and precision. The results showed that the content-based recommender system is more effective than the collaborative filtering recommender system, and that the content-based technique is also more time-efficient.
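
    The abstract does not include the algorithms themselves, so the following is only a minimal sketch of the two selection strategies it compares; the feature vectors, rating matrix, and relevance set are hypothetical stand-ins for real service data:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors (0 if either is all zeros)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

# Hypothetical data: 4 services described by 3 feature terms, and a user
# preference vector over the same terms.
services = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]], dtype=float)
user_prefs = np.array([1, 0, 1], dtype=float)

def content_based(prefs, feats, k=2):
    # Score each service by similarity between its description and the preferences.
    return sorted(range(len(feats)), key=lambda i: cosine(prefs, feats[i]), reverse=True)[:k]

# Hypothetical user x service rating matrix (0 = service not yet used).
ratings = np.array([[5, 0, 3, 0], [4, 2, 0, 1], [1, 5, 4, 0]], dtype=float)

def collaborative(target, R, k=2):
    # Rank the target user's unseen services by the most similar user's ratings.
    sims = [cosine(R[target], R[u]) if u != target else -1.0 for u in range(len(R))]
    nearest = int(np.argmax(sims))
    unseen = [i for i in range(R.shape[1]) if R[target, i] == 0]
    return sorted(unseen, key=lambda i: R[nearest, i], reverse=True)[:k]

def precision_recall(recommended, relevant):
    # Effectiveness metrics used in the abstract's evaluation.
    hits = len(set(recommended) & set(relevant))
    return hits / len(recommended), hits / len(relevant)

print(content_based(user_prefs, services))          # e.g. [0, 3]
print(collaborative(0, ratings))                    # e.g. [1, 3]
print(precision_recall(content_based(user_prefs, services), {0, 3}))
```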

    Toward Customizable Multi-tenant SaaS Applications

    Nowadays, computing is so pervasive that it has indeed become the fifth utility (after water, electricity, gas, and telephony), as Leonard Kleinrock once envisioned. Evolved from utility computing, cloud computing has emerged as a computing infrastructure that enables rapid delivery of computing resources as a utility in a dynamically scalable, virtualized manner. However, current industrial cloud computing implementations promote segregation among different cloud providers, which leads to vendor lock-in because of prohibitive migration costs. On the other hand, Service-Oriented Computing (SOC), including service-oriented architecture (SOA) and Web Services (WS), promotes standardization and openness through its enabling standards and communication protocols. This thesis proposes a Service-Oriented Cloud Computing Architecture (SOCCA) that combines the best attributes of the two paradigms to promote an open, interoperable environment for cloud computing development. Multi-tenant SaaS applications built on top of SOCCA have more flexibility and are not locked in to a particular platform. Tenants residing on a multi-tenant application appear to be the sole owners of the application and are unaware of the existence of others. A multi-tenant SaaS application accommodates each tenant's unique requirements by allowing tenant-level customization. A complex SaaS application that supports hundreds, even thousands, of tenants could have hundreds of customization points, each providing multiple options, and this could result in a huge number of ways to customize the application. This dissertation also proposes innovative customization approaches that study similar tenants' customization choices and each individual user's behavior, and then provide a guided, semi-automated customization process for future tenants. A semi-automated customization process could enable tenants to quickly implement the customization that best suits their business needs.
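
    As a rough illustration of the guided, semi-automated customization idea, the sketch below suggests an option for each customization point based on the choices of similar existing tenants; the data structures, the industry-based similarity proxy, and the majority-vote rule are assumptions for illustration, not the dissertation's actual design:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class CustomizationPoint:
    name: str
    options: list

@dataclass
class Tenant:
    name: str
    industry: str
    choices: dict = field(default_factory=dict)  # point name -> chosen option

def suggest(new_tenant, existing_tenants, points):
    """Propose an option per customization point from similar tenants' choices."""
    # Same-industry tenants stand in for "similar tenants" here.
    peers = [t for t in existing_tenants if t.industry == new_tenant.industry] \
            or existing_tenants
    suggestions = {}
    for p in points:
        votes = Counter(t.choices[p.name] for t in peers if p.name in t.choices)
        # Majority choice among peers, falling back to the first option.
        suggestions[p.name] = votes.most_common(1)[0][0] if votes else p.options[0]
    return suggestions

points = [CustomizationPoint("theme", ["light", "dark"]),
          CustomizationPoint("billing", ["monthly", "annual"])]
existing = [Tenant("acme", "retail", {"theme": "dark", "billing": "annual"}),
            Tenant("bmart", "retail", {"theme": "dark", "billing": "annual"})]
print(suggest(Tenant("newco", "retail"), existing, points))
# {'theme': 'dark', 'billing': 'annual'}
```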

    Acquisition Data Analytics for Supply Chain Cybersecurity

    Cybersecurity is a national priority, but the analysis required for acquisition personnel to objectively assess the integrity of the supply chain for cyber compromise is highly complex. This paper presents a process for supply chain data analytics for acquisition decision makers, addressing data collection, assessment, and reporting. The method includes workflows from the initial purchase request through vendor selection and maintenance to audits across the lifecycle of an asset. Artificial intelligence can help acquisition decision makers automate the complexity of supply chain information assurance. Approved for public release; distribution is unlimited.
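
    As a rough sketch of what an automated vendor-assessment step in such a workflow might look like, the toy scoring below flags supply chain risk from provenance, incident history, and audit outcomes; the fields, weights, and threshold are illustrative assumptions, not the report's method:

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    provenance_verified: bool   # component origin traced and verified
    past_incidents: int         # known security incidents on record
    audit_passed: bool          # most recent lifecycle audit result

def risk_score(v: Vendor) -> float:
    """Toy supply chain risk score in [0, 1]; weights are illustrative only."""
    score = 0.0 if v.provenance_verified else 0.4
    score += min(v.past_incidents, 5) * 0.1
    score -= 0.2 if v.audit_passed else 0.0
    return max(0.0, min(1.0, score))

def select_vendor(candidates, threshold=0.3):
    # Vendor-selection step: filter by risk threshold, pick the lowest-risk vendor.
    eligible = [v for v in candidates if risk_score(v) <= threshold]
    return min(eligible, key=risk_score) if eligible else None

print(select_vendor([Vendor("A", True, 0, True), Vendor("B", False, 2, False)]))
```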

    A service oriented architecture to provide data mining services for non-expert data miners

    In today's competitive market, companies need to use knowledge discovery techniques to make better, more informed decisions. But these techniques are out of the reach of most users, as the knowledge discovery process requires considerable expertise. Additionally, business intelligence vendors are moving their systems to the cloud in order to provide services that offer companies cost savings, better performance, and faster access to new applications. This work joins both facets. It describes a data mining service, addressed to non-expert data miners, which can be delivered as Software-as-a-Service. Its main advantage is that, by simply indicating where the data file is, the service itself is able to carry out the entire process.
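
    A minimal sketch of that idea, assuming a scikit-learn/pandas stand-in for the paper's actual service: the user supplies only the data file's location and the target column, and the service runs the whole mining process with sensible defaults:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def mine(data_location: str, target_column: str):
    """Run the whole mining process given only a data file location."""
    df = pd.read_csv(data_location)          # 1. fetch the user's data file
    y = df[target_column]
    X = df.drop(columns=[target_column]).select_dtypes("number")
    # 2. Build and evaluate a default model; no tuning is asked of the user.
    model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    # 3. Return the fitted model plus a quality estimate as the service result.
    return model.fit(X, y), accuracy
```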

    Toward Business Integrity Modeling and Analysis Framework for Risk Measurement and Analysis

    Financialization has contributed to economic growth but has also caused scandals, mis-selling, rogue trading, tax evasion, and market speculation, and to a certain extent it has created problems of social and economic instability. It is an important aspect of Enterprise Security, Privacy, and Risk (ESPR), particularly in risk research and analysis. In order to minimize the damaging impacts caused by the lack of regulatory compliance, governance, ethical responsibility, and trust, we propose a Business Integrity Modeling and Analysis (BIMA) framework that unifies business integrity with performance using big-data predictive analytics and business intelligence. Comprehensive services include modeling risk and asset prices and, consequently, aligning them with business strategies, making our services, informed by market trend analysis, both transparent and fair. The BIMA framework uses Monte Carlo simulation, the Black–Scholes–Merton model, and the Heston model to perform financial, operational, and liquidity risk analysis, and presents outputs in the form of analytics and visualization. Our results and analysis demonstrate supplier bankruptcy modeling, risk pricing, high-frequency pricing simulations, London Interbank Offered Rate (LIBOR) simulation, and speculation detection, providing a variety of critical risk analyses. Our approaches to tackling problems caused by financial services and operational risk clearly demonstrate that the BIMA framework, as the output of our data analytics research, can effectively combine integrity and risk analysis with overall business performance and can contribute to operational risk research.
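
    One of the named building blocks, Monte Carlo simulation of asset prices under the Black–Scholes–Merton model (geometric Brownian motion), can be sketched in a few lines; the parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, horizon, steps, n_paths, seed=0):
    """Simulate geometric Brownian motion paths via exact log-discretization."""
    rng = np.random.default_rng(seed)
    dt = horizon / steps
    z = rng.standard_normal((n_paths, steps))
    # S_{t+dt} = S_t * exp((mu - sigma^2 / 2) dt + sigma sqrt(dt) Z)
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_increments, axis=1))

sims = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, horizon=1.0, steps=252, n_paths=10_000)
var_5 = np.percentile(sims[:, -1], 5)  # crude 5% value-at-risk style estimate
print(f"5th percentile of simulated year-end price: {var_5:.2f}")
```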

    A lightweight secure adaptive approach for internet-of-medical-things healthcare applications in edge-cloud-based networks

    Mobile-cloud-based healthcare applications are increasingly common in practice; for instance, healthcare, transport, and shopping applications are designed on the basis of the mobile cloud. For executing mobile-cloud applications, offloading and scheduling are fundamental mechanisms. However, existing offloading and scheduling schemes largely ignore mobile healthcare workflow applications, which are in demand for healthcare monitoring, live healthcare services, and biomedical firms, and they do not consider the execution of workflow applications in their models. This paper develops a lightweight secure efficient offloading scheduling (LSEOS) metaheuristic model. LSEOS consists of lightweight, secure offloading and scheduling methods whose execution offloading delay is lower than that of existing methods. The objective of LSEOS is to run workflow applications on other nodes while minimizing delay and security risk in the system. The LSEOS metaheuristic consists of the following components: adaptive deadlines, sorting, and scheduling with neighborhood search schemes. Computational results revealed that LSEOS outperformed all available offloading and scheduling methods for workflow applications, improving the security ratio by 10% and reducing delays by 29%.
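
    The abstract names the components but not the algorithm, so the following is only a plausible sketch of an LSEOS-style loop: earliest-deadline-first sorting, greedy assignment to the node that would finish soonest, and a simple swap-based neighborhood search; the task/node model and cost function are assumptions:

```python
import random

def schedule(tasks, speeds):
    """tasks: (task_id, workload, deadline) tuples; speeds: node processing speeds."""
    order = sorted(tasks, key=lambda t: t[2])  # earliest deadline first
    finish = [0.0] * len(speeds)
    assign = {}
    for tid, work, _deadline in order:
        # Greedy step: place the task where it would finish soonest.
        n = min(range(len(speeds)), key=lambda i: finish[i] + work / speeds[i])
        finish[n] += work / speeds[n]
        assign[tid] = n
    return assign, max(finish)

def neighborhood_search(tasks, speeds, assign, iters=200, seed=0):
    """Try random pairwise swaps of assignments, keeping any that cut the makespan."""
    rng = random.Random(seed)
    def makespan(a):
        finish = [0.0] * len(speeds)
        for tid, work, _ in tasks:
            finish[a[tid]] += work / speeds[a[tid]]
        return max(finish)
    best, best_cost = dict(assign), makespan(assign)
    ids = [t[0] for t in tasks]
    for _ in range(iters):
        cand = dict(best)
        t1, t2 = rng.sample(ids, 2)
        cand[t1], cand[t2] = cand[t2], cand[t1]
        if makespan(cand) < best_cost:
            best, best_cost = cand, makespan(cand)
    return best, best_cost

tasks = [("ecg", 4.0, 10.0), ("imaging", 6.0, 12.0), ("alerts", 2.0, 5.0)]
assign, _ = schedule(tasks, speeds=[1.0, 2.0])
assign, cost = neighborhood_search(tasks, [1.0, 2.0], assign)
print(assign, cost)
```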

    An Intermediate Data-driven Methodology for Scientific Workflow Management System to Support Reusability

    A workflow is the automatic processing of different logical sub-tasks by a set of rules. A workflow management system (WfMS) helps us accomplish a complex scientific task by arranging sub-tasks, available as tools, into a sequence. Workflows are formed with modules from various domains in a WfMS, and many collaborators from those domains are involved in the workflow design process. Workflow Management Systems (WfMSs) have gained popularity in recent years for managing various tools in a system and ensuring dependencies while building a sequence of executions for scientific analyses. As a result of the involvement of heterogeneous tools and the requirement for collaboration, Collaborative Scientific Workflow Management Systems (CSWfMSs) have gained significant interest in the scientific analysis community. In such systems, big-data explosion issues arise from the massive velocity and variety of the large amounts of heterogeneous data from different domains. Therefore, a large amount of heterogeneous data needs to be managed in a Scientific Workflow Management System (SWfMS) with a proper decision mechanism. Although a number of studies have addressed the cost management of data, none of the existing studies address real-time decision mechanisms or reusability mechanisms. Besides, frequent execution of workflows in a SWfMS generates a massive amount of data, and the characteristics of such data are always incremental. Input data and module outcomes of a workflow in a SWfMS are usually large, and processing such data-intensive workflows is time-consuming because modules are computationally expensive for their respective inputs. Moreover, the lack of data reusability, limited error recovery, inefficient workflow processing, inefficient storage of derived data, missing metadata association, and the unvalidated effectiveness of existing systems' techniques all need to be addressed in a SWfMS for efficient workflow building in the face of the big-data explosion. To address these issues, this thesis first proposes an intermediate data management scheme for a SWfMS. In our second attempt, we explored the possibilities and introduced an automatic recommendation technique for a SWfMS from real-world workflow data (i.e., Galaxy [1] workflows); our investigations show that the proposed technique can facilitate 51% of workflow building in a SWfMS by reusing the intermediate data of previous workflows and can reduce workflow-building execution time by 74%. Later, we propose an adaptive version of our technique that considers the states of tools in a SWfMS and shows around 40% reusability for workflows. Consequently, in our fourth study, we conducted several experiments to analyze the performance and explore the effectiveness of the technique in various environments. The technique emphasizes storage cost reduction, increased data reusability, and faster workflow execution, and it is, to the best of our knowledge, the first of its kind. The detailed architecture and evaluation of the technique are presented in this thesis. We believe our findings and the developed system will contribute significantly to the research domain of SWfMSs.
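
    A minimal sketch of the intermediate-data reuse idea, assuming a content-addressed local cache (the key scheme and storage layout are illustrative, not the thesis design): each module run is keyed by the tool, its version/state, and its inputs, and a cache hit returns the stored result instead of re-executing the module:

```python
import hashlib
import json
import pickle
from pathlib import Path

CACHE = Path("wf_cache")
CACHE.mkdir(exist_ok=True)

def run_module(tool_name, tool_version, func, *inputs):
    """Run a workflow module, reusing cached intermediate data when possible."""
    # Key the run by tool identity (name + version/state) and its inputs.
    key_src = json.dumps([tool_name, tool_version, [repr(i) for i in inputs]])
    key = hashlib.sha256(key_src.encode()).hexdigest()
    entry = CACHE / key
    if entry.exists():                      # cache hit: reuse intermediate data
        return pickle.loads(entry.read_bytes())
    result = func(*inputs)                  # cache miss: pay the compute cost once
    entry.write_bytes(pickle.dumps(result))
    return result

# Second call returns the stored result without re-executing `sorted`.
print(run_module("sort", "1.0", sorted, [3, 1, 2]))
print(run_module("sort", "1.0", sorted, [3, 1, 2]))
```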