
    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications, and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for Cloud Service Providers, offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key properties of the quality of work at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose in this thesis is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider prior to taking any selection decisions.
It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers based on three trust levels: (1) presence of the most up-to-date cloud resource verified capabilities, (2) reputational evidence measured by neighboring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation or cloud service interruption. Our workflow orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures different dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring, and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection, workflow monitoring, and adaptation schemes we have implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating automated workflow orchestration further show that our model is self-adapting and self-configuring, reacts efficiently to changes, and adapts accordingly while enforcing the QoS of workflows.
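The three-level trust assessment described above can be sketched as a weighted aggregation over the capability, reputation, and history dimensions. The weights and provider values below are illustrative assumptions, not figures from the thesis:

```python
# Hedged sketch of multi-level trust scoring for provider selection.
# Weights and provider values are made up for illustration.

def trust_score(capability, reputation, history, weights=(0.4, 0.3, 0.3)):
    """Combine verified-capability, neighbor-reputation, and
    personal-history evidence (each normalized to [0, 1]) into
    a single trust score via a weighted sum."""
    levels = (capability, reputation, history)
    return sum(w * v for w, v in zip(weights, levels))

def select_provider(providers):
    """Pick the provider whose trust score is highest."""
    return max(providers, key=lambda p: trust_score(*providers[p]))

# Hypothetical evidence for two competing clouds.
providers = {
    "cloudA": (0.9, 0.7, 0.8),
    "cloudB": (0.6, 0.9, 0.5),
}
best = select_provider(providers)  # "cloudA" (score 0.81 vs 0.66)
```

In the actual framework the three levels would be refreshed dynamically before each selection decision; here they are static inputs.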

    Leveraging data-driven infrastructure management to facilitate AIOps for big data applications and operations

    As institutions increasingly shift to distributed and containerized application deployments on remote heterogeneous cloud/cluster infrastructures, the cost and difficulty of efficiently managing and maintaining data-intensive applications have risen. A new emerging solution to this issue is Data-Driven Infrastructure Management (DDIM), where decisions regarding the management of resources are taken based on data aspects and operations (at both the infrastructure and application levels). This chapter will introduce readers to the core concepts underpinning DDIM, based on experience gained from development of the Kubernetes-based BigDataStack DDIM platform (https://bigdatastack.eu/). This chapter involves multiple important BDV topics, including development, deployment, and operations for cluster/cloud-based big data applications, as well as data-driven analytics and artificial intelligence for smart automated infrastructure self-management. Readers will gain important insights into how next-generation DDIM platforms function, as well as how they can be used in practical deployments to improve quality of service for Big Data applications. This chapter relates to the technical priority Data Processing Architectures of the European Big Data Value Strategic Research & Innovation Agenda [33], as well as the Data Processing Architectures horizontal and the Engineering and DevOps for building Big Data Value vertical concerns. The chapter also relates to the Reasoning and Decision Making cross-sectorial technology enablers of the AI, Data and Robotics Strategic Research, Innovation & Deployment Agenda [34].

    Advances in distribution system reliability assessment

    Traditionally, reliability of power systems has been an important measure of system performance and a key factor in system planning. Recently, the large-scale changes in the regulations governing the power industry have led to a growing emphasis on distribution system reliability. Further, the shift towards a more technical and computerized society requires that power supply be increasingly reliable. Advanced models and methods are needed to obtain an improved understanding of distribution system reliability. Monte Carlo simulation is one such method that can be used to find the statistical distribution of the reliability indices. This dissertation presents a computationally efficient Monte Carlo simulation algorithm for assessing the distribution reliability indices. Several state regulatory agencies have started to prescribe minimum reliability standards to be maintained by the distribution companies. The effect of these regulations has not been fully explored. In this work, a detailed analysis of the impact of various regulatory standards on a practical distribution system is presented. Storms cause a significant fraction of distribution customer interruptions. While the impact of wind storms on distribution system reliability has been studied earlier, the effect of lightning storms on the reliability indices is not fully understood. Momentary interruptions caused by lightning storms may severely disrupt production at automated manufacturing facilities and other sensitive loads, resulting in losses of millions of dollars per incident. An analysis of lightning storm data is presented in this dissertation, along with a method for calculating the impact of lightning storms on distribution system reliability. Finally, several topics for future research are discussed.
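The Monte Carlo approach mentioned above can be illustrated with a minimal sketch that estimates the distribution of a reliability index such as SAIFI (average interruptions per customer per year). The failure model below (each customer suffers at most one interruption per year, with a fixed probability) and all parameters are simplified assumptions for illustration, not the dissertation's algorithm:

```python
# Hedged sketch: Monte Carlo estimation of the SAIFI distribution
# under a toy Bernoulli failure model (illustrative only).
import random
import statistics

def simulate_saifi(n_customers, failure_prob, years, seed=42):
    """Simulate `years` independent years of operation; each year,
    count interrupted customers and record the yearly SAIFI sample."""
    rng = random.Random(seed)
    samples = []
    for _ in range(years):
        interrupted = sum(
            1 for _ in range(n_customers) if rng.random() < failure_prob
        )
        samples.append(interrupted / n_customers)
    return samples

samples = simulate_saifi(n_customers=1000, failure_prob=0.1, years=500)
mean_saifi = statistics.mean(samples)    # approaches 0.1 as years grow
spread = statistics.pstdev(samples)      # spread of the yearly index
```

The dissertation's contribution is a computationally efficient simulation of real feeder models; this sketch only shows how repeated sampling yields a distribution for an index rather than a single expected value.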

    Cloud outsourcing: Theoretical & practical evidence of cloud governance strategies by financial institutions in Europe, the United States and Canada

    This study examined the risk and governance challenges experienced by financial institutions that outsource cloud technologies. Cloud outsourcing prompts a new way of working and fosters an environment in which technology and data are shared across groups and are housed in regional hubs, according to global standards that are influenced by various countries’ policies. Therefore, to effectively manage the cloud, institutions need a thorough understanding of the applicable laws governing the cloud relationship and those that influence the internal control environment. The study explains that, conceptually, the framework nature of cloud contracts and the flexibility of the regulation make it especially difficult for institutions to manage risks efficiently. A real case study on a cloud outsourcing transaction and survey data from financial institution experts were used to study expert perceptions on the severity of various types of cloud risks and the effectiveness of institutional risk management approaches. These findings were also confirmed in a comparative institutional study, where similarities were found in the risk and governance concerns of experts working at 13 different institutions in the United States, Europe, and Canada. Through this investigation, it was found that efficient governance can be more difficult for institutions that comply with US regulations owing to considerable differences in state policies on data privacy. Finally, this study examined how uncertainties in the evaluation of data breaches and network failures become visible in other internal practices, such as cloud risk assessments. A series of cloud risk experiments was created and distributed to 131 cloud risk experts working at financial institutions in the EU and the US to compare whether their risk assessments would differ significantly.
The results show that the lack of specification in the regulations and the experience of cloud experts can contribute to considerable differences in their risk and disclosure choices. In practice, most experts face significant challenges in assessing the severity of cloud risk events, which have broader implications for enterprise risk management. The results suggest that internal governance continues to be a challenge for firms as they outsource cloud technologies. The knowledge derived from this Ph.D. is useful, as it shows that institutions can benefit if they prioritize the evaluation of liability provisions in their cloud contracts, especially in cases where cloud risk events are a consequence of third-party risks. The findings also establish that internal governance is necessary to reduce the spillover effects of cloud contracts and that institutions can devise sufficient governance structures by implementing data policies and mechanisms that promote cooperation and coordination to oversee data management responsibilities.

    Cloud Computing Adoption in Afghanistan: A Quantitative Study Based on the Technology Acceptance Model

    Cloud computing emerged as an alternative to traditional in-house data centers that businesses can leverage to increase operational agility and employees' productivity. IT solution architects are tasked with presenting to IT managers analyses reflecting the critical barriers and challenges of cloud computing adoption. This quantitative correlational study established an enhanced technology acceptance model (TAM) with four external variables: perceived security (PeS), perceived privacy (PeP), perceived connectedness (PeN), and perceived complexity (PeC) as antecedents of perceived usefulness (PU) and perceived ease of use (PEoU) in a cloud computing context. Data were collected from 125 participants who responded to the invitation through an online survey focusing on Afghanistan's main cities: Kabul, Mazar, and Herat. The analysis showed that PEoU was a predictor of the behavioral intention to adopt cloud computing, which is consistent with the TAM; PEoU (R2 = .15) had a stronger influence than PU (R2 = .023) on the behavioral intention to adopt and use cloud computing. PeN, PeS, and PeP significantly influenced the behavioral intentions of IT architects to adopt and use the technology. This study showed that PeC was not a significant barrier to cloud computing adoption in Afghanistan. By adopting cloud services, employees can access various tools that help increase business productivity and contribute to improving the work environment. Cloud services, as an alternative to in-house data centers, can also help businesses reduce power consumption and consequently decrease carbon dioxide emissions due to lower power demand.
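The reported R2 values quantify how much variance in behavioral intention each TAM construct explains. A minimal sketch of computing R2 for a simple least-squares fit follows; the Likert-style responses are made up for illustration and are not the study's data:

```python
# Hedged sketch: coefficient of determination (R^2) for a simple
# one-predictor least-squares regression, as used to compare the
# explanatory power of TAM constructs. Toy data, not study data.

def r_squared(x, y):
    """R^2 of the least-squares line predicting y from x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical 5-point Likert responses for PEoU and intention.
peou = [3, 4, 2, 5, 4, 3, 5, 2]
intention = [3, 4, 3, 5, 4, 3, 4, 2]
fit = r_squared(peou, intention)
```

An R2 of .15 for PEoU versus .023 for PU means PEoU's fitted line leaves substantially less unexplained variance, which is how the study ranks the constructs' influence.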

    Mapping Species Composition of Forests and Tree Plantations in Northeastern Costa Rica with an Integration of Hyperspectral and Multitemporal Landsat Imagery

    An efficient means to map tree plantations is needed to detect tropical land use change and evaluate reforestation projects. To analyze recent tree plantation expansion in northeastern Costa Rica, we examined the potential of combining moderate-resolution hyperspectral imagery (2005 HyMap mosaic) with multitemporal, multispectral data (Landsat) to accurately classify (1) general forest types and (2) tree plantations by species composition. Following a linear discriminant analysis to reduce data dimensionality, we compared four Random Forest classification models: hyperspectral data (HD) alone; HD plus interannual spectral metrics; HD plus a multitemporal forest regrowth classification; and all three models combined. The fourth, combined model achieved overall accuracy of 88.5%. Adding multitemporal data significantly improved classification accuracy (p < 0.0001) of all forest types, although the effect on tree plantation accuracy was modest. The hyperspectral data alone classified six species of tree plantations with 75% to 93% producer's accuracy; adding multitemporal spectral data increased accuracy only for two species with dense canopies. Non-native tree species had higher classification accuracy overall and made up the majority of tree plantations in this landscape. Our results indicate that combining occasionally acquired hyperspectral data with widely available multitemporal satellite imagery enhances mapping and monitoring of reforestation in tropical landscapes.
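The overall and producer's accuracies reported above are standard confusion-matrix metrics from remote-sensing accuracy assessment. A minimal sketch, using a made-up two-class matrix with rows as reference classes and columns as predicted classes:

```python
# Hedged sketch: accuracy metrics from a classification confusion
# matrix. The matrix values are illustrative, not the study's results.

def overall_accuracy(confusion):
    """Share of all reference samples that were classified correctly
    (the diagonal divided by the grand total)."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

def producers_accuracy(confusion, cls):
    """For class `cls`: correctly classified reference samples divided
    by all reference samples of that class (rows = reference), i.e.
    1 minus the omission error."""
    return confusion[cls][cls] / sum(confusion[cls])

# Toy 2-class matrix: rows = reference, columns = predicted.
conf_matrix = [[45, 5],
               [10, 40]]
oa = overall_accuracy(conf_matrix)        # 0.85
pa0 = producers_accuracy(conf_matrix, 0)  # 0.90
```

In the study these metrics were computed per species over six plantation classes; the same formulas apply with a 6x6 matrix.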

    Development and Validation of a Proof-of-Concept Prototype for Analytics-based Malicious Cybersecurity Insider Threat in a Real-Time Identification System

    Insider threat has continued to be one of the most difficult cybersecurity threat vectors detectable by contemporary technologies. Most organizations apply standard technology-based practices to detect unusual network activity. While there have been significant advances in intrusion detection systems (IDS) as well as security incident and event management solutions (SIEM), these technologies fail to take into consideration the human aspects of personality and emotion in computer use and network activity, since insider threats are human-initiated. External influencers impact how an end-user interacts with both colleagues and organizational resources. Taking into consideration external influencers, such as personality and changes in organizational policies and structure, along with unusual technical activity analysis, would be an improvement over contemporary detection tools used for identifying at-risk employees. This would allow upper management or other organizational units to intervene before a malicious cybersecurity insider threat event occurs, or to mitigate it quickly once initiated. The main goal of this research study was to design, develop, and validate a proof-of-concept prototype for a malicious cybersecurity insider threat alerting system that will assist in the rapid detection and prediction of human-centric precursors to malicious cybersecurity insider threat activity. Disgruntled employees or end-users wishing to cause harm to the organization may do so by abusing the trust given to them in their access to available network and organizational resources. Reports on malicious insider threat actions indicated that insider threat attacks make up roughly 23% of all cybercrime incidents, resulting in $2.9 trillion in employee fraud losses globally. The damage and negative impact that insider threats cause was reported to be higher than that of outsider or other types of cybercrime incidents.
Consequently, this study utilized weighted indicators to measure and correlate simulated user activity with possible precursors to malicious cybersecurity insider threat attacks. This study consisted of a mixed-method approach utilizing an expert panel, developmental research, and quantitative data analysis using the developed tool on a simulated data set. To assure validity and reliability of the indicators, a panel of subject matter experts (SMEs) reviewed the indicators and indicator categorizations that were collected from prior literature, following the Delphi technique. The SMEs’ responses were incorporated into the development of a proof-of-concept prototype. Once the proof-of-concept prototype was completed and fully tested, an empirical simulation research study was conducted utilizing simulated user activity within a 16-month time frame. The results of the empirical simulation study were analyzed and presented. Recommendations resulting from the study are also provided.
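The weighted-indicator scoring could be sketched as a normalized weighted sum with an alerting threshold. The indicator names, weights, and threshold below are hypothetical stand-ins, not the indicators validated by the study's expert panel:

```python
# Hedged sketch: weighted-indicator scoring for insider threat
# alerting. Indicator names, weights, and the alert threshold are
# hypothetical, chosen only to illustrate the mechanism.

INDICATOR_WEIGHTS = {
    "after_hours_logins": 0.30,
    "bulk_file_downloads": 0.40,
    "policy_violations": 0.20,
    "negative_sentiment": 0.10,
}

def threat_score(observations, weights=INDICATOR_WEIGHTS):
    """Weighted sum over indicator values normalized to [0, 1];
    missing indicators contribute zero."""
    return sum(w * observations.get(name, 0.0)
               for name, w in weights.items())

def alert(observations, threshold=0.6):
    """Raise an alert when the combined score crosses the threshold."""
    return threat_score(observations) >= threshold
```

In the prototype, technical activity and human-centric indicators would feed such a score continuously, so intervention can happen before an attack completes rather than after.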

    On the security of NoSQL cloud database services

    Processing a vast volume of data generated by web, mobile, and Internet-enabled devices necessitates a scalable and flexible data management system. Database-as-a-Service (DBaaS) is a new cloud computing paradigm, promising cost-effective, scalable, fully-managed database functionality meeting the requirements of online data processing. Although DBaaS offers many benefits, it also introduces new threats and vulnerabilities. While many traditional data processing threats remain, DBaaS introduces new challenges, such as confidentiality violation and information leakage in the presence of privileged malicious insiders, and adds a new dimension to data security. We address the problem of building a secure DBaaS for a public cloud infrastructure where the Cloud Service Provider (CSP) is not completely trusted by the data owner. We present a high-level description of several architectures combining modern cryptographic primitives for achieving this goal. A novel searchable security scheme is proposed to leverage secure query processing in the presence of a malicious cloud insider without disclosing sensitive information. A holistic database security scheme comprising data confidentiality and information leakage prevention is proposed in this dissertation. The main contributions of our work are: (i) a searchable security scheme for non-relational databases of the cloud DBaaS; (ii) leakage minimization in the untrusted cloud. The analysis of experiments that employ a set of established cryptographic techniques to protect databases and minimize information leakage shows that the performance of the proposed solution is bounded by communication cost rather than by the cryptographic computational effort.
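A common building block for searchable security schemes of this kind is a deterministic keyword token: the client derives a keyed token for each keyword, so the server can match encrypted queries against an index without ever seeing plaintext keywords. The sketch below illustrates that idea with HMAC-based tokens; it is a simplified assumption-laden illustration, not the dissertation's actual scheme (and, like any deterministic scheme, it leaks which documents share a keyword):

```python
# Hedged sketch: keyword-searchable index over an untrusted server
# using HMAC-derived tokens. Illustrative only; real searchable
# encryption schemes add protections this sketch omits.
import hashlib
import hmac

def keyword_token(key, keyword):
    """Deterministic token for a keyword under a secret key; the
    server stores and matches tokens, never plaintext keywords."""
    return hmac.new(key, keyword.encode("utf-8"), hashlib.sha256).hexdigest()

class EncryptedIndex:
    """Server-side index mapping opaque tokens to document ids."""

    def __init__(self, key):
        self._key = key           # held by the client in practice
        self._index = {}          # token -> list of document ids

    def add(self, doc_id, keywords):
        for kw in keywords:
            tok = keyword_token(self._key, kw)
            self._index.setdefault(tok, []).append(doc_id)

    def search(self, keyword):
        """Return ids of documents indexed under the keyword."""
        return self._index.get(keyword_token(self._key, keyword), [])
```

Because lookups touch only the token table, query cost is dominated by transferring the matching ids and documents, which is consistent with the dissertation's finding that performance is bounded by communication rather than cryptographic computation.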