
    Holistic security 4.0

    The future computing climate will represent an increasingly interconnected world of integrated technologies, affecting the consumer, business and industry sectors. This vision was first outlined in the Industry 4.0 concept. The elements which comprise smart systems or embedded devices have been investigated to determine the technological climate. The emerging technologies revolve around core concepts, specifically in this project the uses of the Internet of Things (IoT), the Industrial Internet of Things (IIoT) and the Internet of Everything (IoE). The bare-metal and logical qualities of these technologies are put under the microscope to provide an effective blueprint of the technological field. The systems and governance surrounding smart systems are also examined. Such an approach helps to explain the beneficial and harmful elements of smart devices. Consequently, this ensures a comprehensive review of the standards, laws, policy and guidance needed to enable the security and cybersecurity of 4.0 systems.

    Y-Means Clustering Vs N-CP Clustering with Canopies for Intrusion Detection

    Intrusions present a very serious security threat in a network environment. It is therefore essential to detect intrusions to prevent compromising the stability of the system or the security of information stored on the network. The most difficult problem is detecting new intrusion types, of which intrusion detection systems may not be aware. Many signature-based methods and learning algorithms generally cannot detect these new intrusions. We propose an optimized algorithm, called the n-CP clustering algorithm, that is capable of detecting intrusions whether new or otherwise. The algorithm also overcomes two significant shortcomings of K-Means clustering, namely dependency on the number of clusters and degeneracy. The proposed clustering method utilizes the concept of canopies to optimize the search by eliminating the pair-wise distance computation over all data points. The system also maintains a low false positive rate and a high detection rate. The efficiency and speed of the algorithm are analyzed by comparing it with another clustering algorithm used for intrusion detection, called Y-Means clustering. Both algorithms are tested against the KDD-99 data set to compute the detection rate and false positive rate. The algorithms are also tested for efficiency with a varying number of data fields from the dataset. This thesis outlines the technical difficulties of K-Means clustering, an algorithm to eliminate those shortcomings, and the canopy technique to speed up the intrusion detection process. The results show that our clustering algorithm using the canopy concept is approximately 40% faster than Y-Means clustering and overcomes the two main limitations of K-Means clustering. Finally, a comparative analysis of Y-Means clustering and our proposed n-CP clustering with canopies was carried out with the help of ROC curves showing the respective hit rates against false alarm rates.
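
    The thesis defines n-CP itself; purely as a hypothetical illustration of the canopy idea it builds on (a cheap distance with a loose threshold T1 and a tight threshold T2, so the expensive clustering step never computes all pairwise distances), here is a minimal Python sketch. The function name, thresholds and toy data are invented for illustration, not taken from the thesis.

```python
# Minimal canopy construction: a cheap distance plus two thresholds
# (T1 > T2) groups points into overlapping canopies, so the expensive
# clustering step only compares points that share a canopy.
import numpy as np

def build_canopies(points, t1, t2):
    assert t1 > t2, "T1 is the loose threshold, T2 the tight one"
    remaining = set(range(len(points)))
    canopies = []
    while remaining:
        center = remaining.pop()                  # arbitrary canopy center
        dists = np.linalg.norm(points - points[center], axis=1)
        members = {i for i in remaining if dists[i] < t1} | {center}
        canopies.append(members)
        # points tightly bound to this center cannot seed another canopy
        remaining -= {i for i in members if dists[i] < t2}
    return canopies

# e.g. 1000 connection records over 4 numeric fields (toy stand-in for KDD-99)
X = np.random.rand(1000, 4)
print(len(build_canopies(X, t1=0.5, t2=0.2)), "canopies")
```

    The clustering pass then only measures distances between points and centroids that share a canopy, which is where the reported speed-up over all-pairs computation comes from.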

    Discovering New Vulnerabilities in Computer Systems

    Vulnerability research plays a key role in preventing and defending against malicious computer system exploitations. Driven by a multi-billion dollar underground economy, cyber criminals today tirelessly launch malicious exploitations, threatening every aspect of daily computing. To effectively protect computer systems from devastation, it is imperative to discover and mitigate vulnerabilities before they fall into the offensive parties' hands. This dissertation is dedicated to the research and discovery of new design and deployment vulnerabilities in three very different types of computer systems.

    The first vulnerability is found in automatic malicious binary (malware) detection systems. Binary analysis, a central piece of technology for malware detection, is divided into two classes: static analysis and dynamic analysis. State-of-the-art detection systems employ both classes of analysis to complement each other's strengths and weaknesses for improved detection results. However, we found that the commonly seen design patterns may suffer from evasion attacks. We demonstrate attacks on the vulnerabilities by designing and implementing a novel binary obfuscation technique.

    The second vulnerability is located in the design of server system power management. Technological advancements have improved server system power efficiency and facilitated energy-proportional computing. However, the change of power profile makes power consumption subject to the unaudited influence of remote parties, leaving server systems vulnerable to energy-targeted malicious exploits. We demonstrate an energy-abusing attack on a standalone open Web server, measure the extent of the damage, and present a preliminary defense strategy.

    The third vulnerability is discovered in the application of server virtualization technologies. Server virtualization greatly benefits today's data centers and brings pervasive cloud computing a step closer to the general public. However, the practice of physically co-hosting virtual machines with different security privileges risks introducing covert channels that seriously threaten information security in the cloud. We study the construction of high-bandwidth covert channels via the memory sub-system, and show a practical exploit of cross-virtual-machine covert channels on virtualized x86 platforms.
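
    As a toy illustration of the contention-and-timing principle behind such covert channels (not the dissertation's cross-VM exploit), the sketch below uses two threads on one host in place of two co-hosted virtual machines: the sender modulates load on the shared memory sub-system, and the receiver decodes bits by timing a fixed probe. All names and parameters are invented.

```python
# Toy contention-based covert channel: two threads sharing one machine
# stand in for two co-hosted VMs sharing a memory sub-system.
import threading
import time

import numpy as np

SLOT = 0.05                      # seconds per transmitted bit
buf = np.zeros(1 << 23)          # large buffer; sweeping it creates contention

def sender(bits, start):
    """Signal a 1 by sweeping the buffer for the whole slot; idle for a 0."""
    for i, b in enumerate(bits):
        deadline = start + (i + 1) * SLOT
        while time.perf_counter() < deadline:
            if b:
                buf[::512] += 1          # generate memory traffic
            else:
                time.sleep(0.001)

def receiver(n, start):
    """Time one fixed probe sweep per slot; slow probes decode as 1s."""
    times = []
    for i in range(n):
        time.sleep(max(0.0, start + i * SLOT - time.perf_counter()))
        t0 = time.perf_counter()
        buf[::512].sum()                 # the timing probe
        times.append(time.perf_counter() - t0)
    cut = (min(times) + max(times)) / 2  # crude threshold calibration
    return [1 if t > cut else 0 for t in times]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
start = time.perf_counter() + 0.1
threading.Thread(target=sender, args=(bits, start)).start()
print(bits, receiver(len(bits), start))
```

    In CPython the measured slowdown may come from the GIL as much as from the memory hierarchy, but the decoding principle is the same; a real cross-VM channel of the kind the dissertation studies needs native code and far finer-grained timing.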

    Computational analytics for venture finance

    This thesis investigates the application of computational analytics to the domain of venture finance: the deployment of capital to high-risk ventures in pursuit of maximising financial return. Traditional venture finance is laborious and highly inefficient. Whilst high street banks approve (or reject) personal loans in a matter of minutes, it takes an early-stage venture capital (VC) firm months to put a term sheet in front of a fledgling new venture. Whilst these are fundamentally different forms of finance (longer return period, larger investments, different risk profiles), a more data-informed and analytical approach to venture finance is foreseeable. We have surveyed existing software tools in relation to the venture capital investment process and stage of investment. We find that analytical tools are nascent and the use of analytics in industry is limited. To date, only a small handful of venture capital firms have publicly declared their use of computational analytical methods in their decision-making and investment selection process. This research has been undertaken with several industry partners including venture capital firms, seed accelerators, universities and other related organisations. Within our research we have developed a prototype software tool, NVANA (New Venture Analytics), for assessing new ventures and screening prospective deal flow. Over £20,000 in early-stage funding was distributed, with hundreds of new ventures assessed using the system. Both the limitations of our prototype and possible extensions are discussed. We have focused on computational analytics in the context of three sub-components of the NVANA system: firstly, improving the classification of private companies using supervised and multi-label classification techniques to develop a novel form of industry classification; secondly, investigating the potential to benchmark private company performance based upon a company's "digital footprint"; and finally, the novel application of collaborative filtering and content-based recommendation techniques to the domain of venture finance. We conclude by discussing the future potential for computational analytics to increase efficiency and performance within the venture finance domain. We believe there is clear scope for assisting the venture capital investment process. However, we have identified limitations and challenges in terms of access to data, stage of investment and adoption by industry.
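
    NVANA's actual features, data and models are not public; as a hypothetical sketch of the multi-label industry-classification idea, the following uses scikit-learn's one-vs-rest logistic regression over TF-IDF vectors of company descriptions. The toy descriptions and tag set are invented, and a company may carry several tags at once, which is what distinguishes this from ordinary single-label classification.

```python
# Multi-label industry classification sketch: descriptions in,
# overlapping industry tags out.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

descriptions = [
    "peer-to-peer payments app for small retailers",
    "machine learning platform for clinical trial recruitment",
    "subscription meal-kit delivery service",
    "fraud detection API for online payments",
]
tags = [
    {"fintech", "retail"},
    {"healthtech", "ai"},
    {"food", "logistics"},
    {"fintech", "ai"},
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)          # one indicator column per industry tag

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),  # one model per tag
)
clf.fit(descriptions, Y)

probs = clf.predict_proba(["credit scoring models for consumer lending"])
for tag, p in sorted(zip(mlb.classes_, probs[0]), key=lambda x: -x[1]):
    print(f"{tag}: {p:.2f}")
```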

    Demystifying Internet of Things Security

    Break down the misconceptions of the Internet of Things by examining the different security building blocks available in Intel Architecture (IA) based IoT platforms. This open access book reviews the threat pyramid, secure boot, chain of trust, and the software stack leading up to defense-in-depth. The IoT presents unique challenges in implementing security, and Intel has both CPU and Isolated Security Engine capabilities to simplify it. This book explores the challenges of securing these devices to make them immune to different threats originating from within and outside the network. The requirements and robustness rules to protect the assets vary greatly, and there is no single blanket solution to implementing security. Demystifying Internet of Things Security provides clarity to industry professionals and an overview of different security solutions.

    What You'll Learn:
    • Secure devices, immunizing them against different threats originating from inside and outside the network
    • Gather an overview of the different security building blocks available in Intel Architecture (IA) based IoT platforms
    • Understand the threat pyramid, secure boot, chain of trust, and the software stack leading up to defense-in-depth

    Who This Book Is For: Strategists, developers, architects, and managers in the embedded and Internet of Things (IoT) space trying to understand and implement security in IoT devices and platforms.

    Process Mining Handbook

    This open access book comprises all the courses given as part of the First Summer School on Process Mining, PMSS 2022, which was held in Aachen, Germany, during July 4-8, 2022. This volume contains 17 chapters organized into the following topical sections: introduction; process discovery; conformance checking; data preprocessing; process enhancement and monitoring; assorted process mining topics; industrial perspective and applications; and closing.
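
    As a taste of the first of those topics, process discovery, here is a minimal sketch (not taken from the book) that derives a directly-follows graph, i.e. counts of which activity immediately follows which within each case, from a toy event log. Standard discovery algorithms such as the Alpha miner and the inductive miner start from exactly this structure; the log below is invented and assumed to be pre-sorted by timestamp.

```python
# Directly-follows graph from an event log of (case_id, activity) rows.
from collections import Counter, defaultdict

log = [
    ("c1", "register"), ("c1", "check"), ("c1", "approve"),
    ("c2", "register"), ("c2", "check"), ("c2", "reject"),
    ("c3", "register"), ("c3", "approve"),
]

traces = defaultdict(list)
for case, activity in log:          # group events into per-case traces
    traces[case].append(activity)

dfg = Counter()
for trace in traces.values():       # count each directly-follows pair
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), n in dfg.most_common():
    print(f"{a} -> {b}: {n}")
```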

    ENHANCING CLOUD SYSTEM RUNTIME TO ADDRESS COMPLEX FAILURES

    As the reliance on cloud systems intensifies in our progressively digital world, understanding and reinforcing their reliability becomes more crucial than ever. Despite impressive advancements in augmenting the resilience of cloud systems, the growing incidence of complex failures now poses a substantial challenge to the availability of these systems. With cloud systems continuing to scale and increase in complexity, failures not only become more elusive to detect but can also lead to more catastrophic consequences. Such failures question the foundational premises of conventional fault-tolerance designs, necessitating the creation of novel system designs to counteract them. This dissertation aims to enhance distributed systems' capabilities to detect, localize, and react to complex failures at runtime. To this end, this dissertation makes contributions to address three emerging categories of failures in cloud systems. The first part delves into the investigation of partial failures, introducing OmegaGen, a tool adept at generating tailored checkers for detecting and localizing such failures. The second part grapples with the silent semantic failures prevalent in cloud systems, presenting our study findings and introducing Oathkeeper, a tool that leverages past failures to infer rules and expose these silent issues. The third part explores solutions to slow failures via RESIN, a framework specifically designed to detect, diagnose, and mitigate memory leaks in cloud-scale infrastructures, developed in collaboration with Microsoft Azure. The dissertation concludes by offering insights into future directions for the construction of reliable cloud systems.
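
    OmegaGen's checkers are generated by mimicking the target program's own code; purely as a hand-written illustration of the underlying watchdog idea (not OmegaGen's output), the sketch below periodically exercises a critical operation with a timeout and flags a partial failure when that operation hangs or errors while the process as a whole still appears alive. All names are invented.

```python
# A minimal runtime watchdog for partial failures: the process is "up",
# but one of its critical operations may silently hang or fail.
import concurrent.futures
import time

def watchdog(op, interval=5.0, timeout=2.0, on_failure=print):
    """Run `op` in a side thread every `interval` seconds and report
    when it hangs past `timeout` or raises."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    while True:
        future = pool.submit(op)
        try:
            future.result(timeout=timeout)   # finished in time: healthy
        except concurrent.futures.TimeoutError:
            on_failure("partial failure: operation hung")
        except Exception as e:
            on_failure(f"partial failure: operation errored: {e}")
        time.sleep(interval)

def checked_write():
    """Hypothetical stand-in for a mimicked module operation, e.g. a log append."""
    with open("/tmp/wd_probe", "w") as f:
        f.write("ping")

# watchdog(checked_write)   # runs forever; start it from a daemon thread
```

    The key design point OmegaGen automates is choosing `op`: a checker derived from the program's own logic catches failures that a generic liveness probe (process alive, port open) would miss.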

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, although their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting the reusability of recurrent generalized models in the very early stages of development. As a statement of research-in-progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.

    Preface


    Understanding the rhythms of email processing strategies in a network of knowledge workers

    Scope and Method of Study: While emails have improved the communication effectiveness of knowledge workers, they have also started to negatively impact their productivity. Emails have long been known to provide value to the organization, but the influence of the overwhelming amount of information shared through emails, and the inefficiencies surrounding their everyday use at work, has remained almost completely unanalyzed so far. Because frequent announcements of new emails, followed by the user checking email, escalate interruption issues, the overall effectiveness derived from email communication needs to be re-explored. This study uses a computational modeling approach to understand how various combinations of timing-based and frequency-based email processing strategies, adopted within different types of knowledge networks, can influence average email response time, average primary task completion time, and overall effectiveness, comprising value-effectiveness and time-effectiveness, in the presence of interruptions. Earlier research on the topic has focused on individual knowledge workers; this study performs a network-level analysis to compare different sender-receiver relationships and assess the impact of different overall email policies on the entire network. Computational models of three different email exchange networks were developed: homogeneous networks with heavy users of email, homogeneous networks with light users of email, and heterogeneous networks utilizing various combinations of email strategies. A new method, referred to as the forward and reverse method, to evaluate and validate model parameters is also developed.

    Findings and Conclusions: Findings suggest the choice of email checking policy can impact time- and value-effectiveness. For example, rhythmic email processing strategies lead to lower value-effectiveness but higher time-effectiveness for all types of networks. Email response times are generally higher with rhythmic policies than with arrhythmic policies. On the other hand, primary task completion times are usually lower with rhythmic policies. On average, organizations could potentially save 3 to 6 percent of the overall time spent per day by using email strategies that are more time-effective, but could lose 2.5 to 3.5 percent in communication value. These values accumulate into significant time savings or value losses for large organizations.
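
    The dissertation's computational models are far richer; as an invented toy version of the rhythmic-versus-arrhythmic comparison, the sketch below has emails arrive at random over a workday, charges a fixed task-switching cost per interruption, and contrasts a fixed-interval (rhythmic) checker with an on-arrival (arrhythmic) one. All parameters are assumptions for illustration only.

```python
# Toy rhythmic-vs-arrhythmic email model: rhythmic checking batches
# interruptions (good for the primary task, bad for response time);
# arrhythmic checking inverts the trade-off.
import random

random.seed(1)
DAY = 480.0                      # minutes in a workday
SWITCH_COST = 2.0                # minutes lost per interruption (assumed)
arrivals = sorted(random.uniform(0, DAY) for _ in range(40))

def rhythmic(period):
    """Check email every `period` minutes; emails wait for the next check."""
    waits, lost, nxt = [], 0.0, period
    pending = list(arrivals)
    while nxt <= DAY:
        batch = [t for t in pending if t <= nxt]
        if batch:
            lost += SWITCH_COST          # one interruption per non-empty check
            waits += [nxt - t for t in batch]
            pending = [t for t in pending if t > nxt]
        nxt += period
    return sum(waits) / len(waits), lost

def arrhythmic():
    """React to every notification immediately: zero wait, max interruptions."""
    return 0.0, len(arrivals) * SWITCH_COST

for label, (wait, lost) in [("rhythmic/60min", rhythmic(60)),
                            ("arrhythmic    ", arrhythmic())]:
    print(f"{label}: avg response {wait:5.1f} min, {lost:5.1f} min of task time lost")
```

    With these toy numbers the rhythmic policy trades roughly half a period of average response delay for an 80% cut in interruption cost, which is the qualitative shape of the time-effectiveness versus value-effectiveness trade-off the study quantifies at network scale.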