
    GRASE: Granulometry Analysis with Semi Eager Classifier to Detect Malware

    Technological advances in communication, culminating in 5G, encourage everything to be connected to the internet, including 'devices', a development known as the Web of Things (WoT). The community benefits from this large-scale network, which allows physical devices to be monitored and controlled. However, this often comes at the cost of security, as MALicious softWARE (malware) developers try to invade the network; for them, these devices act as a 'backdoor' offering easy entry. To keep invaders out of the network, identifying malware and its variants is of great significance for cyberspace. Traditional static and dynamic malware detection methods can detect malware but struggle against newer techniques used by malware developers, such as obfuscation, polymorphism and encryption. A machine learning approach in which the classifier is trained on handcrafted features is also not robust against these techniques and demands substantial feature-engineering effort. This paper proposes malware classification using a visualization methodology in which the disassembled malware code is transformed into grey-scale images. It demonstrates the efficacy of the granulometry texture analysis technique for improving malware classification. Furthermore, a Semi Eager (SemiE) classifier, a combination of eager and lazy learning, is used to obtain robust classification of malware families. The experimental outcome is promising: the proposed technique requires less training time to learn the semantics of higher-level malicious behaviours, and identifying malware (the testing phase) is also faster. The benchmark Malimg and Microsoft Malware Classification Challenge (BIG-2015) datasets were used to analyse the performance of the system, achieving overall average classification accuracies of 99.03% and 99.11%, respectively.
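    As an illustration of the visualization-plus-granulometry pipeline described above, the sketch below converts a byte stream into a grey-scale image and computes a granulometric (pattern-spectrum) feature vector with morphological openings. It is a minimal sketch, not the authors' code: the image width, structuring-element radii, the scikit-image usage, and the sample file name are assumptions.

```python
# Hedged sketch: bytes -> grey-scale image -> granulometry feature vector.
import numpy as np
from skimage.morphology import opening, disk

def bytes_to_grey_image(raw: bytes, width: int = 256) -> np.ndarray:
    """Reshape a byte stream into a 2-D grey-scale image (values 0-255)."""
    arr = np.frombuffer(raw, dtype=np.uint8)
    height = len(arr) // width
    return arr[: height * width].reshape(height, width)

def granulometry(img: np.ndarray, radii=range(1, 8)) -> np.ndarray:
    """Pattern spectrum: image 'mass' removed by openings of increasing size."""
    total = img.sum() + 1e-9
    remaining = [opening(img, disk(r)).sum() / total for r in radii]
    # Differences between successive openings form the granulometric curve.
    return -np.diff([1.0] + remaining)

if __name__ == "__main__":
    with open("sample.exe", "rb") as f:          # hypothetical malware sample
        features = granulometry(bytes_to_grey_image(f.read()))
    print(features)   # feature vector for an eager/lazy (semi-eager) classifier
```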

    Mobile Device Background Sensors: Authentication vs Privacy

    The increasing number of mobile devices in recent years has led to the collection of a large amount of personal information that needs to be protected. To this aim, behavioural biometrics has become very popular. But what is the discriminative power of mobile behavioural biometrics in real scenarios? With the success of Deep Learning (DL), architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM), have shown improvements compared to traditional machine learning methods. However, these DL architectures still have limitations that need to be addressed. In response, new DL architectures like Transformers have emerged. The question is, can these new Transformers outperform previous biometric approaches? To answer these questions, this thesis focuses on behavioural biometric authentication with data acquired from mobile background sensors (i.e., accelerometers and gyroscopes). In addition, to the best of our knowledge, this is the first thesis that explores and proposes novel behavioural biometric systems based on Transformers, achieving state-of-the-art results in gait, swipe, and keystroke biometrics. The adoption of biometrics requires a balance between security and privacy. Biometric modalities provide a unique and inherently personal approach for authentication. Nevertheless, biometrics also give rise to concerns regarding the invasion of personal privacy. According to the General Data Protection Regulation (GDPR) introduced by the European Union, personal data such as biometric data are sensitive and must be used and protected properly. This thesis analyses the impact of sensitive data on the performance of biometric systems and proposes a novel unsupervised privacy-preserving approach. The research conducted in this thesis makes significant contributions, including: i) a comprehensive review of the privacy vulnerabilities of mobile device sensors, covering metrics for quantifying privacy in relation to sensitive data, along with protection methods for safeguarding sensitive information; ii) an analysis of authentication systems for behavioural biometrics on mobile devices (i.e., gait, swipe, and keystroke), being the first thesis that explores the potential of Transformers for behavioural biometrics, introducing novel architectures that outperform the state of the art; and iii) a novel privacy-preserving approach for mobile biometric gait verification using unsupervised learning techniques, ensuring the protection of sensitive data during the verification process.
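    The thesis defines the specific Transformer architectures it evaluates; the snippet below is only a minimal sketch of the general idea of applying a Transformer encoder to background-sensor sequences (accelerometer plus gyroscope, six channels). The hyperparameters, the PyTorch implementation, and the pooling/classification head are illustrative assumptions, not the thesis architecture.

```python
# Hedged sketch: Transformer encoder over IMU time series for authentication.
import torch
import torch.nn as nn

class IMUTransformer(nn.Module):
    def __init__(self, channels=6, d_model=64, n_heads=4, n_layers=2, n_users=100):
        super().__init__()
        self.embed = nn.Linear(channels, d_model)            # per-timestep embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_users)              # identification logits

    def forward(self, x):                                    # x: (batch, time, channels)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))                      # temporal average pooling

model = IMUTransformer()
scores = model(torch.randn(8, 200, 6))   # e.g. 8 gait windows of 200 samples each
```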

    The PREVENT Study: Preventing hospital admissions attributable to gout

    Background: Gout is the most common form of inflammatory arthritis, affecting 1 in 40 people in the UK. Despite highly effective treatments, hospital admissions for gout flares have doubled in England over the last 20 years. Many of these admissions may have been prevented if optimal gout management had been delivered to patients.
    Objectives: 1. Describe the epidemiology of gout management in primary and secondary care in the UK. 2. Develop an intervention package for implementation during hospitalisations for gout flares, with the aim of improving care and reducing hospitalisations. 3. Implement and evaluate this intervention in people hospitalised for gout.
    Methods: I used population-level health datasets (CPRD, OpenSAFELY, NHS Digital Hospital Episode Statistics) to evaluate outcomes for people with incident gout diagnoses over a 20-year period. I used multivariable regression and survival modelling to analyse factors associated with outcomes, including: i) initiation of urate-lowering therapies (ULT); ii) attainment of serum urate targets; and iii) hospitalisations for gout flares. With extensive stakeholder input, I developed an evidence-based intervention package to optimise hospital gout care. This incorporated the findings of a systematic literature review and process mapping of the admitted patient journey in a cohort of hospitalised gout patients. My intervention consisted of a care pathway, based upon British (BSR), European (EULAR) and American (ACR) gout management guidelines, which encouraged ULT initiation prior to discharge, followed by a nurse-led, post-discharge review to facilitate handover to primary care. I implemented this intervention in patients hospitalised for gout flares at King's College Hospital over a 12-month period, and evaluated outcomes including ULT initiation, urate target attainment and re-admission rates.
    Results: In the UK, between 2004 and 2020, I showed that only 29% of patients with gout were initiated on ULT within 12 months of diagnosis, while only 36% attained urate targets. No significant improvements in these outcomes were observed after publication of updated BSR and EULAR gout management guidelines. Comorbidities, including chronic kidney disease, heart failure and obesity, were associated with increased odds of ULT initiation but decreased odds of attaining urate targets. For patients who were diagnosed with gout during the COVID-19 pandemic, I showed that ULT initiation improved modestly relative to before the pandemic, while urate target attainment trends were similar. Underlying these trends was a 31% decrease in incident gout diagnoses in England during the first year of the pandemic. Using linked primary and secondary care data, I showed that the risk of hospitalisation for gout flares is greatest within the first 6 months after diagnosis. ULT initiation is associated with more hospitalisations for flares within the first 6 months of diagnosis, but a reduced risk of hospitalisation beyond 12 months, particularly when urate targets are attained. After process mapping the admitted patient journey and systematically appraising the evidence base, I developed and implemented a multi-faceted intervention at King's College Hospital, with the aim of improving hospital gout care. Following implementation of this intervention, the proportion of hospitalised gout patients who were initiated on ULT increased from 49% to 92%; more patients achieved serum urate targets; and there were 38% fewer repeat hospitalisations for gout flares.
    Conclusions: At a population level, ULT initiation and urate target attainment remain sub-optimal for people with gout in the UK, despite updated management guidelines. Initiation of ULT is associated with long-term reductions in hospitalisations for flares; however, only a minority of patients hospitalised for gout flares are initiated on ULT. After designing and implementing a strategy to optimise hospital gout care, over 90% of patients were initiated on ULT, urate target attainment improved, and repeat hospitalisations decreased. My findings suggest that improved primary-secondary care integration is essential if we are to reverse the epidemic of gout hospitalisations.
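    For readers unfamiliar with the survival modelling mentioned in the Methods, the sketch below shows a generic Cox proportional-hazards analysis of time to hospitalisation for a gout flare using the lifelines library. It is purely illustrative: the file name, column names, and covariates are assumptions and do not reflect the study's actual datasets or code.

```python
# Hedged sketch: Cox proportional-hazards model of time to flare admission.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("gout_cohort.csv")      # hypothetical cohort extract
covariates = ["ult_initiated", "ckd", "heart_failure", "obesity", "age", "sex"]

cph = CoxPHFitter()
cph.fit(df[covariates + ["time_to_admission", "admitted"]],
        duration_col="time_to_admission", event_col="admitted")
cph.print_summary()   # hazard ratios for hospitalisation for a gout flare
```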

    A Trust Management Framework for Vehicular Ad Hoc Networks

    The inception of Vehicular Ad Hoc Networks (VANETs) provides an opportunity for road users and public infrastructure to share information that improves the operation of roads and the driver experience. However, such systems can be vulnerable to malicious external entities and to legitimate users. Trust management is used to address attacks from legitimate users in accordance with a user's trust score. Trust models evaluate messages to assign rewards or punishments, which can be used to influence a driver's future behaviour or, in extremis, to block the driver. With receiver-side schemes, various methods are used to evaluate trust, including reputation computation, neighbour recommendations, and storing historical information. However, they incur overhead and add delay when deciding whether to accept or reject messages. In this thesis, we propose a novel Tamper-Proof Device (TPD) based trust framework that manages the trust of multiple drivers at the sender-side vehicle, updating trust scores and storing and protecting information from malicious tampering. The TPD also regulates, rewards, and punishes each specific driver, as required. Furthermore, the trust score determines the classes of message that a driver can access. Dissemination of feedback is only required when there is an attack (conflicting information). A Road-Side Unit (RSU) rules on a dispute, using either the sum of products of trust and feedback or official vehicle data if available. These "untrue attacks" are resolved by an RSU through collaboration, after which it issues a fixed amount of reward or punishment, as appropriate. Repeated attacks are addressed by incremental punishments and, when conditions are met, potentially by blocking the driver's access. The lack of sophistication in this fixed RSU assessment scheme is then addressed by a novel fuzzy logic-based RSU approach, which determines a fairer level of reward and punishment based on the severity of the incident, the driver's past behaviour, and the RSU's confidence. The fuzzy RSU controller assesses judgements in such a way as to encourage drivers to improve their behaviour. Although any driver can lie in any situation, we believe that trustworthy drivers are more likely to remain so, and vice versa. We capture this behaviour in a Markov chain model of sender and reporter driver behaviours, in which a driver's truthfulness is influenced by their trust score and trust state. For each trust state, the driver's likelihood of lying or honesty is set by a probability distribution that differs per state. The framework is analysed in Veins using various classes of vehicles under different traffic conditions. Results confirm that the framework operates effectively in the presence of untrue and inconsistent attacks. Correct functioning is confirmed by the system appropriately classifying incidents when clarifier vehicles send truthful feedback. The framework is also evaluated against a centralized reputation scheme, and the results demonstrate that it outperforms the reputation approach in terms of reduced communication overhead and shorter response time. Next, we perform a set of experiments to evaluate the performance of the fuzzy assessment in Veins. The fuzzy and fixed RSU assessment schemes are compared, and the results show that the fuzzy scheme produces better overall driver behaviour. The Markov chain driver behaviour model is also examined when changing the initial trust score of all drivers.
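    Two of the mechanisms described above, the fixed RSU reward/punishment assessment and the trust-state-dependent driver honesty model, are sketched below in simplified form. The numeric values, thresholds, and state names are illustrative assumptions, not the thesis parameters.

```python
# Hedged sketch: fixed trust-score update plus a state-dependent honesty model.
import random

REWARD, PUNISHMENT = 0.05, 0.20          # illustrative fixed RSU assessment values

def update_trust(score: float, truthful: bool) -> float:
    """Reward honest reports, punish conflicting ones, clamp to [0, 1]."""
    score += REWARD if truthful else -PUNISHMENT
    return min(max(score, 0.0), 1.0)

# Probability of sending an honest report in each trust state (assumed values).
HONESTY = {"low": 0.4, "medium": 0.7, "high": 0.95}

def trust_state(score: float) -> str:
    return "low" if score < 0.3 else "medium" if score < 0.7 else "high"

score = 0.5
for _ in range(20):                       # simulate a sequence of driver reports
    truthful = random.random() < HONESTY[trust_state(score)]
    score = update_trust(score, truthful)
print(f"final trust score: {score:.2f}")
```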

    An In-Depth Analysis on Efficiency and Vulnerabilities on a Cloud-Based Searchable Symmetric Encryption Solution

    Searchable Symmetric Encryption (SSE) has become an integral cryptographic approach in a world where digital privacy is essential. The capacity to search through encrypted data whilst maintaining its integrity meets a crucial demand for security and confidentiality in a society increasingly dependent on cloud-based services and data storage. SSE offers efficient processing of queries over encrypted datasets, allowing entities to comply with data privacy rules while preserving database usability. Our research addresses this need, concentrating on the development and thorough testing of an SSE system based on Curtmola's architecture and employing the Advanced Encryption Standard (AES) in Cipher Block Chaining (CBC) mode. A primary goal of the research is to conduct a thorough evaluation of the security and performance of the system. To assess search performance, a variety of database settings were extensively tested, and the system's security was tested by simulating intricate threat scenarios such as count attacks and leakage abuse. These evaluations critically examine the operational efficiency and cryptographic robustness of the SSE system.
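    The snippet below sketches the general shape of a Curtmola-style SSE index with AES in CBC mode: a keyed token addresses an encrypted posting list of document identifiers. It is a simplified sketch of the concept rather than the evaluated system; the key handling, token construction, and data layout are assumptions.

```python
# Hedged sketch: keyword token -> AES-CBC-encrypted posting list.
import os, hmac, hashlib, json
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

K_TOKEN, K_ENC = os.urandom(32), os.urandom(32)

def token(keyword: str) -> bytes:
    """Deterministic search token derived from the keyword with a secret key."""
    return hmac.new(K_TOKEN, keyword.encode(), hashlib.sha256).digest()

def encrypt(plaintext: bytes) -> bytes:
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(K_ENC), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

def build_index(docs: dict[str, list[str]]) -> dict[bytes, bytes]:
    """docs maps keyword -> document IDs; the server stores only this index."""
    return {token(w): encrypt(json.dumps(ids).encode()) for w, ids in docs.items()}

index = build_index({"invoice": ["doc1", "doc7"], "salary": ["doc3"]})
# To search, the client sends token("invoice"); the server returns the matching
# ciphertext without learning the keyword itself.
```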

    Hybrid Cloud-Based Privacy Preserving Clustering as Service for Enterprise Big Data

    Clustering as a service is offered by many cloud service providers. It helps enterprises discover hidden patterns and extract knowledge from the large volumes of big data they generate. Although it brings a lot of value to enterprises, it also exposes the data to various security and privacy threats. Privacy-preserving clustering has been proposed as a solution to this problem. However, existing privacy-preserving clustering-as-outsourced-service models place too much overhead on the querying user, lack adaptivity to incremental data, and involve frequent interaction between the service provider and the querying user. There is also a lack of personalization of the clustering by the querying user. This work, "Locality Sensitive Hashing for Transformed Dataset (LSHTD)", proposes a hybrid cloud-based clustering-as-a-service model for streaming data that addresses the problems in existing models such as privacy-preserving k-means clustering outsourcing under multiple keys (PPCOM) and secure nearest neighbour clustering (SNNC). The solution combines a hybrid cloud with the LSHTD clustering algorithm as an outsourced service model. Through experiments, the proposed solution is found to reduce the computation cost by 23% and the communication cost by 6%, and to provide better clustering accuracy, with an ARI greater than 4.59% compared to existing works.
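    As background for the locality-sensitive-hashing idea behind LSHTD, the sketch below buckets records by random-hyperplane signatures so that clustering can run per bucket. It is an illustrative sketch only; the dimensionality, number of hyperplanes, and data are assumptions, not the paper's algorithm.

```python
# Hedged sketch: random-hyperplane LSH signatures for bucketing similar records.
import numpy as np

rng = np.random.default_rng(0)

def lsh_signature(x: np.ndarray, planes: np.ndarray) -> str:
    """One bit per hyperplane: which side of the plane the point falls on."""
    return "".join("1" if d > 0 else "0" for d in planes @ x)

dim, n_planes = 16, 8
planes = rng.standard_normal((n_planes, dim))

buckets: dict[str, list[int]] = {}
data = rng.standard_normal((100, dim))   # stand-in for transformed enterprise records
for i, row in enumerate(data):
    buckets.setdefault(lsh_signature(row, planes), []).append(i)

# Records sharing a signature are likely neighbours; clustering (e.g. k-means)
# can then run per bucket, reducing computation and limiting data exposure.
print({sig: len(ids) for sig, ids in list(buckets.items())[:5]})
```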

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Secure storage systems for untrusted cloud environments

    The cloud has become established for applications that need to be scalable and highly available. However, moving data to data centers owned and operated by a third party, i.e., the cloud provider, raises security concerns, because a cloud provider could easily access and manipulate the data or the program flow, preventing the cloud from being used for certain applications, such as medical or financial ones. Hardware vendors are addressing these concerns by developing Trusted Execution Environments (TEEs) that make the CPU state and parts of memory inaccessible from the host software. While TEEs protect the current execution state, they do not provide security guarantees for data that does not fit or reside in the protected memory area, such as network and persistent storage. In this work, we address TEEs' limitations in three ways: first, we extend the trust provided by TEEs to persistent storage; second, we extend this trust to multiple nodes in a network; and third, we propose a compiler-based solution for accessing heterogeneous memory regions. More specifically:
    • SPEICHER extends the trust provided by TEEs to persistent storage. SPEICHER implements a key-value interface. Its design is based on LSM data structures, but extends them to provide confidentiality, integrity, and freshness for the stored data. Thus, SPEICHER can prove to the client that the data has not been tampered with by an attacker.
    • AVOCADO is a distributed in-memory key-value store (KVS) that extends the trust that TEEs provide across the network to multiple nodes, allowing KVSs to scale beyond the boundaries of a single node. On each node, AVOCADO carefully divides data between trusted memory and untrusted host memory to maximize the amount of data that can be stored on each node. AVOCADO leverages the fact that network attacks can be modelled as crash faults to trust other nodes with a hardened ABD replication protocol.
    • TOAST is based on the observation that modern high-performance systems often use several heterogeneous memory regions that are not easily distinguishable by the programmer. The number of regions is increased by the fact that TEEs divide memory into trusted and untrusted regions. TOAST is a compiler-based approach that unifies access to different heterogeneous memory regions and provides programmability and portability. TOAST uses a load/store interface to abstract most library interfaces for different memory regions.
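    To make the confidentiality, integrity, and freshness guarantees concrete, the sketch below shows a toy key-value put/get in which values are AEAD-encrypted and a per-key version counter held in trusted memory rejects rolled-back records. It is a minimal illustration of the idea, not SPEICHER's implementation; the use of AES-GCM and the in-memory stores are assumptions.

```python
# Hedged sketch: encrypted, integrity- and freshness-protected key-value store.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # would be provisioned inside the TEE
aead = AESGCM(key)
trusted_versions: dict[str, int] = {}       # freshness state kept in protected memory
untrusted_store: dict[str, tuple] = {}      # host-controlled (untrusted) storage

def put(k: str, value: bytes) -> None:
    version = trusted_versions.get(k, 0) + 1
    trusted_versions[k] = version
    nonce = os.urandom(12)
    # Bind key and version as associated data so swapped or replayed records fail.
    ct = aead.encrypt(nonce, value, f"{k}:{version}".encode())
    untrusted_store[k] = (nonce, version, ct)

def get(k: str) -> bytes:
    nonce, version, ct = untrusted_store[k]
    if version != trusted_versions.get(k):          # rollback to a stale record
        raise ValueError("freshness violation detected")
    return aead.decrypt(nonce, ct, f"{k}:{version}".encode())   # raises on tampering

put("balance", b"42")
assert get("balance") == b"42"
```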

    Efficient Deep Learning for Real-time Classification of Astronomical Transients

    A new golden age in astronomy is upon us, dominated by data. Large astronomical surveys are broadcasting unprecedented rates of information, demanding machine learning as a critical component in modern scientific pipelines to handle the deluge of data. The upcoming Legacy Survey of Space and Time (LSST) of the Vera C. Rubin Observatory will raise the big-data bar for time-domain astronomy, with an expected 10 million alerts per night, generating many petabytes of data over the lifetime of the survey. Fast and efficient classification algorithms that can operate in real time, yet robustly and accurately, are needed for time-critical events where additional resources can be sought for follow-up analyses. In order to handle such data, state-of-the-art deep learning architectures coupled with tools that leverage modern hardware accelerators are essential. The work contained in this thesis seeks to address the big-data challenges of LSST by proposing novel, efficient deep learning architectures for multivariate time-series classification that can provide state-of-the-art classification of astronomical transients at a fraction of the computational cost of other deep learning approaches. This thesis introduces the depthwise-separable convolution and the notion of convolutional embeddings to the task of time-series classification, achieving gains in classification performance with far fewer model parameters than similar methods. It also introduces the attention mechanism to time-series classification, improving performance further with a significant improvement in computational efficiency and a further reduction in model size. Finally, this thesis pioneers the use of modern model compression techniques in the field of photometric classification for efficient deep learning deployment. These insights informed the final architecture, which was deployed in a live production machine learning system, demonstrating the capability to operate efficiently and robustly in real time, at LSST scale and beyond, ready for the new era of data-intensive astronomy.
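    A depthwise-separable 1-D convolution block of the kind introduced above can be sketched as follows; the channel counts, kernel size, and PyTorch implementation are illustrative assumptions rather than the thesis architecture.

```python
# Hedged sketch: depthwise-separable 1-D convolution for multivariate time series.
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel: int = 3):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel,
                                   padding=kernel // 2, groups=in_ch)
        # Pointwise: 1x1 convolution mixes channels, giving far fewer parameters
        # than a full Conv1d(in_ch, out_ch, kernel).
        self.pointwise = nn.Conv1d(in_ch, out_ch, 1)
        self.act = nn.ReLU()

    def forward(self, x):                    # x: (batch, channels, time)
        return self.act(self.pointwise(self.depthwise(x)))

block = DepthwiseSeparableConv1d(in_ch=6, out_ch=32)
out = block(torch.randn(4, 6, 100))          # e.g. 6 passbands, 100 observation epochs
```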

    HeteroSketch: coordinating network-wide monitoring in heterogeneous and dynamic networks

    CNS-2107086 - National Science Foundation; CNS-2106946 - National Science Foundation. Published version.