13 research outputs found

    Constructing Ontology-Based Cancer Treatment Decision Support System with Case-Based Reasoning

    Decision support is a probabilistic and quantitative approach to modeling problems under ambiguity. Computer technology can be employed to provide clinical decision support and treatment recommendations. The problem with natural language applications is that they lack formality and are interpreted inconsistently. Ontologies, by contrast, can capture the intended meaning and specify modeling primitives. A Disease Ontology (DO) covering cancer's clinical stages and their corresponding information components is utilized to improve the reasoning ability of a decision support system (DSS). The proposed DSS uses Case-Based Reasoning (CBR) to consider disease manifestations and provides physicians with treatment solutions from similar previous cases for reference. The proposed DSS supports natural language processing (NLP) queries. With the help of the ontology, the DSS achieved 84.63% accuracy in disease classification.
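
    As a rough illustration of the CBR retrieval step this abstract describes, the sketch below ranks stored cases by a weighted similarity over their features and returns the closest matches. The feature names, weights, and similarity functions are hypothetical stand-ins; the system's actual ontology-derived similarity measure is not specified here.

```python
# Minimal sketch of the retrieval step in case-based reasoning (CBR).
# The case features, weights, and similarity functions are illustrative
# assumptions, not the paper's ontology-derived measures.

def local_similarity(a, b):
    """Similarity of two attribute values, mapped into [0, 1]."""
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return 1.0 - abs(a - b) / (abs(a) + abs(b) + 1e-9)
    return 1.0 if a == b else 0.0

def case_similarity(query, case, weights):
    """Weighted global similarity over the shared attributes."""
    total = sum(weights.values())
    return sum(w * local_similarity(query[k], case[k])
               for k, w in weights.items()) / total

def retrieve(query, case_base, weights, k=3):
    """Return the k past cases most similar to the query."""
    return sorted(case_base,
                  key=lambda c: case_similarity(query, c["features"], weights),
                  reverse=True)[:k]

# Hypothetical cancer cases: stage and tumor size drive similarity here.
case_base = [
    {"features": {"stage": "II", "tumor_cm": 2.1}, "treatment": "surgery"},
    {"features": {"stage": "III", "tumor_cm": 4.5}, "treatment": "chemoradiation"},
]
query = {"stage": "II", "tumor_cm": 2.4}
weights = {"stage": 2.0, "tumor_cm": 1.0}
print(retrieve(query, case_base, weights, k=1))
```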

    An Effective Classification Approach for Big Data Security Based on GMPLS/MPLS Networks

    The need for effective approaches to handle big data, which is characterized by its large volume, diverse types, and high velocity, is vital and has therefore recently attracted the attention of several research groups, especially since traditional data processing techniques and capabilities have proved insufficient in that regard. Another aspect that is equally important while processing big data is its security, as emphasized in this paper. Accordingly, we propose to process big data in two different tiers. The first tier classifies the data based on its structure and on whether security is required or not. The second tier then analyzes and processes the data based on volume, variety, and velocity factors. Simulation results demonstrate that using classification feedback from an MPLS/GMPLS core network proved key to reducing data evaluation and processing time.
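
    A minimal sketch of the two-tier idea, under assumed field names and class labels: tier 1 tags each datum by structure and security need, and tier 2 routes it accordingly. The mapping of these classes onto actual GMPLS/MPLS labels is not modeled here.

```python
# Illustrative sketch of the paper's two-tier processing: tier 1 labels each
# datum by structure and security need; tier 2 picks a processing path.
# Field names, class labels, and routing choices are assumptions.

from dataclasses import dataclass

@dataclass
class Datum:
    payload: bytes
    structured: bool       # e.g. relational record vs. free text or media
    confidential: bool     # whether the source requires protection

def tier1_classify(d: Datum) -> str:
    """Assign a class that a GMPLS/MPLS core could map to a label."""
    if d.confidential:
        return "secure-structured" if d.structured else "secure-unstructured"
    return "plain-structured" if d.structured else "plain-unstructured"

def tier2_route(cls: str) -> str:
    """Choose a processing path; secure classes get encrypted first."""
    return "encrypt-then-process" if cls.startswith("secure") else "process-directly"

d = Datum(payload=b"patient-record", structured=True, confidential=True)
cls = tier1_classify(d)
print(cls, "->", tier2_route(cls))
```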

    A review of the state of the art in privacy and security in the eHealth cloud

    The proliferation and usefulness of cloud computing in eHealth demand high levels of security and privacy for health records. However, eHealth clouds pose serious security and privacy concerns for sensitive health data, so practical and effective methods for security and privacy management are essential to preserve the privacy and security of these data. To review current research directions in security and privacy in eHealth clouds, this study analysed and summarized the state-of-the-art technologies and approaches reported for security and privacy in the eHealth cloud. An extensive review covering 132 studies from several peer-reviewed databases, such as IEEE Xplore, was conducted. The relevant studies were reviewed and summarized in terms of their benefits and risks. This study also compares several research works in the domain of data security requirements. This paper provides eHealth stakeholders and researchers with extensive knowledge and information on current research trends in the areas of privacy and security.

    Controlled secure social cloud data sharing based on a novel identity based proxy re-encryption plus scheme

    Currently we are witnessing a rapid integration of social networks and cloud computing, especially the storage of social media content on cloud storage, owing to its cheap management and easy access at any time and from any place. However, securely storing and sharing social media content such as pictures and videos among social groups remains a very challenging problem. In this paper, we tackle this problem using a new cryptographic primitive: identity-based proxy re-encryption plus (IBPRE+), a variant of proxy re-encryption (PRE). In PRE, by using re-encryption keys, a ciphertext computed for Alice can be transformed into a new one for Bob. Recently, the concept of PRE plus (PRE+) was introduced by Wang et al. In PRE+, all the algorithms are almost the same as in traditional PRE, except that the re-encryption keys are generated by the encrypter instead of the delegator. The message-level fine-grained delegation property and the weak non-transferability property can easily be achieved by PRE+, whereas traditional PRE cannot achieve them. Based on a 3-linear map, we first propose a new IBE scheme and a new IBPRE+ scheme; we prove the security of these schemes and analyze the properties and performance of the new IBPRE+ scheme. Finally, we propose a new framework based on this new primitive for secure cloud social data sharing.
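
    The paper's IBPRE+ construction is identity-based and built on a 3-linear map, which is beyond a short sketch. The toy below instead illustrates plain PRE with the classic ElGamal-based scheme of Blaze, Bleumer, and Strauss (BBS98): a proxy holding a re-encryption key transforms Alice's ciphertext into one Bob can decrypt, without ever seeing the plaintext. Note the contrast with PRE+, where the encrypter rather than the delegator would generate that key. Parameters are tiny demonstration values, not secure ones.

```python
# Toy ElGamal-based proxy re-encryption in the style of Blaze-Bleumer-Strauss
# (BBS98), for illustration only: small parameters, no padding, not secure.
# This shows plain PRE, where the re-encryption key is derived from the two
# secret keys; it does NOT reproduce the paper's identity-based IBPRE+.

import secrets

p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def encrypt(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return (m * pow(g, r, p)) % p, pow(pk, r, p)   # (m * g^r, g^{a r})

def decrypt(sk, ct):
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)                # recover g^r
    return (c1 * pow(gr, -1, p)) % p

def rekey(sk_from, sk_to):
    return (sk_to * pow(sk_from, -1, q)) % q       # rk = b / a mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                      # g^{a r} becomes g^{b r}

a, pk_a = keygen()
b, pk_b = keygen()
m = 42
ct_alice = encrypt(pk_a, m)
ct_bob = reencrypt(rekey(a, b), ct_alice)
assert decrypt(a, ct_alice) == m == decrypt(b, ct_bob)
print("re-encrypted ciphertext decrypts correctly for Bob")
```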

    Securing clouds using cryptography and traffic classification

    Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Over the last decade, cloud computing has gained popularity and wide acceptance, especially within the health sector, where it offers several advantages such as low costs, flexible processes, and access from anywhere. Although cloud computing is widely used in the health sector, numerous issues remain unresolved. Several studies have attempted to review the state of the art in eHealth cloud privacy and security; however, some of these studies are outdated or do not cover certain vital features of cloud security and privacy, such as access control, revocation, and data recovery plans. This study targets some of these problems and proposes protocols, algorithms, and approaches to enhance the security and privacy of cloud computing, with particular reference to eHealth clouds. Chapter 2 presents an overview and evaluation of the state of the art in eHealth security and privacy. Chapter 3 introduces different research methods and describes the research design methodology and processes used to carry out the research objectives. Of particular importance are authenticated key exchange and block cipher modes. In Chapter 4, a three-party password-based authenticated key exchange (TPAKE) protocol is presented and its security analysed. The proposed TPAKE protocol shares no plaintext data; all data shared between the parties are either hashed or encrypted. Using the random oracle model (ROM), the security of the proposed TPAKE protocol is formally proven based on the computational Diffie-Hellman (CDH) assumption. Furthermore, the analysis included in this chapter shows that the proposed protocol can ensure perfect forward secrecy and resist many kinds of common attacks, such as man-in-the-middle attacks, online and offline dictionary attacks, replay attacks, and known-key attacks. Chapter 5 proposes a parallel block cipher (PBC) mode in which blocks of cipher are processed in parallel. The results of speed performance tests for this PBC mode in various settings are presented and compared with the standard CBC mode. Compared to the CBC mode, the PBC mode is shown to give execution time savings of 60%. Furthermore, in addition to encryption based on AES-128, the hash value of the data file can be utilised to provide an integrity check. As a result, the PBC mode has better speed performance while retaining the confidentiality and security provided by the CBC mode. Chapter 6 applies TPAKE and PBC to eHealth clouds. Related work on security, privacy preservation, and disaster recovery is reviewed. Next, two approaches, focusing on security preservation and privacy preservation respectively, and a disaster recovery plan are proposed. The security preservation approach is a robust means of ensuring the security and integrity of electronic health records and is based on the PBC mode, while the privacy preservation approach is an efficient authentication method which protects the privacy of personal health records and is based on the TPAKE protocol. A discussion follows about how these integrated approaches and the disaster recovery plan can ensure the reliability and security of cloud projects. Distributed denial of service (DDoS) attacks are the second most common cybercrime attacks after information theft.
The timely detection and prevention of such attacks in cloud projects are therefore vital, especially for eHealth clouds. Chapter 7 presents a new classification system for detecting and preventing DDoS TCP flood attacks (CS_DDoS) in public clouds, particularly in an eHealth cloud environment. The proposed CS_DDoS system offers a solution for securing stored records by classifying incoming packets and making a decision based on the classification results. During the detection phase, CS_DDoS identifies and determines whether a packet is normal or originates from an attacker. During the prevention phase, packets classified as malicious are denied access to the cloud service and the source IP is blacklisted. The performance of the CS_DDoS system is compared using four different classifiers: a least-squares support vector machine (LS-SVM), naïve Bayes, K-nearest-neighbour, and a multilayer perceptron. The results show that CS_DDoS yields the best performance when the LS-SVM classifier is used. This combination can detect DDoS TCP flood attacks with an accuracy of approximately 97% and a Kappa coefficient of 0.89 when under attack from a single source, and with 94% accuracy and a Kappa coefficient of 0.9 when under attack from multiple attackers. These results are then discussed in terms of accuracy and time complexity, and are validated using k-fold cross-validation. Finally, a method is proposed to mitigate DoS attacks in the cloud and reduce excessive energy consumption by managing and limiting certain flows of packets. Instead of a system shutdown, the proposed method ensures the availability of the service. The proposed method manages incoming packets more effectively by dropping packets from the most frequent requesting sources; it can process 98.4% of the accepted packets during an attack. Practicality and effectiveness are essential requirements of methods for preserving the privacy and security of data in clouds. The proposed methods successfully secure cloud projects and ensure the availability of services in an efficient way.
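
    As a hedged sketch of the final mitigation idea, dropping packets from the most frequent requesting sources, the snippet below rate-limits any source that exceeds a fixed share of an observation window. The window size and share threshold are illustrative assumptions, not the thesis's tuned values.

```python
# Sketch of the mitigation idea at the end of the abstract: keep the service
# available by limiting the most frequent requesting sources instead of
# shutting down. Window size and per-source share are assumed values.

from collections import Counter

WINDOW = 1000        # packets per observation window (assumption)
MAX_SHARE = 0.10     # one source may use at most 10% of a window (assumption)

class FrequencyLimiter:
    def __init__(self):
        self.counts = Counter()
        self.seen = 0

    def allow(self, src_ip: str) -> bool:
        """Drop packets from sources that dominate the current window."""
        if self.seen >= WINDOW:            # start a fresh window
            self.counts.clear()
            self.seen = 0
        self.seen += 1
        self.counts[src_ip] += 1
        return self.counts[src_ip] <= MAX_SHARE * WINDOW

limiter = FrequencyLimiter()
stream = ["10.0.0.9"] * 500 + ["10.0.0.%d" % i for i in range(500)]
accepted = sum(limiter.allow(ip) for ip in stream)
print(f"accepted {accepted}/{len(stream)} packets during the flood")
```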

    Spatio-Temporal Multimedia Big Data Analytics Using Deep Neural Networks

    With the proliferation of online services and mobile technologies, the world has stepped into a multimedia big data era, where new opportunities and challenges appear alongside highly diverse multimedia data and a huge amount of social data. Nowadays, multimedia data consisting of audio, text, image, and video has grown tremendously. With such an increase in the amount of multimedia data, the main question is how one can analyze this high volume and variety of data in an efficient and effective way. A vast amount of research has been done in the multimedia area, targeting different aspects of big data analytics, such as the capture, storage, indexing, mining, and retrieval of multimedia big data. However, there is insufficient research providing a comprehensive framework for multimedia big data analytics and management. To address the major challenges in this area, a new framework is proposed based on deep neural networks for multimedia semantic concept detection, with a focus on spatio-temporal information analysis and rare event detection. The proposed framework is able to discover patterns and knowledge in multimedia data using both static deep data representations and temporal semantics. Specifically, it is designed to handle data with skewed distributions. The proposed framework includes the following components: (1) a synthetic data generation component based on simulation and adversarial networks for data augmentation and deep learning training, (2) an automatic sampling model to overcome the imbalanced data issue in multimedia data, (3) a deep representation learning model leveraging novel deep learning techniques to generate the most discriminative static features from multimedia data, (4) an automatic hyper-parameter learning component for faster training and convergence of the learning models, (5) a spatio-temporal deep learning model to analyze dynamic features from multimedia data, and (6) a multimodal deep learning fusion model to integrate different data modalities. The whole framework has been evaluated using various large-scale multimedia datasets, including a newly collected disaster-events video dataset and other public datasets.
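
    A minimal sketch of component (6), multimodal fusion, assuming PyTorch and placeholder feature dimensions: two modality-specific encoders are fused by concatenation before a shared classification head. The framework's actual architecture is not reproduced here.

```python
# Illustrative late-fusion model for component (6) of the framework: two
# modality-specific encoders whose features are concatenated and classified.
# Dimensions, layers, and modality choices are assumptions, not the paper's.

import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, visual_dim=2048, audio_dim=128, hidden=256, classes=10):
        super().__init__()
        self.visual = nn.Sequential(nn.Linear(visual_dim, hidden), nn.ReLU())
        self.audio = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, classes)   # fuse by concatenation

    def forward(self, v, a):
        return self.head(torch.cat([self.visual(v), self.audio(a)], dim=1))

model = LateFusion()
v = torch.randn(4, 2048)   # e.g. CNN frame features (placeholder)
a = torch.randn(4, 128)    # e.g. audio embeddings (placeholder)
print(model(v, a).shape)   # torch.Size([4, 10])
```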

    Hybrid bootstrap-based approach with binary artificial bee colony and particle swarm optimization in Taguchi's T-Method

    Taguchi's T-Method is one of the Mahalanobis Taguchi System (MTS) family of prediction techniques, established specifically for, though not limited to, small multivariate sample data. When evaluating data with a system such as Taguchi's T-Method, bias issues often appear due to inconsistencies induced by model complexity, variation between parameters that are not thoroughly configured, and generalization aspects. In Taguchi's T-Method, unit space determination relies too heavily on the characteristics of the dependent variable, with no appropriate procedure defined. Similarly, the least-squares proportional coefficient is well known not to be robust to outliers, which indirectly affects the accuracy of the SNR weighting that relies on model-fit accuracy. Even a small outlier effect in the data analysis may influence the overall performance of the predictive model unless further development is incorporated into the current framework. In this research, an improved unit space determination mechanism was explicitly designed by implementing the minimum-based error with the leave-one-out method, further enhanced by embedding strategies that aim to minimize the impact of variance within each parameter estimator using the leave-one-out bootstrap (LOOB) and 0.632 estimate approaches. The complexity aspect of the prediction model was further addressed by removing features that did not provide valuable information to the overall prediction. To accomplish this, an Orthogonal Array (OA) matrix was used within the existing Taguchi's T-Method. However, OA's fixed-scheme matrix, as well as its drawbacks in coping with high dimensionality, leads to a sub-optimal solution. On the other hand, the use of SNR in decibels (dB) as the objective function proved to be a reliable measure. The architecture of a Hybrid Binary Artificial Bee Colony and Particle Swarm Optimization (Hybrid Binary ABC-PSO), comprising the Binary Bitwise ABC (BitABC) and Probability Binary PSO (PBPSO), was developed as a novel search engine that overcomes the limitations of OA. The SNR (dB) and the mean absolute error (MAE) were the main performance measures used in this research. Generalization was a fundamental addition incorporated into this research to control the effect of overfitting in the analysis. The proposed enhanced parameter estimators with feature selection optimization were tested on 10 case studies and improved predictive accuracy by an average of 46.21%, depending on the case. The average standard deviation of the MAE, which describes the variability impact of the optimized method across all 10 case studies, displayed an improved trend relative to Taguchi's T-Method. The need for standardization and a robust approach to outliers is recommended for future research. This study proved that the developed architecture of Hybrid Binary ABC-PSO with bootstrap and minimum-based error using leave-one-out as the proposed enhanced parameter estimators improves the methodology of Taguchi's T-Method by effectively increasing its prediction accuracy.
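
    For the leave-one-out bootstrap (LOOB) and 0.632 estimates the abstract mentions, the sketch below computes the standard 0.632 bootstrap error estimate with MAE: 0.368 times the resubstitution error plus 0.632 times the LOOB error, where each point is evaluated only by bootstrap replicates that excluded it. The plain least-squares fit and toy data are placeholders for the T-Method's proportional model.

```python
# Sketch of the standard 0.632 bootstrap error estimate referenced in the
# abstract: 0.368 * resubstitution error + 0.632 * leave-one-out bootstrap
# (LOOB) error. The least-squares fit and toy data are placeholder choices.

import numpy as np

def mae(y, yhat):
    return float(np.mean(np.abs(y - yhat)))

def fit_predict(Xtr, ytr, Xte):
    """Plain least squares, standing in for the T-Method's proportional fit."""
    coef, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return Xte @ coef

def err_632(X, y, B=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    resub = mae(y, fit_predict(X, y, X))          # optimistic training error
    per_sample = [[] for _ in range(n)]           # out-of-bag errors per point
    for _ in range(B):
        idx = rng.integers(0, n, n)               # one bootstrap replicate
        oob = np.setdiff1d(np.arange(n), idx)     # points left out of it
        if oob.size == 0:
            continue
        pred = fit_predict(X[idx], y[idx], X[oob])
        for i, e in zip(oob, np.abs(y[oob] - pred)):
            per_sample[i].append(e)
    loob = np.mean([np.mean(e) for e in per_sample if e])
    return 0.368 * resub + 0.632 * loob

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), np.linspace(0, 1, 30)])
y = 2.0 * X[:, 1] + rng.normal(0, 0.1, 30)        # toy linear data
print(round(err_632(X, y), 4))
```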

    Constructing and restraining the societies of surveillance: Accountability, from the rise of intelligence services to the expansion of personal data networks in Spain and Brazil (1975-2020)

    The objective of this study is to examine the development of socio-technical accountability mechanisms in order to: a) preserve and increase the autonomy of individuals subjected to surveillance and b) redress the asymmetry of power between those who watch and those who are watched. To do so, we address two surveillance realms: intelligence services and personal data networks. The cases studied are Spain and Brazil, from the beginning of the political transitions in the 1970s (in the realm of intelligence) and from the expansion of Internet digital networks in the 1990s (in the realm of personal data) to the present time. The examination of accountability thus comprises a holistic evolution of institutions, regulations, and market strategies, as well as resistance tactics. The conclusion summarizes the accountability mechanisms and proposes universal principles to improve the legitimacy of authority in surveillance and in politics in a broad sense.