
    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., the genome) are analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and to advancing precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to perform integrative analysis of biomedical data acquired from diverse modalities effectively and efficiently. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
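
To make the first four challenges concrete, here is a small, purely illustrative sketch (not from the review) of early, feature-level integration of two synthetic omics blocks: median imputation stands in for missing-data handling, univariate selection trims the dimensionality, and class weighting compensates for label imbalance.

```python
# Illustrative only: early (feature-level) integration of two omics blocks,
# with imputation for missing data, feature selection against the curse of
# dimensionality, and class weighting for imbalance. All data are synthetic.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_genome = rng.normal(size=(100, 5000))               # e.g., expression features
X_methyl = rng.normal(size=(100, 2000))               # e.g., methylation features
X_methyl[rng.random(X_methyl.shape) < 0.05] = np.nan  # simulate missing values
y = rng.integers(0, 2, size=100)                      # labels (imbalanced in practice)

X = np.hstack([X_genome, X_methyl])                   # naive early integration
model = make_pipeline(
    SimpleImputer(strategy="median"),                 # missing data
    SelectKBest(f_classif, k=200),                    # dimensionality reduction
    LogisticRegression(class_weight="balanced", max_iter=1000),  # imbalance
)
model.fit(X, y)
```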

    Analysis of Microarray Data using Machine Learning Techniques on Scalable Platforms

    Microarray-based gene expression profiling has emerged as an efficient technique for the classification, diagnosis, prognosis, and treatment of cancer. Frequent changes in the behavior of this disease generate a huge volume of data. Data retrieved from microarrays carry veracity and change over time (velocity); moreover, they are high-dimensional, with a very large number of features relative to the number of samples. Analyzing such high-dimensional microarray datasets within a short period is therefore essential. These datasets often contain huge amounts of data, only a fraction of which comprises significantly expressed genes. Identifying the precise and interesting genes responsible for the cause of cancer is imperative in microarray data analysis. Most existing schemes employ a two-phase process: feature selection/extraction followed by classification. Our investigation starts with the analysis of microarray data using kernel-based classifiers together with feature selection based on the statistical t-test. In this work, various kernel-based classifiers, namely the extreme learning machine (ELM), the relevance vector machine (RVM), and a newly proposed kernel fuzzy inference system (KFIS), are implemented. The proposed models are investigated on three microarray datasets: leukemia, breast cancer, and ovarian cancer. Finally, the performance of these classifiers is measured and compared with the support vector machine (SVM). The results reveal that the proposed models classify the datasets efficiently, with performance comparable to existing kernel-based classifiers. As data size increases, handling and processing such datasets becomes a bottleneck. Hence, a distributed, scalable cluster such as Hadoop is needed for storing (HDFS) and processing (MapReduce as well as Spark) the datasets efficiently. The next contribution of this thesis is the implementation of feature selection methods that can process data in a distributed manner. Various statistical tests, such as the ANOVA, Kruskal-Wallis, and Friedman tests, are implemented using the MapReduce and Spark frameworks and executed on top of a Hadoop cluster. The performance of these scalable models is measured and compared with a conventional system. The results show that the proposed scalable models efficiently process data of large dimensions (gigabytes, terabytes, etc.) that cannot be processed with traditional implementations of those algorithms. After selecting the relevant features, the next contribution of this thesis is the scalable implementation of the proximal support vector machine classifier, an efficient variant of SVM. The proposed classifier is implemented on two scalable frameworks, MapReduce and Spark, and executed on the Hadoop cluster. The obtained results are compared with those of a conventional system, and it is observed that the scalable cluster is well suited to Big Data. Furthermore, it is concluded that Spark is more efficient than MapReduce owing to its intelligent handling of datasets through resilient distributed datasets (RDDs) and in-memory processing. Therefore, the next contribution of the thesis is the implementation of various scalable classifiers based on Spark.
In this work, various classifiers, namely logistic regression (LR), support vector machine (SVM), naive Bayes (NB), k-nearest neighbor (KNN), artificial neural network (ANN), and radial basis function network (RBFN), the latter with two variants (hybrid and gradient-descent learning), are proposed and implemented using the Spark framework. The proposed scalable models are executed on the Hadoop cluster as well as on a conventional system, and the results are investigated. The results show that the scalable algorithms process Big datasets far more efficiently than the conventional system. The efficacy of the proposed scalable algorithms in handling Big datasets is investigated and compared with the conventional system (where data are not distributed, but kept on a standalone machine and processed in the traditional manner). The comparative analysis shows that Big datasets are processed far more efficiently by the scalable algorithms on the Hadoop cluster than on the conventional system.
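
As a rough illustration of the two-phase scheme described above, the hedged sketch below ranks genes with a two-sample t-test and feeds the top-ranked features to a kernel classifier; scikit-learn's RBF-kernel SVM stands in for the thesis's ELM/RVM/KFIS models, and the data are synthetic.

```python
# Minimal sketch of the two-phase microarray scheme: a statistical t-test
# ranks genes, then a kernel classifier is trained on the selected features.
# Synthetic data; SVC stands in for the kernel classifiers of the thesis.
import numpy as np
from scipy import stats
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(72, 7129))      # samples x genes (Leukemia-like shape)
y = rng.integers(0, 2, size=72)      # two cancer classes

# Phase 1: feature selection via two-sample t-test p-values per gene
_, pvals = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
top = np.argsort(pvals)[:50]         # keep the 50 most discriminative genes

# Phase 2: kernel-based classification on the selected genes
clf = SVC(kernel="rbf", gamma="scale")
print(cross_val_score(clf, X[:, top], y, cv=5).mean())
```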

    Rolling Window Time Series Prediction Using MapReduce

    Prediction of time series data is an important application in many domains. Despite their inherent advantages, traditional databases and the MapReduce methodology are not ideally suited to this type of processing, owing to the dependencies introduced by the sequential nature of time series. In this thesis, a novel framework is presented to facilitate retrieval and rolling-window prediction of irregularly sampled large-scale time series data. By introducing a new index pool data structure, the processing of time series can be efficiently parallelised. The proposed framework is implemented in the R programming environment and utilises Hadoop to support parallelisation and fault tolerance. A systematic multi-predictor selection model is designed and applied in order to choose the best-fit algorithm for different circumstances. Additionally, boosting is deployed as a post-processing step to further optimise the predictive results. Experimental results on a cloud-based platform indicate that the proposed framework scales linearly up to 32 nodes and delivers efficient, well-optimised predictions.
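
The sketch below is only a toy illustration of the core idea that rolling windows are independent work units that can be scored in parallel; it uses Python multiprocessing and a naive window-mean forecaster rather than the thesis's R/Hadoop framework and index pool.

```python
# Toy sketch: one-step-ahead rolling-window prediction, where each window is
# an independent task that can be distributed across workers (here, local
# processes). Naive mean forecast per window; synthetic random-walk series.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def forecast_window(args):
    series, start, width = args
    return series[start:start + width].mean()   # naive forecast of next point

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    series = rng.normal(size=2_000).cumsum()    # synthetic random walk
    width = 50
    tasks = [(series, s, width) for s in range(len(series) - width)]
    with ProcessPoolExecutor() as pool:
        preds = np.array(list(pool.map(forecast_window, tasks, chunksize=128)))
    actual = series[width:]                     # true next value per window
    print("MAE:", np.abs(preds - actual).mean())
```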

    Towards a more efficient and cost-sensitive extreme learning machine: A state-of-the-art review of recent trend

    In spite of the prominence of the extreme learning machine (ELM) model, and its excellent features such as minimal intervention for learning and model tuning, simplicity of implementation, and high learning speed, which make it a fascinating alternative method for Artificial Intelligence, including Big Data analytics, it is still limited in certain aspects. These aspects must be addressed to achieve an effective and cost-sensitive model. This review discusses the major drawbacks of ELM, which include difficulty in determining the hidden-layer structure, prediction instability and imbalanced data distributions, poor capability of sample structure preserving (SSP), and difficulty in accommodating lateral inhibition by direct random feature mapping. Other drawbacks include multi-graph complexity, global memory size limitations, one-by-one or chunk-by-chunk (a block of data) learning, and challenges with Big Data. The recent trends proposed by experts for each drawback are discussed in detail, towards achieving an effective and cost-sensitive model.
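
For readers unfamiliar with why ELM trains so fast, the following minimal sketch (the standard textbook single-hidden-layer ELM, not tied to any paper reviewed here) fixes random input weights and solves only the output weights in closed form via the Moore-Penrose pseudoinverse.

```python
# Minimal ELM sketch: input weights and biases are random and fixed; only the
# output weights are solved in closed form, which is the source of ELM's speed.
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # toy regression target
W, b, beta = elm_fit(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # training MSE
```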

    Secure Data Management and Transmission Infrastructure for the Future Smart Grid

    The power grid has played a crucial role since its inception in the Industrial Age. It has evolved from a wide-area network supplying energy to multiple incorporated areas into the largest cyber-physical system, and its security and reliability are crucial to any country’s economy and stability [1]. With the emergence of new technologies and the growing pressure of global warming, the aging power grid can no longer meet the requirements of modern industry, which has led to the proposal of the ‘smart grid’. In a smart grid, both electricity and control information travel across a massively distributed power network, so it is essential that the smart grid deliver real-time data over its communication network. Using smart meters, the advanced metering infrastructure (AMI) can measure energy consumption, monitor loads, collect data, and forward information to collectors. The smart grid is an intelligent network that combines technologies from not only power but also information, telecommunications, and control. Its best-known architecture is the three-layer structure, which divides the smart grid into three layers, each with its own duty; the three layers work together to monitor and optimize the operation of all functional units, from power generation to the end customers [2]. To enhance the security of the future smart grid, deploying a highly secure data transmission scheme on critical nodes is an effective and practical approach. A critical node is a communication node in a cyber-physical network that can be developed to meet certain requirements; equipped with firewalls and intrusion detection capability, it is useful in time-critical network systems and is therefore suitable for the future smart grid. Deploying such a scheme can be tricky across different network topologies. A simple and general approach is to install it on every node in the network, that is, to treat all nodes as critical nodes, but this costs time, energy, and money, and is obviously not the best option. Thus, we propose a multi-objective evolutionary algorithm for the search for critical nodes, as a new scheme for the smart grid. Moreover, optimal planning for embedding a large system in the power grid can effectively ensure that every power station and substation operates safely and that anomalies are detected in time, providing a reliable means to meet increasing security challenges. The evolutionary framework reaches an optimum without calculating the gradient of the objective function, while decomposition is useful for exploring solutions evenly in the decision space. Furthermore, constraint-handling techniques can place critical nodes at optimal locations so as to enhance system security even under several constraints of limited resources and/or hardware. High-quality experimental results have validated the efficiency and applicability of the proposed approach, and there is good reason to believe that the new algorithm is promising for real-world multi-objective optimization problems drawn from the power grid security domain. In this thesis, a cloud-based information infrastructure is proposed to deal with the big data storage and computation problems of the future smart grid; some challenges and limitations are addressed, and a new secure data management and transmission strategy addressing the increasing security challenges of the future smart grid is given as well.
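
As a hedged, purely illustrative companion to the critical-node search (not the thesis's algorithm), the toy sketch below evaluates random node placements against two conflicting objectives, security coverage versus deployment cost, and keeps the non-dominated (Pareto) placements, which is the core idea behind NSGA-style multi-objective evolutionary search.

```python
# Toy multi-objective search over critical-node placements on a random graph:
# maximize coverage (edges touched by a secured node) while minimizing cost
# (number of secured nodes), keeping the Pareto-optimal placements.
import random

random.seed(3)
n_nodes = 30
edges = [(random.randrange(n_nodes), random.randrange(n_nodes)) for _ in range(80)]

def objectives(placement):
    coverage = sum(1 for u, v in edges if u in placement or v in placement)
    return coverage, len(placement)          # (maximize, minimize)

def dominates(a, b):
    # a dominates b: at least as good on both objectives, better on one
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

candidates = [frozenset(random.sample(range(n_nodes), random.randint(1, 10)))
              for _ in range(200)]
scored = [(objectives(p), p) for p in candidates]
pareto = [(s, p) for s, p in scored
          if not any(dominates(t, s) for t, _ in scored)]
for (cov, cost), p in sorted(pareto, key=lambda sp: sp[0]):
    print(f"coverage={cov} cost={cost} nodes={sorted(p)}")
```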

    An insight into imbalanced Big Data classification: outcomes and challenges

    Big Data applications have been emerging over the last few years, and researchers in many disciplines are aware of the great advantages of knowledge extraction from this type of problem. However, traditional learning approaches cannot be applied directly, owing to scalability issues. To overcome them, the MapReduce framework has arisen as a “de facto” solution. Basically, it carries out a “divide-and-conquer” distributed procedure in a fault-tolerant way, suited to commodity hardware. As the discipline is still recent, little research has been conducted on imbalanced classification for Big Data, mainly because of the difficulty of adapting standard techniques to the MapReduce programming style. Additionally, the inner problems of imbalanced data, namely lack of data and small disjuncts, are accentuated when the data are partitioned to fit the MapReduce programming style. This paper is built on three main pillars: first, to present the first outcomes for imbalanced classification in Big Data problems and the current state of research in this area; second, to analyze the behavior of standard pre-processing techniques in this particular framework; and finally, taking into account the experimental results obtained throughout this work, to discuss the challenges and future directions for the topic. This work has been partially supported by the Spanish Ministry of Science and Technology under projects TIN2014-57251-P and TIN2015-68454-R, the Andalusian Research Plan P11-TIC-7765, the Foundation BBVA project 75/2016 BigDaPTOOLS, and the National Science Foundation (NSF) grant IIS-1447795.
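
To illustrate how partitioning interacts with imbalance, here is a hedged sketch, assuming a local PySpark environment, of random undersampling applied independently inside each partition; note that a partition holding few minority examples keeps almost nothing, which is exactly the lack-of-data effect the authors describe.

```python
# Hedged sketch (assumes pyspark is installed): per-partition random
# undersampling, a common pre-processing idea for imbalanced Big Data,
# applied locally inside each MapReduce-style partition.
import random
from pyspark.sql import SparkSession

def undersample_partition(rows, seed=4):
    rnd = random.Random(seed)
    rows = list(rows)                            # materialize this partition
    minority = [r for r in rows if r[1] == 1]
    majority = [r for r in rows if r[1] == 0]
    keep = rnd.sample(majority, min(len(majority), len(minority)))
    return minority + keep                       # locally balanced partition

spark = SparkSession.builder.master("local[2]").appName("rus-demo").getOrCreate()
rnd = random.Random(0)
data = [([rnd.random()], 1 if rnd.random() < 0.05 else 0) for _ in range(10_000)]
balanced = (spark.sparkContext.parallelize(data, numSlices=8)
            .mapPartitions(undersample_partition)
            .collect())
print("positives:", sum(lbl for _, lbl in balanced), "of", len(balanced))
spark.stop()
```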

    A study of EU data protection regulation and appropriate security for digital services and platforms

    A law often has more than one purpose, more than one intention, and more than one interpretation. A meticulously formulated and context-agnostic law text will still, when faced with a field propelled by intense innovation, eventually become obsolete. The European Data Protection Directive is a good example of such legislation. It may be argued that the technological modifications brought on by the EU General Data Protection Regulation (GDPR) are nominal in comparison to the previous Directive, but from a business perspective the changes are significant and important. The Directive’s lack of a direct economic incentive for companies to protect personal data has changed with the Regulation, as companies may now have to pay severe fines for violating the legislation. The objective of the thesis is to establish the notion of trust as a key design goal for information systems handling personal data. This includes interpreting the EU legislation on data protection and using the interpretation as a foundation for further investigation. This interpretation is connected to the areas of analytics, security, and privacy concerns for intelligent service development. Finally, the centralised platform business model and its challenges are examined, and three main resolution themes for regulating platform privacy are proposed. The aims of the proposed resolutions are to create a more trustful relationship between providers and data subjects, while also improving the conditions for competition and thus providing data subjects with service alternatives. The thesis contributes new insights into the evolving privacy practices in the digital society at an important time of transition from service-driven business models to platform business models. Firstly, privacy-related regulation and state-of-the-art analytics development are examined to understand their implications for intelligent services based on automated processing and profiling. The ability to choose between providers of intelligent services is identified as the core challenge. Secondly, the thesis examines what is meant by appropriate security for systems that handle personal data, something the GDPR requires organisations to use without, however, specifying what can be considered appropriate. We propose a method for active network security in web software that is developed through the use of analytics for detection and by inserting data generators into a software installation. The active network security method is proposed as a framework for achieving compliance with the GDPR requirements for services and platforms to use appropriate security. Thirdly, the platform business model is considered from the privacy point of view, along with the implications of “processing silos” for intelligent services. The centralised platform model is considered problematic from both the data subject and the competition standpoints. A resolution is offered for enabling user-initiated open data flow to counter the centralised “processing silos”, and thereby to facilitate the introduction of decentralised platforms. The thesis provides an interdisciplinary analysis: the legal study considers the law as it stands (lex lata), while the proposed resolution (lex ferenda) is defined through argumentativist legal dogmatics, addressing how the legal framework ought to be adapted (de lege ferenda) to fit the described environment. User-friendly Legal Science is applied as a theory framework to provide a holistic approach to answering the research questions.
The User-friendly Legal Science theory has its roots in design science and offers a way towards achieving interdisciplinary research in the fields of information systems and legal science.

    A speculative approach to spatial-temporal efficiency with multi-objective optimization in a heterogeneous cloud environment

    A heterogeneous cloud system, for example a Hadoop 2.6.0 platform, provides distributed but cohesive services with rich features for large-scale management, reliability, and error tolerance. Where big data processing is concerned, newly built cloud clusters face the challenge of performance optimization, targeting faster task execution and more efficient usage of computing resources. Presently proposed approaches concentrate on temporal improvement, that is, shortening MapReduce time, but seldom focus on storage occupation; yet unbalanced cloud storage strategies can exhaust the nodes with heavy MapReduce cycles and further threaten the security and stability of the entire cluster. In this paper, an adaptive method is presented that aims at spatial–temporal efficiency in a heterogeneous cloud environment. A prediction model based on an optimized kernel-based extreme learning machine algorithm is proposed for fast forecasting of job execution duration and space occupation, which in turn facilitates task scheduling through a multi-objective algorithm called time-and-space-optimized NSGA-II (TS-NSGA-II). Experimental results show that, compared with the original load-balancing scheme, our approach saves approximately 47–55 s per task execution on average. At the same time, the hard-disk occupation of all scheduled reducers differed by only 1.254‰, a 26.6% improvement over the original scheme.
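
The run-time predictor here is a kernel-based extreme learning machine (KELM). A minimal sketch of the standard KELM regression form follows, with synthetic job features and assumed parameter values (gamma, C), and no attempt to reproduce the TS-NSGA-II scheduling stage: the output weights are beta = (I/C + K)^(-1) y, and a new job's duration is predicted as k(x) @ beta.

```python
# Minimal standard KELM sketch: RBF kernel matrix K over training jobs,
# closed-form output weights beta = (I/C + K)^-1 y, prediction k(x) @ beta.
# Synthetic data; gamma and C are assumed, illustrative values.
import numpy as np

def rbf(A, B, gamma=0.1):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # squared distances
    return np.exp(-gamma * d)

rng = np.random.default_rng(5)
X = rng.uniform(size=(150, 4))        # job features: input size, reducers, ...
y = 40 + 20 * X[:, 0] + rng.normal(scale=2, size=150)  # duration in seconds

C = 10.0                              # regularization constant
K = rbf(X, X)
beta = np.linalg.solve(np.eye(len(X)) / C + K, y)      # output weights

X_new = rng.uniform(size=(3, 4))
print(rbf(X_new, X) @ beta)           # predicted execution times for new jobs
```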