
    Data Security Using Steganography and Quantum Cryptography

    Steganography is the technique of hiding confidential information within any media. In recent years, various steganography methods have been proposed to make data more secure, and at the same time different steganalysis methods have evolved; the number of attacks available to the steganalyst has only multiplied over the years. Various tools for detecting hidden information are easily available on the internet, so securing data against steganalysis is still considered a major challenge. While much work has been done to improve existing algorithms, and new algorithms have been proposed to make the data hidden behind an image more secure, we are still using the same public-key cryptography, such as Diffie-Hellman and RSA, for key negotiation, which is vulnerable both to the technological progress of computing power and to advances in mathematics. In this paper we therefore propose the use of quantum cryptography along with steganography. This combination creates key distribution schemes that cannot be intercepted, thus providing our data with perfect security.
    Keywords: Steganography, Steganalysis, Steganalyst, Quantum Cryptography
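    For orientation only (the paper's own embedding scheme and its quantum key-distribution step are not reproduced in the abstract), the sketch below shows a minimal least-significant-bit (LSB) embedding in Python/NumPy to illustrate the basic idea of hiding data inside an image; the function names, the grayscale cover image, and the absence of encryption or key-driven pixel selection are assumptions of this sketch.

```python
import numpy as np

def embed_lsb(cover, payload):
    """Hide `payload` bytes in the least-significant bits of a uint8 image.

    Illustrative only: practical steganographic schemes add encryption,
    key-driven pixel selection and robustness against steganalysis.
    """
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    stego = flat.copy()
    # Clear the LSB of the first len(bits) pixels, then write the payload bits.
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bytes):
    """Recover `n_bytes` of payload from the LSBs of the stego image."""
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

if __name__ == "__main__":
    cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    secret = b"shared key material"
    stego = embed_lsb(cover, secret)
    assert extract_lsb(stego, len(secret)) == secret
```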

    Study on Unsupervised Feature Selection Method Based on Extended Entropy

    Feature selection techniques are designed to find the relevant subset of the original features that can facilitate clustering, classification and retrieval. It is an important research topic in pattern recognition and machine learning. Feature selection methods are mainly partitioned into two classes, supervised and unsupervised; current research mostly concentrates on the supervised ones. Few efficient unsupervised feature selection methods have been developed, because no label information is available and it is difficult to evaluate the selected features. An unsupervised feature selection method based on extended entropy is proposed here. The information loss based on extended entropy is used to measure the correlation between features. The method ensures that each selected feature carries large individual information while having little redundant information with the features already selected. Finally, the efficiency of the proposed method is illustrated on several practical datasets.
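    The abstract does not give the exact definition of the extended entropy measure, so the sketch below uses plain Shannon entropy and mutual information as stand-ins; it only illustrates the stated selection principle (large individual information, low redundancy with the features already selected) via a greedy procedure over discrete features.

```python
import numpy as np

def entropy(x):
    """Shannon entropy of a discrete feature (stand-in for extended entropy)."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def joint_entropy(x, y):
    """Entropy of the joint distribution of two discrete features."""
    pairs = np.stack([x, y], axis=1)
    _, counts = np.unique(pairs, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def redundancy(x, y):
    """Mutual information, used here as the redundancy between two features."""
    return entropy(x) + entropy(y) - joint_entropy(x, y)

def select_features(X, k):
    """Greedily pick k columns that carry high entropy but overlap little
    with the columns already selected (unsupervised: labels never used)."""
    selected, candidates = [], list(range(X.shape[1]))
    while len(selected) < k and candidates:
        def score(j):
            overlap = max((redundancy(X[:, j], X[:, s]) for s in selected),
                          default=0.0)
            return entropy(X[:, j]) - overlap
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

if __name__ == "__main__":
    X = np.random.randint(0, 4, size=(200, 10))   # discrete toy data
    print("selected feature indices:", select_features(X, k=3))
```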

    An Invocation Cost Optimization Method for Web Services in Cloud Environment


    Comparison Between CPU and GPU for Parallel Implementation for a Neural Network Model Using Tensorflow and a Big Dataset

    Machine learning is a rapidly growing field that has become increasingly common in recent years. Because machine learning is computationally demanding, the field offers many dimensions for research. TensorFlow was developed to build and analyze neural network computations; in particular, it is widely used in deep learning, one of the branches of machine learning. This work discusses the performance of a deep learning model trained on a very large dataset with TensorFlow. It compares the run time and speed when training runs on CPUs versus GPUs; run time is an important factor for deep learning projects, and the goal is to find the most efficient machine and platform on which to run the neural network computation. TensorFlow provides all the resources and operations needed to process neural network computations. TensorFlow has two major versions, 1.0 and 2.0. This work uses TensorFlow 2.0, which is easier to code, faster for building models, and faster at training time. TensorFlow 2.0 also provides methods to distribute the run across multiple CPUs and GPUs, using a strategy scope to run the model in parallel. The hardware used for this work consists of two clusters, BlueMoon (CPU) and DeepGreen (GPU), developed by the University of Vermont for artificial intelligence research. The results show that the performance of training the model on a large dataset improves as the number of processors increases, and the speedup is highest when training with large batch sizes on a larger number of processors. When accelerator devices (GPUs) are added, performance improves further, especially for larger batch sizes. When the model runs on GPUs, a CPU is still required to complete the computation; reducing the number of CPUs used to distribute the data across multiple GPUs yields a higher speedup than using more CPUs to distribute the data to the same number of GPUs. The results include a comparison of the run time of the classification model for the same dataset on different machines, with and without an accelerator, and with the two TensorFlow versions. The comparison shows that TensorFlow 2.0 outperforms TensorFlow 1.0, that running the model with an accelerator achieves greater speedup, and that running the model on the clusters with either TensorFlow version obtains higher performance.
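    The multi-CPU/GPU distribution described above relies on TensorFlow 2.x distribution strategies and a strategy scope; the sketch below shows the general pattern with a toy model and synthetic data, since the thesis' actual dataset and network architecture are not given in the abstract.

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model on all visible GPUs (falling back to
# CPU if none are present) and aggregates gradients after every batch.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Model and optimizer must be created inside the strategy scope so that
# their variables are mirrored across devices.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Synthetic stand-in data; larger global batch sizes generally improve
# multi-GPU utilization, matching the speedups reported above.
x = np.random.rand(10_000, 32).astype("float32")
y = np.random.randint(0, 10, size=(10_000,))
model.fit(x, y, epochs=2, batch_size=256 * strategy.num_replicas_in_sync)
```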

    Dynamic Resource Scheduling in Cloud Data Center

    Cloud infrastructure provides a wide range of resources and services to companies and organizations, such as computation, storage, databases, and platforms. These resources and services are used to power up and scale out tenants' workloads and to meet their specified service level agreements (SLAs). Given the varied kinds and characteristics of these workloads, an important problem for the cloud provider is how to allocate its resources among the requests. An efficient resource scheduling scheme should benefit both the cloud provider and the cloud users. For the cloud provider, the goal of the scheduling algorithm is to improve the throughput and the job completion rate of the cloud data center under stress conditions, or to use fewer physical machines to support all incoming jobs under overprovisioning conditions. For the cloud users, the goal of the scheduling algorithm is to guarantee the SLAs and satisfy other job-specific requirements. Furthermore, since jobs arrive at and leave a cloud data center very frequently, it is critical to make the scheduling decision within a reasonable time. To improve the efficiency of the cloud provider, the scheduling algorithm needs to jointly reduce inter-VM and intra-VM fragmentation, which means considering the scheduling problem with regard to both the cloud provider and the users. This thesis addresses the cloud scheduling problem from both the cloud provider and the user side. Cloud data centers typically require tenants to specify the resource demands for the virtual machines (VMs) they create using a set of pre-defined, fixed configurations, to ease the resource allocation problem. However, this approach can lead to low resource utilization of cloud data centers, as tenants are obligated to conservatively predict the maximum resource demand of their applications. In addition, users are in an inferior position when estimating the VM demands, since they do not know the multiplexing techniques of the cloud provider; the cloud provider, on the other hand, is better placed to select the VM sets for the submitted applications. The scheduling problem is even more severe for mobile users who want to use the cloud infrastructure to extend their computation and battery capacity, where the response and scheduling time is tight and the transmission channel between mobile users and the cloudlet is highly variable. This thesis investigates the resource scheduling problem for both wired and mobile users in the cloud environment. The proposed resource allocation problem is studied through problem modeling, trace analysis, algorithm design, and simulation. The first aspect this thesis addresses is the VM scheduling problem. Instead of static VM scheduling, this thesis proposes a finer-grained dynamic resource allocation and scheduling algorithm that can substantially improve the utilization of data center resources by increasing the number of jobs accommodated and, correspondingly, the cloud data center provider's revenue. The second problem this thesis addresses is the joint VM set selection and scheduling problem. The basic idea is that there may exist multiple VM sets that can support an application's resource demand, and by carefully selecting an appropriate VM set, the utilization of the data center can be improved without violating the application's SLA.
The third problem addressed by the thesis is the mobile cloud resource scheduling problem, where the key issue is to find the most energy- and time-efficient way of allocating components of the target application given the current network condition and cloud resource usage status. The main contributions of this thesis are as follows. For the dynamic real-time scheduling problem, a constraint programming solution is proposed to schedule the long jobs, and simple heuristics are used to quickly, yet quite accurately, schedule the short jobs. Trace-driven simulations show that the overall revenue for the cloud provider can be improved by 30% over traditional static VM resource allocation based on coarse-granularity specifications. For the joint VM selection and scheduling problem, this thesis proposes an optimal online VM set selection scheme that satisfies the user resource demand and minimizes the number of activated physical machines. Trace-driven simulation shows around an 18% improvement in the overall utility of the provider compared to the Bazaar-I approach, and more than 25% compared to best-fit and first-fit. For the mobile cloud scheduling problem, a reservation-based joint code partition and resource scheduling algorithm is proposed that conservatively estimates the minimal resource demand, together with a polynomial-time code partition algorithm that obtains the corresponding partition.
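    The constraint-programming formulation and the Bazaar-I baseline are not detailed in the abstract; purely as a reference point, the sketch below implements the classic first-fit packing heuristic mentioned as a baseline above, assuming a single one-dimensional (CPU-only) resource demand per VM.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalMachine:
    capacity: float                      # e.g. CPU cores
    used: float = 0.0
    vms: list = field(default_factory=list)

    def fits(self, demand):
        return self.used + demand <= self.capacity

def first_fit(demands, capacity):
    """Place each VM on the first machine with spare capacity,
    activating a new physical machine only when none fits."""
    machines = []
    for d in demands:
        for pm in machines:
            if pm.fits(d):
                pm.used += d
                pm.vms.append(d)
                break
        else:
            machines.append(PhysicalMachine(capacity=capacity, used=d, vms=[d]))
    return machines

if __name__ == "__main__":
    # Eight VM CPU demands packed onto 4-core machines.
    placement = first_fit([2.0, 1.0, 3.5, 0.5, 2.0, 1.5, 1.0, 2.5], capacity=4.0)
    print(f"{len(placement)} machines activated")
```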

    The Role of Distributed Computing in Big Data Science: Case Studies in Forensics and Bioinformatics

    The era of Big Data is leading to the generation of large amounts of data, which require storage and analysis capabilities that can only be addressed by distributed computing systems. To facilitate large-scale distributed computing, many programming paradigms and frameworks have been proposed, such as MapReduce and Apache Hadoop, which transparently address some issues of distributed systems and hide most of their technical details. Hadoop is currently the most popular and mature framework supporting the MapReduce paradigm, and it is widely used to store and process Big Data using a cluster of computers. Solutions such as Hadoop are attractive, since they simplify the transformation of an application from a non-parallel to a distributed one by means of general utilities and without requiring many specialized skills. However, without any algorithm engineering activity, some target applications are not altogether fast and efficient, and they can suffer from several problems and drawbacks when executed on a distributed system. In fact, a distributed implementation is a necessary but not sufficient condition for obtaining remarkable performance with respect to a non-parallel counterpart. Therefore, it is necessary to assess how distributed solutions run on a Hadoop cluster, and how their performance can be improved to reduce resource consumption and completion times. In this dissertation, we show how Hadoop-based implementations can be enhanced through careful algorithm engineering, tuning, profiling and code improvements. We also analyze how to achieve these goals by working on some critical points, such as data-local computation, input split size, number and granularity of tasks, cluster configuration, and input/output representation. In particular, to address these issues, we choose case studies from two research areas where the amount of data is rapidly increasing, namely Digital Image Forensics and Bioinformatics. We mainly describe full-fledged implementations to show how to design, engineer, improve and evaluate Hadoop-based solutions for the Source Camera Identification problem, i.e., recognizing the camera used to take a given digital image, adopting the algorithm by Fridrich et al., and for two of the main problems in Bioinformatics, i.e., alignment-free sequence comparison and the extraction of k-mer cumulative or local statistics. The results achieved by our improved implementations show that they are substantially faster than the non-parallel counterparts, and remarkably faster than the corresponding naive Hadoop-based implementations. In some cases, for example, our solution for k-mer statistics is approximately 30× faster than our naive Hadoop-based implementation, and about 40× faster than an analogous tool built on Hadoop. In addition, our applications are also scalable, i.e., execution times are (approximately) halved by doubling the computing units. Indeed, algorithm engineering activities based on the implementation of smart improvements, supported by careful profiling and tuning, may lead to much better experimental performance while avoiding potential problems. We also highlight how the proposed solutions, tips, tricks and insights can be used in other research areas and problems. Although Hadoop simplifies some tasks of distributed environments, one must know it thoroughly to achieve remarkable performance.
It is not enough to be an expert in the application domain to build Hadoop-based implementations; to achieve good performance, expertise in distributed systems, algorithm engineering, tuning, profiling, etc. is also required. Therefore, the best performance depends heavily on the degree of cooperation between the domain expert and the distributed algorithm engineer.
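    The engineered implementations themselves are not shown here; as a rough illustration of the kind of naive Hadoop-based baseline the dissertation improves upon, a Hadoop Streaming style k-mer counter can be sketched as a mapper/reducer pair (the fixed k = 3 and plain sequence lines on stdin are assumptions of this sketch).

```python
#!/usr/bin/env python3
"""Naive Hadoop Streaming style k-mer counter (baseline sketch only).

mapper() and reducer() read and write tab-separated key-value pairs on
stdin/stdout, as Hadoop Streaming expects; they run as two separate steps.
"""
import sys
from itertools import groupby

K = 3  # assumed k-mer length

def mapper(lines):
    """Emit (k-mer, 1) for every k-mer in every input sequence line."""
    for line in lines:
        seq = line.strip().upper()
        for i in range(len(seq) - K + 1):
            yield seq[i:i + K], 1

def reducer(pairs):
    """Sum the counts of each k-mer; Hadoop delivers keys already sorted."""
    for kmer, group in groupby(pairs, key=lambda kv: kv[0]):
        yield kmer, sum(count for _, count in group)

if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    if stage == "map":
        for kmer, one in mapper(sys.stdin):
            print(f"{kmer}\t{one}")
    else:
        parsed = ((k, int(v)) for k, v in
                  (line.rstrip("\n").split("\t") for line in sys.stdin))
        for kmer, total in reducer(parsed):
            print(f"{kmer}\t{total}")
```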

    Data Exfiltration: A Review of External Attack Vectors and Countermeasures

    Context: One of the main targets of cyber-attacks is data exfiltration, the leakage of sensitive or private data to an unauthorized entity. Data exfiltration can be perpetrated by an outsider or an insider of an organization. Given the increasing number of data exfiltration incidents, a large number of data exfiltration countermeasures have been developed. These countermeasures aim to detect, prevent, or investigate exfiltration of sensitive or private data. With the growing interest in data exfiltration, it is important to review data exfiltration attack vectors and countermeasures to support future research in this field. Objective: This paper aims to identify and critically analyse data exfiltration attack vectors and countermeasures in order to report the state of the art and determine gaps for future research. Method: We followed a structured process to select 108 papers from seven publication databases, and applied the thematic analysis method to analyse the data extracted from the reviewed papers. Results: We developed a classification of (1) the data exfiltration attack vectors used by external attackers and (2) the countermeasures against external attacks, and we mapped the countermeasures to the attack vectors. Furthermore, we explored the applicability of the various countermeasures to different states of data (i.e., in use, in transit, or at rest). Conclusion: This review has revealed that (a) most of the state of the art is focussed on preventive and detective countermeasures, and significant research is required on developing investigative countermeasures, which are equally important; (b) several data exfiltration countermeasures are not able to respond in real time, which indicates that research effort needs to be invested in enabling them to respond in real time; (c) a number of data exfiltration countermeasures do not take privacy and ethical concerns into consideration, which may become an obstacle to their full adoption; (d) existing research is primarily focussed on protecting data in the 'in use' state, so future research needs to be directed towards securing data in the 'at rest' and 'in transit' states; and (e) there is no standard or framework for the evaluation of data exfiltration countermeasures. We assert the need to develop such an evaluation framework.

    Towards Efficient Intrusion Detection using Hybrid Data Mining Techniques

    The enormous growth in connectivity among different types of networks poses significant concerns in terms of privacy and security. The exponential expansion in the deployment of cloud technology has produced a massive amount of data from a variety of applications, resources and platforms. In turn, the rapid rate and volume of high-dimensional data creation have begun to pose significant challenges for data management and security. Handling redundant and irrelevant features in high-dimensional space has posed a long-standing challenge for network anomaly detection. Eliminating such features with spectral information not only speeds up the classification process, but also helps classifiers make accurate decisions during attack recognition, especially when coping with large-scale and heterogeneous data such as network traffic. Furthermore, the continued evolution of network attack patterns has resulted in the emergence of zero-day cyber attacks, which are now considered a major challenge in cyber security. In this threat environment, traditional security protections like firewalls, anti-virus software, and virtual private networks are not always sufficient. Moreover, most current intrusion detection systems (IDSs) are either signature-based, which has been proven insufficient for identifying novel attacks, or developed based on absolute datasets. Hence, a robust mechanism for detecting intrusions, i.e. an anomaly-based IDS, in the big data setting has become a topic of importance. In this dissertation, an empirical study is first conducted to identify the challenges and limitations of current IDSs, providing a systematic treatment of methodologies and techniques. Next, a comprehensive IDS framework is proposed to overcome these shortcomings: a novel hybrid dimensionality reduction technique that combines information gain (IG) and principal component analysis (PCA) with an ensemble classifier based on three different classification techniques, named IG-PCA-Ensemble. Experimental results show that the proposed dimensionality reduction method contributes more critical features and significantly reduces the detection time. The results also show that the proposed IG-PCA-Ensemble approach exhibits better performance than the majority of existing state-of-the-art approaches.
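    The exact IG-PCA-Ensemble configuration (feature counts, base classifiers, voting rule) is not specified in the abstract; the following scikit-learn sketch only illustrates a plausible pipeline of the same shape, with mutual information as a stand-in for information gain and three arbitrarily chosen base classifiers on synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# Synthetic stand-in for a labelled network-traffic dataset.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=12,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: information-gain style filter; Stage 2: PCA projection;
# Stage 3: ensemble of three different classifiers with majority voting.
pipeline = Pipeline([
    ("ig", SelectKBest(score_func=mutual_info_classif, k=20)),
    ("pca", PCA(n_components=10)),
    ("ensemble", VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("knn", KNeighborsClassifier()),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="hard",
    )),
])

pipeline.fit(X_train, y_train)
print("Held-out accuracy:", pipeline.score(X_test, y_test))
```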

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond the local optimality that impairs the capability of traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
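    As a concrete, generic example of how a metaheuristic escapes local optima where a greedy heuristic would stall, here is a minimal simulated-annealing sketch on a toy ordering problem; the cost function and cooling schedule are purely illustrative and unrelated to any specific paper in the collection.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Generic simulated annealing: occasionally accept worse moves so the
    search can climb out of local optima, unlike a pure greedy descent."""
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling
    return best

if __name__ == "__main__":
    # Toy problem: order 20 items to minimize a synthetic pairwise penalty.
    random.seed(0)
    n = 20
    penalty = [[random.random() for _ in range(n)] for _ in range(n)]
    cost = lambda perm: sum(penalty[perm[i]][perm[i + 1]] for i in range(n - 1))

    def neighbor(perm):
        i, j = random.sample(range(n), 2)
        new = list(perm)
        new[i], new[j] = new[j], new[i]
        return new

    print("best cost:", cost(simulated_annealing(cost, neighbor, list(range(n)))))
```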