24 research outputs found

    A model for computing skyline data items in cloud incomplete databases

    Skyline queries aim to retrieve the most preferred data items in the database, those that best fit the user's given preferences. However, processing skyline queries is expensive and difficult when applied to large distributed databases such as cloud databases, and it becomes even more complicated when these distributed databases have missing values in certain dimensions. The effect of data incompleteness on skyline processing is severe because missing values invalidate the transitivity property of the skyline dominance relation and lead to the problem of cyclic dominance. This paper proposes an efficient model for computing skyline data items in cloud incomplete databases. The model focuses on processing skyline queries in cloud incomplete databases, aiming at reducing the domination tests between data items, the processing time, and the amount of data transferred among the involved datacenters. Various sets of experiments were conducted over two different types of datasets, and the results demonstrate that the proposed solution outperforms previous approaches in terms of domination tests, processing time, and amount of data transferred.
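    The cyclic-dominance problem described above is easy to reproduce. The sketch below is a minimal, illustrative dominance test and naive skyline, not the paper's model: dimensions where either item has a missing value are skipped, and that skipping is exactly what breaks transitivity.

```python
def dominates(a, b):
    """True if item a dominates item b (lower is better): a is no worse on
    every dimension both items have, and strictly better on at least one.
    Dimensions with a missing value (None) are skipped -- this skipping is
    what breaks the transitivity of dominance in incomplete databases."""
    strictly_better = False
    comparable = False
    for x, y in zip(a, b):
        if x is None or y is None:
            continue  # incomparable dimension: skip it
        comparable = True
        if x > y:
            return False
        if x < y:
            strictly_better = True
    return comparable and strictly_better

def skyline(items):
    """Naive pairwise skyline: keep items that no other item dominates."""
    return [p for p in items
            if not any(dominates(q, p) for q in items if q is not p)]

# Three incomplete items that dominate each other in a cycle:
a, b, c = (1, 2, None), (2, None, 1), (None, 1, 2)
```

    With these items, `dominates(a, b)`, `dominates(b, c)`, and `dominates(c, a)` all hold, so the naive skyline comes back empty even though intuitively some item should survive; avoiding such wasted domination tests is the motivation for the model above.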

    An empirical evaluation of the “cognitive complexity” measure as a predictor of code understandability

    Background: Code that is difficult to understand is also difficult to inspect and maintain, and ultimately causes increased costs. Therefore, it would be greatly beneficial to have source code measures that are related to code understandability. Many "traditional" source code measures, such as Lines of Code and McCabe's Cyclomatic Complexity, have been used to identify hard-to-understand code. In addition, the "Cognitive Complexity" measure was introduced in 2018 with the specific goal of improving the ability to evaluate code understandability. Aims: The goals of this paper are to assess whether (1) "Cognitive Complexity" is better correlated with code understandability than traditional measures, and (2) the availability of the "Cognitive Complexity" measure improves the performance (i.e., the accuracy) of code understandability prediction models. Method: We carried out an empirical study, in which we reused code understandability measures used in several previous studies. We first built Support Vector Regression models of understandability vs. code measures, and we then compared the performance of models that use "Cognitive Complexity" against the performance of models that do not. Results: "Cognitive Complexity" appears to be correlated to code understandability approximately as much as traditional measures, and the performance of models that use "Cognitive Complexity" is extremely close to the performance of models that use only traditional measures. Conclusions: The "Cognitive Complexity" measure does not appear to fulfill the promise of being a significant improvement over previously proposed measures, as far as code understandability prediction is concerned.
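    As a rough illustration of what the measure counts, here is a toy, Python-only approximation of Cognitive Complexity (the real SonarSource definition has more rules, e.g. for boolean operator sequences and recursion): each branching or looping construct adds 1, plus 1 per level of nesting, which is the key departure from Cyclomatic Complexity.

```python
import ast

def cognitive_complexity(source):
    """Toy approximation: +1 per if/for/while/try, plus +1 for each
    enclosing level of nesting (nesting is what Cyclomatic Complexity
    ignores and Cognitive Complexity penalizes)."""
    score = 0

    def visit(node, nesting):
        nonlocal score
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.If, ast.For, ast.While, ast.Try)):
                score += 1 + nesting
                visit(child, nesting + 1)
            else:
                visit(child, nesting)

    visit(ast.parse(source), 0)
    return score

flat = "if a:\n    x = 1\nif b:\n    x = 2"       # two flat branches
nested = "for i in r:\n    if i:\n        x = 1"  # a branch inside a loop
```

    Both snippets contain the same number of branch points, but the nested one scores higher (3 vs 2), reflecting the intuition that nesting hurts understandability.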

    Disaster recovery in cloud computing systems: an overview

    With the rapid growth of internet technologies, large-scale online services, such as data backup and data recovery, are increasingly available. Since these large-scale online services require substantial networking, processing, and storage capacities, it has become a considerable challenge to design equally large-scale computing infrastructures that support these services cost-effectively. In response to this rising demand, cloud computing has been refined during the past decade and turned into a lucrative business for organizations that own large datacenters and offer their computing resources. Undoubtedly, cloud computing provides tremendous benefits for data storage backup and data accessibility at a reasonable cost. This paper aims at surveying and analyzing the previous works proposed for disaster recovery in cloud computing. The discussion concentrates on investigating the positive aspects and the limitations of each proposal. Also discussed are the current challenges in handling data recovery in the cloud context and the impact of data backup plans on maintaining data in the event of natural disasters. A summary of the leading research works is provided, outlining their weaknesses and limitations in the area of disaster recovery in the cloud computing environment. An in-depth discussion of current and future research trends in the area of disaster recovery in cloud computing is also offered. Several research directions that ought to be explored are pointed out as well, which may help researchers discover and further investigate those problems related to disaster recovery in the cloud environment that have remained unresolved.

    Quantum computers for optimization the performance

    Computers reduce human work, and much effort concentrates on enhancing their performance to advance the technology. Various methods have been developed to enhance the performance of computers. The performance of a computer is based on its architecture, and computer architecture differs across devices such as microcomputers, minicomputers, mainframes, laptops, tablets, and mobile phones. While each device has its own architecture, the majority of these systems are built on Boolean algebra. In this study, a few basic concepts used in quantum computing are discussed. Quantum computers do not rely on transistors and chips, and are reported to be roughly 100 times faster than a common classical silicon computer. Scientists believe that quantum computers are the next generation of classical computers.
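    One of the basic concepts alluded to above is superposition: a qubit's state is a vector of amplitudes rather than a single bit. The sketch below is purely illustrative (plain Python, real amplitudes only): it applies a Hadamard gate to the |0⟩ state to produce an equal superposition.

```python
import math

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix by a 2-amplitude state vector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

zero = [1.0, 0.0]                    # the |0> basis state
plus = apply_gate(H, zero)           # equal superposition
probs = [amp ** 2 for amp in plus]   # measurement probabilities
```

    Each outcome is then measured with probability 0.5, and applying the Hadamard gate a second time returns the state to |0⟩, behavior a classical bit cannot mimic.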

    Data backup and recovery with a minimum replica plan in a multi-cloud environment

    Cloud computing has become a desirable choice to store and share large amounts of data among several users. The two main concerns with cloud storage are data recovery and cost of storage. This article discusses the issue of data recovery in case of a disaster in a multi-cloud environment. This research proposes a preventive approach for data backup and recovery aiming at minimizing the number of replicas and ensuring high data reliability during disasters. The approach, named Preventive Disaster Recovery Plan with Minimum Replica (PDRPMR), reduces the number of replications in the cloud without compromising data reliability. PDRPMR takes preventive action by checking the availability of replicas and monitoring for denial-of-service attacks to maintain data reliability. Several experiments were conducted to evaluate the effectiveness of PDRPMR, and the results demonstrated that it uses one-third to two-thirds of the storage space required by typical 3-replica replication strategies.
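    The trade-off PDRPMR targets can be illustrated with a back-of-the-envelope calculation (not the paper's algorithm): assuming replica failures are independent with a known per-replica availability, the smallest replica count meeting a reliability target is:

```python
def min_replicas(availability, target):
    """Smallest k with 1 - (1 - availability)**k >= target, assuming
    independent replica failures (an illustrative simplification)."""
    k = 1
    while 1 - (1 - availability) ** k < target:
        k += 1
    return k
```

    For example, with a (deliberately pessimistic) per-replica availability of 0.5, a 0.9 reliability target needs 4 replicas while a 0.5 target needs only 1; whenever the computed minimum falls below the default of 3 replicas, the gap is storage that a minimum-replica plan can reclaim.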

    Using machine learning algorithm for detection of cyber-attacks in cyber physical systems

    Network integration is common in cyber-physical systems (CPS) to allow for remote access, surveillance, and analysis. Because of their integration with insecure networks, CPS have been exposed to cyberattacks. In the event of a security breach, an attacker may interfere with the system's functions, which could have catastrophic consequences. As a result, detecting intrusions into mission-critical CPS is a top priority, yet detecting assaults on CPS, which are increasingly targeted by cybercriminals, is becoming increasingly difficult. Machine Learning (ML) and Artificial Intelligence (AI) can be both a threat and a powerful defense in this setting, and ML and AI approaches can be used to parse such data in order to detect attacks on CPS. Hence, in this paper, we propose a novel cyberattack detection framework that integrates AI and ML methods. Initially, we collect the dataset from the CPS database and preprocess the data using normalization to remove errors and redundant data. Features are then extracted using Linear Discriminant Analysis (LDA). We propose a Self-tuned Fuzzy Logic-based Hidden Markov Model (SFL-HMM) with a Heuristic Multi-Swarm Optimization (HMS-ACO) algorithm for detection of cyberattacks. The proposed method is evaluated using the MATLAB simulation tool and its metrics are compared with existing approaches. The experimental results reveal that the framework is more successful than traditional strategies in achieving high degrees of privacy, and that it beats traditional detection algorithms in terms of detection rate, false positive rate, and computing time.
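    The preprocessing step described above (normalization plus removal of redundant records) can be sketched as follows; this is an illustrative stand-in using min-max scaling, not the paper's exact pipeline or its SFL-HMM detector.

```python
def preprocess(rows):
    """Drop duplicate records ("redundant data"), then min-max scale each
    feature column into [0, 1] so no single feature dominates the detector."""
    unique = list(dict.fromkeys(tuple(r) for r in rows))  # dedupe, keep order
    cols = list(zip(*unique))
    scaled = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1          # guard against constant columns
        scaled.append([(v - lo) / span for v in col])
    return [list(r) for r in zip(*scaled)]
```

    The scaled feature rows would then feed a dimensionality-reduction step such as LDA before classification.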

    Improved handover decision algorithm using multiple criteria

    The transfer of massive amounts of data across network links depends on the data rate as well as the traffic capacity of the network. Conventionally, a mobile device performs vertical handover by weighing only a single criterion, the Received Signal Strength (RSS). Relying on this criterion alone can lead to interruption of services, ineffective vertical handovers, and an unbalanced network load. Hence, this paper proposes an improved vertical handover decision algorithm that integrates multiple criteria within a heterogeneous wireless network. The proposed algorithm comprises three vertical handover decision algorithms, namely mobile weight, network weight, and equal weight. Additionally, three technology interfaces were embedded in this study: Worldwide Interoperability for Microwave Access (WiMAX), Wireless Local Area Network (WLAN), and Long-Term Evolution (LTE). The simulation outcomes demonstrate that the network-weight handover decision algorithm outperforms the mobile-weight and equal-weight variants, as well as the conventional network decision algorithm, in terms of handover failure probability and number of handovers.
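    A common way to realize such a multi-criteria decision is a weighted sum over normalized criteria. The sketch below illustrates the idea only; the criteria names and weight values are invented, not the paper's, and they contrast an RSS-only decision with a network-weight style decision.

```python
def score(metrics, weights):
    """Weighted sum of normalized criteria (all values assumed in [0, 1])."""
    return sum(weights[k] * metrics[k] for k in weights)

def best_network(candidates, weights):
    """Pick the candidate network with the highest weighted score."""
    return max(candidates, key=lambda name: score(candidates[name], weights))

# Hypothetical normalized measurements for two candidate networks:
candidates = {
    "WLAN": {"rss": 0.9, "bandwidth": 0.6, "load": 0.8},
    "LTE":  {"rss": 0.7, "bandwidth": 0.9, "load": 0.9},
}
rss_only       = {"rss": 1.0, "bandwidth": 0.0, "load": 0.0}
network_weight = {"rss": 0.2, "bandwidth": 0.4, "load": 0.4}
```

    With these numbers the RSS-only rule picks WLAN, while weighting network-side criteria flips the decision to LTE, showing how a single-criterion handover can select a network that is worse overall.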

    An overview of query processing on crowdsourced databases

    Crowd-sourcing is a powerful solution for correctly answering expensive and otherwise unanswerable queries in a database, including queries on databases with uncertain and incomplete data. Crowd-sourcing exploits human abilities to process these difficult tasks, with workers helping to provide accurate results using the data available in the crowd. Crowd-sourcing database systems (CSDB) combine the ability of the crowd with the relational database, typically using a variant of the relational model with minor changes. This paper surveys and examines the leading studies conducted in the area of query processing, for both traditional and preference queries, in crowd-sourcing databases. The focus is on highlighting the strengths and weaknesses of each approach. A detailed discussion of current and future research trends relevant to query processing in crowd-sourced databases is also provided.

    Algorithm for enhancing the QoS of video traffic over wireless mesh networks

    One of the major issues in wireless mesh networks (WMNs) that needs to be solved is the lack of a viable medium access control (MAC) protocol. The main concern is to make the most of limited wireless resources while simultaneously retaining the quality of service (QoS) of all types of traffic, in particular the real-time variable bit rate (rt-VBR) video service. As such, this study attempts to enhance QoS with regard to packet loss, average delay, and throughput by controlling the transmitted video packets, so that the packet loss and average delay of video traffic can be controlled. Simulation results show that Optimum Dynamic Reservation-Time Division Multiple Access (ODR-TDMA) achieved excellent resource utilization that improved the QoS of video packets. This study has also proven the adequacy of the proposed algorithm in minimizing packet delay and packet loss, in addition to enhancing throughput, in comparison to results reported in previous studies.