38 research outputs found

    Analyzing the impact of feature selection on the accuracy of heart disease prediction

    Heart disease has become one of the most serious illnesses affecting human life and has emerged as one of the leading causes of mortality across the globe during the last decade. To prevent further damage to patients, an accurate and timely diagnosis of heart disease is essential. Recently, non-invasive procedures such as artificial intelligence-based techniques have come into use in the medical field. In particular, machine learning employs several algorithms and techniques that are widely used and highly useful for diagnosing heart disease accurately and quickly. However, predicting heart disease is not an easy task: the increasing size of medical datasets makes it complicated for practitioners to understand complex feature relations and make disease predictions. Accordingly, the aim of this research is to identify, from a high-dimensional dataset, the most important risk factors that help in the accurate classification of heart disease with fewer complications. For a broader analysis, we have used two heart disease datasets with various medical features. The classification results of the benchmarked models proved that relevant features have a high impact on classification accuracy. Even with a reduced number of features, the performance of the classification models improved significantly, with reduced training time compared with models trained on the full feature set. Comment: Published in Healthcare Analytics, 202
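    The core idea above, ranking features by relevance to the label and keeping only the top few, can be sketched in plain Python. This is a minimal, hypothetical illustration using absolute Pearson correlation as the relevance score on toy data; the paper's actual datasets, features, and selection technique are not specified here.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_top_k(rows, labels, k):
    """Rank feature columns by |correlation| with the label, keep top k."""
    n_feats = len(rows[0])
    scores = []
    for j in range(n_feats):
        col = [r[j] for r in rows]
        scores.append((abs(pearson(col, labels)), j))
    return sorted(j for _, j in sorted(scores, reverse=True)[:k])

# Toy data: feature 0 tracks the binary label, feature 1 is pure noise.
random.seed(0)
labels = [i % 2 for i in range(20)]
rows = [[y + random.gauss(0, 0.1), random.gauss(0, 1)] for y in labels]
print(select_top_k(rows, labels, 1))  # → [0]
```

    Training a classifier on only the selected columns is what yields the reduced training time the abstract reports; the scoring function itself is interchangeable.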

    MinHash-Based Fuzzy Keyword Search of Encrypted Data across Multiple Cloud Servers

    To enhance the efficiency of data searching, most data owners store their data files on different cloud servers in the form of ciphertext. Efficient search using fuzzy keywords thus becomes a critical issue in such a cloud computing environment. This paper proposes a method that aims at improving the efficiency of ciphertext retrieval and lowering the storage overhead of fuzzy keyword search. In contrast to traditional approaches, the proposed method reduces the complexity of MinHash-based fuzzy keyword search by using MinHash fingerprints to avoid constructing the fuzzy keyword set. The method uses Jaccard similarity to rank the retrieval results, reducing the amount of similarity calculation and saving considerable time and space overhead. It also accommodates multiple user queries through re-encryption technology and updates user permissions dynamically. Security analysis demonstrates that the method provides better privacy preservation, and experimental results show that the proposed method improves retrieval time and lowers storage overhead.
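    The MinHash-plus-Jaccard mechanism the abstract relies on can be shown in a short sketch. This is a generic illustration on character bigrams, not the paper's encrypted-search protocol: the keyword strings, bigram choice, and 64-permutation signature length are all assumptions for the example.

```python
import hashlib

def minhash(tokens, num_hashes=64):
    """MinHash fingerprint: for each seeded hash function, keep the
    minimum hash value over the token set."""
    return [
        min(int.from_bytes(hashlib.sha1(f"{seed}:{t}".encode()).digest()[:8], "big")
            for t in tokens)
        for seed in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def char_bigrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

# Fuzzy match: "network" and the typo "netwrok" share most bigrams,
# so their fingerprints collide far more often than with "privacy".
a = minhash(char_bigrams("network"))
b = minhash(char_bigrams("netwrok"))
c = minhash(char_bigrams("privacy"))
print(estimated_jaccard(a, b), estimated_jaccard(a, c))
```

    Because fingerprints are fixed-length, the server never needs the expanded fuzzy keyword set, which is where the storage saving in the abstract comes from.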

    The Role of Artificial Intelligence within Circular Economy Activities—A View from Ireland

    The world’s current linear economic model is unsustainable. This model encourages improper use of limited natural resources and causes abundant waste production, resulting in severe harm to the environment. A circular economy (CE) is a sustainable, restorative, and regenerative alternative to the current linear economy and is gaining popularity worldwide. Amongst various digital technologies, artificial intelligence (AI) is a crucial enabler for CE and can aid significantly with the adoption and implementation of CE in real-world applications. In this paper, we describe the intersection of AI and CE and policies around implementing CE principles using AI. As a means of grounding the discussion, we examine some initiatives taken by the Irish government to adopt circularity and explore the role AI plays in these. We present a number of practical examples of AI and CE from Ireland. We argue that digitalisation has potential in CE and a major role to play in the transition towards it. We close the paper by reflecting on future steps around practical implementations of AI-based CE processes.

    An Adaptive Privacy Protection Method for Smart Home Environments Using Supervised Learning

    In recent years, smart home technologies have come into wide use, bringing a great deal of convenience to people’s daily lives. At the same time, privacy issues have become particularly prominent. Traditional encryption methods can no longer meet the needs of privacy protection in smart home applications, since attacks can be launched without access to the ciphertext: by analyzing the frequency of radio signals and the series of timestamps, the daily activities of the residents of a smart home can be learnt. Such attacks achieve a very high success rate, making them a great threat to users’ privacy. In this paper, we propose an adaptive method based on sample data analysis and supervised learning (SDASL) to hide the patterns of residents’ daily routines while adapting to dynamically changing network loads. Compared to some existing solutions, the proposed method exhibits advantages such as low energy consumption, low latency, strong adaptability, and effective privacy protection.
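    The timing-analysis threat described above is usually countered by decoupling observable transmissions from real events. The sketch below shows only the simplest fixed-rate padding baseline, where every time slot carries a packet whether or not a real event occurred; the paper's SDASL method adapts this rate with supervised learning, which is beyond this illustration, and the interval, horizon, and event times are invented for the example.

```python
def pad_events(event_times, interval=1.0, horizon=5.0):
    """Emit one packet per fixed time slot; slots with no real event
    carry dummy traffic, so packet timing no longer reveals activity."""
    slots = []
    t = 0.0
    while t < horizon:
        real = any(t <= e < t + interval for e in event_times)
        slots.append((round(t, 1), "real" if real else "dummy"))
        t += interval
    return slots

# Real events cluster between t=1 and t=3; the padded stream is uniform.
schedule = pad_events([1.2, 1.7, 2.4])
print(schedule)
```

    An eavesdropper observing the padded stream sees one transmission per slot regardless of resident activity; the cost is the extra dummy traffic, which is what an adaptive scheme tries to minimise.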

    Network Forensics Method Based on Evidence Graph and Vulnerability Reasoning

    As the Internet becomes larger in scale, more complex in structure, and more diversified in traffic, the number of crimes that utilize computer technologies is also increasing at a phenomenal rate. The field of computer and network forensics has emerged in reaction to this increase. The general purpose of network forensics is to find malicious users or activities by gathering and dissecting firm evidence about computer crimes, e.g., hacking. However, due to the large volume of Internet traffic, not all the traffic captured and analyzed is valuable for investigation or confirmation. After analyzing some existing network forensics methods to identify their common shortcomings, we propose in this paper a new network forensics method that combines network vulnerability information with a network evidence graph. In our proposed method, we use vulnerability evidence and a reasoning algorithm to reconstruct attack scenarios and then backtrack the network packets to find the original evidence. The proposed method can reconstruct attack scenarios effectively and identify multi-staged attacks through evidential reasoning. Experimental results show that the evidence graph constructed using our method is more complete and credible while also supporting reasoning over the evidence.
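    The backtracking step in the abstract, walking the evidence graph backwards from a compromised host to reconstruct the attack path, can be sketched with a simple reverse depth-first search. The host names, evidence labels, and three-stage attack below are hypothetical; the paper's actual graph model and reasoning algorithm are richer than this.

```python
def reconstruct_scenarios(edges, goal):
    """Walk an evidence graph backwards from the compromised goal node,
    collecting every attack path that leads to it."""
    incoming = {}
    for src, dst, evidence in edges:
        incoming.setdefault(dst, []).append((src, evidence))
    paths = []
    def walk(node, suffix):
        preds = incoming.get(node, [])
        if not preds:           # reached an entry point: one full scenario
            paths.append(suffix)
            return
        for src, ev in preds:
            walk(src, [(src, node, ev)] + suffix)
    walk(goal, [])
    return paths

# Hypothetical multi-stage attack: scan -> exploit -> lateral movement.
edges = [
    ("attacker", "web-srv",  "port scan"),
    ("web-srv",  "db-srv",   "SQL injection"),
    ("db-srv",   "file-srv", "credential reuse"),
]
for path in reconstruct_scenarios(edges, "file-srv"):
    print(" -> ".join(f"{s}=>{d} [{e}]" for s, d, e in path))
```

    Each reconstructed path names the evidence on every hop, which is what lets an investigator backtrack to the original packets for that stage.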

    Analyzing Impact of Time on Early Detection of Rainfall Event

    Rainfall is a critical feature of the climate system, with a chaotic impact on agriculture, water resource management, and biological systems. Early and accurate prediction of rainfall is very important and has vital effects on human life, yet it remains a challenging task in meteorology: rainfall data mostly exhibit high variability and irregular patterns that are rare in other time series. Because rainfall changes greatly with time, the time factor is highly important in such data, and developing efficient forecast models requires a deep analysis of its effect on prediction accuracy. Therefore, in this paper we analyze how early a rainfall event can be predicted accurately. We performed experiments on 5 years of daily rainfall data obtained from the National Oceanic and Atmospheric Administration (NOAA) at 1-, 2-, 3-, and 4-day prior forecasting horizons using a machine learning technique to observe trends in prediction accuracy. Furthermore, we identified the input features from the dataset that play a major role in predicting a rainfall event. Our results show that average wind speed and minimum temperature are the most important weather variables for classifying rainfall events, and that the forecasting error gradually increases with increasing lead time.
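    Building the 1- to 4-day-ahead experiments above amounts to pairing each day's features with the rain label the chosen lead time in the future. The sketch below shows that label-shifting step on an invented 7-day record; the feature values, and the choice of wind/temperature columns, are illustrative, not taken from the NOAA data.

```python
def make_lead_dataset(features, rain, lead):
    """Pair day-t features with the rain/no-rain label `lead` days ahead.
    The last `lead` days have no future label and are dropped."""
    X, y = [], []
    for t in range(len(features) - lead):
        X.append(features[t])
        y.append(rain[t + lead])
    return X, y

# Toy 7-day record: features are (avg_wind, min_temp), labels rain flags.
feats = [(3.1, 12.0), (5.4, 10.5), (2.2, 14.1), (6.0, 9.8),
         (4.7, 11.2), (1.9, 15.3), (5.1, 10.1)]
rain = [0, 1, 0, 1, 1, 0, 1]
for lead in (1, 2, 3, 4):
    X, y = make_lead_dataset(feats, rain, lead)
    print(f"lead={lead}: {len(X)} samples, labels {y}")
```

    Training one classifier per lead time on these shifted datasets is what exposes the trend the abstract reports: accuracy degrades as the lead grows.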

    An Efficient Trust-Based Scheme for Secure and Quality of Service Routing in MANETs

    Due to the dynamism of topology, the sharing of bandwidth, and the resource constraints of wireless nodes, providing quality of service (QoS) for routing in mobile ad hoc networks (MANETs) presents a great challenge. Security is another crucial aspect of providing QoS, since the presence of malicious nodes exposes MANETs to all kinds of threats. Although a number of mechanisms have been proposed for protecting MANETs, most solutions are effective only against a particular kind of attack or provide security at the cost of sacrificing QoS. In this paper, we propose a trust-based secure QoS routing scheme that combines social and QoS trust. The primary approach of the proposed scheme relies on mitigating nodes that exhibit packet-forwarding misbehavior and on discovering, through the trust mechanism, a path that ensures reliable communication. The scheme selects the best forwarding node based on packet-forwarding behavior as well as capability in terms of QoS parameters such as residual energy, channel quality, and link quality. We present an adversary model for the packet-dropping attack, against which we evaluate the proposed scheme. Simulation experiments using Network Simulator-2 (NS2) under various network conditions show that mixing social and QoS trust parameters can greatly improve security and quality of service routing in terms of overhead, packet delivery ratio, and energy consumption.
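    One simple way to realise the "combine social and QoS trust" idea is a weighted score per neighbour, with misbehaving nodes filtered out before selection. The weighting, threshold, and neighbour table below are invented for illustration; the paper's actual trust computation is not reproduced here.

```python
def composite_trust(social, qos, alpha=0.5):
    """Weighted mix of social trust (observed forwarding behaviour) and
    QoS trust (residual energy, channel/link quality), both in [0, 1]."""
    return alpha * social + (1 - alpha) * qos

def pick_next_hop(neighbours, alpha=0.5, misbehaviour_threshold=0.4):
    """Exclude neighbours whose social trust signals packet dropping,
    then choose the highest composite trust among the rest."""
    candidates = {
        node: composite_trust(social, qos, alpha)
        for node, (social, qos) in neighbours.items()
        if social >= misbehaviour_threshold
    }
    return max(candidates, key=candidates.get) if candidates else None

# Hypothetical neighbour table: (social trust, QoS trust) per node.
neighbours = {
    "n1": (0.9, 0.6),   # forwards reliably, decent link
    "n2": (0.2, 0.9),   # drops packets despite a good link: filtered out
    "n3": (0.7, 0.7),
}
print(pick_next_hop(neighbours))  # → n1
```

    Filtering on social trust before scoring is what keeps a dropper with excellent link quality (n2 above) out of the forwarding path.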

    Efficient Forecasting of Precipitation Using LSTM

    Precipitation is one of the many critical elements of the hydrological cycle with a direct impact on human life. Accurate and early detection of a future precipitation event can help prevent human and financial losses, so it is vital to design a framework that can predict precipitation with significant accuracy. Accordingly, in this paper we propose a Long Short-Term Memory (LSTM) based forecast model that predicts precipitation values efficiently from historical values. The design of the forecast model is kept simple to avoid heavy training time. Furthermore, a transformation technique was applied to the precipitation dataset to make the data closer to a normal distribution. The proposed model was evaluated in terms of Root Mean Square Error (RMSE); it achieves an RMSE of 6.186 and outperforms the traditional persistence and average forecast models.
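    The two baselines the LSTM is compared against are simple enough to sketch directly: persistence predicts that tomorrow equals today, and the average model predicts the running historical mean. The toy precipitation series below is invented; the LSTM itself needs a deep learning framework and is not reproduced here.

```python
import math

def rmse(pred, actual):
    """Root Mean Square Error between predictions and observations."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

def persistence_forecast(series):
    """Predict each day as a copy of the previous day's value."""
    return series[:-1]

def average_forecast(series):
    """Predict each day as the mean of all preceding values."""
    return [sum(series[:t]) / t for t in range(1, len(series))]

# Toy daily precipitation (mm); day t's forecast targets day t+1.
precip = [0.0, 2.1, 0.0, 5.3, 1.2, 0.0, 3.4]
actual = precip[1:]
print("persistence RMSE:", rmse(persistence_forecast(precip), actual))
print("average RMSE:    ", rmse(average_forecast(precip), actual))
```

    A learned model earns its keep only by beating both of these numbers on held-out data, which is the comparison behind the reported 6.186 RMSE.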