24 research outputs found

    Impact of the energy-based and location-based LEACH secondary cluster aggregation on WSN lifetime

    The improvement of sensor networks' lifetime has been a major research challenge in recent years, because sensor nodes are battery powered and may be difficult to replace once deployed. The low energy adaptive clustering hierarchy (LEACH) routing protocol was proposed to prolong sensor node lifetime by dividing the network into clusters. In each cluster, a cluster head (CH) node receives and aggregates data from the other nodes. However, CH nodes in LEACH are elected randomly, which leads to a rapid loss of network energy when the CH has a low energy level or is far from the base station (BS). The LEACH with two-level cluster head (LEACH-TLCH) protocol deploys a secondary cluster head (2CH) to relieve the cluster head's burden in these circumstances. However, LEACH-TLCH does not consider the CH-to-BS distance, or the CH energy level, at which the 2CH should be deployed to achieve an optimal network lifetime. After a survey of the related literature, we improved on LEACH-TLCH by investigating the conditions under which the 2CH should be deployed for an optimal network lifetime. Experiments were conducted to show how the 2CH affects the network at different CH energy levels and/or CH distances to the BS; we refer to this as factor-based LEACH (FLEACH). Investigations in FLEACH show that as CHs get farther from the BS, deploying a 2CH extends the network lifetime; similarly, lifetime increases as the CH energy decreases when the 2CH is deployed. We further propose FLEACH-E, which uses deterministic CH selection with the 2CH deployed from the outset of network operation. Results show improved performance over existing state-of-the-art homogeneous routing protocols.
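The deployment conditions investigated here can be sketched as a simple check on each elected CH. This is an illustrative sketch, not the paper's implementation: the election threshold follows the standard LEACH formula, while the energy and distance thresholds, the node representation, and the function names are assumptions made for illustration.

```python
import math
import random

def elect_cluster_heads(nodes, p=0.1, round_no=0):
    """Stochastic LEACH-style CH election: each node becomes a CH with
    the rotating threshold probability T(n) for the current round."""
    threshold = p / (1 - p * (round_no % int(1 / p)))
    return [n for n in nodes if random.random() < threshold]

def needs_secondary_ch(ch, bs_pos, energy_threshold=0.5, distance_threshold=75.0):
    """Deploy a 2CH when the elected CH is weak or far from the base
    station -- the two factors FLEACH investigates. The threshold values
    here are illustrative, not the tuned values from the study."""
    distance_to_bs = math.dist(ch["pos"], bs_pos)
    return ch["energy"] < energy_threshold or distance_to_bs > distance_threshold
```

A CH with 0.2 J residual energy, for example, would trigger 2CH deployment regardless of its distance to the BS.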

    Malware classification framework for dynamic analysis using Information Theory

    Objectives: 1. To propose a framework for a Malware Classification System (MCS) that analyzes malware behavior dynamically using concepts from information theory and a machine learning technique. 2. To extract behavioral patterns from malware execution reports in terms of features and generate a data repository. 3. To select the most promising features using information-theoretic concepts. Methods/Statistical Analysis: Today, malware is a major concern of computer security experts. The variety and increasing number of malware affect millions of systems in the form of viruses, worms, Trojans, etc. Many techniques have been proposed to classify malware accurately. Some analysis techniques examine malware based on its structure, code flow, etc. without executing it (static analysis), whereas others (dynamic analysis) monitor the behavior of malware by executing it and comparing it with known malware behavior. Dynamic analysis has proved effective in malware detection, as behavior is more difficult to mask during execution than the underlying code is. In this study, we propose a framework for a Malware Classification System (MCS) that analyzes malware behavior dynamically using concepts from information theory and a machine learning technique. The proposed framework extracts behavioral patterns from malware execution reports in terms of features and generates a data repository. It then selects the most promising features using information-theoretic concepts. Findings: The proposed framework detects the family of unknown malware samples after a classifier is trained on the malware data repository. We validated the applicability of the proposed framework by comparing it with another dynamic malware analysis technique on a real malware dataset from VirusTotal.
    Application: The proposed framework is a Malware Classification System (MCS) that analyzes malware behavior dynamically using concepts from information theory and a machine learning technique.
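The information-theoretic feature selection step can be sketched as computing the information gain of each behavioral feature with respect to the malware family label. This is a minimal sketch under the assumption that each execution report has been reduced to a dictionary of boolean behavioral features (e.g. "was a given API call observed"); the representation and names are illustrative, not the framework's actual data model.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(samples, labels, feature):
    """Information gain of one boolean behavioral feature with respect
    to the malware family label: H(labels) minus the weighted entropy
    of the label subsets split on the feature value."""
    gain = entropy(labels)
    for value in (True, False):
        subset = [lab for s, lab in zip(samples, labels)
                  if s.get(feature, False) == value]
        if subset:
            gain -= (len(subset) / len(labels)) * entropy(subset)
    return gain
```

Features are then ranked by gain and only the top-scoring ones are kept for classifier training.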

    Fusion of global shape and local features using meta-classifier framework.

    In computer vision, objects in an image can be described using many features, such as shape, color, texture, and local features, and the number of dimensions differs for each feature type. From a recognition point of view, the underlying belief is that the more features used, the better the recognition performance. However, more features do not necessarily yield better performance: the higher-dimensional vectors resulting from fusion may contain irrelevant or noisy features that degrade classifier performance, and repetitive, potentially useless information further escalates the 'curse of dimensionality' problem. Consequently, unwanted and irrelevant features are usually removed from the combined feature set. Although this technique provides promising recognition performance, it is inefficient in terms of the computational time needed for model building. This study proposes a meta-classifier framework that ensures no relevant feature is ignored while keeping computational time minimal. In this framework, individual classifiers are trained on the global shape and local features, respectively; these classifiers' results are then combined as input to the meta-classifier. Experimental results are comparable or superior to existing state-of-the-art works for object class recognition.
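The fusion step described above follows the general stacking pattern: each base classifier emits class probabilities, and those probabilities, rather than the raw features, feed the meta-classifier. A minimal sketch, assuming two base classifiers and a simple weighted-average meta-classifier standing in for whatever learner the framework actually uses:

```python
def meta_features(base_outputs):
    """Concatenate the class-probability vectors produced by the base
    classifiers (e.g. one trained on global shape, one on local
    features) into a single input vector for the meta-classifier."""
    return [p for probs in base_outputs for p in probs]

def weighted_meta_classifier(base_outputs, weights):
    """A minimal linear meta-classifier: weighted average of the base
    classifiers' probability vectors, predicting the argmax class.
    The weights stand in for whatever the trained meta-classifier
    learns from the base outputs."""
    n_classes = len(base_outputs[0])
    combined = [
        sum(w * probs[c] for w, probs in zip(weights, base_outputs))
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=combined.__getitem__)
```

The design point is that the meta-classifier's input dimension is (number of base classifiers x number of classes), which stays small regardless of how large the original shape and local feature vectors are.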

    Performance Evaluation of Intrusion Detection System using Selected Features and Machine Learning Classifiers

    Some of the main challenges in developing an effective network-based intrusion detection system (IDS) are analyzing large network traffic volumes and realizing the decision boundaries between normal and abnormal behaviors. Deploying feature selection together with efficient classifiers in the detection system can overcome these problems. Feature selection finds the most relevant features, thereby reducing the dimensionality and the complexity of analyzing the network traffic. Moreover, building the predictive model from only the most relevant features reduces the complexity of the resulting model, shortens the time needed to build the classifier, and consequently improves detection performance. In this study, two different sets of selected features were used to train four machine-learning classifiers. The two sets of selected features are based on the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) approaches, respectively; these evolutionary algorithms are known to be effective in solving optimization problems. The classifiers used in this study are Naïve Bayes, k-Nearest Neighbor, Decision Tree, and Support Vector Machine, trained and tested on the NSL-KDD dataset. The performance of these classifiers was evaluated using different feature values. The experimental results indicate that detection accuracy improves by approximately 1.55% with the PSO-based selected features compared with the GA-based selected features. The Decision Tree classifier trained with the PSO-based selected features outperformed the other classifiers, with accuracy, precision, recall, and F-score of 99.38%, 99.36%, 99.32%, and 99.34%, respectively. The results show that coupling optimal features with a good classifier enables a detection system to reduce model building time, reduce the computational burden of analyzing data, and consequently attain a high detection rate.
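When PSO is used for feature selection, each particle is typically a binary mask over the feature set and the velocity is squashed through a sigmoid to give the probability of selecting each feature. The sketch below shows one binary-PSO update step; the inertia and acceleration coefficients are common textbook defaults, not the values used in this study.

```python
import math
import random

def bpso_update(position, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One binary-PSO step over a feature mask (1 = feature selected).

    position/velocity: the particle's current mask and velocity.
    pbest/gbest: the particle's and swarm's best-known masks, as scored
    by a fitness function (e.g. classifier accuracy on the subset)."""
    new_pos, new_vel = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        # Standard velocity update pulled toward personal and global bests.
        v = w * v + c1 * random.random() * (pb - x) + c2 * random.random() * (gb - x)
        # Sigmoid transfer function maps velocity to selection probability.
        s = 1.0 / (1.0 + math.exp(-v))
        new_pos.append(1 if random.random() < s else 0)
        new_vel.append(v)
    return new_pos, new_vel
```

The fitness of each candidate mask would be evaluated by training a classifier (e.g. the Decision Tree) on only the selected features and measuring its detection accuracy.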

    A performance optimization model of task scheduling towards green cloud computing

    Cloud computing has become a powerful trend in the development of ICT services. It allows dynamic resource scaling from a seemingly infinite resource pool to support Cloud users. This scenario demands ever larger computing infrastructure and greater processing power. Demand for cloud computing grows continually, which shifts attention to green cloud computing, whose aim is to reduce energy consumption in Cloud computing while maintaining good performance. However, there is a lack of performance metrics for analyzing the trade-off between energy consumption and performance. Considering the high volume of mixed user requirements and the diversity of services offered, an appropriate performance model is needed to achieve a better balance between Cloud performance and energy consumption. In this work, we address green Cloud computing through a scheduling optimization model. Specifically, we investigate the relationship between the performance metrics chosen in scheduling approaches and energy consumption. From this relationship, we develop an energy-based performance model that gives a clear picture of parameter selection in scheduling for effective energy management. We believe that a better understanding of how to model scheduling performance will lead to green Cloud computing.
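A common way to capture the performance/energy trade-off such a model must expose is to score a schedule by both its makespan and the energy the machines spend busy and idle until that makespan. The sketch below is a generic illustration of that idea, not the paper's model; the power figures and the energy-delay product as the combined metric are assumptions.

```python
def machine_loads(schedule, task_times):
    """Total busy time per machine under a task -> machine mapping."""
    loads = {}
    for task, machine in schedule.items():
        loads[machine] = loads.get(machine, 0.0) + task_times[task]
    return loads

def makespan(schedule, task_times):
    """Completion time of the busiest machine."""
    return max(machine_loads(schedule, task_times).values())

def energy_delay_product(schedule, task_times, busy_power, idle_power, n_machines):
    """Combined trade-off metric: total energy consumed until the
    makespan (busy machines at busy_power, idle capacity at idle_power)
    multiplied by the makespan itself. Lower is better."""
    ms = makespan(schedule, task_times)
    busy = sum(machine_loads(schedule, task_times).values())
    idle = n_machines * ms - busy
    energy = busy_power * busy + idle_power * idle
    return energy * ms
```

A scheduler can then compare candidate mappings by this single score instead of optimizing makespan alone, which is the kind of parameter relationship the proposed model is meant to make explicit.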

    Sensor communication model using cyber-physical system approach for green data center

    Energy consumption in distributed computing systems has gained much attention recently, as their processing capacity has become significant for better business and economic operations. A comprehensive analysis of energy efficiency in a high-performance data center for distributed processing requires the ability to monitor resource utilization relative to energy consumption. To achieve a green data center while sustaining computational performance, a model of energy-efficient cyber-physical communication is proposed. Real-time sensor communication is used to monitor the heat emitted by processors and the room temperature. Specifically, our cyber-physical communication model dynamically identifies the processing states in the data center and suggests a suitable air-conditioning temperature level. This information is then used by the administration to fine-tune the room temperature according to the current processing activities. Our automated triggering approach aims to improve edge computing performance with cost-effective energy consumption. Simulation experiments show that our cyber-physical communication achieves better energy consumption and resource utilization compared with other cooling models.
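The triggering idea can be illustrated as a simple mapping from the observed processing state (hottest processor reading) to a suggested air-conditioning setpoint. The thresholds, step sizes, and function name below are illustrative assumptions, not the model's calibrated values.

```python
def cooling_setpoint(cpu_temps, room_temp, high=70.0, low=45.0):
    """Suggest an air-conditioning setpoint from sensor readings.

    cpu_temps: latest per-processor temperatures (degrees C).
    room_temp: current room temperature (degrees C).
    Heavy processing (hot CPUs) lowers the setpoint; an idle data
    center relaxes cooling to save energy."""
    hottest = max(cpu_temps)
    if hottest >= high:
        return room_temp - 2.0   # heavy processing: cool harder
    if hottest <= low:
        return room_temp + 2.0   # mostly idle: relax cooling
    return room_temp             # steady state: hold the setpoint
```

In the model, this suggestion would be pushed to the administration (or an automated controller) each monitoring interval rather than applied blindly.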

    A privacy-based classification model for public data using a two-stage validation approach

    Digital information has become a trend and is essential for modernizing and leveraging various resources in Information Technology (IT). Vast amounts of data and information can be obtained anytime and anywhere, at our fingertips, through ICT facilities. Such data are considered public because they are shared openly, as on social media. Public data can be organized according to various criteria and formats. Users have the right to understand which data may be shared openly and which should remain private. However, people constantly misunderstand and confuse which data should be secured and which may be shared. This is even more critical given that public data are already exposed to data breaches and data theft. In this work, we propose a data-privacy classification approach for public data residing on digital platforms. It aims to inform the public of the privacy level of their data before they disclose it on open, free digital platforms. We use three privacy classes: low, medium, and high. Accordingly, we identify public-data entities on digital information platforms such as websites, mobile applications, and online systems, and then drill down into the data attributes of each entity. The public-data attributes are organized and given to respondents to obtain their input on the appropriate privacy class for each attribute. Based on the respondents' input, we then use a Naive Bayes classifier to generate probability weights that reassign the data attributes into the most suitable privacy classes. This two-stage data classification brings a better perspective on data privacy.
    The revised version of the public-data privacy classes was then validated by the respondents to analyze their preferences while measuring user satisfaction. According to the results, our public-data privacy classification model meets public expectations. Optimistically, well-organized data classification contributes to better data practice.
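The reassignment stage can be illustrated as estimating, from the respondents' votes on one attribute, a smoothed probability for each privacy class and assigning the attribute to the most probable one. This is a deliberately simplified stand-in for the Naive Bayes weighting described above, with Laplace smoothing assumed.

```python
from collections import Counter

PRIVACY_CLASSES = ("low", "medium", "high")

def privacy_class_posterior(votes):
    """Estimate P(class) for one data attribute from respondents' votes,
    with +1 (Laplace) smoothing so no class gets zero probability."""
    counts = Counter(votes)
    total = len(votes) + len(PRIVACY_CLASSES)
    return {c: (counts[c] + 1) / total for c in PRIVACY_CLASSES}

def reassign(votes):
    """Second-stage reassignment: the attribute goes to the privacy
    class with the highest smoothed probability."""
    posterior = privacy_class_posterior(votes)
    return max(posterior, key=posterior.get)
```

For example, an attribute voted high/high/medium by three respondents would be reassigned to the "high" class.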

    Performance evaluation of an adaptive forwarding strategy in Named Data Networking

    Named Data Networking (NDN) is an envisioned Internet architecture that uses named data, rather than the IP address of the host storing the data, to locate content of interest. The forwarding strategy is critical in this network to ensure data are received in a timely manner. Stochastic Adaptive Forwarding (SAF) is reported to increase throughput and provide quick recovery, as it efficiently chooses an alternative forwarding link whenever the existing link fails. SAF is designed to consider both the context and the content of the network to optimize its forwarding behavior. This paper compares the performance of the SAF and Best Route algorithms in terms of Interest satisfaction ratio, cache hit ratio, delay, Interest retransmission rate, and hop count. The results show that SAF outperforms Best Route on all metrics except hop count.
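The adaptive behavior being evaluated can be illustrated with a probability-weighted face choice plus a reinforcement-style weight update. This is a simplified stand-in for SAF's measurement-based adaptation, not its actual algorithm; the learning rate and function names are assumptions.

```python
import random

def choose_face(faces, weights):
    """Stochastically pick a forwarding face in proportion to its
    current reliability weight, as a stochastic strategy distributes
    Interests across available faces."""
    return random.choices(faces, weights=weights, k=1)[0]

def update_weights(weights, face_idx, satisfied, lr=0.1):
    """Reinforce a face after a satisfied Interest, penalize it after a
    timeout, then renormalize so the weights remain a distribution.
    This lets traffic shift to working links when one fails."""
    weights = weights[:]
    weights[face_idx] = max(1e-6, weights[face_idx] + (lr if satisfied else -lr))
    total = sum(weights)
    return [w / total for w in weights]
```

Repeated timeouts on one face drive its weight toward zero, so subsequent Interests flow out of the surviving faces, which is the quick-recovery behavior the evaluation measures.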