
    A new hybrid model of dengue incidence rate using negative binomial generalised additive model and fuzzy c-means model: a case study in Selangor

    Dengue is one of the leading causes of illness and mortality in the world, with more than one-third of the world's population living in areas at risk of dengue infection. In this study, there are five stages to achieve the research objectives. Firstly, the verification of predetermined variables. Secondly, the identification of new datasets after clustering by district and by the Fuzzy C-Means model (FCM). Thirdly, the development of models using the existing dataset and the new datasets produced by the two different clustering categories. Then, the assessment of the developed models using three measures: deviance (D), Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC). Lastly, the validation of the developed models by comparing the values of D, AIC and BIC between the existing model and the new models built on the new datasets. Two different clustering techniques are applied: clustering the data by district and by FCM. This study proposes a new hybrid modelling framework using two statistical models, FCM and the negative binomial Generalised Additive Model (GAM). The study successfully presents the significant differences in the climatic and non-climatic factors that influence the dengue incidence rate (DIR) in Selangor, Malaysia. Results show that climatic factors such as rainfall from the current month up to a lag of 3 months and the number of rainy days from the current month up to a lag of 3 months are significant to DIR. The interaction between rainfall and the number of rainy days also shows a strong positive relationship with DIR. Meanwhile, non-climatic variables such as population density, number of localities and lagged DIR from 1 month to 3 months also show a significant relationship with DIR. For both clustering techniques, two clusters are formed and four new models are developed in this study. After comparing the values of D, AIC and BIC between the existing model and the new models, this study concludes that the four new models record lower values than the existing model. Therefore, the four new models are selected to represent dengue incidence in Selangor.
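    As a rough illustration of the clustering-then-modelling idea described above, the sketch below implements a minimal Fuzzy C-Means routine from scratch and fits a separate negative binomial regression to each resulting cluster, comparing D, AIC and BIC per cluster. It uses a plain negative binomial GLM from statsmodels as a simplified stand-in for the study's GAM, and all variable names and synthetic data are illustrative, not the Selangor dataset.

```python
import numpy as np
import statsmodels.api as sm

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centres and the fuzzy membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # n x c memberships, rows sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]  # fuzzily weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Hypothetical monthly, district-level records: covariates and dengue case counts.
rng = np.random.default_rng(1)
n = 200
rainfall    = rng.gamma(5.0, 30.0, n)           # mm per month (illustrative)
rainy_days  = rng.integers(5, 25, n).astype(float)
pop_density = rng.gamma(3.0, 500.0, n)          # persons per km^2 (illustrative)
cases       = rng.negative_binomial(5, 0.3, n)  # stand-in for observed dengue counts

X = np.column_stack([rainfall, rainy_days, pop_density])
Xz = (X - X.mean(0)) / X.std(0)                 # standardise before clustering
_, U = fuzzy_c_means(Xz, c=2)
labels = U.argmax(axis=1)                       # hard assignment of each record to a cluster

# Fit one negative binomial regression per cluster and compare the fit criteria
# used in the study (deviance, AIC, BIC).
for k in range(2):
    idx = labels == k
    fit = sm.GLM(cases[idx], sm.add_constant(X[idx]),
                 family=sm.families.NegativeBinomial()).fit()
    print(f"cluster {k}: D={fit.deviance:.1f}  AIC={fit.aic:.1f}  BIC={fit.bic:.1f}")
```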

    Smart home technology for aging

    The majority of the growing population, in the US and the rest of the world, requires some degree of formal and/or informal care, either due to loss of function or failing health as a result of aging, and most of these people suffer from chronic disorders. The cost and burden of caring for elders is steadily increasing. This thesis provides an analysis of the technologies with which a Smart Home is built to improve the quality of life of the elderly. A great deal of emphasis is given to the sensor technologies that are the backbone of these Smart Homes. In addition to the analysis of these technologies, a survey is presented of commercial sensor products and research products concerned with monitoring the health of the occupants of the Smart Home. A brief analysis of the communication technologies which form the communication infrastructure of the Smart Home is also given. Finally, a system architecture for the Smart Home is proposed, describing the functionality and users of the system, and its feasibility is discussed. A scenario measuring the blood glucose level of an occupant of a Smart Home is presented to support the proposed system architecture.
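    The blood glucose scenario suggests a simple sensor-to-alert flow; the hypothetical sketch below illustrates one way such a check might look. The reading structure, thresholds and notification callback are purely illustrative and are not taken from the thesis's architecture.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GlucoseReading:
    occupant_id: str
    mg_per_dl: float      # blood glucose concentration
    timestamp: float      # seconds since epoch

def check_reading(reading: GlucoseReading,
                  notify: Callable[[str], None],
                  low: float = 70.0, high: float = 180.0) -> None:
    """Compare a reading against illustrative thresholds and alert a caregiver if needed."""
    if reading.mg_per_dl < low:
        notify(f"{reading.occupant_id}: LOW glucose ({reading.mg_per_dl:.0f} mg/dL)")
    elif reading.mg_per_dl > high:
        notify(f"{reading.occupant_id}: HIGH glucose ({reading.mg_per_dl:.0f} mg/dL)")

# Example: a reading pushed by a (hypothetical) in-home glucose sensor.
check_reading(GlucoseReading("occupant-1", 64.0, 1_700_000_000.0), notify=print)
```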

    Symbol level decoding of Reed-Solomon codes with improved reliability information over fading channels

    A thesis submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy in the School of Electrical and Information Engineering, 2016. Reliable and efficient data transmission has been the subject of current research, most especially over realistic channels such as Rayleigh fading channels. The focus of every new technique is to improve transmission reliability and to increase the transmission capacity of communication links so that more information can be transmitted. Modulation schemes such as M-ary Quadrature Amplitude Modulation (M-QAM) and Orthogonal Frequency Division Multiplexing (OFDM) were developed to increase the transmission capacity of communication links without additional bandwidth expansion, and to reduce the design complexity of communication systems. On the other hand, due to the varying nature of communication channels, message transmission reliability depends on several factors, including the channel estimation techniques and Forward Error Correction (FEC) schemes used to improve message reliability. Innumerable channel estimation techniques have been proposed, independently and in combination with different FEC schemes, in order to improve message reliability. The emphasis has been on improving channel estimation performance, bandwidth and power consumption, and the implementation time complexity of the estimation techniques. Of particular interest, FEC schemes such as Reed-Solomon (RS) codes, Turbo codes, Low Density Parity Check (LDPC) codes, Hamming codes, and Permutation codes have been proposed to improve the message transmission reliability of communication links. Turbo and LDPC codes have been used extensively to combat the varying nature of communication channels, most especially in joint iterative channel estimation and decoding receiver structures. In this thesis, attention is focused on using RS codes to improve the message reliability of a communication link, because RS codes have a good capability of correcting random and burst errors and are useful in different wireless applications. This study concentrates on symbol-level soft decision decoding of RS codes. In this regard, a novel symbol-level iterative soft decision decoder for RS codes based on parity-check equations is developed. This Parity-check matrix Transformation Algorithm (PTA) uses soft reliability information derived from the channel output to perform syndrome checks in an iterative process. Performance analysis verifies that the developed PTA outperforms conventional RS hard decision decoding algorithms and the symbol-level Koetter and Vardy (KV) RS soft decision decoding algorithm. In addition, this thesis develops an improved Distance Metric (DM) method of deriving reliability information over Rayleigh fading channels for combined demodulation with symbol-level RS soft decision decoding algorithms. The newly proposed DM method incorporates the channel state information when deriving the soft reliability information over Rayleigh fading channels. Analysis verifies that the developed metric enhances the performance of symbol-level RS soft decision decoders in comparison with the conventional method.
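To make the role of channel state information concrete, the sketch below contrasts symbol reliabilities computed with and without the Rayleigh channel gain for a single noisy QPSK symbol. It is only a generic likelihood-style metric under assumed parameters, not the thesis's Distance Metric; per-symbol reliabilities of this kind are what a symbol-level soft decision decoder such as the PTA or KV algorithm would consume.

```python
import numpy as np

rng = np.random.default_rng(0)
constellation = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)  # unit-energy QPSK

def symbol_reliabilities(y, h, noise_var, use_csi=True):
    """Probability-like reliabilities of each candidate symbol for one received sample.

    use_csi=True  : distances computed against the faded candidates h*s (CSI used)
    use_csi=False : channel ignored, distances computed against the candidates s directly
    """
    ref = h * constellation if use_csi else constellation
    d2 = np.abs(y - ref) ** 2
    metric = np.exp(-d2 / noise_var)          # likelihood-style weighting of each candidate
    return metric / metric.sum()

# One Rayleigh-faded, noisy symbol (illustrative parameters).
s = constellation[2]
h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)   # Rayleigh channel gain
noise_var = 0.1
y = h * s + np.sqrt(noise_var / 2) * (rng.normal() + 1j * rng.normal())

print("with CSI   :", np.round(symbol_reliabilities(y, h, noise_var, True), 3))
print("without CSI:", np.round(symbol_reliabilities(y, h, noise_var, False), 3))
```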
Although, in this thesis, the performance of the developed DM method of deriving soft reliability information over Rayleigh fading channels is only verified for symbol-level RS soft decision decoders, it is applicable to any symbol-level soft decision decoding FEC scheme. Moreover, the performance of all FEC decoding schemes plummets as a result of Rayleigh fading channels. This has motivated the development of joint iterative channel estimation and decoding receiver structures to improve message reliability, most especially with Turbo and LDPC codes as the FEC schemes. As such, this thesis develops the first joint iterative channel estimation and Reed-Solomon decoding receiver structure. Essentially, the joint iterative channel estimation and RS decoding receiver is developed based on the existing symbol-level soft decision KV algorithm. Subsequently, the joint iterative channel estimation and RS decoding receiver is extended to the developed RS parity-check matrix transformation algorithm. The PTA provides design ease and flexibility, and lower computational time complexity in an iterative receiver structure in comparison with the KV algorithm. Generally, the findings of this thesis are relevant to improving the message transmission reliability of a communication link with RS codes. For instance, they are pertinent to numerous data transmission technologies such as Digital Audio Broadcasting (DAB), Digital Video Broadcasting (DVB), Digital Subscriber Line (DSL), WiMAX, and long-distance satellite communications. Equally, the developed, less computationally intensive, and performance-efficient symbol-level decoding algorithm for RS codes can be used in consumer technologies like the compact disc and digital versatile disc.

    A framework for network traffic analysis using GPUs

    In recent years, computer networks have become an important part of our society. Networks have kept growing in size and complexity, making their management and their traffic monitoring and analysis processes more complex, due to the huge amount of data and calculations involved. In the last decade, several researchers found it effective to use graphics processing units (GPUs) rather than traditional processors (CPUs) to accelerate the execution of algorithms not related to graphics (GPGPU). In 2006 the GPU manufacturer NVIDIA launched CUDA, a platform that allows software developers to use their GPUs to perform general-purpose computations using the C programming language. This thesis presents a framework which aims to simplify the task of programming network traffic analysis with CUDA. The objectives of the framework are to abstract the task of obtaining network packets, to simplify the task of creating network analysis programs using CUDA, and to offer an easy way to reuse the analysis code. Several network traffic analyses have also been developed.
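    The framework itself is written in C/CUDA; the sketch below only mirrors its three stated objectives in Python, assuming a synthetic packet source, a reusable analysis function, and CuPy (which follows much of the NumPy API) as an optional GPU backend. None of the class or function names come from the thesis.

```python
import numpy as np
try:
    import cupy as xp            # GPU arrays; CuPy mirrors much of the NumPy API
except ImportError:
    xp = np                      # fall back to the CPU if no GPU / CuPy is available

class PacketSource:
    """Abstracts packet capture: here it just yields synthetic (length, protocol) records."""
    def __init__(self, n=100_000, seed=0):
        rng = np.random.default_rng(seed)
        self.lengths = rng.integers(40, 1500, n)
        self.protos = rng.choice([6, 17], n)      # TCP / UDP
    def batches(self, batch_size=10_000):
        for i in range(0, len(self.lengths), batch_size):
            yield self.lengths[i:i + batch_size], self.protos[i:i + batch_size]

def length_histogram(lengths, bucket=100, nbuckets=16):
    """One reusable analysis: packet-length histogram computed on the (GPU) array backend."""
    g = xp.clip(xp.asarray(lengths) // bucket, 0, nbuckets - 1)
    hist = xp.bincount(g, minlength=nbuckets)
    return np.asarray(hist.get() if hasattr(hist, "get") else hist)  # back to host memory

source = PacketSource()
totals = sum(length_histogram(lengths) for lengths, _ in source.batches())
print("packet-length histogram:", totals)
```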

    Generation and calibration of clock signals for time-interleaved analog-to-digital converters

    Proyecto Integrador (Integrative Project, IE), FCEFN-UNC, 2018. This work addresses the design, implementation and verification of the blocks responsible for generating and calibrating the sampling phases of a TI-ADC (time-interleaved analog-to-digital converter).

    A Multi Agent System for Flow-Based Intrusion Detection Using Reputation and Evolutionary Computation

    The rising sophistication of cyber threats, as well as the improvement of physical computer network properties, presents increasing challenges to contemporary Intrusion Detection (ID) techniques. To respond to these challenges, a multi-agent system (MAS) coupled with flow-based ID techniques may effectively complement traditional ID systems. This paper develops: 1) a scalable software architecture for a new, self-organized, multi-agent, flow-based ID system; and 2) a network simulation environment suitable for evaluating implementations of this MAS architecture and for other research purposes. Self-organization is achieved via 1) a reputation system that influences agent mobility in the search for effective vantage points in the network; and 2) multi-objective evolutionary algorithms that seek effective operational parameter values. This paper illustrates, through quantitative and qualitative evaluation, 1) the conditions under which the reputation system provides a significant benefit; and 2) essential functionality of a complex network simulation environment supporting a broad range of malicious activity scenarios. These results establish an optimistic outlook for further research in flow-based multi-agent systems for ID in computer networks.
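    A toy sketch of the reputation-driven mobility idea is given below: an agent's reputation is nudged by detection outcomes at its current vantage point, and persistently low reputation triggers relocation. The update rule, threshold and reset value are invented for illustration, and the paper's multi-objective evolutionary parameter search is not reproduced.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    node: str                    # current vantage point in the network
    reputation: float = 0.5      # illustrative score in [0, 1]

    def observe(self, detected_attack: bool, attack_present: bool, lr: float = 0.1):
        """Nudge reputation up when the agent's verdict matches ground truth, down otherwise."""
        reward = 1.0 if detected_attack == attack_present else 0.0
        self.reputation = (1 - lr) * self.reputation + lr * reward

def maybe_migrate(agent: Agent, candidate_nodes, threshold: float = 0.3):
    """Persistently low reputation sends the agent to a new vantage point."""
    if agent.reputation < threshold:
        agent.node = random.choice(candidate_nodes)
        agent.reputation = 0.5   # fresh start at the new location

# Toy run: attacks keep occurring but this vantage point never sees them, so the agent moves.
random.seed(1)
nodes = ["edge-1", "edge-2", "core-1", "dmz-1"]
agent = Agent(node="edge-1")
for _ in range(10):
    agent.observe(detected_attack=False, attack_present=True)
    maybe_migrate(agent, nodes)
print(agent.node, round(agent.reputation, 2))
```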

    A new approach to solving the frequency offset problem in orthogonal frequency division multiplexing systems

    In this dissertation, the principles of orthogonal frequency division multiplexing (OFDM), a method of encoding digital data on multiple carrier frequencies, are given. After the theoretical basics of the wireless channel and discrete adaptive filters, the advantages and disadvantages of this kind of transmission are presented. The basic blocks of a classical OFDM system are also described, with comments on how the behaviour and implementation of each of them can be improved. OFDM systems with MDPSK modulation and differential detection in the receiver are considered. Performance for different modulation levels and different OFDM parameter values is analyzed, in accordance with existing standards as well as OFDM systems that do not belong to a particular standard. For this purpose, a modular simulation environment is developed around a universal model of the OFDM system in which all OFDM and wireless channel parameters can be adjusted. The main disadvantage of OFDM systems is their sensitivity to frequency offset, which destroys subcarrier orthogonality and produces intercarrier interference. Hence, frequency offset is the main factor limiting the subcarrier bandwidth and the increase of the system bit rate. In order to resolve the frequency offset problem, we analyze OFDM receivers of different configurations and complexity. The proposed receivers are designed by modifying existing differential detection algorithms, such as double differential detection, multisymbol differential detection and decision differential detection. The new approach to solving the frequency offset problem in systems with orthogonal frequency division multiplexing is based on the application of an adaptive transversal filter. We propose optimal OFDM receivers with good performance over a wide frequency offset range in Rician and Rayleigh fading channels. The proposed receivers significantly improve the quality of the received signal and can be applied in modern wireless telecommunication systems.
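    The sensitivity claim can be illustrated numerically: the short NumPy sketch below applies a fractional carrier frequency offset to a single active subcarrier and shows, after the receiver FFT, both the attenuation of the wanted subcarrier and the power leaked onto the others. The subcarrier count and offset value are arbitrary, and the dissertation's adaptive-filter receivers are not reproduced here.

```python
import numpy as np

N = 64                 # number of subcarriers
k0 = 10                # the only active subcarrier
eps = 0.15             # carrier frequency offset, as a fraction of the subcarrier spacing

n = np.arange(N)
tx = np.exp(2j * np.pi * k0 * n / N)           # time-domain OFDM symbol: one subcarrier on
rx = tx * np.exp(2j * np.pi * eps * n / N)     # the channel applies the frequency offset

Y = np.fft.fft(rx) / N                         # receiver FFT (OFDM demodulation)
ici_power = np.sum(np.abs(Y) ** 2) - np.abs(Y[k0]) ** 2

print(f"wanted subcarrier power |Y[k0]|^2      = {np.abs(Y[k0])**2:.3f}")  # < 1: attenuated
print(f"power leaked onto the other subcarriers = {ici_power:.3f}")        # > 0: ICI
```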

    Distributed Network Monitoring for Distributed Denial of Service Attacks Detection and Prevention

    There are two main categories of Distributed Denial of Service (DDoS) attacks capable of disrupting the daily operations of internet users: low-rate and high-rate DDoS attacks. The detection and prevention of DDoS attacks is a very important aspect of network security, ensuring that businesses, communication, and educational facilities operate efficiently without disruption. Over the years, many DDoS attack detection systems have been proposed. These detection systems have focused mostly on obtaining high accuracy, reducing false alarm rates and simplifying detection systems. However, less attention has been given to the computational costs of detection systems (processing power requirements and memory consumption), early detection, and flexibility of deployment to support the different needs of networks and distributed monitoring approaches. The focus of this thesis is to investigate the use of a robust feature selection approach and machine learning classifiers to develop useful DDoS detection architectures for fast, effective, and efficient DDoS attack detection, achieving high performance at low computational cost. To achieve this, a lightweight software architecture, simple in design and using a minimal number of network flow features for distinguishing normal from DDoS attack network flows, is proposed. The architecture is based on a Decision-Tree (DT) classifier and distinguishes DDoS attack from normal traffic network flows with a detection accuracy of over 99.9% when evaluated with up-to-date DDoS attack datasets. In addition, it can be flexibly deployed in a real-time network environment and at different network nodes to meet the needs of the network being monitored, creating an avenue for distributed monitoring. Also, the use of minimal network flow features selected through a robust feature selection approach results in a massive reduction in memory requirements compared to traditional systems. Results from the software implementation of the architecture indicate that it uses just 7% of the processing power of one core of the detection system's CPU in offline mode and adds no overhead to the monitored network. However, software applications for distinguishing normal from DDoS attack traffic are struggling to cope with the ever-increasing complexity and intensity of DDoS attack traffic. This increased workload ranges from the capturing and processing of millions of packets per second to the classification of thousands of network flows per second, which is evident in some of the most recent DDoS attacks faced by a variety of companies. To cope with this workload, a hardware-accelerated hybrid network monitoring application is proposed. The proposed application is capable of fast network flow classification by leveraging the parallel processing characteristics of a Field Programmable Gate Array (FPGA) whilst using a software application on the CPU for the network flow pre-processing required for classification. The hybrid system is capable of distinguishing DDoS attacks from normal network traffic flows with a detection accuracy of over 98% when deployed in a real-time environment under different network traffic conditions, with detection in 1 µs, which is over thirty times faster than the software implementation of the architecture. The hardware-accelerated application was implemented on the Zynq-7000 All Programmable SoC ZedBoard, which can monitor up to a 1 Gbps line rate.
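    A minimal sketch of the kind of flow-level Decision-Tree classification the architecture is built around is shown below, using scikit-learn on synthetic flow features (packets per second, bytes per packet, duration). The features and data are illustrative stand-ins, not the thesis's robustly selected feature set or its evaluation datasets.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synth_flows(n, attack):
    """Synthetic per-flow features: packets/s, bytes/packet, flow duration (s)."""
    if attack:   # DDoS-like flows: high rate, small packets, short-lived (illustrative)
        pps, bpp, dur = rng.gamma(9.0, 300.0, n), rng.normal(90.0, 20.0, n), rng.exponential(0.5, n)
    else:        # benign flows
        pps, bpp, dur = rng.gamma(2.0, 20.0, n), rng.normal(700.0, 250.0, n), rng.exponential(20.0, n)
    return np.column_stack([pps, bpp, dur])

X = np.vstack([synth_flows(5000, False), synth_flows(5000, True)])
y = np.r_[np.zeros(5000), np.ones(5000)]          # 0 = normal flow, 1 = DDoS flow

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("flow classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```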
The evaluation results and findings from analysis of the experimental results of the hardware-accelerated application provide important insights into improving the programmability, overall performance, scalability, and flexibility of deployment of the detection system across a network for accurate and early DDoS attack detection. In the final part of this thesis, the use of distributed network monitoring is explored with an implementation of the lightweight DDoS attack detection architecture in Network Simulator 3 (NS-3). The systems are distributed at different parts of a network, and results from this approach indicate that effective implementation of distributed network monitoring systems dramatically reduces the effect of a DDoS attack on the target network or network node.