
    Systemization of Pluggable Transports for Censorship Resistance

    An increasing number of countries implement Internet censorship at different scales and for a variety of reasons. In particular, the link between the censored client and the entry point to the uncensored network is a frequent target of censorship, due to the ease with which a nation-state censor can control it. A number of censorship resistance systems have been developed to help circumvent blocking on this link; we refer to these as link circumvention systems (LCs). The variety and profusion of attack vectors available to a censor has led to an arms race and, consequently, a dramatic speed of evolution of LCs. Despite their inherent complexity and the breadth of work in this area, there is no systematic way to evaluate link circumvention systems and compare them against each other. In this paper, we (i) sketch an attack model to comprehensively explore a censor's capabilities, (ii) present an abstract model of an LC, a system that helps a censored client communicate with a server over the Internet while resisting censorship, (iii) describe an evaluation stack that underscores a layered approach to evaluating LCs, and (iv) systemize and evaluate existing censorship resistance systems that provide link circumvention. We highlight open challenges in the evaluation and development of LCs and discuss possible mitigations.
    Comment: Content from this paper was published in Proceedings on Privacy Enhancing Technologies (PoPETS), Volume 2016, Issue 4 (July 2016) as "SoK: Making Sense of Censorship Resistance Systems" by Sheharbano Khattak, Tariq Elahi, Laurent Simon, Colleen M. Swanson, Steven J. Murdoch and Ian Goldberg (DOI: 10.1515/popets-2016-0028).

    A Survey of Bandwidth Optimization Techniques and Patterns in VoIP Services and Applications

    This article surveys the various techniques adopted for optimising bandwidth for VoIP services over the period 1999-2014. Bandwidth improvement can be realised through silence suppression, which suppresses the silent portions (packets) of a voice conversation using a Voice Activity Detection algorithm; by doing so, the transmission rate during inactive periods of speech is reduced, and thus the mean transmission rate can be reduced. A second measure is packet header reduction, which multiplexes and de-multiplexes packet headers to curb overhead. Voice/packet header compression is considered the most productive of all the techniques, offering a scheme where the 40-byte VoIP packet headers are compressed down to as little as 2 bytes. When combined with aggregation, compression potentially yields a compressed size of as little as 1 byte. In either case, bandwidth savings are achieved using compression and decompression codecs of varying data and bit rates. It is envisaged that improving the performance of codecs would further enhance voice over broadband networks.
    Comment: 8 pages, 7 figures. ISSN (Print): 1694-0814 | ISSN (Online): 1694-078
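The header-compression figures above can be put into concrete terms with a small back-of-envelope calculation. This is a hedged sketch, not from the article: the codec payload size and packetisation rate below are assumed round figures typical of a low-bit-rate codec with 20 ms frames.

```python
# Per-call VoIP bandwidth with and without RTP/UDP/IP header compression.
# PAYLOAD and RATE are illustrative assumptions, not values from the survey.

def voip_bandwidth_kbps(payload_bytes, header_bytes, packets_per_sec):
    """One-way IP bandwidth in kbit/s for a single voice stream."""
    bits_per_packet = (payload_bytes + header_bytes) * 8
    return bits_per_packet * packets_per_sec / 1000.0

PAYLOAD = 33   # bytes of voice payload per 20 ms frame (assumed)
RATE = 50      # packets per second (20 ms packetisation)

uncompressed = voip_bandwidth_kbps(PAYLOAD, 40, RATE)  # full 40-byte headers
compressed = voip_bandwidth_kbps(PAYLOAD, 2, RATE)     # 2-byte compressed headers

print(f"uncompressed: {uncompressed:.1f} kbit/s")  # 29.2 kbit/s
print(f"compressed:   {compressed:.1f} kbit/s")    # 14.0 kbit/s
print(f"saving:       {100 * (1 - compressed / uncompressed):.0f}%")
```

With these assumed numbers, compressing the headers alone roughly halves the per-call bandwidth, which is why the survey singles header compression out as the most productive technique.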

    A study of QoS support for real time multimedia communication over IEEE802.11 WLAN : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering, Massey University, Albany, New Zealand

    Quality of Service (QoS) is becoming a key problem for Real Time (RT) traffic transmitted over Wireless Local Area Networks (WLANs). In this project, recent proposals for enhanced QoS performance for RT multimedia are evaluated and analysed. Two simulation models, for the EDCF and HCF protocols, are explored using the OPNET and NS-2 simulation packages respectively. From the simulation results, we have studied the limitations of the 802.11e standard for RT multimedia communication, analysed the reasons for these limitations, and proposed solutions for improvement. Since RT multimedia communication encompasses time-sensitive traffic, the measures of quality of service are generally minimal delay (latency) and delay variation (jitter). The 802.11 WLAN standard covers the PHY layer and the MAC layer. The transmitted data rates at the PHY layer are increased in standards 802.11b, a, g, j and n through different code-mapping technologies, while 802.11e is developed specifically for the QoS performance of RT traffic at the MAC layer. Enhancing the MAC-layer protocols is therefore significant for guaranteeing the QoS performance of RT traffic. The original MAC protocols of 802.11 are DCF (Distributed Coordination Function) and PCF (Point Coordination Function). They cannot achieve the required QoS performance for RT-traffic transmission, so the IEEE 802.11e draft has developed EDCF and HCF instead. Simulation results of the EDCF and HCF models that we explored with OPNET and NS-2 show that minimal latency and jitter can be achieved. However, the limitations of EDCF and HCF are identified from the simulation results. EDCF is not stable under high network loading. Channel utilisation is low for both protocols. Furthermore, the fairness index is very poor for HCF, meaning that low-priority traffic can starve in the WLAN. All these limitations are due to the priority mechanism of the protocols.
We propose, as a practical research direction, future work to develop a dynamic, self-adaptive 802.11e protocol. Because of the instability of EDCF under heavy loading, we can add parameters that track traffic loading and channel condition; we provide indications for adding such parameters to increase EDCF performance and channel utilisation. Because all the limitations stem from the priority mechanism, the other direction is to do away with the priority rule in favour of reasonable bandwidth allocation. We have established that channel utilisation can be increased and collision time reduced for RT traffic over the EDCF protocol. These parameters can include loading rate, collision rate and total throughput saturation; further simulation should look for their optimum values. Because of the huge polling-induced overheads, HCF has an unsatisfactory trade-off, which leads to poor fairness and poor throughput. By developing an enhanced HCF it may be possible to improve the polling interval and TXOP allocation mechanism to obtain a better fairness index and channel utilisation. From the simulation, we noticed that traffic deployment could affect the total QoS performance, an indication worth exploring: whether classifying traffic deployments into different categories is a good idea. With different load-based traffic categories, QoS may be enhanced by an appropriate bandwidth allocation strategy.
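The two QoS metrics this thesis keeps returning to, latency and jitter, are easy to make concrete. The sketch below computes mean latency and a smoothed interarrival-jitter estimate in the style of RFC 3550; the delay samples are invented for illustration and are not from the thesis's simulations.

```python
# Latency and jitter from a list of one-way packet delays (ms).
# The sample delays are hypothetical, chosen only to exercise the formulas.

def mean_latency_ms(delays):
    """Average one-way delay."""
    return sum(delays) / len(delays)

def interarrival_jitter_ms(delays):
    """Smoothed jitter estimate as in RFC 3550: J += (|D| - J) / 16."""
    j = 0.0
    for prev, cur in zip(delays, delays[1:]):
        j += (abs(cur - prev) - j) / 16.0
    return j

delays = [20.0, 22.5, 19.0, 30.0, 21.0, 20.5]  # ms, made-up samples
print(f"latency: {mean_latency_ms(delays):.2f} ms")
print(f"jitter:  {interarrival_jitter_ms(delays):.2f} ms")
```

A scheduler such as EDCF or HCF is judged by exactly these two numbers under increasing load, which is why the abstract's instability and fairness findings are reported in terms of delay and delay variation.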

    A multiplex-multicast scheme that improves system capacity of voice-over-IP on wireless LAN by 100%

    Voice-over-IP (VoIP) is an important application on the Internet. With the emergence of WLAN technology and its various advantages compared with traditional wired LANs, it is fast becoming the 'last-mile' of choice for the overall Internet infrastructure. This work considers the support of VoIP over 802.11b WLAN. We show that although the raw WLAN capacity can potentially support more than 500 VoIP sessions, various overheads bring this down to only 12 VoIP sessions when using the GSM 6.10 codec. We propose a novel multiplexing scheme for VoIP which exploits multicasting over WLAN for the downlink VoIP traffic. This scheme can achieve nearly 100% improvement in system capacity. In addition, we present results showing that the delay and delay jitter introduced by the proposed scheme are small. We believe that the scheme can reduce the blocking probability of VoIP sessions in an enterprise WLAN significantly.
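The gap between "more than 500" and "only 12" sessions comes from per-packet MAC/PHY overhead dominating tiny voice packets. The sketch below is a hedged back-of-envelope, not the paper's analysis: the per-packet airtime figure is an assumed round number standing in for DIFS, backoff, preamble, headers and ACK.

```python
# Why 802.11b's raw rate supports far fewer VoIP calls than the codec
# bit rate alone suggests. Timing figures are assumptions, not measurements.

RAW_RATE = 11e6              # 802.11b raw PHY rate, bit/s
CODEC_RATE = 13.2e3          # approx. GSM 6.10 codec bit rate, bit/s
PACKETS_PER_SEC = 50         # 20 ms voice frames
PER_PACKET_AIRTIME = 800e-6  # s per packet: DIFS + backoff + preamble
                             # + headers + ACK (assumed round figure)

# Naive capacity: divide raw rate by codec rate (one stream per call).
naive_sessions = RAW_RATE / CODEC_RATE

# Overhead-aware capacity: each call has uplink + downlink packet streams,
# and each small packet still occupies the channel for the full airtime.
airtime_per_session = 2 * PACKETS_PER_SEC * PER_PACKET_AIRTIME
realistic_sessions = 1.0 / airtime_per_session

print(f"naive estimate: {naive_sessions:.0f} sessions")
print(f"overhead-aware: {realistic_sessions:.1f} sessions")
```

Multiplexing many downlink voice frames into one multicast packet halves the number of small packets on the air, which is why the proposed scheme can roughly double capacity.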

    A new hybrid model of dengue incidence rate using negative binomial generalised additive model and fuzzy c-means model: a case study in Selangor

    Dengue is one of the top reasons for illness and mortality in the world, with more than one-third of the world's population living in areas at risk of dengue infection. In this study, there are five stages to achieve the research objectives. Firstly, the verification of predetermined variables. Secondly, the identification of new datasets after clustering by district and by the Fuzzy C-Means model (FCM). Thirdly, the development of models using the existing dataset and the new datasets produced by the two different clustering categories. Then, the assessment of the developed models using three measures: deviance (D), the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Lastly, the validation of the developed models by comparing the values of D, AIC and BIC between the existing model and the new models built on the new datasets. Two different clustering techniques are applied: clustering the data by district and by FCM. This study proposes a new hybrid modelling framework using two statistical models, FCM and the negative binomial Generalised Additive Model (GAM). This study successfully presents the significant differences in the climatic and non-climatic factors that influence the dengue incidence rate (DIR) in Selangor, Malaysia. Results show that climatic factors such as rainfall from the current month up to a lag of 3 months, and the number of rainy days from the current month up to a lag of 3 months, are significant to DIR. Besides, the interaction between rainfall and the number of rainy days also shows a strong positive relationship to DIR. Meanwhile, non-climatic variables such as population density, number of localities and lagged DIR from 1 to 3 months also show a significant relationship towards DIR. For both clustering techniques, two clusters are formed and four new models are developed in this study.
After comparing the values of D, AIC and BIC between the existing model and the new models, this study concluded that the four new models recorded lower values than the existing model. Therefore, the four new models are selected to present the dengue incidence in Selangor.
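The model-selection step above compares candidate models by deviance, AIC and BIC. The sketch below shows the standard AIC/BIC formulas and a comparison loop; the model names, log-likelihoods, parameter counts and sample size are invented placeholders, not values from the study.

```python
# Comparing candidate models by AIC and BIC (lower is better).
# All numbers below are hypothetical, for illustration only.
import math

def aic(loglik, k):
    """Akaike Information Criterion: 2k - 2*logL."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*logL."""
    return k * math.log(n) - 2 * loglik

n = 120  # assumed number of observations
models = [  # (name, log-likelihood, number of parameters) -- all made up
    ("existing GAM",        -410.0, 8),
    ("FCM cluster 1 + GAM", -395.0, 10),
    ("FCM cluster 2 + GAM", -388.0, 12),
]

for name, ll, k in models:
    print(f"{name}: AIC={aic(ll, k):.1f}  BIC={bic(ll, k, n):.1f}")

best = min(models, key=lambda m: aic(m[1], m[2]))
print("selected by AIC:", best[0])
```

Note that BIC penalises extra parameters more heavily than AIC for n > 7 or so, which is why the study reports both criteria alongside deviance when validating the hybrid models.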