16 research outputs found

    Evaluating the CDMA System Using Hidden Markov and Semi Hidden Markov Models

    CDMA is a fundamental part of today’s communications technologies. It can be analyzed efficiently, with reduced time, computational burden, and cost, by characterizing the physical layer with a Markov model. Waveform-level simulation is generally used to simulate the different parts of a digital communication system. In this paper, we introduce two mathematical methods for modeling digital communication channels: hidden Markov models (HMMs) and semi-hidden Markov models (SHMMs), whose application to evaluating DS-CDMA link performance under different parameters has been investigated. HMMs are a powerful mathematical tool that has been applied successfully to model discrete-time series in many fields. A semi-hidden Markov model is a stochastic process that modifies the hidden Markov model so that its states are less hidden, i.e., no longer fully unobservable. A principal characteristic of this model is statistical inertia, which admits the generation and analysis of observation sequences containing frequent runs of symbols. SHMMs substantially reduce the model parameter set, and are therefore in most cases computationally more efficient than HMMs. After 30 iterations of the Baum-Welch algorithm for different numbers of interferers, all parameters were estimated as the likelihood became constant. It is demonstrated that, for different numbers of interferers and numbers of symbols, both models generate error sequences that are statistically the same as the sequences derived from the CDMA simulation. The excellent match confirms that both models reliably represent the underlying CDMA-based physical layer.
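The error-sequence generation described above can be sketched with a toy two-state hidden Markov channel model. The "good"/"bad" states, transition probabilities, and per-state error rates below are illustrative placeholders, not the parameters estimated in the paper:

```python
import random

# Toy two-state hidden Markov channel model.
# All probabilities are illustrative placeholders.
TRANSITION = {                      # P(next state | current state)
    "good": {"good": 0.95, "bad": 0.05},
    "bad":  {"good": 0.40, "bad": 0.60},
}
ERROR_PROB = {"good": 0.001, "bad": 0.30}   # P(symbol error | state)

def generate_error_sequence(n_symbols, seed=0):
    """Sample a binary error sequence (1 = symbol error) from the HMM."""
    rng = random.Random(seed)
    state = "good"
    errors = []
    for _ in range(n_symbols):
        errors.append(1 if rng.random() < ERROR_PROB[state] else 0)
        # Advance the hidden state.
        state = "bad" if rng.random() < TRANSITION[state]["bad"] else "good"
    return errors

seq = generate_error_sequence(10_000)
print(sum(seq) / len(seq))   # empirical symbol error rate
```

In the paper's workflow the probabilities would instead be estimated from CDMA simulation traces with the Baum-Welch algorithm, and the statistics of the generated sequences compared against those of the simulated ones.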

    Speech Recognition

    Chapters in the first part of the book cover all the essential speech-processing techniques for building robust automatic speech recognition systems: the representation of speech signals, methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech-processing applications that can use the information from automatic speech recognition for speaker identification and tracking, for prosody modeling in emotion-detection systems, and in other applications able to operate in real-world environments, such as mobile communication services and smart homes.

    TCP performance enhancement in wireless networks via adaptive congestion control and active queue management

    The transmission control protocol (TCP) exhibits poor performance when used in error-prone wireless networks. Remedying this problem has been an active research area, yet a widely accepted and adopted solution has not emerged. The difficulties of an acceptable solution lie in the areas of compatibility, scalability, computational complexity, and the involvement of intermediate routers and switches. This dissertation reviews the current state-of-the-art solutions to TCP performance enhancement and pursues an end-to-end solution framework for the problem. The most noticeable cause of TCP's performance degradation in wireless networks is the higher packet loss rate compared to that in traditional wired networks. Packet loss type differentiation has therefore been the focus of many proposed TCP performance enhancement schemes. Studies conducted by this dissertation research suggest that, besides the standard TCP's inability to discriminate congestion packet losses from losses related to wireless link errors, the standard TCP's additive increase and multiplicative decrease (AIMD) congestion control algorithm itself needs to be redesigned to achieve better performance in wireless, and particularly high-speed wireless, networks. This dissertation proposes a simple, efficient, and effective end-to-end solution framework that enhances TCP's performance through techniques of adaptive congestion control and active queue management. By end-to-end, it means a solution that does not require routers to be wireless-aware or wireless-specific. TCP-Jersey has been introduced as an implementation of the proposed solution framework, and its performance metrics have been evaluated through extensive simulations.
    TCP-Jersey consists of an adaptive congestion control algorithm at the source by means of the source's achievable rate estimation (ARE), an adaptive filter of packet inter-arrival times; a congestion indication algorithm at the links (i.e., AQM) by means of packet marking; and an effective loss differentiation algorithm at the source by careful examination of the congestion marks carried by the duplicate acknowledgment packets (DUPACKs). Several improvements to the proposed TCP-Jersey have been investigated, including a more robust ARE algorithm, a less computationally intensive threshold marking algorithm as the AQM link algorithm, and a more stable congestion indication function based on virtual capacity at the link; performance results have been presented and analyzed via extensive simulations of various network configurations. Stability analysis of the proposed ARE-based additive increase and adaptive decrease (AIAD) congestion control algorithm has been conducted, and the analytical results have been verified by simulations. The performance of TCP-Jersey has been compared to that of a perfect, but not practical, TCP scheme, with encouraging results. Finally, the framework of TCP-Jersey's source algorithm has been extended and generalized for rate-based congestion control, as opposed to TCP's window-based congestion control, to provide a design platform for applications, such as real-time multimedia, that do not use TCP as the transport protocol yet need to control network congestion as well as combat packet losses in wireless networks.
    In conclusion, the framework architecture presented in this dissertation, which combines adaptive congestion control and active queue management to solve the TCP performance degradation problem in wireless networks, has been shown to be a promising answer owing to its simple design philosophy, complete compatibility with current TCP/IP and AQM practice, end-to-end architecture for scalability, high effectiveness, and low computational overhead. The proposed implementation of the solution framework, namely TCP-Jersey, is a modification of the standard TCP protocol rather than a completely new design of the transport protocol. It is an end-to-end approach to the performance degradation problem, since it requires neither split-mode connection establishment and maintenance nor special wireless-aware software agents at the routers. The proposed solution also differs from solutions that rely on link layer error notifications for packet loss differentiation. It is unique among proposed end-to-end solutions in that it differentiates packet losses attributed to wireless link errors from congestion-induced packet losses directly from the explicit congestion indication marks in the DUPACK packets, rather than inferring the loss type from packet delay or delay jitter as in many other proposals, undergoing computationally expensive off-line training of a classification model (e.g., an HMM), or running a Bayesian estimation/detection process that requires estimates of the a priori loss probability distributions of the different loss types. The proposed solution is also scalable and fully compatible with current practice in Internet congestion control and queue management, while adding a loss type differentiation function that effectively enhances TCP's performance over error-prone wireless networks.
    Limitations of the proposed solution architecture and areas for future research are also addressed.
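The achievable rate estimation and adaptive-decrease ideas above can be sketched as follows. The class name, smoothing constant, and helper function are illustrative assumptions, not the dissertation's exact formulation:

```python
# Sketch of achievable rate estimation (ARE) feeding an "adaptive
# decrease" on congestion loss. Names and parameters are assumed.

class AreEstimator:
    """Low-pass filter over per-ACK rate samples (bytes acked divided
    by the packet inter-arrival time)."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha          # EWMA smoothing factor (assumed value)
        self.rate = 0.0             # estimated achievable rate, bytes/s
        self.last_arrival = None    # timestamp of the previous ACK

    def on_ack(self, now, bytes_acked):
        if self.last_arrival is not None and now > self.last_arrival:
            sample = bytes_acked / (now - self.last_arrival)
            # Exponentially weighted moving average over rate samples.
            self.rate = self.alpha * self.rate + (1 - self.alpha) * sample
        self.last_arrival = now

def adaptive_decrease(are_rate, rtt, mss):
    """On a loss marked as congestion, set cwnd (in segments) to the
    estimated bandwidth-delay product instead of blindly halving it."""
    return max(1, int(are_rate * rtt / mss))
```

A loss whose DUPACKs carry no congestion mark would be treated as a wireless error and leave the window untouched, which is the loss differentiation step described above.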

    VIRAL TOPIC PREDICTION AND DESCRIPTION IN MICROBLOG SOCIAL NETWORKS

    Ph.D. Doctor of Philosophy

    Modeling, Predicting and Capturing Human Mobility

    Realistic models of human mobility are critical for modern applications, particularly in recommendation systems, resource planning, and process optimization. Given the rapid proliferation of mobile devices equipped with Internet connectivity and GPS functionality, aggregating large volumes of individual geolocation data is now feasible. This thesis focuses on methodologies for data-driven mobility modeling by drawing parallels between the inherent nature of mobility trajectories, statistical physics, and information theory. On the applied side, its contributions lie in leveraging the formulated mobility models to construct prediction workflows from a privacy-by-design perspective, enabling end users to derive utility from location-based services while preserving their location privacy. Finally, the thesis presents several approaches that apply machine learning to generate large-scale synthetic mobility datasets, facilitating experimental reproducibility.
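As a flavor of the information-theoretic parallel mentioned above, the following sketch estimates the entropy of a location-visit sequence with a Lempel-Ziv-style match-length estimator, a common proxy for how predictable a trajectory is. The estimator form and the example trajectory are illustrative, not taken from the thesis:

```python
import math

# Lempel-Ziv-style entropy estimator for a symbol (location) sequence.

def _contains(haystack, needle):
    """True if `needle` occurs as a contiguous subsequence of `haystack`."""
    n = len(needle)
    return any(haystack[j:j + n] == needle
               for j in range(len(haystack) - n + 1))

def lz_entropy(seq):
    """Entropy estimate in bits per symbol: n*log2(n) / sum of match
    lengths, where each match length is the length of the shortest
    substring starting at i that was not seen earlier in the sequence."""
    n = len(seq)
    total = 0
    for i in range(1, n):
        k = 1
        while i + k <= n and _contains(seq[:i], seq[i:i + k]):
            k += 1
        total += k
    return n * math.log2(n) / total

# A repetitive trajectory is less "surprising" than a varied one.
home_bound = ["home"] * 8
commuter = ["home", "work", "gym", "cafe"] * 2
print(lz_entropy(home_bound) < lz_entropy(commuter))   # → True
```

Lower estimates correspond to more predictable users, which is what makes such quantities useful inputs to the prediction workflows the thesis builds.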

    Unsupervised Machine Learning for Networking: Techniques, Applications and Research Challenges

    While machine learning and artificial intelligence have long been applied in networking research, the bulk of this work has focused on supervised learning. Recently there has been a rising trend of employing unsupervised machine learning on unstructured raw network data to improve network performance and provide services such as traffic engineering, anomaly detection, Internet traffic classification, and quality-of-service optimization. The interest in applying unsupervised learning techniques in networking stems from their great success in other fields such as computer vision, natural language processing, speech recognition, and optimal control (e.g., for developing autonomous self-driving cars). Unsupervised learning is attractive because it frees us from the need for labeled data and manual handcrafted feature engineering, thereby facilitating flexible, general, and automated methods of machine learning. The focus of this survey paper is to provide an overview of the applications of unsupervised learning in the domain of networking. We provide a comprehensive survey highlighting recent advancements in unsupervised learning techniques and describe their applications for various learning tasks in the context of networking. We also discuss future directions and open research issues, and identify potential pitfalls. While a few survey papers on the applications of machine learning in networking have previously been published, a survey of similar scope and breadth is missing from the literature. Through this paper, we advance the state of knowledge by carefully synthesizing the insights from these survey papers while also providing contemporary coverage of recent advances.
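One unsupervised task the survey covers, anomaly detection on unlabeled traffic, can be sketched as a toy k-means clustering over flow features. The features, data points, and distance threshold below are invented for illustration:

```python
import math
import random

# Toy unsupervised pipeline: cluster unlabeled flows by
# (mean packet size in bytes, mean inter-arrival time in ms),
# then flag flows far from every learned centroid as anomalies.

def kmeans(points, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

def is_anomalous(flow, centroids, threshold=100.0):
    """A flow is anomalous if no learned cluster is within `threshold`."""
    return min(math.dist(flow, c) for c in centroids) > threshold

normal_flows = [(1400, 5), (1380, 6), (80, 200), (90, 210)]   # bulk + chatty
centroids = kmeans(normal_flows)
print(is_anomalous((64, 1), centroids))    # tiny, rapid-fire flow
```

No labels are needed at any point: the notion of "normal" emerges from the structure of the data itself, which is exactly the appeal of unsupervised methods the paper surveys.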

    Process query systems: advanced technologies for process detection and tracking

    Virtually everything that happens around us is process-oriented by nature. It is therefore not surprising that the mental picture people form of their surroundings is based on processes. When we perceive something and then recognize it, this means that we understand the observations, can group them together, and can predict which other observations will soon follow. Take, for example, a room with a television. As we enter the room we hear sounds, perhaps voices, perhaps music. When we look around, we soon see the television. Because we know the "process" of a TV well, we can mentally attach the sounds to the image of the television. We also know that the television is on, and therefore we expect more sounds to follow. When we pick up the remote control and switch the television off, we expect the picture to disappear and the sounds to stop. If this does not happen, we notice it immediately: we were not successful in changing the state of the "TV process". In general, when our observations do not fit a known process, we are surprised, interested, or even frightened. This is a good example of how people view their environment: based on processes, we classify all our observations and are able to predict which observations are to come. Computers have traditionally not been able to perform recognition in this way. Computer processing of signals is often based on simple "signatures", i.e., single properties that are searched for directly. Such systems are often highly specific and can make only very limited predictions about the observed environment. This dissertation introduces a general method in which environment descriptions are supplied as processes: a new class of data-processing systems called Process Query Systems (PQS).
    A PQS enables the user to quickly and efficiently build a robust, environment-aware system that can detect and track multiple processes and multiple instances of processes. Using PQS, systems as diverse as the security of large computer networks and the tracking of fish in a fish tank are presented. The only difference between these systems is the process models that were fed into the PQS. This technology is a new and promising field that has the potential to become very successful in all forms of digital signal processing.

    Urban Informatics

    This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently – to become ‘smart’ and ‘sustainable’. The smart city has quickly emerged as computers have become ever smaller to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of ‘big’ data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics, from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient with a greater concern for environment and equity.
