
    Router-based algorithms for improving internet quality of service.

    We begin this thesis by generalizing some results related to a recently proposed positive system model of TCP congestion control algorithms. Then, motivated by a mean field analysis of the positive system model, a novel, stateless queue management scheme is designed: Multi-Level Comparisons with index l (MLC(l)). In the limit, MLC(l) enforces max-min fairness in a network of TCP flows. We go further, showing that counting past drops at a congested link provides sufficient information to enforce max-min fairness among long-lived flows and to reduce the flow completion times of short-lived flows. Analytical models are presented, and the accuracy of their predictions is validated by packet-level ns2 simulations. We then turn our attention to efficient measurement and monitoring techniques. A small active counter architecture is presented that addresses the problem of accurately approximating statistics counter values at very high speeds, where counters can be both updated and estimated on a per-packet basis. These algorithms are necessary in the design of router-based flow control algorithms because on-chip static RAM (SRAM) is currently a scarce resource, and being economical with its usage is an important task. A highly scalable method for heavy-hitter identification that uses our small active counter architecture is developed based on a heuristic argument. Its performance is compared to several state-of-the-art algorithms and shown to outperform them. In the last part of the thesis we discuss the delay-utilization tradeoff at congested Internet links. While several groups of authors have recently analyzed this tradeoff, the lack of realistic assumptions in their models and the extreme complexity of estimating model parameters reduce their applicability to real Internet links. We propose an adaptive scheme that regulates the available queue space to keep utilization at a desired, high level. As a consequence, in large-number-of-users regimes, sacrificing 1-2% of bandwidth can result in queueing delays that are an order of magnitude smaller than in the standard BDP-buffering case. We go further and introduce an optimization framework for describing the problem of interest and propose an online algorithm for solving it.
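
    The thesis's specific counter design is not reproduced in this abstract; purely as an illustration of the general idea behind small active counters (approximating large per-flow counts with a few bits of state that are updated probabilistically on every packet), here is a minimal Morris-style approximate counter sketch in Python. All names and parameters are illustrative assumptions, not taken from the thesis.

```python
import random

class ApproximateCounter:
    """Morris-style probabilistic counter: stores a small exponent
    instead of the full count, trading accuracy for memory."""

    def __init__(self, base=2.0):
        self.base = base      # growth factor; a smaller base improves accuracy but needs more bits
        self.exponent = 0     # the only per-counter state

    def update(self):
        # Increment the exponent with probability base**(-exponent),
        # so the estimate tracks the true per-packet count in expectation.
        if random.random() < self.base ** (-self.exponent):
            self.exponent += 1

    def estimate(self):
        # Unbiased estimator of the number of update() calls so far.
        return (self.base ** self.exponent - 1) / (self.base - 1)

# Example: count 100,000 simulated packet arrivals with roughly one byte of state.
c = ApproximateCounter()
for _ in range(100_000):
    c.update()
print(f"estimate ~ {c.estimate():,.0f}")
```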

    Contribution to the improvement of the performance of wireless mesh networks providing real time services

    Nowadays, people's expectations of ubiquitous connectivity are continuously growing. Cities are moving towards the smart city paradigm. Electricity companies aim to become part of smart grids. The Internet is no longer exclusive to humans; we now assume the Internet of everything. We consider that Wireless Mesh Networks (WMNs) have a set of valuable features that will make them an important part of such environments. WMNs can also be used in less favored areas thanks to their low-cost deployment. This is socially relevant since it facilitates reducing the digital divide and could help to improve the population's quality of life. Research and industry have been working in recent years on open and proprietary mesh solutions. Standardization efforts and real deployments establish a solid starting point. We expect that WMNs will support a large number of new applications from a variety of fields: community networking, intelligent transportation systems, health systems, public safety, disaster management, advanced metering, etc. In all these cases, the growing need of users for real-time and multimedia information is evident. On this basis, this thesis proposes a set of contributions to improve the performance of an application service of this type and to promote better use of two critical resources (memory and energy) in WMNs. For the offered service, this work focuses on a Video on Demand (VoD) system. One of the requirements of this system is high-capacity support. This is mainly achieved by distributing the video contents among various distribution points, which in turn consist of several video servers. Each client request that arrives at such a video-server cluster must be handled by a specific server in a way that the load is balanced. For this task, this thesis proposes a mechanism to select a specific video server such that the transfer time at the cluster is minimized. On the other hand, the mesh routers that create the mesh backbone are equipped with multiple interfaces of different technologies and channel types. An important resource is the amount of memory intended for buffers. The quality of service perceived by users is largely affected by the size of these buffers, because important network performance parameters such as packet loss probability, delay, and channel utilization depend strongly on buffer sizes. Efficient use of memory for buffering, in addition to facilitating the scalability of mesh devices, also prevents the problems associated with excessively large buffers. Most current works associate the buffer sizing problem with the dynamics of the TCP congestion control mechanism. Since this work focuses on real-time services, for which the use of TCP is unfeasible, this thesis proposes a dynamic buffer sizing mechanism dedicated mainly to such real-time flows. The approach is based on the maximum entropy principle and allows each device to dynamically self-configure its buffers to achieve more efficient memory utilization. The performance of the proposal has been extensively evaluated on wired and wireless interfaces, considering both classical infrastructure-based wireless and multi-hop mesh interfaces. Finally, when the WMN is built by interconnecting user handhelds, energy is a limited and scarce resource, and any approach to optimizing its use is valuable. For this case, this thesis proposes a topology control mechanism based on centrality metrics.
The main idea is that, instead of having all devices execute routing functionality, only a subset of nodes is selected for this task. We evaluate different centrality metrics, from both centralized and distributed perspectives. In addition to the common random mobility models, we include an analysis of the proposal with a socially-aware mobility model that generates networks with a community structure.
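
    The thesis's exact buffer sizing mechanism is not reproduced in this abstract; as a rough illustration of how a maximum-entropy argument can drive buffer self-configuration, here is a minimal sketch. It assumes that, constrained only by a measured mean queue length, the maximum-entropy queue-length distribution is geometric with ratio equal to the link utilization, and it picks the smallest buffer whose estimated overflow probability stays below a target. Function and parameter names are illustrative, not from the thesis.

```python
import math

def max_entropy_buffer_size(utilization: float, overflow_target: float = 1e-3,
                            max_buffer: int = 10_000) -> int:
    """Smallest buffer size K (in packets) whose estimated overflow
    probability is below overflow_target, assuming a geometric
    (maximum-entropy) queue-length distribution: P(N >= K) ~ utilization**K."""
    if not 0.0 < utilization < 1.0:
        raise ValueError("utilization must lie strictly between 0 and 1")
    k = math.ceil(math.log(overflow_target) / math.log(utilization))
    return min(max(k, 1), max_buffer)

# Example: a measured utilization of 0.9 and a 0.1% overflow target give ~66 packets.
print(max_entropy_buffer_size(0.9))
```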

    Methodologies for the analysis of value from delay-tolerant inter-satellite networking

    In a world that is becoming increasingly connected, both in the sense of people and devices, it is no surprise that users of the data enabled by satellites are exploring the potential brought about by a more connected Earth orbit environment. Lower data latency, higher revisit rates and higher volumes of information are the order of the day, and inter-connectivity is one of the ways in which this could be achieved. Within this dissertation, three main topics are investigated and built upon. First, the process of routing data through intermittently connected delay-tolerant networks is examined and a new routing protocol introduced, called Spae. The consideration of downstream resource limitations forms the heart of this novel approach, which is shown to provide improvements in data routing that closely match those of a theoretically optimal scheme. Next, the value of inter-satellite networking is derived in such a way that removes the difficult task of costing the enabling inter-satellite link technology. Instead, value is defined as the price one should be willing to pay for the technology while retaining a mission value greater than that of its non-networking counterpart. This is achieved through the use of multi-attribute utility theory, trade-space analysis and system modelling, and demonstrated in two case studies. Finally, the effects of uncertainty in the form of sub-system failure are considered. Inter-satellite networking is shown to increase a system's resilience to failure through the introduction of additional, partially failed states, made possible by data relay. The lifetime value of a system is then captured using a semi-analytical approach exploiting Markov chains, validated with a numerical Monte Carlo simulation approach. It is evident that while inter-satellite networking may offer more value in general, it does not necessarily result in a decrease in the loss of utility over the lifetime.
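
    The dissertation's lifetime-value model is not reproduced in this abstract; the following minimal sketch shows the general pattern it describes: capture lifetime value with a Markov chain over operational states (including a partially failed, relay-enabled state) and validate it with a Monte Carlo simulation. The states, transition probabilities and per-epoch utilities below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

# Illustrative 3-state model: 0 = fully operational, 1 = partially failed
# (degraded, still delivering data via inter-satellite relay), 2 = failed.
P = np.array([[0.95, 0.04, 0.01],   # per-epoch transition probabilities
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
utility = np.array([1.0, 0.6, 0.0])  # utility delivered per epoch in each state
epochs = 120                         # assumed mission lifetime in epochs

def expected_lifetime_utility(P, utility, epochs, start=0):
    """Semi-analytical value: sum over epochs of (state distribution . utility)."""
    dist = np.zeros(len(utility)); dist[start] = 1.0
    total = 0.0
    for _ in range(epochs):
        total += dist @ utility
        dist = dist @ P
    return total

def simulate_lifetime_utility(P, utility, epochs, start=0, runs=20_000, seed=1):
    """Monte Carlo validation of the same quantity."""
    rng = np.random.default_rng(seed)
    totals = np.zeros(runs)
    for r in range(runs):
        state = start
        for _ in range(epochs):
            totals[r] += utility[state]
            state = rng.choice(len(utility), p=P[state])
    return totals.mean()

print(expected_lifetime_utility(P, utility, epochs))
print(simulate_lifetime_utility(P, utility, epochs))
```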

    Space programs summary no. 37-64, volume 2 for the period 1 June to 31 July 1970. The Deep Space Network

    Mariner Mars 1971 mission support, engineering, and design of the Deep Space Network.

    Research and Technology, 1998

    This report selectively summarizes the NASA Lewis Research Center's research and technology accomplishments for the fiscal year 1998. It comprises 134 short articles submitted by the staff scientists and engineers. The report is organized into five major sections: Aeronautics, Research and Technology, Space, Engineering and Technical Services, and Commercial Technology. A table of contents and an author index have been developed to assist readers in finding articles of special interest. This report is not intended to be a comprehensive summary of all the research and technology work done over the past fiscal year. Most of the work is reported in Lewis-published technical reports, journal articles, and presentations prepared by Lewis staff and contractors. In addition, university grants have enabled faculty members and graduate students to engage in sponsored research that is reported at technical meetings or in journal articles. For each article in this report, a Lewis contact person has been identified, and where possible, reference documents are listed so that additional information can be easily obtained. The diversity of topics attests to the breadth of research and technology being pursued and to the skill mix of the staff that makes it possible. At the time of publication, NASA Lewis was undergoing a name change to the NASA John H. Glenn Research Center at Lewis Field.

    Time Dependent Performance Analysis of Wireless Networks

    Many wireless networks are subject to frequent changes in a combination of network topology, traffic demand, and link capacity, such that nonstationary/transient conditions always exist in packet-level network behavior. Although there are extensive studies on the steady-state performance of wireless networks, little work exists on the systematic study of their packet-level time-varying behavior. However, it is increasingly noted that wireless networks must not only perform well in steady state, but must also have acceptable performance under nonstationary/transient conditions. Furthermore, numerous applications in today's wireless networks, such as safety applications in vehicular networks and military applications in mobile ad hoc networks, depend critically on real-time performance in terms of delay, packet delivery ratio, and similar metrics. Thus, there exists a need for techniques to analyze the time-dependent performance of wireless networks. In this dissertation, we develop a performance modeling framework incorporating queueing and stochastic modeling techniques to efficiently evaluate the packet-level time-dependent performance of vehicular networks (single-hop) and mobile ad hoc networks (multi-hop). For vehicular networks, we consider the dynamic behavior of the IEEE 802.11p MAC protocol due to node mobility and model the network hearability as a time-varying adjacency matrix. For mobile ad hoc networks, we focus on the dynamic behavior of network-layer performance due to rerouting and model the network connectivity as a time-varying adjacency matrix. In both types of networks, node queues are modeled by the same fluid-flow technique, which follows the flow conservation principle to construct differential equations from a pointwise mapping of the steady-state queueing relationships. Numerical results confirm that fluid-flow-based performance models are able to respond to the ongoing nonstationary/transient conditions of wireless networks promptly and accurately. Moreover, compared to the computation time of a standard discrete-event simulator, the fluid-flow-based model is shown to be a more scalable evaluation tool. In general, our proposed performance model can be used to explore network design alternatives or to get a quick estimate of the performance variation in response to dynamic changes in network conditions.
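
    The dissertation's full model is not reproduced in this abstract; the following minimal sketch illustrates the general fluid-flow idea it describes: take a steady-state queueing relationship, apply it pointwise in time, and integrate the resulting differential equation for the expected queue length. The arrival and service rates below are illustrative assumptions, not values from the dissertation.

```python
def fluid_flow_queue(lam, mu, t_end=60.0, dt=0.01, x0=0.0):
    """Euler integration of the fluid-flow approximation of an M/M/1 queue:
        dx/dt = lam(t) - mu(t) * x / (1 + x),
    where x(t) is the expected number in the system and x/(1+x) is the
    pointwise mapping of the steady-state utilization relationship."""
    x, t, trace = x0, 0.0, []
    while t < t_end:
        x += dt * (lam(t) - mu(t) * x / (1.0 + x))
        x = max(x, 0.0)
        t += dt
        trace.append((t, x))
    return trace

# Example: the arrival rate steps up at t = 20 s (e.g. after a traffic or
# topology change) while the service rate stays at 10 packets/s.
lam = lambda t: 6.0 if t < 20.0 else 9.0
mu = lambda t: 10.0
trace = fluid_flow_queue(lam, mu)
print(f"expected queue length at t = 60 s: {trace[-1][1]:.2f} packets")
```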

    Properties, functionality and potential applications of novel modified iron nanoparticles for the treatment of 2,4,6-Trichlorophenol

    2,4,6-trichlorophenol (TCP) is a pervasive carcinogenic water contaminant found in a wide variety of water and waste systems and is a pertinent model compound for broader aromatic organics, specifically organo-halide pesticides. These compounds are persistent in the environment and show resilience to regular water and waste treatment protocols, thus warranting the development and implementation of novel treatment materials for improved contaminant removal. Zero-valent iron (ZVI) has demonstrated the ability to remove or degrade a wide variety of inorganic and organic water contaminants, including chlorophenols, and has been widely applied for in-situ groundwater remediation, where contamination is often localised in a low-oxygen environment. ZVI's broader applications in water treatment have remained limited mainly due to corrosion, particle dispersion, and confinement issues in deployment. This work, therefore, explored the development, functionality, and potential application of new modified nZVI materials (nZVI-Osorb) and assessed their potential to improve iron's intrinsic functionality while also gauging the material's viability for TCP remediation in water and waste systems. Materials produced in this thesis were prepared utilising three different embedment procedures (1-pot, multiple additions, oxygen-free). All embedment methods resulted in tightly bound composites featuring high surface areas (340.2-449.1 m²/g) with net iron composition ranging from 10% to 29.78% by mass. Electron imaging microscopy verified even dispersion of iron throughout the substrate. Composite materials did not exhibit a delayed rate of atmospheric corrosion relative to nZVI controls, showing an 18% nZVI⁰ loss per day until reaching a stabilised concentration (7%) after 48 hrs. nZVI-Osorb composites did, however, produce more favourable iron oxide species, which remain conducive to electron transfer from the core Fe⁰ atoms. After 50 days, the majority of nZVI in nZVI-Osorb had oxidised to maghemite (30%) and magnetite (26%), compared to 19% and 12% respectively for the control nZVI. Unreactive hematite accounted for 47% of the control and just 36% of the composite. While 1-pot embedment allowed the most substantial control over the final iron composition, the oxygen-free method allowed the most reliable preservation of initial nZVI⁰ concentrations through restricted oxidation. Materials generated through oxygen-free embedment were utilised in the subsequent water treatment trials with TCP. Parameters related to the sorption and degradation mechanisms of TCP by nZVI-Osorb were tested in aerobic conditions, e.g. surface and potable water. nZVI-Osorb materials demonstrated high extraction capacity for TCP from aqueous solutions (Qe = 1286.4 ± 13.5 mg TCP/g Osorb, Qe = 1253 ± 106.7 mg TCP/g nZVI-Osorb, pH 5.1, 120 mg/L TCP) and followed pseudo-second-order kinetics. In the broader class of chlorophenols, sorptive affinity mirrored partitioning values, with highly substituted chlorophenols displaying the highest sorption capacities. Degradation of TCP by nZVI-Osorb or nZVI controls was not observed, due to corrosive hindrance and inadequate reductive capacity, suggesting that the materials may not be suitable for highly aerated surface and potable water treatment systems. Environmental conditions pertinent to the sorption and degradation mechanisms were evaluated to improve understanding and robustness of functionality in low-oxygen applications, such as wastewater and anaerobic digesters, where nZVI-Osorb treatment is anticipated to be advantageous to TCP sorption and methane production. pH was found to influence sorption dramatically: acidic solutions below pH 5 showed sorption >90%, while this capacity was reduced to <30% when the pH was raised above the TCP pKa value (6.23), to 7 and above. Further trials found a positive effect on TCP sorption (+7.55%) linked to a net pH reduction (5.1 to 3.3) with the addition of secondary acids (volatile fatty acids: acetic, propionic, butyric, 3 x 100 mg/L) commonly found in anaerobic digester systems. Salinity did not affect TCP sorption. The removal of dissolved and atmospheric oxygen increased total sorption (40 ppm: +1.94%, 100 ppm: +7.93%, 200 ppm: +0.89%, 400 mg/L: +14.59%) through reduced iron corrosion and the production of favourable iron oxides, but did not facilitate contaminant degradation. Biodegradation mechanisms for TCP have broadly been established, and new research has supported the improved cometabolic degradation of recalcitrant contaminants like TCP and PCP in nZVI-dosed anaerobic digesters. Model anaerobic digester systems (3.9 g/L nZVI-Osorb, 25 mg/L TCP, 240 mg/L acetic, 120 mg/L propionic, 120 mg/L butyric acid) containing bioreactor sludge (62.5%) were observed through standard water quality diagnostics (pH, ORP, COD, head pressure) for 14 days and suggested that nZVI-Osorb did not inhibit cellular processes. Increased electron activity from iron corrosion and hydrogen gas production increased the overall pH and decreased the total ORP in these AD systems. TCP degradation by-products (DCP, CP) were detected in dilute concentrations (<0.01 mg/L) with poor recovery by LC-MS/MS. The results suggest that nZVI-Osorb may be a well-suited additive for AD systems. This study contributes to knowledge of the properties, functionality, and treatment mechanisms of metal-sorbent composites with a model chlorinated aromatic water contaminant in aerobic and anaerobic environments. The work identifies favourable environmental and process conditions for applying these materials in larger scale applications, particularly anaerobic digestion, and provides support for the continued refinement and improvement of nZVI-based remediation systems.
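
    The abstract reports that TCP uptake followed pseudo-second-order kinetics. For reference, here is the integrated form of that standard model in a short sketch; the equilibrium capacity is taken from the abstract for illustration, while the rate constant is an assumed placeholder, not a value from the thesis.

```python
def pseudo_second_order_q(t_min, qe_mg_g, k2_g_mg_min):
    """Integrated pseudo-second-order sorption model:
        dq/dt = k2 * (qe - q)**2   =>   q(t) = (qe**2 * k2 * t) / (1 + qe * k2 * t)
    qe : equilibrium sorption capacity (mg TCP per g sorbent)
    k2 : pseudo-second-order rate constant (g per mg per min), assumed here
    """
    return (qe_mg_g ** 2 * k2_g_mg_min * t_min) / (1.0 + qe_mg_g * k2_g_mg_min * t_min)

# Example with the reported Qe for nZVI-Osorb (~1253 mg/g) and a placeholder k2.
qe, k2 = 1253.0, 1e-5
for t in (10, 30, 60, 120, 240):
    q = pseudo_second_order_q(t, qe, k2)
    print(f"t = {t:3d} min  q = {q:7.1f} mg/g  ({100 * q / qe:.0f}% of Qe)")
```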

    Analysis of buffer allocations in time-dependent and stochastic flow lines

    This thesis reviews and classifies the literature on the Buffer Allocation Problem under steady-state conditions and on performance evaluation approaches for queueing systems with time-dependent parameters. Subsequently, new performance evaluation approaches are developed. Finally, a local search algorithm for the derivation of time-dependent buffer allocations is proposed. The algorithm is based on numerically observed monotonicity properties of the system performance with respect to the time-dependent buffer allocations. Numerical examples illustrate that time-dependent buffer allocations represent an adequate way of minimizing the average work-in-process (WIP) in the flow line while achieving a desired service level.
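
    The thesis's algorithm is not reproduced in this abstract; the following minimal sketch shows one generic way a local search could exploit an assumed monotonicity property (enlarging a buffer never decreases the service level but increases average WIP). The evaluation function is a stand-in for whatever transient performance evaluation model is used; all names and parameters are illustrative.

```python
from typing import Callable, List, Tuple

def local_search_buffer_allocation(
    n_periods: int,
    n_buffers: int,
    evaluate: Callable[[List[List[int]]], Tuple[float, float]],
    service_target: float,
    max_buffer: int = 50,
) -> List[List[int]]:
    """Greedy local search over a time-dependent buffer allocation.

    allocation[t][b] is the size of buffer b during period t. evaluate()
    returns (service_level, avg_wip) for a candidate allocation. Assuming
    monotonicity, we repeatedly grow the single buffer/period whose increment
    buys the most service level per unit of added WIP, until the target is met."""
    allocation = [[1] * n_buffers for _ in range(n_periods)]
    service, wip = evaluate(allocation)
    while service < service_target:
        best = None
        for t in range(n_periods):
            for b in range(n_buffers):
                if allocation[t][b] >= max_buffer:
                    continue
                allocation[t][b] += 1
                s, w = evaluate(allocation)
                allocation[t][b] -= 1
                gain = (s - service) / max(w - wip, 1e-9)
                if best is None or gain > best[0]:
                    best = (gain, t, b, s, w)
        if best is None:
            break  # every buffer is already at its maximum size
        _, t, b, service, wip = best
        allocation[t][b] += 1
    return allocation
```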

    Control of Energy Storage

    Energy storage can provide numerous beneficial services and cost savings within the electricity grid, especially when facing future challenges like renewable and electric vehicle (EV) integration. Public bodies, private companies and individuals are deploying storage facilities for several purposes, including arbitrage, grid support, renewable generation, and demand-side management. Storage deployment can therefore yield benefits like reduced frequency fluctuation, better asset utilisation and more predictable power profiles. Such uses of energy storage can reduce the cost of energy, reduce the strain on the grid, reduce the environmental impact of energy use, and prepare the network for future challenges. This Special Issue of Energies explores the latest developments in the control of energy storage in support of the wider energy network, focusing on the control of storage rather than the storage technology itself.