2,007 research outputs found

    Data compression by a decreasing slope-threshold test

    High resolution can be obtained at large compression ratios with a method for selecting data points for transmission by telemetry in a television compressed-data system. The slope of the raw data stream is tested against a symmetric pair of decreasing thresholds. When either threshold is exceeded, the data are sampled and transmitted; the thresholds are then reset, and the test begins again.
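As a rough illustration of the test, here is a minimal Python sketch; the threshold start value `t0`, the decay factor, and the choice to measure slope from the last transmitted point are assumptions, since the brief does not specify them:

```python
def slope_threshold_compress(samples, t0=4.0, decay=0.9):
    """Select samples whose slope (from the last transmitted point)
    exceeds a symmetric pair of decreasing thresholds +/- threshold."""
    out = [(0, samples[0])]            # always transmit the first point
    threshold = t0
    ref_i, ref_x = 0, samples[0]       # last transmitted point
    for i in range(1, len(samples)):
        slope = (samples[i] - ref_x) / (i - ref_i)
        if slope > threshold or slope < -threshold:
            out.append((i, samples[i]))      # sample and transmit
            ref_i, ref_x = i, samples[i]
            threshold = t0                   # reset; test begins again
        else:
            threshold *= decay               # thresholds keep decreasing
    return out
```

On a flat-then-step signal the decaying threshold eventually fires, so only two of six points are transmitted.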

    Performance analysis under finite load and improvements for multirate 802.11

    Automatic rate adaptation in CSMA/CA wireless networks may cause drastic throughput degradation for high-bit-rate stations (STAs). The CSMA/CA medium access method guarantees equal long-term channel access probability to all hosts when they are saturated. Previous work has shown that the saturation throughput of any STA is limited by the saturation throughput of the STA with the lowest bit rate in the same infrastructure. To overcome this problem, we first introduce in this paper a new model for finite-load sources with multirate capabilities. We use our model to investigate the throughput degradation outside and inside the saturation regime. We define a new fairness index based on channel occupation time to provide a more suitable definition of fairness in multirate environments. Further, we propose two simple but powerful mechanisms that partly bypass the observed decline in performance and meet the proposed fairness criterion. Finally, we use our finite-load source model to evaluate the proposed mechanisms in terms of total throughput and MAC-layer delay for various network configurations.
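The abstract does not give the exact definition of the proposed fairness index; as an illustrative assumption, a Jain-type index computed over per-STA channel occupation times (rather than over throughputs) could look like this:

```python
def occupation_time_fairness(occupation_times):
    """Jain-style fairness index over per-STA channel occupation times:
    1.0 when all STAs occupy the channel equally, approaching 1/n when
    one STA monopolizes it."""
    n = len(occupation_times)
    total = sum(occupation_times)
    sq = sum(t * t for t in occupation_times)
    return (total * total) / (n * sq)
```

Measuring fairness in occupation time rather than throughput is what makes the index meaningful in multirate cells: a slow STA can be "fair" while delivering fewer bits.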

    Stochastic Chemical Reactions in Micro-domains

    Traditional chemical kinetics may be inappropriate for describing chemical reactions in micro-domains involving only a small number of substrate and reactant molecules. Starting with the stochastic dynamics of the molecules, we derive a master-diffusion equation for the joint probability density of a mobile reactant and the number of bound substrate molecules in a confined domain. We use the equation to calculate the fluctuations in the number of bound substrate molecules as a function of the initial reactant distribution. A second model is presented, based on a Markov description of the binding and unbinding and on the mean first passage time of a molecule to a small portion of the boundary. These models can be used to describe noise due to gating of ionic channels by random binding and unbinding of ligands in biological sensor cells, such as olfactory cilia, photoreceptors, and hair cells in the cochlea. (Comment: 33 pages, Journal of Chemical Physics)
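The second (Markov) model of binding and unbinding can be illustrated with a minimal Gillespie-type simulation of the number of bound sites; the rate constants and site count below are illustrative assumptions, and reactant diffusion is ignored:

```python
import random

def gillespie_binding(n_sites, k_on, k_off, t_end, seed=1):
    """Simulate the number of bound substrate sites as a birth-death
    Markov chain: free sites bind at rate k_on each, bound sites
    release at rate k_off each. Returns the count at time t_end."""
    rng = random.Random(seed)
    t, bound = 0.0, 0
    while True:
        rate_bind = k_on * (n_sites - bound)   # total binding rate
        rate_unbind = k_off * bound            # total unbinding rate
        total = rate_bind + rate_unbind
        t += rng.expovariate(total)            # exponential waiting time
        if t > t_end:
            return bound
        if rng.random() * total < rate_bind:
            bound += 1
        else:
            bound -= 1
```

At stationarity the bound count is binomial with mean `n_sites * k_on / (k_on + k_off)`, so the fluctuations the abstract refers to are directly observable in repeated runs.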

    Bathymetric Artifacts in Sea Beam Data: How to Recognize Them and What Causes Them

    Sea Beam multibeam bathymetric data have greatly advanced understanding of the deep seafloor. However, several types of bathymetric artifacts have been identified in Sea Beam's contoured output. Surveys with many overlapping swaths and digital recording on magnetic tape of Sea Beam's 16 acoustic returns made it possible to evaluate actual system performance. The artifacts are not due to the contouring algorithm used. Rather, they result from errors in echo detection and processing. These errors are due to internal factors such as side lobe interference, bottom-tracking gate malfunctions, or external interference from other sound sources (e.g., 3.5 kHz echo sounders or seismic sound sources). Although many artifacts are obviously spurious and would be disregarded, some (particularly the omega effects described in this paper) are more subtle and could mislead the unwary observer. Artifacts observed could be mistaken for volcanic constructs, abyssal hill trends, hydrothermal mounds, slump blocks, or channels and could seriously affect volcanic, tectonic, or sedimentological interpretations. Misinterpretation of these artifacts may result in positioning errors when seafloor bathymetry is used to navigate the ship. Considering these possible geological misinterpretations, a clear understanding of the Sea Beam system's capabilities and limitations is deemed essential.

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, so load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains with regard to response time and cost savings under dynamic workload scenarios. (Comment: 20 pages, 4 figures, 3 tables, conference paper)

    Enhancing IEEE 802.11 MAC in congested environments

    IEEE 802.11 is currently the most widely deployed wireless local area networking standard. It uses carrier sense multiple access with collision avoidance (CSMA/CA) to resolve contention between nodes. Contention windows (CW) change dynamically to adapt to the contention level: upon each collision, a node doubles its CW to reduce the risk of further collisions. Upon a successful transmission, the CW is reset, on the assumption that the contention level has dropped. However, the contention level is more likely to change slowly, and resetting the CW causes new collisions and retransmissions before the CW reaches the optimal value again. This wastes bandwidth and increases delays. In this paper we analyze simple slow CW decrease functions and compare their performance to that of the legacy standard. We use simulations and mathematical modeling to show their considerable improvements at all contention levels and in transient phases, especially in highly congested environments.
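The abstract does not spell out the slow-decrease functions analysed; as a sketch, multiplicative halving on success is one simple member of that family, contrasted here with the legacy reset-to-minimum rule (CW bounds are the usual powers of two, chosen for illustration):

```python
def legacy_update(cw, success, cw_min=16, cw_max=1024):
    """Standard 802.11 backoff: double on collision, reset on success."""
    return cw_min if success else min(2 * cw, cw_max)

def slow_decrease_update(cw, success, cw_min=16, cw_max=1024):
    """Slow decrease: halve on success instead of resetting, so the CW
    tracks a slowly changing contention level instead of forgetting it."""
    return max(cw // 2, cw_min) if success else min(2 * cw, cw_max)
```

After a success at CW = 256, the legacy rule drops straight back to 16 (risking a fresh collision burst under high contention), while the slow-decrease rule only falls to 128.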

    Asymptotic Expansions for the Conditional Sojourn Time Distribution in the M/M/1-PS Queue

    We consider the M/M/1 queue with processor sharing. We study the conditional sojourn time distribution, conditioned on the customer's service requirement, in various asymptotic limits. These include large time and/or large service request, and heavy traffic, where the arrival rate is only slightly less than the service rate. The asymptotic formulas relate to, and extend, some results of Morrison \cite{MO} and Flatto \cite{FL}. (Comment: 30 pages, 3 figures and 1 table)
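For context, the conditional mean sojourn time in the M/M/1-PS queue is known exactly; the asymptotics studied here concern the full conditional distribution, not just this mean:

```latex
\mathbb{E}[T \mid X = x] \;=\; \frac{x}{1-\rho}, \qquad \rho = \frac{\lambda}{\mu} < 1,
```

where $x$ is the customer's service requirement, $\lambda$ the arrival rate, and $\mu$ the service rate. The heavy-traffic limit mentioned in the abstract is precisely the regime $\rho \uparrow 1$, where this mean blows up.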

    A Statistical Mechanical Load Balancer for the Web

    The maximum entropy principle from statistical mechanics states that a closed system attains an equilibrium distribution that maximizes its entropy. We first show that for graphs with a fixed number of edges one can define a stochastic edge dynamic that serves as an effective thermalization scheme; hence the underlying graphs are expected to attain their maximum-entropy states, which turn out to be Erdos-Renyi (ER) random graphs. We next show that (i) a rate-equation based analysis of the node degree distribution does indeed confirm the maximum-entropy principle, and (ii) the edge dynamic can be effectively implemented using short random walks on the underlying graphs, leading to a local algorithm for the generation of ER random graphs. The resulting statistical mechanical system can be adapted to provide a distributed and local (i.e., without any centralized monitoring) mechanism for load balancing, which can have a significant impact on increasing the efficiency and utilization of both the Internet (e.g., efficient web mirroring) and large-scale computing infrastructure (e.g., cluster and grid computing). (Comment: 11 pages, 5 PostScript figures; added references, expanded on protocol discussion)
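The paper implements the edge dynamic via short random walks; as a simplified stand-in, the sketch below resamples one endpoint of a random edge uniformly while keeping the edge count fixed (uniform endpoint resampling, not the paper's random-walk mechanism, is an assumption here):

```python
import random

def thermalize(n, edges, steps, seed=0):
    """Fixed-edge-count stochastic edge dynamic on n nodes: repeatedly
    detach one endpoint of a random edge and reattach it to a uniformly
    random node, rejecting self-loops and parallel edges. Long runs are
    expected to mix toward an Erdos-Renyi-like ensemble."""
    rng = random.Random(seed)
    edges = {frozenset(e) for e in edges}
    for _ in range(steps):
        e = rng.choice(list(edges))
        u, v = tuple(e)
        keep = u if rng.random() < 0.5 else v   # endpoint that stays
        w = rng.randrange(n)                    # proposed new endpoint
        new = frozenset((keep, w))
        if w == keep or new in edges:
            continue                            # reject invalid move
        edges.remove(e)
        edges.add(new)
    return edges
```

Because every accepted move removes one edge and adds one, the edge count (the conserved quantity of the closed system) is invariant under the dynamic.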

    Distributed Hashtable on Pre-structured Overlay Networks
