
    Transport Protocol Throughput Fairness

    Interest continues to grow in alternative transport protocols to the Transmission Control Protocol (TCP). These alternatives include protocols designed to give greater efficiency in high-speed, high-delay environments (so-called high-speed TCP variants), and protocols that provide congestion control without reliability. In the former category, alongside the deployed base of ‘vanilla’ TCP – TCP NewReno – the TCP variants BIC and CUBIC are widely used within Linux; in the latter category, the Datagram Congestion Control Protocol (DCCP) is currently on the IETF Standards Track. It is clear that future traffic patterns will consist of a mix of flows from these protocols (and others), so it is important for users and network operators to be aware of the impact that these protocols may have on each other. We measure the throughput fairness of DCCP Congestion Control ID 2 (CCID2) relative to TCP NewReno and to the variants Binary Increase Congestion control (BIC), CUBIC and Compound, all in “out-of-the-box” configurations. We use a testbed and end-to-end measurements to assess overall throughput, and also to assess fairness – how well these protocols respond to each other when operating over the same end-to-end network path. We find that, in our testbed, DCCP CCID2 shows good fairness with NewReno, while BIC, CUBIC and Compound show unfairness above round-trip times of 25 ms.
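    Throughput fairness between competing flows of this kind is commonly quantified with Jain's fairness index; a minimal sketch (the index itself is standard, and the sample throughput values in the test are purely illustrative, not measurements from the paper):

```python
def jain_fairness_index(throughputs):
    # Jain's index: (sum x)^2 / (n * sum x^2).
    # 1.0 means all flows get equal throughput; 1/n means one flow takes all.
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return (total * total) / (n * sum_sq)
```

    Applied per RTT setting to the measured per-flow throughputs, the index drops below 1.0 as one protocol starves the other.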

    Superbunching and Nonclassicality as new Hallmarks of Superradiance

    Superradiance, i.e., the spontaneous emission of coherent radiation by an ensemble of identical two-level atoms in the collective states introduced by Dicke in 1954, is one of the enigmatic problems of quantum optics. The startling gist is that even though the atoms have no dipole moment they radiate with increased intensity in particular directions. Following the advances in our understanding of superradiant emission by atoms in entangled W states, we examine the quantum statistical properties of superradiance. Such investigations require the system to have at least two excitations, as one needs to explore the photon-photon correlations of the radiation emitted by such states. We present specifically results for the spatially resolved photon-photon correlations of systems prepared in doubly excited W states and give conditions under which the atomic system emits nonclassical light. Equally, we derive the conditions for the occurrence of bunching and even of superbunching, a rare phenomenon otherwise known only from nonclassical states of light such as the squeezed vacuum. We finally investigate the photon-photon cross correlations of the spontaneously scattered light and highlight the nonclassicality of such correlations.
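    The photon-photon correlations referred to above are captured by the standard second-order correlation function of quantum optics; as a sketch of the usual definitions and bounds (standard textbook material, not equations taken from the paper itself), for fields detected at positions r1 and r2:

```latex
g^{(2)}(\mathbf{r}_1,\mathbf{r}_2)
  = \frac{\langle \hat{E}^{(-)}(\mathbf{r}_1)\,\hat{E}^{(-)}(\mathbf{r}_2)\,
           \hat{E}^{(+)}(\mathbf{r}_2)\,\hat{E}^{(+)}(\mathbf{r}_1)\rangle}
         {\langle \hat{I}(\mathbf{r}_1)\rangle\,\langle \hat{I}(\mathbf{r}_2)\rangle}
```

    Values g⁽²⁾ > 1 indicate bunching, g⁽²⁾ > 2 superbunching (exceeding the thermal-light value of 2), and g⁽²⁾ < 1 nonclassical (antibunched) light; cross correlations between two detection positions are nonclassical when they violate the Cauchy-Schwarz inequality [g⁽²⁾₁₂]² ≤ g⁽²⁾₁₁ g⁽²⁾₂₂.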

    Depth estimation of metallic objects using multiwavelets scale-space representation

    The problem of dimensional defects in aluminum die-castings is widespread throughout the foundry industry, and their detection is of paramount importance in maintaining product quality. Due to the unpredictable factory environment and the highly reflective nature of the metallic parts, it is extremely hard to estimate the true dimensions of these parts autonomously. Some existing vision systems are capable of estimating depth to high accuracy, but are very much hardware dependent, involving the use of light and laser pattern projectors integrated into vision systems, or laser scanners. Due to the reflective nature of these metallic parts and variable factory environments, such vision systems tend to exhibit unpromising performance; moreover, hardware dependency makes these systems cumbersome and costly. In this work, we propose a novel, robust 3D reconstruction algorithm capable of reconstructing dimensionally accurate 3D depth models of aluminum die-castings. The developed system is very simple and cost effective, as it consists of only a pair of stereo cameras and a diffused fluorescent light. The proposed vision system is capable of estimating surface depths to within an accuracy of 0.5 mm. In addition, the system is invariant to illuminative variations as well as to the orientation and location of the objects in the input image space, making the developed system highly robust. Due to its hardware simplicity and robustness, it can be implemented in different factory environments without a significant change in the setup. The proposed system is a major part of a quality inspection system for the automotive manufacturing industry.
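    Depth recovery from a calibrated stereo pair ultimately rests on the standard pinhole triangulation relation Z = f·B/d; a minimal sketch of that relation and its first-order error sensitivity (the relation is textbook stereo geometry, and the focal length, baseline and disparity values used below are illustrative assumptions, not the paper's calibration):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    # Pinhole stereo triangulation: Z = f * B / d.
    # Focal length in pixels, baseline in mm -> depth in mm.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

def depth_sensitivity(focal_px, baseline_mm, depth_mm, disparity_err_px=1.0):
    # First-order sensitivity of depth to disparity error:
    # |dZ| ~= Z^2 / (f * B) * delta_d, so sub-pixel matching is what
    # makes sub-millimetre accuracy claims plausible at close range.
    return depth_mm ** 2 / (focal_px * baseline_mm) * disparity_err_px
```

    For example, with a (hypothetical) 1000 px focal length and 60 mm baseline, a 12 px disparity triangulates to 5 m, while at 100 mm working distance one pixel of disparity error costs well under a millimetre of depth.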

    Stereo correspondence estimation using multiwavelets scale-space representation-based multiresolution analysis

    A multiresolution technique based on multiwavelets scale-space representation for stereo correspondence estimation is presented. The technique uses the well-known coarse-to-fine strategy, in which stereo correspondences are calculated at the coarsest resolution level and then refined successively up to the finest level. Vector coefficients of the multiwavelets transform modulus are used as corresponding features, where modulus maxima define shift-invariant high-level features (multiscale edges), with phase pointing to the normal of the feature surface. The technique addresses the estimation of optimal corresponding points and the corresponding 2D disparity maps. Illuminative variation that can exist between the perspective views of the same scene is controlled using scale normalization at each decomposition level, dividing the details-space coefficients by the approximation-space coefficients. The problems of ambiguity (explicitly) and occlusion (implicitly) are addressed by using a geometric topological refinement procedure. Geometric refinement is based on a symbolic tagging procedure, introduced to keep only the most consistent matches in consideration; symbolic tagging is performed based on probability of occurrence and multiple thresholds. The whole procedure is constrained by the uniqueness and continuity of the corresponding stereo features. The performance of the proposed algorithm is compared with eight well-known algorithms from the literature to validate the claims of promising performance.
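    The coarse-to-fine strategy described above can be sketched in miniature: match at the coarsest pyramid level with a full search, then double the estimate and refine it within a small window at each finer level. This sketch uses simple 2x-averaging pyramids and a sum-of-absolute-differences cost as stand-ins for the paper's multiwavelets approximation spaces and vector-coefficient features, and works on 1D rows for brevity:

```python
import numpy as np

def sad(left, right, d):
    # Sum of absolute differences for a candidate shift d.
    return np.abs(left[d:] - right[:len(right) - d]).sum()

def match_shift(left, right, max_d):
    # Full search: pick the shift with minimal SAD cost.
    costs = [sad(left, right, d) for d in range(max_d + 1)]
    return int(np.argmin(costs))

def coarse_to_fine_shift(left, right, levels=2, max_d=4):
    # Build 2x-averaging pyramids (a crude stand-in for the
    # multiwavelets approximation spaces; lengths must be divisible
    # by 2**levels).
    pyr_l, pyr_r = [left], [right]
    for _ in range(levels):
        pyr_l.append(pyr_l[-1].reshape(-1, 2).mean(axis=1))
        pyr_r.append(pyr_r[-1].reshape(-1, 2).mean(axis=1))
    # Full search only at the coarsest level...
    d = match_shift(pyr_l[-1], pyr_r[-1], max_d)
    # ...then double and refine within a +/-1 window at each finer level.
    for lvl in range(levels - 1, -1, -1):
        d *= 2
        cands = [max(0, d - 1), d, d + 1]
        costs = [sad(pyr_l[lvl], pyr_r[lvl], c) for c in cands]
        d = cands[int(np.argmin(costs))]
    return d
```

    The payoff of the strategy is that the expensive full search happens only on the smallest signals; every finer level costs a constant three evaluations.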

    Bioleaching of metal ions from low grade sulphide ore: Process optimization by using orthogonal experimental array design

    The present work was aimed at studying the bioleachability of metal ions from a low grade sulphide ore containing a high amount of carbonaceous material, using a selected moderately thermophilic strain of the acidophilic chemolithotrophic bacterium Sulfobacillus thermosulfidooxidans. The bioleaching process was optimized by constructing an L25 Taguchi orthogonal experimental array design and optimizing the proportions of the process parameters. Five factors were investigated, and twenty-five batch bioleaching tests were run under low, medium and high levels of these factors. The parameters considered for the shake-flask leaching experiments were initial pH (1.8, 2, 2.5, 3, 3.5), particle size (50, 100, 120, 200, 270 µm), pulp density (1, 5, 10, 15, 25%), temperature (40, 45, 47, 52, 57 °C) and agitation (100, 120, 180, 220, 280 rpm). Statistical analysis (ANOVA) was also employed to determine significant relationships between experimental conditions and yield levels. The experimental results for selective leaching showed that under the engineered leaching conditions – pH 1.8, particle size 120 µm, pulp density 10%, temperature 47 °C and agitation 180 rpm – the percent bioleachabilities of the metals were Zn 72%, Co 68%, Cu 78%, Ni 81% and Fe 70%, with an inoculum size of 1.0 × 10⁷ /mL.
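    An L25(5⁵) orthogonal array of the kind used above can be generated from arithmetic over GF(5); a minimal sketch (the construction is standard Taguchi/orthogonal-array theory, and this particular column choice is one of several equivalent ones, not necessarily the paper's):

```python
import numpy as np

def l25_orthogonal_array():
    # L25(5^5): index the 25 runs by (i, j) with i, j in {0..4}.
    # The columns i, j, i+j, i+2j, i+3j (mod 5) are pairwise orthogonal:
    # any two columns contain each of the 25 level pairs exactly once,
    # so main effects of the five factors can be separated.
    rows = [[i, j, (i + j) % 5, (i + 2 * j) % 5, (i + 3 * j) % 5]
            for i in range(5) for j in range(5)]
    return np.array(rows)
```

    Each row then defines one batch test by substituting the five listed levels of pH, particle size, pulp density, temperature and agitation for the symbols 0-4 in its five columns.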