
    The study of activated sludge settleability using the solids-flux analysis

    The activated sludge was cultivated in two pilot-scale activated sludge systems with three influent BOD-to-N ratios: 20:1, 70:1 and 300:1. The aeration tank of each system was constructed in one of two configurations: one without compartments, the other divided into six compartments. Sludge withdrawn from the last compartment of each system was tested in a one-liter graduated cylinder to measure its zone settling velocity, and the solids-flux method was used to analyze the sludge settling characteristic as a function of solids concentration. The results show that sludge grown under the nitrogen-sufficient condition, as well as sludge cultivated in a compartmentalized aeration tank under the nitrogen-deficient condition, settled excellently. In contrast, poor-settling sludge was found in the severely nitrogen-limited system and in the nitrogen-deficient system without compartments in the aeration tank. This study indicates that sufficient nitrogen in the wastewater is necessary for successful treatment, and that compartmentalization of the aeration tank can improve the efficiency of the secondary sedimentation tank.
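    The solids-flux calculation described above can be sketched briefly. This is a minimal illustration, not the study's actual data analysis: it assumes the common Vesilind zone-settling model v(C) = v0·exp(−k·C), with illustrative parameter values v0 and k that would in practice be fitted to the cylinder tests.

```python
import math

def settling_velocity(c, v0=7.0, k=0.4):
    """Vesilind zone-settling model: v = v0 * exp(-k * C).
    C in g/L, v in m/h; v0 and k are illustrative values only."""
    return v0 * math.exp(-k * c)

def gravity_flux(c, v0=7.0, k=0.4):
    """Gravity solids flux G = C * v(C) (kg/m^2/h for C in g/L, v in m/h)."""
    return c * settling_velocity(c, v0, k)

# Scan concentrations to locate the peak of the flux curve; the limiting
# flux around this peak bounds the solids loading that the secondary
# sedimentation tank can pass.
concs = [i * 0.1 for i in range(1, 120)]
fluxes = [gravity_flux(c) for c in concs]
peak_c = concs[fluxes.index(max(fluxes))]
```

    For the Vesilind model the flux curve peaks at C = 1/k, which is why the scan above finds its maximum near 2.5 g/L for k = 0.4.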

    Waiting times for target detection models

    One of the major developments in the theory of visual search is the establishment of a performance model based on fitting the search-time distribution. Such a distribution is examined, based on a paper by Morawski et al. A modification of the traditional traveling salesman problem is made to relate specifically to the development of optimal search strategies: capture probabilities are inserted at the cities to be visited, and the traditional dynamic programming algorithms are adapted to this added stochastic feature. A countably infinite version of this stochastic modification is also formulated. For this formulation, the typical ingredients of infinite dynamic programs are explored, including the convergence of the optimal value function, Bellman's functional equation, and the construction of optimal (in this case only conditionally optimal) strategies. Visual search is a process involving both deterministic and random components. This idea is incorporated into a second search model, for which the expected value, variance and distribution of the search time are computed and also approximated numerically. An accelerated Monte Carlo method is discussed in connection with the numerical approximation of the search-time distribution.
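    The stochastic traveling-salesman modification can be illustrated with a toy brute-force version. This is a hedged sketch, not the paper's dynamic programming formulation: it assumes a searcher starting at node 0, a travel-time matrix `dist`, and per-city capture probabilities `p` (all invented for illustration), and it enumerates visit orders to minimize the expected time of first detection.

```python
import itertools

def expected_detection_time(order, dist, p):
    """Expected time of first detection along a visit order.
    At each city i the target is detected with probability p[i];
    surviving probability mass at the end of the sweep contributes
    the final arrival time as a lower bound."""
    t, survive, expected = 0.0, 1.0, 0.0
    here = 0
    for city in order:
        t += dist[here][city]
        expected += survive * p[city] * t
        survive *= 1.0 - p[city]
        here = city
    expected += survive * t  # target not found on this sweep
    return expected

def best_order(dist, p):
    """Brute-force the visit order minimizing expected detection time."""
    cities = range(1, len(p))
    return min(itertools.permutations(cities),
               key=lambda o: expected_detection_time(o, dist, p))
```

    A dynamic program over (current city, set of visited cities, survival probability) replaces this factorial enumeration in the actual formulation; the countably infinite version of the paper extends that recursion to infinitely many cities.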

    Burglarproof WEP Protocol on Wireless Infrastructure

    With the popularization of wireless networks, security has become increasingly important. The TKIP protocol proposed in the IEEE 802.11i draft was expected to harden WEP (Wired Equivalent Privacy) against both active and passive attacks; in particular, it takes a more deliberate approach to generating, managing and distributing secret keys, and it requires only a software upgrade rather than new hardware. However, implementing TKIP on existing equipment degrades transmission performance dramatically. This article presents a new scheme, the Burglarproof WEP Protocol (BWP), which encrypts the WEP key twice to remedy the security drawbacks of the original WEP while retaining good transmission performance. The proposed method focuses on modifying the encryption stages to avoid the low performance of TKIP, providing a better transmission rate without sacrificing security on current hardware configurations.
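    The "encrypt twice" idea can be sketched in a simplified form. This is purely illustrative and is not the actual BWP key schedule: it assumes two independent keys and layers two standard RC4 keystreams (the cipher underlying WEP) by XOR. RC4 and WEP are long broken and this sketch carries no security claim.

```python
def rc4_keystream(key, n):
    """Standard RC4: key-scheduling (KSA) followed by keystream
    generation (PRGA). `key` is a bytes object, returns n keystream bytes."""
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return out

def double_encrypt(data, key1, key2):
    """Illustrative double encryption: XOR with two independent RC4
    keystreams. The two-key layering is an assumption for illustration,
    not the published BWP construction."""
    ks1 = rc4_keystream(key1, len(data))
    ks2 = rc4_keystream(key2, len(data))
    return bytes(b ^ a ^ c for b, a, c in zip(data, ks1, ks2))
```

    Because both layers are stream-cipher XORs, applying `double_encrypt` a second time with the same keys recovers the plaintext, and the per-byte cost is essentially that of running RC4 twice.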

    Device-independent point estimation from finite data and its application to device-independent property estimation

    The device-independent approach to physics is one where conclusions are drawn directly from the observed correlations between measurement outcomes. In quantum information, this approach allows one to make strong statements about the properties of the underlying systems or devices solely via the observation of Bell-inequality-violating correlations. However, since one can only perform a {\em finite number} of experimental trials, statistical fluctuations necessarily accompany any estimation of these correlations. Consequently, an important gap remains between the many theoretical tools developed for the asymptotic scenario and the experimentally obtained raw data. In particular, a physical and concurrently practical way to estimate the underlying quantum distribution has so far remained elusive. Here, we show that the natural analogs of the maximum-likelihood estimation technique and the least-square-error estimation technique in the device-independent context result in point estimates of the true distribution that are physical, unique, computationally tractable and consistent. They thus serve as sound algorithmic tools allowing one to bridge the aforementioned gap. As an application, we demonstrate how such estimates of the underlying quantum distribution can be used to provide, in certain cases, trustworthy estimates of the amount of entanglement present in the measured system. In stark contrast to existing approaches to device-independent parameter estimation, our estimation does not require prior knowledge of {\em any} Bell inequality tailored for the specific property and the specific distribution of interest. Comment: Essentially published version, but with the typo in Eq. (E5) corrected
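    The finite-data starting point the abstract refers to can be sketched for the simplest Bell test. This is a minimal illustration, not the paper's regularized estimators: it computes the raw empirical correlators and the CHSH value from invented outcome counts, i.e., the naive point estimate that the maximum-likelihood and least-square-error techniques then project back onto the physical (quantum) set.

```python
def correlator(counts):
    """Empirical E(x, y) = (N_same - N_diff) / N from outcome counts.
    `counts` maps outcome pairs (a, b) with a, b in {+1, -1} to trial counts."""
    same = counts.get((1, 1), 0) + counts.get((-1, -1), 0)
    diff = counts.get((1, -1), 0) + counts.get((-1, 1), 0)
    return (same - diff) / (same + diff)

def chsh(count_tables):
    """CHSH value S = E(0,0) + E(0,1) + E(1,0) - E(1,1) from raw counts.
    count_tables[(x, y)] is the outcome-count dict for setting pair (x, y)."""
    return (correlator(count_tables[(0, 0)])
            + correlator(count_tables[(0, 1)])
            + correlator(count_tables[(1, 0)])
            - correlator(count_tables[(1, 1)]))
```

    With finitely many trials, such raw frequencies can land outside the quantum set (or even violate no-signaling), which is exactly the gap between raw data and asymptotic tools that the proposed point estimation closes.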

    Naturally restricted subsets of nonsignaling correlations: typicality and convergence

    It is well known that in a Bell experiment, the observed correlation between measurement outcomes -- as predicted by quantum theory -- can be stronger than that allowed by local causality, yet not fully constrained by the principle of relativistic causality. In practice, the characterization of the set $Q$ of quantum correlations is often carried out through a converging hierarchy of outer approximations. On the other hand, some subsets of $Q$ arising from additional constraints [e.g., originating from quantum states having positive partial transposition (PPT) or being finite-dimensional maximally entangled (MES)] turn out to be also amenable to similar numerical characterizations. How, then, at a quantitative level, do all these naturally restricted subsets of nonsignaling correlations differ? Here, we consider several bipartite Bell scenarios and numerically estimate their volume relative to that of the set of nonsignaling correlations. Within the cases investigated, we have observed that (1) for a given number of inputs $n_s$ (outputs $n_o$), the relative volume of both the Bell-local set and the quantum set increases (decreases) rapidly with increasing $n_o$ ($n_s$); (2) although the so-called macroscopically local set $Q_1$ may approximate $Q$ well in the two-input scenarios, it can be a very poor approximation of the quantum set when $n_s > n_o$; (3) the almost-quantum set $\tilde{Q}_1$ is an exceptionally good approximation to the quantum set; (4) the difference between $Q$ and the set of correlations originating from MES is most significant when $n_o = 2$; whereas (5) the difference between the Bell-local set and the PPT set generally becomes more significant with increasing $n_o$. This last comparison, in particular, allows us to identify Bell scenarios where there is little hope of realizing a Bell violation with PPT states, and those that deserve further exploration. Comment: v4: published version (in Quantum); v3: substantially rewritten, main results summarized in 10 observations, 8 figures, and 7 tables; v2: results updated; v1: 13 + 4 pages, 10 tables, 5 figures; this is [66] of arXiv:1810.00443; comments are welcome
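    The relative-volume estimation can be illustrated with a hit-or-miss Monte Carlo toy in the simplest setting. This is a hedged sketch, not the paper's computation: it works only in the four-dimensional full-correlator space of the two-input, two-output scenario (not the full nonsignaling polytope), and uses the fact that the eight CHSH expressions bound the local region there, while the Tsirelson bound $2\sqrt{2}$ gives an outer approximation of the quantum region.

```python
import random

def chsh_values(e):
    """The four |CHSH| expressions over correlators e = (E00, E01, E10, E11);
    each places the minus sign on one term, and abs() covers the global
    sign flip, so together these are all eight CHSH facets."""
    vals = []
    for k in range(4):
        signs = [1, 1, 1, 1]
        signs[k] = -1
        vals.append(abs(sum(s * x for s, x in zip(signs, e))))
    return vals

def relative_volume(n_samples=100_000, bound=2.0, seed=0):
    """Hit-or-miss Monte Carlo: fraction of correlator vectors drawn
    uniformly from [-1, 1]^4 whose CHSH values all stay within `bound`
    (2 -> local polytope; 2*sqrt(2) -> Tsirelson outer bound)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        e = [rng.uniform(-1, 1) for _ in range(4)]
        if max(chsh_values(e)) <= bound:
            hits += 1
    return hits / n_samples
```

    In the scenarios the paper actually treats, sampling happens in the full nonsignaling polytope and membership in $Q$ or its subsets is decided by semidefinite programming rather than a closed-form inequality, but the hit-or-miss logic is the same.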