Ghost and tachyon free gauge theories of gravity: A systematic approach
In this thesis, we present a systematic method for determining the conditions on the parameters in the action of a parity-preserving gauge theory of gravity about a Minkowski background for it to be free of ghost or tachyon particles. The approach naturally accommodates critical cases, in which the parameter values satisfy critical conditions that change the particle content and may lead to additional gauge invariances. In Chapter 1, we give an overall introduction to the field. We then introduce the systematic method in Chapter 2. The method is implemented as a computer program, and the details of its implementation are presented in Chapter 3. In Chapter 4, we apply the method to investigate the particle content of parity-conserving Poincaré gauge theory (PGT). We find 450 critical cases that are free of ghosts and tachyons and compare the no-ghost-and-tachyon conditions of some critical cases with the literature. In Chapter 5, we examine the power-counting renormalisability of some of the critical cases of PGT and clarify the treatment of non-propagating modes in determining whether a theory is power-counting renormalisable (PCR). We identify 58 of the ghost- and tachyon-free PGT critical cases that are also PCR, of which seven have 2 massless degrees of freedom (d.o.f.) in their propagating modes together with a massive mode, 12 have only 2 massless d.o.f., and 39 have only massive mode(s). In Chapter 6, we analyse parity-preserving Weyl gauge theory (WGT) in a similar way. Within a subset of WGT, we find 168 critical cases that are free of ghosts and tachyons. We further identify 40 of these cases that are also PCR. Of these theories, 11 have only massless tordion propagating particles, 23 have only a massive tordion propagating mode, and 6 have both. We also repeat our analysis for PGT and WGT with vanishing torsion or curvature, respectively. In Chapter 7, we summarise the contents of this thesis and suggest some future work.
The study of activated sludge settleability using the solids-flux analysis
The activated sludge was cultivated in two pilot-scale activated sludge systems under three BOD-to-N ratios of 20:1, 70:1 and 300:1 in the influent wastewater. The aeration tank of the activated sludge system was constructed in two different configurations: one without compartments in the tank, the other consisting of six compartments. The activated sludge withdrawn from the last compartment of each system was tested in a one-liter graduated cylinder to measure its zone settling velocity. The solids-flux method was employed to analyze the sludge settling characteristic as a function of solids concentration. The results show that the sludge grown under the nitrogen-sufficient condition, as well as the sludge cultivated in a compartmentalized aeration tank under the nitrogen-deficient condition, exhibited excellent settleability. In contrast, poorly settling sludge was found in the severely nitrogen-limited system and in the nitrogen-deficient system without compartments in the aeration tank. This study indicates that sufficient nitrogen in the wastewater is necessary for successful treatment, and that compartmentalization of the aeration tank can improve the efficiency of the secondary sedimentation tank.
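The solids-flux analysis mentioned above lends itself to a short numerical illustration. The sketch below is a minimal example, assuming the commonly used Vesilind model v(C) = V0·exp(-K·C) for the zone settling velocity; the parameter values V0, K and the underflow velocity u are hypothetical, not fitted to the data of this study.

```python
import numpy as np

# Hypothetical Vesilind parameters (illustrative only; not from this study):
V0 = 7.0    # m/h, settling velocity extrapolated to zero solids
K = 0.45    # m^3/kg, hindered-settling coefficient

def settling_velocity(c):
    """Vesilind zone settling velocity, v(C) = V0 * exp(-K * C)."""
    return V0 * np.exp(-K * c)

def gravity_flux(c):
    """Gravity solids flux, G(C) = C * v(C), in kg/(m^2 h)."""
    return c * settling_velocity(c)

u = 0.3                               # m/h, hypothetical underflow velocity
c = np.linspace(0.1, 15.0, 500)       # kg/m^3, solids concentration range
total = gravity_flux(c) + u * c       # gravity flux + bulk downward transport

# The limiting flux is the local minimum of the total-flux curve past its
# hump; it caps the solids loading the secondary clarifier can handle.
slope = np.gradient(total, c)
mins = np.where((slope[:-1] < 0) & (slope[1:] >= 0))[0]
if mins.size:
    i = mins[0]
    print(f"limiting flux ~ {total[i]:.2f} kg/(m^2 h) at C ~ {c[i]:.1f} kg/m^3")
```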
Waiting times for target detection models
One of the major developments in the theory of visual search is the establishment of a performance model based on fitting the search time distribution. Such a distribution is examined here, based on a paper by Morawski et al. A modification of the traditional traveling salesman problem is made to relate specifically to the development of optimal search strategies. The modification involves inserting capture probabilities at the cities to be visited, and adapts the traditional dynamic programming algorithms to this added stochastic feature. A countably infinite version of this stochastic modification is formulated. For this formulation, typical ingredients of infinite dynamic programs are explored, including the convergence of the optimal value function, Bellman's functional equation, and the construction of optimal (in this case only conditionally optimal) strategies. Visual search is a process involving certain deterministic, as well as random, components. This idea is incorporated into a second search model for which the expected value, variance and distribution of search time are computed, and also approximated numerically. A certain accelerated Monte Carlo method is discussed in connection with the numerical approximation of the distribution of search time.
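To make the dynamic-programming adaptation concrete, here is a minimal sketch of a finite search-TSP variant. It assumes the simplest reading of the model: the target sits at city i with prior probability Q[i], detection is certain upon arrival (the dissertation's capture probabilities would add a failure branch on top of this), and the searcher minimizes the expected detection time over visit orders. The distance matrix and probabilities are hypothetical. A Held-Karp-style bitmask DP works because traversing an edge delays every not-yet-visited city by the same amount.

```python
from functools import lru_cache

# Hypothetical instance: symmetric travel times and target-location
# probabilities (illustrative only; not from the dissertation).
DIST = [
    [0, 3, 5, 4],
    [3, 0, 2, 6],
    [5, 2, 0, 3],
    [4, 6, 3, 0],
]
Q = [0.1, 0.4, 0.3, 0.2]   # prior probability that the target is at city i
N = len(Q)
START = 0                  # start at city 0 (found at time zero if there)
FULL = (1 << N) - 1

@lru_cache(maxsize=None)
def best(mask, j):
    """Minimal expected *remaining* detection time, having already visited
    the cities in `mask` and standing at city j. Traversing an edge of
    length t adds t to the arrival time of every unvisited city, hence
    contributes t * (probability mass still unvisited)."""
    if mask == FULL:
        return 0.0
    rem = sum(Q[i] for i in range(N) if not mask & (1 << i))
    return min(
        DIST[j][k] * rem + best(mask | (1 << k), k)
        for k in range(N) if not mask & (1 << k)
    )

print(f"minimal expected detection time: {best(1 << START, START):.3f}")
```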
Burglarproof WEP Protocol on Wireless Infrastructure
With the popularization of wireless networks, security issues have become more and more important. When the IEEE 802.11i draft proposed TKIP, it was expected to improve WEP (Wired Equivalent Privacy) against both active and passive attack methods. In particular, TKIP takes a more deliberate approach to generating, managing and distributing secret keys, and it accomplishes these functions through a software upgrade alone, without changing the hardware. However, when TKIP is implemented on existing equipment, transmission performance decreases dramatically. This article presents a new scheme, the Burglarproof WEP Protocol (BWP), which encrypts the WEP key twice to remedy the security drawbacks of the original WEP while achieving better transmission performance. The proposed method focuses on modifying the encryption sets to overcome the low performance of TKIP, and provides a better transmission rate without sacrificing the expected security on current hardware configurations.
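For context, WEP seeds the RC4 stream cipher with a per-packet value IV || K. The sketch below is a minimal, hypothetical illustration of the "encrypt the WEP key twice" idea: the per-packet seed is itself run through RC4 under a secondary key before encrypting the payload. The key names, lengths and the exact composition of the double pass are assumptions for illustration, not the BWP specification; RC4 and WEP are long broken, so none of this is production cryptography.

```python
import os

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4: key scheduling (KSA) followed by keystream XOR (PRGA)."""
    s = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out = bytearray()
    i = j = 0
    for byte in data:                         # PRGA + XOR
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

BASE_KEY = bytes.fromhex("0123456789abcdef0123456789")  # 104-bit WEP key
OUTER_KEY = os.urandom(13)   # hypothetical secondary key for the 2nd pass

def bwp_encrypt(payload: bytes) -> tuple[bytes, bytes]:
    iv = os.urandom(3)                        # 24-bit per-packet IV
    seed = iv + BASE_KEY                      # classic WEP per-packet seed
    protected_seed = rc4(OUTER_KEY, seed)     # "encrypt the key twice"
    return iv, rc4(protected_seed, payload)

iv, ct = bwp_encrypt(b"hello over the air")
# The receiver, sharing BASE_KEY and OUTER_KEY, rebuilds the same seed:
pt = rc4(rc4(OUTER_KEY, iv + BASE_KEY), ct)
print(pt)
```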
Device-independent point estimation from finite data and its application to device-independent property estimation
The device-independent approach to physics is one where conclusions are drawn directly from the observed correlations between measurement outcomes. In quantum information, this approach allows one to make strong statements about the properties of the underlying systems or devices solely via the observation of Bell-inequality-violating correlations. However, since one can only perform a finite number of experimental trials, statistical fluctuations necessarily accompany any estimation of these correlations. Consequently, an important gap remains between the many theoretical tools developed for the asymptotic scenario and the experimentally obtained raw data. In particular, a physical and concurrently practical way to estimate the underlying quantum distribution has so far remained elusive. Here, we show that the natural analogs of the maximum-likelihood estimation technique and the least-square-error estimation technique in the device-independent context result in point estimates of the true distribution that are physical, unique, computationally tractable and consistent. They thus serve as sound algorithmic tools allowing one to bridge the aforementioned gap. As an application, we demonstrate how such estimates of the underlying quantum distribution can be used to provide, in certain cases, trustworthy estimates of the amount of entanglement present in the measured system. In stark contrast to existing approaches to device-independent parameter estimation, our estimation does not require prior knowledge of any Bell inequality tailored to the specific property and the specific distribution of interest.
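As a toy illustration of device-independent maximum-likelihood point estimation, the sketch below fits observed CHSH-experiment counts within a one-parameter family of quantum-realizable distributions, p(a,b|x,y) = [1 + (-1)^(a XOR b) (-1)^(xy) v/sqrt(2)]/4, i.e. a singlet measured with the standard CHSH settings at visibility v. This drastically simplifies the paper's setting, which optimizes over the full quantum set rather than a one-parameter slice; the model family, the simulated counts and the use of scipy are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

RT2 = np.sqrt(2.0)

def p_model(v):
    """p(a,b|x,y) for a singlet at visibility v with CHSH-optimal settings:
    correlator E_xy = (-1)^(x*y) * v / sqrt(2), uniform marginals."""
    p = np.empty((2, 2, 2, 2))
    for a in range(2):
        for b in range(2):
            for x in range(2):
                for y in range(2):
                    e = (-1) ** (x * y) * v / RT2
                    p[a, b, x, y] = (1 + (-1) ** (a ^ b) * e) / 4
    return p

# Simulated raw data: n trials per setting pair at a "true" visibility.
rng = np.random.default_rng(7)
true_v, n = 0.85, 2000
counts = np.empty((2, 2, 2, 2))
for x in range(2):
    for y in range(2):
        probs = p_model(true_v)[:, :, x, y].ravel()
        counts[:, :, x, y] = rng.multinomial(n, probs).reshape(2, 2)

def neg_log_likelihood(v):
    return -np.sum(counts * np.log(p_model(v)))

# By construction, the ML point estimate is a physical (quantum) point.
res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
print(f"true v = {true_v}, ML estimate v = {res.x:.4f}")
```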
Naturally restricted subsets of nonsignaling correlations: typicality and convergence
It is well known that, in a Bell experiment, the observed correlation between measurement outcomes -- as predicted by quantum theory -- can be stronger than that allowed by local causality, yet not fully constrained by the principle of relativistic causality. In practice, the characterization of the set of quantum correlations is often carried out through a converging hierarchy of outer approximations. On the other hand, some subsets of the quantum set arising from additional constraints [e.g., originating from quantum states having positive partial transposition (PPT) or being finite-dimensional maximally entangled (MES)] turn out to be also amenable to similar numerical characterizations. How, then, at a quantitative level, are all these naturally restricted subsets of nonsignaling correlations different? Here, we consider several bipartite Bell scenarios and numerically estimate their volume relative to that of the set of nonsignaling correlations. Within the number of cases investigated, we have observed that (1) for a given number of inputs (outputs), the relative volume of both the Bell-local set and the quantum set increases (decreases) rapidly with an increasing number of outputs (inputs); (2) although the so-called macroscopically local set may approximate the quantum set well in the two-input scenarios, it can be a very poor approximation when more inputs are involved; (3) the almost-quantum set is an exceptionally good approximation to the quantum set; (4) the difference between the quantum set and the set of correlations originating from MES is most significant in certain scenarios; whereas (5) the difference between the Bell-local set and the PPT set generally becomes more significant as the scenario grows. This last comparison, in particular, allows us to identify Bell scenarios where there is little hope of realizing a Bell violation by PPT states and those that deserve further exploration.
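A minimal version of such a relative-volume estimate can be sketched for the CHSH scenario restricted to the correlator slice (uniform marginals), where the nonsignaling set is the cube [-1,1]^4 of correlators, the Bell-local set is cut out by the eight CHSH inequalities, and quantum realizability of the correlators is decided by the Tsirelson-Landau-Masanes criterion. The restriction to this slice and the sample size are simplifying assumptions; the paper samples the full probability space.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
# Uniform samples from the nonsignaling set in the correlator slice:
# the 4-cube of correlators (E00, E01, E10, E11).
E = rng.uniform(-1.0, 1.0, size=(n, 4))

# The eight CHSH facets are |±E00 ± E01 ± E10 ± E11| <= 2 with an odd
# number of minus signs; taking |.| leaves four sign patterns to check.
S = np.array([[ 1,  1,  1, -1],
              [ 1,  1, -1,  1],
              [ 1, -1,  1,  1],
              [-1,  1,  1,  1]], dtype=float)

local = np.all(np.abs(E @ S.T) <= 2.0, axis=1)

# Tsirelson-Landau-Masanes: the correlators are quantum-realizable iff
# |±asin(E00) ± asin(E01) ± asin(E10) ± asin(E11)| <= pi (odd minuses).
quantum = np.all(np.abs(np.arcsin(E) @ S.T) <= np.pi, axis=1)

print(f"vol(local)/vol(NS)   ~ {local.mean():.4f}")
print(f"vol(quantum)/vol(NS) ~ {quantum.mean():.4f}")
```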