Expander Construction in VNC^1
We give a combinatorial analysis (using edge expansion) of a variant of the iterative expander construction due to Reingold, Vadhan, and Wigderson (2002), and show that this analysis can be formalized in the bounded arithmetic system VNC^1 (corresponding to "NC^1 reasoning"). As a corollary, we prove the assumption made by Jeřábek (2011) that a construction of certain bipartite expander graphs can be formalized in VNC^1. This in turn implies that every proof in Gentzen's sequent calculus LK of a monotone sequent can be simulated in the monotone version of LK (MLK) with only polynomial blowup in proof size, strengthening the quasipolynomial simulation result of Atserias, Galesi, and Pudlák (2002).
A Micro-ORC Energy System: Preliminary Performance and Test Bench Development
Abstract A large market potential for small electricity and heat generators can be identified in the domestic sector. Among the micro-scale power generation technologies under development, the ORC (Organic Rankine Cycle) concept is a promising solution, already proven in the MW range of power. There is still scope for smaller units for domestic users with low-temperature thermal demand. A test bench for a micro-CHP unit, currently run with a prototype prime mover, is under development at the University of Bologna. In particular, the system under study in the test facility is a micro-ORC system, rated for up to 3 kW. The ORC input heat is provided by an external source, which can be an external combustion system (a 46 kW biomass boiler will be connected to the thermal cycle) or an electric heater. The heat source delivers hot water to the bottoming ORC, currently operated with R134a as working fluid, which evolves in a recuperated cycle with a 3-piston reciprocating expander, producing mechanical/electric power. The residual low-value heat is discharged to the environment through a water-cooled condenser. Hot and cold water circuits have been realized in the lab to test the ORC performance. The micro-ORC internal layout and the external hot and cold water lines have been instrumented, and acquisition and control software has been implemented in LabVIEW. A preliminary test campaign has been performed on the micro-ORC system, obtaining information on the actual thermodynamic cycle and the real performance under different operating conditions.
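As a back-of-the-envelope companion to the performance testing described above, the sketch below computes the net first-law efficiency of an ORC unit from expander work, pump work, and heat input. The numbers are illustrative assumptions (only the 3 kW expander rating appears in the text); they are not measurements from the Bologna test bench.

```python
# A minimal first-law efficiency sketch for an ORC unit.  The function is
# generic; the operating point below is hypothetical, chosen only to
# illustrate the calculation.

def orc_thermal_efficiency(w_expander_kw, w_pump_kw, q_in_kw):
    """Net first-law efficiency: (expander work - pump work) / heat input."""
    return (w_expander_kw - w_pump_kw) / q_in_kw

# Hypothetical operating point:
w_exp = 3.0   # expander output, kW (rated value from the text)
w_pmp = 0.3   # assumed pump consumption, kW
q_in = 40.0   # assumed heat input from the hot-water loop, kW

eta = orc_thermal_efficiency(w_exp, w_pmp, q_in)
print(f"Net thermal efficiency: {eta:.1%}")
```

Low-temperature micro-ORC units typically reach only single-digit net efficiencies, which is one motivation for the recuperated cycle layout described above.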
Waste Heat Recovery Systems: Numerical and Experimental Analysis of Organic Rankine Cycle Solutions
This thesis aims to present the ORC technology, its advantages and related problems. In particular, it provides an analysis of ORC waste heat recovery systems in different and innovative scenarios, focusing on cases from the largest to the smallest scale. Both industrial and residential ORC applications are considered. In both applications, the installation of a subcritical and recuperated ORC system is examined. Moreover, heat recovery is considered in the absence of an intermediate heat transfer circuit. This solution improves the recovery efficiency but requires additional safety precautions. Possible integrations of ORC systems with renewable sources are also presented and investigated to improve the exploitation of non-programmable sources. In particular, the offshore oil and gas sector has been selected as a promising industrial large-scale ORC application. Starting from the design of ORC systems coupled with Gas Turbines (GTs) as topping systems, the dynamic behavior of the innovative GT+ORC combined cycles has been analyzed by developing a dynamic model of all the considered components. The dynamic behavior is caused by integration with a wind farm. The electric and thermal aspects have been examined to identify the advantages related to the waste heat recovery system installation. Moreover, an experimental test rig has been realized to test the performance of a micro-scale ORC prototype. The prototype recovers heat from a low-temperature water stream, available for instance in industrial or residential waste heat. In the test bench, various sensors have been installed and an acquisition system has been developed in the LabVIEW environment to completely analyze the ORC behavior. Data collected in real time and corresponding to the system's dynamic behavior have been used to evaluate the system performance based on selected indexes.
Moreover, various operational steady-state conditions are identified and operation maps are realized for a complete characterization of the system and to detect the optimal operating conditions.
Proof complexity of positive branching programs
We investigate the proof complexity of systems based on positive branching
programs, i.e. non-deterministic branching programs (NBPs) where, for any
0-transition between two nodes, there is also a 1-transition. Positive NBPs
compute monotone Boolean functions, just like negation-free circuits or
formulas, but constitute a positive version of (non-uniform) NL, rather than P
or NC^1, respectively.
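To make the positivity condition concrete, here is a small Python sketch. The encoding is our own toy representation, not the eLNDT+ syntax of the paper: an NBP is positive when every 0-transition has a parallel 1-transition, and a brute-force check confirms that such a program computes a monotone function.

```python
# A toy nondeterministic branching program (NBP) evaluator, illustrating
# the positivity condition: every 0-transition has a parallel
# 1-transition, which forces the computed function to be monotone.

from itertools import product

# Each node: (queried variable, successors on 0, successors on 1).
# Positivity: the 0-successors are a subset of the 1-successors.
# This example NBP computes x0 AND (x1 OR x2) on inputs (x0, x1, x2).
NBP = {
    "s": (0, set(), {"m"}),           # on x0=0 there is no outgoing edge
    "m": (1, {"t2"}, {"acc", "t2"}),  # x1=1 may go straight to accept
    "t2": (2, set(), {"acc"}),        # accept only if x2=1
}
START, ACCEPT = "s", "acc"

def accepts(nbp, x):
    """Nondeterministic acceptance: is ACCEPT reachable from START?"""
    frontier, seen = {START}, set()
    while frontier:
        node = frontier.pop()
        if node == ACCEPT:
            return True
        seen.add(node)
        if node not in nbp:
            continue
        var, zero, one = nbp[node]
        frontier |= (one if x[var] else zero) - seen
    return False

def is_positive(nbp):
    """Syntactic check: every 0-edge has a parallel 1-edge."""
    return all(zero <= one for _, zero, one in nbp.values())

def is_monotone(nbp, n):
    """Brute-force semantic check over all pairs x <= y of n-bit inputs."""
    pts = list(product([0, 1], repeat=n))
    return all(accepts(nbp, y) or not accepts(nbp, x)
               for x in pts for y in pts
               if all(a <= b for a, b in zip(x, y)))

assert is_positive(NBP) and is_monotone(NBP, 3)
```

Flipping any input bit from 0 to 1 can only add transitions under this condition, so every accepting path survives, which is exactly monotonicity.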
The proof complexity of NBPs was investigated in previous work by Buss, Das
and Knop, using extension variables to represent the dag-structure, over a
language of (non-deterministic) decision trees, yielding the system eLNDT. Our
system eLNDT+ is obtained by restricting their system to a positive syntax,
similarly to how the 'monotone sequent calculus' MLK is obtained from the usual
sequent calculus LK by restricting to negation-free formulas.
Our main result is that eLNDT+ polynomially simulates eLNDT over positive
sequents. Our proof method is inspired by a similar result for MLK by Atserias, Galesi and Pudlák, that was recently improved to a bona fide polynomial simulation via works of Jeřábek and Buss, Kabanets, Kolokolova and Koucký. Along the way we formalise several properties of counting functions within eLNDT+ by polynomial-size proofs and, as a case study, give explicit polynomial-size proofs of the propositional pigeonhole principle. (31 pages, 5 figures)
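For readers unfamiliar with the propositional pigeonhole principle, the following sketch (an illustrative CNF encoding of our own, unrelated to the eLNDT+ proofs themselves) writes down the clauses of its negation for n+1 pigeons and n holes and confirms by brute force that they are unsatisfiable for a small n.

```python
# The propositional pigeonhole principle as a CNF.  We encode the
# NEGATION of PHP^{n+1}_n with variables (i, j) = "pigeon i sits in
# hole j" and check by exhaustive search that it is unsatisfiable,
# i.e. that PHP holds.

from itertools import product

def php_clauses(n):
    """Clauses of ~PHP: n+1 pigeons, n holes."""
    clauses = []
    # Every pigeon sits in some hole.
    for i in range(n + 1):
        clauses.append([((i, j), True) for j in range(n)])
    # No hole holds two pigeons.
    for j in range(n):
        for i in range(n + 1):
            for k in range(i + 1, n + 1):
                clauses.append([((i, j), False), ((k, j), False)])
    return clauses

def satisfiable(n):
    """Exhaustive satisfiability check (only feasible for tiny n)."""
    vars_ = [(i, j) for i in range(n + 1) for j in range(n)]
    cls = php_clauses(n)
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(any(assign[v] == sign for v, sign in c) for c in cls):
            return True
    return False

assert not satisfiable(2)  # 3 pigeons, 2 holes: no valid assignment
```

The interest in proof complexity is not whether PHP is true (it obviously is) but how large a proof of it must be in a given system; here the paper exhibits polynomial-size eLNDT+ proofs.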
Hardness magnification near state-of-the-art lower bounds
This work continues the development of hardness magnification. The latter proposes a new strategy for showing strong complexity lower bounds by reducing them to a refined analysis of weaker models, where combinatorial techniques might be successful. We consider gap versions of the meta-computational problems MKtP and MCSP, where one needs to distinguish instances (strings or truth-tables) of complexity ≤ s_1(N) from instances of complexity ≥ s_2(N), and N = 2^n denotes the input length. In MCSP, complexity is measured by circuit size, while in MKtP one considers Levin's notion of time-bounded Kolmogorov complexity. (In our results, the parameters s_1(N) and s_2(N) are asymptotically quite close, and the problems almost coincide with their standard formulations without a gap.) We establish that for Gap-MKtP[s_1, s_2] and Gap-MCSP[s_1, s_2], a marginal improvement over the state-of-the-art in unconditional lower bounds in a variety of computational models would imply explicit super-polynomial lower bounds.
Theorem. There exists a universal constant c ≥ 1 for which the following hold. If there exists ε > 0 such that for every small enough β > 0:
(1) if Gap-MCSP[2^{βn}/cn, 2^{βn}] ∉ Circuit[N^{1+ε}], then NP ⊄ Circuit[poly];
(2) if Gap-MKtP[2^{βn}, 2^{βn} + cn] ∉ TC^0[N^{1+ε}], then EXP ⊄ TC^0[poly];
(3) if Gap-MKtP[2^{βn}, 2^{βn} + cn] ∉ B_2-Formula[N^{2+ε}], then EXP ⊄ Formula[poly];
(4) if Gap-MKtP[2^{βn}, 2^{βn} + cn] ∉ U_2-Formula[N^{3+ε}], then EXP ⊄ Formula[poly];
(5) if Gap-MKtP[2^{βn}, 2^{βn} + cn] ∉ BP[N^{2+ε}], then EXP ⊄ BP[poly];
(6) if Gap-MKtP[2^{βn}, 2^{βn} + cn] ∉ (AC^0[6])[N^{1+ε}], then EXP ⊄ AC^0[6].
These results are complemented by lower bounds for Gap-MCSP and Gap-MKtP against different models. For instance, the lower bound assumed in (1) holds for U_2-formulas of near-quadratic size, and lower bounds similar to (3)-(5) hold for various regimes of parameters. We also identify a natural computational model under which the hardness magnification threshold for Gap-MKtP lies below existing lower bounds: U_2-formulas that can compute parity functions at the leaves (instead of just literals). As a consequence, if one managed to adapt the existing lower bound techniques against such formulas to work with Gap-MKtP, then EXP ⊄ NC^1 would follow via hardness magnification.
Unprovability of strong complexity lower bounds in bounded arithmetic
While there has been progress in establishing the unprovability of complexity statements in lower fragments of bounded arithmetic, understanding the limits of Jeřábek's theory APC_1 [Jeř07a] and of higher levels of Buss's hierarchy S^i_2 [Bus86] has been a more elusive task. Even in the more restricted setting of Cook's theory PV [Coo75], known results often rely on a less natural formalization that encodes a complexity statement using a collection of sentences instead of a single sentence. This is done to reduce the quantifier complexity of the resulting sentences so that standard witnessing results can be invoked.
In this work, we establish unprovability results for stronger theories and for sentences of higher quantifier complexity. In particular, we unconditionally show that APC_1 cannot prove strong complexity lower bounds separating the third level of the polynomial hierarchy. In more detail, we consider non-uniform average-case separations, and establish that APC_1 cannot prove a sentence stating that
∀n ≥ n_0 ∃f_n ∈ Π_3-SIZE[n^d] that is (1/n)-far from every Σ_3-SIZE[2^{n^δ}] circuit.
This is a consequence of a much more general result showing that, for every i ≥ 1, strong separations for
Π_i-SIZE[poly(n)] versus Σ_i-SIZE[2^{n^{Ω(1)}}] cannot be proved in the theory T^i_PV consisting of all true ∀Σ^b_{i−1}-sentences in the language of Cook's theory PV.
Our argument employs a convenient game-theoretic witnessing result that can be applied to sentences of arbitrary quantifier complexity. We combine it with extensions of a technique introduced by Krajíček [Kra11] that was recently employed by Pich and Santhanam [PS21] to establish the unprovability of lower bounds in PV (i.e., the case i = 1 above, but under a weaker formalization) and in a fragment of APC_1.
Application of CFD in designing a drug delivery mixing chamber: an experimental and computational study
The purpose of this novel research was to understand the flow behaviour and improve the efficiency of the Volumatic™ spacer, using a combination of engineering tools such as CFD, Laser Doppler Anemometry (LDA) and flow visualization techniques. The lack of information on the Volumatic™ spacer meant that an initial understanding had to be gained of the flow behaviour within the spacer. This was initially performed by injecting air carrying a tracer concentration to represent the drug portion of the medicine. The efficiency (volume of drug collected at the mouthpiece) was found to be about 6.5%, which is of the same order as the figure quoted in the literature (Chuffart). A series of parametric studies were carried out to discover the effects of various parameters on the overall efficiency of the spacer. In the initial part, a series of jet profiles were studied at the inlet; these were straight, cone-shaped and spray jet profiles. It was concluded that the jet with a cone angle of 5° increased the efficiency of the spacer from 6.5% to 9.4%.
The next stage of the parametric study involved reducing the length of the spacer from 0.24 m to 0.12 m and varying the inlet velocity from 40 m/s down to 10 m/s. The findings concluded that the efficiency of the spacer could be increased to 23%, using a velocity of 40 m/s at the inlet. The length was then reduced from 0.12 m to 0.06 m and a similar study was carried out. This time it was concluded that reducing the velocity to 30 m/s increased the efficiency to 30%. The other interesting feature to come out of this study was that the whole of the spacer volume was used, hence the drug was mixing better than in the original Volumatic™ spacer, where about one third of the spacer volume remained completely empty of the drug.
The studies carried out so far had shown that the additional increase in drug delivery efficiency in the case of the Volumatic™ spacer was not substantial enough to justify the considerable manufacturing costs which would have to be met if the Volumatic™ spacer were to be remanufactured in its improved design. The way forward seemed to be the development of a new design. The new design had to be small enough that it could be carried around easily by patients who do not use the current spacer due to its size. The new design also had to be economical to manufacture, simple to use and easy to clean. The reasons mentioned above, and the current trend towards tube-type spacer designs, implied that the logical approach would be to base the design on a similar geometry. A tube-type spacer was modelled with two holes drilled directly opposite each other, a distance of 10 mm away from the pMDI's nozzle. The holes introduced a pressure difference, hence directing the drug towards the patient's airway system. The new spacer had a length of 0.1 m. The computational results showed that the efficiency had increased to 71% for this particular design.
The CFD results obtained from the initial study on the Volumatic™ spacer were validated using LDA measurements. The velocities along four different locations were measured. At each location the velocities were measured at increments of 5 mm for a distance of 50 mm inside the spacer. The LDA results showed very good agreement with those obtained from CFD. The volume of data sampled experimentally at each point was 25,000 data points. This large volume of data minimised random sources of error, and as the CFD simulations were carried out some six months prior to the LDA results, it was safe to assume that the drug had been modelled accurately. The same experimental set-up was used to measure velocity values for the tube spacer, but in this instance velocity measurements were made only along two planes, due to limited time and availability of the drug source.
Finally, laser light sheeting was used to illuminate the Volumatic™ spacer, and a high-speed KODAK camera capable of capturing 4,500 frames per second was used. The visualization study proved that there was a portion of the Volumatic™ spacer which at times was free of any drug.
The originality of the work is described in the following paragraph. Prior to this research there was no comprehensive study available combining engineering tools such as Computational Fluid Dynamics (CFD), Laser Doppler Anemometry (LDA) and High Speed Photography to study the flow pattern within the current Volumatic™ spacer design and hence analyse its efficiency. The studies carried out were of the impaction type. The results of this study have confirmed that there are several parameters contributing to the efficiency of the Volumatic™ spacer. This knowledge was not previously available in the open literature.
The initial part of this study has provided a scientific approach to analysing the flow patterns, hence obtaining an accurate value for the efficiency of the current device. This part of the study alone is a valuable tool for industry, because it has provided industry with data which had not been previously available. The results from this study have indicated that the Aero Chamber-type spacer design has an efficiency of 71%, compared to the current 10% efficiency of the Volumatic™ spacer. The efficiencies discussed are measured in terms of the percentage of the drug delivered to the mouthpiece. The benefit to industry would be a saving, at a conservative estimate, of millions of pounds annually. This can be calculated from industry's own figures that 1 out of every 5 newborn babies suffers from asthma to varying degrees. The drug is the most expensive component of the device, hence a more efficient device would use a smaller quantity of the drug.
Finally, the combination of techniques used, and the number of data samples taken (for example, in the case of the LDA measurements some 25,000 samples were taken and averaged at each point), has ensured a high degree of accuracy and confidence in the results presented.
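As a statistical aside, the error-suppression effect of averaging many samples per point can be illustrated with synthetic numbers. Only the 25,000-sample figure comes from the text; the velocity and noise level below are assumptions for illustration. The standard error of the mean falls as 1/sqrt(N).

```python
# Why averaging ~25,000 samples per point suppresses random error:
# the standard error of the mean scales as 1/sqrt(N).  The velocity
# and noise figures here are synthetic, not the actual LDA data.

import random
import statistics

random.seed(0)
true_velocity, noise_sd, n_samples = 12.0, 1.5, 25_000  # m/s, m/s, count

samples = [random.gauss(true_velocity, noise_sd) for _ in range(n_samples)]
mean = statistics.fmean(samples)
sem = statistics.stdev(samples) / n_samples ** 0.5

print(f"mean = {mean:.3f} m/s, standard error ~ {sem:.4f} m/s")
# With 1.5 m/s single-shot noise, the expected standard error is
# 1.5 / sqrt(25000) ~ 0.0095 m/s, roughly a 160x reduction in scatter.
```

Averaging of this kind addresses only random scatter; systematic errors (alignment, seeding bias, calibration) are unaffected, which is why the CFD comparison remains an important independent check.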