Light Concentrators for Borexino and CTF
Light concentrators for the solar neutrino experiment Borexino and the Counting Test Facility (CTF) have been developed and constructed. They increase the light yield of these detectors by factors of 2.5 and 8.8, respectively. Technical challenges such as long-term stability in various media, high reflectivity, and radiopurity have been addressed, and the concepts used to overcome these difficulties are described. Gamma-spectroscopy measurements of the concentrators show an upper limit of 12 x 10^-6 Bq/g for uranium and a value of 120 x 10^-6 Bq/g for thorium. Upper limits on other possible contaminants such as Al-26 are presented. The impact of these results on the performance of Borexino and the CTF is discussed, and it is shown that the design goals of both experiments are fulfilled. Comment: submitted to Nuclear Instruments and Methods in Physics Research
Geoneutrinos in Borexino
This paper describes the Borexino detector and the high-radiopurity studies and tests that are an integral part of the Borexino technology and its development. The application of Borexino to the detection and study of geoneutrinos is discussed. Comment: Conference: Neutrino Geophysics, Honolulu, Hawaii, December 14-16, 200
Producing geothermal energy with a deep borehole heat exchanger. Exergy optimization of different applications and preliminary design criteria
This paper proposes fast and simple design tools to evaluate the best energy application for deep borehole heat exchangers exploiting geothermal resources. Exergy efficiency has been chosen as the performance index. Five possible utilization solutions have been analyzed: district heating, adsorption cooling, ORC power production, a thermal cascade system, and a combined heat and power configuration. An extensive sensitivity analysis on source characteristics and well geometry has been performed to find the design criteria that ensure maximum exergy performance. Results show that configurations involving district heating are preferable to exclusive power production. If optimized, district-heating exergy efficiency can reach values in the range 40%-50% when the geothermal source temperature at the well bottom is below 300 °C. For higher temperatures, combined heat and power production is the preferable choice, reaching an exergy efficiency of up to 60%. Design charts are also provided to read off first-attempt values of the well operating temperatures and flow rate that maximize exergy efficiency for each utilization layout.
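The exergy-efficiency index chosen in this abstract can be sketched with the standard Carnot-factor definition of heat exergy. The temperatures, heat load, and resulting efficiency below are illustrative assumptions, not values from the paper:

```python
# Sketch of an exergy-efficiency index for a geothermal heat supply.
# The Carnot factor (1 - T0/T) gives the exergy content of a heat flow
# delivered at temperature T, relative to a dead-state temperature T0.
# All numbers here are hypothetical, for illustration only.

def heat_exergy(q_kw, t_supply_k, t0_k=288.15):
    """Exergy content (kW) of a heat flow q_kw delivered at t_supply_k (K)."""
    return q_kw * (1.0 - t0_k / t_supply_k)

def exergy_efficiency(ex_delivered_kw, ex_source_kw):
    """Ratio of exergy reaching the user to exergy drawn from the source."""
    return ex_delivered_kw / ex_source_kw

# Hypothetical case: 1 MW of heat extracted from brine at 420 K and
# delivered to a district-heating network at 353 K (80 degrees C).
ex_source = heat_exergy(1000.0, 420.0)
ex_user = heat_exergy(1000.0, 353.0)
eta_ex = exergy_efficiency(ex_user, ex_source)
```

A sensitivity analysis like the one described would sweep the source and supply temperatures (and flow rates) over ranges and pick the layout maximizing this ratio.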
Sustainability assessment and performance evaluation of a Ground Coupled Heat Pump system: coupling a model based on COMSOL Multiphysics and a MATLAB heat pump model
The present study investigates the sustainable use of a ground coupled heat pump (GCHP). To assess the performance of this type of installation, a computer model composed of two parts has been developed. The Borehole Heat Exchanger (BHE) model is developed in COMSOL Multiphysics and based on numerical methods. Part of its results is fed to the heat pump energy model, developed in MATLAB. A real case study has been used to validate the model: the Faculty of Engineering of La Sapienza University in Latina, which is undertaking a renewal project for an abandoned part of the building. After the renovation, the building will host a research center on low-enthalpy geothermal systems. The analysis has demonstrated that the modelled GCHP system can supply a significant share of the energy required by the future research center. This amount of energy can be provided while keeping the thermal balance of the surrounding subsoil almost stable, i.e. operating in a sustainable way. The variation of the ground temperature with respect to the average value stays within the limit of 5 °C, which is the cap set by international legislation.
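The heat-pump side of such a coupled model can be sketched as a COP calculation driven by the ground-loop temperature that the BHE model would supply. This is a minimal sketch assuming a fixed fraction of the Carnot COP; the COMSOL BHE part is replaced by a hypothetical constant source temperature, and all numbers are illustrative:

```python
# Sketch: heat-pump energy model of a GCHP system, assuming the COP is a
# fixed fraction of the Carnot limit. In the coupled model described above,
# t_evap_k would come from the BHE (ground-side) simulation at each time
# step; here it is simply a hypothetical constant.

def carnot_cop_heating(t_cond_k, t_evap_k):
    """Ideal (Carnot) heating COP between evaporator and condenser."""
    return t_cond_k / (t_cond_k - t_evap_k)

def heat_pump_power(q_heating_kw, t_cond_k, t_evap_k, carnot_fraction=0.45):
    """Electric power (kW) needed to deliver q_heating_kw of heat."""
    cop = carnot_fraction * carnot_cop_heating(t_cond_k, t_evap_k)
    return q_heating_kw / cop

# Illustrative values: ground loop at 8 degrees C (281.15 K), condenser
# at 45 degrees C (318.15 K), 50 kW heating demand.
p_electric = heat_pump_power(50.0, 318.15, 281.15)
```

In the coupled scheme, the heat extracted from the ground (demand minus compressor work) would be fed back to the BHE model as its boundary load, closing the loop between the two parts.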
Placing regenerators in optical networks to satisfy multiple sets of requests
The placement of regenerators in optical networks has become an active area of research in recent years. Given a set of lightpaths in a network G and a positive integer d, regenerators must be placed in such a way that in any lightpath there are no more than d hops without meeting a regenerator. While most of the research has focused on heuristics and simulations, the first theoretical study of the problem was recently provided in [10], where the cost function considered is the number of locations in the network hosting regenerators. Nevertheless, in many situations a more accurate estimate of the real cost of the network is the total number of regenerators placed at the nodes, and this is the cost function we consider. Furthermore, in our model we assume that we are given a finite set of p possible traffic patterns (each given by a set of lightpaths), and our objective is to place the minimum number of regenerators at the nodes so that each of the traffic patterns is satisfied. While this problem can be easily solved when d = 1 or p = 1, we prove that for any fixed d, p ≥ 2 it does not admit a PTAS, even if G has maximum degree at most 3 and the lightpaths have length O(d). We complement this hardness result with a constant-factor approximation algorithm with ratio ln(d·p). We then study the case where G is a path, proving that the problem is NP-hard for any d, p ≥ 2, even if there are two edges of the path such that any lightpath uses at least one of them. Interestingly, we show that the problem is polynomial-time solvable in paths when all the lightpaths share the first edge of the path, as well as when the number of lightpaths sharing an edge is bounded. Finally, we generalize our model in two natural directions, which allows us to capture the model of [10] as a particular case, and we settle some questions that were left open in [10].
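The easy p = 1 case mentioned above can be sketched directly: with a single traffic pattern and one regenerator serving one lightpath, each lightpath can be handled independently by placing a regenerator every d hops. This is an illustration of the model, not code from the paper:

```python
# Sketch of the p = 1 case: each lightpath (a node sequence) independently
# receives a regenerator every d hops, so a lightpath of l hops needs
# ceil(l / d) - 1 regenerators. Node names are hypothetical.

def regenerators_for_lightpath(path_nodes, d):
    """Internal nodes of the lightpath where a regenerator is placed so
    that no stretch of more than d consecutive hops is unregenerated."""
    hops = len(path_nodes) - 1
    return [path_nodes[i] for i in range(d, hops, d)]

def total_regenerators(lightpaths, d):
    """Total regenerator count over all lightpaths of one traffic pattern."""
    return sum(len(regenerators_for_lightpath(lp, d)) for lp in lightpaths)

# A lightpath over 7 nodes (6 hops) with d = 2 gets regenerators after
# hops 2 and 4, leaving at most 2 hops between consecutive regenerators.
lp = ["v0", "v1", "v2", "v3", "v4", "v5", "v6"]
placed = regenerators_for_lightpath(lp, 2)
```

With p ≥ 2 patterns the placements must be shared across patterns, which is exactly where the problem becomes hard; the ln(d·p) ratio quoted above is what a set-cover-style greedy analysis yields.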
A new method of measuring two-phase mass flow rates in a venturi
The metering of the individual flow rates of gas and liquid in a multi-component flow is of great importance for the oil industry. A convenient, non-intrusive way of measuring these is to record and analyze pressure drops over parts of a venturi. Commercially available venturi-based measuring equipment is costly since it additionally measures the void fraction. This paper presents a method to deduce the individual mass flow rates of air and water from pressure-drop ratios and from fluctuations in the pressure drops. Two pressure drops are used rather than one, and not only their time-averaged values are utilized. As a proof of principle, prediction results for a horizontal and a vertical venturi are compared with measurements for void fractions up to 80%. Residual errors are quantified, and the effect of varying the equipment and the slip correlation is shown to be negligible.
At relatively low cost, a good predictive capacity for the individual mass flow rates is obtained.
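The classical single-phase venturi relation underlies any such pressure-drop method: the measured drop across the contraction fixes the mass flow rate through the throat. A minimal sketch, with illustrative geometry and fluid values (the paper's two-phase method builds on ratios and fluctuations of such drops, which is not reproduced here):

```python
# Sketch: standard single-phase venturi equation relating the pressure
# drop across the contraction to the mass flow rate. Discharge
# coefficient, geometry, and fluid properties below are illustrative.

from math import pi, sqrt

def venturi_mass_flow(dp_pa, rho, d_pipe_m, d_throat_m, cd=0.98):
    """Mass flow rate (kg/s) from pressure drop dp_pa (Pa), fluid density
    rho (kg/m^3), pipe and throat diameters (m), discharge coefficient cd."""
    beta = d_throat_m / d_pipe_m          # diameter ratio
    a_throat = pi * d_throat_m ** 2 / 4.0  # throat cross-section (m^2)
    return cd * a_throat * sqrt(2.0 * rho * dp_pa / (1.0 - beta ** 4))

# Water (1000 kg/m^3) in a 100 mm pipe with a 50 mm throat, dp = 20 kPa.
m_dot = venturi_mass_flow(20e3, 1000.0, 0.10, 0.05)
```

In a two-phase flow the effective density is unknown, which is why a single drop is insufficient and the method above resorts to two drops plus their fluctuations.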
Tropical Dominating Sets in Vertex-Coloured Graphs
Given a vertex-coloured graph, a dominating set is said to be tropical if every colour of the graph appears at least once in the set. Here, we study minimum tropical dominating sets from structural and algorithmic points of view. First, we prove that the tropical dominating set problem is NP-complete even when restricted to a simple path. Then, we establish upper bounds related to various parameters of the graph, such as minimum degree and number of edges. We also give upper bounds for random graphs. Finally, we give approximability and inapproximability results for general and restricted classes of graphs, and establish an FPT algorithm for interval graphs. Comment: 19 pages, 4 figures
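The definition above is easy to check: a set is tropical dominating iff it dominates every vertex and hits every colour. A minimal verification sketch on a hypothetical coloured path:

```python
# Sketch: verifying the tropical dominating set property. The graph
# (adjacency dict), colouring, and candidate sets are illustrative.

def is_tropical_dominating(adj, colour, s):
    """True iff s dominates every vertex of the graph given by adj and
    contains at least one vertex of every colour used in the graph."""
    dominated = all(v in s or any(u in s for u in adj[v]) for v in adj)
    tropical = {colour[v] for v in s} == set(colour.values())
    return dominated and tropical

# Path v0 - v1 - v2 - v3 with colours r, g, r, b.
adj = {"v0": ["v1"], "v1": ["v0", "v2"], "v2": ["v1", "v3"], "v3": ["v2"]}
colour = {"v0": "r", "v1": "g", "v2": "r", "v3": "b"}

# {v1, v3} dominates the path but misses colour r, so it is not tropical;
# {v0, v1, v3} is both dominating and tropical.
```

The NP-completeness on paths quoted above concerns finding a *minimum* such set, not this verification step, which is polynomial.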
The Nylon Scintillator Containment Vessels for the Borexino Solar Neutrino Experiment
Borexino is a solar neutrino experiment designed to observe the 0.86 MeV Be-7
neutrinos emitted in the pp cycle of the sun. Neutrinos will be detected by
their elastic scattering on electrons in 100 tons of liquid scintillator. The
neutrino event rate in the scintillator is expected to be low (~0.35 events per
day per ton), and the signals will be at energies below 1.5 MeV, where
background from natural radioactivity is prominent. Scintillation light
produced by the recoil electrons is observed by an array of 2240
photomultiplier tubes. Because of the intrinsic radioactive contaminants in
these PMTs, the liquid scintillator is shielded from them by a thick barrier of
buffer fluid. A spherical vessel made of thin nylon film contains the
scintillator, separating it from the surrounding buffer. The buffer region
itself is divided into two concentric shells by a second nylon vessel in order
to prevent inward diffusion of radon atoms. The radioactive background
requirements for Borexino are challenging to meet, especially for the
scintillator and these nylon vessels. Besides meeting requirements for low
radioactivity, the nylon vessels must also satisfy requirements for mechanical,
optical, and chemical properties. The present paper describes the research and
development, construction, and installation of the nylon vessels for the
Borexino experiment.
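The rate quoted in this abstract translates into a small absolute event count, which is why the radiopurity requirements are so strict. A back-of-envelope check using only the numbers stated above:

```python
# Back-of-envelope check of the quoted signal rate: ~0.35 events per day
# per ton of scintillator, in a 100-ton target.

rate_per_day_per_ton = 0.35
target_mass_tons = 100

events_per_day = rate_per_day_per_ton * target_mass_tons  # about 35 per day
events_per_year = events_per_day * 365
```

A few tens of sub-1.5 MeV events per day must therefore be distinguished from natural-radioactivity background, motivating the nylon vessels and buffer shielding described in the abstract.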
Inapproximability of maximal strip recovery
In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks, that is, segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given d genomic maps as sequences of gene markers, the objective of MSR-d is to find d subsequences, one subsequence of each genomic map, such that the total length of the syntenic blocks in these subsequences is maximized. For any constant d ≥ 2, a polynomial-time 2d-approximation for MSR-d was previously known. In this paper, we show that for any d ≥ 2, MSR-d is APX-hard, even for the most basic version of the problem, in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provide the first explicit lower bounds on approximating MSR-d for all d ≥ 2. In particular, we show that MSR-d is NP-hard to approximate within an explicit constant factor. From the other direction, we show that the previous 2d-approximation for MSR-d can be optimized into a polynomial-time algorithm even if d is not a constant but is part of the input. We then extend our inapproximability results to several related problems, including CMSR-d, δ-gap-MSR-d, and δ-gap-CMSR-d. Comment: A preliminary version of this paper appeared in two parts in the Proceedings of the 20th International Symposium on Algorithms and Computation (ISAAC 2009) and the Proceedings of the 4th International Frontiers of Algorithmics Workshop (FAW 2010).
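The objective being maximized can be sketched for d = 2 in the basic setting described above (all markers distinct, positive orientation): once the two subsequences are chosen, the score is the total length of the maximal strips (length ≥ 2) they share contiguously and in the same order. A minimal scoring sketch under those assumptions; the hard part, choosing the subsequences, is not attempted here:

```python
# Sketch of the MSR-2 objective: total length of maximal common strips
# (contiguous runs of length >= 2, same order) shared by the two chosen
# subsequences s1 and s2. Assumes distinct markers, positive orientation.

def strip_score(s1, s2):
    """Sum of lengths of maximal runs in s1 that also appear contiguously,
    in the same order, in s2; runs shorter than 2 do not count."""
    pos2 = {g: i for i, g in enumerate(s2)}  # marker -> position in s2
    score, run = 0, 1
    for a, b in zip(s1, s1[1:]):
        if a in pos2 and b in pos2 and pos2[b] == pos2[a] + 1:
            run += 1                # a, b adjacent in s2 too: extend strip
        else:
            if run >= 2:
                score += run        # close a maximal strip
            run = 1
    if run >= 2:
        score += run
    return score

# [1, 2, 3] is a common strip of the two maps below; 4 and 5 appear in
# different orders, so they contribute nothing.
example = strip_score([1, 2, 3, 5, 4], [1, 2, 3, 4, 5])
```

MSR-d would maximize this score (generalized to d maps) over all choices of subsequences, which is what the APX-hardness results above concern.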