    A model of radiating black hole in noncommutative geometry

    The phenomenology of a radiating Schwarzschild black hole is analyzed in a noncommutative spacetime. It is shown that noncommutativity does not depend on the intensity of the curvature. Thus we legitimately introduce noncommutativity in the weak field limit by a coordinate coherent state approach. The new results of interest are the following: i) the existence of a minimal non-zero mass to which the black hole can shrink; ii) a finite maximum temperature that the black hole can reach before cooling down to absolute zero; iii) the absence of any curvature singularity. The proposed scenario offers a possible solution to the conventional difficulties encountered in describing the terminal phase of black hole evaporation. Comment: 10 pages, 4 figures
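
    For orientation, a minimal sketch of the kind of line element employed in this approach, assuming the standard coordinate coherent state (Gaussian-smeared) source used in the noncommutative-inspired Schwarzschild literature; the normalization is an assumption and is not taken from the paper itself:

        \rho_\theta(r) = \frac{M}{(4\pi\theta)^{3/2}}\, e^{-r^2/4\theta},
        \qquad
        -g_{tt}(r) = 1 - \frac{4M}{r\sqrt{\pi}}\,\gamma\!\left(\tfrac{3}{2},\tfrac{r^2}{4\theta}\right),

    where \gamma is the lower incomplete gamma function; for r^2 \gg 4\theta one recovers the ordinary Schwarzschild behavior 1 - 2M/r, while the smearing keeps the geometry regular at r = 0.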

    Spinning Loop Black Holes

    In this paper we construct four Kerr-like spacetimes, starting from the loop black hole (LBH) Schwarzschild solutions and applying the Newman-Janis transformation. In previous papers the Schwarzschild LBH was obtained by replacing the Ashtekar connection with holonomies on a particular graph in a minisuperspace approximation which describes the black hole interior. Starting from this solution, we apply the Newman-Janis transformation and specialize to two different and natural complexifications, inspired by the complexifications of the Schwarzschild and Reissner-Nordstrom metrics. We show explicitly that the space-times obtained in this way are singularity free and thus contain no naked singularities. We also show that the transformation moves the causality-violating regions of the Kerr metric, if any, far from r=0. We study the space-time structure, with particular attention to the shape of the horizons. We conclude the paper with a discussion of a regular Reissner-Nordstrom black hole derived from the Schwarzschild LBH, to which the Newman-Janis transformation is then applied again. Comment: 18 pages, 18 figures
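
    As a pointer to the procedure mentioned above, a schematic sketch of the Newman-Janis steps in the Schwarzschild-type case (a textbook form, quoted here as background and not from the paper): write the seed metric in advanced null coordinates (u, r, \theta, \phi), build a null tetrad, and complexify

        u \to u - i a\cos\theta, \qquad r \to r + i a\cos\theta, \qquad
        \frac{2M}{r} \to M\left(\frac{1}{\tilde r} + \frac{1}{\bar{\tilde r}}\right), \qquad
        r^2 \to \tilde r\,\bar{\tilde r} = r^2 + a^2\cos^2\theta,

    before returning to Boyer-Lindquist-type coordinates; the paper's two "natural complexifications" follow the Schwarzschild-type and Reissner-Nordstrom-type rules respectively.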

    Minimal Scales from an Extended Hilbert Space

    We consider an extension of the conventional quantum Heisenberg algebra, assuming that coordinates as well as momenta fulfil nontrivial commutation relations. As a consequence, a minimal length and a minimal mass scale are implemented. Our commutators do not depend on positions and momenta, and we provide an extension of the coordinate coherent state approach to Noncommutative Geometry. As a toy model, we explore the corresponding quantum field theory in a (2+1)-dimensional spacetime. We then investigate the more realistic case of a (3+1)-dimensional spacetime, foliated into noncommutative planes. As a result, we obtain propagators which are finite in both the ultraviolet and the infrared regime. Comment: 16 pages, version which matches that published on CQ
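
    As an illustration only (the precise algebra is not spelled out in the abstract, so the following is an assumed schematic form): constant, state-independent commutators for both coordinates and momenta can be written as

        [\hat{x}^\mu, \hat{x}^\nu] = i\,\theta^{\mu\nu}, \qquad
        [\hat{p}_\mu, \hat{p}_\nu] = i\,\tilde{\theta}_{\mu\nu}, \qquad
        [\hat{x}^\mu, \hat{p}_\nu] = i\hbar\,\delta^{\mu}_{\ \nu},

    with constant antisymmetric matrices \theta^{\mu\nu} and \tilde{\theta}_{\mu\nu} setting a minimal length of order \sqrt{\theta} and a minimal momentum (hence mass) scale of order \sqrt{\tilde{\theta}}; whether the mixed commutator receives corrections is a detail left to the paper itself.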

    The Hawking-Page crossover in noncommutative anti-deSitter space

    We study the problem of a Schwarzschild-anti-deSitter black hole in a noncommutative geometry framework, thought to be an effective description of quantum-gravitational spacetime. As a first step we derive the noncommutative geometry inspired Schwarzschild-anti-deSitter solution. After studying the horizon structure, we find that the curvature singularity is smeared out by the noncommutative fluctuations. On the thermodynamics side, we show that the black hole temperature, instead of diverging at small scales, admits a maximum value. This fact implies an extension of the Hawking-Page transition into a van der Waals-like phase diagram, with a critical point at a critical size of the cosmological constant in Planck units and a smooth crossover thereafter. We speculate that, in the gauge-string dictionary, this corresponds to the confinement "critical point" in the number of colors at a finite number of flavors, a highly non-trivial parameter that can be determined through lattice simulations. Comment: 24 pages, 6 figures, 1 table, version matching that published on JHE
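
    For orientation, in the commutative limit the Hawking temperature of a four-dimensional Schwarzschild-anti-deSitter black hole with curvature radius L takes the standard form (quoted here as background, not from the paper):

        T_H = \frac{1}{4\pi}\left(\frac{1}{r_+} + \frac{3 r_+}{L^2}\right),

    which diverges as the horizon radius r_+ \to 0; the noncommutative smearing described in the abstract replaces this small-scale divergence with a finite maximum temperature, which is what opens the way to the van der Waals-like phase structure.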

    Newton's law in an effective non commutative space-time

    The Newtonian potential is computed exactly in a theory that is fundamentally non commutative in the space-time coordinates. When the dispersion of the distribution of the source is minimal (i.e. it is equal to the non commutative parameter \theta), the behavior at large and small distances is analyzed. Comment: 5 pages
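
    A minimal sketch of the type of result involved, assuming a Gaussian matter distribution of width set by \theta (the conventions are an assumption, not taken from the paper): the exact Newtonian potential of such a smeared point source is

        \phi(r) = -\frac{G M}{r}\,\operatorname{erf}\!\left(\frac{r}{2\sqrt{\theta}}\right),

    which reduces to the usual -GM/r for r \gg \sqrt{\theta} and remains finite, \phi(0) = -GM/\sqrt{\pi\theta}, at the origin.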

    Tunneling of massive and charged particles from noncommutative Reissner-Nordstr\"{o}m black hole

    The tunneling of massive charged and uncharged particles from the horizon of a commutative Reissner-Nordstrom black hole has been studied in detail in the literature. Here, by adopting the coherent state picture of spacetime noncommutativity, we study the tunneling of massive and charged particles from the horizon of a noncommutative-inspired Reissner-Nordstrom black hole. We show that Hawking radiation in this case is not purely thermal and that there are correlations between emitted modes. These correlations may provide a solution to the information loss problem. We also study the thermodynamics of the noncommutative horizon in this setup. Comment: 10 pages, 2 figures
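
    For context, in the tunneling picture the non-thermality and the correlations mentioned above are usually quantified by the standard expressions (background formulas, not specific to this paper):

        \Gamma(E) \sim e^{\Delta S_{BH}(E)}, \qquad
        \chi(E_1, E_2) = \ln\Gamma(E_1 + E_2) - \ln\!\big[\Gamma(E_1)\,\Gamma(E_2)\big],

    where \Delta S_{BH} is the change of black hole entropy due to the emission; a purely thermal spectrum corresponds to \chi = 0, so a nonvanishing \chi signals correlations between successively emitted quanta.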

    Toward a Social Practice Theory of Relational Competing

    This paper brings together the competitive dynamics and strategy-as-practice literatures to investigate relational competition. Drawing on a global ethnography of the reinsurance market, we develop the concept of micro-competitions, which are the focus of competitors’ everyday competitive practices. We find variation in relational or rivalrous competition by individual competitors across the phases of a micro-competition, between competitors within a micro-competition, and across multiple micro-competitions. These variations arise from the interplay between the unfolding competitive arena and the implementation of each firm’s strategic portfolio. We develop a conceptual framework that makes four contributions: to relational competition; to reconceptualizing action and response; to elaborating the awareness-motivation-capability framework within competitive dynamics; and to the recursive dynamic by which implementing strategy inside firms shapes, and is shaped by, the competitive arena.

    Self-completeness and spontaneous dimensional reduction

    A viable quantum theory of gravity is one of the biggest challenges facing physicists. We discuss the confluence of two highly anticipated features which might be instrumental in the quest for a finite and renormalizable quantum gravity -- spontaneous dimensional reduction and self-completeness. The former suggests that the spacetime background at the Planck scale may be effectively two-dimensional, while the latter implies a condition of maximal compression of matter by the formation of an event horizon for Planckian scattering. We generalize this result to an arbitrary number of dimensions, and show that gravity remains self-complete in more than four dimensions, but not in lower dimensions. In this way we establish an "exclusive disjunction", or "exclusive or" (XOR), between the occurrence of self-completeness and dimensional reduction, with the goal of reducing the unknowns in the scenario of physics at the Planck scale. Potential phenomenological implications of this result are considered by studying the case of a two-dimensional dilaton gravity model resulting from dimensional reduction of Einstein gravity. Comment: 12 pages, 3 figures; v3: final version in press on Eur. Phys. J. Plu
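
    Schematically, the self-completeness argument compares the Compton wavelength of a particle with its gravitational radius; in D spacetime dimensions the standard scalings are (quoted as background, with conventions assumed rather than taken from the paper):

        \lambda_C \sim \frac{\hbar}{m c}, \qquad r_H \sim \big(G_D\, m\big)^{\frac{1}{D-3}},

    so for D \geq 4 a sufficiently large mass gives r_H > \lambda_C and trans-Planckian scattering is shielded by horizon formation, whereas in lower dimensions the horizon radius does not grow with mass in this way and the shielding mechanism is lost.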

    Compression algorithm for Multi Element Telescope for Imaging and Spectroscopy (METIS)

    The compression algorithm defined for METIS (Multi Element Telescope for Imaging and Spectroscopy) is derived from the CCSDS 123.0-r-1 standard, which has been modified and adapted to the mission purposes and integrated with other pieces of software so that the compressor works as efficiently as possible on the expected acquisitions of the sensor. The major modification is the insertion of a uniform scalar quantizer in the prediction loop, which extends the standard to a near-lossless version; a local decoder has also been added, in order to keep a local copy of the dequantized residuals and allow correct reconstruction at the decoder side. Lossy compression can also be executed in a variable-quality way, meaning that the quantization step size can be changed between successive image lines.

    The ability of the original software to process three-dimensional images has been kept but adapted to the mission needs: instead of considering wavelength, consecutive acquisitions are collected together to build up the 3D cube, so that time becomes the third dimension. Since solar acquisitions change very slowly in time, this adjustment works very well, and the prediction of the current pixels becomes much more accurate when the corresponding pixels of previous acquisitions are taken into account.

    Further, a pre-processing routine has been developed to exploit the geometry of the images. It consists in a re-mapping of the pixels that takes advantage of the radial structure of solar acquisitions, through a function that has been named “radialization”. It receives the standard image as input and computes, for every pixel, the distance and the angle with respect to the center; these become the two new coordinates, as when switching from a Cartesian system to a polar one. The triangular-shaped output is then centered and padded in order to keep a rectangular structure. The mapping matrices for the two dimensions are stored, so that this part of the code needs to be executed only once and the “radialized” image can then be obtained by a simple lookup, which is a computationally very light operation. A further advantage is the absence of interpolation among pixels, so that the compression of the image, or rather of a section of it, can still be performed losslessly. Radialization also simplifies the selection of areas of interest in the image: for example, it would be possible to keep the nearest solar corona area coded losslessly and to decrease the reconstruction quality linearly in the radial direction over successive corona-shaped annular regions, by using variable lossy compression for consecutive radialized image lines.
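
    To make the near-lossless modification concrete, below is a minimal Python sketch (illustrative only: the function name, the simple previous-acquisition predictor and the error bound delta are assumptions, not the actual METIS/CCSDS 123.0 implementation) of a uniform scalar quantizer placed inside the prediction loop, together with the local decoder that keeps the dequantized residuals used for reconstruction:

        import numpy as np

        def near_lossless_encode_line(line, prev_acq_line, delta):
            """Encode one image line with max absolute reconstruction error <= delta.
            line          : current samples (1D integer array)
            prev_acq_line : co-located samples of the previous acquisition (the
                            temporal third dimension), or None for the first frame
            delta         : 0 gives lossless coding; the value may vary per line
            Returns the quantizer indices (to be entropy coded) and the local
            reconstruction used as the predictor reference for later samples."""
            step = 2 * delta + 1
            q_idx = np.empty(line.size, dtype=np.int64)
            recon = np.empty(line.size, dtype=np.int64)
            for i, x in enumerate(line.astype(np.int64)):
                # Simple predictor: previous acquisition if available,
                # otherwise the previously reconstructed sample of this line.
                if prev_acq_line is not None:
                    pred = int(prev_acq_line[i])
                else:
                    pred = int(recon[i - 1]) if i > 0 else 0
                resid = x - pred
                # Uniform scalar quantization of the prediction residual.
                q = int(np.sign(resid)) * ((abs(resid) + delta) // step)
                q_idx[i] = q
                # Local decoder: dequantize and reconstruct exactly as the real
                # decoder will, so encoder and decoder stay in sync.
                recon[i] = pred + q * step
            return q_idx, recon

    Varying delta from line to line reproduces the variable-quality behaviour described above; for instance, calling the function with delta = 0 on the radialized lines covering the inner corona would keep that region lossless.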

    Testing the portal imager GLAaS algorithm for machine quality assurance

    Background: To report on enhancements introduced in the GLAaS calibration method, which converts raw portal imager images into absolute dose matrices, and on the application of GLAaS to routine radiation tests in linac quality assurance programmes.

    Methods: Two characteristic effects limit the general applicability of portal-imaging-based dosimetry: the over-flattening of images (which eliminates the "horns" and "holes" induced in the beam profiles by the flattening filters) and the excess of backscattered radiation originating from the detector's robotic arm supports. These two effects were corrected for in the new version of the GLAaS formalism, and results are presented to demonstrate the improvements for different beams, detectors and support arms. GLAaS was also tested for independence from dose rate (fundamental for measuring dynamic wedges). With the new corrections, it is possible to use GLAaS to perform standard tasks of linac quality assurance. Data were acquired to analyse open and wedged fields (mechanical and dynamic) in terms of output factors, MU/Gy, wedge factors, profile penumbrae, symmetry and homogeneity. In addition, 2D gamma evaluation was applied to the measurements to expand the standard QA methods. GLAaS-based data were compared against calculations from the treatment planning system (Varian Eclipse) and against ion chamber measurements as a consolidated benchmark. Measurements were performed mostly on 6 MV beams from Varian linacs. The detectors were the PV-as500/IAS2 and the PV-as1000/IAS3, equipped with either the robotic R- or Exact-arm.

    Results: The corrections for the flattening filter and for arm backscattering were successfully tested. The percentage difference between PV-GLAaS measurements and Eclipse-calculated relative doses at 80% of the field size, for square and rectangular fields larger than 5 × 5 cm², ranged from -1.4% to +1.7%, with a mean variation below 0.5%. For output factors, the average percentage difference between GLAaS and Eclipse (or ion chamber) data was -0.4 ± 0.7 (-0.2 ± 0.4) on square fields. The minimum, maximum and average percentage differences between GLAaS and Eclipse (or ion chamber) data in the flattened field region were 0.1 ± 1.0, 0.7 ± 0.8 and 0.1 ± 0.4 (1.0 ± 1.4, -0.3 ± 0.2 and -0.1 ± 0.2) respectively. Similarly small deviations were observed for flatness and symmetry. For dynamic wedges, the percentage difference of MU/Gy between GLAaS and Eclipse (or ion chamber) was -1.1 ± 1.6 (0.4 ± 0.7); the minimum, maximum and average percentage differences in the flattened field region were 0.4 ± 1.6, -1.5 ± 1.8 and -0.1 ± 0.3 (-2.2 ± 2.3, 2.3 ± 1.2 and 0.8 ± 0.3) respectively. For mechanical wedges, the differences in transmission factors were <1.6% (Eclipse) and <1.1% (ion chamber) for all wedges; the minimum, maximum and average percentage differences in the flattened field region were -1.3 ± 0.7, -0.7 ± 0.7 and -0.2 ± 0.2 (-0.8 ± 0.8, 0.7 ± 1.1 and 0.2 ± 0.3) respectively.

    Conclusion: GLAaS now includes efficient methods to correct for the missing "horns" and "holes" induced by the flattening filter and to compensate for the excessive backscattering from the support arm. These enhancements allow GLAaS-based dosimetric measurements to be used for standard linac quality assurance tasks with reliable and consistent results. The method could be applied in routine practice, being fast to use and allowing the introduction of new analysis tools, such as gamma index analysis, into routine QA.
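
    Since the abstract leans on the 2D gamma evaluation, here is a hedged, brute-force Python sketch of a global 2D gamma index computation (an illustration of the general technique only, with assumed parameter names; it is not the GLAaS implementation and ignores sub-pixel interpolation and image edges):

        import numpy as np

        def gamma_index_2d(dose_eval, dose_ref, pixel_mm,
                           dta_mm=3.0, dd_frac=0.03, search_mm=9.0):
            """Global 2D gamma index on two co-registered dose maps.
            dose_eval, dose_ref : 2D arrays on the same grid (e.g. in Gy)
            pixel_mm            : pixel size in mm
            dta_mm, dd_frac     : distance-to-agreement and dose-difference criteria
            search_mm           : half-width of the spatial search window"""
            dd = dd_frac * dose_ref.max()               # global dose criterion
            reach = int(np.ceil(search_mm / pixel_mm))  # search radius in pixels
            gamma_sq = np.full(dose_ref.shape, np.inf)
            for dy in range(-reach, reach + 1):
                for dx in range(-reach, reach + 1):
                    dist_sq = (dy * pixel_mm) ** 2 + (dx * pixel_mm) ** 2
                    if dist_sq > search_mm ** 2:
                        continue
                    # Shift the evaluated map (edges wrap around: a simplification).
                    shifted = np.roll(np.roll(dose_eval, dy, axis=0), dx, axis=1)
                    cand = dist_sq / dta_mm ** 2 + (shifted - dose_ref) ** 2 / dd ** 2
                    gamma_sq = np.minimum(gamma_sq, cand)
            return np.sqrt(gamma_sq)   # pass criterion: gamma <= 1

    The pass rate is then simply the fraction of points with gamma <= 1, typically evaluated only where the reference dose exceeds a low-dose threshold.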