
    Space-time Constructivism vs. Modal Provincialism: Or, How Special Relativistic Theories Needn't Show Minkowski Chronogeometry

    Get PDF
    Already in 1835 Lobachevski entertained the possibility of multiple (one might say "rival") geometries of the same type playing a role. This idea of rival geometries has reappeared from time to time (in Poincaré and several 20th-century authors, among others) but had not become a key idea in space-time philosophy prior to Brown's _Physical Relativity_. Such ideas are emphasized towards the end of Brown's book, which I suggest as the interpretive key. A crucial difference between Brown's constructivist approach to space-time theory and orthodox "space-time realism" pertains to modal scope. Constructivism takes a broad modal scope in applying (at least) to all local classical field theories---modal cosmopolitanism, one might say, including theories with multiple geometries. By contrast, the orthodox view is modally provincial in assuming that there exists a _unique_ geometry, as the familiar theories (Newtonian gravity, Special Relativity, Nordström's gravity, and Einstein's General Relativity) have. These theories serve as the "canon" for the orthodox view. Their historical roles also suggest a Whiggish story of inevitable progress. Physics literature after c. 1920 is relevant to orthodoxy primarily as commentary on the canon, which closed in the 1910s. The orthodox view explains the spatio-temporal behavior of matter in terms of the manifestation of the real geometry of space-time, an explanation that works fairly well within the canon. The orthodox view, Whiggish history, and the canon have a symbiotic relationship. If one happens to philosophize about a theory outside the canon, space-time realism sheds little light on the spatio-temporal behavior of matter. Worse, it gives the _wrong_ answer when applied to an example arguably _within_ the canon, a sector of Special Relativity, namely, _massive_ scalar gravity with universal coupling. Which is the true geometry---the flat metric from the Poincaré symmetry group, the conformally flat metric exhibited by material rods and clocks, or both---or is the question faulty? How does space-time realism explain the fact that all matter fields see the same curved geometry, when so many ways to mix and match exist? Constructivist attention to dynamical details is vindicated; geometrical shortcuts can disappoint. The more exhaustive exploration of relativistic field theories in particle physics, especially massive theories, is a largely untapped resource for space-time philosophy.
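
    A minimal sketch of the two candidate geometries at issue in massive scalar gravity with universal coupling, in one common parametrization (the exponential form of the conformal factor is an assumption made here for illustration, not necessarily the paper's):

        % eta_{mu nu}: the flat metric tied to the Poincare symmetry group.
        % tilde{g}_{mu nu}: the conformally flat metric exhibited by rods and clocks,
        % to which matter couples universally; psi is the scalar gravitational potential.
        \[
          \tilde{g}_{\mu\nu} \;=\; e^{2\psi}\,\eta_{\mu\nu} .
        \]
        % The graviton mass term is built from the flat background metric, so both
        % geometries appear in the dynamics; neither is idle, which is what makes
        % the "which is the true geometry?" question awkward for space-time realism.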

    Multi-View Image Compositions

    Get PDF
    The geometry of single-viewpoint panoramas is well understood: multiple pictures taken from the same viewpoint may be stitched together into a consistent panorama mosaic. By contrast, when the point of view changes or when the scene changes (e.g., due to objects moving), no consistent mosaic may be obtained unless the structure of the scene is very special. Artists have explored this problem and demonstrated that geometrical consistency is not the only criterion for success: incorporating multiple viewpoints in space and time into the same panorama may produce compelling and informative pictures. We explore this avenue and suggest an approach to automatically compositing images taken from multiple viewpoints into a single panorama. Rather than looking at 3D scene consistency, we look at image consistency. Our approach is based on optimizing a cost function that takes into account image-to-image consistency, measured on point features and along picture boundaries. The optimization explicitly considers occlusion between pictures. We illustrate our ideas with a number of experiments on collections of images of objects and outdoor scenes.
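
    A minimal sketch of the kind of point-feature consistency term described above, assuming per-image homographies into the mosaic frame (the function names, the homography parametrization, and the equal weighting are assumptions for illustration, not the paper's formulation; the boundary and occlusion terms are omitted):

        import numpy as np

        def apply_homography(H, p):
            """Map a 2D point p through a 3x3 homography H (homogeneous coordinates)."""
            q = H @ np.array([p[0], p[1], 1.0])
            return q[:2] / q[2]

        def feature_consistency_cost(matches, warps):
            """Sum of squared distances between matched point features after each
            image is warped into the mosaic frame.
            matches: list of (img_i, pt_i, img_j, pt_j); warps: image index -> 3x3 H."""
            total = 0.0
            for i, pi, j, pj in matches:
                qi = apply_homography(warps[i], pi)
                qj = apply_homography(warps[j], pj)
                total += float(np.sum((qi - qj) ** 2))
            return total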

    Exact Solutions for the Intrinsic Geometry of Black Hole Coalescence

    Get PDF
    We describe the null geometry of a multiple black hole event horizon in terms of a conformal rescaling of a flat space null hypersurface. For the prolate spheroidal case, we show that the method reproduces the pair-of-pants shaped horizon found in the numerical simulation of the head-on collision of black holes. For the oblate case, it reproduces the initially toroidal event horizon found in the numerical simulation of collapse of a rotating cluster. The analytic nature of the approach makes further conclusions possible, such as a bearing on the hoop conjecture. From a time reversed point of view, the approach yields a description of the past event horizon of a fissioning white hole, which can be used as null data for the characteristic evolution of the exterior space-time. (Comment: 21 pages, 6 figures, revtex, to appear in Phys. Rev.)
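
    One way to read the construction, sketched here in my own notation (the precise form of the rescaling used in the paper may differ): the degenerate intrinsic metric of the event horizon is taken to be conformal to the induced metric on a null hypersurface of Minkowski space,

        % gamma_{AB}: intrinsic (degenerate) metric of the event horizon;
        % hat{gamma}_{AB}: induced metric on a flat-space null hypersurface, e.g. one
        % generated by the null rays orthogonal to a prolate or oblate spheroid;
        % Omega: conformal factor carrying the curved-space information.
        \[
          \gamma_{AB} \;=\; \Omega^{2}\,\hat{\gamma}_{AB} .
        \]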

    A Descriptive Framework for Temporal Data Visualizations Based on Generalized Space-Time Cubes

    Get PDF
    We present the generalized space-time cube, a descriptive model for visualizations of temporal data. Visualizations are described as operations on the cube, which transform the cube's 3D shape into readable 2D visualizations. Operations include extracting subparts of the cube, flattening it across space or time, or transforming the cube's geometry and content. We introduce a taxonomy of elementary space-time cube operations and explain how these operations can be combined and parameterized. The generalized space-time cube has two properties: (1) it is purely conceptual, without the need to be implemented, and (2) it applies to all datasets that can be represented in two dimensions plus time (e.g., geo-spatial data, videos, networks, multivariate data). The proper choice of space-time cube operations depends on many factors, for example, the density or sparsity of a cube. Hence, we propose a characterization of structures within space-time cubes, which allows us to discuss the strengths and limitations of operations. We finally review interactive systems that support multiple operations, allowing a user to customize their view of the data. With this framework, we hope to facilitate the description, criticism, and comparison of temporal data visualizations, as well as encourage the exploration of new techniques and systems. This paper is an extension of Bach et al.'s (2014) work.
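
    A minimal sketch of two elementary operations from the taxonomy, assuming the cube is materialized as a dense NumPy array with axes (time, y, x); the array layout and the aggregation choice are assumptions for illustration, and most real datasets would not be stored this way:

        import numpy as np

        def time_cutting(cube, t):
            """Extract the 2D spatial slice at a single time step (a 'time cut')."""
            return cube[t]

        def time_flattening(cube, aggregate=np.mean):
            """Collapse the time axis into a single 2D image, e.g. by averaging."""
            return aggregate(cube, axis=0)

        # Usage: a random 10-frame, 64x64 cube.
        cube = np.random.rand(10, 64, 64)
        frame3 = time_cutting(cube, 3)      # one 64x64 snapshot
        overview = time_flattening(cube)    # temporal average, 64x64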

    Multi Camera Stereo and Tracking Patient Motion for SPECT Scanning Systems

    Get PDF
    Patient motion, which causes artifacts in reconstructed images, can be a serious problem in Single Photon Emission Computed Tomography (SPECT) imaging. If patient motion can be detected and quantified, the reconstruction algorithm can compensate for it. A real-time, multi-threaded Visual Tracking System (VTS) using optical cameras, suitable for deployment in clinical trials, is under development. The VTS tracks patients using multiple video images and image processing techniques, calculating patient motion in three-dimensional space. This research aimed to develop and implement an algorithm for feature matching and stereo location computation using multiple cameras. Feature matching is based on the epipolar geometry constraint for a pair of images and is extended to the multiple-view case with an iterative algorithm. Stereo locations of the matches are then computed using the sum of squared distances from the projected 3D lines in SPECT coordinates as the error metric. This information from the VTS, when coupled with motion assessment from the emission data itself, can provide robust compensation for patient motion as part of reconstruction.
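
    A sketch of the stereo-location step, assuming each camera contributes one back-projected 3D ray (an origin and a unit direction) in SPECT coordinates; this is the standard closed-form least-squares point for a bundle of rays, not necessarily the VTS implementation:

        import numpy as np

        def triangulate_from_rays(points, directions):
            """points: (N,3) ray origins; directions: (N,3) ray directions.
            Returns the 3D point minimizing sum_i ||(I - d_i d_i^T)(x - p_i)||^2,
            i.e. the sum of squared distances to all rays."""
            A = np.zeros((3, 3))
            b = np.zeros(3)
            for p, d in zip(points, directions):
                d = d / np.linalg.norm(d)
                P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to d
                A += P
                b += P @ p
            return np.linalg.solve(A, b)

        # Usage: three rays that intersect at (1, 2, 3).
        p = np.array([[0, 2, 3], [1, 0, 3], [1, 2, 0]], dtype=float)
        d = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
        print(triangulate_from_rays(p, d))  # approximately [1. 2. 3.]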

    Hinode/Extreme-Ultraviolet Imaging Spectrometer Observations of the Temperature Structure of the Quiet Corona

    Full text link
    We present a Differential Emission Measure (DEM) analysis of the quiet solar corona on disk using data obtained by the Extreme-ultraviolet Imaging Spectrometer (EIS) on Hinode. We show that the expected quiet Sun DEM distribution can be recovered from judiciously selected lines, and that their average intensities can be reproduced to within 30%. We present a subset of these selected lines spanning the temperature range log T = 5.6 to 6.4 (T in K) that can be used to derive the DEM distribution reliably. The subset can be used without the need for extensive measurements, and the observed intensities can be reproduced to within the estimated uncertainty in the pre-launch calibration of EIS. Furthermore, using this subset, we also demonstrate that the quiet coronal DEM distribution can be recovered on size scales down to the spatial resolution of the instrument (1" pixels). The subset will therefore be useful for studies of small-scale spatial inhomogeneities in the coronal temperature structure, for example, in addition to studies requiring multiple DEM derivations in space or time. We apply the subset to 45 quiet Sun datasets taken in the period 2007 January to April, and show that although the absolute magnitude of the coronal DEM may scale with the amount of released energy, the shape of the distribution is very similar up to at least log T ~ 6.2 in all cases. This result is consistent with the view that the shape of the quiet Sun DEM is mainly a function of the radiating and conducting properties of the plasma and is fairly insensitive to the location and rate of energy deposition. This universal DEM may be sensitive to other factors such as loop geometry, flows, and the heating mechanism, but if so they cannot vary significantly from quiet Sun region to region. (Comment: Version accepted by ApJ and published in ApJ 705. Abridged abstract.)
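
    For reference, the standard DEM formalism assumed in this kind of analysis (normalization conventions vary between authors; this is one common form, not necessarily the paper's):

        % I_line: observed intensity of an optically thin emission line;
        % G(T): the line's contribution function (atomic physics, abundances);
        % DEM(T): differential emission measure along the line of sight.
        \[
          I_{\mathrm{line}} \;=\; \int G(T)\,\mathrm{DEM}(T)\,\mathrm{d}T,
          \qquad
          \mathrm{DEM}(T) \;=\; n_e^{2}\,\frac{\mathrm{d}s}{\mathrm{d}T}.
        \]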

    Neural View-Interpolation for Sparse Light Field Video

    No full text
    We suggest representing light field (LF) videos as "one-off" neural networks (NN), i.e., a learned mapping from view-plus-time coordinates to high-resolution color values, trained on sparse views. Initially, this sounds like a bad idea for three main reasons: First, a NN LF will likely have lower quality than a same-sized pixel-basis representation. Second, only a few training examples, e.g., 9 views per frame, are available for sparse LF videos. Third, there is no generalization across LFs, but only across view and time. Consequently, a network needs to be trained for each LF video. Surprisingly, these problems can turn into substantial advantages: Unlike the linear pixel basis, a NN has to come up with a compact, non-linear, i.e., more intelligent, explanation of color, conditioned on the sparse view and time coordinates. As observed for many NNs, however, this representation is interpolatable: if the image output is plausible for the sparse view coordinates, it is plausible for all intermediate, continuous coordinates as well. Our specific network architecture involves a differentiable occlusion-aware warping step, which leads to a compact set of trainable parameters and consequently fast learning and fast execution.
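
    A minimal sketch of the "coordinates in, color out" idea, assuming a plain fully connected network in PyTorch; the layer sizes, the absence of positional encoding, and the omission of the paper's occlusion-aware warping step are all simplifications for illustration:

        import torch
        import torch.nn as nn

        class CoordinateLF(nn.Module):
            """Maps (u, v, t, x, y) view-plus-time-plus-pixel coordinates to RGB."""
            def __init__(self, in_dim=5, hidden=256, depth=4):
                super().__init__()
                layers, d = [], in_dim
                for _ in range(depth):
                    layers += [nn.Linear(d, hidden), nn.ReLU()]
                    d = hidden
                layers += [nn.Linear(d, 3), nn.Sigmoid()]  # RGB in [0, 1]
                self.net = nn.Sequential(*layers)

            def forward(self, coords):
                return self.net(coords)

        # Schematic training step: regress the colors of the sparse input views.
        model = CoordinateLF()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        coords = torch.rand(1024, 5)   # placeholder (u, v, t, x, y) samples
        target = torch.rand(1024, 3)   # placeholder colors from the sparse views
        loss = ((model(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()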