Reflecting on the Physics of Notations applied to a visualisation case study
This paper presents a critical reflection on the 'physics of notations' proposed by Moody, based on the post hoc application of the concept to the analysis of a visualisation tool developed for a commonplace mathematics tool. Although this is not the design and development approach presumed or preferred by the physics of notations, there are benefits to analysing an extant visualisation. In particular, our analysis benefits from the visualisation having been developed and refined with graphic design professionals and extensive formative user feedback, so the rationale for specific visualisation features is to some extent traceable. This reflective analysis shines a light on features of both the visualisation and the domain visualised, illustrating that it could have been analysed more thoroughly at design time. However, the same analysis raises a variety of interesting questions about the viability of scoping practical visualisation design within the framework proposed by the physics of notations.
Lessons learned on closed cavity thermophotovoltaic system efficiency measurements
Previous efficiency measurements have highlighted that accurately measuring and predicting thermophotovoltaic (TPV) integrated cell or array efficiencies requires a thorough understanding of the system. This includes knowledge of intrinsic diode and filter characteristics, the radiative surface properties of all materials used within the cavity, and an intimate knowledge of the radiator/photon source. As a result of these and other lessons learned, the cavity test fixture used in earlier experiments was redesigned. To reduce radiator temperature gradients, the radiator was oversized and thickened, the cavity walls were eliminated, the diode heat sink and shielding material were separated, and the cold side was redesigned to incorporate a steady-state heat-absorbed measurement technique. The redesigned test fixture provides an isothermal radiator and significantly enhances calorimetry capabilities. This newly designed cavity test fixture, in conjunction with the Monte Carlo photon transport code RACER-X, was used to improve and demonstrate the understanding of in-cavity TPV diode/module system efficiency testing. A single TPV diode was tested in the new fixture and yielded good agreement between measurements and predictions.
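As background, in-cavity TPV efficiency measurements of this kind typically define efficiency as electrical output divided by the total radiant power absorbed by the diode/module, which is what steady-state cold-side calorimetry determines; schematically (a common definition in the TPV literature, not necessarily this paper's exact metric):

$$\eta_{\rm TPV} = \frac{P_{\rm elec}}{P_{\rm elec} + Q_{\rm rejected}}$$

where P_elec is the diode's electrical output and Q_rejected is the steady-state heat removed by the cold-side heat sink, so the denominator is the total power absorbed from the cavity.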
Baryon Acoustic Oscillations in 2D: Modeling Redshift-space Power Spectrum from Perturbation Theory
We present an improved prescription for the matter power spectrum in redshift space, taking proper account of both non-linear gravitational clustering and redshift distortion, which are of particular importance for accurately modeling baryon acoustic oscillations (BAOs). In contrast to the phenomenological models of redshift distortion frequently used in the literature, the new model includes the corrections arising from the non-linear coupling between the density and velocity fields associated with the two competing effects of redshift distortion, i.e., the Kaiser and Finger-of-God effects. Based on an improved treatment of perturbation theory for gravitational clustering, we compare our model predictions with the monopole and quadrupole power spectra of N-body simulations, and an excellent agreement is achieved over the scales of BAOs. Potential impacts on constraining dark energy and modified gravity from the redshift-space power spectrum are also investigated using the Fisher-matrix formalism. We find that the existing phenomenological models of redshift distortion produce a systematic error on measurements of the angular diameter distance and Hubble parameter of 1-2%, and on the growth-rate parameter of ~5%, which would become non-negligible for future galaxy surveys. Correctly modeling redshift distortion is thus essential, and the new prescription for the redshift-space power spectrum including the non-linear corrections can be used as an accurate theoretical template for anisotropic BAOs.
Comment: 18 pages, 10 figures
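For reference, the phenomenological models the abstract argues against typically multiply the linear Kaiser factor by a Finger-of-God damping term; a common form (standard in the literature, and distinct from the authors' improved prescription) is

$$P_s(k,\mu) = b^2\left(1 + \beta\mu^2\right)^2 P_{\delta\delta}(k)\, D_{\rm FoG}(k\mu\sigma_v),$$

where \mu is the cosine of the angle between the wave vector and the line of sight, \beta = f/b combines the linear growth rate f and the bias b, and D_FoG is a Gaussian or Lorentzian damping function with velocity-dispersion parameter \sigma_v.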
Note on Redshift Distortion in Fourier Space
We explore features of redshift distortion in Fourier analyses of N-body simulations. The phases of the Fourier modes of the dark matter density fluctuation are generally shifted by the peculiar motion along the line of sight; the induced phase shift is stochastic and has a probability distribution function (PDF) symmetric about its peak at zero shift, while the exact shape depends on the wave vector, except on very large scales where phases are invariant under linear perturbation theory. Analysis of the phase shifts motivates our phenomenological models for the bispectrum in redshift space. Comparison with simulations shows that our toy models are very successful in modeling the bispectrum of equilateral and isosceles triangles at large scales. In the second part we compare the monopole of the power spectrum and bispectrum under radial and plane-parallel distortion to test the plane-parallel approximation. We confirm the result of Scoccimarro (2000) that the difference in the power spectrum is at the level of 10%, while in the reduced bispectrum the difference is as small as a few percent. However, on the plane perpendicular to the line of sight (k_z = 0), the difference in the power spectrum between the radial and plane-parallel approximations can be more than 10%, and even worse on very small scales. The difference is prominent for the bispectrum, especially for configurations of tilted triangles. The non-Gaussian signals under radial distortion on small scales are systematically biased low relative to the plane-parallel approximation, while the amplitude of the difference depends on the opening angle of the sample as seen by the observer. This observation is a warning against the practice of using the power spectrum and bispectrum measured on the k_z = 0 plane as an estimate of the real-space statistics.
Comment: 15 pages, 8 figures. Accepted for publication in ChJAA
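For readers unfamiliar with the distinction tested here, the radial and plane-parallel treatments differ in the direction along which peculiar velocities displace galaxies in redshift space; schematically (standard definitions, our notation):

$$\vec{s} = \vec{x} + \frac{\vec{v}\cdot\hat{x}}{aH}\,\hat{x} \quad \text{(radial)}, \qquad \vec{s} = \vec{x} + \frac{\vec{v}\cdot\hat{z}}{aH}\,\hat{z} \quad \text{(plane-parallel)},$$

where \vec{x} is the real-space position, \vec{v} the peculiar velocity, and \hat{z} a fixed line of sight. The plane-parallel form is accurate only when the sample subtends a small opening angle at the observer, which is why the differences reported above grow with the opening angle.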
Mitochondrial DNA mutations in human degenerative diseases and aging
A wide variety of mitochondrial DNA (mtDNA) mutations have recently been identified in degenerative diseases of the brain, heart, skeletal muscle, kidney, and endocrine system. Generally, individuals inheriting these mitochondrial diseases are relatively normal in early life, develop symptoms during childhood, mid-life, or old age depending on the severity of the maternally inherited mtDNA mutation, and then undergo a progressive decline. These novel features of mtDNA disease are proposed to be the product of the high dependence of the target organs on mitochondrial bioenergetics, and of the cumulative oxidative phosphorylation (OXPHOS) defect caused by the inherited mtDNA mutation together with the age-related accumulation of mtDNA mutations in post-mitotic tissues.
Cosmological constraints from COMBO-17 using 3D weak lensing
We present the first application of the 3D cosmic shear method developed in Heavens et al. (2006) and the geometric shear-ratio analysis developed in Taylor et al. (2006) to the COMBO-17 data set. 3D cosmic shear has been used to analyse galaxies with redshift estimates from two random COMBO-17 fields covering 0.52 square degrees in total, providing a conditional constraint in the (sigma_8, Omega_m) plane as well as a conditional constraint on the equation of state of dark energy, parameterised by a constant w = p/(rho c^2). The (sigma_8, Omega_m) plane analysis constrained the relation between sigma_8 and Omega_m to be sigma_8 (Omega_m/0.3)^{0.57 +- 0.19} = 1.06 +0.17/-0.16, in agreement with a 2D cosmic shear analysis of COMBO-17. The 3D cosmic shear conditional constraint on w using the two random fields is w = -1.27 +0.64/-0.70. The geometric shear-ratio analysis has been applied to the A901/2 field, which contains three small galaxy clusters. Combining the analysis of the A901/2 field, using the geometric shear-ratio analysis, with that of the two random fields, using 3D cosmic shear, w is conditionally constrained to w = -1.08 +0.63/-0.58. The errors presented in this paper are shown to agree with the Fisher-matrix predictions made in Heavens et al. (2006) and Taylor et al. (2006). When these methods are applied to large datasets, as expected soon from surveys such as Pan-STARRS and VST-KIDS, the dark energy equation of state could be constrained to an unprecedented degree of accuracy.
Comment: 10 pages, 4 figures. Accepted to MNRAS
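As background, the geometric shear-ratio test exploits the fact that, for a single lens, the ratio of tangential shears from source populations at two redshifts depends only on ratios of angular diameter distances; schematically (the standard result, our notation):

$$\frac{\gamma_t(z_{s_1})}{\gamma_t(z_{s_2})} = \frac{D_{ls_1}/D_{s_1}}{D_{ls_2}/D_{s_2}},$$

where D_s and D_ls are the angular diameter distances to the source and from lens to source. The lens mass profile divides out, and cosmology, including w, enters only through the distances, which is what makes the test purely geometric.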
Single-scatter Monte Carlo compared to condensed history results for low energy electrons
A Monte Carlo code has been developed to simulate individual electron interactions. The code has been instrumental in determining the range of validity of the widely used condensed history method. This task was accomplished by isolating and testing the condensed history assumptions. The results show that the condensed history method fails for low energy electron transport due to inaccuracies in energy loss and spatial positioning.
Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/29795/1/0000141.pd
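To illustrate the distinction the abstract draws, below is a deliberately simplified sketch of a single-scatter electron transport loop: every interaction is sampled individually (exponential free path, explicit scatter, explicit energy loss), whereas a condensed-history code would lump many such collisions into one pre-averaged step. All numerical values and the scattering law are placeholder assumptions for illustration, not the physics of the actual code.

```python
import math
import random

def single_scatter_history(energy_keV, mfp_cm=1e-5, loss_frac=0.02, e_cut=1.0):
    """Toy single-scatter electron walk; every collision is sampled explicitly.

    All numbers here are placeholder assumptions for illustration:
    mfp_cm    -- assumed mean free path between collisions (cm)
    loss_frac -- assumed fractional energy loss per collision
    e_cut     -- cutoff energy (keV) terminating the history
    Returns the net displacement along the initial direction (cm).
    """
    z, mu = 0.0, 1.0                                    # depth and direction cosine
    while energy_keV > e_cut:
        step = -mfp_cm * math.log(1.0 - random.random())  # exponential free path
        z += mu * step                                    # move to the collision site
        energy_keV *= (1.0 - loss_frac)                   # explicit per-collision loss
        # placeholder small-angle elastic scatter, composed with a random azimuth
        theta = abs(random.gauss(0.0, 0.2))
        phi = random.uniform(0.0, 2.0 * math.pi)
        mu = (mu * math.cos(theta)
              + math.sqrt(max(0.0, 1.0 - mu * mu)) * math.sin(theta) * math.cos(phi))
    return z

# mean penetration depth of 10 keV electrons over many toy histories
depths = [single_scatter_history(10.0) for _ in range(10000)]
print(sum(depths) / len(depths))
```

A condensed-history variant would replace the inner loop with a handful of macro-steps, each applying a pre-averaged energy loss and a sampled multiple-scattering deflection, which is exactly the averaging that breaks down at low energies.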
Measuring our universe from galaxy redshift surveys
Galaxy redshift surveys have achieved significant progress over the last couple of decades. These surveys tell us in the most straightforward way what our local universe looks like. While the galaxy distribution traces the bright side of the universe, detailed quantitative analyses of the data have even revealed the dark side of the universe, dominated by non-baryonic dark matter as well as the more mysterious dark energy (or Einstein's cosmological constant). We describe several methodologies for using galaxy redshift surveys as cosmological probes, and then summarize recent results from the existing surveys. Finally, we present our views on the future of redshift surveys in the era of precision cosmology.
Comment: 82 pages, 31 figures; invited review article published in Living Reviews in Relativity, http://www.livingreviews.org/lrr-2004-
The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code
This report describes the MCV (Monte Carlo - Vectorized) neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, and output edit specification. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results.

MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP, and was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties.

While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It can perform iterated-source (criticality), multiplied-fixed-source, and fixed-source calculations. MCV uses a highly detailed continuous-energy (as opposed to multigroup) representation of neutron histories and cross-section data. The spatial modeling is fully three-dimensional (3-D), and any geometrical region that can be described by quadric surfaces may be represented. The primary results are region-wise reaction rates, neutron production rates, slowing-down densities, fluxes, leakages, and, when appropriate, the eigenvalue or multiplication factor. Region-wise nuclidic reaction rates are also computed, which may then be used by other modules in the system to determine time-dependent nuclide inventories so that RACER can perform depletion calculations. Furthermore, derived quantities such as ratios and sums of primary quantities and/or other derived quantities may also be calculated. MCV performs statistical analyses on output quantities, computing estimates of the 95% confidence intervals as well as indicators of the reliability of these estimates.

The remainder of this chapter provides an overview of the MCV algorithm. The following three chapters describe the MCV mathematical, physical, and statistical treatments in more detail. Specifically, Chapter 2 discusses topics related to tracking the histories, including geometry modeling, how histories are moved through the geometry, and variance reduction techniques related to the tracking process. Chapter 3 describes the nuclear data and physical models employed by MCV. Chapter 4 discusses the tallies, statistical analyses, and edits. Chapter 5 provides guidance on how to run the code, and Chapter 6 lists the code input options.
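As an aside, the survival biasing and rouletting mentioned above are standard Monte Carlo variance reduction techniques. A minimal sketch of the generic textbook scheme (illustrative only, not MCV's actual implementation):

```python
import random

def collide(weight, p_absorb, w_min=0.1, w_survive=0.5):
    """Generic survival biasing with Russian roulette (illustrative only).

    Rather than killing the history outright on absorption with probability
    p_absorb, the history always survives with its weight reduced
    accordingly (survival biasing). Histories whose weight falls below
    w_min then play Russian roulette: most are terminated, but survivors
    carry a restored weight so the expectation is unchanged.
    """
    weight *= (1.0 - p_absorb)      # survival biasing: deterministic weight loss
    if 0.0 < weight < w_min:        # roulette only low-weight histories
        if random.random() < weight / w_survive:
            weight = w_survive      # survivor: weight restored (unbiased)
        else:
            weight = 0.0            # history terminated
    return weight
```

The roulette step is unbiased because the expected post-game weight, (weight / w_survive) * w_survive, equals the pre-game weight, while the population of low-weight histories that must be tracked shrinks.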
Thermodynamic analysis of Thermophotovoltaic Efficiency and Power Density Tradeoffs
This report presents an assessment of the efficiency and power density limitations of thermophotovoltaic (TPV) energy conversion systems for both ideal (radiative-limited) and practical (defect-limited) systems. Thermodynamics is integrated into the unique process physics of TPV conversion and used to define the intrinsic tradeoff between power density and efficiency. The results of the analysis reveal that the selection of diode bandgap sets a limit on achievable efficiency well below the traditional Carnot level. In addition, it is shown that filter performance dominates diode performance in any practical TPV system and determines the optimum bandgap for a given radiator temperature. It is demonstrated that, for a given radiator temperature, lower bandgap diodes enable both higher efficiency and power density when spectral control limitations are included. The goal of this work is to provide a better understanding of the basic system limitations that will enable successful long-term development of TPV energy conversion technology.
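To make the bandgap tradeoff concrete, here is an illustrative back-of-the-envelope calculation (our toy example, not the report's thermodynamic model): the fraction of a blackbody radiator's output carried by photons above the diode bandgap, i.e. the portion a TPV cell can in principle convert. The radiator temperature and bandgap values below are assumptions for illustration.

```python
import numpy as np

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K
EV = 1.602e-19  # joules per electron-volt

def in_band_fraction(T, e_gap_eV, n=20000):
    """Fraction of blackbody radiant power carried by photons above e_gap_eV.

    Integrates the Planck spectral emissive power per unit photon energy,
    q(E) = (2*pi / (h^3 c^2)) * E^3 / (exp(E/kT) - 1), from the bandgap
    upward and normalizes by the Stefan-Boltzmann total sigma*T^4.
    """
    e = np.linspace(e_gap_eV * EV, 60.0 * KB * T, n)   # photon energies, J
    q = (2.0 * np.pi / (H**3 * C**2)) * e**3 / np.expm1(e / (KB * T))
    in_band = float(np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(e)))  # trapezoid rule
    return in_band / (5.670e-8 * T**4)

# Assumed example: 1500 K radiator, two candidate bandgaps (illustrative values)
for eg in (0.55, 0.72):
    print(f"Eg = {eg} eV: {in_band_fraction(1500.0, eg):.1%} of radiated power is in band")
```

A lower bandgap captures a larger share of the spectrum, which gives some intuition for the report's conclusion that lower bandgap diodes help once spectral control, which must recycle the sub-bandgap photons back to the radiator, is accounted for.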