A Lower Bound on Opaque Sets
It is proved that the total length of any set of countably many rectifiable curves whose union meets all straight lines that intersect the unit square U is at least 2.00002. This is the first improvement on the lower bound of 2 established by Jones in 1964. A similar bound is proved for all convex sets U other than a triangle.
On the Lengths of Curves Passing through Boundary Points of a Planar Convex Shape
We study the lengths of curves passing through a fixed number of points on the boundary of a convex shape in the plane. We show that for any convex shape K, there exist four points on the boundary of K such that the length of any curve passing through these points is at least half of the perimeter of K. It is also shown that the same statement does not remain valid with the additional constraint that the points are extreme points of K. Moreover, the factor 1/2 cannot be achieved with any fixed number of extreme points. We conclude the paper with a few other inequalities related to the perimeter of a convex shape.
Comment: 7 pages, 8 figures
Quadratic Crofton and sets that see themselves as little as possible
Let 0 < L < ∞ and let Ω ⊂ ℝ² be a one-dimensional set with finite length |Ω| = L. We are interested in minimizers of an energy functional that measures the size of a set projected onto itself in all directions: we are thus asking for sets that see themselves as little as possible (suitably interpreted). Obvious minimizers of the functional are subsets of a straight line, but this is only possible for L ≤ diam(Ω). The problem has an equivalent formulation: the expected number of intersections between a random line and Ω depends only on the length of Ω (Crofton's formula). We are interested in sets Ω that minimize the variance of the number of intersections. We solve the problem for convex Ω and slightly less than half of all values of L: there, a minimizing set is the union of copies of the boundary and a line segment.
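The underlying Crofton property, that the expected number of intersections between a random line and a rectifiable set depends only on the set's length, can be checked with a quick Monte Carlo sketch (an illustration written for this summary, not taken from the paper; the (p, θ) line parameterization and the window half-width R are choices made here):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_crossings(a, b, n_lines=200_000, R=2.0):
    """Monte Carlo estimate of the expected number of intersections
    between a random line (uniform in the kinematic measure dp dtheta,
    lines written as x*cos(theta) + y*sin(theta) = p) and the segment
    from point a to point b."""
    theta = rng.uniform(0.0, np.pi, n_lines)
    p = rng.uniform(-R, R, n_lines)
    n = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # line normals
    fa = n @ np.asarray(a, float) - p   # signed offset of endpoint a
    fb = n @ np.asarray(b, float) - p   # signed offset of endpoint b
    return np.mean(fa * fb < 0)         # line crosses iff signs differ

# Two segments of length 1 and 2 inside the sampling window:
e1 = mean_crossings([0.0, 0.0], [1.0, 0.0])
e2 = mean_crossings([0.0, -1.0], [0.0, 1.0])
print(e2 / e1)   # close to 2: expected crossings scale with length only
```

Despite the two segments having different positions and orientations, the ratio of their expected crossing counts matches the ratio of their lengths, which is exactly the length-only dependence the abstract refers to.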
The Surface Evolver
The Surface Evolver is a computer program that minimizes the energy of a surface subject to constraints. The surface is represented as a simplicial complex. The energy can include surface tension, gravity, and other forms. Constraints can be geometrical constraints on vertex positions or constraints on integrated quantities such as body volumes. The minimization is done by evolving the surface down the energy gradient. This paper describes the mathematical model used and the operations available to interactively modify the surface.
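The evolve-down-the-energy-gradient idea can be illustrated with a toy sketch (this is not the Evolver's actual algorithm or data structures, just a minimal analogue): a small triangulated sheet with its boundary vertices held fixed, whose lifted center vertex is driven down a numerical area gradient until the sheet flattens.

```python
import numpy as np

def tri_area(v0, v1, v2):
    """Area of one triangle of the simplicial complex."""
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))

def total_area(verts, tris):
    """Total surface energy: here just surface tension = total area."""
    return sum(tri_area(verts[i], verts[j], verts[k]) for i, j, k in tris)

# 3x3 vertex grid over the unit square; the center vertex is lifted.
xs, ys = np.meshgrid([0.0, 0.5, 1.0], [0.0, 0.5, 1.0])
verts = np.stack([xs.ravel(), ys.ravel(), np.zeros(9)], axis=1)
verts[4, 2] = 0.5                      # vertex 4 is the center
tris = [(0, 1, 4), (1, 2, 4), (2, 5, 4), (5, 8, 4),
        (8, 7, 4), (7, 6, 4), (6, 3, 4), (3, 0, 4)]

free = [4]            # boundary vertices are constrained (held fixed)
step, h = 0.05, 1e-6
for _ in range(200):  # evolve the surface down the energy gradient
    for vi in free:
        grad = np.zeros(3)
        for d in range(3):             # central-difference gradient
            verts[vi, d] += h; ap = total_area(verts, tris)
            verts[vi, d] -= 2 * h; am = total_area(verts, tris)
            verts[vi, d] += h
            grad[d] = (ap - am) / (2 * h)
        verts[vi] -= step * grad
print(total_area(verts, tris))  # approaches 1.0, the flat minimum
```

The real Evolver uses analytic gradients and many more energy forms and constraint types, but the loop above captures the basic descent step the abstract describes.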
Rarefied gas dynamic simulations of planetary atmospheric systems
My doctoral research involves the advanced numerical simulation of rarefied (low-pressure) planetary atmospheres and volcanism with advanced physical modeling, applying the Direct Simulation Monte Carlo (DSMC) method. This method is the approach of choice for modeling a wide range of continuum-to-rarefied systems, in which the average spacing between molecules in the flow becomes comparable to the flow length scales and traditional means of computing fluid dynamics with the partial differential equations of continuum theory break down. DSMC is a probabilistic technique by which the motions and collisions of representative molecules are computed. Multiple gas species are modeled, along with non-equilibrium radiation, high-speed collisions, photochemistry, and a wide range of other relevant physics. Comprehensive atmospheric simulations are computed in parallel on one- and three-dimensional domains that, depending on the scope of a particular project, can span entire atmospheric systems from the planetary surface through vacuum. These projects are ongoing efforts in modeling and understanding global-scale atmospheric flows and the processes by which such flows are populated and propagated, and they represent advances of the state of the art in planetary atmospheric simulation.
I have produced and presented research on four distinct topics: 1) simulations of the complete atmosphere of Jupiter's volcanic moon Io, including sublimation and plasma-sputtering processes; 2) the creation of a novel neutral density model for Earth's upper atmosphere in partnership with Los Alamos' ISR division; 3) multi-species simulations of the rarefied gas dynamics, transfer, and escape processes of the Pluto-Charon system; and 4) investigations of the canopy unsteadiness and development of transient filamentary structure as observed by the New Horizons probe at the Ionian Tvashtar plume site.
In the course of these projects, and using my research group's existing planetary-science DSMC code as a foundation, I have developed a novel, generalized framework for rarefied atmospheric simulation that enables efficient and thorough construction of entire upper-atmospheric models. My dissertation offers an analysis of the methodology of rarefied gas dynamic planetary atmospheric simulation, in addition to discussion of each project's scientific context, the results of my simulations, and their relevance toward the explanation of various observed phenomena in planetary atmospheric science.
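The probabilistic core of DSMC, selecting candidate collision pairs and accepting each with probability proportional to its relative speed, can be sketched in a few lines. This is a single-cell, hard-sphere toy model, not the author's research code; the variable names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def collide_cell(v, n_pairs, vr_max):
    """One DSMC-style collision step in a single cell of hard-sphere
    molecules. v is an (N, 3) array of velocities; n_pairs candidate
    pairs are tested, and each is accepted with probability
    vr / vr_max (vr = relative speed). Accepted pairs scatter
    isotropically in the center-of-mass frame, which conserves
    momentum and kinetic energy exactly."""
    n = len(v)
    for _ in range(n_pairs):
        i, j = rng.choice(n, size=2, replace=False)
        vr = np.linalg.norm(v[i] - v[j])
        if rng.random() < vr / vr_max:      # accept ~ relative speed
            cm = 0.5 * (v[i] + v[j])        # center-of-mass velocity
            cos_t = 2.0 * rng.random() - 1.0    # isotropic direction
            sin_t = np.sqrt(1.0 - cos_t**2)
            phi = 2.0 * np.pi * rng.random()
            d = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
            v[i] = cm + 0.5 * vr * d        # post-collision velocities
            v[j] = cm - 0.5 * vr * d
    return v

v = rng.normal(size=(500, 3))               # representative molecules
p0, e0 = v.sum(axis=0), (v**2).sum()        # pre-collision invariants
v = collide_cell(v, n_pairs=2000, vr_max=10.0)
print(np.allclose(v.sum(axis=0), p0), np.isclose((v**2).sum(), e0))
```

Production DSMC codes add the NTC pair-count formula, spatial cells, molecular motion, boundaries, chemistry, and the other physics listed above; the acceptance-and-scatter kernel, however, is the method's defining step.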
Search for neutrino oscillations on a long base-line at the CHOOZ nuclear power station
This final article about the CHOOZ experiment presents a complete description
of the electron antineutrino source and detector, the calibration methods and
stability checks, the event reconstruction procedures and the Monte Carlo
simulation. The data analysis, systematic effects and the methods used to reach
our conclusions are fully discussed. Some new remarks are presented on the
deduction of the confidence limits and on the correct treatment of systematic
errors.
Comment: 41 pages, 59 figures, LaTeX file, accepted for publication by Eur. Phys. J.
PROBING THE NON-LINEARITY IN GALAXY CLUSTERS THROUGH THE ANALYSIS OF FRACTAL DIMENSION VIA WAVELET TRANSFORM
The study of the large-scale structure (LSS) of the Universe, armed with both all-sky surveys and numerical simulations, has become an increasingly important tool to probe basic cosmology. We used the method of wavelet transforms combined with fractal-based point processes to investigate the clustering of matter on galactic scales through the fractal analysis approach. In particular, we developed a method to calculate the angular fractal dimension of galaxy distributions as a function of cosmological comoving space. Taking advantage of the self-similarity and localization properties of wavelets allows us to compute the fractal dimension of galaxies in narrow redshift bins. The narrow bins ensure that dynamical evolution has not occurred. We used both real and simulated data from the Baryon Oscillation Spectroscopic Survey (BOSS) and the Mock Galaxy Catalogs produced by the Sloan Digital Sky Survey (SDSS). Using the wavelet packet power spectrum, we find areas in the galaxy distribution which have power-law-like behavior. The exponent of the power law is the Hurst exponent H, which is directly related to the fractal dimension of spatial point processes. We find the fractal dimension at all redshifts is D = 1.3 ± 0.2 for BOSS galaxies, while D = 1.6 ± 0.3 for the Mock Galaxy Catalogs. We conclude that the galaxy distribution in the redshift range z < 1 can be described as an angular fractal system, and the distribution is inhomogeneous and irregular.
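The quantity being estimated, the fractal dimension of a spatial point process, can also be illustrated with plain box counting (a generic sketch of the concept, not the wavelet packet method used in the thesis): count occupied boxes N(ε) at several scales and fit the slope of log N against log(1/ε).

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Box-counting estimate of the fractal dimension of a 2-D point
    set: N(eps) occupied boxes at each scale eps, dimension = slope of
    log N(eps) versus log(1/eps)."""
    counts = []
    for eps in scales:
        boxes = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / scales), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
scales = np.array([0.1, 0.05, 0.025, 0.0125])

# Points on a segment should give D near 1; points filling a square, D near 2.
line = np.stack([np.linspace(0, 1, 20_000), np.zeros(20_000)], axis=1)
area = rng.random((200_000, 2))
d_line = box_counting_dimension(line, scales)
d_area = box_counting_dimension(area, scales)
print(d_line, d_area)   # close to 1 and 2 respectively
```

The wavelet route of the abstract estimates the same dimension through the Hurst exponent of the power spectrum, with the added benefit that wavelet localization confines the estimate to narrow redshift bins.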
Visibility computation through image generalization
This dissertation introduces the image generalization paradigm for computing visibility. The paradigm is based on the observation that an image is a powerful tool for computing visibility. An image can be rendered efficiently with the support of graphics hardware, and each of the millions of pixels in the image reports a visible geometric primitive. However, the visibility solution computed by a conventional image is far from complete. A conventional image has a uniform sampling rate, which can miss visible geometric primitives with a small screen footprint. A conventional image can only find geometric primitives to which there is a direct line of sight from the center of projection (i.e., the eye) of the image; therefore, a conventional image cannot compute the set of geometric primitives that become visible as the viewpoint translates, or as time changes in a dynamic dataset. Finally, like any sample-based representation, a conventional image can only confirm that a geometric primitive is visible; it cannot confirm that a geometric primitive is hidden, as that would require an infinite number of samples to confirm that the primitive is hidden at all of its points.

The image generalization paradigm overcomes the visibility computation limitations of conventional images. The paradigm has three elements. (1) Sampling pattern generalization entails adding sampling locations to the image plane where needed to find visible geometric primitives with a small footprint. (2) Visibility sample generalization entails replacing the conventional scalar visibility sample with a higher-dimensional sample that records all geometric primitives visible at a sampling location as the viewpoint translates or as time changes in a dynamic dataset; the higher-dimensional visibility sample is computed exactly, by solving visibility event equations, and not through sampling. Another form of visibility sample generalization is to enhance a sample with its trajectory as the geometric primitive it samples moves in a dynamic dataset. (3) Ray geometry generalization redefines a camera ray as the set of 3D points that project at a given image location; this generalization supports rays that are not straight lines, and enables designing cameras with non-linear rays that circumvent occluders to gather samples not visible from a reference viewpoint.

The image generalization paradigm has been used to develop visibility algorithms for a variety of datasets, visibility parameter domains, and performance-accuracy tradeoff requirements. These include: an aggressive from-point visibility algorithm that guarantees finding all geometric primitives with a visible fragment, no matter how small the primitive's image footprint; an efficient and robust exact from-point visibility algorithm that iterates between a sample-based and a continuous visibility analysis of the image plane to quickly converge to the exact solution; a from-rectangle visibility algorithm that uses 2D visibility samples to compute a visible set that is exact under viewpoint translation; a flexible pinhole camera that enables local modulations of the sampling rate over the image plane according to an input importance map; an animated depth image that stores not only color and depth per pixel but also a compact representation of pixel sample trajectories; and a curved ray camera that seamlessly integrates multiple viewpoints into a multiperspective image without the viewpoint transition distortion artifacts of prior methods.
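The uniform-sampling limitation that motivates sampling pattern generalization can be seen in a toy one-dimensional item buffer (an illustrative sketch written for this summary, not an algorithm from the dissertation): each sample records the front-most primitive covering it, and a primitive with a small footprint is missed until samples are added at a finer rate.

```python
import numpy as np

# Primitives on a 1-D "screen": (id, interval [x0, x1], depth z).
# Smaller depth is closer to the eye and wins at each sample.
prims = [(0, 0.0, 1.0, 5.0),     # large background primitive
         (1, 0.42, 0.44, 1.0)]   # small foreground primitive

def visible_set(samples):
    """Item buffer: for each sample, keep the nearest covering
    primitive; the visible set is the union of sampled ids."""
    vis = set()
    for x in samples:
        hits = [(z, pid) for pid, x0, x1, z in prims if x0 <= x <= x1]
        if hits:
            vis.add(min(hits)[1])   # front-most primitive at this sample
    return vis

coarse = np.linspace(0.0, 1.0, 16)    # uniform, one sample per "pixel"
fine = np.linspace(0.0, 1.0, 256)     # denser pattern finds primitive 1
print(visible_set(coarse))            # misses the small primitive
print(visible_set(fine))              # finds both primitives
```

In the dissertation's terms, the coarse pattern is the conventional image, and moving from the coarse to the adaptively enriched pattern is the first of the three generalizations.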
Errata and Addenda to Mathematical Constants
We humbly and briefly offer corrections and supplements to Mathematical
Constants (2003) and Mathematical Constants II (2019), both published by
Cambridge University Press. Comments are always welcome.
Comment: 162 pages