    Lunar Resource Assessment: an Industry Perspective

    The goals of the U.S. space program are to return to the Moon, establish a base, and continue onward to Mars. To accomplish this in a relatively short time frame and to avoid the high cost of transporting materials from the Earth, resources on the Moon will need to be mined. Oxygen will be one of the most important resources, used both as a rocket propellant and for life support. Ilmenite and lunar regolith have both been considered as ores for the production of oxygen. Resource production on the Moon will be a very important part of the U.S. space program. To produce resources, we must explore to identify the location of ore or feedstock and calculate the surface and underground reserves. Preliminary resource production tests will provide information that can be used in the final plant design. Bechtel Corporation's experience in terrestrial engineering and construction has led to an interest in lunar resource assessment leading to the construction of production facilities on the Moon. There is an intimate link between adequate resource assessment to define feedstock quantity and quality, materials processing requirements, and the successful production of lunar oxygen. Although lunar resource assessment is often viewed as a research process, the engineering and production aspects are very important to consider. Resource production often requires the acquisition of different types, scales, or resolutions of data than those needed for research, and these data are needed early in the exploration process. An adequate assessment of the grade, areal extent, and depth distribution of the resources is a prerequisite to mining. The need for a satisfactory resource exploration program using remote sensing techniques, field sampling, and chemical and physical analysis is emphasized. These data can be used to define the ore for oxygen production and the mining and processing facilities and equipment required.

    Reconstruction of primordial density fields

    The Monge-Ampère-Kantorovich (MAK) reconstruction is tested against cosmological N-body simulations. Using only the present mass distribution sampled with particles, and the assumption of homogeneity of the primordial distribution, MAK recovers for each particle the non-linear displacement field between its present position and its Lagrangian position on a primordial uniform grid. To test the method, we examine a standard LCDM N-body simulation with Gaussian initial conditions and 6 models with non-Gaussian initial conditions: a chi-squared model, a model with primordial voids, and four weakly non-Gaussian models. Our extensive analyses of the Gaussian simulation show that the level of accuracy of the reconstruction of the nonlinear displacement field achieved by MAK is unprecedented, at scales as small as about 3 Mpc. In particular, it captures in a nontrivial way the nonlinear contribution from gravitational instability, well beyond the Zel'dovich approximation. This is also confirmed by our analyses of the non-Gaussian samples. Applying the spherical collapse model to the probability distribution function of the divergence of the displacement field, we also show that from a well-reconstructed displacement field, such as that given by MAK, it is possible to accurately disentangle dynamical contributions induced by gravitational clustering from possible initial non-Gaussianities, allowing one to efficiently test the non-Gaussian nature of the primordial fluctuations. In addition, a simple application of MAK using the Zel'dovich approximation allows one to also recover accurately the present-day peculiar velocity field on scales of about 8 Mpc. Comment: Version to appear in MNRAS, 24 pages, 21 figures (uses 35 figure files), 1 table
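    The discrete MAK problem described above amounts to an optimal assignment between present-day particle positions and a uniform Lagrangian grid under quadratic cost. A minimal sketch of that assignment step (not the authors' code; the function name and the toy 4x4 grid are illustrative) using SciPy's assignment solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mak_assignment(present_pos, grid_pos):
    # Quadratic-cost assignment: each present-day particle is matched to
    # one Lagrangian grid site so the total squared displacement is minimal.
    diff = present_pos[:, None, :] - grid_pos[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)
    rows, cols = linear_sum_assignment(cost)
    assigned = grid_pos[cols]
    # Returns each particle's Lagrangian site and its displacement.
    return assigned, present_pos - assigned

# Toy example: particles are small perturbations of a 4x4 unit grid,
# so the optimal assignment should recover the original sites.
rng = np.random.default_rng(0)
q = np.array([[i, j] for i in range(4) for j in range(4)], dtype=float)
x = q + 0.1 * rng.standard_normal(q.shape)
lagrangian, displacement = mak_assignment(x, q)
```

    In a real reconstruction the cost matrix over millions of particles is far too large for the Hungarian algorithm, which is why dedicated MAK solvers are used; the toy above only illustrates the optimization being solved.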

    Degrees of Guaranteed Envy-Freeness in Finite Bounded Cake-Cutting Protocols

    Cake-cutting protocols aim at dividing a ``cake'' (i.e., a divisible resource) and assigning the resulting portions to several players in such a way that each player feels they have received a ``fair'' amount of the cake. An important notion of fairness is envy-freeness: no player wishes to switch the portion of the cake received with another player's portion. Despite intense efforts in the past, it is still an open question whether there is a \emph{finite bounded} envy-free cake-cutting protocol for an arbitrary number of players, or even for four players. We introduce the notion of degree of guaranteed envy-freeness (DGEF) as a measure of how well a cake-cutting protocol can approximate the ideal of envy-freeness while remaining finite bounded (trading being disregarded). We propose a new finite bounded proportional protocol for any number n \geq 3 of players, and show that this protocol has a DGEF of 1 + \lceil (n^2)/2 \rceil. This is currently the best DGEF among known finite bounded cake-cutting protocols for an arbitrary number of players. We make the case that improving the DGEF even further is a tough challenge, and determine, for comparison, the DGEF of selected known finite bounded cake-cutting protocols. Comment: 37 pages, 4 figures
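    For readers unfamiliar with finite bounded proportional protocols, the classic Banach-Knaster ``last diminisher'' procedure (not the new protocol from this abstract, and proportional rather than envy-free) can be sketched on a [0, 1] cake; the helper names and bisection-based marking are illustrative assumptions:

```python
def _mark(value, left, target, tol=1e-9):
    # Bisection: the smallest x >= left with value(left, x) ~= target.
    lo, hi = left, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if value(left, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def last_diminisher(values):
    # values[i](a, b) is player i's value of the piece [a, b] of a [0, 1]
    # cake, normalized so that values[i](0, 1) == 1.  Each round, the
    # current piece is trimmed by anyone who thinks it is worth more than
    # 1/n; the last trimmer keeps it, which guarantees every player a
    # piece worth at least 1/n by their own measure (proportionality).
    n = len(values)
    remaining, left, allocation = list(range(n)), 0.0, {}
    while len(remaining) > 1:
        holder = remaining[0]
        x = _mark(values[holder], left, 1.0 / n)
        for p in remaining[1:]:
            if values[p](left, x) > 1.0 / n + 1e-7:
                holder = p
                x = _mark(values[p], left, 1.0 / n)
        allocation[holder] = (left, x)
        remaining.remove(holder)
        left = x
    allocation[remaining[0]] = (left, 1.0)
    return allocation
```

    With uniform valuations every player receives an interval worth exactly 1/n; the DGEF analysis in the abstract measures how many of the n(n-1)/2 player pairs a protocol like this can additionally guarantee to be envy-free.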

    Interior Point Decoding for Linear Vector Channels

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. Linear vector channels include many practically important channels, such as intersymbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, called the relaxed MLD problem. The proposed decoding algorithm is based on a numerical optimization technique, the so-called interior point method with a barrier function. Approximate variants of the gradient descent and Newton methods are used to solve the convex optimization problem. During decoding, the search point always lies in the fundamental polytope defined by the low-density parity-check matrix. Compared with a conventional joint message-passing decoder, the proposed decoding algorithm in many cases achieves better BER performance with lower complexity on partial response channels. Comment: 18 pages, 17 figures. The paper has been submitted to IEEE Transactions on Information Theory
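    The barrier-function interior point idea can be illustrated generically: minimize a convex objective over a polytope {x : Ax <= b} by gradient descent on a sequence of log-barrier centering problems with an increasing barrier parameter. A sketch under simplifying assumptions (a generic quadratic program, not the relaxed MLD objective or the fundamental polytope):

```python
import numpy as np

def barrier_minimize(Q, c, A, b, x0, t0=1.0, mu=10.0, outer=8, inner=100):
    # Minimize 0.5*x'Qx + c'x over {x : Ax <= b}.  Each outer stage
    # minimizes phi_t = objective - (1/t)*sum(log(slack)), then tightens
    # the barrier by increasing t; iterates stay strictly feasible.
    def phi(x, t):
        slack = b - A @ x
        if np.any(slack <= 0):
            return np.inf               # outside the polytope
        return 0.5 * x @ Q @ x + c @ x - np.sum(np.log(slack)) / t

    x, t = x0.astype(float), t0
    for _ in range(outer):
        for _ in range(inner):
            slack = b - A @ x
            grad = Q @ x + c + (A.T @ (1.0 / slack)) / t
            step = 0.1 * grad
            # Halve the step until it is feasible and decreases phi_t.
            while phi(x - step, t) >= phi(x, t) and np.linalg.norm(step) > 1e-14:
                step *= 0.5
            x = x - step
        t *= mu
    return x

# Toy problem: minimize (x - 2)^2 subject to 0 <= x <= 1; optimum x = 1.
Q, c = np.array([[2.0]]), np.array([-4.0])
A, b = np.array([[1.0], [-1.0]]), np.array([1.0, 0.0])
x_hat = barrier_minimize(Q, c, A, b, np.array([0.5]))
```

    The interior point decoder in the abstract follows the same pattern, with the fundamental polytope of the parity-check matrix as the feasible set and an approximate Newton step in place of plain gradient descent.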

    Linear semigroups with coarsely dense orbits

    Let S be a finitely generated abelian semigroup of invertible linear operators on a finite dimensional real or complex vector space V. We show that every coarsely dense orbit of S is actually dense in V. More generally, if an orbit contains a coarsely dense subset of some open cone C in V, then the closure of the orbit contains the closure of C. In the complex case the orbit is then actually dense in V. For the real case we give precise information about the possible cases for the closure of the orbit. Comment: We added comments and remarks at various places. 14 pages
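    For intuition, the orbit of a finitely generated abelian semigroup can be enumerated by exponent vectors, since a word in commuting generators is determined by how many times each generator appears. A toy sketch (illustrative only; the theorem concerns the topological closure of such orbits, which finite enumeration can only hint at):

```python
import itertools
import numpy as np

def orbit(generators, v, max_len):
    # Points g @ v over all words g of length <= max_len in the abelian
    # semigroup generated by `generators`; a word is determined by its
    # exponent vector (e_1, ..., e_k) with sum(e_i) = word length.
    k, d = len(generators), len(v)
    points = []
    for length in range(max_len + 1):
        for exps in itertools.product(range(length + 1), repeat=k):
            if sum(exps) != length:
                continue
            g = np.eye(d)
            for M, e in zip(generators, exps):
                g = g @ np.linalg.matrix_power(M, e)
            points.append(g @ v)
    return np.array(points)

# 1-D toy: the semigroup generated by multiplication by 2 and by 3,
# acting on v = 1, produces the products 2^a * 3^b.
pts = orbit([np.array([[2.0]]), np.array([[3.0]])], np.array([1.0]), 2)
```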

    Likelihood informed dimension reduction for inverse problems in remote sensing of atmospheric constituent profiles

    We use likelihood-informed subspace (LIS) dimension reduction (T. Cui et al. 2014) for inverting vertical profile information of atmospheric methane from ground based Fourier transform infrared (FTIR) measurements at Sodankylä, Northern Finland. The measurements belong to the worldwide TCCON network for greenhouse gas measurements and, in addition to providing accurate greenhouse gas measurements, they are important for validating satellite observations. LIS allows construction of an efficient Markov chain Monte Carlo sampling algorithm that explores only a reduced dimensional space but still produces a good approximation of the original full dimensional Bayesian posterior distribution. This in effect makes the statistical estimation problem independent of the discretization of the inverse problem. In addition, we compare LIS to a dimension reduction method based on prior covariance matrix truncation used earlier (S. Tukiainen et al. 2016).
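    For a linear-Gaussian toy model the LIS construction can be sketched directly: the informed directions are the leading eigenvectors of the prior-preconditioned data-misfit Hessian. The function name, the linear forward model, and the threshold are illustrative simplifications of the nonlinear FTIR retrieval:

```python
import numpy as np

def lis_basis(G, C, noise_var, tau=0.1):
    # Linear-Gaussian model y = G x + e, e ~ N(0, noise_var * I), with
    # prior x ~ N(0, C).  Eigenvalues of the prior-preconditioned Hessian
    # L' G' G L / noise_var above tau mark the likelihood-informed
    # directions; the complementary subspace is prior-dominated.
    L = np.linalg.cholesky(C)                 # C = L @ L.T
    H = L.T @ G.T @ G @ L / noise_var
    w, V = np.linalg.eigh(H)                  # ascending eigenvalues
    keep = w > tau
    # Map the informed directions back to parameter space.
    return L @ V[:, keep], w[keep]

# Toy: the data observe only the first of three parameters, so the LIS
# is one-dimensional and MCMC would only need to explore that direction.
G, C = np.array([[1.0, 0.0, 0.0]]), np.eye(3)
U, w = lis_basis(G, C, noise_var=0.01)
```

    Sampling then runs MCMC in the low-dimensional informed subspace while treating the remaining directions analytically under the prior, which is what makes the cost independent of the discretization.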

    Nearly optimal solutions for the Chow Parameters Problem and low-weight approximation of halfspaces

    The \emph{Chow parameters} of a Boolean function f: \{-1,1\}^n \to \{-1,1\} are its n+1 degree-0 and degree-1 Fourier coefficients. It has been known since 1961 (Chow, Tannenbaum) that the (exact values of the) Chow parameters of any linear threshold function f uniquely specify f within the space of all Boolean functions, but until recently (O'Donnell and Servedio) nothing was known about efficient algorithms for \emph{reconstructing} f (exactly or approximately) from exact or approximate values of its Chow parameters. We refer to this reconstruction problem as the \emph{Chow Parameters Problem}. Our main result is a new algorithm for the Chow Parameters Problem which, given (sufficiently accurate approximations to) the Chow parameters of any linear threshold function f, runs in time \tilde{O}(n^2) \cdot (1/\epsilon)^{O(\log^2(1/\epsilon))} and with high probability outputs a representation of an LTF f' that is \epsilon-close to f. The only previous algorithm (O'Donnell and Servedio) had running time \mathrm{poly}(n) \cdot 2^{2^{\tilde{O}(1/\epsilon^2)}}. As a byproduct of our approach, we show that for any linear threshold function f over \{-1,1\}^n, there is a linear threshold function f' which is \epsilon-close to f and has all weights that are integers of magnitude at most \sqrt{n} \cdot (1/\epsilon)^{O(\log^2(1/\epsilon))}. This significantly improves the best previous result of Diakonikolas and Servedio, which gave a \mathrm{poly}(n) \cdot 2^{\tilde{O}(1/\epsilon^{2/3})} weight bound, and is close to the known lower bound of \max\{\sqrt{n}, (1/\epsilon)^{\Omega(\log\log(1/\epsilon))}\} (Goldberg; Servedio). Our techniques also yield improved algorithms for related problems in learning theory.
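    The Chow parameters themselves are straightforward to compute by enumeration for small n; a brute-force sketch (illustrative only, not the reconstruction algorithm of the paper):

```python
import itertools
import numpy as np

def chow_parameters(f, n):
    # The n + 1 Chow parameters of f: {-1,1}^n -> {-1,1} are the degree-0
    # and degree-1 Fourier coefficients: E[f(x)] and E[f(x) * x_i].
    chow = np.zeros(n + 1)
    for x in itertools.product([-1, 1], repeat=n):
        fx = f(x)
        chow[0] += fx
        for i in range(n):
            chow[i + 1] += fx * x[i]
    return chow / 2 ** n

# Majority on 3 bits is an LTF (weights 1,1,1, threshold 0); by symmetry
# its Chow parameters are (0, 1/2, 1/2, 1/2).
maj3 = lambda x: 1 if sum(x) > 0 else -1
params = chow_parameters(maj3, 3)
```

    The Chow Parameters Problem runs this map in reverse: recover (an approximation of) the LTF from these n + 1 numbers alone, without seeing the other 2^n - n - 1 Fourier coefficients.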

    Earth resources evaluation for New Mexico by LANDSAT-2

    The author has identified the following significant results. The Middle Rio Grande project has not yet progressed to the point where mineral exploration sites can be chosen; however, there does appear to be some correlation between the known structures and mineral deposits and the LANDSAT lineament map. A circular feature identified in the southern Magdalena Mountains on LANDSAT-1 imagery agrees well with the location of a newly proposed cauldron complex. Several recognized and previously unrecognized circular features were identified on imagery of the Mogollon-Datil volcanic field. A check of aeromagnetic maps for New Mexico found that the circular features on the LANDSAT imagery showed up as areas of generally high magnetic intensity.