
    GEODYN system description, volume 1

    A computer program for the estimation of orbit and geodetic parameters is presented. The areas in which the program is operational are defined. The specific uses of the program are given as: (1) determination of definitive orbits, (2) tracking instrument calibration, (3) satellite operational predictions, and (4) geodetic parameter estimation. The relationship between the various elements in the solution of the orbit and geodetic parameter estimation problem is analyzed. The solution corresponds to the orbit generation mode in the first case and to the data reduction mode in the second.

    Modelling polarized light for computer graphics

    The quality of visual realism in computer-generated images is largely determined by the accuracy of the reflection model. Advances in global illumination techniques have removed, to a large extent, some of the limitations on the physical correctness achievable by reflection models. While models currently used by researchers are physically based, most approaches have ignored the polarization of light. The few previous efforts addressing the polarization of light were hampered by inherently unphysical light transport algorithms. This paper, besides taking the polarization of light into account in the reflection computation, also provides a basis for modelling polarization as an inherent attribute of light, using the Stokes parameters. A reflection model is developed within this framework, and its implementation within a global illumination algorithm called Photon is presented.
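    The Stokes formalism the abstract builds on is compact enough to sketch: a beam is carried as a four-component Stokes vector (I, Q, U, V), and each optical interaction acts on it as a 4x4 Mueller matrix. The snippet below is a minimal, self-contained illustration; the polarizer matrix is the standard textbook one, not this paper's reflection model.

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta (radians).
    Standard textbook matrix, used only to illustrate the formalism; it is
    not the reflection model developed in the paper."""
    c, s = np.cos(2.0 * theta), np.sin(2.0 * theta)
    return 0.5 * np.array([[1.0,   c,     s,   0.0],
                           [c,   c * c, c * s, 0.0],
                           [s,   c * s, s * s, 0.0],
                           [0.0,  0.0,   0.0,  0.0]])

# Unpolarized light: Stokes vector (I, Q, U, V) with intensity I = 1.
beam = np.array([1.0, 0.0, 0.0, 0.0])
out = linear_polarizer(0.0) @ beam
print(out)                                  # [0.5 0.5 0.  0. ]
print(np.linalg.norm(out[1:]) / out[0])     # degree of polarization: 1.0
```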

    The derivation of a general perturbation solution and its application to orbit determination

    Analytical solutions to spacecraft equations of motion and their application to orbit determination.

    Helical Models of the Bidirectional Vortex in a Conical Geometry

    This dissertation presents the descriptive and analytical breakdown of two new fluid dynamics solutions for vortex motion. Both solutions model the bidirectional vortex within a conical geometry. The first solution satisfies a simple Beltramian characteristic, in which the Lamb vector is identically zero. The second solution is of the generalized Beltramian type, for which the curl of the Lamb vector vanishes. The two Beltramian solutions describe the axisymmetric, double helical motion often found in industrial cyclone separators. Other applications include cone-shaped, vortex-driven combustion chambers and the swirling flow through conical devices. Both solutions are derived from first principles and Euler’s equations of motion, which showcase the stream function-vorticity relation and ultimately transform into the Bragg-Hawthorne formulation. The Bragg-Hawthorne equation allows for various implementations of the Bernoulli and swirl functions. The angular momentum equation provides the source term for the Beltramian solution; the Bernoulli relation, on the other hand, drives the generalized Beltramian model. Appropriate boundary conditions and assumptions reduce the governing partial differential equation to an ordinary differential equation, which is then solved by a separation of variables approach. The resulting velocity, vorticity, and pressure fields are discussed and graphed. The tangential and axial velocities are compared to two experimental and numerical cyclone separator cases. Other features of the conical flow field, such as the conical swirl number and dual mantle locations, are also explored. The inviscid, incompressible, and rotational models ultimately lay the framework for complementary solutions derived from the Bragg-Hawthorne equation or similar formulations.
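    For context, the Bragg-Hawthorne formulation and the two Beltramian conditions named above can be written compactly as follows. The notation is the conventional one (Stokes stream function psi, Bernoulli function H(psi), swirl function B(psi) = r u_theta); this is a sketch of the standard equations, not a quotation of the dissertation's derivation.

```latex
% Bragg-Hawthorne equation for steady, axisymmetric, inviscid swirling flow:
\frac{\partial^2 \psi}{\partial r^2}
  - \frac{1}{r}\frac{\partial \psi}{\partial r}
  + \frac{\partial^2 \psi}{\partial z^2}
  = r^2 \frac{\mathrm{d}H}{\mathrm{d}\psi} - B\,\frac{\mathrm{d}B}{\mathrm{d}\psi}

% Beltramian flow: vorticity parallel to velocity, so the Lamb vector vanishes:
\boldsymbol{\omega} = \lambda \mathbf{u}
  \quad\Longrightarrow\quad
  \mathbf{L} \equiv \boldsymbol{\omega} \times \mathbf{u} = \mathbf{0}

% Generalized Beltramian flow: only the curl of the Lamb vector vanishes:
\nabla \times \left( \boldsymbol{\omega} \times \mathbf{u} \right) = \mathbf{0}
```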

    An Evaluation of multispectral earth-observing multi-aperture telescope designs for target detection and characterization

    Earth-observing satellites have fundamental size and weight design limits since they must be launched into space. These limits serve to constrain the spatial resolutions that such imaging systems can achieve with traditional telescope design strategies. Segmented and sparse-aperture imaging system designs may offer solutions to this problem. Segmented and sparse-aperture designs can be viewed as competing technologies; both approaches offer solutions for achieving finer resolution imaging from space. Segmented-aperture systems offer greater fill factor, and therefore greater signal-to-noise ratio (SNR), for a given encircled diameter than their sparse-aperture counterparts, though their larger segments often suffer from greater optical aberration than those of smaller, sparse designs. Regardless, the use of any multi-aperture imaging system comes at a price: the increased effective aperture size and improved spatial resolution are offset by a reduction in image quality due to signal loss (less photon-collecting area) and aberrations introduced by misalignments between individual sub-apertures, as compared with monolithic collectors. Introducing multispectral considerations to a multi-aperture imaging system further starves the system of photons and reduces SNR in each spectral band. This work explores multispectral design considerations inherent in 9-element tri-arm sparse aperture, hexagonal-element segmented aperture, and monolithic aperture imaging systems. The primary thrust of this work is to develop an objective target detection-based metric that can be used to compare the achieved image utility of these competing multi-aperture telescope designs over a designated design parameter trade space. Characterizing complex multi-aperture system designs in this way may lead to improved assessment of programmatic risk and reward in the development of higher-resolution imaging capabilities. This method assumes that the stringent requirements for limiting the wavefront error (WFE) associated with multi-aperture imaging systems when producing imagery for visual assessment can be relaxed when employing target detection-based metrics for evaluating system utility. Simple target detection algorithms were used to determine Receiver Operating Characteristic (ROC) curves for the various simulated multi-aperture system designs, which could then be used in an objective assessment of each system's ability to support target detection activities. Also, a set of regressed equations was developed that allows one to predict multi-aperture system target detection performance within the bounds of the designated trade space. Suitable metrics for comparing the shapes of two individual ROC curves, such as the total area under the curve (AUC) and the sample Pearson correlation coefficient, were found to be useful tools in validating the predicted results of the trade space regression models. Lastly, some simple rules of thumb relating to multi-aperture system design were identified from the inspection of various points of equivalency between competing system designs, as determined from the comparison metrics employed. The goal of this work, the development of a process for simulating multi-aperture imaging systems and comparing them in terms of target detection tasks, was successfully accomplished. The process presented here could be tailored to the needs of any specific multi-aperture development effort and used as a tool for system design engineers.
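    The ROC and AUC machinery used for these comparisons is simple to sketch. In the snippet below the detector scores are synthetic stand-ins (two Gaussian score populations), not outputs of the simulated telescope designs; the AUC and the Pearson correlation between two systems' curves are computed in the manner the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic detector scores: background-only vs. target-present samples.
bg  = rng.normal(0.0, 1.0, 5000)
tgt = rng.normal(1.5, 1.0, 500)

def roc(bg, tgt, n=256):
    """Empirical ROC curve obtained by sweeping a detection threshold."""
    th = np.linspace(max(bg.max(), tgt.max()), min(bg.min(), tgt.min()), n)
    fpr = np.array([(bg >= t).mean() for t in th])   # false-positive rate
    tpr = np.array([(tgt >= t).mean() for t in th])  # true-positive rate
    return fpr, tpr

fpr1, tpr1 = roc(bg, tgt)
auc = np.sum(np.diff(fpr1) * (tpr1[1:] + tpr1[:-1]) / 2.0)  # trapezoid rule
print(f"AUC = {auc:.3f}")

# Pearson correlation between two systems' TPR profiles on a common FPR
# grid, one of the curve-shape comparison metrics the abstract mentions.
fpr2, tpr2 = roc(bg, rng.normal(1.2, 1.0, 500))      # a second, weaker system
grid = np.linspace(0.0, 1.0, 101)
r = np.corrcoef(np.interp(grid, fpr1, tpr1),
                np.interp(grid, fpr2, tpr2))[0, 1]
print(f"Pearson r = {r:.3f}")
```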

    Open Systems Dynamics for Propagating Quantum Fields

    In this dissertation, I explore interactions between matter and propagating light. The electromagnetic field is modeled as a reservoir of quantum harmonic oscillators successively streaming past a quantum system. Each weak and fleeting interaction entangles the light and the system, and the light continues its course. Within the framework of open quantum systems, the light is eventually traced out, leaving the reduced quantum state of the system as the primary mathematical subject. Two major results are presented. The first is a master equation approach for a quantum system interacting with a traveling wave packet prepared with a definite number of photons. In contrast to quasi-classical states, such as coherent or thermal fields, these N-photon states possess temporal mode entanglement, and local interactions in time have nonlocal consequences. The second is a model for a three-dimensional light-matter interface for an atomic ensemble interacting with a paraxial laser beam and its application to the generation of QND spin squeezing. Both coherent and incoherent dynamics due to spatially inhomogeneous atom-light coupling across the ensemble are accounted for. Measurement of paraxially scattered light can generate squeezing of an atomic spin wave, while diffusely scattered photons lead to spatially local decoherence.
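    As a concrete sketch of the setting of the first result: in the published literature on systems driven by N-photon wave packets, the physical reduced state is recovered from a hierarchy of coupled Fock-state master equations. The form below follows that literature, with system Hamiltonian H, coupling operator L, and wave-packet envelope xi(t); it is offered as an assumed standard form rather than a quotation from the thesis.

```latex
% Fock-state master-equation hierarchy (sketch following the literature on
% N-photon wave packets; \rho_{N,N} is the physical reduced state):
\dot{\rho}_{m,n} = -\,i\,[H, \rho_{m,n}]
  + \mathcal{D}[L]\,\rho_{m,n}
  + \sqrt{m}\;\xi(t)\,[\rho_{m-1,n},\, L^{\dagger}]
  + \sqrt{n}\;\xi^{*}(t)\,[L,\, \rho_{m,n-1}],

% with the usual Lindblad dissipator
\mathcal{D}[L]\,\rho = L \rho L^{\dagger}
  - \tfrac{1}{2}\left\{ L^{\dagger} L,\, \rho \right\}.
```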

    On incorporating inductive biases into deep neural networks

    A machine learning (ML) algorithm can be interpreted as a system that learns to capture patterns in data distributions. Before the modern deep learning era, structured representations and strong inductive biases, often motivated by the human brain, were prevalent in building ML models, partly due to expensive computational resources and the limited availability of data. By contrast, armed with increasingly cheap hardware and abundant data, deep learning has made unprecedented progress during the past decade, showcasing incredible performance on a diverse set of ML tasks. In contrast to classical ML models, deep models seek to minimize structured representations and inductive bias during learning, implicitly favoring the flexibility of learning over manual intervention. Despite the impressive performance, attention is being drawn towards enhancing the (relatively) weaker areas of deep models, such as learning with limited resources, robustness, minimal overhead to realize simple relationships, and the ability to generalize learned representations beyond the training conditions, which were (arguably) the forte of classical ML. Consequently, a hybrid trend is surfacing that aims to blend structured representations and substantial inductive bias into deep models, with the hope of improving them. Based on the above motivation, this thesis investigates methods to improve the performance of deep models using inductive bias and structured representations across multiple problem domains. To this end, we inject a priori knowledge into deep models in the form of enhanced feature extraction techniques, geometrical priors, engineered features, and optimization constraints. In particular, we show that by leveraging prior knowledge about the task at hand and the structure of the data, the performance of deep learning models can be significantly elevated. We begin by exploring equivariant representation learning. In general, real-world observations are subject to fundamental transformations (e.g., translation, rotation), and deep models typically demand expensive data augmentations and a large number of filters to tackle such variance. In comparison, carefully designed equivariant filters possess this ability by nature. Hence, we propose a novel volumetric convolution operation that can convolve arbitrary functions in the unit ball (B^3) while preserving rotational equivariance by projecting the input data onto the Zernike basis. We conduct extensive experiments and show that our formulations can be used to construct significantly cheaper ML models. Next, we study generative modeling of 3D objects and propose a principled approach to synthesize 3D point clouds in the spectral domain by obtaining a structured representation of 3D points as functions on the unit sphere (S^2). Using prior knowledge about the spectral moments and the output data manifold, we design an architecture that can maximally utilize the information in the inputs and generate high-resolution point clouds with minimal computational overhead. Finally, we propose a framework to build normalizing flows (NF) based on increasing triangular maps and Bernstein-type polynomials. Compared to existing NF approaches, our framework offers favorable characteristics for fusing inductive bias into the model: theoretical upper bounds on the approximation error, robustness, higher interpretability, suitability for compactly supported densities, and the ability to employ higher-degree polynomials without training instability. Most importantly, we present a constructive universality proof, which permits us to analytically derive the optimal model coefficients for known transformations without training.
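    The normalizing-flow contribution rests on an elementary fact worth making concrete: a one-dimensional Bernstein polynomial with non-decreasing coefficients is itself non-decreasing, hence invertible, which is exactly the property each component of an increasing triangular map needs. The sketch below illustrates only that building block; bernstein_map and its parameterization are illustrative assumptions, not the thesis' actual construction.

```python
import numpy as np
from scipy.stats import binom  # the Bernstein basis is a binomial pmf

def bernstein_map(theta, x):
    """Monotone 1-D map B(x) = sum_k theta_k * b_{k,n}(x) on [0, 1].
    If the coefficients theta are non-decreasing, B is non-decreasing and
    hence invertible, the property an increasing triangular map needs."""
    n = len(theta) - 1
    k = np.arange(n + 1)
    basis = binom.pmf(k[:, None], n, x[None, :])   # b_{k,n}(x), (n+1, m)
    return theta @ basis

theta = np.cumsum(np.abs([0.1, 0.5, 0.2, 0.9, 0.3]))  # non-decreasing
x = np.linspace(0.0, 1.0, 7)
y = bernstein_map(theta, x)
assert np.all(np.diff(y) >= 0.0)                   # monotone, as required
print(y)
```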

    Quantitative precipitation estimates from dual-polarization weather radar in the Lazio region

    Many phenomena (such as attenuation and range degradation) can influence the accuracy of radar rainfall estimates. They introduce errors that grow with distance from the radar, thereby decreasing the reliability of radar estimates for applications that require quantitative precipitation estimation. The aim of the present work is to develop a range-dependent error model, called the adjustment factor, that can be used as a range error pattern to correct the mean error affecting long-term quantitative precipitation estimates. A range-dependent gauge adjustment technique was applied in combination with other radar data processing in order to correct the range-dependent error affecting radar measurements. Issues such as beam blocking, path attenuation, errors related to the vertical structure of precipitation, the bright band, and an incorrect Z-R relationship are implicitly treated by this type of method. To develop the adjustment factor, the radar error was determined with respect to rain gauge measurements through a comparison between the two devices, under the assumption that the gauge rainfall represents the true rainfall. The G/R ratio between the yearly rainfall amount measured at each rain gauge position during 2008 and the corresponding radar rainfall amount was therefore computed as a function of distance from the radar. The trend of the G/R ratio shows two behaviors: a concave part close to the radar location, due to the melting layer effect, and an almost linearly increasing trend at greater distances. A linear best fit was then used to find an adjustment factor that estimates the radar error at a given range. The effectiveness of the methodology was verified by comparing pairs of rainfall time series observed simultaneously by collocated rain gauges and radar. Furthermore, the variability of the adjustment factor was investigated at the event scale, for both convective and stratiform events. The main result is that there is no unique range error pattern: it is also a function of the event characteristics. On the other hand, the adjustment factor tends to stabilize over long periods of observation, as in the case of a whole year of measurements.
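    The gauge-adjustment step lends itself to a minimal sketch. The accumulations below are invented numbers, and only the linear fit of G/R against range is illustrated (the concave melting-layer behavior near the radar is ignored).

```python
import numpy as np

# Invented yearly accumulations (mm) at six gauge sites and the collocated
# radar estimates, with the gauge taken as truth, as in the abstract.
range_km = np.array([ 20,  40,  60,  80, 100, 120])
gauge_mm = np.array([950, 900, 980, 910, 940, 960])
radar_mm = np.array([900, 780, 760, 650, 600, 560])

g_over_r = gauge_mm / radar_mm                 # G/R ratio at each site

# Linear best fit of G/R against range gives the adjustment factor F(d).
slope, intercept = np.polyfit(range_km, g_over_r, deg=1)

def adjustment_factor(d_km):
    """Range-dependent multiplicative correction for radar rainfall."""
    return intercept + slope * d_km

# Correct a radar accumulation of 500 mm observed at 90 km range:
print(f"F(90 km) = {adjustment_factor(90.0):.2f}")
print(f"corrected = {500.0 * adjustment_factor(90.0):.0f} mm")
```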

    A preliminary assessment of small steam Rankine and Brayton point-focusing solar modules

    A preliminary assessment of three conceptual point-focusing distributed solar modules is presented. The basic power conversion units consist of small Brayton or Rankine engines individually coupled to two-axis, tracking, point-focusing solar collectors. An array of such modules can be linked together, via electric transport, to form a small power station. Each module can also be utilized on a stand-alone basis as an individual power source.