333 research outputs found

    Computed Tomography of Chemiluminescence: A 3D Time Resolved Sensor for Turbulent Combustion

    Time-resolved 3D measurements of turbulent flames are required to further the understanding of combustion and to support advanced simulation techniques (LES). Computed Tomography of Chemiluminescence (CTC) allows a flame’s 3D chemiluminescence profile to be obtained by inverting a series of integral measurements. CTC provides the instantaneous 3D flame structure and can also measure excited species concentrations, equivalence ratio, heat release rate, and possibly strain rate. High resolutions require simultaneous measurements from many viewpoints, and the cost of multiple sensors has traditionally limited spatial resolutions. However, recent improvements in commodity cameras make a high-resolution CTC sensor possible, and this is investigated in this work. Using realistic LES phantoms (known fields), the CT algorithm (ART) is shown to produce low-error reconstructions even from limited, noisy datasets. Error from self-absorption is also tested using LES phantoms, and a modification to ART that successfully corrects this error is presented. A proof-of-concept experiment using 48 non-simultaneous views is performed and successfully resolves a Matrix Burner flame to 0.01 of the domain width (D). ART is also extended to 3D (without stacking) to allow 3D camera locations and optical effects to be considered. An optical integral geometry (weighted double-cone) is presented that corrects for limited depth of field and, even with poorly estimated camera parameters, reconstructs the Matrix Burner as well as the standard geometry does. CTC is implemented using five PicSight P32M cameras and mirrors to provide 10 simultaneous views. Measurements of the Matrix Burner and a turbulent opposed jet achieve exposure times as low as 62 μs, with even shorter exposures possible. With only 10 views the spatial resolution of the reconstructions is low; a cosine phantom study shows that 20–40 viewing angles are necessary to achieve high resolutions (0.01–0.04D). With 40 P32M cameras costing £40,000, future CTC implementations can achieve high spatial and temporal resolutions.
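
    The tomographic inversion described above can be illustrated with a minimal sketch of the algebraic reconstruction technique (ART), assuming a precomputed ray/voxel weight matrix; the function name, relaxation factor and non-negativity step are illustrative choices, not the thesis implementation.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=20, relax=0.1):
    """Kaczmarz-style ART. A is the (n_rays x n_voxels) path-length weight
    matrix, b the measured line-of-sight chemiluminescence integrals;
    returns a non-negative estimate of the (flattened) emission field."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x            # mismatch for ray i
            x += relax * residual / row_norms[i] * A[i]
            np.maximum(x, 0.0, out=x)             # emission cannot be negative
    return x
```

    The self-absorption correction and the weighted double-cone geometry mentioned above would enter through the construction of A, which is where most of the implementation effort lies.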

    Local and Global Illumination in the Volume Rendering Integral


    Numerical computation of complex multi-body Navier-Stokes flows with applications for the integrated Space Shuttle launch vehicle

    An enhanced grid system for the Space Shuttle Orbiter was built by integrating CAD definitions from several sources and then generating the surface and volume grids. The new grid system contains geometric components not previously modeled, plus significant enhancements to geometry that had been modeled in the old grid system. The new orbiter grids were then integrated with new grids for the rest of the launch vehicle. Enhancements were made to the hyperbolic grid generator HYPGEN, and new tools were developed for grid projection, manipulation, and modification; for Cartesian box grid and far-field grid generation; and for post-processing of flow solver data.
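
    As a rough illustration of one of the simpler tools mentioned above, the sketch below builds a Cartesian box grid over a bounding box and pads it with geometrically stretched far-field layers; the function name, stretching ratio and layer count are assumptions for illustration, not the HYPGEN-based tool chain.

```python
import numpy as np

def cartesian_box_grid(bounds, n_cells, stretch=1.2, n_far=10):
    """Uniform Cartesian box over bounds = ((xmin, xmax), (ymin, ymax), (zmin, zmax)),
    extended on every side by geometrically stretched far-field layers."""
    axes = []
    for (lo, hi), n in zip(bounds, n_cells):
        core = np.linspace(lo, hi, n + 1)                      # uniform core nodes
        d = (hi - lo) / n
        pads = np.cumsum(d * stretch ** np.arange(1, n_far + 1))
        axes.append(np.concatenate([lo - pads[::-1], core, hi + pads]))
    return np.meshgrid(*axes, indexing="ij")

# Example: 20x20x20 core cells around a unit box, plus far-field padding
X, Y, Z = cartesian_box_grid(((0, 1), (0, 1), (0, 1)), (20, 20, 20))
```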

    Astronomy with integral field spectroscopy: observation, data analysis and results

    With a new generation of facility instruments being commissioned for 8-metre telescopes, integral field spectroscopy will soon be a standard tool in astronomy, opening a range of exciting new research opportunities. It is clear, however, that reducing and analyzing integral field data is a complex problem, which will need considerable attention before the full potential of the hardware can be realized. The purpose of this thesis is therefore to explore some of the scientific capabilities of integral field spectroscopy, developing the techniques needed to produce astrophysical results from the data. Two chapters are dedicated to the problem of analyzing observations from the densely-packed optical fibre instruments pioneered at Durham. It is shown that, in the limit where each spectrum is sampled by only one detector row, data may be treated in a similar way to those from an image slicer. The properties of raw fibre data are considered in the context of the Sampling Theorem, and methods for three-dimensional image reconstruction are discussed. These ideas are implemented in an IRAF data reduction package for the Thousand Element Integral Field Unit (TEIFU), with source code provided on the accompanying compact disc. Two observational studies are also presented. In the first case, the 3D infrared image slicer has been used to test for the presence of a super-massive black hole in the giant early-type galaxy NGC 1316. Measurements of the stellar kinematics do not reveal a black hole of mass 5 × 10^9 M⊙, as predicted from the bulge luminosity using the relationship of Kormendy & Richstone (1995). The second study is an investigation into the origin of [FeII] line emission in the Seyfert galaxy NGC 4151, using Durham University's SMIRFS-IFU. By mapping [FeII] line strength and velocity at the galaxy centre, it is shown that the emission is associated with the optical narrow-line region, rather than the radio jet, indicating that the excitation is primarily due to photoionizing X-rays. Finally, a report is given on the performance of TEIFU, which was commissioned at the William Herschel Telescope in 1999. Measurements of throughput and fibre response variation are given, and a reconstructed test observation of the radio galaxy 3C 327 is shown, demonstrating the functionality of the instrument and software.
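
    A minimal sketch of the image-reconstruction step discussed above: resampling fluxes measured at discrete fibre positions onto a regular spatial grid for a single wavelength slice. The linear interpolation, pixel scale and function name are illustrative assumptions, not the TEIFU/IRAF pipeline developed in the thesis.

```python
import numpy as np
from scipy.interpolate import griddata

def reconstruct_slice(fibre_xy, fibre_flux, pixel_scale=0.1):
    """Resample per-fibre fluxes (one wavelength slice) onto a regular grid
    of square pixels and return the reconstructed image."""
    x, y = fibre_xy[:, 0], fibre_xy[:, 1]
    xi = np.arange(x.min(), x.max() + pixel_scale, pixel_scale)
    yi = np.arange(y.min(), y.max() + pixel_scale, pixel_scale)
    X, Y = np.meshgrid(xi, yi)
    return griddata((x, y), fibre_flux, (X, Y), method="linear")
```

    Repeating this over every wavelength channel stacks the slices into an (x, y, λ) datacube; how finely the grid samples the fibre pitch is where the Sampling Theorem considerations mentioned above come in.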

    Classifying Periodic Astrophysical Phenomena from non-survey optimized variable-cadence observational data

    Modern time-domain astronomy is capable of collecting a staggeringly large amount of data on millions of objects in real time. Therefore, the production of methods and systems for the automated classification of time-domain astronomical objects is of great importance. The Liverpool Telescope has a number of wide-field image-gathering instruments mounted upon its structure, the Small Telescopes Installed at the Liverpool Telescope. These instruments have been in operation since March 2009, gathering data on large areas of sky around the current field of view of the main telescope and generating a large dataset containing millions of light sources. The instruments are inexpensive to run, as they do not require a separate telescope to operate, but this style of surveying the sky introduces structured artifacts into our data due to the variable cadence at which sky fields are resampled. These artifacts can make light sources appear variable and must be addressed in any processing method. The data from large sky surveys can lead to the discovery of interesting new variable objects. Efficient software and analysis tools are required to rapidly determine which potentially variable objects are worthy of further telescope time. Machine learning offers a solution to the quick detection of variability by characterising the detected signals relative to previously seen exemplars. In this paper, we introduce a processing system designed for use with the Liverpool Telescope that identifies potentially interesting objects through the application of a novel representation learning approach to data collected automatically from the wide-field instruments. Our method automatically produces a set of classification features by applying Principal Component Analysis to a set of variable light curves, using a piecewise polynomial fitted via a genetic algorithm applied to the epoch-folded data. The epoch-folding requires the selection of a candidate period for each variable light curve, identified using a genetic-algorithm period estimation method specifically developed for this dataset. A Random Forest classifier is then used to classify the learned features to determine whether a light curve is generated by an object of interest. This system allows the telescope to automatically identify new targets through passive observations that do not affect day-to-day operations, as the unique artifacts resulting from such a survey method are incorporated into the methods. We demonstrate the power of this feature extraction method compared to the feature engineering performed in previous studies by training classification models on 859 light curves of 12 known variable star classes from our dataset. We show that our new features produce a model with a superior mean cross-validation F1 score of 0.4729 (standard deviation 0.0931), compared with 0.3902 (standard deviation 0.0619) for the engineered features. We show that the features extracted from the representation learning are given relatively high importance in the final classification model. Additionally, we compare engineered features computed on the interpolated polynomial fits and show that they produce more reliable distributions than those fitted to the raw light curve when the period estimation is correct.
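
    A compressed sketch of the pipeline described above, with two stated simplifications: an ordinary least-squares polynomial stands in for the genetic-algorithm-fitted piecewise polynomial, and the candidate period is assumed to be supplied rather than estimated. Function and parameter names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def folded_fit(times, mags, period, degree=8, n_phase=100):
    """Epoch-fold a light curve at a candidate period, fit a polynomial to
    the folded curve, and evaluate it on a fixed phase grid."""
    phase = (times % period) / period
    order = np.argsort(phase)
    coeffs = np.polyfit(phase[order], mags[order], degree)
    return np.polyval(coeffs, np.linspace(0.0, 1.0, n_phase))

def train_classifier(folded_curves, labels, n_components=10):
    """Learn features from the fitted curves with PCA, then train a
    Random Forest classifier on those features."""
    pca = PCA(n_components=n_components)
    features = pca.fit_transform(np.vstack(folded_curves))
    clf = RandomForestClassifier(n_estimators=200).fit(features, labels)
    return pca, clf
```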

    Identifying Ditch Geometry and Top of the Bank Location Using Airborne LiDAR Point Cloud

    The geometry of agricultural drainage ditches is very important in crop production, as it impacts the drainage of cropland and affects vegetation and soil erosion along the banks of the ditches. Thus, implementation of water conservation and management practices in engineered and natural ditches necessitates determination of ditch geometry along the reach of the ditch. This study explores the use of airborne commercial Light Detection and Ranging (LiDAR) technology to identify the top of the ditch banks. A method was developed to obtain the normalized cross-sectional shape of the ditch, using one-dimensional spline fits to ground-classified points extracted from the LiDAR point cloud in each cross-sectional area, and to determine the tops of the corresponding banks. The method was applied iteratively along the length of the ditch. RTK GPS validation data were collected from cross sections of seven ditches in Howard, Clinton, and Boone Counties, Indiana. The Indiana Statewide LiDAR data products and NASA Goddard's LiDAR, Hyperspectral and Thermal (G-LiHT) airborne imager data were used in the study. The impacts of vegetation along the ditch and of LiDAR point density on the top-of-bank results, as well as the improvement from using the LiDAR point cloud data instead of the Digital Elevation Model (DEM), were also explored.
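
    The per-cross-section step can be sketched as follows: fit a one-dimensional smoothing spline to ground-classified elevations along a transect and pick a bank-top candidate on each side of the ditch bottom. The curvature-based pick, smoothing factor and function name are illustrative assumptions rather than the study's actual criterion.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def bank_tops(dist, elev, smooth=0.5):
    """Fit a 1-D smoothing spline to (distance, elevation) samples from a ditch
    cross section and return a bank-top estimate on each side, taken here as
    the sharpest concave-down break in slope."""
    order = np.argsort(dist)
    spline = UnivariateSpline(dist[order], elev[order], s=smooth)
    x = np.linspace(dist.min(), dist.max(), 500)
    curvature = spline.derivative(2)(x)
    bottom = x[np.argmin(spline(x))]                   # lowest point of the ditch
    left, right = x < bottom, x > bottom
    return (x[left][np.argmin(curvature[left])],
            x[right][np.argmin(curvature[right])])
```

    Applying this iteratively along the reach of the ditch, as described above, yields the top-of-bank location as a function of distance along the channel.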

    Detecting discontinuities using nonparametric smoothing techniques in correlated data

    There is increasing interest in the detection and estimation of discontinuities in regression problems with one and two covariates, due to their wide variety of applications. Moreover, in many real-life applications we are likely to encounter a certain degree of dependence in observations that are collected over time or space. Detecting changes in dependent data in the presence of a smoothly varying trend is a much more complicated problem that has not previously been adequately studied. Hence, the aim of this thesis is to respond to the immense need for a nonparametric discontinuity test which is capable of incorporating robust estimation of the underlying dependence structure (if unknown) into the test procedure in one and two dimensions. By means of a difference-based method, using a local linear kernel smoothing technique, a global test of the hypothesis that an abrupt change is present in the smoothly varying mean level of a sequence of correlated data is developed in the one-dimensional setting. Accurate distributional calculations for the test statistic can be performed using standard results on quadratic forms. Extensive simulations are carried out to examine the performance of the test in the cases of both known and unknown correlation. For the latter, the effectiveness of the different algorithms that have been devised to incorporate the estimation of correlation, for both equally and unequally spaced designs, is investigated. Various factors that affect the size and power of the test are also explored. In addition, a small simulation study is performed to compare the proposed test with an isotonic regression test proposed by Wu et al. (2001). The utility of the techniques is demonstrated by applying the proposed discontinuity test to three sets of real-life data, namely the Argentina rainfall data, the global warming data and the River Clyde data. The results are compared with those obtained using the isotonic regression test of Wu et al. (2001) and the Bayesian test of Thomas (2001). Finally, the test is extended to detect discontinuities in spatially correlated data. The same differencing principle as in the one-dimensional case is utilised here. However, the discontinuity in this context does not occur only at a point but over a smooth curve, so the test has to take into account the additional element of direction. A two-stage algorithm, which makes use of a partitioning process to remove observations that are near the discontinuity curve, is proposed. A motivating application for the approach is the analysis of radiometric data on cesium fallout in a particular area of Finland after the Chernobyl nuclear reactor accident. The procedures outlined for both the one- and two-dimensional settings are particularly useful and relatively easy to implement. Although the main focus of the work is not to identify the exact locations of the discontinuities, useful graphical tools have been employed to infer their likely locations. The dissertation closes with a summary and discussion of the results presented, and proposes potential future work in this area.
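
    The one-dimensional difference-based idea can be sketched as below: compare local linear fits built from data strictly to the left and strictly to the right of each candidate point, and flag large gaps. The bandwidth, kernel and function names are illustrative, and the sketch deliberately ignores the correlation estimation and the quadratic-form distribution theory that the thesis develops for the formal test; x is assumed sorted.

```python
import numpy as np

def left_right_gaps(x, y, h=0.1):
    """At each interior design point, estimate the regression function from the
    left and from the right with local linear (Gaussian kernel) fits and return
    the gap between the two estimates; large gaps suggest a discontinuity."""
    def fit_at(x0, xs, ys):
        sw = np.exp(-0.25 * ((xs - x0) / h) ** 2)     # sqrt of Gaussian weights
        X = np.column_stack([np.ones_like(xs), xs - x0])
        beta, *_ = np.linalg.lstsq(X * sw[:, None], ys * sw, rcond=None)
        return beta[0]                                 # fitted value at x0
    idx = np.arange(2, len(x) - 2)
    gaps = [fit_at(x[i], x[i + 1:], y[i + 1:]) - fit_at(x[i], x[:i], y[:i])
            for i in idx]
    return x[idx], np.array(gaps)
```

    A global test statistic built from such gaps can then be handled with the standard results on quadratic forms mentioned above, once the correlation structure is either known or estimated.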