1,955 research outputs found

    The influence of composition, annealing treatment, and texture on the fracture toughness of Ti-5Al-2.5Sn plate at cryogenic temperatures

    The plane strain fracture toughness K_Ic and conventional tensile properties of two commercially produced one-inch-thick Ti-5Al-2.5Sn plates were determined at cryogenic temperatures. One plate was extra-low interstitial (ELI) grade, the other normal interstitial. Portions of each plate were mill annealed at 1088 K (1500 F) followed by either air cooling or furnace cooling. The tensile properties, flow curves, and K_Ic of these plates were determined at 295 K (room temperature), 77 K (liquid nitrogen temperature), and 20 K (liquid hydrogen temperature).

    Multi-Lane Perception Using Feature Fusion Based on GraphSLAM

    An extensive, precise and robust recognition and modeling of the environment is a key factor for the next generation of Advanced Driver Assistance Systems and for the development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM, and the result constitutes the basis for a multilane clothoid model. To allow the incorporation of additional information sources, input data is processed in a generic format. The method is evaluated by comparing real data, collected with an experimental vehicle on highways, to a ground-truth map. The results show that the ego and adjacent lanes are robustly detected with high quality up to a distance of 120 m. In comparison to serial lane detection, an increase in the detection range of the ego lane and a continuous perception of neighboring lanes are achieved. The method can potentially be utilized for the longitudinal and lateral control of self-driving vehicles.
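
    As a rough illustration of the kind of lane representation the paper builds on, the sketch below evaluates a clothoid-style lane centerline under the common small-angle polynomial approximation. The parameter names (lateral offset y0, heading phi, curvature c0, curvature rate c1) and the 3.5 m lane width are assumptions for illustration, not the paper's actual state vector or fusion code.

```python
# Minimal sketch of a clothoid lane-centerline model (hypothetical parameters;
# the paper's exact parameterization and GraphSLAM fusion are not reproduced here).
import numpy as np

def clothoid_lane(y0, phi, c0, c1, x_max=120.0, step=1.0):
    """Lateral lane position y(x) under the small-angle clothoid approximation:
    y(x) ~ y0 + phi*x + 0.5*c0*x^2 + (1/6)*c1*x^3, with
    y0: lateral offset [m], phi: heading [rad], c0: curvature [1/m],
    c1: curvature rate [1/m^2]."""
    x = np.arange(0.0, x_max + step, step)
    y = y0 + phi * x + 0.5 * c0 * x**2 + (1.0 / 6.0) * c1 * x**3
    return x, y

# Example: ego lane centerline and the adjacent left lane, assuming a
# constant lane width of 3.5 m.
x, y_ego = clothoid_lane(y0=0.0, phi=0.01, c0=1e-4, c1=1e-6)
_, y_left = clothoid_lane(y0=3.5, phi=0.01, c0=1e-4, c1=1e-6)
```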

    Correlation dispersion as a measure to better estimate uncertainty in remotely sensed glacier displacements

    In recent years a vast amount of glacier surface velocity data has emerged from satellite imagery, based on correlation between repeat images. Much emphasis has been put on fast processing of large data volumes and on products with complete spatial coverage. The metadata of such measurements are often highly simplified, with the measurement precision lumped into a single number for the whole dataset, although the error budget of image matching is in reality neither isotropic nor constant over the velocity field. The spread of the correlation peak of individual image offset measurements depends on the image structure and the non-uniform flow of the ice, and it is used here to extract a proxy for measurement uncertainty. A quantification of the estimation error or dispersion of each individual velocity measurement can be important for the inversion of, for instance, rheology, ice thickness and/or bedrock friction. Errors in the velocity data can propagate into derived results in a complex and exaggerating way, making the outcomes very sensitive to velocity noise and outliers. Here, we present a computationally fast method to estimate the matching precision of individual displacement measurements from repeat imaging data, focusing on satellite data. The approach is based on Gaussian fitting directly on the correlation peak and is formulated as a linear least-squares estimation, making its implementation into current pipelines straightforward. The methodology is demonstrated for Sermeq Kujalleq (Jakobshavn Isbræ), Greenland, a glacier with regions of strong shear flow and clearly oriented crevasses, and for Malaspina Glacier, Alaska. Directionality within an image seems to be the dominant factor influencing the correlation dispersion; in our cases these features are crevasses and moraine bands, while a relation to differential flow, such as shear, is less pronounced in the correlation spread.
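
    A minimal sketch of the core idea, assuming the correlation surface is available as a 2-D array: take the values in a small window around the peak, fit a quadratic to their logarithm by linear least squares (a Gaussian in log space), and read the peak covariance off the quadratic's curvature. The window size and clipping are illustrative choices, not the paper's exact settings.

```python
# Minimal sketch: estimate correlation-peak dispersion by fitting a 2-D Gaussian
# through a linear least-squares fit to the log of the correlation surface
# around its peak.
import numpy as np

def peak_dispersion(corr, half_win=2):
    """Return an approximate 2x2 covariance of the correlation peak."""
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    win = corr[i - half_win:i + half_win + 1, j - half_win:j + half_win + 1]
    win = np.clip(win, 1e-12, None)          # keep the logarithm well defined
    dy, dx = np.mgrid[-half_win:half_win + 1, -half_win:half_win + 1]
    # The log of a Gaussian is a quadratic: a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    A = np.column_stack([dx.ravel()**2, dy.ravel()**2, (dx * dy).ravel(),
                         dx.ravel(), dy.ravel(), np.ones(dx.size)])
    a, b, c, *_ = np.linalg.lstsq(A, np.log(win).ravel(), rcond=None)[0]
    # Hessian of the quadratic; its negative inverse approximates the covariance.
    H = np.array([[2.0 * a, c], [c, 2.0 * b]])
    return -np.linalg.inv(H)
```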

    HIGH-FREQUENCY MOTION RESIDUALS IN MULTIBEAM ECHOSOUNDER DATA: ANALYSIS AND ESTIMATION

    Advances in multibeam sonar mapping and data visualization have increasingly brought to light the subtle integration errors remaining in bathymetric datasets. Traditional field calibration procedures, such as the patch test, account only for static orientation biases and sonar-to-position latency. This, however, ignores the generally subtler integration problems that generate time-varying depth errors. Such dynamic depth errors result from an unknown offset in one or more of orientation, space, sound speed or time between the sonar and ancillary sensors. These errors are systematic, and thus should be predictable from the relationship between the input data and the integrated output. A first attempt at addressing this problem utilized correlations between motion and temporally smoothed, ping-averaged residuals. The known limitations of that approach, however, included only being able to estimate the dominant integration error, imperfectly accounting for an irregularly spaced sounding distribution, and only working in shallow water. This thesis presents a new and improved means of considering the dynamics of the integration error signatures which can address multiple issues simultaneously, better account for along-track sounding distribution, and is not restricted to shallow-water geometry. The motion-driven signatures of six common errors are identified simultaneously. This is achieved by individually considering each sounding's input-error relationship along extended sections of a single swath corridor. Such an approach provides a means of underway system optimization using nothing more than the bathymetry of typical seafloors acquired during transit. Initial results of the new algorithm are presented using data generated from a simulator, with known inputs and integration errors, to test the efficacy of the method. Results indicate that successful estimation requires significant vessel motion over periods of a few tens of seconds as well as smooth, gently rolling bathymetry along the equivalent spatial extent covered by the moving survey platform.
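
    The sketch below illustrates the general principle of relating motion to residuals, assuming per-sounding depth residuals and synchronous motion records are available: regress the residuals against motion-derived predictors and interpret the coefficients as systematic integration errors. The predictors, model and synthetic numbers are illustrative and are not the estimator developed in the thesis.

```python
# Illustrative regression of depth residuals against motion-derived predictors
# (roll, roll rate, heave); hypothetical model, not the thesis' algorithm.
import numpy as np

def estimate_integration_errors(residuals, roll, roll_rate, heave):
    """Least-squares fit residuals ~ a*roll + b*roll_rate + c*heave + d."""
    A = np.column_stack([roll, roll_rate, heave, np.ones_like(roll)])
    coeffs, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return coeffs

# Synthetic example: a roll-proportional error of 0.35 m per radian of roll.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 600)
roll = np.deg2rad(5.0) * np.sin(2.0 * np.pi * t / 12.0)
roll_rate = np.gradient(roll, t)
heave = 0.5 * np.sin(2.0 * np.pi * t / 8.0)
residuals = 0.35 * roll + 0.02 * rng.normal(size=t.size)
print(estimate_integration_errors(residuals, roll, roll_rate, heave))
```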

    Development of Landsat-based Technology for Crop Inventories: Appendices

    There are no author-identified significant results in this report.

    Orbital Effects in Spaceborne Synthetic Aperture Radar Interferometry

    This book reviews and investigates orbit-related effects in synthetic aperture radar interferometry (InSAR). The translation of orbit inaccuracies into error signals in the interferometric phase is concisely described; estimation and correction approaches are discussed and evaluated, with special focus on network adjustment of redundantly estimated baseline errors. Moreover, the effect of relative motion of the orbit reference frame is addressed.
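
    As a hedged sketch of what a network adjustment of redundantly estimated baseline errors can look like: if each interferogram between acquisitions i and j carries an estimated baseline error that should equal the difference of per-acquisition orbit errors, the per-acquisition values can be recovered jointly by least squares over the whole network. The pairs and numbers below are invented for illustration and do not come from the book.

```python
# Minimal network-adjustment sketch: each interferogram (i, j) observes roughly
# e_j - e_i, so per-acquisition orbit errors are adjusted jointly by least squares.
import numpy as np

def adjust_baseline_errors(pairs, observed, n_acq):
    """Least-squares per-acquisition errors from pairwise baseline-error estimates."""
    A = np.zeros((len(pairs) + 1, n_acq))
    b = np.zeros(len(pairs) + 1)
    for k, (i, j) in enumerate(pairs):
        A[k, i], A[k, j] = -1.0, 1.0
        b[k] = observed[k]
    A[-1, 0] = 1.0                      # fix the datum: first acquisition = 0
    e, *_ = np.linalg.lstsq(A, b, rcond=None)
    return e, A[:-1] @ e                # adjusted per-acquisition and per-pair errors

pairs = [(0, 1), (1, 2), (0, 2), (2, 3)]
observed = [0.11, -0.04, 0.08, 0.02]    # redundantly estimated baseline errors
print(adjust_baseline_errors(pairs, observed, n_acq=4))
```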

    Automated calibration of multi-sensor optical shape measurement system

    A multi-sensor optical shape measurement system (SMS) based on the fringe projection method and temporal phase unwrapping has recently been commercialised as a result of its easy implementation, computer control using a spatial light modulator, and fast full-field measurement. The main advantage of a multi-sensor SMS is the ability to make measurements with 360° coverage without mounting the measured component on translation and/or rotation stages. However, for greater acceptance in industry, issues relating to a user-friendly calibration of the multi-sensor SMS in an industrial environment, for presentation of the measured data in a single coordinate system, need to be addressed. The calibration of multi-sensor SMSs typically requires a calibration artefact, which consequently demands significant user input for the processing of calibration data in order to obtain each sensor's optimal imaging geometry parameters. The imaging geometry parameters provide a mapping from the acquired shape data to real-world Cartesian coordinates. However, the process of obtaining optimal sensor imaging geometry parameters (a nonlinear numerical optimization known as bundle adjustment) requires labelling regions within each point cloud as belonging to known features of the calibration artefact. This thesis describes an automated calibration procedure which ensures that calibration data are processed through automated feature detection of the calibration artefact, artefact pose estimation, automated control point selection, and finally bundle adjustment itself. [Continues.]
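
    The bundle-adjustment step at the heart of the calibration can be sketched as a nonlinear least-squares problem over a sensor's imaging-geometry parameters, driven by reprojection residuals against known artefact features. The pinhole model, single-sensor scope and seven-parameter vector below are simplifying assumptions for illustration, not the commercial system's camera model.

```python
# Minimal bundle-adjustment sketch: refine a sensor's imaging-geometry
# parameters by minimising reprojection error against known artefact features.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_xyz, params):
    """Project 3-D artefact points with params = [rotvec(3), t(3), focal]."""
    rotvec, t, f = params[:3], params[3:6], params[6]
    cam = Rotation.from_rotvec(rotvec).apply(points_xyz) + t
    return f * cam[:, :2] / cam[:, 2:3]

def residuals(params, points_xyz, observed_uv):
    return (project(points_xyz, params) - observed_uv).ravel()

# Synthetic example: recover a sensor pose and focal length from observations
# of eight known artefact feature points.
rng = np.random.default_rng(1)
artefact = rng.uniform(-0.2, 0.2, size=(8, 3)) + [0.0, 0.0, 1.0]
true = np.r_[0.05, -0.02, 0.01, 0.01, 0.02, 0.1, 1200.0]
uv = project(artefact, true) + rng.normal(scale=0.2, size=(8, 2))
guess = np.r_[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1000.0]
fit = least_squares(residuals, guess, args=(artefact, uv))
print(fit.x)
```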

    Generic interferometric synthetic aperture radar atmospheric correction model and its application to co- and post-seismic motions

    PhD thesis. The tremendous development of Interferometric Synthetic Aperture Radar (InSAR) missions in recent years facilitates the study of smaller-amplitude ground deformation over greater spatial scales using longer time series. However, this poses more challenges for correcting atmospheric effects due to the spatio-temporal variability of atmospheric delays. Previous attempts have used observations from the Global Positioning System (GPS) and Numerical Weather Models (NWMs) to separate the atmospheric delays, but they are limited by (i) the availability (and distribution) of GPS stations; (ii) the time difference between NWM and radar observations; and (iii) the difficulty of quantifying their performance. To overcome these limitations, we have developed the Iterative Tropospheric Decomposition (ITD) model to reduce the coupling effects of tropospheric turbulence and stratification and hence achieve similar performance over flat and mountainous terrains. High-resolution European Centre for Medium-Range Weather Forecasts (ECMWF) and GPS-derived tropospheric delays were properly integrated by investigating the GPS network geometry and topography variations. These led to a generic atmospheric correction model with a range of notable features: (i) global coverage, (ii) all-weather, all-time usability, (iii) availability with a maximum two-day latency, and (iv) indicators to assess the model's performance and feasibility. The generic atmospheric correction model enables the investigation of the small-magnitude coseismic deformation of the 2017 Mw 6.4 Nyingchi earthquake from InSAR observations in spite of substantial atmospheric contamination. It can also minimize the temporal correlations of InSAR atmospheric delays so that reliable velocity maps over large spatial extents can be achieved. Its application to the post-seismic motion following the 2016 Kaikoura earthquake successfully recovers the time-dependent afterslip distribution, which in turn evidences the deep inactive subduction slip mechanism. This procedure can be used to map surface deformation in other scenarios including volcanic eruptions, tectonic rifting, cracking, and city subsidence. This work was supported by a Chinese Scholarship Council studentship. Part of this work was also supported by the UK NERC through the Centre for the Observation and Modelling of Earthquakes, Volcanoes and Tectonics (COMET).
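
    A minimal sketch of an iterative tropospheric decomposition in this spirit, assuming zenith delays are available at GPS station locations with known elevations: alternately fit an elevation-dependent (stratified) component and a smooth spatially interpolated (turbulent) component. The linear elevation model, RBF interpolation and iteration count are illustrative stand-ins for the published ITD formulation.

```python
# Illustrative iterative decomposition of station delays into stratified and
# turbulent parts; not the published ITD model.
import numpy as np
from scipy.interpolate import RBFInterpolator

def decompose(xy, elevation, delay, n_iter=5):
    turbulent = np.zeros_like(delay)
    for _ in range(n_iter):
        # Stratified part: linear fit of (delay - turbulent) against elevation.
        A = np.column_stack([elevation, np.ones_like(elevation)])
        coeffs, *_ = np.linalg.lstsq(A, delay - turbulent, rcond=None)
        stratified = A @ coeffs
        # Turbulent part: smooth spatial interpolation of the remaining residual.
        turbulent = RBFInterpolator(xy, delay - stratified, smoothing=1.0)(xy)
    return stratified, turbulent

# Synthetic stations (coordinates in km): delay = elevation trend + smooth
# turbulence + noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(50, 2))
elev = rng.uniform(0.0, 2000.0, size=50)
delay = -1e-4 * elev + 0.02 * np.sin(xy[:, 0] / 30.0) + 0.002 * rng.normal(size=50)
strat, turb = decompose(xy, elev, delay)
```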

    Utilitarian Comparison of Nonlinear Regression Methods

    To overcome the shortcomings of the least squares regression method, two methods, the normal distance and the maximum likelihood, were developed. The maximum likelihood is the more general method, with the normal distance emerging as a special case of it when the error variances in the input and output measurements are equal. The methods were compared with the least squares method through Monte Carlo simulations for Titration and Packed Bed Reactor models. The methods were tested for varying magnitudes of uncertainty, with enough realizations to ensure that the results reflected the average parameter estimates and were attributable to the regression method itself. The maximum likelihood results were found to be on par with the best method in most cases, while the vertical distance (ordinary least squares) and normal distance methods were each preferable depending on the relative magnitudes of uncertainty. The drawbacks of the maximum likelihood and normal distance methods are the additional programming burden and, for maximum likelihood, the need to estimate the uncertainty variances; however, approximate estimates of these variances also yielded good results in the cases tested. Hence, for a more accurate estimate of regression parameters, the maximum likelihood method can be adopted with a higher probability of obtaining the desired results than with the other two methods. School of Chemical Engineering.
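
    The three objectives can be contrasted on a toy errors-in-both-variables problem. In the sketch below, ordinary least squares minimises vertical residuals, while the maximum likelihood fit weights each residual by its linearised total variance, var_y + (df/dx)^2 * var_x; with var_x equal to var_y this reduces (up to scale) to the normal (orthogonal) distance criterion. The exponential model and variances are illustrative; the thesis used Titration and Packed Bed Reactor models.

```python
# Illustrative comparison of vertical least squares and effective-variance
# maximum likelihood on a simple nonlinear model y = a * exp(b * x).
import numpy as np
from scipy.optimize import least_squares

def model(x, a, b):
    return a * np.exp(b * x)

def vertical_residuals(p, x, y):                      # ordinary least squares
    return y - model(x, *p)

def ml_residuals(p, x, y, var_x, var_y):
    # Weight each vertical residual by its linearised total variance.
    a, b = p
    dfdx = a * b * np.exp(b * x)
    return (y - model(x, a, b)) / np.sqrt(var_y + dfdx**2 * var_x)

# Normal (orthogonal) distance is the special case with var_x == var_y.
rng = np.random.default_rng(2)
x_true = np.linspace(0.0, 2.0, 40)
x_obs = x_true + rng.normal(scale=0.05, size=x_true.size)
y_obs = model(x_true, 2.0, 0.8) + rng.normal(scale=0.05, size=x_true.size)

p0 = np.array([1.0, 1.0])
p_ls = least_squares(vertical_residuals, p0, args=(x_obs, y_obs)).x
p_ml = least_squares(ml_residuals, p0, args=(x_obs, y_obs, 0.05**2, 0.05**2)).x
print("least squares:", p_ls, "maximum likelihood:", p_ml)
```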

    Exoplanet Measurement to the Extreme: Novel Methods of Instrumentation and Data Extraction for Radial-velocity Spectrographs

    The current generation of radial-velocity spectrographs is on the precipice of discovering the first Earth-like exoplanets orbiting in the habitable zones of nearby stars. Such detections require a Doppler precision of approximately 10 cm/s, an order of magnitude better than the typical best-case measurement from the previous generation of instruments. The radial-velocity community therefore requires research and innovation from all angles to push the technology over the brink. This thesis presents multiple contributions to this field, ranging from the development of precision laser equipment to the implementation of advanced statistical data analysis algorithms, all in support of the EXtreme PREcision Spectrograph (EXPRES), with the goal of improving instrument precision and exoplanet detection capability. In Chapter 2, we demonstrate the effectiveness of quasi-chaotic high-amplitude agitation as an optimal form of modal noise mitigation in the optical fibers that feed radial-velocity spectrographs. This technique is shown to improve the radial-velocity error for a single-wavelength laser line from more than 10 m/s to less than 60 cm/s without affecting focal ratio degradation within the fiber. After developing an agitator based on this method for use with EXPRES, we find that the combined radial-velocity precision across an entire laser frequency comb improves from 32.8 cm/s to 6.6 cm/s. In Chapter 3, I present aluminum nitride as a nonlinear optical material that can support frequency comb development from near-infrared to ultraviolet wavelengths. By injecting light from an aluminum nitride micro-ring into EXPRES, I demonstrate the material's ability to produce resolvable comb lines throughout the bandpass of the instrument. I also prototype a 16 GHz electro-optic modulation comb in combination with an aluminum nitride waveguide as a device that could become a cheap, broadband, visible-wavelength astro-comb for radial-velocity spectrograph wavelength calibration. Finally, in Chapters 4 and 5, I present the EXPRES data extraction pipeline and the numerous novel algorithms that went into its design. With the default version of the pipeline, including a flat-relative optimal extraction and chunk-by-chunk forward-model radial-velocity measurement, we achieve 30 cm/s single-measurement precision on observations of stars with a signal-to-noise ratio of 250 measured at 550 nm. As demonstrated with 51 Peg b, the residual scatter of these observations after fitting a single-planet Keplerian orbit is less than 90 cm/s. As alternatives to the default techniques, I also present my implementations of flat-relative spectro-perfectionism and B-spline regression stellar template forward modeling within the EXPRES pipeline. These methods provide comparable radial-velocity precision on observations of HD 3651 while also opening up many possibilities for future explorations with radial-velocity data analysis.
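
    As an illustration of how a single per-observation radial velocity and its precision can be formed from chunk-by-chunk measurements, the sketch below combines per-chunk values by inverse-variance weighting. This is a common convention rather than a description of the EXPRES pipeline's exact combination step, and the numbers are synthetic.

```python
# Inverse-variance combination of per-chunk radial velocities into one
# per-observation value and formal uncertainty (synthetic example).
import numpy as np

def combine_rv(rv_chunks, sigma_chunks):
    """Weighted-mean RV and its formal uncertainty from per-chunk values."""
    w = 1.0 / np.asarray(sigma_chunks, dtype=float) ** 2
    rv = np.sum(w * rv_chunks) / np.sum(w)
    sigma = 1.0 / np.sqrt(np.sum(w))
    return rv, sigma

rng = np.random.default_rng(3)
true_rv = 0.5                                   # m/s
sigma = rng.uniform(2.0, 8.0, size=400)         # per-chunk precision, m/s
rv_chunks = true_rv + rng.normal(scale=sigma)
print(combine_rv(rv_chunks, sigma))             # roughly (0.5 m/s, ~0.2 m/s)
```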