
    Shape Reconstruction Based on Similarity in Radiance Changes under Varying Illumination

    Recognition of objects in orbit and their intentions with space-borne sub-THz Inverse Synthetic Aperture Radar

    An important aspect of Space Situational Awareness is to estimate the intent of objects in space. This paper discusses how discriminating features can be obtained from Inverse Synthetic Aperture Radar images of such objects and how these discriminators can be used to recognise the objects or to estimate their intent. If the object is, for example, a satellite of a known type, the scheme proposed is able to recognise it. The ability of the scheme to detect damage to the object is also discussed. The focus is on imagery obtained in the sub-terahertz band (typically 300 GHz) because of the greater imaging capability given by the diffuse scattering which is observed at these frequencies. The paper also discusses the importance of being able to use images obtained by electromagnetic simulation to train the subsystem which recognises features of the objects, and describes a practical scheme for creating these simulations for large objects at these very short wavelengths.
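
    The train-on-simulation idea can be pictured with a toy classifier. The sketch below is purely illustrative and assumes scikit-learn is available; the random arrays stand in for electromagnetically simulated and measured ISAR images and do not reproduce the paper's recognition subsystem.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    simulated_images = rng.normal(size=(200, 64 * 64))   # stand-in for EM-simulated ISAR imagery
    labels = rng.integers(0, 3, size=200)                # e.g. three known satellite types

    # Fit the recogniser on simulated imagery only, then apply it to a "measured" image
    classifier = SVC(kernel="rbf").fit(simulated_images, labels)
    measured_image = rng.normal(size=(1, 64 * 64))       # stand-in for one measured ISAR image
    print(classifier.predict(measured_image))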

    Modelling surface topography from reflected light.

    This thesis is concerned with the use of the modulus of the Fourier spectrum to characterise object features and also to reconstruct object surfaces in the complete absence of phase information. In general, a phaseless synthesis is completely meaningless and many characteristic features of the object are obliterated when the modulus of the spectral components is inverse Fourier transformed with zero phase. However, the outcome is different when the object possesses some form of regularity and repetition in its characteristics. In such circumstances, the utilisation of both the modulus and the intensity of the spatial spectrum can reveal information regarding the characteristic features of the object surface. The first part of this research has utilised the intensity of the spectral components as a means of surface feature characterisation in the study of a machined surface. Two separate approaches were adopted for assessing the zero-phase images. Both the optically recorded Fourier spectrum and the computer simulated Fourier spectrum were used to extract surface related parameters in the zero-phase synthesis. Although merely a characterisation, the zero-phase synthesis of the spectral components revealed periodic behaviour very similar to that present in the original surface. The presence of such cyclic components was confirmed by their presence in travelling microscope images and in scanning electron microscope images of the surface. Additionally, a novel approach has been adopted to recover finer periodicities on the surface. The scale sensitivity of the frequency domain offers an exceptional means through which digital magnification can be performed, with the added advantage that it is accompanied by enhanced resolution. Magnification realised through spatial frequency data is by far superior to any spatial domain magnification; however, there are limitations to this approach. The second part of this research has been centred around the possible use of a non-iterative approach for extracting the unknown phases from the modulus of the Fourier spectrum and thus retrieving the 3-D geometrical structure of the unknown object surface, as opposed to characterising its profile. The logarithmic Hilbert transform is one such approach which allows a non-iterative means of extracting unknown phases from the modulus of the Fourier spectrum. However, the technique is only successful for object surfaces which are well behaved and display well-behaved spectral characteristics governed by continuity. For real object surfaces where structure, definition and repetition govern the characteristics, the spectrum is not well behaved. The spectrum is populated by maxima, minima and many isolated regions which are occupied by colonies of zeros disrupting the continuity. A new and unique approach has been devised by the author to reform the spectral behaviour of real object surfaces without affecting the fidelity that it conveys. The resultant information enables phase extraction to be achieved through the logarithmic Hilbert transform. It is possible to reform the spread of spectral behaviour to cultivate better continuity amongst its spectral components through an object scale change. The combination of the logarithmic Hilbert transform and the Fourier scaling principle has led to a new approach for extracting the unknown phases of real object structures, which would otherwise have been impossible through the use of Hilbert transformation alone. The validity of the technique has been demonstrated in a series of simulations conducted on one-dimensional objects as well as two-dimensional object specimens. The limitations of the approach, improvements and the feasibility of practical implementation are all issues which have been addressed.
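
    The zero-phase synthesis described above is simple to state numerically: inverse Fourier transform the spectral modulus with every phase set to zero. The sketch below is an illustration on a synthetic periodic profile, not the thesis code; the surface parameters are invented.

    import numpy as np

    # A synthetic 1-D "machined surface" profile with a dominant periodicity
    x = np.linspace(0.0, 1.0, 1024, endpoint=False)
    profile = 0.5 * np.sin(2 * np.pi * 40 * x) + 0.1 * np.random.default_rng(0).normal(size=x.size)

    # Keep only the modulus of the spectrum, discarding all phase information
    modulus = np.abs(np.fft.fft(profile))

    # Zero-phase synthesis: inverse transform the modulus alone
    zero_phase_image = np.real(np.fft.ifft(modulus))

    # The dominant periodicity of the original profile survives in zero_phase_image
    # even though the phases have been discarded, which is the behaviour exploited
    # for characterising machined surfaces.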

    The Impact of Surface Normals on Appearance

    The appearance of an object is the result of complex light interaction with the object. Beyond the basic interplay between incident light and the object's material, a multitude of physical events occur between this illumination and the microgeometry at the point of incidence, and also beneath the surface. A given object, made as smooth and opaque as possible, will have a completely different appearance if either one of these attributes - amount of surface mesostructure (small-scale surface orientation) or translucency - is altered. Indeed, while they are not always readily perceptible, the small-scale features of an object are as important to its appearance as its material properties. Moreover, surface mesostructure and translucency are inextricably linked in an overall effect on appearance. In this dissertation, we present several studies examining the importance of surface mesostructure (small-scale surface orientation) and translucency on an object's appearance. First, we present an empirical study that establishes how poorly a mesostructure estimation technique can perform when translucent objects are used as input. We investigate the two major factors in determining an object's translucency: mean free path and scattering albedo. We exhaustively vary the settings of these parameters within realistic bounds, examining the subsequent blurring effect on the output of a common shape estimation technique, photometric stereo. Based on our findings, we identify a dramatic effect that the input of a translucent material has on the quality of the resultant estimated mesostructure. In the next project, we discuss an optimization technique for both refining estimated surface orientation of translucent objects and determining the reflectance characteristics of the underlying material. For a globally planar object, we use simulation and real measurements to show that the blurring effect on normals that was observed in the previous study can be recovered. The key to this is the observation that the normalization factor for recovered normals is proportional to the error on the accuracy of the blur kernel created from estimated translucency parameters. Finally, we frame the study of the impact of surface normals in a practical, image-based context. We discuss our low-overhead editing tool for natural images that enables the user to edit surface mesostructure while the system automatically updates the appearance in the natural image. Because a single photograph captures an instant of the incredibly complex interaction of light and an object, there is a wealth of information to extract from a photograph. Given a photograph of an object in natural lighting, we allow mesostructure edits and infer any missing reflectance information in a realistically plausible way.
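
    For context, the shape estimation technique named in the abstract, photometric stereo, can be written compactly for the classical Lambertian case. The sketch below is a generic textbook formulation rather than the authors' pipeline; the image stack and light directions are assumed inputs.

    import numpy as np

    def photometric_stereo(images, light_dirs):
        """Classical Lambertian photometric stereo.
        images: (k, h, w) intensity stack; light_dirs: (k, 3) unit light directions.
        Returns per-pixel unit normals (h, w, 3) and albedo (h, w)."""
        k, h, w = images.shape
        I = images.reshape(k, -1)                           # (k, h*w)
        # Least-squares solve light_dirs @ G = I, where G = albedo * normal per pixel
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.maximum(albedo, 1e-12)
        return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

    It is this per-pixel least-squares step that translucency degrades: subsurface scattering mixes intensities across neighbouring pixels, so the recovered normals come out blurred.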

    Tele-Autonomous control involving contact

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented together with the methods of extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features only. Feature points (point-to-point matching) and feature unit direction vectors (vector-to-vector matching) can also be used as inputs of the algorithm, and there is no upper limit on the number of features input. The algorithm allows the use of redundant features to find a better solution. The algorithm uses dual number quaternions to represent the position and orientation of an object and uses the least squares optimization method to find an optimal solution for the object's location. The advantage of using this representation is that the method solves for the location estimate by minimizing a single cost function associated with the sum of the orientation and position errors, and thus achieves better performance on the estimation, both in accuracy and speed, than other similar algorithms. The difficulties when an operator is controlling a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques, such as predictor display and time desynchronization, to help overcome these difficulties is then discussed.
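
    The localization step solves a least-squares pose problem; the paper formulates it with dual number quaternions. As a point of reference only, the sketch below shows the widely used SVD-based closed form (Kabsch/Horn) for the same point-to-point matching problem. It is not the dual-quaternion formulation described in the abstract, and the matched point arrays are assumed inputs.

    import numpy as np

    def rigid_fit(model_pts, sensed_pts):
        """Find R, t minimizing sum ||R @ p_i + t - q_i||^2 over matched point pairs.
        model_pts, sensed_pts: (n, 3) arrays of corresponding points."""
        mc, sc = model_pts.mean(axis=0), sensed_pts.mean(axis=0)
        H = (model_pts - mc).T @ (sensed_pts - sc)          # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = sc - R @ mc
        return R, t

    Redundant correspondences simply add terms to the cross-covariance sum, which mirrors the abstract's point that extra features can be used to obtain a better solution.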

    An integrated method for detection and mitigation of ice accretion on wind turbine blades

    Ice formation on structures, particularly on the leading edges of curved surfaces such as cylinders and airfoils, can be dangerous, and it is necessary to use an ice sensor combined with an ice mitigation system to prevent ice from forming on these surfaces. Wind turbine blades, which are commonly used in cold climate regions, are particularly susceptible to ice accumulation due to their sensitivity to changes in aerodynamic performance. To address this issue, it is necessary to have an integrated system for detecting and mitigating ice formation on wind turbine blades. Various ice detection and mitigation techniques for wind turbine blades in cold regions are reviewed and categorized based on key parameters. The conceptual design of integrating ice sensing and mitigation systems is also investigated, along with the advantages and disadvantages of these systems. A new technique for estimating the volume of frozen water droplets on a cold solid surface based on the contact angle and thermal images is presented. This technique takes into consideration factors such as temperature, surface roughness, and droplet size. An integrated ice tracking and mitigation technique using thermal imaging and heating elements along the stagnation line of a cylindrical surface is developed. This technique, which employs an IR camera to monitor ice buildup, de-icing, and relaxation, is validated using an optical camera. The average uncertainty of ice thickness determined from thermal and optical images is about 0.16 mm during ice buildup and about 0.1 mm during ice mitigation, making it suitable for many cold environment applications. Finally, the relationship between ice thickness at the stagnation line and ice thickness at the heater edge is investigated in order to control the ice accumulation mass and limit the heat energy required for de-icing. It is shown through de-icing experiments that the heat energy needed to remove the ice accumulation on the surface of a cylinder can be reduced by controlling the ice thickness at the heater's edge.
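
    The droplet-volume estimate from the contact angle can be illustrated with the standard spherical-cap relation. This is an assumed simplification for illustration only (it omits the temperature, roughness and droplet-size corrections the abstract mentions), and the example numbers are made up.

    import math

    def droplet_volume(contact_radius_mm, contact_angle_deg):
        """Spherical-cap volume (mm^3) from contact-line radius a and contact angle theta."""
        a = contact_radius_mm
        theta = math.radians(contact_angle_deg)
        R = a / math.sin(theta)           # radius of the spherical cap
        h = R * (1.0 - math.cos(theta))   # cap height above the surface
        return math.pi * h ** 2 * (3.0 * R - h) / 3.0

    # Example: a 1.5 mm contact radius at a 90 degree contact angle gives a
    # hemispherical droplet of about 7.07 mm^3.
    print(droplet_volume(1.5, 90.0))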

    Methods and Systems for Characterization of an Anomaly Using Infrared Flash Thermography

    A method for characterizing an anomaly in a material comprises (a) extracting contrast data; (b) measuring a contrast evolution; (c) filtering the contrast evolution; (d) measuring a peak amplitude of the contrast evolution; (e) determining a diameter and a depth of the anomaly; and (f) repeating the step of determining the diameter and the depth of the anomaly until a change in the estimate of the depth is less than a set value. The step of determining the diameter and the depth of the anomaly comprises estimating the depth using a diameter constant C_D equal to one for the first iteration of determining the diameter and the depth; estimating the diameter; and comparing the estimate of the depth of the anomaly after each iteration of estimating to the prior estimate of the depth to calculate the change in the estimate of the depth of the anomaly.
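
    The iterative characterization step can be sketched as a convergence loop. The two estimator functions below are hypothetical placeholders with dummy return values: the abstract does not give the contrast-evolution formulas, so only the control flow (a first pass with the diameter constant equal to one, alternating depth and diameter estimates, stopping when the change in depth falls below a set value) is taken from the text.

    def estimate_depth(peak_contrast, contrast_evolution, c_d):
        # Hypothetical placeholder for the patent's depth formula; the dummy value
        # merely depends on the diameter constant so that the loop can converge.
        return 1.0 / (1.0 + 0.1 * c_d)

    def estimate_diameter(peak_contrast, depth):
        # Hypothetical placeholder returning (diameter, updated diameter constant).
        return 2.0 * depth, 1.0 + depth

    def characterize_anomaly(peak_contrast, contrast_evolution, tol=1e-3, max_iter=50):
        c_d = 1.0                      # diameter constant equals one on the first iteration
        depth_prev = None
        for _ in range(max_iter):
            depth = estimate_depth(peak_contrast, contrast_evolution, c_d)
            diameter, c_d = estimate_diameter(peak_contrast, depth)
            if depth_prev is not None and abs(depth - depth_prev) < tol:
                break                  # change in the depth estimate is below the set value
            depth_prev = depth
        return diameter, depth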

    Thermal Infrared Imaging for the Charity Hospital Cemetery Archaeological Survey: Implications for Further Geological Applications

    Recent work by Bob Melia of Real Time Thermal Imaging L.L.C. suggests thermal infrared (TIR) imaging can be used to identify subsurface archaeological features buried as deep as 3 meters, but the basis for his work has not been tested. In November of 2002, Bob Melia and I attempted to locate unmarked graves at the Charity Hospital Cemetery using TIR imaging. Unfortunately, shortly after that survey, Bob Melia passed away without documenting his work or preparing the final report. Based on a review of previous research and modeling related to TIR imaging of subsurface features, I conclude that the high altitude that Bob Melia used for this type of study was key to his success. The larger field of view allowed recognition of longer spatial wavelength anomalies and the more subtle temperature variations expected from features at greater depths than those in previous studies. Furthermore, detecting features at these depths is aided by diurnal heating but is primarily made possible because annual seasonal temperature variations are significant at depths of 3-4 meters.

    Massive Spatiotemporal Watershed Hydrological Storm Event Response Model (MHSERM) with Time-Lapsed NEXRAD Radar Feed

    Correctly and efficiently estimating hydrological responses corresponding to a specific storm event at the streams in a watershed is the main goal of any sound water resource management strategy. Methods for calculating a stream flow hydrograph at the selected streams typically require a great deal of spatial and temporal watershed data, such as geomorphological data, soil surveys, landcover, precipitation data, and stream network information, to name a few. However, extracting and preprocessing such data for estimation and analysis is a hugely time-consuming task, especially for a watershed with hundreds of streams and lakes and complicated landcover and soil characteristics. To deal with the complexity, traditional models have to simplify the watershed and the stream network, use average values for each subcatchment, and then indirectly validate the model by adjusting the parameters through calibration and verification. To obviate such difficulties, and to better utilize the new, high-precision spatial/temporal data, a new massive spatiotemporal watershed hydrological storm event response model (MHSERM) was developed and implemented on the ESRI ArcMap platform. Different from other hydrological modeling systems, the MHSERM calculates the rainfall runoff on a fine grid that reflects the high-precision spatial/temporal data characteristics of the watershed, not at conventional catchment or subcatchment scales, and can therefore simulate the variations of terrain, vegetation and soil far more accurately. The MHSERM provides a framework to utilize the USGS DEM and Landcover data, NRCS SSURGO and STATSGO soil data and the National Hydrography Dataset (NHD) by handling millions of elements (grids) and thousands of streams in a real watershed and utilizing the spatiotemporal NEXRAD precipitation data for each grid in pseudo real-time. Specifically, the MHSERM model has the following new functionalities: (1) Grid the watershed on the basis of high-precision data such as USGS DEM and Landcover data and NRCS SSURGO and STATSGO soil data, e.g., at a 30 meter by 30 meter resolution; (2) Delineate catchments based on the USGS National Digital Elevation Model (DEM) and the stream network data of the National Hydrography Dataset (NHD); (3) Establish the stream network and routing sequence for a watershed with hundreds of streams and lakes extracted from the National Hydrography Dataset (NHD), either in a supervised or unsupervised manner; (4) Utilize the NCDC NEXRAD precipitation data as spatial and temporal input, and extract the precipitation data for each grid; (5) Calculate the overland runoff volume, flow path and slope to the stream for each grid; (6) Dynamically estimate the time of concentration to the stream for each interval, and only for the grids with rainfall excess, not for the whole catchment; (7) Deal with different hydrologic conditions (Good, Fair, Poor) for landcover data and different Antecedent Moisture Conditions (AMC); (8) Process single or a series of storm events automatically; thus, the MHSERM model is capable of simulating both discrete and continuous storm events; (9) Calculate the temporal flow rate (i.e., hydrograph) for all the streams in the stream network within the watershed and save the results to a database for further analysis and evaluation of various what-if scenarios and BMP designs. In the MHSERM model, the SCS Curve Number method is used for calculating the overland flow runoff volume, and the Muskingum-Cunge method is used for flow routing of the stream network.
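
    For reference, the SCS Curve Number relation that the abstract says is applied to each grid cell takes the standard textbook form below. This is not the model's own code, and the curve number and storm depth in the example are made up.

    def scs_runoff_depth(rainfall_in, curve_number):
        """Direct runoff depth (inches) from storm rainfall P (inches) and curve number CN."""
        S = 1000.0 / curve_number - 10.0   # potential maximum retention after runoff begins
        Ia = 0.2 * S                       # initial abstraction
        if rainfall_in <= Ia:
            return 0.0
        return (rainfall_in - Ia) ** 2 / (rainfall_in + 0.8 * S)

    # Example: a 3 inch storm on a grid cell with CN = 80 yields 1.25 inches of direct runoff.
    print(scs_runoff_depth(3.0, 80.0))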

    Analysis and application of an underwater optical-ranging system

    Submitted in partial fulfillment of the requirements for the degree of Ocean Engineer at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, September 1992. In order to provide a high-resolution underwater-ranging capability for scientific measurement, a commercially available optical-ranging system is analyzed for performance and feasibility. The system employs a structured-lighting technique using a laser-light plane and a single-camera imaging system. The mechanics of determining range with such a system are presented along with the predicted range error. Controlled testing of the system is performed and the range error is empirically determined. The system is employed in a deep-sea application, and its performance is evaluated. The measurements obtained are used in a scientific application to determine seafloor roughness at very high spatial frequencies (greater than 10 cycles/meter). Use and application recommendations for the system are presented.
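
    The geometry behind a structured-lighting range measurement of this kind is a ray/plane intersection. The sketch below is a generic illustration, not the commercial system's algorithm; the camera matrix and laser-plane parameters are assumed example values.

    import numpy as np

    def range_from_pixel(pixel_xy, K, plane_n, plane_d):
        """Intersect the viewing ray through pixel (u, v) with the laser-light plane
        n . X = d, both expressed in the camera frame; returns the 3-D surface point."""
        u, v = pixel_xy
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction of the viewing ray
        t = plane_d / (plane_n @ ray)                    # scale at which the ray meets the plane
        return t * ray

    # Example with a made-up camera matrix and a plane one metre along the optical axis
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    point = range_from_pixel((400, 250), K, np.array([0.0, 0.0, 1.0]), 1.0)
    print(point)   # the z component is the range along the optical axis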