87 research outputs found

    Integration of remotely sensed data with stand-scale vegetation models


    Advancements in seismic tomography with application to tunnel detection and volcano imaging

    Thesis (Ph.D.), University of Alaska Fairbanks, 1998. Practical geotomography is an inverse problem with no unique solution. A priori information must be imposed for a stable solution to exist. Commonly used types of a priori information smooth and attenuate anomalies, resulting in 'blurred' tomographic images. Small or discrete anomalies, such as tunnels, magma conduits, or buried channels, are extremely difficult imaging objectives. Composite distribution inversion (CDI) is introduced as a theory seeking physically simple, rather than distributionally simple, solutions of non-unique problems. Parameters are assumed to be members of a composite population, including both well-known and anomalous components. Discrete and large-amplitude anomalies are allowed, while a well-conditioned inverse is maintained. Tunnel detection is demonstrated using CDI tomography and data collected near the northern border of South Korea. Accurate source and receiver location information is necessary. Borehole deviation corrections are estimated by minimizing the difference between empirical distributions of apparent parameter values as a function of location correction. Improved images result. Traveltime computation and raytracing are the most computationally intensive components of seismic tomography when imaging structurally complex media. Efficient, accurate, and robust raytracing is possible by first recovering approximate raypaths from traveltime fields, and then refining the raypaths to a desired accuracy level. Dynamically binned queuing is introduced; the approach optimizes graph-theoretic traveltime computation costs. Pseudo-bending is modified to efficiently refine raypaths in general media. Hypocentral location density functions and relative phase arrival population analysis are used to investigate the spring 1996 earthquake swarm at Akutan Volcano, Alaska.
The main swarm is postulated to have been associated with a 0.2 km³ intrusion at a depth of less than four kilometers. Decay-sequence seismicity is postulated to be a passive response to the stress transient caused by the intrusion. Tomograms are computed for Mt. Spurr, Augustine, and Redoubt Volcanoes, Alaska. Relatively large-amplitude, shallow anomalies explain most of the traveltime residual. No large-amplitude anomalies are found at depth, and no magma storage areas are imaged. A large-amplitude low-velocity anomaly is coincident with a previously proposed geothermal region on the southeast flank of Mt. Spurr. Mt. St. Augustine is found to have a high-velocity core.
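    The graph-theoretic traveltime computation mentioned above is, at heart, Dijkstra's algorithm run over a gridded slowness model, and binned queuing replaces the heap with fixed-width time buckets (as in Dial's algorithm). The sketch below is a minimal illustration of that idea, not the thesis's dynamically binned implementation; the 4-neighbour grid, edge-cost rule, and bin-width choice are illustrative assumptions.

    ```python
    import math
    from collections import defaultdict

    def traveltimes(slowness, h=1.0, bin_width=None):
        """First-arrival traveltimes from node (0, 0) on a 2D grid, using
        Dijkstra's algorithm with a binned (bucket) priority queue.
        Edges connect 4-neighbours; each edge cost is the mean slowness
        of its endpoints times the grid spacing h (an assumed rule)."""
        ny, nx = len(slowness), len(slowness[0])
        t = [[math.inf] * nx for _ in range(ny)]
        t[0][0] = 0.0
        # A bin no wider than the smallest edge cost keeps pops in order.
        if bin_width is None:
            bin_width = h * min(min(row) for row in slowness)
        bins = defaultdict(list)
        bins[0].append((0, 0))
        done = [[False] * nx for _ in range(ny)]
        current, remaining = 0, 1
        while remaining:
            while not bins[current]:       # advance to next non-empty bin
                current += 1
            i, j = bins[current].pop()
            remaining -= 1
            if done[i][j]:                 # stale (superseded) entry
                continue
            done[i][j] = True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx and not done[ni][nj]:
                    cost = h * 0.5 * (slowness[i][j] + slowness[ni][nj])
                    if t[i][j] + cost < t[ni][nj]:
                        t[ni][nj] = t[i][j] + cost
                        bins[int(t[ni][nj] / bin_width)].append((ni, nj))
                        remaining += 1
        return t
    ```

    Because new entries always land in the current bin or later, the scan over bin indices never moves backwards, which is what makes the binned queue cheaper than a general heap for these monotone traveltime fronts.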

    Automated analysis of non-destructive evaluation data

    No full text
    Interpretation of non-destructive evaluation (NDE) data can be unreliable and difficult due to the complex interaction between the instrument, the object under inspection, and noise, together with uncertainties about the system or the data. A common method of reducing the complexity and volume of data is to use thresholds. However, many of these methods rely on subjective assessments of the data or assumptions about the system, which can be a source of error. Reducing data whilst retaining important information is difficult, and compromises normally have to be made. This thesis has developed methods that are based on sound mathematical and scientific principles and require minimal use of assumptions and subjective choices. Optimisation has been shown to reduce data acquired from a multilayer composite panel and hence reveal the ply layers. The problem can be ill-posed; nevertheless, it is possible to obtain a solution close to the optimum and to establish confidence in the result. Important factors are the size of the search space, the representation of the data, and any assumptions and choices made. Further work is required on the use of model-based optimisation to measure layer thicknesses from a metal laminate panel; a number of important factors that must be addressed have been identified. Two novel approaches to removing features from Transient Eddy-Current (TEC) data have been shown to improve the visibility of defects; the best approach to take depends on the available knowledge of the system. Principal Value Decomposition (PVD) has been shown to remove layer-interface reflections from ultrasonic data. However, PVD is not suited to all problems, such as the TEC data described, and is best suited to the later stages of data reduction. This thesis has demonstrated new methods and a roadmap for solving multivariate problems; these methods may be applied to a wide range of data and problems.
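    The principal-component step described above can be illustrated with a toy version: stack repeated A-scans as the rows of a matrix, estimate the dominant rank-1 component (which the coherent layer-interface echo dominates), and subtract it, leaving the incoherent defect signal. This sketch uses plain power iteration as a stand-in for a full PVD/SVD; the data layout and iteration count are assumptions, not the thesis's implementation.

    ```python
    import math

    def remove_dominant_component(X, iters=200):
        """Strip the dominant rank-1 component from a matrix X of repeated
        A-scans (rows) via power iteration on X^T X, then subtract
        sigma * u * v^T -- a minimal analogue of zeroing the first
        singular component in a PVD/SVD decomposition."""
        m, n = len(X), len(X[0])
        v = [1.0] * n
        for _ in range(iters):
            Xv = [sum(X[i][k] * v[k] for k in range(n)) for i in range(m)]
            w = [sum(X[i][j] * Xv[i] for i in range(m)) for j in range(n)]
            norm = math.sqrt(sum(x * x for x in w))
            v = [x / norm for x in w]          # top right singular vector
        Xv = [sum(X[i][k] * v[k] for k in range(n)) for i in range(m)]
        sigma = math.sqrt(sum(x * x for x in Xv))
        u = [x / sigma for x in Xv]            # top left singular vector
        return [[X[i][j] - sigma * u[i] * v[j] for j in range(n)]
                for i in range(m)]
    ```

    With identical rows (a purely coherent echo) the residual is essentially zero; anything that varies from scan to scan, such as a localized defect response, survives the subtraction.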

    Optimal Survey Strategies and Predicted Planet Yields for the Korean Microlensing Telescope Network

    The Korean Microlensing Telescope Network (KMTNet) will consist of three 1.6m telescopes each with a 4 deg^{2} field of view (FoV) and will be dedicated to monitoring the Galactic Bulge to detect exoplanets via gravitational microlensing. KMTNet's combination of aperture size, FoV, cadence, and longitudinal coverage will provide a unique opportunity to probe exoplanet demographics in an unbiased way. Here we present simulations that optimize the observing strategy for, and predict the planetary yields of, KMTNet. We find preferences for four target fields located in the central Bulge and an exposure time of t_{exp} = 120s, leading to the detection of ~2,200 microlensing events per year. We estimate the planet detection rates for planets with mass and separation across the ranges 0.1 <= M_{p}/M_{Earth} <= 1000 and 0.4 <= a/AU <= 16, respectively. Normalizing these rates to the cool-planet mass function of Cassan (2012), we predict KMTNet will be approximately uniformly sensitive to planets with mass 5 <= M_{p}/M_{Earth} <= 1000 and will detect ~20 planets per year per dex in mass across that range. For lower-mass planets with mass 0.1 <= M_{p}/M_{Earth} < 5, we predict KMTNet will detect ~10 planets per year. We also compute the yields KMTNet will obtain for free-floating planets (FFPs) and predict KMTNet will detect ~1 Earth-mass FFP per year, assuming an underlying population of one such planet per star in the Galaxy. Lastly, we investigate the dependence of these detection rates on the number of observatories, the photometric precision limit, and optimistic assumptions regarding seeing, throughput, and flux measurement uncertainties.
    Comment: 29 pages, 31 figures, submitted to ApJ. For a brief video explaining the key results of this paper, please visit: https://www.youtube.com/watch?v=e5rWVjiO26
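    The quoted yield figures can be sanity-checked with one line of arithmetic: a roughly flat rate of ~20 planets/yr/dex over 5 <= M_{p}/M_{Earth} <= 1000 spans about 2.3 dex. The helper below is purely illustrative arithmetic built on the abstract's numbers, not the paper's simulation machinery.

    ```python
    import math

    def yearly_yield(rate_per_dex, m_lo, m_hi):
        """Planets detected per year assuming a flat sensitivity of
        rate_per_dex planets/yr/dex across masses m_lo..m_hi (in Earth
        masses): the rate times the mass range's width in dex."""
        return rate_per_dex * math.log10(m_hi / m_lo)

    # ~20 planets/yr/dex over 5-1000 M_Earth -> log10(200) ~ 2.3 dex,
    # i.e. roughly 46 planets per year in that mass range.
    print(round(yearly_yield(20.0, 5.0, 1000.0)))
    ```

    Adding the ~10 planets per year quoted for 0.1 <= M_{p}/M_{Earth} < 5 gives an overall bound-planet yield in the mid-fifties per year under these flat-rate assumptions.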

    NUEVAS ESTRATEGIAS DE DISEÑO AUTOMATIZADO DE COMPONENTES PASIVOS PARA SISTEMAS DE COMUNICACIONES DE ALTA FRECUENCIA

    Microwave filters are key elements in all communication systems, since they make it possible to discriminate a particular frequency or range of frequencies in a signal passing through them. Focusing on microwave filters for satellite applications, the constant growth in the services that communications satellites must provide has left the radio spectrum increasingly congested, which has produced an enormous demand for high-performance filters that satisfy very strict specifications. To date, commercial microwave applications have mainly used all-metal waveguide filters, chiefly because of their low losses and high power-handling capacity. As a consequence, the analysis of this type of filter has been the subject of numerous studies, which has allowed the industry to manufacture and market them. However, metal filters have significant restrictions, especially when designed for communications satellites or other space applications: their weight and size are very often large, and, because they operate in vacuum, the multipactor effect considerably limits the power these filters can transmit. New families of advanced filters based on dielectric-filter technology are now emerging. These technologies offer an important reduction in mass and volume, of around 50% compared with all-metal technology, together with high thermal stability for high-power applications. In addition, these topologies show a marked decrease in the risk of multipactor breakdown (vacuum discharge) between the metal surfaces, and consequently the filter can transmit higher power.
    Morro Ros, JV. (2011). NUEVAS ESTRATEGIAS DE DISEÑO AUTOMATIZADO DE COMPONENTES PASIVOS PARA SISTEMAS DE COMUNICACIONES DE ALTA FRECUENCIA [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/12100