7,021 research outputs found

    Development of a High-Performance Mosaicing and Super-Resolution Algorithm

    In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by a robust super-resolution algorithm to achieve the final high-resolution mosaic. Two different types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate the algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics accurately quantify that improvement.
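    The pairwise registration step at the core of such a SIFT-based mosaicing pipeline can be sketched as follows; this is a minimal illustration using OpenCV (assuming version 4.4+, where SIFT is in the main module), not the dissertation's actual implementation:

    ```python
    # Minimal sketch: SIFT-based pairwise image registration for mosaicing.
    # Assumes OpenCV >= 4.4; not the dissertation's exact pipeline.
    import cv2
    import numpy as np

    def register_pair(base, new):
        """Estimate a homography mapping `new` onto `base` via SIFT matches."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(base, None)
        k2, d2 = sift.detectAndCompute(new, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(d2, d1, k=2)
        # Lowe's ratio test keeps only distinctive matches.
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC rejects outlier correspondences when fitting the homography.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H
    ```

    The new frame would then be warped into the mosaic frame with cv2.warpPerspective before the super-resolution update.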

    Quantitative non-destructive testing

    The work undertaken during this period comprised two primary efforts. The first was a continuation of the previous year's theoretical development of models and data analyses for NDE using the Optical Thermal Infra-Red Measurement System (OPTITHIRMS), which involves heat injection with a laser and observation of the resulting thermal pattern with an infrared imaging system. The second was an investigation into the use of the thermoelastic effect as an effective tool for NDE. As in the past, the effort was aimed at NDE techniques applicable to composite materials in structural applications. The theoretical development produced several models of temperature patterns over several geometries and material types, and agreement between model data and temperature observations was obtained. A model study with one of these models investigated some fundamental difficulties with the proposed method (the primitive equation method) for obtaining diffusivity values in plates of finite thickness, and supplied guidelines for avoiding these difficulties. A wide range of computing speeds was found among the various models, with a one-dimensional model based on Laplace's integral solution being both very fast and very accurate.
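    As an illustration of the kind of one-dimensional through-thickness model the abstract refers to, the sketch below evaluates the classic flash-method series solution for the rear-face temperature rise of a plate after pulsed surface heating; the geometry and material values are illustrative, not taken from the report:

    ```python
    # 1-D through-thickness thermal model sketch: normalized rear-face
    # temperature rise of a plate after an instantaneous surface heat pulse
    # (classic flash-method series solution). Parameter values illustrative only.
    import numpy as np

    def rear_face_rise(t, L, alpha, n_terms=50):
        """T(t)/T_max for a plate of thickness L [m], diffusivity alpha [m^2/s]."""
        t = np.asarray(t, dtype=float)
        n = np.arange(1, n_terms + 1)
        # Series solution of the 1-D heat equation with insulated boundaries.
        terms = ((-1.0) ** n)[None, :] * np.exp(
            -(n[None, :] ** 2) * np.pi ** 2 * alpha * t[:, None] / L ** 2)
        return 1.0 + 2.0 * terms.sum(axis=1)

    t = np.linspace(1e-4, 5.0, 500)                   # seconds
    theta = rear_face_rise(t, L=2e-3, alpha=4.2e-7)   # 2 mm polymer-like plate
    # Half-rise time gives diffusivity: alpha = 0.1388 * L**2 / t_half.
    t_half = t[np.argmin(np.abs(theta - 0.5))]
    print(f"t_1/2 = {t_half:.3f} s -> alpha ~ {0.1388 * (2e-3)**2 / t_half:.2e} m^2/s")
    ```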

    Blur Classification Using Segmentation-Based Fractal Texture Analysis

    The objective of vision-based gesture recognition is to design a system that can understand human actions and convey the acquired information with the help of captured images. Image restoration is required whenever an image is blurred during acquisition, since blurred images can severely degrade the performance of such systems. Image restoration recovers the true image from a degraded version; it is referred to as blind restoration when the blur information is unknown. Blur identification is therefore essential before any blind restoration algorithm can be applied. This paper presents a blur identification approach that categorizes a hand gesture image into one of four categories: sharp, motion blurred, defocus blurred, or combined blurred. A segmentation-based fractal texture analysis (SFTA) extraction algorithm supplies the features for a neural network-based classification system. Simulation results demonstrate the accuracy of the proposed method.
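    A simplified version of SFTA-style feature extraction can be sketched as follows; this uses multi-level Otsu thresholding and box counting in place of the paper's exact two-threshold decomposition, with scikit-image and SciPy assumed available:

    ```python
    # SFTA-style features sketch: threshold decomposition, then per binary image
    # (fractal dimension of region borders, mean gray level, region size).
    # Simplified illustration, not the paper's implementation.
    import numpy as np
    from scipy.ndimage import binary_erosion
    from skimage.filters import threshold_multiotsu

    def box_counting_dimension(border, sizes=(2, 4, 8, 16, 32)):
        """Estimate fractal dimension of a binary border image by box counting."""
        counts = []
        for s in sizes:
            h = border.shape[0] // s * s
            w = border.shape[1] // s * s
            blocks = border[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())
        # N(s) ~ s**(-D), so the slope of log N vs. log(1/s) estimates D.
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                              np.log(np.maximum(counts, 1)), 1)
        return slope

    def sfta_features(gray, classes=4):
        """Feature vector over the threshold-decomposed binary images."""
        feats = []
        for t in threshold_multiotsu(gray, classes=classes):
            binary = gray > t
            border = binary & ~binary_erosion(binary)
            feats += [box_counting_dimension(border),
                      gray[binary].mean() if binary.any() else 0.0,
                      float(binary.sum())]
        return np.asarray(feats)
    ```

    The resulting feature vectors would then train a neural network classifier over the four blur categories.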

    Clarity of View: An Analytic Hierarchy Process (AHP)-Based Multi-Factor Evaluation Framework for Driver Awareness Systems in Heavy Vehicles

    Several emerging technologies hold great promise to improve the situational awareness of the heavy vehicle driver. However, current industry-standard evaluation methods do not measure all of the factors contributing to the overall effectiveness of such systems. The average commercial vehicle driver in the USA is 54 years old, with many drivers continuing past retirement age. Current methods for evaluating visibility systems consider only field of view and do not incorporate measures of the cognitive elements critical to drivers, especially the older demographic. As a result, industry is challenged to evaluate new technologies in a way that provides enough information to make informed selection and purchase decisions.

    To address this problem, we introduce a new multi-factor evaluation framework, “Clarity of View,” that incorporates several important factors for visibility systems: field of view, image detection time, distortion, glare discomfort, cost, reliability, and gap acceptance accuracy. It employs a unique application of the Analytic Hierarchy Process (AHP) in which expert participants act in a Supra-Decision Maker role alongside driver-level participants, who provide both actual performance data and subjective preference feedback. Both subjective and objective measures are thus incorporated into a single multi-factor decision-making model that will help industry make better technology selections involving complex variables.

    A series of experiments illustrates the usefulness of the framework, which can be extended to many types of automotive user-interface technology selection challenges. A unique commercial-vehicle driving simulator apparatus was developed that provides a dynamic, 360-degree, naturalistic driving environment for the evaluation of rearview visibility systems. Evaluations were performed both in the simulator and on the track, with test participants including trucking industry leadership and commercially licensed drivers with experience ranging from 1 to 40 years. Conclusions indicated that aspheric-style mirrors have significant viability in the commercial vehicle market; prior research on aspheric mirrors left questions regarding potential user adaptation, and the Clarity of View framework provides the necessary tools to close that gap. Results obtained using the new framework differed significantly from those that would have been available using current industry status-quo published test methods. Additional conclusions indicated that middle-aged drivers performed better in terms of image detection time than the young and elderly age categories, and that experienced drivers performed better than inexperienced drivers regardless of age. This is an important finding given the demographic challenges faced by the commercial vehicle industry today, which is suffering a shortage of new drivers and may be seeking ways to retain its aging driver workforce.

    The Clarity of View evaluation framework aggregates multiple factors critical to driver visibility system effectiveness into a single selection framework that is useful for industry. It is unique in its multi-factor approach and custom-developed apparatus, as well as in its novel application of the AHP methodology. It has demonstrated the ability to discern more well-informed technology selections and is flexible enough to extend to many different types of driver-interface evaluations.
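    The AHP step at the heart of such a framework is well defined: pairwise comparison judgments are collected in a matrix, factor weights are taken from its principal eigenvector, and a consistency ratio guards against contradictory judgments. A minimal sketch, with an illustrative comparison matrix rather than data from the study:

    ```python
    # AHP sketch: priority weights from a pairwise comparison matrix, plus
    # Saaty's consistency ratio. The example matrix is illustrative only.
    import numpy as np

    def ahp_weights(A):
        """Weights = principal right eigenvector of A; CR < 0.1 is acceptable."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        # Random consistency indices (Saaty) for n = 1..9.
        RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45]
        CI = (eigvals[k].real - n) / (n - 1)
        CR = CI / RI[n - 1] if RI[n - 1] > 0 else 0.0
        return w, CR

    # Illustrative 3-factor comparison: field of view vs. detection time vs. cost.
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    w, cr = ahp_weights(A)
    print("weights:", np.round(w, 3), "CR:", round(cr, 3))
    ```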

    Clustering Inverse Beamforming and multi-domain acoustic imaging approaches for vehicles NVH

    The interior sound perceived in the vehicle cabin is a very important attribute of the vehicle's overall perceived quality. Experimental acoustic imaging methods such as beamforming and near-field acoustic holography are used in vehicle noise and vibration (NVH) studies because they can identify the noise sources contributing to the overall noise perceived inside the cabin. However, these techniques are often relegated to the troubleshooting phase, requiring additional experiments for more detailed NVH analyses. It is therefore desirable for such methods to evolve towards more refined solutions capable of providing richer and more detailed information. This thesis proposes a modular, multi-domain approach involving direct and inverse acoustic imaging techniques that provides quantitative, accurate results in the frequency, time, and angle domains, targeting three relevant problems in vehicle NVH: identification of exterior sources affecting interior noise, interior noise source identification, and analysis of noise sources produced by rotating machines. The core finding of the thesis is a novel inverse acoustic imaging method named Clustering Inverse Beamforming (CIB). The method is grounded in a statistical approach based on an Equivalent Source Method formulation: solutions of the same inverse problem, obtained from randomly drawn sub-samples of the available experimental data, are combined to increase the accuracy (dynamic range, localization, and quantification) of the acoustic image. In this way an accurate localization, a reliable frequency-domain ranking of the identified sources, and their separation into uncorrelated phenomena are obtained.
    CIB is also exploited in this work to reconstruct the time evolution of the identified sources. Finally, a methodology is proposed for decomposing the acoustic image of the sound field generated by a rotating machine as a function of the angular position of the machine shaft. This set of findings aims to support a new paradigm of acoustic imaging applications in vehicle NVH, assisting all stages of vehicle design with time-saving and cost-efficient experimental techniques. The proposed approaches are validated on several simulated and real experiments at academic and industrial scale.
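    The statistical idea behind CIB can be illustrated with a toy Equivalent Source Method example: the same regularized inverse problem is solved on random sub-samples of the microphones and the solutions are combined. This is a minimal sketch of the principle, not the thesis implementation:

    ```python
    # CIB-style sketch: combine Equivalent Source Method inversions computed
    # on random microphone subsets. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    def esm_inverse(G, p, lam=1e-2):
        """Tikhonov-regularized least squares for source strengths q: p ~ G q."""
        n = G.shape[1]
        return np.linalg.solve(G.conj().T @ G + lam * np.eye(n), G.conj().T @ p)

    def clustered_inverse(G, p, n_draws=50, keep=0.7):
        """Average |q| over inversions on random microphone subsets."""
        m = G.shape[0]
        k = int(keep * m)
        acc = np.zeros(G.shape[1])
        for _ in range(n_draws):
            idx = rng.choice(m, size=k, replace=False)
            acc += np.abs(esm_inverse(G[idx], p[idx]))
        return acc / n_draws

    # Toy setup: 32 microphones, 64 candidate source positions, 2 active sources.
    G = (rng.standard_normal((32, 64)) + 1j * rng.standard_normal((32, 64))) / np.sqrt(2)
    q_true = np.zeros(64)
    q_true[[10, 40]] = [1.0, 0.5]
    p = G @ q_true + 0.05 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
    print("top sources:", np.argsort(clustered_inverse(G, p))[-2:])
    ```

    Averaging over sub-sampled inversions randomizes the mathematical formulation of the inverse problem, which is what improves dynamic range and robustness in the full method.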

    The Canadian Cluster Comparison Project: detailed study of systematics and updated weak lensing masses

    Masses of clusters of galaxies from weak gravitational lensing analyses of ever larger samples are increasingly used as the reference to which baryonic scaling relations are compared. In this paper we revisit the analysis of a sample of 50 clusters studied as part of the Canadian Cluster Comparison Project and examine the key sources of systematic error in cluster masses. We quantify the robustness of our shape measurements and calibrate our algorithm empirically using extensive image simulations. The source redshift distribution is revised using the latest state-of-the-art photometric redshift catalogs, which include new deep near-infrared observations. Nonetheless, we find that the uncertainty in the determination of photometric redshifts is the largest source of systematic error in our mass estimates. We use our updated masses to determine b, the bias in the hydrostatic mass, for the clusters detected by Planck. Our results suggest 1 − b = 0.76 ± 0.05 (stat) ± 0.06 (syst), which does not resolve the tension with the measurements from the primary cosmic microwave background.
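    For context, the hydrostatic bias b quoted above is conventionally defined by comparing the hydrostatic mass estimate with the weak-lensing reference mass; a standard statement of the convention, not spelled out in the abstract, is:

    ```latex
    % Conventional definition of the hydrostatic mass bias b: hydrostatic
    % masses underestimate the true cluster mass, for which weak-lensing
    % masses serve as the low-bias reference.
    M_{\mathrm{hydro}} = (1-b)\,M_{\mathrm{true}} \approx (1-b)\,M_{\mathrm{WL}},
    \qquad 1-b = 0.76 \pm 0.05\,(\mathrm{stat}) \pm 0.06\,(\mathrm{syst}).
    ```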

    Enhancing the information content of geophysical data for nuclear site characterisation

    Our knowledge and understanding of the heterogeneous structure of, and the processes occurring in, the Earth’s subsurface are limited and uncertain. This is true even for the upper 100 m of the subsurface, yet many processes occurring within it (e.g. migration of solutes, landslides, crop water uptake) are important to human activities. Geophysical methods such as electrical resistivity tomography (ERT) greatly improve our ability to observe the subsurface thanks to their higher sampling frequency (especially with autonomous time-lapse systems), larger spatial coverage, and less invasive operation, in addition to being more cost-effective than traditional point-based sampling. However, the process of using geophysical data for inference is prone to uncertainty. There is a need to better understand the uncertainties embedded in geophysical data and how they propagate when the data are subsequently used, for example, for hydrological or site-management interpretations and decisions. This understanding is critical to maximizing the information extracted from geophysical data. To this end, in this thesis I examine various aspects of uncertainty in ERT and develop new methods to use geophysical data more quantitatively. The core of the thesis is based on two literature reviews and three papers.

    In the first review, I provide a comprehensive overview of the use of geophysical data for nuclear site characterisation, especially in the context of site clean-up and leak detection. In the second review, I survey the various sources of uncertainty in ERT studies and the existing work to quantify or reduce them. I propose that the various steps in the general workflow of an ERT study can be viewed as a pipeline for information and uncertainty propagation, and I suggest that some areas, among them measurement errors, have been understudied.

    In paper 1, I compare various methods to estimate and model ERT measurement errors using two long-term ERT monitoring datasets, and I develop a new error model that accounts for the fact that each electrode is used to make multiple measurements. In paper 2, I discuss the development and implementation of a new method for geoelectrical leak detection. While existing methods rely on first obtaining resistivity images through inversion of ERT data, the approach described here estimates leak parameters directly from raw ERT data: hydrological models are constructed from prior site information, coupled with an ERT forward model, and the leak (and other hydrological) parameters are then updated through data assimilation. The approach shows promising results and is applied to data from a controlled injection experiment in Yorkshire, UK; it complements ERT imaging and provides a new way to utilize ERT data to inform site characterisation. In addition to leak detection, ERT is also commonly used for monitoring soil moisture in the vadose zone, increasingly in a quantitative manner. Although both the petrophysical relationships (i.e., the choice of an appropriate model and its parameterization) and the derived moisture content are known to be subject to uncertainty, they are commonly treated as exact and error-free. In paper 3, I examine the impact of uncertain petrophysical relationships on the moisture content estimates derived from electrical geophysics. Data from a collection of core samples show that the variability in such relationships can be large, that it can lead to high uncertainty in moisture content estimates, and that it appears to be the dominant source of uncertainty in many cases.

    In the closing chapters, I discuss and synthesize the findings of the thesis within the larger context of enhancing the information content of geophysical data, and I provide an outlook on further research on this topic.
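    A standard reciprocal-error analysis of the kind compared in paper 1 can be sketched as follows; the linear error model fitted here is a common baseline in the ERT literature, not the thesis' new electrode-aware model, and the data are synthetic:

    ```python
    # ERT measurement-error sketch: estimate errors from normal-reciprocal
    # measurement pairs and fit a linear error model |e| = a + b*|R|.
    # Synthetic data; illustrative only.
    import numpy as np

    def fit_error_model(R_normal, R_reciprocal):
        """Fit error magnitude vs. resistance magnitude from reciprocal pairs."""
        R = 0.5 * (R_normal + R_reciprocal)      # best estimate per pair
        e = np.abs(R_normal - R_reciprocal)      # reciprocal error
        b, a = np.polyfit(np.abs(R), e, 1)       # |e| ~ a + b*|R|
        return a, b

    # Toy data: 200 transfer resistances with a 2% proportional error
    # plus a 1e-4 ohm noise floor.
    rng = np.random.default_rng(1)
    R_true = 10 ** rng.uniform(-3, 0, 200)       # ohms
    noise = lambda: rng.normal(0, 1e-4 + 0.02 * R_true)
    a, b = fit_error_model(R_true + noise(), R_true + noise())
    print(f"error model: |e| = {a:.2e} + {b:.3f}|R|")
    ```

    The fitted model is then typically used to weight measurements in the inversion; the thesis' contribution extends this by modelling per-electrode effects shared across measurements.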

    Advances in Multi-User Scheduling and Turbo Equalization for Wireless MIMO Systems

    After an introduction, part 2 of this thesis deals with downlink multi-user scheduling for wireless MIMO systems with one transmitting station performing channel-adaptive precoding: a different user subset can be served in each time or frequency resource, separated in space by different antenna weight vectors. Users with correlated channel matrices should not be served jointly, since correlation impairs their spatial separability. The resulting sum rate of each user subset depends on the precoding weights, which in turn depend on the user subset. This thesis decouples the problem by proposing scheduling metrics based on the estimated rate under zero-forcing (ZF) precoding such as block diagonalization, written with the help of orthogonal projection matrices: a repeated projection approximation estimates the rates without computing any antenna weights during scheduling. The rate estimate can be computed from instantaneous channel measurements or from long-term averaged channel knowledge, and it can accommodate user rate requirements and fairness criteria. Efficient search algorithms are presented that solve the user grouping or selection problem jointly for the entire system bandwidth and, for complexity reduction, can track the solution in time and frequency.

    Part 3 shows how multiple transmitting stations can benefit from coordinated scheduling and cooperative signal processing. An orthogonal-projection-based estimate of the inter-site interference power, again obtained without computing any antenna weights, and a virtual-user concept extend the scheduling approach to cooperative base stations and finally to SDMA half-duplex relays. The required signalling overhead is briefly discussed, along with a method to estimate the sum rate of a system without coordination.

    Part 4 develops optimizations for turbo equalizers, which exploit correlation between user signals as a source of redundancy. A combination with transmit precoding, which aims at reducing correlation, can nevertheless be beneficial, because realistic errors in the channel knowledge at the transmitter prevent optimal interference suppression. Using EXIT charts to track the iterations online, a novel method for the adaptive re-use of a-priori information between iterations is developed, improving convergence; it is also shown how semi-blind channel estimation updates can be modelled in an EXIT chart. Computer simulations based on 4G system parameters validate all proposed methods using realistic channel models.

    Available in print: Advances in Multi-User Scheduling and Turbo Equalization for Wireless MIMO Systems / Fuchs-Lautensack, Martin. Ilmenau: ISLE, 2009, 116 pp. ISBN 978-3-938843-43-
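    The projection idea behind the scheduling metric can be sketched as follows: each user's achievable ZF rate is estimated from the component of its channel that is orthogonal to the co-scheduled users' channels, so no precoding weights need to be computed during the greedy search. A minimal sketch assuming single-antenna users, real-valued channels, and equal power allocation:

    ```python
    # Projection-based ZF rate estimate and greedy user selection sketch.
    # Single-antenna users, real channels, equal power; illustrative only.
    import numpy as np

    def zf_rate_estimate(H, subset, snr=10.0):
        """Sum-rate estimate for `subset` (row indices of H) under ZF precoding."""
        rate = 0.0
        for u in subset:
            others = [v for v in subset if v != u]
            h = H[u]
            if others:
                A = H[others].T                  # columns span the interference
                P = A @ np.linalg.pinv(A)        # projector onto span(A)
                h = h - P @ h                    # component orthogonal to others
            gain = np.linalg.norm(h) ** 2
            rate += np.log2(1.0 + (snr / len(subset)) * gain)
        return rate

    # Greedy user selection on a toy 4-antenna, 8-user channel.
    rng = np.random.default_rng(2)
    H = rng.standard_normal((8, 4))
    chosen = []
    while len(chosen) < 4:
        best = max((u for u in range(8) if u not in chosen),
                   key=lambda u: zf_rate_estimate(H, chosen + [u]))
        chosen.append(best)
    print("scheduled users:", chosen)
    ```

    Correlated users penalize each other here automatically: projecting onto the orthogonal complement shrinks the effective gain, which mirrors the thesis' observation that correlated channels impair spatial separability.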

    Sensitivity analysis in a scoping review on police accountability: assessing the feasibility of reporting criteria in mixed studies reviews

    In this paper, we report the findings of a sensitivity analysis carried out within a previously conducted scoping review, hoping to contribute to the ongoing debate about how to assess the quality of research in mixed-methods reviews. Previous sensitivity analyses mainly concluded that the exclusion of inadequately reported or lower-quality studies did not have a significant effect on the results of the synthesis. In this study, we conducted a sensitivity analysis on the basis of reporting criteria, with the aims of analysing its impact on the synthesis results and assessing its feasibility. Contrary to some previous studies, our analysis showed that the exclusion of inadequately reported studies did have an impact on the results of the thematic synthesis. Initially, we also sought to propose a refinement of the reporting criteria based on the literature and our own experiences, in order to facilitate the assessment of reporting criteria and enhance its consistency. However, based on the results of our sensitivity analysis, we opted not to make such a refinement, since many publications included in the analysis did not report sufficiently on their methodology; a refinement would therefore not be useful, as researchers would be unable to assess the resulting (sub-)criteria.
