
    Tracking Meteoroids in the Atmosphere: Fireball Trajectory Analysis

    This thesis improves and develops algorithms for fireball trajectory analysis. Stochastic estimators from outside the current field of fireball modelling have been applied, ranging from Kalman filters to 3D particle filters. These techniques are fully automated and rigorously incorporate errors, providing a means to routinely analyse fireball data in an unbiased manner.
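    As an illustration of the stochastic estimators mentioned above, the following is a minimal linear Kalman filter tracking a constant-velocity trajectory from noisy position measurements. The model, noise levels, and synthetic data are illustrative only and are not taken from the thesis.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict the state forward through the dynamics model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement z
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model along one axis: state = [position, velocity]
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])              # only position is observed
Q = 1e-4 * np.eye(2)                    # process noise (illustrative)
R = np.array([[0.25]])                  # measurement noise (illustrative)

x, P = np.array([0.0, 0.0]), np.diag([100.0, 25.0])  # vague initial prior
rng = np.random.default_rng(0)
for k in range(50):                     # object descending at 5 units/s
    z = np.array([20.0 - 5.0 * dt * k + rng.normal(0.0, 0.5)])
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

    After fifty noisy position measurements, the filter's velocity estimate `x[1]` settles near the true descent rate of -5 units/s, and `P` quantifies the remaining uncertainty.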

    A comparison of multiple techniques for the reconstruction of entry, descent, and landing trajectories and atmospheres

    The primary importance of trajectory reconstruction is to assess the accuracy of pre-flight predictions of the entry trajectory. While numerous entry systems have flown, these systems are often not adequately instrumented, or the flight team not adequately funded, to perform the statistical engineering reconstruction required to quantify performance and feed forward lessons learned into future missions. As such, entry system performance and reliability levels remain unsubstantiated, and improvement in aerothermodynamic and flight dynamics modeling remains data poor. This work quantitatively and qualitatively compares Kalman filtering methods for reconstructing trajectories and atmospheric conditions from entry systems flight data. The first Kalman filter used is the extended Kalman filter, which has been used extensively in trajectory reconstruction both for orbiting spacecraft and for planetary probes. The second is the unscented Kalman filter. Additionally, a technique for using collocation to reconstruct trajectories is formulated, and collocation's usefulness for trajectory simulation is demonstrated for entry, descent, and landing trajectories using a method developed here to deterministically find the state variables of the trajectory without nonlinear programming. Such an approach could allow one to utilize the same collocation trajectory design tools for the subsequent reconstruction.
    Ph.D. Committee Chair: Braun, Robert; Committee Members: Lisano, Michael; Russell, Ryan; Striepe, Scott; Volovoi, Vital
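    The unscented Kalman filter mentioned above propagates a set of deterministically chosen sigma points through the nonlinear dynamics instead of linearizing them. Below is a minimal sketch of the underlying unscented transform with standard scaled weights; it is a generic illustration, not code from the thesis.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points and their weights for the unscented transform."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # scaled matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))  # mean weights
    wc = wm.copy()                            # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return np.array(pts), wm, wc

def unscented_transform(f, mean, cov):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f."""
    pts, wm, wc = sigma_points(mean, cov)
    fx = np.array([f(p) for p in pts])
    m = wm @ fx
    d = fx - m
    return m, (wc[:, None] * d).T @ d

# For a linear map the transform is exact, which makes a handy sanity check
m, c = unscented_transform(lambda x: 2.0 * x, np.array([1.0, 2.0]), np.eye(2))
```

    In a full UKF, this transform replaces the Jacobian linearization that the extended Kalman filter applies to the entry dynamics and measurement models.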

    Deep Tissue Light Delivery and Fluorescence Tomography with Applications in Optogenetic Neurostimulation

    Study of brain microcircuits using optogenetics is an active area of research. This method has several advantages over conventional electrical stimulation, including bi-directional control of neural activity and, more importantly, specificity in neuromodulation. The first step in all optogenetic experiments is to express certain light-sensitive ion channels/pumps in the target cell population and then confirm the proper expression of these proteins before running any experiment. Fluorescent bio-markers, such as green fluorescent protein (GFP), have been used for this purpose and co-expressed in the same cell population. The fluorescent signal from such proteins provides a monitoring mechanism to evaluate the expression of optogenetic opsins over time. The conventional method to confirm successful gene delivery is to sacrifice the animal, extract and slice the brain tissue, and image the corresponding slices using a fluorescence microscope. Obviously, determining the level of expression over time without sacrificing the animal is highly desirable. Also, optogenetics can be combined with cell-type-specific optical recording of neural activity, for example by imaging the fluorescent signal of genetically encoded calcium indicators. One challenging step in any optogenetic experiment is delivering an adequate amount of light to target areas for proper stimulation of light-sensitive proteins. Delivering sufficient light density to a target area while minimizing off-target stimulation requires a precise estimation of the light distribution in the tissue. A good estimate of the tissue optical properties is necessary for predicting the distribution of light in any turbid medium.
    The first objective of this project was the design and development of a high-resolution optoelectronic device to extract the optical properties of rats' brain tissue (including the absorption coefficient, scattering coefficient, and anisotropy factor) at three wavelengths (405 nm, 532 nm, and 635 nm) and for three cuts: transverse, sagittal, and coronal. The database of extracted optical properties was linked to 3D Monte Carlo simulation software to predict the light distribution for different light source configurations. This database was then used in the next phase of the project, the development of a fluorescence tomography scanner. Given the importance of fluorescence imaging in optogenetics, another objective of this project was to design a fluorescence tomography system to confirm the expression of the light-sensitive proteins and to optically record neural activity using calcium indicators in a non- or minimally invasive manner. The method of fluorescence laminar optical tomography (FLOT) has been used successfully to image superficial areas up to 2 mm deep inside a scattering medium with a spatial resolution of ~200 µm. In this project, we developed a FLOT system specifically customized for in-vivo brain imaging experiments. While FLOT offers a relatively simple and inexpensive design for imaging superficial areas in the brain, its imaging depth is still limited to 2 mm and its resolution drops as the imaging depth increases. To address this shortcoming, we worked on a complementary system based on the digital optical phase conjugation (DOPC) method, which was previously shown to be capable of performing fluorescence tomography up to 4 mm deep inside biological tissue with a lateral resolution of ~50 µm.
    This system also provides a non-invasive method to deliver light deep inside the brain tissue for neurostimulation applications that are not feasible using conventional techniques because of the high level of scattering in most tissue samples. In the developed DOPC system, the performance of the system in focusing light through and inside scattering media was quantified. We also showed how misalignments and imperfections of the optical components can severely reduce the capability of a DOPC setup. A systematic calibration algorithm was then proposed and experimentally applied to our DOPC system to compensate for the main aberrations, such as reference beam aberrations and the backplane curvature of the spatial light modulator. In a highly scattering sample, the calibration algorithm achieved an up to 8-fold increase in the peak-to-background ratio (PBR).
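    The Monte Carlo prediction of light distribution mentioned above rests on sampling exponentially distributed free paths between photon interactions. A minimal sketch, using illustrative coefficients rather than the rat-brain values measured in the thesis, checks the sampled ballistic transmission through a thin slab against the closed-form Beer-Lambert law:

```python
import numpy as np

# Illustrative optical coefficients; not the measured rat-brain values
mu_a, mu_s = 0.6, 10.0               # absorption and scattering, 1/mm
mu_t = mu_a + mu_s                   # total interaction coefficient
thickness = 0.2                      # slab thickness, mm

# Free path lengths between interactions are exponential with rate mu_t;
# a photon crosses unscattered if its first interaction lies beyond the slab.
rng = np.random.default_rng(0)
paths = rng.exponential(1.0 / mu_t, 200_000)
mc_transmission = float(np.mean(paths > thickness))

# Beer-Lambert gives the same ballistic transmission in closed form
beer_lambert = float(np.exp(-mu_t * thickness))
```

    A full 3D simulation additionally samples scattering angles from a phase function weighted by the anisotropy factor, but the path-sampling step above is the core of the method.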

    Design of a Specialized UAV Platform for the Discharge of a Fire Extinguishing Capsule

    This thesis deals with the design of an unmanned multirotor aircraft system specialized for autonomous detection and localization of fires from onboard sensors, and the task of fast and effective fire extinguishment. The main part of this thesis focuses on the detection of fires in thermal images and their localization in the world using an onboard depth camera. The localized fires are used to optimally position the unmanned aircraft in order to effectively discharge an ampoule filled with a fire extinguishant from an onboard launcher. The developed methods are analyzed in detail and their performance is evaluated in simulation scenarios as well as in real-world experiments. The included quantitative and qualitative analysis verifies the feasibility and robustness of the system.
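    The thermal-image detection step can be sketched with a simple fixed-threshold segmentation followed by 4-connected component grouping; this is a generic illustration (the thesis's actual pipeline also fuses depth data to localize fires in 3D), with a synthetic frame and threshold chosen for the example.

```python
import numpy as np

def detect_hotspots(frame, threshold):
    """Return (row, col) centroids of 4-connected regions above threshold."""
    mask = frame > threshold
    seen = np.zeros_like(mask)
    centroids = []
    for r0 in range(mask.shape[0]):
        for c0 in range(mask.shape[1]):
            if mask[r0, c0] and not seen[r0, c0]:
                # Flood-fill one connected hot region
                stack, pixels = [(r0, c0)], []
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    pixels.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1] \
                                and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                pixels = np.array(pixels)
                centroids.append(tuple(pixels.mean(axis=0)))
    return centroids

frame = np.zeros((8, 8))
frame[2:4, 2:4] = 400.0          # synthetic hot region (e.g. degrees C)
hotspots = detect_hotspots(frame, threshold=300.0)
```

    A real system would also filter regions by size and persistence before committing the aircraft to a discharge position.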

    A Probabilistic-Based Approach to Monitoring Tool Wear State and Assessing Its Effect on Workpiece Quality in Nickel-Based Alloys

    The objective of this research is first to investigate the applicability and advantage of statistical state estimation methods for predicting tool wear in machining nickel-based superalloys over deterministic methods, and second to study the effects of cutting tool wear on the quality of the part. Nickel-based superalloys are among the classes of materials known as hard-to-machine alloys. These materials exhibit a unique combination of retaining their strength at high temperature and offering high resistance to corrosion and creep. These characteristics make them ideal candidates for harsh environments such as the combustion chambers of gas turbines. However, the same characteristics that make nickel-based alloys suitable for aggressive conditions introduce difficulties when machining them. High strength and low thermal conductivity accelerate cutting tool wear and increase the possibility of in-process tool breakage. A blunt tool deteriorates the surface integrity and damages the quality of the machined part by inducing high tensile residual stresses, generating micro-cracks, altering the microstructure, or leaving a poor roughness profile behind. As a consequence, the expensive superalloy workpiece would have to be scrapped. The current dominant solution in industry is to sacrifice the productivity rate by replacing the tool in the early stages of its life, or to choose conservative cutting conditions in order to lower the wear rate and preserve workpiece quality. Thus, monitoring the state of the cutting tool and estimating its effects on part quality is a critical task for increasing productivity and profitability in machining superalloys. This work aims first to introduce a probabilistic-based framework for estimating tool wear in milling and turning of superalloys, and second to study the detrimental effects of the functional state of the cutting tool, in terms of wear and wear rate, on part quality.
    In the milling operation, the mechanisms of tool failure were first identified and, based on the rapid catastrophic failure of the tool, a Bayesian inference method (i.e., Markov chain Monte Carlo, MCMC) was used for parameter calibration of tool wear using a mechanistic power model. The calibrated model was then used in the state-space probabilistic framework of a Kalman filter to estimate tool flank wear. Furthermore, an on-machine laser measuring system was utilized and fused into the Kalman filter to improve the estimation accuracy. In the turning operation, the behavior of progressive wear was investigated as well. Due to the nonlinear nature of wear in turning, an extended Kalman filter was designed to track progressive wear, and the results of the probabilistic method were compared with a deterministic technique, where significant improvement (more than a 60% increase in estimation accuracy) was achieved. To fulfill the second objective of this research, understanding the underlying effects of wear on part quality in cutting nickel-based superalloys, a comprehensive study of surface roughness, dimensional integrity, and residual stress was conducted. The estimated results derived from a probabilistic filter were used to find the proper correlations between wear, surface roughness, and dimensional integrity, along with a finite element simulation for predicting the residual stress profile for sharp and worn cutting tool conditions. The output of this research provides essential information on condition monitoring of the tool and its effects on product quality. The low-cost Hall effect sensor used in this work to capture spindle power in the context of the stochastic filter can effectively estimate tool wear in both milling and turning operations, while the estimated wear can be used to generate knowledge of the state of workpiece surface integrity.
    Therefore, the true functionality and efficiency of the tool in superalloy machining can be evaluated without additional high-cost sensing.
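    The nonlinear wear tracking described above can be sketched with a scalar extended Kalman filter. The power-law relation between flank wear and spindle power, the wear rate, and all noise values below are hypothetical stand-ins, not the calibrated model from this work.

```python
import numpy as np

# Hypothetical power-law relation between flank wear w and spindle power z:
# z = a * w**b + noise. All values are illustrative, not calibrated.
a, b = 50.0, 0.6
q, r = 1e-6, 4.0          # process and measurement noise variances
rate = 0.002              # assumed wear growth per pass

def ekf_wear(measurements, w0=0.05, p0=0.01):
    """Scalar EKF: track wear w from noisy power readings."""
    w, p = w0, p0
    estimates = []
    for z in measurements:
        # Predict: wear grows by a nominal rate each pass
        w = w + rate
        p = p + q
        # Update: linearize the measurement model around the prediction
        h = a * b * w ** (b - 1.0)        # dz/dw
        s = h * p * h + r
        k = p * h / s
        w = w + k * (z - a * w ** b)
        p = (1.0 - k * h) * p
        estimates.append(w)
    return estimates

# Synthetic ground truth and noisy power measurements
rng = np.random.default_rng(3)
true_w = 0.05 + rate * np.arange(1, 201)
zs = a * true_w ** b + rng.normal(0.0, 2.0, size=200)
est = ekf_wear(zs)
```

    In the thesis's setting the measurement stream would come from the Hall effect spindle-power sensor, with the model calibrated beforehand via MCMC.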

    The University Defence Research Collaboration In Signal Processing

    This chapter describes the development of algorithms for the automatic detection of anomalies in multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution in both network structure (new nodes appearing and existing nodes disappearing) and inter-node relations. The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure the models of normality are not corrupted by unreliable and ambiguous data. Exploiting the context of each node's activity in complex networks offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.
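    One common building block for such normality models is a robust outlier score that is not itself corrupted by the anomalies it is hunting. The sketch below uses the median/MAD modified z-score on a toy sensor stream; it illustrates the idea only and is not one of the UDRC algorithms.

```python
import numpy as np

def robust_anomalies(x, k=3.5):
    """Flag points whose modified z-score (median/MAD based) exceeds k.

    Median and MAD are robust to the anomalies themselves, unlike the
    mean and standard deviation, so the normality model is not skewed
    by the very points it should flag.
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        return np.zeros(len(x), dtype=bool)
    z = 0.6745 * (x - med) / mad     # 0.6745 scales MAD to a normal sigma
    return np.abs(z) > k

readings = [10.1, 9.8, 10.3, 9.9, 10.0, 42.0, 10.2]
flags = robust_anomalies(readings)
```

    A networked system would compute such scores per node and then reason about spatial and temporal context before declaring a behaviour threatening.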

    Pattern-theoretic foundations of automatic target recognition in clutter

    Issued as final report. Air Force Office of Scientific Research (U.S.)

    Synchrotron x-ray radiography, fluorescence, and imaging of coaxial rocket injector sprays

    The mixing of gas and liquid streams in atomizing flows is a physical phenomenon of fundamental and practical importance. The individual fluid streams, each with its own momentum, viscosity, surface tension and thermodynamic state, mix to create a turbulent and chaotic flow. The complexity of the mixing process is often too great to predict accurately, even using modern computational models, and the resulting flow is often too optically dense to probe experimentally using current methods. In this thesis, a study of x-ray based diagnostics was performed on an optically dense, multiphase coaxial rocket flow using a variety of new and established diagnostic techniques. The injector studied was a NASA-designed 110 N swirl-coaxial rocket injector designed to operate on gaseous methane and liquid oxygen. During the investigation, a range of fluid combinations was studied, including water, liquid nitrogen and liquid argon as simulants for liquid oxygen, as well as air, gaseous nitrogen, argon, and krypton as simulants for gaseous methane. A range of diagnostics was performed in the study, with all experiments conducted at the 7-BM beamline of the Advanced Photon Source (APS) at Argonne National Laboratory. The x-ray source at the APS provided both a narrowband monochromatic x-ray beam for line-of-sight investigations of radiography and fluorescence, and a polychromatic 'white beam' for two-dimensional, time-sequential radiography. The advantages, limitations and accuracy of the techniques are discussed, and the results of investigations into fluid mixing are given.
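    Line-of-sight x-ray radiography of sprays typically recovers the equivalent path length (EPL) of liquid along each ray by inverting Beer-Lambert attenuation. A minimal sketch, with an attenuation coefficient chosen purely for illustration rather than taken from the thesis:

```python
import numpy as np

# Illustrative attenuation coefficient for the liquid at a given x-ray
# energy; not a value from the thesis.
mu_liquid = 0.5   # 1/mm

def equivalent_path_length(I, I0, mu):
    """Invert Beer-Lambert: line-of-sight liquid thickness from transmission.

    I / I0 = exp(-mu * EPL)  =>  EPL = -ln(I / I0) / mu
    """
    return -np.log(np.asarray(I, dtype=float) / I0) / mu

# Synthetic detector counts for three rays through known liquid thicknesses
I0 = 1000.0
I = I0 * np.exp(-mu_liquid * np.array([0.1, 1.0, 2.5]))
epl = equivalent_path_length(I, I0, mu_liquid)
```

    Scanning the beam across the spray yields a projected liquid-mass map even where the flow is opaque to visible light.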

    Photonics simulation and modelling of skin for design of spectrocutometer

    Peer reviewed

    Shrinkage Based Particle Filters for Tracking in Wireless Sensor Networks with Correlated Sparse Measurements

    This thesis focuses on the development of mobile tracking approaches in wireless sensor networks (WSNs) with correlated and sparse measurements. In wireless networks, devices have the ability to transfer information over the network nodes via wireless signals. The strength of a wireless signal at a receiver is referred to as the received signal strength (RSS), and many wireless technologies such as Wi-Fi, ZigBee, the Global Positioning System (GPS), and other satellite systems provide RSS measurements for signal transmission. Due to the availability of RSS measurements, various tracking approaches in WSNs have been developed based on them. Unfortunately, the feasibility of tracking using RSS measurements is highly dependent on the connectivity of the wireless signals. The existing connectivity may be intermittently disrupted due to low battery status on a sensor node or temporary sensor malfunction. In ad-hoc networks, the number of RSS observations changes rapidly due to the movements of network nodes and the mobile user. As a result, the tracking algorithms have limited data with which to perform state inference, and this prevents accurate tracking. Furthermore, consecutive RSS measurements obtained from nearby sensor nodes exhibit spatio-temporal correlation, which provides extra information to be exploited. Exploiting the statistical information in the measurement noise covariance matrix increases the tracking accuracy. When the number of observations is relatively large, estimating the measurement noise covariance matrix is feasible. However, when it is relatively small, the covariance matrix estimate becomes ill-conditioned and non-invertible. In situations where the RSS measurements are corrupted by outliers, state inference can be misleading. Outliers can come from sudden environmental disturbances, temporary sensor failures or even the intrinsic noise of the sensor device.
    The existence of outliers should be handled accordingly to avoid false and poor estimates. This thesis first proposes a shrinkage-based particle filter for mobile tracking in WSNs. It estimates the correlation in the RSS measurements using a shrinkage estimator, which overcomes the ill-conditioning and non-invertibility of the measurement noise covariance matrix. The estimated covariance matrix is then applied within the particle filter. Secondly, it develops a robust shrinkage-based particle filter for the problem of outliers in the RSS measurements. The proposed algorithm provides a non-parametric shrinkage estimate and takes the form of a multiple-model particle filter. The performance of both proposed filters is demonstrated in challenging scenarios for mobile tracking.
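    The shrinkage idea can be sketched by blending the sample covariance with a scaled-identity target, one common choice of target (the thesis's estimator may differ in its target and in how the shrinkage intensity is chosen):

```python
import numpy as np

def shrinkage_covariance(X, alpha):
    """(1 - alpha) * sample covariance + alpha * scaled-identity target.

    The target (tr(S)/p) * I keeps the estimate well-conditioned and
    invertible even when there are fewer samples than dimensions, which
    is exactly the sparse-measurement regime described above.
    """
    X = np.asarray(X, dtype=float)
    p = X.shape[1]
    S = np.cov(X, rowvar=False, bias=True)
    target = (np.trace(S) / p) * np.eye(p)
    return (1.0 - alpha) * S + alpha * target

rng = np.random.default_rng(7)
X = rng.normal(size=(5, 10))             # fewer samples than dimensions
S = np.cov(X, rowvar=False, bias=True)   # rank-deficient, not invertible
Sigma = shrinkage_covariance(X, alpha=0.3)
```

    The shrunk matrix `Sigma` can then be inverted safely inside the particle filter's measurement likelihood, where the raw sample covariance `S` cannot.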