147 research outputs found

    Dynamic super-resolution in particle tracking problems

    Particle tracking in biological imaging is concerned with reconstructing the trajectories, locations, or velocities of the target particles. The standard approach to particle tracking consists of two steps: first, statically reconstructing the source locations in each time step, and second, applying tracking techniques to obtain the trajectories and velocities. In contrast, dynamic reconstruction seeks to recover the source locations and velocities simultaneously from all frames, which enjoys certain advantages. In this paper, we provide a rigorous mathematical analysis of the resolution limit for reconstructing source number, locations, and velocities by general dynamic reconstruction in particle tracking problems, by which we demonstrate the possibility of achieving super-resolution for dynamic reconstruction. We show that when the location-velocity pairs of the particles are separated beyond certain distances (the resolution limits), the number of particles and the location-velocity pairs can be stably recovered. The resolution limits are related to the cut-off frequency of the imaging system, the signal-to-noise ratio, and the sparsity of the source. From these estimates, we also derive a stability result for a sparsity-promoting dynamic reconstruction. In addition, we show that the reconstruction of velocities has a better resolution limit, which improves steadily as the particles move. This result is derived from the observation that the inherent cut-off frequency for velocity recovery can be viewed as the total observation time multiplied by the cut-off frequency of the imaging system, which may lead to a better resolution limit than the one for each diffraction-limited frame. It is anticipated that this observation can inspire new reconstruction algorithms that improve the resolution of particle tracking in practice.
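The dynamic measurement model described above can be sketched concretely: each frame provides low-pass Fourier samples of point sources whose positions drift linearly in time. A minimal simulation follows, where all locations, velocities, amplitudes, the cut-off frequency, and the frame times are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative point sources: locations x_j, velocities v_j, intensities a_j.
x = np.array([0.0, 0.3])          # source locations (assumed)
v = np.array([0.5, -0.2])         # source velocities (assumed)
amps = np.array([1.0, 0.8])       # source intensities (assumed)

cutoff = 10                       # cut-off frequency of the imaging system
freqs = np.arange(-cutoff, cutoff + 1)
times = np.linspace(0.0, 1.0, 5)  # frame times

def frame_measurements(t):
    """Low-pass Fourier samples of the image at time t."""
    pos = x + v * t               # particle positions drift linearly
    return (amps * np.exp(-2j * np.pi * np.outer(freqs, pos))).sum(axis=1)

# Stack all frames: static reconstruction fits locations to each row
# separately; dynamic reconstruction fits (location, velocity) pairs to
# the whole stack at once.
stack = np.stack([frame_measurements(t) for t in times])
print(stack.shape)                # (num_frames, num_freqs)
```

The longer the observation window, the larger the spread of the phase term 2*pi*f*v*t across frames, which is one way to see why the velocity estimate benefits from the total observation time.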

    Parametric modeling for damped sinusoids from multiple channels


    Reduced-order modeling of power electronics components and systems

    This dissertation addresses the seemingly inevitable compromise between modeling fidelity and simulation speed in power electronics. Higher-order effects are considered at the component and system levels. Order-reduction techniques are applied to provide insight into accurate, computationally efficient component-level simulations (via reduced-order physics-based models) and system-level simulations (via multiresolution simulation). Proposed high-order models, verified with hardware measurements, are in turn used to verify the accuracy of the final reduced-order models for both small- and large-signal excitations. At the component level, dynamic high-fidelity magnetic equivalent circuits are introduced for laminated and solid magnetic cores. Automated linear and nonlinear order-reduction techniques are introduced for linear magnetic systems, saturated systems, systems with relative motion, and multiple-winding systems, to extract the desired essential system dynamics. Finite-element models of magnetic components incorporating relative motion are set forth and then reduced. At the system level, a framework for multiresolution simulation of switching converters is developed. Multiresolution simulation provides an alternative method to analyze power converters by providing an appropriate amount of detail based on the time scale and phenomenon being considered. A detailed full-order converter model is built based upon high-order component models and accurate switching transitions. Efficient order-reduction techniques are used to extract several lower-order models for the desired resolution of the simulation. This simulation framework is extended to higher-order converters, converters with nonlinear elements, and closed-loop systems. The resulting rapid-to-integrate component models and flexible simulation frameworks could form the computational core of future virtual prototyping design and analysis environments for energy processing units.
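As a toy illustration of linear order reduction in the spirit described above, the sketch below performs modal truncation of a small state-space model, discarding fast modes whose transients decay almost instantly at the time scales of interest. The 4th-order system and the mode-retention threshold are invented for the example, not taken from the dissertation:

```python
import numpy as np

# Full-order model already in modal (diagonal) form: two slow modes,
# two fast modes. Matrices are illustrative assumptions.
A = np.diag([-1.0, -5.0, -100.0, -500.0])
B = np.ones((4, 1))
C = np.ones((1, 4))

# Keep only the slow (dominant) modes.
keep = np.abs(np.diag(A)) < 50.0
Ar, Br, Cr = A[np.ix_(keep, keep)], B[keep], C[:, keep]

# Compare DC gains of the full and reduced models; the small mismatch
# is the discarded fast modes' static contribution, which a
# residualization step would restore.
dc_full = (C @ np.linalg.solve(-A, B)).item()
dc_red = (Cr @ np.linalg.solve(-Ar, Br)).item()
print(Ar.shape, round(dc_full, 3), round(dc_red, 3))
```

Balanced truncation, as used for the automated techniques in the dissertation, ranks states by Hankel singular values instead of eigenvalue speed, but the keep/discard mechanics are analogous.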

    Multivariable isoperformance methodology for precision opto-mechanical systems

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2001. Includes bibliographical references (p. 277-285).
    Precision opto-mechanical systems, such as space telescopes, combine structures, optics and controls in order to meet stringent pointing and phasing requirements. In this context a novel approach to the design of complex, multi-disciplinary systems is presented in the form of a multivariable isoperformance methodology. The isoperformance approach first finds a point design within a given topology which meets the performance requirements with sufficient margins. The performance outputs are then treated as equality constraints, and the non-uniqueness of the design space is exploited by trading key disturbance, plant, optics and controls parameters with respect to each other. Three algorithms (branch-and-bound, tangential front following and vector spline approximation) are developed for the bivariate and multivariable problem. The challenges of large-order models are addressed by presenting a fast diagonal Lyapunov solver, a priori error bounds for model reduction, and a governing sensitivity equation for similarity-transformed state-space realizations. Specific applications developed with this technique are error budgeting and multiobjective design optimization. The goal of the multiobjective design optimization is to achieve a design which is Pareto optimal, such that multiple competing objectives can be satisfied within the performance-invariant set. Thus, situations are avoided where very costly and hard-to-meet requirements are levied onto one subsystem, while other subsystems hold substantial margins. An experimental validation is carried out on the DOLCE laboratory testbed. The testbed allows verification of the predictive capability of the isoperformance technique on models of increasing fidelity. A comparison with experimental results, trading excitation amplitude and payload mass, is demonstrated. The predicted performance contours match the experimental data very well at low excitation levels, typical of the disturbance environment on precision opto-mechanical systems. The relevance of isoperformance to space systems engineering is demonstrated with a comprehensive NEXUS spacecraft dynamics and controls analysis. It is suggested that isoperformance is a useful concept in other fields of engineering science, such as crack growth calculations in structures. The isoperformance approach enhances the understanding of complex opto-mechanical systems beyond the local neighborhood of a particular point design.
    by Olivier L. de Weck, Ph.D.
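The fast diagonal Lyapunov solver mentioned in this abstract rests on a standard fact: once the state matrix is diagonalized, the Lyapunov equation decouples elementwise and needs no iterative solve. A hedged sketch, with eigenvalues and noise covariance chosen purely for illustration:

```python
import numpy as np

# Stable eigenvalues of a similarity-transformed (diagonal) state matrix
# and a state-noise covariance; both are illustrative assumptions.
lam = np.array([-1.0 + 2.0j, -1.0 - 2.0j, -3.0 + 0.0j])
Q = np.eye(3, dtype=complex)

# For A = diag(lam), the Lyapunov equation A P + P A^H + Q = 0 has the
# closed-form elementwise solution P_ij = -Q_ij / (lam_i + conj(lam_j)).
P = -Q / (lam[:, None] + np.conj(lam)[None, :])

# Verify by plugging P back into the Lyapunov equation.
A = np.diag(lam)
residual = A @ P + P @ A.conj().T + Q
print(np.abs(residual).max())
```

For an n-state model this costs O(n^2) divisions instead of the O(n^3) of a general Lyapunov solve, which is what makes repeated performance evaluations over a design space affordable.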

    Development and experimental validation of direct controller tuning for spaceborne telescopes

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2000. Includes bibliographical references (p. 285-294).
    Strict requirements on the performance of future space-based observatories, such as the Space Interferometry Mission (SIM) and the Next Generation Space Telescope (NGST), will extend the state of the art of mission-critical, spaceflight-proven active control design. A control design strategy, which combines the high performance and stability robustness guarantees of modern robust-control design with the spaceflight heritage of conventional control design, is proposed to meet the strict requirements while maintaining traceability to the successful controllers of predecessor spacecraft. Two principal tools are developed: an analysis algorithm that quantifies each sensor/actuator combination's effectiveness for control, and a design engine which tunes a baseline controller to improve performance and/or stability robustness. The sensor/actuator effectiveness indexing tool requires a reduced-order state-space model of the plant. A modification of the balanced reduction method is introduced which improves numerical conditioning so that the order of large models of flexible spacecraft may be decreased. For each sensor and actuator an index is computed using the modal controllability from an actuator weighted by the modal cost in the performance, and the modal observability of a sensor weighted by the modal cost of the disturbance. The special case of actuators that are used for active output isolation is handled separately. The designer uses the sensor/actuator indexing tool to select which control channels to emphasize in the tuning. The tuning tool is based on forming an augmented cost from weighting performance, stability robustness, deviation from the baseline controller, and controller gain. The tuning algorithm can operate with the plant's state-space design model or directly with the plant's measured frequency-response data. Two differentiable multivariable stability robustness metrics are formed, one based on the maximum singular value of the sensitivity transfer matrix and one based on the multivariable Nyquist locus. The controller is parameterized with a general tridiagonal parameterization based on the real-modal state-space form. The augmented cost is chosen to be differentiable, and a closed-loop stability-preserving unconstrained nonlinear descent program is used to directly compute controller parameters that decrease the augmented cost. To automate the closed-loop stability determination in the measured-data-based designs, a rule-based algorithm is created to invoke the multivariable Nyquist stability criteria. The use of the tuning technique is placed in context with a high-level control design methodology. The tuning technique is evaluated on a sample problem and then experimentally demonstrated on a laboratory test article with dynamics, sensor suite, and actuator suite all similar to future spaceborne observatories. The developed test article is the first space-telescope-like experimental facility to combine large-angle slewing with nanometer optical phasing and sub-arcsecond pointing in the presence of spacecraft-like disturbances. The technique is applied to generate an improved controller for a model of the SIM spacecraft.
    by Gregory J.W. Mallory, Ph.D.
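The sensor/actuator effectiveness index described above can be illustrated with a toy computation: weight each actuator's modal controllability by each mode's cost in the performance, then sum over modes. The 3-mode, 2-actuator numbers below are made up, and the exact index definition belongs to the thesis; this is only the flavor of the calculation:

```python
import numpy as np

# Modal input matrix: rows are modes, columns are actuators (assumed values).
B = np.array([[1.0, 0.1],
              [0.2, 1.0],
              [0.5, 0.5]])
# Contribution of each mode to the performance output (assumed values).
modal_cost = np.array([10.0, 1.0, 0.1])

# Index for actuator k: sum over modes of |modal controllability| * modal cost.
index = (np.abs(B) * modal_cost[:, None]).sum(axis=0)
best = int(np.argmax(index))
print(index, best)   # actuator 0 drives the costly mode, so it ranks first
```

A sensor index would be formed the same way from the modal output matrix, weighted by each mode's cost in the disturbance path.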

    An Integrated Approach to Performance Monitoring and Fault Diagnosis of Nuclear Power Systems

    In this dissertation an integrated framework of process performance monitoring and fault diagnosis was developed for nuclear power systems using robust data driven model based methods, which comprises thermal hydraulic simulation, data driven modeling, identification of model uncertainty, and robust residual generator design for fault detection and isolation. In the applications to nuclear power systems, on the one hand, historical data are often not able to characterize the relationships among process variables because operating setpoints may change and thermal fluid components such as steam generators and heat exchangers may experience degradation. On the other hand, first-principle models always have uncertainty and are often too complicated in terms of model structure to design residual generators for fault diagnosis. Therefore, a realistic fault diagnosis method needs to combine the strength of first principle models in modeling a wide range of anticipated operation conditions and the strength of data driven modeling in feature extraction. In the developed robust data driven model-based approach, the changes in operation conditions are simulated using the first principle models and the model uncertainty is extracted from plant operation data such that the fault effects on process variables can be decoupled from model uncertainty and normal operation changes. It was found that the developed robust fault diagnosis method was able to eliminate false alarms due to model uncertainty and deal with changes in operating conditions throughout the lifetime of nuclear power systems. Multiple methods of robust data driven model based fault diagnosis were developed in this dissertation. A complete procedure based on causal graph theory and data reconciliation method was developed to investigate the causal relationships and the quantitative sensitivities among variables so that sensor placement could be optimized for fault diagnosis in the design phase. 
Reconstruction-based Principal Component Analysis (PCA) was applied to handle both simple and complex faults in steady-state diagnosis, in the context of operation scheduling and maintenance management. A robust PCA model-based method was developed to distinguish fault effects from model uncertainties. To improve the sensitivity of fault detection, a hybrid PCA model-based approach was developed to incorporate system knowledge into data-driven modeling. Subspace identification was proposed to extract state-space models from thermal hydraulic simulations, and a robust dynamic residual generator design algorithm was developed for fault diagnosis, with a view toward fault-tolerant control and extension to reactor startup and load-following operating conditions. The developed robust dynamic residual generator design algorithm is unique in that explicit identification of model uncertainty is not necessary. Finally, the developed methods were demonstrated on the IRIS Helical Coil Steam Generator (HCSG) system. A simulation model was first developed for this system. Steady-state simulation revealed that the primary coolant temperature profile could be used to indicate the water inventory inside the HCSG tubes. The performance monitoring and fault diagnosis module was then developed to monitor sensor faults, flow distribution abnormality, and heat performance degradation under both steady-state and dynamic operating conditions. This dissertation bridges the gap between theoretical research on computational intelligence and engineering design in performance monitoring and fault diagnosis for nuclear power systems. The new algorithms have the potential to be integrated into Generation III and Generation IV nuclear reactor I&C designs after they are tested on current nuclear power plants or Generation IV prototype reactors.
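The PCA residual idea at the core of these methods can be shown in a few lines: fit principal components to normal-operation data, then flag samples whose squared prediction error (the Q or SPE statistic) is large. The data, fault size, and one-component model below are illustrative assumptions, not the dissertation's plant model:

```python
import numpy as np

rng = np.random.default_rng(1)
# Normal operation: two correlated "process variables" plus small noise.
t = rng.normal(size=(500, 1))
X = np.hstack([t, 2.0 * t]) + 0.01 * rng.normal(size=(500, 2))

# Fit a 1-component PCA model to the normal data.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
P = Vt[:1].T                       # retained principal direction

def spe(x):
    """Squared prediction error (Q statistic) of one sample."""
    r = (x - mean) - P @ (P.T @ (x - mean))
    return float(r @ r)

normal = np.array([1.0, 2.0])      # consistent with the learned correlation
faulty = np.array([1.0, 3.0])      # a sensor bias breaks the correlation
print(spe(normal), spe(faulty))    # the faulty sample has much larger SPE
```

The robust variants discussed above go further by separating such residuals from model uncertainty so that operating-point changes do not trigger false alarms.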

    Rank Minimization over Finite Fields: Fundamental Limits and Coding-Theoretic Interpretations

    This paper establishes information-theoretic limits in estimating a finite-field low-rank matrix given random linear measurements of it. These linear measurements are obtained by taking inner products of the low-rank matrix with random sensing matrices. Necessary and sufficient conditions on the number of measurements required are provided. It is shown that these conditions are sharp and that the minimum-rank decoder is asymptotically optimal. The reliability function of this decoder is also derived by appealing to de Caen's lower bound on the probability of a union. The sufficient condition also holds when the sensing matrices are sparse - a scenario that may be amenable to efficient decoding. More precisely, it is shown that if the n × n sensing matrices contain, on average, Ω(n log n) entries, the number of measurements required is the same as when the sensing matrices are dense and contain entries drawn uniformly at random from the field. Analogies are drawn between the above results and rank-metric codes in the coding theory literature. In fact, we are also strongly motivated by understanding when minimum-rank-distance decoding of random rank-metric codes succeeds. To this end, we derive distance properties of equiprobable and sparse rank-metric codes. These distance properties provide a precise geometric interpretation of the fact that the sparse ensemble requires as few measurements as the dense one. Finally, we provide a non-exhaustive procedure to search for the unknown low-rank matrix.
    Comment: Accepted to the IEEE Transactions on Information Theory; presented at the IEEE International Symposium on Information Theory (ISIT) 201
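A toy instance of the measurement model over GF(2) makes the minimum-rank decoder concrete: observe inner products of an unknown low-rank matrix with random sensing matrices, then pick the lowest-rank matrix consistent with the observations. The sizes (2x2, rank 1, 3 measurements) are assumptions chosen so exhaustive search is instant; the paper's regime is asymptotic:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination with XOR row operations."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        rows = np.nonzero(M[r:, c])[0]
        if rows.size == 0:
            continue
        M[[r, r + rows[0]]] = M[[r + rows[0], r]]   # pivot to row r
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                        # eliminate column c
        r += 1
    return r

X = np.array([[1, 1], [1, 1]], dtype=np.uint8)          # rank 1 over GF(2)
H = rng.integers(0, 2, size=(3, 2, 2), dtype=np.uint8)  # sensing matrices
y = np.array([(h * X).sum() % 2 for h in H])            # <H_k, X> in GF(2)

# Minimum-rank decoder: among all matrices consistent with y, return one
# of lowest rank (here by exhausting all 16 binary 2x2 matrices).
candidates = [np.array(bits, dtype=np.uint8).reshape(2, 2)
              for bits in itertools.product([0, 1], repeat=4)]
feasible = [Z for Z in candidates
            if all((h * Z).sum() % 2 == yk for h, yk in zip(H, y))]
Xhat = min(feasible, key=gf2_rank)
print(Xhat, gf2_rank(Xhat))
```

With so few measurements the decoder is not guaranteed to recover X uniquely; the paper's necessary and sufficient conditions quantify exactly how many measurements make recovery reliable.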

    Two dimensional signal processing for storage channels

    Over the past decade, storage channels have undergone a steady increase in capacity. With the prediction of achieving 10 Tb/in² areal density for magnetic recording channels in sight, the industry is pushing towards different technologies for storage channels. Heat-assisted magnetic recording, bit-patterned media, and two-dimensional magnetic recording (TDMR) are cited as viable alternative technologies to meet the increasing market demand. Among these technologies, the two-dimensional magnetic recording channel has the advantage of using conventional media while relying on improvements from signal processing. Capacity-approaching codes and detection methods tailored to magnetic recording channels are the main signal processing tools used in magnetic recording. The promise is that two-dimensional signal processing will play a role in bringing about the theoretical predictions. The main challenges in TDMR media are as follows: i) the small area allocated to each bit on the media, and the sophisticated read and write processes in shingled magnetic recording devices, result in a significant amount of noise; ii) two-dimensional inter-symbol interference is intrinsic to the nature of shingled magnetic recording. Thus, a feasible two-dimensional communication system is needed to combat the errors that arise from aggressive read and write processes. In this dissertation, we present some of the work done on signal processing for storage channels. We discuss i) the nano-scale model of the storage channel, ii) noise characteristics and corresponding detection strategies, and iii) two-dimensional signal processing targeted at shingled magnetic recording.
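The two-dimensional inter-symbol interference the abstract identifies as intrinsic to shingled recording can be sketched directly: the readback at each bit position is a 2D convolution of neighboring recorded bits with a read-head response, plus noise. The 3x3 response, grid size, and noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, size=(8, 8)) * 2 - 1      # +/-1 recorded bits

# A symmetric 2D read-head response: the center bit dominates, but
# down-track and cross-track neighbors leak into each reading.
h = np.array([[0.1, 0.2, 0.1],
              [0.2, 1.0, 0.2],
              [0.1, 0.2, 0.1]])

# Readback = 2D convolution of the bit grid with the head response.
padded = np.pad(bits.astype(float), 1)
readback = np.zeros(bits.shape)
for i in range(bits.shape[0]):
    for j in range(bits.shape[1]):
        readback[i, j] = (h * padded[i:i + 3, j:j + 3]).sum()
readback += 0.1 * rng.normal(size=readback.shape)   # media/electronics noise

# A naive symbol-by-symbol threshold detector ignores the 2D ISI;
# 2D detection strategies exploit the neighbor structure instead.
decisions = np.where(readback > 0, 1, -1)
print((decisions != bits).mean())                   # raw bit error rate
```

As the per-bit area shrinks, the off-center taps of the response grow relative to the center tap, which is why detection tailored to the 2D structure becomes essential.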