
    On Sparse Vector Recovery Performance in Structurally Orthogonal Matrices via LASSO

    In this paper, we consider the compressed sensing problem of reconstructing a sparse signal from an undersampled set of noisy linear measurements. The regularized least squares or least absolute shrinkage and selection operator (LASSO) formulation is used for signal estimation. The measurement matrix is assumed to be constructed by concatenating several randomly chosen orthogonal bases, which we refer to as a structurally orthogonal matrix. Such a measurement matrix is highly relevant to large-scale compressive sensing applications because it facilitates rapid computation and parallel processing. Using the replica method from statistical physics, we derive the mean-squared-error (MSE) formula for reconstruction over the structurally orthogonal matrix in the large-system regime. Extensive numerical experiments are provided to verify the analytical result. We then use the analytical result to investigate the MSE behavior of the LASSO over structurally orthogonal matrices, with an emphasis on performance comparisons with matrices whose entries are independent and identically distributed (i.i.d.) Gaussian. We find that structurally orthogonal matrices are at least as good as their i.i.d. Gaussian counterparts, making their use attractive in practical applications.
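
    As a rough illustration of the setup (our own sketch; the matrix construction, dimensions, noise level and LASSO penalty below are illustrative choices, not the paper's), one can build a measurement matrix from rows of several random orthogonal bases and reconstruct via the LASSO:

```python
# Hypothetical sketch: LASSO recovery with a structurally orthogonal
# measurement matrix built from rows of random orthogonal bases.
# Dimensions, noise level and penalty are illustrative only.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, blocks = 256, 128, 4      # signal length, measurements, orthogonal blocks

def random_orthogonal(n):
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

# Each block contributes a few rows of an n x n random orthogonal basis.
A = np.vstack([random_orthogonal(n)[: m // blocks] for _ in range(blocks)])

# Sparse ground truth and noisy measurements y = A x + w.
x0 = np.zeros(n)
support = rng.choice(n, size=16, replace=False)
x0[support] = rng.standard_normal(support.size)
y = A @ x0 + 0.01 * rng.standard_normal(m)

# LASSO: argmin_x (1/2m)||y - A x||^2 + alpha * ||x||_1
est = Lasso(alpha=0.001, fit_intercept=False, max_iter=10000).fit(A, y)
print("reconstruction MSE:", np.mean((est.coef_ - x0) ** 2))
```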

    Load sensitive stable current source for complex precision pulsed electroplating

    Electrodeposition is a highly versatile and well explored technology. However, it also depends strongly on the experience level of the operator. This experience covers the pretreatment of the sample, the composition of the electrolyte, and the settings of the plating parameters. Accurate control over the electroplating current is needed especially for the formation of small structures, where pulsed electrodeposition has proven to reduce many unwanted effects. To bring precision into the formulation of optimal recipes, a highly flexible current source based on a microcontroller was developed. It allows a large variety of pulse waveforms, as well as maintaining a feedback loop that controls the current and monitors the output voltage, allowing for both galvanostatic (current-driven) and potentiostatic (voltage-driven) electrodeposition. The system has been implemented with multiple channels, permitting the simultaneous electrodeposition of multiple substrates in parallel. Being microcontroller-based, the system can be programmed with predefined recipes individually for each channel, and can even adapt the recipes during plating. All measurement values are continuously recorded for the purposes of documentation and diagnosis. The current source is based on a high-power operational amplifier in a modified Howland current source configuration. This paper describes the functionality of the electrodeposition system, with a focus on the stability of the source current under different electrodeposition current densities and frequencies. The performance and capability of the system are demonstrated by performing and analyzing two nontrivial plating applications.
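
    As a rough sketch of how such a recipe-driven, feedback-controlled channel might operate (the recipe values, gain and hardware-access functions are hypothetical; the actual system is microcontroller firmware driving a modified Howland source):

```python
# Hypothetical sketch of one channel's pulsed, galvanostatic control loop.
# Recipe, gain and hardware stubs are illustrative, not the authors' firmware.
from dataclasses import dataclass

@dataclass
class PulseStep:
    current_ma: float   # current setpoint during this step
    duration_ms: float  # step duration

# A simple bipolar pulse recipe: forward pulse, rest, short reverse pulse.
recipe = [PulseStep(20.0, 5.0), PulseStep(0.0, 2.0), PulseStep(-5.0, 1.0)]

def read_current_ma() -> float:
    """Placeholder for the ADC reading of the actual plating current."""
    return 0.0

def set_dac(level: float) -> None:
    """Placeholder for writing the control voltage of the current source."""

def run_channel(recipe, cycles: int, k_p: float = 0.05) -> None:
    level = 0.0
    for _ in range(cycles):
        for step in recipe:
            # Proportional feedback trims the DAC level toward the setpoint;
            # the real firmware also logs the output voltage for diagnosis.
            error = step.current_ma - read_current_ma()
            level += k_p * error
            set_dac(level)
            # wait for step.duration_ms on real hardware
```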

    Improving biologics development by high performance glycoanalysis

    Glycomics is a rapidly emerging field that can be viewed as a complement to other "omics" approaches, including proteomics and genomics. Hence, there is a dramatic increase in the demand for analytical tools and specific databases in glycobiology and glycobiotechnology. In order to enhance and improve the comparatively small existing glycoanalytical toolbox, fully automated, highly sensitive, reliable, high-throughput and high-resolution analysis methods, including automated data evaluation, are required. One very promising method is based on multiplexed capillary gel electrophoresis with laser-induced fluorescence detection (xCGE-LIF). The established glycoanalytical approach includes sample preparation and measuring methods, software, and database solutions to tackle challenges in a great number of application fields. First, an optimized modular sample preparation workflow is presented with respect to performance and feasibility for high-throughput analytics [1-5]. Second, parallel sample measurement is shown to result in a massive reduction of the effective run time per sample [4]. Third, automated data analysis with a newly developed modular software tool for data processing and analysis is demonstrated, involving the integration of a corresponding oligosaccharide database [6-8]. Using this high-performance xCGE-LIF based glycoanalysis system, the generated "normalized" electropherograms of glycomoieties ("fingerprints") can be evaluated on three levels: (1) "simple" qualitative and quantitative pattern comparison ("fingerprinting"); (2) identification of compounds in complex mixtures via database matching ("glycoprofiling"); and (3) extended structural analysis using exoglycosidase sequencing in combination with xCGE-LIF based glycoprofiling. The broad applicability of the system is demonstrated for different types of glycosamples: from the manufacturing of biologics and vaccines (including recombinant and viral glycoproteins) [1-3] to human stem cells, blood serum [4,5] and milk [8].
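
    A highly simplified sketch of the "glycoprofiling" level, matching normalized peak migration times against a database (the entries, units and tolerance are invented for illustration; the real xCGE-LIF software and database are far more elaborate):

```python
# Hypothetical sketch of database matching on normalized migration times.
# Database entries, peak values and the tolerance are illustrative only.
glycan_db = {            # normalized migration-time units -> structure name
    100.0: "G0F",
    112.5: "G1F",
    125.0: "G2F",
}

def glycoprofile(peaks, db, tol=0.5):
    """Assign each detected peak to the nearest database entry within tol."""
    hits = []
    for t, height in peaks:
        best = min(db, key=lambda ref: abs(ref - t))
        name = db[best] if abs(best - t) <= tol else "unknown"
        hits.append((t, height, name))
    return hits

print(glycoprofile([(100.2, 8.1), (119.0, 1.3)], glycan_db))
```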

    Dynamic Power Management for Neuromorphic Many-Core Systems

    This work presents a dynamic power management architecture for neuromorphic many-core systems such as SpiNNaker. A fast dynamic voltage and frequency scaling (DVFS) technique is presented which allows the processing elements (PEs) to change their supply voltage and clock frequency individually and autonomously within less than 100 ns. This capability is exploited by the neuromorphic simulation software flow, which defines the performance level (PL) of each PE based on the actual workload within each simulation cycle. A test chip in 28 nm SLP CMOS technology has been implemented. It includes 4 PEs which can be scaled from 0.7 V to 1.0 V with frequencies from 125 MHz to 500 MHz at three distinct PLs. Measurements of three neuromorphic benchmarks show that the total PE power consumption can be reduced by 75%, with an 80% reduction in baseline power and a 50% reduction in energy per neuron and synapse computation, all while maintaining temporary peak system performance to achieve biological real-time operation of the system. A numerical model of this power management scheme is derived which allows DVFS architecture exploration for neuromorphics. The proposed technique is to be used in the second-generation SpiNNaker neuromorphic many-core system.
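
    As a toy illustration of workload-driven PL selection (the outer voltage/frequency corners match the reported 0.7 V/125 MHz and 1.0 V/500 MHz; the middle PL and the selection rule are our own assumptions):

```python
# Hypothetical sketch of per-PE performance-level (PL) selection per
# simulation cycle. The middle PL and thresholds are assumptions.
PLS = [
    {"v": 0.70, "f_mhz": 125},
    {"v": 0.85, "f_mhz": 250},   # assumed middle corner, not from the paper
    {"v": 1.00, "f_mhz": 500},
]

def select_pl(workload: float) -> dict:
    """Pick the lowest PL whose throughput covers this cycle's workload.

    `workload` is the fraction (0..1) of the highest PL's capacity needed
    to finish the simulation cycle within biological real time.
    """
    for pl in PLS:
        if pl["f_mhz"] / PLS[-1]["f_mhz"] >= workload:
            return pl
    return PLS[-1]

print(select_pl(0.2))   # light cycle -> 0.7 V / 125 MHz
print(select_pl(0.9))   # heavy cycle -> 1.0 V / 500 MHz
```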

    Policy-based techniques for self-managing parallel applications

    This paper presents an empirical investigation of policy-based self-management techniques for parallel applications executing in loosely-coupled environments. The dynamic and heterogeneous nature of these environments is discussed and the special considerations for parallel applications are identified. An adaptive strategy for the run-time deployment of tasks of parallel applications is presented. The strategy is based on embedding numerous policies, which are informed by contextual and environmental inputs. The policies govern various aspects of behaviour, enhancing flexibility so that the goals of efficiency and performance are achieved despite high levels of environmental variability. A prototype self-managing parallel application is used as a vehicle to explore the feasibility and benefits of the strategy; in particular, several aspects of stability are investigated. The implementation and behaviour of three policies are discussed and sample results examined.
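
    A minimal sketch of what one embedded deployment policy could look like (the context fields, rules and thresholds below are invented for illustration):

```python
# Hypothetical sketch of a policy-driven task-deployment decision.
# Context fields, rules and thresholds are invented for illustration.
def deploy_policy(context: dict) -> str:
    """Return the node chosen for the next task, given environmental inputs."""
    candidates = [
        n for n in context["nodes"]
        if n["load"] < 0.8 and n["reachable"]      # availability policy
    ]
    if not candidates:
        return "defer"                             # stability policy: wait
    # performance policy: prefer the least-loaded, then fastest, node
    best = min(candidates, key=lambda n: (n["load"], -n["speed"]))
    return best["name"]

ctx = {"nodes": [
    {"name": "n1", "load": 0.9, "speed": 2.0, "reachable": True},
    {"name": "n2", "load": 0.3, "speed": 1.5, "reachable": True},
]}
print(deploy_policy(ctx))   # -> "n2"
```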

    Modeling and Control of Post-Combustion CO2 Capture Process Integrated with a 550MWe Supercritical Coal-fired Power Plant

    This work focuses on the development of both steady-state and dynamic models for a monoethanolamine (MEA)-based CO2 capture process for a commercial-scale supercritical pulverized coal (PC) power plant, using Aspen Plus® and Aspen Plus Dynamics®. The dynamic model also facilitates the design of controllers, both traditional proportional-integral-derivative (PID) controllers and advanced controllers such as linear model predictive control (LMPC), nonlinear model predictive control (NMPC) and H∞ robust control.

    A steady-state MEA-based CO2 capture process is developed in Aspen Plus®. The key process units, the CO2 absorber and stripper columns, are simulated using the rate-based method. The steady-state simulation results are validated using experimental data from a CO2 capture pilot plant. The process parameters are optimized with the goal of minimizing the energy penalty. Subsequently, the optimized rate-based, steady-state model, with appropriate modifications such as the inclusion of the size and metal mass of the equipment, is exported into Aspen Plus Dynamics® to study transient characteristics and to design the control system. Since Aspen Plus Dynamics® does not support the rate-based model, modifications to the Murphree efficiencies in the columns and a rigorous pressure-drop calculation method are implemented in the dynamic model to ensure consistency between the design and off-design results from the steady-state and dynamic models. The results from the steady-state model indicate that between three and six parallel trains of CO2 capture processes are required to capture 90% of the CO2 from a 550 MWe supercritical PC plant, depending on the maximum column diameter used and the approach to flooding at the design condition. However, in this work, only two parallel trains of the CO2 capture process are modeled and integrated with a 550 MWe post-combustion, supercritical PC plant in the dynamic simulation, due to the high computational expense of simulating more than two trains.

    In the control studies, the performance of PID-based, LMPC-based, and NMPC-based approaches is evaluated for maintaining the overall CO2 capture rate and the CO2 stripper reboiler temperature at the desired levels in the face of typical input and output disturbances in flue gas flow rate and composition, as well as changes in the power plant load and variable CO2 capture rate. Scenarios considered include cases using different efficiencies to mimic differing conditions between parallel trains in real industrial processes. MPC-based approaches are found to provide superior performance compared to the PID-based one. Especially for parallel trains of CO2 capture processes, the advantage of MPC is observed as the overall extent of CO2 capture for the process is maintained by adjusting the extent of capture for each train based on the absorber efficiencies. The NMPC-based approach is preferred since the optimization problem that must be solved for model predictive control of the CO2 capture process is highly nonlinear due to tight performance specifications, environmental and safety constraints, and inherent nonlinearity in the chemical process. In addition, model uncertainties are unavoidable in real industrial processes and can affect plant performance. Therefore, a robust controller is designed for the CO2 capture process based on μ-synthesis with a DK-iteration algorithm. Effects of uncertainties due to measurement noise and model mismatches are evaluated for both the NMPC and the robust controller. The simulation results show that the tradeoff between the fast tracking performance of the NMPC and the superior robust performance of the robust controller must be considered while designing the control system for the CO2 capture units. Different flooding control strategies for situations when the flue gas flow rate increases are also covered in this work.
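
    To make the MPC idea concrete, here is a minimal linear-MPC sketch that tracks a capture-rate setpoint for a toy first-order plant (the dynamics, horizon, weights and limits are ours; the thesis uses rigorous Aspen Plus Dynamics® models):

```python
# Hypothetical sketch: linear MPC tracking a CO2 capture-rate setpoint for a
# toy first-order plant x[k+1] = a*x[k] + b*u[k]. All numbers are illustrative.
import cvxpy as cp

a, b = 0.9, 0.05           # toy plant: capture rate vs. reboiler-duty move
N = 20                     # prediction horizon
x0, setpoint = 0.80, 0.90  # current and target capture fraction

x = cp.Variable(N + 1)
u = cp.Variable(N)
cost = cp.sum_squares(x[1:] - setpoint) + 0.1 * cp.sum_squares(u)
constraints = [x[0] == x0]
for k in range(N):
    constraints += [x[k + 1] == a * x[k] + b * u[k],
                    cp.abs(u[k]) <= 1.0]           # actuator limit
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", float(u.value[0]))    # applied, then re-solved
```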

    Visual Calibration, Identification and Control of 6-RSS Parallel Robots

    Parallel robots offer outstanding advantages over serial manipulators, including a high force-to-weight ratio, better stiffness and, in theory, higher accuracy. Hence parallel robots are being used increasingly in a variety of applications. However, due to manufacturing tolerances and deflections in the robot structure, the positioning accuracy of parallel robots is essentially equivalent to that of serial manipulators according to previous research on the accuracy analysis of the Stewart Platform [1], which makes it difficult to meet the precision requirements of many potential applications. In addition, the closed-chain mechanism complicates the design of control systems for practical applications, owing to its highly coupled dynamics. A visual sensor is a good choice for providing non-contact measurement of the end-effector pose (position and orientation), offering simplicity of operation and low cost compared with other measurement methods such as the coordinate measurement machine (CMM) [2] and the laser tracker [3]. In this research, a series of solutions comprising kinematic calibration, dynamic identification and visual servoing is proposed to improve the positioning and tracking performance of the parallel robot based on the visual sensor. The main contributions of this research fall into three parts. In the first part, a relative pose-based algorithm (RPBA) is proposed to solve the kinematic calibration problem of a six-revolute-spherical-spherical (6-RSS) parallel robot using an optical CMM sensor. Based on the relative poses between the candidate and initial configurations, a calibration algorithm is proposed to determine the optimal error parameters of the robot kinematic model and the external parameters introduced by the optical sensor. The experimental results demonstrate that the proposed RPBA using the optical CMM is an implementable and effective method for parallel robot calibration. The second part focuses on the dynamic model identification of 6-RSS parallel robots. A visual closed-loop output-error identification method based on an optical CMM sensor is proposed for the purpose of advanced model-based visual servoing control design for parallel robots. By using an outer-loop visual servoing controller to stabilize both the parallel robot and the simulated model, the visual closed-loop output-error identification method is developed and the model parameters are identified using a nonlinear optimization technique. The effectiveness of the proposed identification algorithm is validated by experimental tests. In the last part, a dynamic sliding mode control (DSMC) scheme combined with the visual servoing method is proposed to improve the tracking performance of the 6-RSS parallel robot based on the optical CMM sensor. By employing a position-to-torque converter, the torque command generated by the DSMC can be applied to the position-controlled industrial robot. The stability of the proposed DSMC is proved using Lyapunov theory. Real-time experimental tests on a 6-RSS parallel robot demonstrate that the developed DSMC scheme is robust to modeling errors and uncertainties. Compared with classical kinematic-level controllers, the proposed DSMC is superior in terms of tracking performance and robustness.
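
    The calibration step can be pictured as a nonlinear least-squares fit of kinematic error parameters to optically measured poses; the sketch below uses a toy one-dimensional stand-in for the 6-RSS forward kinematics so that it runs (none of the names or numbers are from the paper):

```python
# Hypothetical sketch of kinematic calibration as nonlinear least squares.
# `forward_kinematics` stands in for the 6-RSS model; here it is a toy
# linear placeholder so the example is runnable.
import numpy as np
from scipy.optimize import least_squares

def forward_kinematics(q, params):
    """Placeholder pose model: pose as a function of joint command q and
    error parameters `params` (toy linear model for illustration)."""
    return params[0] * q + params[1]

# Simulated optical-CMM measurements generated with unknown true parameters.
true_params = np.array([1.02, 0.003])
q_cmds = np.linspace(0, 1, 50)
measured = forward_kinematics(q_cmds, true_params) + 1e-4 * np.random.randn(50)

def residuals(params):
    return forward_kinematics(q_cmds, params) - measured

fit = least_squares(residuals, x0=np.array([1.0, 0.0]))
print("identified error parameters:", fit.x)
```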

    Parallel Algorithm for Solving Kepler's Equation on Graphics Processing Units: Application to Analysis of Doppler Exoplanet Searches

    [Abridged] We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., chi^2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating N_sys > 1024 model planetary systems, each containing N_pl = 4 planets, and assuming N_obs = 256 observations of each system. We conclude that modern GPUs offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
    Comment: 19 pages, to appear in New Astronomy
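
    Kepler's equation M = E - e sin E has no closed-form inverse, so every model evaluation requires an iterative solve; the NumPy sketch below (our CPU illustration, not the paper's CUDA kernel) shows the vectorized Newton iteration that such a GPU code parallelizes across many systems:

```python
# Vectorized Newton iteration for Kepler's equation M = E - e*sin(E).
# A CPU/NumPy illustration of the kernel the paper parallelizes on a GPU;
# the starting guess, tolerance and iteration cap are illustrative.
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Return eccentric anomaly E for mean anomaly M and eccentricity e."""
    E = M + e * np.sin(M)                 # reasonable starting guess
    for _ in range(max_iter):
        f = E - e * np.sin(E) - M         # residual of Kepler's equation
        fp = 1.0 - e * np.cos(E)          # derivative df/dE
        dE = f / fp
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

M = np.linspace(0, 2 * np.pi, 1_000_000)  # many mean anomalies at once
E = solve_kepler(M, e=0.3)
```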