
    3D Reconstruction of Building Rooftop and Power Line Models in Right-of-Ways Using Airborne LiDAR Data

    The research objective of this thesis is to develop methods for reconstructing models of building and power line (PL) objects of interest in the PL corridor area from airborne LiDAR data. The work is mainly concerned with the model selection problem: which model represents a given data set most optimally. The parametric relations and geometry of object shapes are treated as unknowns and determined optimally by verifying hypothetical models, so the proposed method adapts well to the complex geometric forms of building and PL objects. For building modeling, a method of implicit geometric regularization is proposed to rectify building outline vectors corrupted by noisy data. The cost function for the regularization process is designed on the basis of Minimum Description Length (MDL) theory, which favours small deviation between model and observation as well as orthogonality and parallelism between polylines. Next, a new approach, called Piecewise Model Growing (PMG), is proposed for 3D PL model reconstruction using a catenary curve model. The model grows piecewise to capture all PL points of interest and thus produces a full 3D PL model. However, the method is limited by PL scene complexity, which causes PL modeling errors such as partial, under- and over-modeling. To correct incomplete PL models, inner-span and across-span analyses are carried out, replacing erroneous PL segments with precise PL models. The inner-span analysis, based on MDL theory, corrects under- and over-modeling errors. The across-span analysis then corrects partial-modeling errors by finding the start and end positions of PLs, which denote Points Of Attachment (POA). As a result, this thesis addresses not only the geometric description of building and PL objects but also the noisy data that causes incomplete models.
In practical terms, the building and PL modeling results should make it possible to analyze a PL scene effectively and quickly mitigate potentially hazardous scenarios that jeopardize the PL system.
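
    As a sketch of the catenary model underlying the PMG approach, the snippet below fits z(s) = c + a·cosh((s − s0)/a) to synthetic span points by a coarse grid search with a closed-form offset. The sampling positions, grid ranges and parameter values are illustrative assumptions, not the thesis's estimation procedure.

```python
import math

def catenary(s, a, s0, c):
    """Height of a catenary curve at horizontal position s."""
    return c + a * math.cosh((s - s0) / a)

def fit_catenary(points):
    """Coarse grid search over (a, s0); for each candidate pair the
    vertical offset c that minimizes the SSE has a closed form (the
    mean residual)."""
    best = None
    for a in [x * 0.5 for x in range(2, 81)]:        # a in 1.0 .. 40.0
        for s0 in [x * 0.5 for x in range(0, 41)]:   # s0 in 0.0 .. 20.0
            c = sum(z - a * math.cosh((s - s0) / a)
                    for s, z in points) / len(points)
            sse = sum((z - catenary(s, a, s0, c)) ** 2 for s, z in points)
            if best is None or sse < best[0]:
                best = (sse, a, s0, c)
    return best[1:]  # (a, s0, c)

# Synthetic noiseless "LiDAR" points sampled from a known catenary.
true_a, true_s0, true_c = 12.0, 10.0, 5.0
pts = [(s, catenary(s, true_a, true_s0, true_c)) for s in range(0, 21)]
a_hat, s0_hat, c_hat = fit_catenary(pts)
```

    On noiseless data the grid search recovers the generating parameters exactly; a real pipeline would refine the grid estimate with nonlinear least squares and robust outlier handling.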

    MODELING, SIMULATION AND OPTIMIZATION OF INDUSTRIAL HEAT EXCHANGER NETWORK FOR OPTIMAL CLEANING SCHEDULE

    Sustaining the thermal and hydraulic performance of the heat exchanger network (HEN) for crude oil preheating is one of the major concerns in the refining industry. The overall economy of a refinery virtually revolves around the performance of the crude preheat train (CPT). Fouling in the heat exchangers deteriorates the thermal performance of the CPT, leading to increased energy consumption and hence economic losses. Normally the lost heat recovery is compensated by additional fuel gas in the fired heater, so the increased energy consumption also raises carbon dioxide emissions and contributes to the greenhouse effect. For these reasons, heat exchanger cleaning is performed on a regular basis, either chemically or mechanically. The disadvantage of cleaning is the potential environmental burden of applying, handling, storing and disposing of cleaning effluents. Moreover, the loss of production caused by plant downtime for cleaning is often more significant than the cost of cleaning itself, particularly in refineries. It is therefore essential to optimize the cleaning schedule of the heat exchangers in the HEN of the CPT. The present research focuses on analyzing the effects of fouling on heat transfer performance and optimizing the cleaning schedule for the CPT. The study involves the collection and analysis of historical operating data from a Malaysian refinery processing sweet crude oils. A simulation model of the CPT, comprising seven shell-and-tube heat exchangers downstream of the desalter with different mechanical designs and physical arrangements, was developed in the Petro-SIM™ environment. To analyze the effects of fouling on heat transfer performance in the CPT, the simulation model was integrated with threshold fouling models unique to each heat exchanger, with the fouling model parameters estimated from the historical data.
The simulation study was performed for 300 days, and the analysis indicated that the position of a heat exchanger plays a dominant role in the heat transfer performance of the CPT under fouled conditions: fouling of the upstream heat exchangers has a higher impact on overall heat transfer performance. For the downstream heat exchangers, the decline in heat transfer performance due to fouling can be compensated by the log-mean temperature difference (LMTD) effect, which can reduce the decline or even increase the heat transfer performance of these exchangers. An optimization problem for the cleaning schedule of the CPT was then formulated and solved, with the unrecovered energy cost and the cleaning cost of the heat exchangers in the objective function. Optimization of the cleaning schedule was illustrated with a case study simulated over a period of two years. Constant fouling rates extracted from the historical data were used to estimate the fouling characteristics of each heat exchanger in the CPT. For comparison, a base case was developed on the assumption that a heat exchanger is cleaned when its maximum allowable fouling resistance is reached. A mixed integer programming approach was used to optimize the cleaning schedule, and an optimized schedule with significant cost savings over the two-year period is reported.
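
    The trade-off between fouling-induced energy cost and cleaning cost can be sketched with a toy single-exchanger model. An exhaustive search over binary cleaning decisions stands in here for the mixed integer programme, and all rates and costs are made-up illustrative values.

```python
from itertools import product

# Toy horizon: fouling resistance grows linearly each period; cleaning
# resets it to zero but incurs a fixed cost. Numbers are illustrative.
PERIODS = 8
FOULING_RATE = 1.0      # resistance added per period
ENERGY_COST = 2.0       # cost per unit resistance per period
CLEANING_COST = 9.0     # fixed cost per cleaning action

def schedule_cost(clean):
    """Total cost of a tuple of 0/1 cleaning decisions over the horizon."""
    resistance, cost = 0.0, 0.0
    for do_clean in clean:
        if do_clean:
            resistance = 0.0
            cost += CLEANING_COST
        resistance += FOULING_RATE
        cost += ENERGY_COST * resistance
    return cost

# Exhaustive enumeration of all 2^PERIODS schedules (feasible only for
# toy sizes; the MIP formulation scales to a full network).
best = min(product((0, 1), repeat=PERIODS), key=schedule_cost)
never = (0,) * PERIODS
```

    With these parameters the optimum cleans at least once, beating the never-clean baseline; the same structure, replicated per exchanger with downtime coupling constraints, is what the MIP solves.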

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras; they are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene; accurate estimation therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first; then signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described, along with mathematical problems and potential solutions. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
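
    A minimal sketch of the linear mixing model: with two hypothetical endmember spectra, the least-squares abundance under the sum-to-one constraint has a closed form. Real unmixing handles many endmembers with constrained solvers; the spectra and values below are assumptions for illustration only.

```python
# Two hypothetical endmember spectra over four bands.
E1 = [0.10, 0.40, 0.80, 0.60]
E2 = [0.70, 0.30, 0.20, 0.10]

def unmix2(pixel):
    """Least-squares abundance a1 of E1 under a1 + a2 = 1.

    Substituting a2 = 1 - a1 reduces the problem to a 1-D projection:
    a1 = <pixel - E2, E1 - E2> / ||E1 - E2||^2, clipped to [0, 1]
    to enforce nonnegativity.
    """
    d = [e1 - e2 for e1, e2 in zip(E1, E2)]
    r = [p - e2 for p, e2 in zip(pixel, E2)]
    a1 = sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)
    return max(0.0, min(1.0, a1))

# A pixel mixed 70/30 unmixes back to a1 = 0.7 in the noiseless case.
pixel = [0.7 * e1 + 0.3 * e2 for e1, e2 in zip(E1, E2)]
a1 = unmix2(pixel)
```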

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    Groundwater Management Optimization and Saltwater Intrusion Mitigation under Uncertainty

    Groundwater is a valuable source of fresh water for the public, industry, agriculture, etc. However, excessive pumping has caused groundwater storage degradation, water quality deterioration and saltwater intrusion. Reliable groundwater flow and solute transport modeling is needed for sustainable groundwater management and aquifer remediation design, but challenges arise from highly complex subsurface environments, computationally intensive groundwater models and inevitable uncertainties. The first research goal is to explore the conjunctive use of feasible hydraulic control approaches for groundwater management and aquifer remediation. A water budget analysis is conducted to understand how groundwater withdrawals affect water levels. A mixed integer multi-objective optimization model is constructed to derive optimal freshwater pumping strategies and to investigate how regulating pumping locations promotes optimality. A solute transport model for the Baton Rouge multi-aquifer system is developed to assess saltwater encroachment under current conditions, and a saltwater scavenging approach is proposed to mitigate salinization in the Baton Rouge area. The second research goal is to develop robust surrogate-assisted simulation-optimization modeling methods for saltwater intrusion mitigation. Machine learning based surrogate models (a response surface regression model, an artificial neural network and a support vector machine) are developed to replace a complex high-fidelity solute transport model for predicting saltwater intrusion. Two methods, Bayesian model averaging and Bayesian set pair analysis, are used to construct ensemble surrogates and quantify model prediction uncertainties. In addition, different optimization models incorporating multiple ensemble surrogates are formulated to obtain optimal saltwater scavenging strategies.
Chance-constrained programming is used to account for model selection uncertainty in the probabilistic nonlinear concentration constraints. The results show that conjunctive use of hydraulic control approaches would be effective in mitigating saltwater intrusion, but would take decades. Machine learning based ensemble surrogates yield accurate models with high computational efficiency, and hence save substantial effort in groundwater remediation design. Including model selection uncertainty through multimodel inference and model averaging provides more reliable remediation strategies than the single-surrogate-assisted approach.
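
    The model-averaging step can be sketched as follows: surrogate predictions are combined with weights proportional to exp(−BIC/2), so better-scoring models dominate the ensemble. The BIC values and predictions below are illustrative placeholders, not results from the study.

```python
import math

# Hypothetical information-criterion scores and point predictions for
# three surrogate families (names and numbers are made up).
bics = {"regression": 104.2, "neural_net": 100.8, "svm": 102.5}
preds = {"regression": 0.52, "neural_net": 0.47, "svm": 0.50}

# Subtract the minimum BIC before exponentiating for numerical safety;
# this leaves the normalized weights unchanged.
b_min = min(bics.values())
raw = {m: math.exp(-0.5 * (b - b_min)) for m, b in bics.items()}
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}

# The averaged prediction is a convex combination of the members.
bma_pred = sum(weights[m] * preds[m] for m in preds)
```

    Because the weights sum to one, the averaged prediction always lies within the range spanned by the individual surrogates, which is one reason multimodel inference tends to be more robust than committing to a single surrogate.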

    Image Restoration

    This book represents a sample of recent contributions of researchers around the world in the field of image restoration. It consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, but the book is also an occasion to highlight new research topics arising from the emergence of original imaging devices. From these arise challenging problems in image reconstruction and restoration that open the way to new fundamental scientific questions closely tied to the world we interact with.

    Simulation Studies of Digital Filters for the Phase-II Upgrade of the Liquid-Argon Calorimeters of the ATLAS Detector at the High-Luminosity LHC

    The Large Hadron Collider and the ATLAS detector are undergoing a comprehensive upgrade split into multiple phases. This effort also affects the liquid-argon calorimeters, whose main readout electronics will be replaced completely during the final phase. The electronics consist of an analog and a digital portion: the former amplifies the signal pulses and shapes them to facilitate sampling; the latter executes an energy reconstruction algorithm. Both must be improved during the upgrade so that the detector may accurately reconstruct interesting collision events and efficiently suppress uninteresting ones. In this thesis, simulation studies are presented that optimize both the analog and the digital readout of the liquid-argon calorimeters. The simulation is verified using calibration data measured during Run 2 of the ATLAS detector. The influence of several parameters of the analog shaping stage on the energy resolution is analyzed, and the utility of an increased signal sampling rate of 80 MHz is investigated. Furthermore, a number of linear and non-linear energy reconstruction algorithms are reviewed and the performance of a selection of them is compared. It is demonstrated that increasing the order of the Optimal Filter, the algorithm currently in use, improves the energy resolution by 2 to 3 % in all detector regions. The Wiener filter with forward correction, a non-linear algorithm, gives an improvement of up to 10 % in some regions but degrades the resolution in others. A link between this behavior and the probability of falsely detected calorimeter hits is shown, and possible solutions are discussed.

    Contents: 1 Introduction; 2 An Overview of High-Energy Particle Physics (2.1 The Standard Model of Particle Physics; 2.2 Verification of the Standard Model; 2.3 Beyond the Standard Model); 3 LHC, ATLAS, and the Liquid-Argon Calorimeters (3.1 The Large Hadron Collider; 3.2 The ATLAS Detector; 3.3 The ATLAS Liquid-Argon Calorimeters); 4 Upgrades to the ATLAS Liquid-Argon Calorimeters (4.1 Physics Goals; 4.2 Phase-I Upgrade; 4.3 Phase-II Upgrade); 5 Noise Suppression With Digital Filters (5.1 Terminology; 5.2 Digital Filters; 5.3 Wiener Filter; 5.4 Matched Wiener Filter; 5.5 Matched Wiener Filter Without Bias; 5.6 Timing Reconstruction, Optimal Filtering, and Selection Criteria; 5.7 Forward Correction; 5.8 Sparse Signal Restoration; 5.9 Artificial Neural Networks); 6 Simulation of the ATLAS Liquid-Argon Calorimeter Readout Electronics (6.1 AREUS; 6.2 Hit Generation and Sampling; 6.3 Pulse Shapes; 6.4 Thermal Noise; 6.5 Quantization; 6.6 Digital Filters; 6.7 Statistical Analysis); 7 Results of the Readout Electronics Simulation Studies (7.1 Statistical Treatment; 7.2 Simulation Verification Using Run-2 Data; 7.3 Dependence of the Noise on the Shaping Time; 7.4 The Analog Readout Electronics and the ADC; 7.5 The Optimal Filter (OF); 7.6 The Wiener Filter; 7.7 The Wiener Filter with Forward Correction (WFFC); 7.8 Final Comparison and Conclusions); 8 Conclusions and Outlook; Appendices.
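
    The amplitude estimation behind such filters can be sketched in the white-noise special case, where the optimal linear weights reduce to a matched filter proportional to the known pulse shape. The pulse samples below are a made-up stand-in for the shaped calorimeter pulse; the actual Optimal Filter additionally uses the noise autocorrelation and timing constraints.

```python
# Assumed pulse shape samples (amplitude-normalized to peak 1.0).
g = [0.0, 0.3, 1.0, 0.7, 0.3, 0.1]

# For white noise, weights a_i = g_i / sum(g_j^2) minimize the variance
# of the linear estimate A_hat = sum(a_i * x_i) subject to the
# unbiasedness constraint sum(a_i * g_i) = 1.
norm = sum(gi * gi for gi in g)
weights = [gi / norm for gi in g]

# Noiseless pulse of known amplitude ("energy" in arbitrary units):
true_amplitude = 250.0
samples = [true_amplitude * gi for gi in g]
estimate = sum(w * x for w, x in zip(weights, samples))
```

    On a noiseless pulse the estimate recovers the amplitude exactly; with correlated noise the weights instead solve a small linear system involving the noise autocorrelation matrix, which is where higher filter order buys extra resolution.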

    Lost in optimisation of water distribution systems? A literature review of system operation

    This is the author accepted manuscript; the final version is available from Elsevier via the DOI in this record. Optimisation of the operation of water distribution systems has been an active research field for almost half a century. It has focused mainly on optimal pump operation to minimise pumping costs and on optimal water quality management to ensure that standards at customer nodes are met. This paper provides a systematic review, bringing together over two hundred publications from the past three decades relevant to the operational optimisation of water distribution systems, particularly optimal pump operation, valve control and system operation for water quality purposes, in both urban drinking and regional multiquality water distribution systems. Uniquely, it also tabulates substantial and thorough information for over one hundred publications, listing optimisation models inclusive of objectives, constraints, decision variables, solution methodologies and other details. Research challenges in terms of simulation models, optimisation model formulation, selection of optimisation method and postprocessing needs are also identified.
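
    A toy pump-scheduling example illustrates the kind of operational optimisation reviewed: choose on/off decisions per tariff period to meet demand from a storage tank at minimum energy cost. Tariffs, demand and tank limits are illustrative assumptions, and brute-force enumeration stands in for the metaheuristics and mathematical programming methods surveyed.

```python
from itertools import product

# Illustrative six-period horizon with a time-varying tariff.
TARIFF = [1.0, 1.0, 3.0, 3.0, 1.5, 1.5]   # cost per period the pump runs
DEMAND = 2.0                               # tank outflow per period
PUMP_RATE = 4.0                            # inflow per period when on
TANK_MIN, TANK_MAX, TANK_START = 0.0, 10.0, 4.0

def cost(schedule):
    """Energy cost of a 0/1 schedule, or None if tank limits are violated."""
    level, total = TANK_START, 0.0
    for on, price in zip(schedule, TARIFF):
        level += (PUMP_RATE if on else 0.0) - DEMAND
        if not TANK_MIN <= level <= TANK_MAX:
            return None
        total += price if on else 0.0
    return total

# Enumerate all 2^6 schedules and keep the cheapest feasible one.
feasible = [(s, cost(s)) for s in product((0, 1), repeat=len(TARIFF))
            if cost(s) is not None]
best_schedule, best_cost = min(feasible, key=lambda t: t[1])
```

    The optimum pumps during the cheap early periods and coasts on storage through the expensive ones, which is exactly the tariff-shifting behaviour that real pump-scheduling optimisation exploits.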