42 research outputs found

    Superstructure optimisation of a water minimisation network with an embedded multicontaminant electrodialysis model

    A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering, 2016.
    The water-energy nexus considers the relationship between water and energy resources. Increasing environmental degradation and social pressures in recent years have necessitated the development of manufacturing processes that are conservative with respect to both of these resources while remaining financially viable. This can be achieved by process integration (PI), a holistic approach to design which emphasises the unity of processes. Within the realm of PI, water network synthesis (WNS) explores avenues for reuse, recycle and regeneration of effluent in order to minimise freshwater consumption and wastewater production. When regeneration is required, membrane-based treatment processes may be employed. These processes are energy intensive and result in a trade-off between water and energy minimisation, thus creating an avenue for optimisation. Previous work in WNS employed a black-box approach to represent regenerators in water minimisation problems; however, this misrepresents the cost of regeneration and underestimates the energy requirements of the system. The aim of the research presented in this dissertation is to develop an integrated water regeneration network synthesis model that simultaneously minimises water and energy use in a water network. A novel MINLP model for the design of an electrodialysis (ED) unit capable of treating a binary mixture of simple salts was developed from first principles. This ED model was embedded into a water network superstructure optimisation model whose objective was to minimise freshwater and energy consumption, wastewater production, and the associated costs. The model was applied to a pulp and paper case study under several scenarios. Global optimisation of the integrated water network and ED design model, with variable contaminant removal ratios, was found to yield the best results: 38% savings in freshwater, a 68% reduction in wastewater production and a 55% overall cost reduction compared with the original design. The model also led to an 80% reduction in regeneration (energy) cost.
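    The superstructure approach described above is a nonconvex optimisation in which stream flowrates and contaminant concentrations are decided simultaneously. The toy model below is a heavily simplified, single-contaminant, single-unit sketch of that idea (one water-using operation, one regenerator recycle, made-up parameters, fixed removal ratio); it is not the dissertation's MINLP, which embeds a detailed ED design model.

```python
# Toy water-network "superstructure": one water-using unit, one regenerator
# recycle, single contaminant, solved as a nonconvex NLP with scipy.
# All parameter values are invented for illustration.
import numpy as np
from scipy.optimize import minimize

LOAD = 2.0        # contaminant mass load picked up in the unit [kg/h]
CIN_MAX = 20.0    # maximum inlet concentration [ppm]
COUT_MAX = 100.0  # maximum outlet concentration [ppm]
RR = 0.9          # assumed regenerator removal ratio
RECOVERY = 0.7    # assumed fraction of the unit outlet recoverable for recycle
COST_FW = 1.0     # freshwater cost [$/t]
COST_REG = 0.8    # regeneration (energy) cost [$/t treated]

# decision vector x = [F_fw, F_reg, C_in, C_out]  (flows in t/h, conc. in ppm)
def cost(x):
    F_fw, F_reg, _, _ = x
    return COST_FW * F_fw + COST_REG * F_reg

def mixing_balance(x):
    # contaminant balance at the mixing point: fresh water is clean, the
    # recycle carries (1 - RR) of the unit outlet concentration
    F_fw, F_reg, C_in, C_out = x
    return (F_fw + F_reg) * C_in - F_reg * (1.0 - RR) * C_out

def mass_load(x):
    # the unit transfers LOAD kg/h of contaminant into the water
    F_fw, F_reg, C_in, C_out = x
    return (F_fw + F_reg) * (C_out - C_in) * 1e-3 - LOAD   # g/h -> kg/h

constraints = [
    {"type": "eq", "fun": mixing_balance},
    {"type": "eq", "fun": mass_load},
    # recycle cannot exceed the recoverable share of the unit outlet
    {"type": "ineq", "fun": lambda x: RECOVERY * (x[0] + x[1]) - x[1]},
]
bounds = [(0, None), (0, None), (0, CIN_MAX), (0, COUT_MAX)]
x0 = np.array([20.0, 5.0, 5.0, 80.0])

res = minimize(cost, x0, method="SLSQP", bounds=bounds, constraints=constraints)
F_fw, F_reg, C_in, C_out = res.x
print(f"freshwater {F_fw:.1f} t/h, recycle {F_reg:.1f} t/h, cost {res.fun:.1f} $/h")
```

    The bilinear flow-times-concentration terms in the two balances are what make even this toy problem nonconvex; adding integer unit-selection decisions and a rigorous ED unit model, as in the dissertation, turns it into an MINLP.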

    Optimization of integrated water and multiregenerator membrane systems

    A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy (Chemical Engineering), September 2017.
    Water and energy are key resources in the process industry. The water-energy nexus considers the interdependence of water and energy resources and their effect on the environment. Increasing awareness of environmental regulations has heightened the need for process integration techniques that are environmentally benign and economically feasible. Process integration techniques within water network synthesis require a holistic approach to the sustainable use of water through direct reuse and recycle, as well as regeneration followed by reuse and recycle. Conventional methods for water minimisation through water network synthesis often use a "black-box" approach to represent the performance of the regenerators, in which the degree of contaminant removal and the cost of regeneration are represented by linear functions. This leads to suboptimal operating conditions and an inaccurate cost representation of the regeneration units. This work proposes a robust water network superstructure optimisation approach for the synthesis of a multi-regenerator network for the simultaneous minimisation of water and energy. Two types of membrane regenerators are considered, namely electrodialysis and reverse osmosis. Detailed models of the regeneration units are embedded into the water network superstructure optimisation model to simultaneously minimise water, energy, and operating and capital costs. The presence of continuous and integer variables, as well as nonlinear constraints, renders the problem a mixed integer nonlinear program (MINLP). The developed model is applied to two illustrative examples, involving a single contaminant and multiple contaminants respectively, and to an industrial case study of a power utility plant involving a single contaminant, to demonstrate its applicability. For the single-contaminant illustrative example, the model achieved a 43.7% reduction in freshwater use, a 50.9% decrease in wastewater generation and 46% savings in total water network cost. The multi-contaminant illustrative example showed 11.6% freshwater savings, a 15.3% wastewater reduction and 57.3% savings in regeneration and energy cost compared with a water network superstructure using the "black-box" regeneration model. The industrial case study showed savings of up to 18.7% in freshwater consumption, an 82.4% reduction in wastewater and up to 17% savings in total water network cost.
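    For the reverse-osmosis regenerator mentioned above, a feel for the water-energy trade-off can be obtained from the standard solution-diffusion water flux, J_w = A(dP - dpi), with osmotic pressure from the van't Hoff relation, pi = iCRT. The short calculation below uses invented, order-of-magnitude parameters and an idealised pumping-energy estimate; it is only a back-of-the-envelope sketch, not the detailed regenerator model embedded in the thesis.

```python
# Back-of-the-envelope reverse-osmosis regenerator calculation using the
# standard solution-diffusion water flux and van't Hoff osmotic pressure.
# Parameter values are illustrative, not taken from the thesis.
A  = 3.0e-12     # water permeability [m/(s*Pa)]
dP = 40e5        # applied transmembrane pressure [Pa]
R  = 8.314       # gas constant [J/(mol*K)]
T  = 298.15      # temperature [K]
C_feed = 50.0    # NaCl feed concentration [mol/m^3] (~2.9 g/L)
i = 2            # van't Hoff factor for a fully dissociated 1:1 salt

pi_feed = i * C_feed * R * T          # feed osmotic pressure [Pa], permeate assumed salt-free
Jw = A * (dP - pi_feed)               # water flux [m^3/(m^2*s)]

area = 500.0                          # assumed membrane area [m^2]
permeate = Jw * area * 3600.0         # permeate production [m^3/h]

# Idealised pumping energy per m^3 of permeate (no pressure recovery,
# 75% pump efficiency, 50% feed recovery)
recovery, eta = 0.5, 0.75
specific_energy = dP / (recovery * eta) / 3.6e6   # [kWh/m^3]

print(f"osmotic pressure: {pi_feed/1e5:.2f} bar")
print(f"water flux: {Jw*3.6e6:.2f} L/(m^2*h)")
print(f"permeate: {permeate:.1f} m^3/h, ~{specific_energy:.2f} kWh/m^3")
```

    Even this crude estimate shows why tighter contaminant removal (higher pressure or lower recovery) raises the energy term that the superstructure model trades off against freshwater savings.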

    Robust simulation and optimization methods for natural gas liquefaction processes

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 313-324).
    Natural gas is one of the world's leading sources of fuel in terms of both global production and consumption. The abundance of reserves that may be developed at relatively low cost, paired with escalating societal and regulatory pressures to harness low-carbon fuels, situates natural gas in a position of growing importance in the global energy landscape. However, the nonuniform distribution of readily developable natural gas sources around the world necessitates an international gas market that can serve those regions without reasonable access to reserves. International transmission of natural gas via pipeline is generally cost-prohibitive beyond around two thousand miles, and so suppliers instead turn to the production of liquefied natural gas (LNG) to yield a tradable commodity. While the production of LNG is by no means a new technology, it has not occupied a dominant role in the gas trade to date. However, significant growth in LNG exports has been observed within the last few years, and this trend is expected to continue as major new liquefaction operations have become, and continue to become, operational worldwide. Liquefaction of natural gas is an energy-intensive process requiring specialized cryogenic equipment, and is therefore expensive in terms of both operating and capital costs. However, optimization of liquefaction processes is greatly complicated by the inherently complex thermodynamic behavior of process streams that simultaneously change phase and exchange heat at closely matched cryogenic temperatures. The optimal conditions determined for a given process are also generally not transferable between LNG plants, as both the specifics of design (e.g. heat exchanger size and configuration) and of operation (e.g. source gas composition) may vary significantly between sites. Rigorous evaluation of process concepts for new production facilities is also challenging, as economic objectives must be optimized in the presence of constraints involving equipment size and safety precautions even in the initial design phase. The absence of reliable and versatile software to perform such tasks was the impetus for this thesis project. To address these challenging problems, the aim of this thesis was to develop new models, methods and algorithms for robust liquefaction process simulation and optimization, and to synthesize these advances into reliable and versatile software. Recent advances in the sensitivity analysis of nondifferentiable functions provided an advantageous foundation for the development of physically informed yet compact process models that could be embedded in established simulation and optimization algorithms with strong convergence properties. Within this framework, a nonsmooth model for the core unit operation in all industrially relevant liquefaction processes, the multistream heat exchanger, was first formulated. This initial multistream heat exchanger model was then augmented to detect and handle internal phase transitions, and an extension of a classic vapor-liquid equilibrium model was proposed to account for the potential existence of solutions in single-phase regimes, all through the use of additional nonsmooth equations.
    While these initial advances enabled the simulation of liquefaction processes under simple, idealized thermodynamic models, it became apparent that these methods would be unable to handle calculations involving nonideal thermophysical property models reliably. To this end, robust nonsmooth extensions of the celebrated inside-out algorithms were developed. These algorithms allow challenging phase equilibrium calculations to be performed successfully even in the absence of knowledge about the phase regime of the solution, as is the case when model parameters are chosen by a simulation or optimization algorithm. However, this was still not enough to equip realistic liquefaction process models with a completely reliable thermodynamics package, and so new nonsmooth algorithms were designed for the reasonable extrapolation of density from an equation of state under conditions where a given phase does not exist. This procedure greatly enhanced the ability of the nonsmooth inside-out algorithms to converge to physical solutions for mixtures at very high temperature and pressure. These models and submodels were then integrated into a flowsheeting framework to perform realistic simulations of natural gas liquefaction processes robustly, efficiently and with extremely high accuracy. A reliable optimization strategy using an interior-point method and the nonsmooth process models was then developed for complex problem formulations that rigorously minimize thermodynamic irreversibilities. This approach significantly outperforms other strategies proposed in the literature or implemented in commercial software in terms of ease of initialization, convergence rate and quality of the solutions found. The performance observed and the results obtained suggest that modeling and optimizing such processes using nondifferentiable models and appropriate sensitivity analysis techniques is a promising new approach to these challenging problems. Indeed, while liquefaction processes motivated this thesis, the majority of the methods described herein are applicable in general to processes with complex embedded thermodynamic or heat transfer considerations. It is conceivable that these models and algorithms could therefore inform a new, robust generation of process simulation and optimization software. By Harry Alexander James Watson, Ph.D.
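    At the heart of the multistream heat exchanger models discussed above is the requirement that the hot composite curve stay above the cold composite curve by a positive driving force at every enthalpy point. The snippet below is a minimal, idealised version of that feasibility check (constant heat capacities, no phase change, invented stream data); it only illustrates the condition being enforced and is not the nonsmooth formulation developed in the thesis.

```python
# Minimum approach temperature between hot and cold composite curves for a
# simplified multistream heat exchanger (constant mCp, no phase change).
# Stream data are invented for illustration.
import numpy as np

def composite(streams):
    """Piecewise-linear composite curve T(Q) from (T_in, T_out, mCp) streams."""
    temps = sorted({t for s in streams for t in s[:2]})
    q = [0.0]
    for lo, hi in zip(temps[:-1], temps[1:]):
        mcp = sum(s[2] for s in streams if min(s[:2]) <= lo and max(s[:2]) >= hi)
        q.append(q[-1] + mcp * (hi - lo))
    return np.array(q), np.array(temps)   # heat load [kW], temperature [C]

# hot streams cool down, cold streams heat up: (T_in, T_out, mCp [kW/C])
hot  = [(25.0, -160.0, 3.0), (40.0, -100.0, 1.5)]
cold = [(-165.0, 20.0, 4.1)]

q_hot,  t_hot  = composite(hot)
q_cold, t_cold = composite(cold)

# evaluate both piecewise-linear curves on the union of their breakpoints;
# the minimum driving force of a piecewise-linear difference occurs there
q_all = np.unique(np.concatenate([q_hot, q_cold]))
q_all = q_all[q_all <= min(q_hot[-1], q_cold[-1])]
dT = np.interp(q_all, q_hot, t_hot) - np.interp(q_all, q_cold, t_cold)

print(f"minimum approach temperature: {dT.min():.2f} C")
```

    In the nonsmooth models, essentially this same condition can be written as a single equation involving min/max operators over candidate pinch points, so the pinch location does not need to be known in advance.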

    Confirmation and Evidence

    The question of how experience acts on our beliefs, and how beliefs are changed in the light of experience, is one of the oldest and most controversial questions in philosophy in general and epistemology in particular. Philosophy of science has replaced this question with the more specific enquiry into how the results of experiments act on scientific hypotheses and theories. Why do we maintain some theories while discarding others? Two general questions emerge. First, what is our reason to accept the justifying power of experience and, more specifically, of scientific experiments? Second, how can the relationship between theory and evidence be described, and under what circumstances is a scientific theory confirmed by a piece of evidence? The book focuses on the second question: explicating the relationship between theory and evidence and capturing the structure of a valid inductive argument. Special attention is paid to statistical applications that are prevalent in modern empirical science. After an introductory chapter about the link between confirmation and induction, the project starts by discussing qualitative accounts of confirmation in first-order predicate logic. Two major approaches, the Hempelian satisfaction criterion and the hypothetico-deductivist tradition, are contrasted with each other. This is subsequently extended to an account of the confirmation of entire theories, as opposed to the confirmation of single hypotheses. Then the quantitative Bayesian account of confirmation is explained and discussed on the basis of a theory of rational degrees of belief. After that, I present the various schools of statistical inference and explain the foundations of these competing schemes. Finally, I argue for a specific concept of statistical evidence, summarize the results, and sketch some open questions.
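    As a quick numerical illustration of the Bayesian account referred to above: evidence E is said to confirm hypothesis H when it raises H's probability, and the increase P(H|E) - P(H) is one common measure of the degree of confirmation. The probabilities below are made up purely for illustration and are not taken from the book.

```python
# Bayesian confirmation: evidence E confirms hypothesis H when P(H|E) > P(H).
# All probabilities below are invented for illustration.
prior      = 0.30   # P(H)
likelihood = 0.90   # P(E|H)
p_e_not_h  = 0.20   # P(E|~H)

p_e = likelihood * prior + p_e_not_h * (1 - prior)   # total probability of E
posterior = likelihood * prior / p_e                 # Bayes' theorem: P(H|E)

difference_measure = posterior - prior               # d(H,E)
ratio_measure = posterior / prior                    # r(H,E), another common measure

print(f"P(H|E) = {posterior:.3f}")
print(f"d(H,E) = {difference_measure:+.3f}  (positive => E confirms H)")
print(f"r(H,E) = {ratio_measure:.2f}")
```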

    The sound of risk and the art of change. Towards a deeper understanding of nuclear risk perception

    The dissertation is an attempt to cover different views on risk, and specifically on nuclear risk perception. It aims to contribute theoretical foundations as well as empirical evidence on the questions of how to perceive and how to govern global risks such as nuclear technology. The technological risks we face today can be seen as products of the era of modernity: we created them, seeing them as opportunities and as controllable. Major nuclear accidents have repeatedly provided evidence that the assumption of controllability is too strong, and a paradigm shift is needed in scientific reasoning. At the beginning of this work an approach is developed that defines risk as a semi-normative concept, resting on individual judgement as well as on an objective foundation. The dissertation emphasises that an individual's risk perception can, in its extremes, be based solely on feelings on the one hand, or on purely objective information on the other. This can create tension in societies if experts' risk evaluations do not match individual judgements of risk. As a result, social protests occur at the local level, broader civil-society movements are formed, or new political parties, such as the Green party in Germany, are established. On the individual level, empirical studies indicate that risk perception is clustered within societies: women, for example, systematically evaluate the use of nuclear energy as more dangerous than men do. To provide empirical evidence, this work contains three empirical chapters using data from the International Social Survey Programme (ISSP). All three chapters address the question of how dangerous individuals perceive nuclear energy to be, and all are linked to the Fukushima Daiichi nuclear power plant accident of March 2011. The first study examines individual nuclear risk perception in the U.S., Great Britain, France, Germany, and Japan before the accident. The second study uses country samples containing observations before and after the accident to show how the event had an immediate effect on risk perception in societies. A third chapter, using observations after the accident, compares nuclear risk perception with individuals' environmental concern. The studies show how social factors such as gender, education, social status or values shape our concerns and our view of nuclear risk. It also becomes clear that a major nuclear accident affects more risk-tolerant groups, such as educated men, more strongly than already concerned people, raising their levels of risk perception. A main driver of risk perception is a loss of trust in governments and their controlling agencies. The questions that remain are how to govern risks and how to avoid creating risks that are perceived as very dangerous for future generations. The question of how to design institutions able to pass on knowledge of risks, such as nuclear technology, from one generation to the next remains unanswered in this dissertation. In order to avoid creating risks that are too dangerous for future generations, this work advocates an adaptive and integrative risk-management model, as opposed to a command-and-control management model, to control for social ignorance and to avoid an accumulation of risks. Nuclear technology is an example of how new risks have created new dilemmas and questions that we face within societies, across countries and across generations.
    Nuclear technology also forces humanity to overcome common levels of ignorance and not-knowing, for example by using a different language and worldview and by incorporating more information into risk assessments. It also forces all actors to create institutions, and citizens' trust in those institutions, that are able to establish control mechanisms to avoid harm to human beings or a catastrophe caused by a series of nuclear accidents or a nuclear war. The sound of risk teaches us how we need to change. Change is an art.

    Time Localization of Abrupt Changes in Cutting Process using Hilbert Huang Transform

    The cutting process is an extremely dynamic process influenced by different phenomena such as chip formation, dynamic responses and the condition of machining system elements. Different phenomena in the cutting zone leave signatures in different frequency bands of the signals acquired during process monitoring, so time localization of a signal's frequency content is very important. An emerging technique for simultaneous analysis of a signal in the time and frequency domains that can be used for time localization of frequency is the Hilbert-Huang Transform (HHT). It is based on empirical mode decomposition (EMD) of the signal into intrinsic mode functions (IMFs), which act as simple oscillatory modes. The IMFs obtained using EMD can be processed with the Hilbert transform, and the instantaneous frequency of the signal can be computed. This paper presents a methodology for the time localization of cutting process stops during intermittent turning. A cutting process stop leads to abrupt changes in the acquired signal, correlated with a certain frequency band; the frequency band related to these abrupt changes is localized in time using the HHT. The potentials and limitations of applying the HHT to machining process monitoring are shown.
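    A minimal sketch of the EMD-plus-Hilbert workflow described in the abstract, applied to a synthetic signal with an abrupt change, is shown below. It assumes the third-party PyEMD package (installed as EMD-signal) and invented signal parameters; it is not the paper's implementation.

```python
# Empirical mode decomposition + Hilbert transform on a synthetic signal with
# an abrupt change, mimicking the time-localization idea described above.
# Assumes the PyEMD package (pip install EMD-signal); not the paper's code.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD

fs = 2000.0                              # sampling rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t)      # "stable cutting" component
signal[t >= 0.6] += 0.8 * np.sin(2 * np.pi * 400 * t[t >= 0.6])  # abrupt change at 0.6 s

imfs = EMD()(signal)                     # decompose into intrinsic mode functions

for k, imf in enumerate(imfs):
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    # instantaneous frequency from the unwrapped phase of the analytic signal
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    print(f"IMF {k}: mean instantaneous frequency {inst_freq.mean():7.1f} Hz, "
          f"max envelope {amplitude.max():.2f}")

# The IMF whose Hilbert envelope jumps near t = 0.6 s localizes the abrupt change.
change_imf = np.argmax([np.abs(np.diff(np.abs(hilbert(imf)))).max() for imf in imfs])
print(f"IMF with the sharpest envelope change: {change_imf}")
```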

    Proceedings of the NASA Conference on Space Telerobotics, volume 4

    Papers presented at the NASA Conference on Space Telerobotics are compiled. The theme of the conference was man-machine collaboration in space. The conference provided a forum for researchers and engineers to exchange ideas on the research and development required for the application of telerobotic technology to the space systems planned for the 1990s and beyond. Volume 4 contains papers related to the following subject areas: manipulator control; telemanipulation; flight experiments (systems and simulators); sensor-based planning; robot kinematics, dynamics, and control; robot task planning and assembly; and research activities at the NASA Langley Research Center.