
    Optimisation of Product Recovery Options in End-of-Life Product Disassembly by Robots

    In a circular economy, product recovery strategies such as reuse, recycling, and remanufacturing play an important role at the end of a product's life. A sustainability model was developed to solve the problem of sequence-dependent robotic disassembly line balancing. This research aimed to assess the viability of the model, which was optimised using the Multi-Objective Bees Algorithm in a robotic disassembly setting. Two industrial gear pumps were used as case studies. Four objectives were set: maximising profit, energy savings, and emissions reductions, and minimising line imbalance. Several product recovery scenarios were developed to find the best recovery plan for each component. An efficient metaheuristic, the Bees Algorithm, was used to find the best solution. The robotic disassembly plans were generated and assigned to robotic workstations simultaneously. Applying the proposed sustainability model to end-of-life industrial gear pumps shows its applicability to real-world problems. The Multi-Objective Bees Algorithm was able to find the best scenario for product recovery by assigning each component to recycling, reuse, remanufacturing, or disposal. The performance of the algorithm is consistent, producing similar results for all sustainable strategies. This study addresses issues that arise with product recovery options for end-of-life products and provides optimal solutions through case studies.
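
    The search described above can be illustrated with a compact multi-objective Bees Algorithm loop. The sketch below is not the authors' implementation: the task count, the evaluate() objective model, and the swap-based neighbourhood move are illustrative assumptions; only the scout/elite-site/forager structure and the Pareto archiving follow the algorithm named in the abstract.

        # Minimal multi-objective Bees Algorithm sketch (Python). A solution is a
        # permutation of disassembly tasks; evaluate() stands in for the paper's
        # sustainability model and returns (profit, energy saving, emission
        # reduction, -line imbalance), all treated as objectives to maximise.
        import random

        N_TASKS = 10          # hypothetical number of disassembly tasks
        N_SCOUTS = 30         # scout bees (random solutions) per iteration
        N_ELITE = 5           # best sites selected for neighbourhood search
        N_FORAGERS = 10       # forager bees recruited around each elite site
        N_ITER = 100

        def evaluate(seq):
            # Placeholder objectives; the real model would price each component's
            # recovery option (reuse, recycling, remanufacturing, disposal).
            profit = sum((N_TASKS - i) * (t + 1) for i, t in enumerate(seq))
            energy = sum(seq[:N_TASKS // 2])
            emission = sum(seq[N_TASKS // 2:])
            imbalance = abs(sum(seq[::2]) - sum(seq[1::2]))
            return (profit, energy, emission, -imbalance)

        def dominates(a, b):
            return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

        def neighbour(seq):
            s = seq[:]                               # local search: swap two tasks
            i, j = random.sample(range(len(s)), 2)
            s[i], s[j] = s[j], s[i]
            return s

        def update_archive(archive, seq, score):
            if any(dominates(sc, score) for _, sc in archive):
                return                               # dominated: discard candidate
            archive[:] = [(a, sc) for a, sc in archive if not dominates(score, sc)]
            archive.append((seq, score))

        archive = []
        for _ in range(N_ITER):
            scouts = [random.sample(range(N_TASKS), N_TASKS) for _ in range(N_SCOUTS)]
            ranked = sorted(((s, evaluate(s)) for s in scouts),
                            key=lambda p: sum(p[1]), reverse=True)
            for seq, score in ranked[:N_ELITE]:
                update_archive(archive, seq, score)
                for _ in range(N_FORAGERS):          # recruited bees refine elite sites
                    cand = neighbour(seq)
                    update_archive(archive, cand, evaluate(cand))

        print(len(archive), "non-dominated recovery plans found")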

    Is the Cell Really a Machine?

    It has become customary to conceptualize the living cell as an intricate piece of machinery, different from a man-made machine only in terms of its superior complexity. This familiar understanding grounds the conviction that a cell's organization can be explained reductionistically, as well as the idea that its molecular pathways can be construed as deterministic circuits. The machine conception of the cell owes a great deal of its success to the methods traditionally used in molecular biology. However, the recent introduction of novel experimental techniques capable of tracking individual molecules within cells in real time is leading to the rapid accumulation of data that are inconsistent with an engineering view of the cell. This paper examines four major domains of current research in which the challenges to the machine conception of the cell are particularly pronounced: cellular architecture, protein complexes, intracellular transport, and cellular behaviour. It argues that a new theoretical understanding of the cell is emerging from the study of these phenomena, one which emphasizes the dynamic, self-organizing nature of its constitution, the fluidity and plasticity of its components, and the stochasticity and non-linearity of its underlying processes.

    Optimum Design and Operation of Combined Cooling Heating and Power System with Uncertainty

    Combined cooling, heating, and power (CCHP) systems utilize renewable energy sources, waste heat, and thermally driven cooling technology to provide energy in three forms simultaneously. They are reliable by virtue of independence from the main grid and highly efficient because of cascaded energy utilization. These merits make CCHP systems strong candidates as energy suppliers for commercial buildings. Due to the complexity of CCHP systems and environmental uncertainty, conventional design and operation strategies that depend on expertise or experience may lose effectiveness and protract the prototyping process. Automation-oriented approaches, including machine learning and optimization, can be applied at both the design and operation stages to accelerate decision-making without sacrificing energy efficiency. As the premise of design and operation for the combined system, information about building energy consumption must be determined first. Therefore, this thesis first constructs deep learning (DL) models to forecast energy demands for a large-scale dataset. Building types and multiple energy demand types are embedded in the DL model for the first time, making it versatile for prediction. The long short-term memory (LSTM) model forecasts 50.7% of the tasks with a coefficient of variation of the root mean square error (CVRMSE) lower than 20%, and 60% of the tasks predicted by the LSTM satisfy ASHRAE Guideline 14 with a CVRMSE under 30%. Thermal conversion systems, including power generation subsystems and waste heat recovery units, play a vital role in the overall performance of CCHP systems, but the wide choice of components and their nonlinear characteristics challenge the automation of system design. Second, this thesis therefore designs a configuration optimization framework consisting of thermodynamic cycle representation, evaluation, and an optimizer to accelerate the system design process and maximize thermal efficiency. The framework is the first to combine graph-based knowledge and thermodynamic laws to generate new supercritical CO2 (S-CO2) power generation system configurations. The framework is validated by optimizing S-CO2 system configurations under simple and complex limits on the number of components. The optimized S-CO2 system reaches 49.8% thermal efficiency, 2.3% higher than the state of the art. Third, an operation strategy under uncertainty is proposed for a CCHP system serving a hospital with a floor area of 22,422 m2 in College Park, Maryland. The hospital's energy demands are forecast with the DL model, and the S-CO2 power subsystem obtained from the configuration optimizer is implemented in the CCHP system. A stochastic approximation is combined with an autoregression model to characterize the hospital's uncertain energy demands. Load-following strategies, stochastic dynamic programming (SDP), and approximation approaches are implemented for CCHP system operation with and without uncertainty. In the case study, the optimization-based operation outperforms the best load-following strategy by 14% in annual cost, and the approximation-based operation strategy greatly improves the computational efficiency of SDP. The daily operating cost with uncertain cooling, heating, and electricity demands is about $0.061/m2, and the potential annual cost is about $22.33/m2.
    This thesis fills the gap in forecasting multiple energy types for multiple building types via DL models, advances the design automation of S-CO2 systems through configuration optimization, and accelerates operation optimization of a CCHP system under uncertainty with an approximation approach. In-depth data-driven methods and diversified optimization techniques should be investigated further to boost system efficiency and advance the automation of CCHP systems.
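
    The forecast quality figures quoted above use the coefficient of variation of the RMSE. As a minimal sketch (the demand arrays are invented, not thesis data), the metric and the 30% ASHRAE Guideline 14 check cited in the abstract can be computed as follows:

        # CVRMSE = 100 * RMSE / mean(observed); values below 30% meet the criterion.
        import numpy as np

        def cvrmse(y_true, y_pred):
            y_true = np.asarray(y_true, dtype=float)
            y_pred = np.asarray(y_pred, dtype=float)
            rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
            return 100.0 * rmse / np.mean(y_true)

        measured = np.array([120.0, 135.0, 150.0, 142.0])   # hypothetical hourly loads (kWh)
        forecast = np.array([118.0, 140.0, 145.0, 150.0])
        score = cvrmse(measured, forecast)
        print(f"CVRMSE = {score:.1f}% ->", "meets" if score < 30 else "fails", "the 30% criterion")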

    Solving Multi-objective Integer Programs using Convex Preference Cones

    This survey has two objectives: first, to identify individuals who were victims of some type of crime and the way in which the crime occurred; and second, to measure the effectiveness of the relevant authorities once individuals reported the crime they suffered. In addition, the ENVEI seeks to probe citizens' perceptions of the justice institutions and the rule of law in Mexico.

    Adapting Swarm Intelligence for the Self-Assembly of Prespecified Artificial Structures

    The self-assembly problem involves designing individual behaviors that a collection of agents can follow in order to form a given target structure. An effective solution would potentially allow self-assembly to be used as an automated construction tool, for example, in dangerous or inaccessible environments. However, existing methodologies are generally limited in that they are either only capable of assembling a very limited range of simple structures, or only applicable in an idealized environment having few or no constraints on the agents' motion. The research presented here seeks to overcome these limitations by studying the self-assembly of a diverse class of non-trivial structures (building, bridge, etc.) from different-sized blocks, whose motion in a continuous, three-dimensional environment is constrained by gravity and block impenetrability. These constraints impose ordering restrictions on the self-assembly process, and necessitate the assembly and disassembly of temporary structures such as staircases. It is shown that self-assembly under these conditions can be accomplished through an integration of several techniques from the field of swarm intelligence. Specifically, this work extends and incorporates computational models of distributed construction, collective motion, and communication via local signaling. These mechanisms enable blocks to determine where to deposit themselves, to effectively move through continuous space, and to coordinate their behavior over time, while using only local information. Further, an algorithm is presented that, given a target structure, automatically generates distributed control rules that encode individual block behaviors. It is formally proved that under reasonable assumptions, these rules will lead to the emergence of correct system-level coordination that allows self-assembly to complete in spite of environmental constraints. The methodology is also verified experimentally by generating rules for a diverse set of structures, and testing them via simulations. Finally, it is shown that for some structures, the generated rules are able to parsimoniously capture the necessary behaviors. This work yields a better understanding of the complex relationship between local behaviors and global structures in non-trivial self-assembly processes, and presents a step towards their use in the real world.
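
    The gravity and impenetrability constraints mentioned above induce an ordering on block placements. The toy sketch below is not the dissertation's distributed rule generator; it only illustrates, with hypothetical (x, y, z) unit-block coordinates, why lower blocks must be deposited before the blocks they support and why overhangs demand temporary scaffolding such as staircases.

        # Greedy ordering of block placements for a target structure: a block may be
        # placed once it rests on the ground (z == 0) or on an already placed block.
        def assembly_order(target):
            placed, order = set(), []
            remaining = set(target)
            while remaining:
                progress = False
                for block in sorted(remaining):          # deterministic scan
                    x, y, z = block
                    if z == 0 or (x, y, z - 1) in placed:
                        placed.add(block)
                        order.append(block)
                        remaining.discard(block)
                        progress = True
                if not progress:
                    # Overhanging blocks would need temporary structures, which this
                    # toy ordering does not attempt to build.
                    raise ValueError("unsupported blocks remain: %r" % remaining)
            return order

        # A two-block tower next to a single block: the lower block comes out first.
        print(assembly_order({(0, 0, 0), (0, 0, 1), (1, 0, 0)}))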

    Program variation for software security


    Investigating the latency cost of statistical learning of a Gaussian mixture simulating on a convolutional density network with adaptive batch size technique for background modeling

    Background modeling is a promising field of study in video analysis, with a wide range of applications in video surveillance. Deep neural networks have proliferated in recent years as a result of effective learning-based approaches to motion analysis. However, these strategies provide only a partial description of the observed scenes because they use a single-valued mapping to estimate the target background's temporal conditional averages. On the other hand, statistical learning in the imagery domain has become one of the most widely used approaches due to its high adaptability to dynamic context transformation, especially Gaussian Mixture Models. Specifically, these probabilistic models adjust latent parameters to maximize the expectation of the observed data; however, this approach concentrates only on short-term contextual dynamics. Over prolonged observation, statistical methods struggle to preserve generalization across the long-term variation of the image data. Balancing the trade-off between traditional machine learning models and deep neural networks requires an integrated approach to ensure accuracy while maintaining a high speed of execution. In this research, we present a novel two-stage approach for change detection using two convolutional neural networks. The first architecture is based on unsupervised Gaussian mixture statistical learning, which is used to classify the salient features of scenes. The second one implements a lightweight pipeline of foreground detection. Our two-stage system has a total of approximately 3.5K parameters but still converges quickly to complex motion patterns. Our experiments on publicly accessible datasets demonstrate that our proposed networks are not only capable of generalizing regions of moving objects with promising results in unseen scenarios, but are also competitive in terms of quality and effectiveness of foreground segmentation. Apart from modeling the data's underlying generator as a non-convex optimization problem, we briefly examine the communication cost associated with network training by using a distributed data-parallel scheme to simulate a stochastic gradient descent algorithm with communication avoidance for parallel machine learning.
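
    For context, the classical Gaussian-mixture baseline that the proposed two-stage network builds on can be run in a few lines with OpenCV's MOG2 background subtractor. This is a generic sketch, not the paper's convolutional density network, and "input.mp4" is a placeholder path rather than one of the paper's datasets; it also assumes a display is available for the preview window.

        # Per-pixel mixture-of-Gaussians background subtraction with OpenCV.
        import cv2

        subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                        varThreshold=16,
                                                        detectShadows=True)
        cap = cv2.VideoCapture("input.mp4")              # placeholder video source
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Pixels poorly explained by the adaptive background modes become foreground.
            fg_mask = subtractor.apply(frame)
            cv2.imshow("foreground", fg_mask)
            if (cv2.waitKey(30) & 0xFF) == 27:           # press Esc to stop
                break
        cap.release()
        cv2.destroyAllWindows()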