
    Performance analysis of a palletizing system

    When designing the layout of the material handling system for a warehouse, there is a need to analyse overall system performance. Since warehouses are typically very large and complex systems, it is infeasible to build a simulation model of the entire system. Our approach is to divide the system into subsystems that are small enough to be captured in simulation models. These models can later be assembled to obtain a simulation model of the entire system. In this case study we assess the feasibility of this approach by creating a simulation model of part of a warehouse and verifying whether it can be embedded in a larger simulation model. The subsystem used in our case study is a container unloading and automatic palletizing system. This system was chosen because it has already been studied extensively using another simulation tool. We also perform a performance analysis of this system, both to arrive at an optimal layout for the subsystem and to reproduce the results of the earlier study for validation. For our performance analysis we created a chi model of the unloading and palletizing area. The process algebra chi has been used extensively for modeling and simulation of real-time manufacturing systems, and our case study also serves to assess its suitability for modeling and simulation in a logistics environment. Our experiments produced roughly the same outcomes as the earlier study: for the required throughput, the layout chosen in that study is optimal. We also conclude that chi is well suited to modeling logistics systems. However, considering the extensive time it takes to run simulations of a rather small part of a warehouse in chi, we conclude that it is infeasible to simulate entire warehousing systems by integrating the simulation models of all subsystems into one model. To overcome this problem, aggregate modeling can be used.
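    The abstract does not reproduce the chi model itself; as a rough illustration of the kind of subsystem-level discrete-event model it describes (a container unloading process feeding an automatic palletizer through a buffer), here is a minimal sketch in Python using SimPy. All process times, the buffer capacity, and the case count are hypothetical placeholders, not values from the study.

```python
import random
import simpy

RANDOM_SEED = 42
UNLOAD_TIME = 12.0     # hypothetical mean seconds to unload one case from the container
PALLETIZE_TIME = 9.0   # hypothetical mean seconds to place one case on a pallet
N_CASES = 500          # hypothetical number of cases in one container

def unloader(env, buffer):
    """Unload cases from the container onto the buffer conveyor."""
    for _ in range(N_CASES):
        yield env.timeout(random.expovariate(1.0 / UNLOAD_TIME))
        yield buffer.put(1)          # blocks if the buffer conveyor is full

def palletizer(env, buffer, done):
    """Take cases from the buffer and palletize them one by one."""
    for _ in range(N_CASES):
        yield buffer.get(1)          # blocks if the buffer is empty
        yield env.timeout(random.expovariate(1.0 / PALLETIZE_TIME))
    done.append(env.now)

random.seed(RANDOM_SEED)
env = simpy.Environment()
buffer = simpy.Container(env, capacity=50)   # hypothetical buffer conveyor capacity
done = []
env.process(unloader(env, buffer))
env.process(palletizer(env, buffer, done))
env.run()
print(f"All {N_CASES} cases palletized after {done[0]:.0f} s "
      f"({N_CASES / done[0] * 3600:.0f} cases/hour)")
```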

    Climate related sea-level variations over the past two millennia

    Author Posting. © The Author(s), 2011. This is the author's version of the work, posted here by permission of the National Academy of Sciences for personal use, not for redistribution. The definitive version was published in Proceedings of the National Academy of Sciences of the United States of America 108 (2011): 11017-11022, doi:10.1073/pnas.1015619108. We present new sea-level reconstructions for the past 2100 years based on salt-marsh sedimentary sequences from the US Atlantic coast. The data from North Carolina reveal four phases of persistent sea-level change after correction for glacial isostatic adjustment. Sea level was stable from at least BC 100 until AD 950. It then increased for 400 years at a rate of 0.6 mm/yr, followed by a further period of stable, or slightly falling, sea level that persisted until the late 19th century. Since then, sea level has risen at an average rate of 2.1 mm/yr, representing the steepest, century-scale increase of the past two millennia. This rate was initiated between AD 1865 and 1892. Using an extended semi-empirical modeling approach, we show that these sea-level changes are consistent with global temperature for at least the past millennium. Research was supported by NSF grants (EAR-0951686) to BPH and JPD. ACK acknowledges a NOSAMS internship, a UPenn paleontology stipend, and grants from GSA and NAMS. North Carolina sea-level research was funded by NOAA (NA05NOS4781182), USGS (02ERAG0044) and NSF (EAR-0717364) grants to BPH with S. Culver and R. Corbett (East Carolina University). JPD (EAR-0309129) and MEM (ATM-0542356) acknowledge NSF support. MV acknowledges Academy of Finland Project 123113 and COST Action ES0701.
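    The extended semi-empirical model used in the paper is not given in the abstract; purely for orientation, the sketch below integrates the basic semi-empirical relation dH/dt = a (T(t) - T0) over a synthetic temperature history. The coefficient a, the equilibrium temperature T0, and the temperature series are hypothetical placeholders, not the values fitted in the study.

```python
import numpy as np

# Basic semi-empirical relation: rate of sea-level change is proportional to the
# departure of temperature from an equilibrium value, dH/dt = a * (T(t) - T0).
a, T0 = 3.4, -0.5          # hypothetical sensitivity (mm/yr per K) and equilibrium anomaly (K)

years = np.arange(1000, 2001)
# Hypothetical smooth temperature-anomaly history (K), a stand-in for a reconstruction
T = -0.4 + 0.5 * (1 + np.tanh((years - 1900) / 60.0))

rate = a * (T - T0)                 # mm/yr
H = np.cumsum(rate) * 1.0           # integrate with 1-yr steps -> mm
H -= H[years == 1900][0]            # reference sea level to AD 1900

print(f"Modelled mean rate 1950-2000: {rate[years >= 1950].mean():.2f} mm/yr")
```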

    Methyltetrazine as a small live-cell compatible bioorthogonal handle for imaging enzyme activities in situ

    Bioorthogonal chemistry combines well with activity-based protein profiling, as it allows for the introduction of detection tags without significantly influencing the physicochemical and biological functions of the probe. In this work, we introduced methyltetrazinylalanine (MeTz-Ala), a close mimic of phenylalanine, into a dipeptide fluoromethylketone cysteine protease inhibitor. Following covalent and irreversible inhibition, the tetrazine allows visualisation of the captured cathepsin activity by means of inverse electron-demand Diels-Alder ligation in cell lysates and live cells, demonstrating that tetrazines can be used as live-cell compatible, minimal bioorthogonal tags in activity-based protein profiling.

    Low potency toxins reveal dense interaction networks in metabolism

    Background: The chemicals of metabolism are constructed from a small set of atoms and bonds. This may be because chemical structures outside the chemical space in which life operates are incompatible with biochemistry, or because mechanisms to make or utilize such excluded structures have not evolved. In this paper I address the extent to which biochemistry is restricted to a small fraction of the chemical space of possible chemicals, a restricted subset that I call Biochemical Space. I explore evidence that this restriction is at least in part due to selection against specific structures, and suggest a mechanism by which this occurs. Results: Chemicals that contain structures outside Biochemical Space (UnBiological groups) are more likely to be toxic to a wide range of organisms, even though they have no specifically toxic groups and no obvious mechanism of toxicity. This correlation of UnBiological content with toxicity is stronger for low potency (millimolar) toxins. I relate this to the observation that most chemicals interact with many biological structures at low, millimolar, potency. I hypothesise that life has to select its components not only to have a specific set of functions but also to avoid interactions with all the other components of life that might degrade their function. Conclusions: The chemistry of life has to form a dense, self-consistent network of chemical structures, and cannot easily be arbitrarily extended. The toxicity of arbitrary chemicals is a reflection of the disruption to that network occasioned by trying to insert a chemical into it without also selecting all the other components to tolerate that chemical. This suggests new ways to test for the toxicity of chemicals, and implies that engineering organisms to make high concentrations of materials such as chemical precursors or fuels may require more substantial engineering than of the synthetic pathways alone.
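    The abstract does not state how UnBiological content is scored or how the toxicity correlation is computed; the sketch below illustrates, on entirely made-up data, the kind of analysis described: testing whether a higher fraction of non-biochemical substructures correlates with toxic potency, separately for low-potency chemicals. The scoring, data, and millimolar threshold are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical dataset: fraction of "UnBiological" substructures per chemical (0..1)
# and its toxic potency as -log10(effective concentration in mol/l); higher = more potent.
n = 200
unbio_fraction = rng.uniform(0.0, 1.0, n)
potency = 2.5 + 1.5 * unbio_fraction + rng.normal(0.0, 0.8, n)   # synthetic relationship

# Split off the low-potency (roughly millimolar and weaker) chemicals
low = potency < 3.0
rho_all, p_all = spearmanr(unbio_fraction, potency)
rho_low, p_low = spearmanr(unbio_fraction[low], potency[low])

print(f"All chemicals:     rho = {rho_all:.2f} (p = {p_all:.1e})")
print(f"Low potency only:  rho = {rho_low:.2f} (p = {p_low:.1e})")
```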

    New H-mode regimes with small ELMs and high thermal confinement in the Joint European Torus

    New H-mode regimes with high confinement, low core impurity accumulation, and small edge-localized mode perturbations have been obtained in magnetically confined plasmas at the Joint European Torus tokamak. Such regimes are achieved by means of optimized particle fueling conditions at high input power, current, and magnetic field, which lead to a self-organized state with a strong increase in rotation and ion temperature and a decrease in the edge density. An interplay between core and edge plasma regions leads to reduced turbulence levels and outward impurity convection. These results pave the way to an attractive alternative to the standard plasmas considered for fusion energy generation in a tokamak with a metallic wall environment such as the one expected in ITER. Published under an exclusive license by AIP Publishing.

    Shattered pellet injection experiments at JET in support of the ITER disruption mitigation system design

    A series of experiments has been executed at JET to assess the efficacy of the newly installed shattered pellet injection (SPI) system in mitigating the effects of disruptions. Issues important for the ITER disruption mitigation system, such as thermal load mitigation, avoidance of runaway electron (RE) formation, radiation asymmetries during thermal quench mitigation, electromagnetic load control and RE energy dissipation, have been addressed over a large parameter range. The efficiency of the mitigation has been examined for the various SPI strategies. The paper summarises the results from these JET SPI experiments and discusses their implications for the ITER disruption mitigation scheme.

    Overview of JET results for optimising ITER operation

    The JET 2019–2020 scientific and technological programme exploited the results of years of concerted scientific and engineering work, including the ITER-like wall (ILW: Be wall and W divertor) installed in 2010, improved diagnostic capabilities now fully available, and a major neutral beam injection upgrade providing record power in 2019–2020, and it tested the technical and procedural preparation for safe operation with tritium. Research along three complementary axes yielded a wealth of new results. Firstly, the JET plasma programme delivered scenarios suitable for high fusion power and alpha particle (α) physics in the coming D–T campaign (DTE2), with record sustained neutron rates, as well as plasmas for clarifying the impact of isotope mass on the plasma core, edge and plasma-wall interactions, and for ITER pre-fusion power operation. The efficacy of the newly installed shattered pellet injector for mitigating disruption forces and runaway electrons was demonstrated. Secondly, research on the consequences of long-term exposure to JET-ILW plasma was completed, with emphasis on wall damage and fuel retention, and with analyses of wall materials and dust particles that will help validate assumptions and codes for the design and operation of ITER and DEMO. Thirdly, the nuclear technology programme, aiming to deliver maximum technological return from operations in D, T and D–T, benefited from the highest D–D neutron yield in years, securing results for validating radiation transport and activation codes, and nuclear data for ITER.

    A control oriented strategy of disruption prediction to avoid the configuration collapse of tokamak reactors


    Disruption prediction at JET through deep convolutional neural networks using spatiotemporal information from plasma profiles

    In view of future high-power nuclear fusion experiments, the early identification of disruptions is a mandatory requirement, and the main goal is presently moving from disruption mitigation to disruption avoidance and control. In this work, a deep convolutional neural network (CNN) is proposed to provide early detection of disruptive events at JET. The CNN's ability to learn relevant features, avoiding hand-engineered feature extraction, has been exploited to extract the spatiotemporal information from 1D plasma profiles. The model is trained with regularly terminated discharges and with automatically selected disruptive phases of disruptions, coming from the recent ITER-like wall experiments. The prediction performance is evaluated using a set of discharges representative of different operating scenarios, and an in-depth analysis is made to evaluate how performance evolves with the experimental conditions considered. Finally, as real-time triggers and termination schemes are being developed at JET, the proposed model has been tested on a set of recent experiments dedicated to plasma termination for disruption avoidance and mitigation. The CNN model demonstrates very high performance, and the exploitation of 1D plasma profiles as model input allows us to understand the underlying physical phenomena behind the predictor's decisions.
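    The paper's network architecture is not described in the abstract; the sketch below is a generic, hypothetical PyTorch example of a CNN that maps a stack of 1D plasma profiles over a time window to a disruption-risk score. The layer sizes, the choice of two input profiles, and the window length are illustrative assumptions, not the model used at JET.

```python
import torch
import torch.nn as nn

class ProfileCNN(nn.Module):
    """Illustrative CNN: the input is a stack of 1D plasma profiles over a time
    window, treated as a (profiles x radial points x time steps) image; the
    output is a disruption-risk score in [0, 1]. All sizes are hypothetical."""

    def __init__(self, n_profiles=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_profiles, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # collapse the spatiotemporal dimensions
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))

# Hypothetical batch: 8 samples, 2 profiles (e.g. n_e and T_e), 32 radial points, 64 time steps
x = torch.randn(8, 2, 32, 64)
risk = ProfileCNN()(x)
print(risk.shape)   # torch.Size([8, 1]) -- one disruption-risk score per sample
```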

    Testing a prediction model for the H-mode density pedestal against JET-ILW pedestals

    The neutral ionisation model proposed by Groebner et al (2002 Phys. Plasmas 9 2134) to determine the plasma density profile in the H-mode pedestal is extended to include charge exchange processes in the pedestal, stimulated by the ideas of Mahdavi et al (2003 Phys. Plasmas 10 3984). The model is then tested against JET H-mode pedestal data, both in a 'standalone' version using experimental temperature profiles and by incorporating it in the Europed version of EPED. The model is able to predict the density pedestal over a wide range of conditions with good accuracy. It is also able to predict the experimentally observed isotope effect on the density pedestal that eludes simpler neutral ionisation models.
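    The model equations are not given in the abstract; for orientation only, the sketch below evaluates a simple slab estimate of neutral penetration into the pedestal, lambda ≈ v_n / (n_e <sigma v>_iz), and shows qualitatively how charge exchange, by letting neutrals pick up the local ion temperature and hence a higher speed, deepens the penetration. All numbers are order-of-magnitude placeholders, and the estimate is not the Groebner/Mahdavi model as implemented in the paper.

```python
import numpy as np

m_D = 2 * 1.67e-27          # deuterium neutral mass [kg]
e = 1.602e-19               # J per eV

def thermal_speed(T_eV):
    """Thermal speed of a neutral at temperature T_eV (eV)."""
    return np.sqrt(2 * e * T_eV / m_D)   # [m/s]

n_e = 3e19                  # hypothetical pedestal electron density [m^-3]
sigma_v_iz = 3e-14          # hypothetical ionisation rate coefficient [m^3/s]

# Cold (Franck-Condon) neutrals vs charge-exchange neutrals near T_i ~ 300 eV
for label, T_n in [("no CX (3 eV neutrals)", 3.0), ("with CX (300 eV neutrals)", 300.0)]:
    lam = thermal_speed(T_n) / (n_e * sigma_v_iz)   # slab penetration depth estimate
    print(f"{label}: penetration depth ~ {lam * 100:.1f} cm")
```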