
    Development of an X-band Photoinjector at SLAC

    As part of a National Cancer Institute contract to develop a compact source of monoenergetic X-rays via Compton backscattering, we have completed the design and construction of a 5.5-cell photoinjector operating at 11.424 GHz. Successful completion of this project will result in the capability of generating a monoenergetic X-ray beam, continuously tunable from 20 to 85 keV. The immediate goal is the development of a photoinjector producing 7 MeV, 0.5 nC, sub-picosecond electron bunches with normalized RMS emittances of approximately 1 pi mm-mrad at repetition rates up to 60 Hz. This beam will then be further accelerated to 60 MeV using a 1.05 m accelerating structure. This photoinjector differs from the traditional 1.5-cell design both in the number of cells and in the symmetrically fed input coupler cell. Its operating frequency is also unique. Since the cathode is non-removable, cold-test tuning was somewhat more difficult than in other designs. We will present results of the "bead-drop" measurements used in tuning this structure. Initial beam measurements are currently in progress, and their results will be presented along with results of RF conditioning to high gradients at X-band. Details of the RF system, emittance-compensating solenoid, and cathode laser system, as well as PARMELA simulations, will also be presented. Comment: 3 pages, 6 figures, 1 Table, LINAC 200
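
    For orientation, the quoted 20-85 keV tuning range is consistent with the small-angle Compton backscattering relation, sketched below in LaTeX. The drive-laser photon energy of roughly 1.55 eV (an 800 nm laser) is an assumption made only for this illustration; the abstract does not state the laser wavelength.

        % Energy of a head-on Compton-backscattered photon (electron recoil neglected).
        % E_L: laser photon energy, \gamma: electron Lorentz factor, \theta: observation angle.
        \[
          E_x \approx \frac{4\gamma^2 E_L}{1 + (\gamma\theta)^2},
          \qquad \gamma = \frac{E_e}{m_e c^2}.
        \]
        % Example (assumed E_L = 1.55 eV): E_e = 60 MeV gives \gamma \approx 117 and an
        % on-axis E_x \approx 4\gamma^2 E_L \approx 85 keV; E_e \approx 29 MeV gives roughly 20 keV.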

    Stabilizing reinforcement learning control: A modular framework for optimizing over all stable behavior

    We propose a framework for the design of feedback controllers that combines the optimization-driven and model-free advantages of deep reinforcement learning with the stability guarantees provided by using the Youla-Kucera parameterization to define the search domain. Recent advances in behavioral systems allow us to construct a data-driven internal model; this enables an alternative realization of the Youla-Kucera parameterization based entirely on input-output exploration data. Perhaps of independent interest, we formulate and analyze the stability of such data-driven models in the presence of noise. The Youla-Kucera approach requires a stable "parameter" for controller design. For the training of reinforcement learning agents, the set of all stable linear operators is given explicitly through a matrix factorization approach. Moreover, a nonlinear extension is given using a neural network to express a parameterized set of stable operators, which enables seamless integration with standard deep learning libraries. Finally, we show how these ideas can also be applied to tune fixed-structure controllers. Comment: Preprint; 18 pages. arXiv admin note: text overlap with arXiv:2304.0342
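
    The key construction in this abstract is a search domain consisting only of stable operators. As a hedged illustration of that idea (not the paper's matrix-factorization parameterization), the Python sketch below maps an unconstrained matrix to a Schur-stable one by scaling its spectral norm below one; all names and values are invented for the example.

        import numpy as np

        # Illustrative stand-in only: one simple way to map an unconstrained parameter
        # matrix to a Schur-stable matrix by bounding its spectral norm below one.
        # (The paper describes a matrix-factorization parameterization of all stable
        # operators; this simpler construction covers only a subset of them.)

        def stable_matrix(W, rho_raw):
            """Return A with spectral norm < 1, hence spectral radius < 1 (Schur stable)."""
            rho = np.tanh(rho_raw)                   # contraction factor in (-1, 1)
            sigma_max = np.linalg.norm(W, 2)         # largest singular value of W
            return abs(rho) * W / (sigma_max + 1e-8)

        rng = np.random.default_rng(0)
        A = stable_matrix(rng.standard_normal((4, 4)), rho_raw=2.0)
        print("spectral radius:", max(abs(np.linalg.eigvals(A))))   # < 1 by construction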

    Local stressors mask the effects of warming in freshwater ecosystems

    Climate warming is a ubiquitous stressor in freshwater ecosystems, yet its interactive effects with other stressors are poorly understood. We address this knowledge gap by testing the ability of three contrasting null models to predict the joint impacts of warming and a range of other aquatic stressors, using a new database of 296 experimental combinations. Despite concerns that stressors will interact to cause synergisms, we found that net impacts were usually best explained by the effect of the stronger stressor alone (the dominance null model), especially if this stressor was a local disturbance associated with human land use. Prediction accuracy depended on stressor identity and on how asymmetric the stressors were in the magnitude of their effects. These findings suggest we can effectively predict the impacts of multiple stressors by focusing on the stronger stressor, as habitat alteration, nutrients, and contamination often override the biological consequences of higher temperatures in freshwater ecosystems.
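
    For readers unfamiliar with null models of combined stressor effects, the sketch below shows how predictions can be computed from two single-stressor effects expressed as proportional changes. The dominance model is the one named in the abstract; the additive and multiplicative forms are common alternatives assumed here for illustration, and all numbers are hypothetical.

        # Illustrative only: predictions from three simple null models for the joint
        # effect of two stressors, with single-stressor effects expressed as
        # proportional changes in some response (e.g., -0.10 = a 10% decrease).

        def additive(p1, p2):
            return p1 + p2

        def multiplicative(p1, p2):
            return (1 + p1) * (1 + p2) - 1            # effects combine proportionally

        def dominance(p1, p2):
            return p1 if abs(p1) >= abs(p2) else p2   # stronger stressor acts alone

        warming, land_use = -0.10, -0.45              # hypothetical single-stressor effects
        observed_joint = -0.42                        # hypothetical joint effect

        for name, model in [("additive", additive),
                            ("multiplicative", multiplicative),
                            ("dominance", dominance)]:
            pred = model(warming, land_use)
            print(f"{name:<15} prediction {pred:+.2f}  |error| {abs(pred - observed_joint):.2f}")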

    Genes, psychological traits and civic engagement

    Civic engagement is a classic example of a collective action problem: while civic participation improves life in the community as a whole, it is individually costly and thus there is an incentive to free ride on the actions of others. Yet, we observe significant inter-individual variation in the degree to which people are in fact civically engaged. Early accounts reconciling the theoretical prediction with empirical reality focused either on variation in individuals' material resources or their attitudes, but recent work has turned to genetic differences between individuals. We show an underlying genetic contribution to an index of civic engagement (0.41), as well as for the individual acts of engagement of volunteering for community or public service activities (0.33), regularly contributing to charitable causes (0.28) and voting in elections (0.27). There are closer genetic relationships between donating and the other two activities; volunteering and voting are not genetically correlated. Further, we show that most of the correlation between civic engagement and both positive emotionality and verbal IQ can be attributed to genes that affect both traits. These results enrich our understanding of the way in which genetic variation may influence the wide range of collective action problems that individuals face in modern community life.

    Reinforcement Learning with Partial Parametric Model Knowledge

    We adapt reinforcement learning (RL) methods for continuous control to bridge the gap between complete ignorance and perfect knowledge of the environment. Our method, Partial Knowledge Least Squares Policy Iteration (PLSPI), takes inspiration from both model-free RL and model-based control. It uses incomplete information from a partial model and retains RL's data-driven adaptation towards optimal performance. The linear quadratic regulator provides a case study; numerical experiments demonstrate the effectiveness and resulting benefits of the proposed method. Comment: IFAC World Congress 202
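
    As a rough picture of the LQR case study, the sketch below runs a plain least-squares policy iteration loop on a scalar LQR problem. It is an illustration only: the authors' PLSPI additionally folds partial model knowledge into this loop, which is not reproduced here, and all numerical values are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        A, B = 1.1, 0.5            # true scalar dynamics (unknown to the agent)
        Qc, Rc = 1.0, 0.1          # stage cost Qc*x^2 + Rc*u^2
        gamma = 0.95               # discount factor

        def features(x, u):
            # Q(x, u) is quadratic in (x, u), so monomial features suffice
            return np.array([x * x, x * u, u * u])

        K = -1.0                   # initial stabilizing feedback gain u = K*x
        for _ in range(10):
            # Collect exploratory closed-loop data under the current policy
            X, U, C, Xn = [], [], [], []
            x = 1.0
            for _ in range(200):
                u = K * x + 0.5 * rng.standard_normal()
                xn = A * x + B * u
                X.append(x); U.append(u); C.append(Qc * x * x + Rc * u * u); Xn.append(xn)
                x = xn if abs(xn) < 10 else rng.standard_normal()   # reset if diverging

            # LSTD-Q: enforce Q(x,u) = c + gamma*Q(x', K*x') in a least-squares sense
            Phi  = np.array([features(x, u) for x, u in zip(X, U)])
            Phin = np.array([features(xn, K * xn) for xn in Xn])
            theta = np.linalg.lstsq(Phi - gamma * Phin, np.array(C), rcond=None)[0]

            # Recover H = [[h_xx, h_xu], [h_xu, h_uu]] and improve the policy greedily
            h_xx, h_xu, h_uu = theta[0], theta[1] / 2.0, theta[2]
            K = -h_xu / h_uu

        print("learned feedback gain:", K)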

    Identification and characterization of Dlc1 isoforms in the mouse and study of the biological function of a single gene trapped isoform

    Background: The Dlc1 (deleted in liver cancer 1) tumour suppressor gene codes for a Rho GTPase-activating protein that is found inactivated in many tumour types. Several transcriptional isoforms have been described, but the functional significance and tissue distribution of each form is presently poorly understood. Also, differences in the number of isoforms and splice variants reported still exist between different mammalian species. In order to better understand the number and function of the different variants of the Dlc1 gene in the mouse, we have carried out a detailed analysis. Extensive 3' RACE experiments were carried out to identify all possible Dlc1 isoforms and splice variants in the mouse. In addition, we have generated a gene-trapped mouse that targets one of these isoforms in order to study its biological function. The effect of this gene trap insertion on the splicing of other isoforms has also been studied. Results: In addition to the known 6.1 and 6.2 kb transcripts of Dlc1, our study revealed the existence of a novel 7.6 kb transcriptional isoform in the mouse, which corresponds to the human 7.4 kb (KIAA1723) cDNA transcript. A gene-trapped embryonic cell line, with an insertion between exons 1 and 2 of the 6.1 kb transcriptional isoform, was used to generate a transgenic mouse. This line showed a significant reduction in the expression of the trapped isoform; however, reduced expression of the other isoforms was not seen. Mice heterozygous for the gene-trapped allele were phenotypically normal, but homozygous mutant embryos did not survive beyond 10.5 days post coitum. Dlc1(gt/gt) embryos showed defects in the brain, heart, and placental blood vessels. Cultured serum-free mouse embryo cells from Dlc1-deficient embryos had elevated RhoA activity and displayed alterations in the organization of actin filaments and focal adhesions. The Dlc1-deficient cells also exhibited increased wound closure in an in vitro scratch assay. Conclusions: The mouse has three major transcriptional isoforms of the Dlc1 gene that are differentially expressed in various tissues. A mouse carrying a gene trap insertion in exon 1 of the 6.1 kb transcript showed hypomorphic expression of the Dlc1 protein and an embryonic-lethal phenotype in the homozygous condition, which indicates that this isoform plays a major role in mouse development. The Dlc1-deficient cells showed altered cytoskeleton structure, increased RhoA activity, and increased cellular migration.

    Meta-Reinforcement Learning for the Tuning of PI Controllers: An Offline Approach

    Meta-learning is a branch of machine learning which trains neural network models to synthesize a wide variety of data in order to rapidly solve new problems. In process control, many systems have similar and well-understood dynamics, which suggests it is feasible to create a generalizable controller through meta-learning. In this work, we formulate a meta reinforcement learning (meta-RL) control strategy that can be used to tune proportional-integral controllers. Our meta-RL agent has a recurrent structure that accumulates "context" to learn a system's dynamics through a hidden state variable in closed loop. This architecture enables the agent to automatically adapt to changes in the process dynamics. In tests reported here, the meta-RL agent was trained entirely offline on first order plus time delay systems, and produced excellent results on novel systems drawn from the same distribution of process dynamics used for training. A key design element is the ability to leverage model-based information offline during training in simulated environments, while maintaining a model-free policy structure for interacting with novel processes where there is uncertainty regarding the true process dynamics. Meta-learning is a promising approach for constructing sample-efficient intelligent controllers. Comment: 23 pages; postprint
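
    The recurrent "context" idea can be pictured with a small PyTorch skeleton: a GRU consumes the recent closed-loop history and its hidden state summarizes the process, from which PI gains are read out. The layer sizes, input signals, and softplus output map below are assumptions made for illustration, not the paper's exact architecture.

        import torch
        import torch.nn as nn

        # Illustrative skeleton of a recurrent meta-RL policy for PI tuning (assumed
        # architecture, not the paper's): a GRU accumulates closed-loop context in its
        # hidden state and a linear head emits positive PI gains.

        class RecurrentPITuner(nn.Module):
            def __init__(self, hidden_size=32):
                super().__init__()
                self.gru = nn.GRU(input_size=2, hidden_size=hidden_size, batch_first=True)
                self.head = nn.Linear(hidden_size, 2)          # outputs (Kp, Ki)

            def forward(self, history, h=None):
                # history: (batch, time, 2) sequence of [setpoint error, previous control move]
                out, h = self.gru(history, h)
                gains = torch.nn.functional.softplus(self.head(out[:, -1]))  # keep gains positive
                return gains, h

        policy = RecurrentPITuner()
        dummy_history = torch.randn(1, 50, 2)                  # 50 closed-loop samples
        kp_ki, h = policy(dummy_history)
        print("suggested (Kp, Ki):", kp_ki.detach().numpy())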

    Meta-Reinforcement Learning for Adaptive Control of Second Order Systems

    Meta-learning is a branch of machine learning which aims to synthesize data from a distribution of related tasks to efficiently solve new ones. In process control, many systems have similar and well-understood dynamics, which suggests it is feasible to create a generalizable controller through meta-learning. In this work, we formulate a meta reinforcement learning (meta-RL) control strategy that takes advantage of known, offline information for training, such as a model structure. The meta-RL agent is trained over a distribution of model parameters, rather than a single model, enabling the agent to automatically adapt to changes in the process dynamics while maintaining performance. A key design element is the ability to leverage model-based information offline during training, while maintaining a model-free policy structure for interacting with new environments. Our previous work has demonstrated how this approach can be applied to the industrially-relevant problem of tuning proportional-integral controllers to control first order processes. In this work, we briefly reintroduce our methodology and demonstrate how it can be extended to proportional-integral-derivative controllers and second order systems. Comment: AdCONIP 2022. arXiv admin note: substantial text overlap with arXiv:2203.0966
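
    To make the extended setting concrete, the sketch below simulates a discretized second order process under fixed PID gains; in the meta-RL setup such processes would be drawn from a distribution during training. All parameter values are hypothetical.

        # Illustrative sketch only: fixed PID gains applied to an Euler-discretized
        # second order process G(s) = K / (tau^2 s^2 + 2*zeta*tau*s + 1).

        K, tau, zeta, dt = 1.0, 5.0, 0.7, 0.1        # process gain, time constant, damping, step
        Kp, Ki, Kd = 2.0, 0.4, 1.0                   # hypothetical PID gains
        setpoint = 1.0

        y, dy, integ, e_prev = 0.0, 0.0, 0.0, 0.0
        for _ in range(600):                          # 60 s of simulated time
            e = setpoint - y
            integ += e * dt
            deriv = (e - e_prev) / dt
            u = Kp * e + Ki * integ + Kd * deriv      # PID control move
            e_prev = e
            # explicit Euler step of tau^2*y'' + 2*zeta*tau*y' + y = K*u
            ddy = (K * u - 2 * zeta * tau * dy - y) / tau**2
            dy += ddy * dt
            y += dy * dt

        print("output after 60 s:", y)                # should settle near the setpoint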