156 research outputs found

    Chemical composition and seasonal variation of the volatile oils from the leaves of Michelia champaca L., Magnoliaceae

    The volatile oils from the leaves of Michelia champaca L., collected bimonthly over one year (four times on the fifteenth day of January, March, May, July, September, and November 2004), were subjected to GC-FID and GC-MS analysis, from which thirteen components were identified. Additionally, part of the oil obtained from the January collection was fractionated over silica gel impregnated with AgNO3, affording five of the main sesquiterpenes (β-elemene, β-caryophyllene, α-humulene, β-selinene, and α-cadinol). The data showed a significant variation in the proportions of the components, which could be associated with the climatic parameters of each collection period.

    Minimax Exploiter: A Data Efficient Approach for Competitive Self-Play

    Recent advances in Competitive Self-Play (CSP) have achieved, or even surpassed, human-level performance in complex game environments such as Dota 2 and StarCraft II using Distributed Multi-Agent Reinforcement Learning (MARL). One core component of these methods is a pool of learning agents -- consisting of the Main Agent, past versions of this agent, and Exploiter Agents -- where the Exploiter Agents learn counter-strategies to the Main Agent. A key drawback of these approaches is the large computational cost and physical time required to train the system, making them impractical to deploy in highly iterative real-life settings such as video game productions. In this paper, we propose the Minimax Exploiter, a game-theoretic approach to exploiting Main Agents that leverages knowledge of its opponents, leading to significant gains in data efficiency. We validate our approach in a variety of settings, including simple turn-based games, the Arcade Learning Environment, and For Honor, a modern video game. The Minimax Exploiter consistently outperforms strong baselines, demonstrating improved stability and data efficiency and yielding a robust CSP-MARL method that is both flexible and easy to deploy.
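
    To make the agent-pool idea concrete, the snippet below is a minimal, hypothetical sketch of an exploiter learning a counter-strategy against a frozen main agent in a toy matrix game (rock-paper-scissors). It is not the paper's Minimax Exploiter algorithm, which operates inside a distributed CSP-MARL system; every name and detail below (PAYOFF, main_policy, the bandit-style update) is an illustrative assumption.

        # Illustrative sketch only: an "exploiter learns a counter-strategy to a
        # frozen main agent" loop on rock-paper-scissors. Not the paper's method.
        import numpy as np

        # Payoff matrix for the exploiter (row player): rows = exploiter action,
        # cols = main-agent action, actions = (rock, paper, scissors).
        PAYOFF = np.array([[ 0, -1,  1],
                           [ 1,  0, -1],
                           [-1,  1,  0]], dtype=float)

        rng = np.random.default_rng(0)

        # Frozen "main agent": a fixed mixed strategy the exploiter tries to counter.
        main_policy = np.array([0.5, 0.3, 0.2])

        # Exploiter keeps action preferences (logits) and plays a softmax policy.
        logits = np.zeros(3)

        def softmax(x):
            z = np.exp(x - x.max())
            return z / z.sum()

        # Bandit-style REINFORCE update: sample a joint action, observe the payoff,
        # and nudge the chosen action's preference in proportion to the reward.
        lr = 0.1
        for step in range(5000):
            policy = softmax(logits)
            a = rng.choice(3, p=policy)          # exploiter action
            b = rng.choice(3, p=main_policy)     # frozen main-agent action
            reward = PAYOFF[a, b]
            grad = -policy                        # d log pi(a) / d logits = onehot(a) - pi
            grad[a] += 1.0
            logits += lr * reward * grad

        print("Learned counter-strategy:", np.round(softmax(logits), 3))
        # Against main_policy = [0.5, 0.3, 0.2], the best response is "paper", so
        # the learned policy should concentrate most of its mass on action 1.

    In the full self-play setting described in the abstract, the frozen opponent would instead be drawn from the pool of Main Agents and their past checkpoints, and the exploiter's return would come from full game episodes rather than a single payoff lookup.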