The development of a universal diagnostic probe system for Tokamak fusion test reactor
The Tokamak Fusion Test Reactor (TFTR), the largest such facility in the U.S., is discussed with respect to instrumentation in general and mechanisms in particular. The design philosophy and detailed implementation of a universal probe mechanism for TFTR are then described
Numerical integration of variational equations
We present and compare different numerical schemes for the integration of the
variational equations of autonomous Hamiltonian systems whose kinetic energy is
quadratic in the generalized momenta and whose potential is a function of the
generalized positions. We apply these techniques to Hamiltonian systems of
various degrees of freedom, and investigate their efficiency in accurately
reproducing well-known properties of chaos indicators like the Lyapunov
Characteristic Exponents (LCEs) and the Generalized Alignment Indices (GALIs).
We find that the best numerical performance is exhibited by the
\textit{`tangent map (TM) method'}, a scheme based on symplectic integration
techniques which proves to be optimal in speed and accuracy. According to this
method, a symplectic integrator is used to approximate the solution of
Hamilton's equations of motion by the repeated action of a symplectic map,
while the corresponding tangent map is used for the integration of the
variational equations. A simple and systematic technique to construct this
tangent map is also presented.
Comment: 27 pages, 11 figures, to appear in Phys. Rev.
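The tangent-map idea can be sketched in a few lines for a standard two-degree-of-freedom test case. The snippet below uses the Hénon-Heiles Hamiltonian with a plain leapfrog (Störmer-Verlet) map as the symplectic integrator; the step size, orbit, and integration time are illustrative choices, not the specific higher-order schemes compared in the paper:

```python
import numpy as np

# Henon-Heiles: H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3
def grad_V(q):
    x, y = q
    return np.array([x + 2.0 * x * y, y + x * x - y * y])

def hess_V(q):
    x, y = q
    return np.array([[1.0 + 2.0 * y, 2.0 * x],
                     [2.0 * x, 1.0 - 2.0 * y]])

def energy(q, p):
    x, y = q
    return 0.5 * (p @ p) + 0.5 * (q @ q) + x * x * y - y ** 3 / 3.0

def tm_step(q, p, dq, dp, dt):
    """One leapfrog (kick-drift-kick) step of the orbit together with the
    corresponding tangent map acting on a deviation vector (dq, dp)."""
    dp = dp - 0.5 * dt * hess_V(q) @ dq   # tangent of first half-kick
    p = p - 0.5 * dt * grad_V(q)          # first half-kick
    dq = dq + dt * dp                     # tangent of drift
    q = q + dt * p                        # drift
    dp = dp - 0.5 * dt * hess_V(q) @ dq   # tangent of second half-kick
    p = p - 0.5 * dt * grad_V(q)          # second half-kick
    return q, p, dq, dp

def lce_estimate(q, p, dt=0.01, n_steps=50_000):
    """Finite-time estimate of the largest Lyapunov exponent via repeated
    renormalisation of a deviation vector propagated by the tangent map."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)
    w /= np.linalg.norm(w)
    dq, dp = w[:2].copy(), w[2:].copy()
    log_sum = 0.0
    for _ in range(n_steps):
        q, p, dq, dp = tm_step(q, p, dq, dp, dt)
        norm = np.sqrt(dq @ dq + dp @ dp)
        log_sum += np.log(norm)
        dq, dp = dq / norm, dp / norm
    return log_sum / (n_steps * dt), q, p

# Illustrative orbit at the often-studied energy E = 1/8
E0 = 0.125
x0, y0 = 0.0, -0.1
V0 = 0.5 * (x0 ** 2 + y0 ** 2) + x0 ** 2 * y0 - y0 ** 3 / 3.0
q0 = np.array([x0, y0])
p0 = np.array([np.sqrt(2.0 * (E0 - V0)), 0.0])
lce, qf, pf = lce_estimate(q0.copy(), p0.copy())
```

Because the leapfrog map is symplectic, the orbit's energy error stays bounded rather than drifting, which is what makes long finite-time chaos-indicator computations reliable.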
A ground-based experimental test program to duplicate and study the spacecraft glow phenomenon
The use of a plasma device, the Advanced Concepts Torus-I, for producing atoms and molecules to study spacecraft glow mechanisms is discussed. A biased metal plate, located in the plasma edge, is used to accelerate and neutralize plasma ions, thus generating a neutral beam with a flux of approximately 5 × 10^14 cm^-2 s^-1 at the end of a drift tube. Our initial experiments are to produce a 10 eV molecular and atomic nitrogen beam directed onto material targets. Photon emission in the spectral range 2000 to 9000 Å from excited species formed on the target surface will be investigated
A Mechanistic Model on Catalyst Deactivation by Coke Formation in a CSTR Reactor
A mechanistic model of catalyst deactivation by coke formation in a continuous stirred tank reactor (CSTR) is developed in this paper. Catalyst deactivation by coke formation was treated as a surface reaction, and four reaction mechanisms representing coke formation through different routes were proposed. The resulting system of ordinary differential equations (ODEs) was solved numerically using MATLAB. The approach was validated by applying it to the skeletal isomerization of 1-pentene over ferrierite, and simulation results were compared qualitatively to those
obtained from the literature. The simulations indicate that coke formation is an extremely rapid process, with fast formation of coke components on the strongest acid sites leading to final coke. Coke deposition is slower at higher residence times, resulting in more stable product formation and weaker deactivation. These results show that the developed model successfully captures the most essential features of catalyst deactivation by coke formation and is in agreement with the findings in the literature. Future work aims to extend the study to other reactor types, such as a plug flow reactor, and to analyse the reaction system's
sensitivity to variables such as temperature and pressure
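The qualitative behaviour described above can be reproduced with a deliberately simplified toy model: a single first-order reaction in a CSTR whose catalyst activity decays in proportion to coke laydown. All rate constants, the residence time tau, and the feed concentration below are hypothetical, and this single-route scheme is not one of the four mechanisms proposed in the paper:

```python
import numpy as np

# Hypothetical parameters (illustrative only)
CA_in, tau = 1.0, 10.0        # feed concentration [mol/L], residence time [s]
k, kd, kc = 0.5, 0.05, 0.02   # main reaction / deactivation / coking constants

def rhs(state):
    """ODE right-hand side: reactant CA, catalyst activity a, coke Cc."""
    CA, a, Cc = state
    return np.array([
        (CA_in - CA) / tau - k * a * CA,  # CSTR mass balance on reactant A
        -kd * CA * a,                     # activity lost as coke forms
        kc * CA * a,                      # coke accumulating on the catalyst
    ])

def rk4(state, dt, n):
    """Classical 4th-order Runge-Kutta integration of the toy ODE system."""
    traj = [state]
    for _ in range(n):
        s = traj[-1]
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        traj.append(s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

# Fresh catalyst (a = 1), feed-level reactant, no coke; integrate 200 s
traj = rk4(np.array([CA_in, 1.0, 0.0]), dt=0.1, n=2000)
CA_t, a_t, Cc_t = traj[:, 0], traj[:, 1], traj[:, 2]
```

Re-running with a larger tau lowers the quasi-steady reactant level, which in this toy model slows the deactivation term kd·CA·a, consistent with the trend the abstract reports at higher residence times.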
Evaluation of the impacts of biofortified products: a reference methodology.
The proposal presented in this study seeks to survey the preliminary impacts of the use of biofortified products. This evaluation serves as preparation of the interviewing researchers and the adopting farmers for a subsequent, full ex-post impact evaluation. In short, it aims to diagnose the first impacts that the adoption of such technologies has on economic, social and environmental aspects in the adopting localities
Evaluation of the adoption of biofortified products.
The Biofort project is an initiative of Embrapa Agroindústria de Alimentos, in partnership with several other Embrapa units and other institutions, that seeks to develop biofortified technologies and/or products. To evaluate the impact of the products developed in this project, it was necessary to devise a specific methodology that captures information from the moment of adoption. Thus, the objective of this study is to present the methodology for monitoring and evaluating the adoption of biofortified products, which will be used to gather information about this adoption process, identifying the profile of the producer interested in planting, consuming and/or selling these differentiated products, as well as the factors that led the producer to adopt and consume a biofortified product. The final product of this study is an organized and standardized methodology that meets the project's evaluation needs and can be applied in all communities targeted by the transfer of biofortified products
Interplay Between Chaotic and Regular Motion in a Time-Dependent Barred Galaxy Model
We study the distinction and quantification of chaotic and regular motion in
a time-dependent Hamiltonian barred galaxy model. Recently, a strong
correlation was found between the strength of the bar and the presence of
chaotic motion in this system, as models with relatively strong bars were shown
to exhibit stronger chaotic behavior compared to those having a weaker bar
component. Here, we attempt to further explore this connection by studying the
interplay between chaotic and regular behavior of star orbits when the
parameters of the model evolve in time. This happens for example when one
introduces linear time dependence in the mass parameters of the model to mimic,
in some general sense, the effect of self-consistent interactions of the actual
N-body problem. We observe that, in this simple time-dependent model as well,
increasing the bar's mass increases the system's chaoticity.
We propose a new way of using the Generalized Alignment Index (GALI) method as
a reliable criterion to estimate the relative fraction of chaotic vs. regular
orbits in such time-dependent potentials, which proves to be much more
efficient than the computation of Lyapunov exponents. In particular, GALI is
able to capture subtle changes in the nature of an orbit (or ensemble of
orbits) even for relatively small time intervals, which makes it ideal for
detecting dynamical transitions in time-dependent systems.
Comment: 21 pages, 9 figures (minor typos fixed), to appear in J. Phys. A: Math. Theor.
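The GALI idea is easiest to see on a toy system. The sketch below uses the Chirikov standard map rather than the barred-galaxy model of the paper; all parameter values and initial conditions are illustrative. GALI_2 is the area of the parallelogram spanned by two normalized deviation vectors: it collapses exponentially for a chaotic orbit while remaining of order unity for a regular one.

```python
import numpy as np

K_CHAOTIC, K_REGULAR = 6.0, 0.5   # illustrative kick strengths

def step(theta, p, v, K):
    """One iteration of the standard map plus its tangent map acting on
    deviation vectors v (rows are vectors in (theta, p) space)."""
    jac = np.array([[1.0 + K * np.cos(theta), 1.0],
                    [K * np.cos(theta), 1.0]])
    p_new = p + K * np.sin(theta)
    theta_new = (theta + p_new) % (2.0 * np.pi)
    return theta_new, p_new, v @ jac.T

def gali2(theta, p, K, n_steps=30):
    v = np.eye(2)                  # two initially orthonormal deviation vectors
    for _ in range(n_steps):
        theta, p, v = step(theta, p, v, K)
        # Renormalise each row; this changes only lengths, not directions,
        # so the GALI value is unaffected and overflow is avoided.
        v /= np.linalg.norm(v, axis=1, keepdims=True)
    # GALI_2 = product of singular values = area spanned by the unit vectors
    return np.prod(np.linalg.svd(v, compute_uv=False))

g_chaotic = gali2(0.5, 0.1, K_CHAOTIC)          # strongly chaotic regime
g_regular = gali2(3.0, 0.0, K_REGULAR)          # orbit near a stable fixed point
```

Note how few iterations are needed before the two cases separate by many orders of magnitude; this rapid separation is the property the abstract exploits to detect transitions over relatively small time intervals.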
Experimental perspectives for systems based on long-range interactions
The possibility of observing phenomena peculiar to long-range interactions,
and more specifically in the so-called Quasi-Stationary State (QSS) regime is
investigated within the framework of two devices, namely the Free-Electron
Laser (FEL) and the Collective Atomic Recoil Laser (CARL). The QSS dynamics has
been mostly studied using the Hamiltonian Mean-Field (HMF) toy model,
demonstrating in particular the presence of first versus second order phase
transitions from magnetized to unmagnetized regimes in the case of HMF. Here,
we give evidence of the strong connections between the HMF model and the
dynamics of the two mentioned devices, and we discuss the perspectives to
observe some specific QSS features experimentally. In particular, a dynamical
analog of the phase transition is present in the FEL and in the CARL in its
conservative regime. Regarding the dissipative CARL, a formal link is
established with the HMF model. For both FEL and CARL, calculations are
performed with reference to existing experimental devices, namely the
FERMI@Elettra FEL under construction at Sincrotrone Trieste (Italy) and the
CARL system at LENS in Florence (Italy)
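The HMF model mentioned above is simple enough to simulate directly. The sketch below (the particle number, step size, and waterbag-style initial conditions are illustrative choices) integrates the N-rotator equations of motion with a leapfrog scheme and tracks the magnetization M, the order parameter whose behaviour distinguishes the magnetized and unmagnetized regimes:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000

def magnetization(theta):
    """Mean-field magnetization components (Mx, My)."""
    return np.cos(theta).mean(), np.sin(theta).mean()

def force(theta):
    """dp_i/dt = -Mx*sin(theta_i) + My*cos(theta_i) for the HMF model."""
    mx, my = magnetization(theta)
    return -mx * np.sin(theta) + my * np.cos(theta)

def energy_per_particle(theta, p):
    mx, my = magnetization(theta)
    return 0.5 * (p ** 2).mean() + 0.5 * (1.0 - mx * mx - my * my)

# Waterbag-style start: clustered angles, uniform momenta (illustrative)
theta = rng.uniform(-0.1, 0.1, N)
p = rng.uniform(-1.0, 1.0, N)
p -= p.mean()                        # zero total momentum

dt, n_steps = 0.05, 2000
E0 = energy_per_particle(theta, p)
M_hist = []
for _ in range(n_steps):
    p += 0.5 * dt * force(theta)     # leapfrog kick
    theta += dt * p                  # leapfrog drift
    p += 0.5 * dt * force(theta)     # leapfrog kick
    mx, my = magnetization(theta)
    M_hist.append(np.hypot(mx, my))
E1 = energy_per_particle(theta, p)
```

In QSS studies the quantity of interest is precisely M(t) for ensembles of such initial conditions; this sketch only shows the mechanics of the simulation, not any particular QSS result.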
Research Cloud Data Communities
Big Data, big science, the data deluge: these are topics we hear about more and more in our
research pursuits. Then, through media hype, comes cloud computing, the saviour that is going to
resolve our Big Data issues. However, it is difficult to pinpoint exactly what researchers can actually
do with data and with clouds, how exactly they can solve their Big Data problems, and how they
can get help in using these relatively new tools and infrastructure.
Since the beginning of 2012, the NeCTAR Research Cloud has been running at the University of
Melbourne, attracting over 1,650 users from around the country. This has not only provided an
unprecedented opportunity for researchers to employ clouds in their research, but it has also given us
an opportunity to clearly understand how researchers can more easily solve their Big Data problems.
The cloud is now used daily, from running web servers and blog sites, through to hosting virtual
laboratories that can automatically create hundreds of servers depending on research demand. Of
course, it has also helped us understand that infrastructure isn’t everything. There are many other
skillsets needed to help researchers from the multitude of disciplines use the cloud effectively.
How can we solve Big Data problems on cloud infrastructure? One of the key aspects is
communities based on research platforms: research is built on collaboration, connection and
community, and researchers employ platforms daily, whether bio-imaging platforms,
computational platforms or cloud platforms (like Dropbox).
There are some important features which enabled this to work. Firstly, the barriers to collaboration
are lowered, allowing communities to access infrastructure that can be instantly built to be completely
open, through to completely closed, all managed securely through (nationally) standardised
interfaces. Secondly, it is free and easy to build servers and infrastructure, but it is also cheap to fail,
allowing for experimentation not only at a code level, but at a server or infrastructure level as well.
Thirdly, this (virtual) infrastructure can be shared with collaborators, moving the practice of
collaboration from sharing papers and code to sharing servers, pre-configured and ready to go. And
finally, the underlying infrastructure is built with Big Data in mind, co-located with major data
storage infrastructure and high-performance computers, and interconnected with high-speed networks
nationally to research instruments.
The research cloud is fundamentally new in that it easily allows communities of researchers, often
connected by common geography (research precincts), discipline or long-term established
collaborations, to build open, collaborative platforms. These open, sharable, and repeatable platforms
encourage coordinated use and development, evolving to common community-oriented methods for
Big Data access and data manipulation.
In this paper we discuss in detail critical ingredients in successfully establishing these communities,
as well as some outcomes as a result of these communities and their collaboration enabling platforms.
We consider astronomy as an exemplar of a research field that has already looked to the cloud as a
solution to the ensuing data tsunami