Infrared and Raman spectroscopic studies of structural variations in minerals from Apollo 11, 12, 14 and 15 samples, volume 3
Infrared and Raman vibrational spectroscopic data, yielding direct information on molecular structure, were obtained for single grains (150 microns) of minerals, basalts, and glasses isolated from Apollo 11, 12, 14, and 15 rock and dust samples, and for grains in Apollo 14 polished butt samples. From the vibrational data, specific cation substitutions were determined for the predominant silicate minerals of plagioclase, pyroxene, and olivine. Unique spectral variations for grains of K-feldspar, orthopyroxene, pyroxenoid, and ilmenite were observed to exceed the ranges of terrestrial samples, and these variations may be correlatable with formation histories. Alpha-quartz was isolated as pure single grains, in granitic grains composited with sanidine, and in unique grains that were intimately mixed with varying amounts of glass. Accessory minerals of chromite and ulvospinel were isolated as pure grains and structurally characterized from their distinctive infrared spectra. Fundamental vibrations of the SiO4 tetrahedra in silicate minerals were used to classify bulk compositions in dust sieved fractions, basalt grains, and glass particles, and to compare modal characteristics for maria, highland, and rille samples. No hydrated minerals were found in any of the samples studied, indicating anhydrous formation conditions.
Henrik Ibsen and Thomas Hardy : a sociological comparison
Although much has been written about the plays of Henrik Ibsen and the novels of Thomas Hardy, there have been no notable comparisons of the works of the two men, perhaps because they wrote in two different media. Another possible explanation is the fact that Ibsen is universally regarded as the father of modern drama, but Hardy's status (whether he is the last Victorian novelist or the first modern one) is disputed. It is the purpose of this paper to demonstrate, by comparison, that Hardy should very definitely be classed with the modern, realistic writers.
The final chapter of this paper will draw together the ideas which may be logically concluded from the comparison of Ibsen and Hardy from a sociological point of view. This view was chosen because it provided the best way of demonstrating the modernity of Thomas Hardy, who should be considered not only as a great novelist but also as one of the leaders in the movement away from hypocrisy and toward realism in literature.
Evaluation of rate law approximations in bottom-up kinetic models of metabolism.
Background: The mechanistic description of enzyme kinetics in a dynamic model of metabolism requires specifying the numerical values of a large number of kinetic parameters. The parameterization challenge is often addressed through the use of simplifying approximations to form reaction rate laws with reduced numbers of parameters. Whether such simplified models can reproduce dynamic characteristics of the full system is an important question. Results: In this work, we compared the local transient response properties of dynamic models constructed using rate laws with varying levels of approximation. These approximate rate laws were: 1) a Michaelis-Menten rate law with measured enzyme parameters, 2) a Michaelis-Menten rate law with approximated parameters, using the convenience kinetics convention, 3) a thermodynamic rate law resulting from a metabolite saturation assumption, and 4) a pure chemical reaction mass action rate law that removes the role of the enzyme from the reaction kinetics. We utilized in vivo data for the human red blood cell to compare the effect of rate law choices against the backdrop of physiological flux and concentration differences. We found that the Michaelis-Menten rate law with measured enzyme parameters yields an excellent approximation of the full system dynamics, while other assumptions cause greater discrepancies in system dynamic behavior. However, iteratively replacing mechanistic rate laws with approximations resulted in a model that retains a high correlation with the true model behavior. Investigating this consistency, we determined that the order-of-magnitude differences among fluxes and concentrations in the network strongly influenced the network dynamics.
We further identified reaction features such as thermodynamic reversibility, high substrate concentration, and lack of allosteric regulation, which make certain reactions more suitable for rate law approximations. Conclusions: Overall, our work generally supports the use of approximate rate laws when building large-scale kinetic models, due to the key role that physiologically meaningful flux and concentration ranges play in determining network dynamics. However, we also showed that detailed mechanistic models show a clear benefit in prediction accuracy when data are available. The work here should help to provide guidance to future kinetic modeling efforts on the choice of rate law and parameterization approaches.
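To illustrate how the approximation levels compared in this abstract differ, here is a minimal sketch contrasting a Michaelis-Menten rate law with its mass-action counterpart. This is not the authors' red-blood-cell model; the parameter values and function names are invented for illustration.

```python
def michaelis_menten(s, vmax=1.0, km=0.5):
    # Enzyme-mediated rate: saturates at vmax when substrate s >> km
    return vmax * s / (km + s)

def mass_action(s, k=2.0):
    # Pure chemical mass-action rate: linear in substrate, no saturation.
    # k is chosen as vmax/km so the two laws agree at low substrate levels.
    return k * s

for s in (0.01, 0.5, 5.0):
    print(s, michaelis_menten(s), mass_action(s))
```

At low substrate concentration the two rates nearly coincide, while at high concentration the mass-action rate keeps growing where the enzymatic rate saturates, which is one reason reaction features such as substrate concentration determine how safe an approximation is.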
Structural identifiability of dynamic systems biology models
22 pages, 5 figures, 2 tables. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software.
The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas. AFV acknowledges funding from the Galician government (Xunta de Galiza, Consellería de Cultura, Educación e Ordenación Universitaria, http://www.edu.xunta.es/portal/taxonomy/term/206) through the I2C postdoctoral program, fellowship ED481B2014/133-0. AB and AFV were partially supported by grant DPI2013-47100-C2-2-P from the Spanish Ministry of Economy and Competitiveness (MINECO). AFV acknowledges additional funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 686282 (CanPathPro). AP was partially supported through EPSRC projects EP/M002454/1 and EP/J012041/1. Peer reviewed.
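The Lie-derivative rank test that underlies this observability-based approach can be sketched on a toy model (not one from the paper): in x' = -(a+b)x with output y = x, only the sum a+b is identifiable, which the rank of the observability-identifiability matrix reveals. Variable names and the model itself are ours, for illustration only.

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
# Augmented state: model state plus unknown parameters (constant dynamics)
states = [x, a, b]
f = [-(a + b) * x, 0, 0]   # x' = -(a+b)x; parameters do not change in time
h = x                      # measured output y = x

# Successive Lie derivatives of the output along the vector field f
lies = [h]
for _ in range(len(states) - 1):
    lies.append(sum(sp.diff(lies[-1], s) * fs for s, fs in zip(states, f)))

# Observability-identifiability matrix: Jacobian of the Lie derivatives
O = sp.Matrix([[sp.diff(L, s) for s in states] for L in lies])
rank = O.rank()
print(rank, len(states))  # rank 2 < 3: a and b are not separately identifiable
```

A full rank would mean every state and parameter is structurally identifiable; here the columns for a and b are identical, so only their sum can be recovered from the output, matching the kind of unidentifiability the paper's method detects and repairs.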
Prime Focus Spectrograph - Subaru's future -
The Prime Focus Spectrograph (PFS) of the Subaru Measurement of Images and Redshifts (SuMIRe) project has been endorsed by the Japanese community as one of the main future instruments of the Subaru 8.2-meter telescope at Mauna Kea, Hawaii. This optical/near-infrared multi-fiber spectrograph targets cosmology with galaxy surveys, Galactic archaeology, and studies of galaxy/AGN evolution. Taking advantage of Subaru's wide field of view, which is further extended by the recently completed Wide Field Corrector, PFS will enable multi-fiber spectroscopy of 2400 targets within a 1.3 degree diameter field. A microlens attached at each fiber entrance transforms the F-ratio into a larger one, easing the spectrograph design. Fibers are accurately placed onto target positions by positioners, each consisting of two stages of piezo-electric rotary motors, through iterations using back-illuminated fiber position measurements with a wide-field metrology camera. The fibers then carry light to a set of four identical fast-Schmidt spectrographs with three color arms each: the wavelength range from 0.38 μm to 1.3 μm will be observed simultaneously with an average resolving power of 3000. Before and during the era of extremely large telescopes, PFS will provide the unique capability of obtaining spectra of 2400 cosmological/astrophysical targets simultaneously with an 8-10 meter class telescope. The PFS collaboration, led by IPMU, consists of USP/LNA in Brazil, Caltech/JPL, Princeton, and JHU in the USA, LAM in France, ASIAA in Taiwan, and NAOJ/Subaru. Comment: 13 pages, 11 figures, submitted to "Ground-based and Airborne Instrumentation for Astronomy IV", Ian S. McLean, Suzanne K. Ramsay, Hideki Takami, Editors, Proc. SPIE 8446 (2012)
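The quoted average resolving power R = λ/Δλ = 3000 directly fixes the size of a spectral resolution element across the instrument's wavelength range. A small illustrative calculation (values taken from the abstract; the code itself is ours):

```python
# Resolution element (delta-lambda) implied by an average resolving power
# R = lambda / delta-lambda of 3000, at the two ends of the PFS range.
R = 3000.0
dlam_nm = {lam_um: lam_um * 1e3 / R for lam_um in (0.38, 1.3)}
print(dlam_nm)  # nm per resolution element: ~0.13 nm (blue) to ~0.43 nm (NIR)
```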
Measurement of the B0 anti-B0 oscillation frequency using l- D*+ pairs and lepton flavor tags
The oscillation frequency Delta-md of B0 anti-B0 mixing is measured using the partially reconstructed semileptonic decay anti-B0 -> l- nubar D*+ X. The data sample was collected with the CDF detector at the Fermilab Tevatron collider during 1992-1995 by triggering on the existence of two lepton candidates in an event, and corresponds to about 110 pb-1 of pbar p collisions at sqrt(s) = 1.8 TeV. We estimate the proper decay time of the anti-B0 meson from the measured decay length and reconstructed momentum of the l- D*+ system. The charge of the lepton in the final state identifies the flavor of the anti-B0 meson at its decay. The second lepton in the event is used to infer the flavor of the anti-B0 meson at production. We measure the oscillation frequency to be Delta-md = 0.516 +/- 0.099 +0.029 -0.035 ps-1, where the first uncertainty is statistical and the second is systematic. Comment: 30 pages, 7 figures. Submitted to Physical Review
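For intuition about what such a fit extracts, the standard time-dependent mixing probability for a neutral B meson oscillating at frequency Delta-md can be sketched as below. This is a simplified textbook formula, neglecting tagging dilution, detector resolution, and CP violation; the function name is ours.

```python
import math

dm = 0.516  # ps^-1, the measured oscillation frequency Delta-md

def mixed_fraction(t_ps):
    # Probability that a B0 produced at proper time t = 0 decays with the
    # opposite flavor at time t, for oscillation frequency dm (idealized).
    return 0.5 * (1.0 - math.cos(dm * t_ps))

print(mixed_fraction(0.0))           # no mixing at production
print(mixed_fraction(math.pi / dm))  # maximal mixing after half a period
```

The flavor at decay (from the lepton charge in l- D*+) versus the flavor at production (from the second lepton) classifies each event as mixed or unmixed, and fitting this time dependence yields Delta-md.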
Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector
The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, where secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method, where the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets, and the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section. However, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta. Comment: 10 pages plus author list (21 pages total), 8 figures, 1 table, final version published in European Physical Journal
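The dijet invariant mass m_jj used in such measurements follows, for effectively massless jets, from the jets' transverse momenta and their rapidity and azimuthal separations. A small illustrative helper (the function name is ours, not part of the ATLAS software):

```python
import math

def dijet_mass(pt1, pt2, dy, dphi):
    # Invariant mass of two massless jets, in the same units as pT,
    # from transverse momenta and rapidity/azimuthal separations:
    # m_jj^2 = 2 pT1 pT2 (cosh(dy) - cos(dphi))
    return math.sqrt(2.0 * pt1 * pt2 * (math.cosh(dy) - math.cos(dphi)))

# Back-to-back (dphi = pi) equal-pT jets with no rapidity gap: m_jj = 2 pT
print(dijet_mass(100.0, 100.0, 0.0, math.pi))  # 200.0 GeV
```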
Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of the system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim.
Performance tests show that significant memory savings can be achieved using the new approach, and a monetary cost analysis provides a practical measure of its utility. © 2014 Hogg et al.
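As a sketch of the network-based stochastic simulation mentioned above, here is a minimal direct-method Gillespie SSA applied to a single dimerization reaction. This is a generic illustration of the algorithm, not the BioNetGen/NFsim implementation, and the function and parameter names are ours.

```python
import math
import random

def gillespie(x, propensities, stoich, t_end, seed=1):
    # Direct-method SSA: x maps species -> counts, propensities(x) returns a
    # rate for each reaction, stoich maps each reaction to count changes.
    rng = random.Random(seed)
    t = 0.0
    while True:
        a = propensities(x)
        a0 = sum(a)
        if a0 == 0.0:           # no reaction can fire: system is exhausted
            return t, x
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if t >= t_end:
            return t_end, x
        r = rng.random() * a0   # pick a reaction proportionally to its rate
        for j, aj in enumerate(a):
            r -= aj
            if r < 0.0:
                break
        for species, delta in stoich[j].items():
            x[species] += delta

# Irreversible dimerization A + A -> B with rate constant k
k = 0.01
t, x = gillespie({'A': 100, 'B': 0},
                 lambda x: [k * x['A'] * (x['A'] - 1) / 2],
                 [{'A': -2, 'B': +1}],
                 t_end=10000.0)
print(x['A'] + 2 * x['B'])  # mass is conserved: always 100
```

The cost of this network-based method grows with the number of enumerated reactions, which is exactly the limitation that network-free and hybrid particle/population methods are designed to avoid.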
Genome-scale modeling of the protein secretory machinery in yeast
The protein secretory machinery in Eukarya is involved in post-translational modifications (PTMs) and sorting of secretory and many transmembrane proteins. While the secretory machinery has been well studied using classic reductionist approaches, a holistic view of its complex nature is lacking. Here, we present the first genome-scale model of the yeast secretory machinery, which captures the knowledge generated through more than 50 years of research. The model is based on the concept of a Protein Specific Information Matrix (PSIM, characterized by seven PTM features). An algorithm was developed which mimics the secretory machinery and assigns each secretory protein to a particular secretory class that determines the set of PTMs and transport steps specific to that protein. Protein abundances were integrated with the model in order to gain a system-level estimation of the metabolic demands associated with the processing of each specific protein, as well as a quantitative estimation of the activity of each component of the secretory machinery.