
    Genome-wide SNP typing of ancient DNA: Determination of hair and eye color of Bronze Age humans from their skeletal remains.

    Objective: A genome-wide high-throughput single nucleotide polymorphism (SNP) typing method was tested for its applicability to ancient and degraded DNA. The results were compared to mini-sequencing data obtained through single base extension (SBE) typing. The SNPs chosen for the study allow the determination of human hair and eye color. Material and methods: The DNA samples were extracted from the skeletal remains of 59 human individuals dating back to the Late Bronze Age. The 3,000-year-old bones had been discovered in the Lichtenstein Cave in Lower Saxony, Germany. The simultaneous typing of 24 SNPs for each of the ancient DNA samples was carried out using the 192.24 Dynamic Array™ by Fluidigm®. Results: Thirty-eight of the ancient samples (64%) yielded full and reproducible SNP genotypes allowing hair and eye color phenotyping. In 10 samples (17%) at least half of the SNPs were unambiguously determined; in 11 samples (19%) the SNP typing failed. For 23 of the 59 individuals, a comparison of the SNP typing results with genotypes from an earlier SBE typing approach was possible. The comparison confirmed full concordance of the results for 90% of the SNP typings; in the remaining 10%, allelic dropouts were identified. Discussion: The high genotyping success rate was achieved by introducing modifications to the preamplification protocol, mainly by increasing the DNA input and the amplification cycle number. The occurrence of allelic dropouts indicates that a further increase of the DNA input to the preamplification step is desirable.
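    As a toy illustration of the comparison step described in the abstract, the short Python sketch below computes a concordance rate between two sets of genotype calls and flags candidate allelic dropouts (a heterozygous call in one method appearing as only one of its alleles in the other). The SNP names, example calls, and the simple dropout heuristic are assumptions for illustration only; this is not the study's actual analysis pipeline.

    ```python
    # Illustrative sketch (not the study's pipeline): compare SNP genotype calls
    # from two typing methods and flag candidate allelic dropouts, i.e. cases
    # where a heterozygous call appears as only one of its two alleles.

    # Hypothetical example calls for a few SNPs of one individual.
    array_calls = {"rs12913832": "AG", "rs16891982": "GG", "rs1042602": "A"}
    sbe_calls   = {"rs12913832": "AG", "rs16891982": "CG", "rs1042602": "AC"}

    def compare_calls(a, b):
        concordant, dropouts, discordant = 0, 0, 0
        for snp in a.keys() & b.keys():
            g1, g2 = sorted(a[snp]), sorted(b[snp])
            if g1 == g2:
                concordant += 1
            elif set(g1) < set(g2) or set(g2) < set(g1):
                dropouts += 1      # one allele of a heterozygote is missing
            else:
                discordant += 1
        total = concordant + dropouts + discordant
        return concordant / total, dropouts, discordant

    rate, dropouts, discordant = compare_calls(array_calls, sbe_calls)
    print(f"concordance: {rate:.0%}, allelic dropouts: {dropouts}, other: {discordant}")
    ```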

    Cosmic Mass Functions from Gaussian Stochastic Diffusion Processes

    Gaussian stochastic diffusion processes are used to derive cosmic mass functions. To obtain analytic relations, previous studies exploited the sharp k-space filter assumption, which yields zero drift terms in the corresponding Fokker-Planck (Kolmogorov forward) equation and thus simplifies analytic treatments significantly (excursion set formalism). In the present paper, methods are described to derive, for given diffusion processes and Gaussian random fields, the corresponding mass and filter functions by solving the Kolmogorov forward and backward equations including nonzero drift terms. This formalism can also be used in cases with non-sharp k-space filters and for diffusion processes exhibiting correlations between different mass scales.
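    For orientation, the general structure of these equations can be sketched as follows (the notation is ours, not necessarily the paper's). With δ the filtered density contrast and S = σ²(m) the mass-scale variance playing the role of a time variable, the Kolmogorov forward (Fokker-Planck) equation for the probability density Π(δ, S) with drift μ and diffusion coefficient D reads

    $$ \frac{\partial \Pi}{\partial S} = -\frac{\partial}{\partial \delta}\left[\mu(\delta,S)\,\Pi\right] + \frac{1}{2}\,\frac{\partial^2}{\partial \delta^2}\left[D(\delta,S)\,\Pi\right]. $$

    The sharp k-space filter corresponds to the special case μ = 0, D = 1, for which the walks have uncorrelated increments and the standard excursion-set mass function follows from the first-crossing distribution of the collapse barrier.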

    Local covariant quantum field theory over spectral geometries

    A framework which combines ideas from Connes' noncommutative geometry, or spectral geometry, with recent ideas on generally covariant quantum field theory is proposed in the present work. A certain type of spectral geometry modelling (possibly noncommutative) globally hyperbolic spacetimes is introduced in terms of so-called globally hyperbolic spectral triples. The concept is further generalized to a category of globally hyperbolic spectral geometries whose morphisms describe the generalization of isometric embeddings. A local generally covariant quantum field theory is then introduced as a covariant functor between such a category of globally hyperbolic spectral geometries and the category of involutive algebras (or *-algebras). Thus, a local covariant quantum field theory over spectral geometries assigns quantum fields not just to a single noncommutative geometry (or noncommutative spacetime), but simultaneously to "all" spectral geometries, while respecting the covariance principle, which demands that quantum field theories over isomorphic spectral geometries should also be isomorphic. It is suggested that in a quantum theory of gravity a particular class of globally hyperbolic spectral geometries is selected through a dynamical coupling of geometry and matter compatible with the covariance principle. Comment: 21 pages, 2 figures
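    Schematically, and in notation chosen here for illustration rather than taken from the paper, such a functorial assignment can be written as

    $$ \mathscr{A}\colon \mathsf{GHSpec} \to \mathsf{Alg}, \qquad G \mapsto \mathscr{A}(G), \qquad (\psi\colon G_1 \to G_2) \mapsto \alpha_\psi\colon \mathscr{A}(G_1) \to \mathscr{A}(G_2), $$

    with covariance expressed by $\alpha_{\psi'\circ\psi} = \alpha_{\psi'}\circ\alpha_\psi$ and $\alpha_{\mathrm{id}_G} = \mathrm{id}_{\mathscr{A}(G)}$, so that isomorphic spectral geometries are assigned isomorphic algebras of quantum fields; here $\mathsf{GHSpec}$ and $\mathsf{Alg}$ are placeholder names for the category of globally hyperbolic spectral geometries and the category of involutive algebras of the abstract.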

    Dark matter with invisible light from heavy double charged leptons of almost-commutative geometry?

    A new candidate for cold dark matter arises from a novel elementary particle model: the almost-commutative (AC) geometrical framework. Two heavy leptons are added to the Standard Model, each carrying a double electric charge of opposite sign and its own lepton flavor number. The novel mathematical theory of almost-commutative geometry [1] aims to unify gauge models with gravity. In this scenario two new heavy (m_L > 100 GeV), oppositely double-charged leptons (A, C), with A of charge -2 and C of charge +2, arise with no twin quark companions. The model naturally involves a new U(1) gauge interaction, possessed only by the AC-leptons and providing a Coulomb-like attraction between them. AC-leptons possess electromagnetic as well as Z-boson interactions and, according to the charge chosen for the new U(1) gauge interaction, a new "invisible light" interaction. Their final cosmic relics are bound into "neutral" stable atoms (AC) forming the mysterious cold dark matter, in the spirit of Glashow's Sinister model. An (AC) state is reached in the early Universe along a tail of a few secondary frozen exotic components, which should now be hidden somehow in the surrounding matter. The two main secondary manifest relics are C (mostly hidden in a neutral (Cee) "anomalous helium" atom, at a 10^-8 ratio) and a corresponding "ion" A bound with an ordinary helium ion (4He); indeed the positive helium ions are able to attract and capture the free A, fixing them into a neutral relic cage that has nuclear interactions (4HeA). Comment: This paper has been merged with [astro-ph/0603187] for publication in Classical and Quantum Gravity

    A hierarchy of voids: Much ado about nothing

    We present a model for the distribution of void sizes and its evolution in the context of hierarchical scenarios of gravitational structure formation. We find that at any cosmic epoch the voids have a size distribution which is well-peaked about a characteristic void size which evolves self-similarly in time. This is in distinct contrast to the distribution of virialized halo masses, which does not have a small-scale cut-off. In our model, the fate of voids is governed by two processes. The first affects those voids which are embedded in larger underdense regions: the evolution is effectively one in which a larger void is built up by the mergers of smaller voids, analogous to how massive clusters form from the mergers of less massive progenitors. The second process is unique to voids and occurs for voids which happen to be embedded within a larger-scale overdensity: these voids are squeezed out of existence as the overdensity collapses around them. It is this second process which produces the cut-off at small scales. In the excursion set formulation of cluster abundance and evolution, solution of the cloud-in-cloud problem, i.e., counting as clusters only those objects which are not embedded in larger clusters, requires the study of random walks crossing one barrier. We show that a similar formulation of void evolution requires the study of a two-barrier problem: one barrier is required to account for voids-in-voids, and the other for voids-in-clouds. Thus, in our model, the void size distribution is a function of two parameters, one of which reflects the dynamics of void formation, and the other the formation of collapsed objects. Comment: 23 pages, 9 figures, submitted to MNRAS
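    The two-barrier first-crossing problem described above lends itself to a compact numerical illustration. The Python sketch below follows Gaussian random walks in the density contrast δ as a function of the variance S and records where each walk first crosses the void barrier, discarding walks that hit the collapse barrier first (voids-in-clouds). The barrier values are the familiar spherical-model numbers and the walk parameters are arbitrary choices; none of these are taken from the paper.

    ```python
    import numpy as np

    # Monte Carlo sketch of the two-barrier excursion-set problem: walks that
    # first cross the void barrier delta_v form voids; walks that hit the
    # collapse barrier delta_c first are "voids-in-clouds" and are discarded.
    rng = np.random.default_rng(1)
    delta_c, delta_v = 1.686, -2.81          # spherical-model barrier values (assumed)
    n_walks, n_steps, dS = 5000, 2000, 0.01  # S = sigma^2(m) plays the role of time

    first_crossings = []                     # S at which each walk first crosses delta_v
    for _ in range(n_walks):
        delta = 0.0
        for step in range(1, n_steps + 1):
            delta += rng.normal(0.0, np.sqrt(dS))  # sharp k-space filter: uncorrelated steps
            if delta >= delta_c:                   # absorbed by the collapse barrier
                break
            if delta <= delta_v:                   # void formed at this scale
                first_crossings.append(step * dS)
                break

    # The normalized histogram of first-crossing scales approximates the
    # first-crossing distribution f(S) of the two-barrier problem.
    hist, edges = np.histogram(first_crossings, bins=40, density=True)
    print(f"fraction of walks that form voids: {len(first_crossings) / n_walks:.3f}")
    ```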

    Algebraic structure of gravity in Ashtekar variables

    The BRST transformations for gravity in Ashtekar variables are obtained by using the Maurer-Cartan horizontality conditions. The BRST cohomology in Ashtekar variables is calculated with the help of an operator δ introduced by S.P. Sorella, which allows the exterior derivative to be decomposed as a BRST commutator. This BRST cohomology leads to the differential invariants for four-dimensional manifolds. Comment: 19 pages, report REF. TUW 94-1
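    For context, the central algebraic relation used in this type of construction can be sketched as follows (sign conventions differ between papers, so this is an illustration rather than the paper's exact statement). With s the nilpotent BRST operator and δ Sorella's operator of ghost number -1, one has

    $$ s^2 = 0, \qquad d = -\,[s,\delta], $$

    i.e. the exterior derivative is written as a (graded) commutator with the BRST operator. Repeated application of δ to a BRST cocycle then generates the tower of descent equations $s\,\omega^{g}_{p} + d\,\omega^{g+1}_{p-1} = 0$, from which the differential invariants are obtained.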

    The placebo effect in the motor domain is differently modulated by the external and internal focus of attention

    Among the cognitive strategies that can facilitate motor performance in sport and physical practice, a prominent role is played by the direction of the focus of attention and by the placebo effect. Consistent evidence indicates that these two cognitive functions can influence the motor outcome, although no study to date has examined them together in the motor domain. In this explorative study, we combine these approaches for the first time, applying a placebo procedure to increase force and manipulating the focus of attention with explicit verbal instructions. Sixty healthy volunteers were asked to perform abduction movements with the index finger as strongly as possible against a piston; attention could be directed either toward the movements of the finger (internal focus, IF) or toward the movements of the piston (external focus, EF). Participants were randomized into 4 groups: two groups underwent a placebo procedure (Placebo-IF and Placebo-EF), in which an inert treatment was applied to the finger with verbal information on its positive effects on force; two groups underwent a control procedure (Control-IF and Control-EF), in which the same treatment was applied with overt information about its inefficacy. The placebo groups were conditioned about the effects of the treatment with a surreptitious amplification of a visual feedback signalling the level of force. During the whole procedure, we recorded actual force, subjective variables and electromyography from the hand muscles. The Placebo-IF group had higher force levels after the procedure than before, whereas the Placebo-EF group showed a decrease in force. Electromyography showed that the Placebo-IF group increased the recruitment of muscle units without changing the firing rate. These findings show for the first time that the placebo effect in motor performance can be influenced by the subject's attentional focus, being enhanced with an internal focus of attention.

    The Effects of Mental Fatigue on Physical Performance: A Systematic Review.

    Background: Mental fatigue is a psychobiological state caused by prolonged periods of demanding cognitive activity. It has recently been suggested that mental fatigue can affect physical performance. Objective: Our objective was to evaluate the literature on impairment of physical performance due to mental fatigue and to create an overview of the potential factors underlying this effect. Methods: Two electronic databases, PubMed and Web of Science (until 28 April 2016), were searched for studies designed to test whether mental fatigue influenced performance of a physical task or influenced physiological and/or perceptual responses during the physical task. Studies using short (<30 min) self-regulatory depletion tasks were excluded from the review. Results: A total of 11 articles were included, of which six were of strong and five of moderate quality. The general finding was a decline in endurance performance (decreased time to exhaustion and self-selected power output/velocity or increased completion time) associated with a higher than normal perceived exertion. Physiological variables traditionally associated with endurance performance (heart rate, blood lactate, oxygen uptake, cardiac output, maximal aerobic capacity) were unaffected by mental fatigue. Maximal strength, power, and anaerobic work were not affected by mental fatigue. Conclusion: The duration and intensity of the physical task appear to be important factors in the decrease in physical performance due to mental fatigue. The most important factor responsible for the negative impact of mental fatigue on endurance performance is a higher perceived exertion.

    Building a transdisciplinary expert consensus on the cognitive drivers of performance under pressure: An international multi-panel Delphi study

    Introduction: The ability to perform optimally under pressure is critical across many occupations, including the military, first responders, and competitive sport. Despite recognition that such performance depends on a range of cognitive factors, how common these factors are across performance domains remains unclear. The current study sought to integrate existing knowledge in the performance field in the form of a transdisciplinary expert consensus on the cognitive mechanisms that underlie performance under pressure. Methods: International experts were recruited from four performance domains: (i) Defense; (ii) Competitive Sport; (iii) Civilian High-stakes; and (iv) Performance Neuroscience. Experts rated constructs from the Research Domain Criteria (RDoC) framework (and several expert-suggested constructs) across successive rounds, until all constructs reached consensus for inclusion or were eliminated. Finally, included constructs were ranked for their relative importance. Results: Sixty-eight experts completed the first Delphi round, with 94% of experts retained by the end of the Delphi process. The following 10 constructs reached consensus across all four panels (in order of overall ranking): (1) Attention; (2) Cognitive Control – Performance Monitoring; (3) Arousal and Regulatory Systems – Arousal; (4) Cognitive Control – Goal Selection, Updating, Representation, and Maintenance; (5) Cognitive Control – Response Selection and Inhibition/Suppression; (6) Working memory – Flexible Updating; (7) Working memory – Active Maintenance; (8) Perception and Understanding of Self – Self-knowledge; (9) Working memory – Interference Control; and (10) Expert-suggested – Shifting. Discussion: Our results identify a set of transdisciplinary neuroscience-informed constructs, validated through expert consensus. This expert consensus is critical to standardizing cognitive assessment and informing mechanism-targeted interventions in the broader field of human performance optimization.