Surface-mediated attraction between colloids
We investigate the equilibrium properties of a colloidal solution in contact with a soft interface. As a result of symmetry breaking, surface effects generally prevail in confined colloidal systems. In this Letter, particular emphasis is given to surface fluctuations and their consequences for the local (re)organization of the suspension. It is shown that particles experience a significant effective interaction in the vicinity of the interface. This potential of mean force is always attractive, with a range controlled by the surface correlation length. We suggest that, under some circumstances, surface-induced attraction may have a strong influence on the local particle distribution.
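As an illustrative sketch only (the functional form and symbols below are assumed, not taken from the Letter), an attraction whose range is set by the surface correlation length can be written as

```latex
% Illustrative surface-mediated potential of mean force (assumed form):
% z is the particle's distance from the interface, \xi the surface correlation
% length, and A > 0 the coupling strength, so the interaction is always attractive.
U(z) \;\simeq\; -\,A\,e^{-z/\xi}
```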
Oscillatory Bursting as a Mechanism for Temporal Coupling and Information Coding
Copyright © 2020 Tal, Neymotin, Bickel, Lakatos and Schroeder. Even the simplest cognitive processes involve interactions between cortical regions. To study these processes, we usually rely on averaging across several repetitions of a task or across long segments of data to reach a statistically valid conclusion. Neuronal oscillations reflect synchronized excitability fluctuations in ensembles of neurons and can be observed in electrophysiological recordings in the presence or absence of an external stimulus. Oscillatory brain activity has traditionally been viewed as a sustained increase in power at specific frequency bands. However, this perspective has been challenged in recent years by the notion that oscillations may arise as transient burst-like events that occur in individual trials and may only appear as sustained activity when multiple trials are averaged together. In this review, we examine the idea that oscillatory activity can manifest as a transient burst as well as a sustained increase in power. We discuss the technical challenges involved in the detection and characterization of transient events at the single-trial level, the mechanisms that might generate them, and the features that can be extracted from these events to study single-trial dynamics of neuronal ensemble activity.
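The single-trial detection problem described above is commonly approached by thresholding the band-limited amplitude envelope. A minimal sketch, assuming a band, threshold and minimum duration that are purely illustrative (none of these values come from the review):

```python
# Minimal sketch of single-trial burst detection: band-pass filter, take the
# analytic amplitude, and mark supra-threshold epochs. Band, threshold and
# minimum duration are illustrative assumptions, not values from the review.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_bursts(signal, fs, band=(15.0, 29.0), thresh_sd=2.0, min_cycles=3):
    """Return (start, stop) sample indices of candidate burst events."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, signal)))
    above = envelope > envelope.mean() + thresh_sd * envelope.std()

    # Group contiguous supra-threshold samples into events.
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    stops = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        stops = np.r_[stops, above.size]

    # Keep only events lasting at least `min_cycles` cycles of the band centre.
    min_samples = int(min_cycles * fs / np.mean(band))
    return [(s, e) for s, e in zip(starts, stops) if e - s >= min_samples]
```

The per-event features the review refers to (rate, duration, amplitude, spectral content) can then be computed from each (start, stop) pair instead of from trial-averaged power.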
Triangulating Abuse Liability Assessment for Flavoured Cigar Products Using Physiological, Behavioural Economic and Subjective Assessments: A Within-subjects Clinical Laboratory Protocol
Introduction: In the USA, Food and Drug Administration regulations prohibit the sale of flavoured cigarettes, with menthol being the exception. However, the manufacture, advertisement and sale of flavoured cigar products are permitted. Such flavourings influence positive perceptions of tobacco products and are linked to increased use. Flavourings may mask the taste of tobacco and enhance smoke inhalation, influencing toxicant exposure and abuse liability among novice tobacco users. Using clinical laboratory methods, this study investigates how flavour availability affects measures of abuse liability in young adult cigarette smokers. The specific aims are to evaluate the effect of cigar flavours on nicotine exposure, and on behavioural and subjective measures of abuse liability.
Methods and analyses: Participants (projected n=25) are healthy smokers of five or more cigarettes per day over the past 3 months, 18–25 years old, naive to cigar use (lifetime use of 50 or fewer cigar products and no more than 10 cigars smoked in the past 30 days) and without a desire to quit cigarette smoking in the next 30 days. Participants complete five laboratory sessions in a Latin square design with either their own brand cigarette or a session-specific Black & Mild cigar differing in flavour (apple, cream, original and wine). Participants are single-blinded to cigar flavours. Each session consists of two 10-puff smoking bouts (30 s interpuff interval) separated by 1 hour. Primary outcomes include saliva nicotine concentration, behavioural economic task performance and responses to various questionnaire items assessing subjective effects predictive of abuse liability. Differences in outcomes across the own brand cigarette and flavoured cigar conditions will be tested using linear mixed models.
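A minimal sketch of the planned mixed-model analysis for one primary outcome, assuming a tidy sessions table with illustrative column names (participant_id, condition, saliva_nicotine) that are not taken from the protocol:

```python
# Hedged sketch: linear mixed model with a random intercept per participant,
# comparing a primary outcome across the five product conditions of the
# within-subjects (Latin square) design. Column names and the CSV file are
# illustrative assumptions, not the study's actual variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sessions.csv")  # one row per participant x session
df["condition"] = pd.Categorical(
    df["condition"],
    categories=["own_brand", "apple", "cream", "original", "wine"],
)

# Random intercept per participant accounts for repeated measures.
model = smf.mixedlm("saliva_nicotine ~ C(condition)",
                    data=df, groups=df["participant_id"])
print(model.fit().summary())
```

With own brand as the reference level, the fixed-effect coefficients estimate each flavoured cigar's difference from the participant's usual cigarette.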
Human immunodeficiency virus rebound after suppression to < 400 copies/mL during initial highly active antiretroviral therapy regimens, according to prior nucleoside experience and duration of suppression
This study evaluated 1433 human immunodeficiency virus (HIV)-infected patients starting highly active antiretroviral therapy (HAART) who achieved an HIV load of <400 copies/mL by 24 weeks of therapy; 409 (28%) had prior nucleoside experience. Three hundred seven patients experienced virus rebound during a total of 2773.3 person-years of follow-up. There was a higher rate of virus rebound among the patients with pre-HAART nucleoside experience (relative hazard [RH], 2.86; 95% confidence interval, 2.22-3.84; P < .0001) and a decreasing rate of virus rebound with increasing duration of virus suppression (i.e., time since achieving a virus load of <400 HIV RNA copies/mL) among both the nucleoside-experienced and the nucleoside-naive patients (P < .0001), but the difference between the groups persisted into the third year of follow-up (P = .0007). Even patients who had experienced <2 months of nucleoside therapy before beginning HAART had an increased risk of virus rebound (RH, 1.95; P = .009). It appears that even a short period of pre-HAART nucleoside therapy is sufficient to confer a disadvantage, in terms of risk of virus rebound, that persists for several years.
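A minimal sketch of the kind of proportional-hazards analysis that yields such a relative hazard, assuming an illustrative data layout (the column names and file are not from the study):

```python
# Hedged sketch: Cox proportional-hazards fit giving a relative hazard (RH)
# for prior nucleoside experience, adjusted for suppression time already
# accrued at entry. Column names and the CSV file are illustrative assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("rebound.csv")
# Assumed columns: years_to_rebound, rebound (1 = rebound observed, 0 = censored),
# prior_nucleoside (0/1), months_suppressed_at_entry.
cph = CoxPHFitter()
cph.fit(df[["years_to_rebound", "rebound", "prior_nucleoside",
            "months_suppressed_at_entry"]],
        duration_col="years_to_rebound", event_col="rebound")
cph.print_summary()  # exp(coef) for prior_nucleoside is the relative hazard
```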
Ablation debris control by means of closed thick film filtered water immersion
The performance of laser-ablation-generated debris control by means of open immersion techniques has been shown to be limited by flow surface ripple effects on the beam and by loss of ablation plume pressure through splashing of the immersion fluid. To eliminate these issues, a closed technique has been developed which ensures a controlled geometry for both optical interfaces of the flowing liquid film. This prevents splashing, ensures repeatable machining conditions and allows the liquid flow velocity to be controlled. To investigate the performance benefits of this closed immersion technique, bisphenol A polycarbonate samples were machined using filtered water at a number of flow velocities. The results demonstrate the efficacy of the closed immersion technique: a 93% decrease in debris is produced when machining under closed filtered water immersion; the average debris particle size becomes larger, with an equal proportion of small and medium sized debris being produced when laser machining under closed flowing filtered water immersion; and large debris is displaced further by a given flow velocity than smaller debris, showing that the action of flow turbulence in the duct has more impact on smaller debris. Low flow velocities were found to be less effective than high flow velocities at controlling where laser-ablation-generated debris is deposited, but excessive flow velocities resulted in turbulence-driven deposition. This work is of interest to the laser micromachining community, may aid the manufacture of 2.5D laser-etched patterns covering large-area wafers, and could be applied to a range of wavelengths and laser types.
Data-driven efficient score tests for deconvolution problems
We consider testing statistical hypotheses about densities of signals in deconvolution models. A new approach to this problem is proposed. We construct score tests for deconvolution with a known noise density and efficient score tests for the case of an unknown noise density. The tests are combined with model selection rules that choose reasonable model dimensions automatically from the data. Consistency of the tests is proved.
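As a hedged sketch of the generic setup (the notation below is assumed for illustration, not quoted from the paper), the observations are signal plus independent noise, hypotheses concern the signal density, and the score statistic is built over a data-selected number of smooth directions:

```latex
% Generic deconvolution model (illustrative notation): the observed density is
% the convolution of the signal density f with the noise density h.
Y_i = X_i + \varepsilon_i, \qquad i = 1,\dots,n, \qquad g = f * h, \qquad H_0 : f = f_0 .
% Smooth alternatives and a data-driven score statistic: \hat\ell_j are
% (efficient) score functions for the convolved model and the dimension
% \hat k is chosen automatically by a model selection rule.
f_\theta(x) \;\propto\; f_0(x)\,\exp\Big\{\textstyle\sum_{j=1}^{k}\theta_j\,\phi_j(x)\Big\},
\qquad
T_{\hat k} \;=\; \sum_{j=1}^{\hat k}\Big(\tfrac{1}{\sqrt{n}}\sum_{i=1}^{n}\hat\ell_j(Y_i)\Big)^{2}.
```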
Validation of differential gene expression algorithms: Application comparing fold-change estimation to hypothesis testing
Background: Sustained research on the problem of determining which genes are differentially expressed on the basis of microarray data has yielded a plethora of statistical algorithms, each justified by theory, simulation, or ad hoc validation and yet differing in practical results from equally justified algorithms. Recently, a concordance method that measures agreement among gene lists has been introduced to assess various aspects of differential gene expression detection. This method has the advantage of basing its assessment solely on the results of real data analyses, but as it requires examining gene lists of given sizes, it may be unstable.

Results: Two methodologies for assessing predictive error are described: a cross-validation method and a posterior predictive method. As a nonparametric method of estimating prediction error from observed expression levels, cross-validation provides an empirical approach to assessing algorithms for detecting differential gene expression that is fully justified for large numbers of biological replicates. Because it leverages the knowledge that only a small portion of genes are differentially expressed, the posterior predictive method is expected to provide more reliable estimates of algorithm performance, allaying concerns about limited biological replication. In practice, the posterior predictive method can assess when its approximations are valid and when they are inaccurate. Under conditions in which its approximations are valid, it corroborates the results of cross-validation. Both comparison methodologies are applicable to both single-channel and dual-channel microarrays. For the data sets considered, estimating prediction error by cross-validation demonstrates that empirical Bayes methods based on hierarchical models tend to outperform algorithms based on selecting genes by their fold changes or by non-hierarchical model-selection criteria. (The latter two approaches have comparable performance.) The posterior predictive assessment corroborates these findings.

Conclusions: Algorithms for detecting differential gene expression may be compared by estimating each algorithm's error in predicting expression ratios, whether such ratios are defined across microarray channels or between two independent groups. According to two distinct estimators of prediction error, algorithms using hierarchical models outperform the other algorithms of the study. The fact that fold-change shrinkage performed as well as conventional model selection criteria calls for investigating algorithms that combine the strengths of significance testing and fold-change estimation.
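A minimal sketch of the cross-validation idea described above, with a toy fold-change-shrinkage "algorithm" standing in for the methods compared in the paper (shrinkage strength and data shapes are illustrative assumptions):

```python
# Hedged sketch: estimate an algorithm's prediction error by using one subset
# of biological replicates to estimate each gene's log expression ratio and
# scoring that estimate against the ratio observed in held-out replicates.
import numpy as np

def cv_prediction_error(log_ratios, n_splits=10, shrink=0.5, seed=None):
    """log_ratios: genes x replicates array of per-replicate log expression ratios."""
    rng = np.random.default_rng(seed)
    n_genes, n_reps = log_ratios.shape
    errors = []
    for _ in range(n_splits):
        cols = rng.permutation(n_reps)
        train, test = cols[: n_reps // 2], cols[n_reps // 2 :]
        # Toy "algorithm": shrink the training-set fold change toward zero.
        estimate = shrink * log_ratios[:, train].mean(axis=1)
        observed = log_ratios[:, test].mean(axis=1)
        errors.append(np.mean((estimate - observed) ** 2))
    return float(np.mean(errors))
```

Running this for several candidate estimators (in place of the toy shrinkage) ranks them by the same criterion the paper uses: error in predicting expression ratios in data not used for estimation.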
Reconstruction Mechanism of FCC Transition-Metal (001) Surfaces
The reconstruction mechanism of (001) fcc transition-metal surfaces is investigated using a full-potential all-electron electronic structure method within density-functional theory. Total-energy supercell calculations confirm the experimental finding that a close-packed quasi-hexagonal overlayer reconstruction is possible for the late 5d metals Ir, Pt, and Au, while it is disfavoured in the isovalent 4d metals (Rh, Pd, Ag). The reconstructive behaviour is driven by the tensile surface stress of the unreconstructed surfaces; the stress is significantly larger in the 5d metals than in the 4d ones, and only in the former case does it overcome the substrate resistance to the required geometric rearrangement. It is shown that the surface stress for these systems is due to charge depletion from the surface layer, and that the cause of the 4th-to-5th row stress difference is the importance of relativistic effects in the 5d series.

Comment: RevTeX 3.0, 12 pages, 1 PostScript figure available upon request.
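As a hedged reminder of the bookkeeping that typically underlies such total-energy supercell calculations (standard textbook expressions, with notation assumed rather than taken from the paper):

```latex
% Surface energy from a symmetric N-layer slab with two free surfaces of area A:
\gamma \;=\; \frac{E_{\text{slab}}(N) - N\,E_{\text{bulk}}}{2A},
% and the surface stress via the Shuttleworth relation, with in-plane strain \varepsilon_{ij}:
\sigma_{ij} \;=\; \gamma\,\delta_{ij} + \frac{\partial \gamma}{\partial \varepsilon_{ij}} .
% A quasi-hexagonal overlayer reconstruction is favoured when the energy gained by
% relieving a large tensile stress outweighs the cost of the geometric rearrangement.
```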