
    Azimuthal instability of the radial thermocapillary flow around a hot bead trapped at the water-air interface

    We investigate the radial thermocapillary flow driven by a laser-heated microbead in partial wetting at the water-air interface. Particular attention is paid to the evolution of the convective flow patterns surrounding the hot sphere as the latter is increasingly heated. The flow morphology is nearly axisymmetric at low laser power P. Increasing P leads to symmetry breaking with the onset of counter-rotating vortex pairs. The boundary condition at the interface, close to no-slip in the low-P regime, becomes nearly stress-free between the vortex pairs in the high-P regime. These observations strongly support the view that surface-active impurities are inevitably adsorbed on the water surface, where they form an elastic layer. The onset of vortex pairs is the signature of a hydrodynamic instability in the layer's response to the centrifugally forced flow. Interestingly, our study paves the way for the design of active colloids able to achieve high-speed self-propulsion via vortex pair generation at a liquid interface.

    Ablation debris control by means of closed thick film filtered water immersion

    The performance of debris control by open immersion techniques during laser ablation has been shown to be limited by flow surface ripple effects on the beam and by loss of ablation plume pressure through splashing of the immersion fluid. To eliminate these issues, a closed technique has been developed which ensures a controlled geometry for both optical interfaces of the flowing liquid film. This prevents splashing, ensures repeatable machining conditions, and allows control of the liquid flow velocity. To investigate the performance benefits of this closed immersion technique, bisphenol A polycarbonate samples were machined using filtered water at a number of flow velocities. The results demonstrate the efficacy of the closed immersion technique: a 93% decrease in debris is produced when machining under closed filtered water immersion; the average debris particle size becomes larger, with equal proportions of small and medium sized debris being produced when laser machining under closed flowing filtered water immersion; large debris is displaced further by a given flow velocity than smaller debris, showing that flow turbulence in the duct has more impact on smaller debris. Low flow velocities were found to be less effective than high flow velocities at controlling where laser ablation debris is deposited; however, excessive flow velocities resulted in turbulence-driven deposition. This work is of interest to the laser micromachining community and may aid the manufacture of 2.5D laser-etched patterns covering large-area wafers, and could be applied to a range of wavelengths and laser types.

    Coherent frequentism

    By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth, since it converges in sample-space probability to 1 if the hypothesis is true or to 0 otherwise under general conditions. Comment: The confidence-measure theory of inference and decision is explicitly extended to vector parameters of interest. The derivation of upper and lower confidence levels from valid and nonconservative set estimators is formalized.
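
    As a reading aid, the final claim can be restated schematically in symbols; the notation below (theta, H, CL_n) is ours, not the paper's, and is only a sketch of the stated property.

    % Schematic restatement (our notation, not the paper's): for an interval
    % hypothesis H = [a, b] about a scalar parameter theta, let CL_n(H) be the
    % confidence level that the frequentist posterior assigns to H after n
    % observations. The abstract's consistency claim then reads
    \[
      \mathrm{CL}_n(H) \xrightarrow{\;P\;} \mathbf{1}\{\theta \in H\}
      \qquad (n \to \infty),
    \]
    % i.e. the confidence level converges in sample-space probability to the
    % indicator of hypothesis truth, a property the p-value lacks.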

    A Global Dataset of Potential Chloride Deposits on Mars as Identified by TGO CaSSIS.

    Chloride deposits are markers of early Mars' aqueous past, with important implications for our understanding of the martian climate and habitability. The Colour and Stereo Surface Imaging System (CaSSIS) onboard ESA's Trace Gas Orbiter provides high-resolution color-infrared images, enabling a planet-wide search for (small) potentially chloride-bearing deposits. Here, we use a neural network to map potentially chloride-bearing deposits in CaSSIS images over a significant fraction of the planet. We identify 965 chloride deposit candidates with diameters ranging from 3000 m, including previously unknown deposits; 136 (~14%) of these are located in the highlands north of the equator, up to ~36°N. Northern chloride candidates tend to be smaller than those in the south and are predominantly located in small-scale topographic depressions in low-albedo Noachian and Hesperian highland terranes. Our new dataset augments existing chloride deposit maps, informs current and future imaging campaigns, and enables future modelling work towards a better understanding of the distribution of near-surface water in Mars' distant past.
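
    For readers curious how such a mapping pipeline might be set up, here is a minimal sketch of a tile-level classifier in the spirit of the abstract. The architecture, tile size, and four-band input (matching CaSSIS's four colour filters) are our assumptions; the paper's actual network is not specified here.

    # Minimal sketch of a tile-level "chloride candidate vs. background"
    # classifier. Architecture and tile size are illustrative assumptions.
    import torch
    import torch.nn as nn

    class TileClassifier(nn.Module):
        """Binary classifier: does an image tile contain a chloride candidate?"""
        def __init__(self, in_bands: int = 4):  # CaSSIS images have 4 filter bands
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_bands, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, bands, height, width) tile -> one logit per tile
            return self.head(self.features(x))

    model = TileClassifier()
    tiles = torch.randn(8, 4, 64, 64)      # eight synthetic 64x64 tiles
    probs = torch.sigmoid(model(tiles))    # per-tile candidate probabilities
    print(probs.squeeze(1))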

    Validation of differential gene expression algorithms: Application comparing fold-change estimation to hypothesis testing

    Background: Sustained research on the problem of determining which genes are differentially expressed on the basis of microarray data has yielded a plethora of statistical algorithms, each justified by theory, simulation, or ad hoc validation, and yet differing in practical results from equally justified algorithms. Recently, a concordance method that measures agreement among gene lists has been introduced to assess various aspects of differential gene expression detection. This method has the advantage of basing its assessment solely on the results of real data analyses, but as it requires examining gene lists of given sizes, it may be unstable.
    Results: Two methodologies for assessing predictive error are described: a cross-validation method and a posterior predictive method. As a nonparametric method of estimating prediction error from observed expression levels, cross validation provides an empirical approach to assessing algorithms for detecting differential gene expression that is fully justified for large numbers of biological replicates. Because it leverages the knowledge that only a small portion of genes are differentially expressed, the posterior predictive method is expected to provide more reliable estimates of algorithm performance, allaying concerns about limited biological replication. In practice, the posterior predictive method can assess when its approximations are valid and when they are inaccurate. Under conditions in which its approximations are valid, it corroborates the results of cross validation. Both comparison methodologies are applicable to both single-channel and dual-channel microarrays. For the data sets considered, estimating prediction error by cross validation demonstrates that empirical Bayes methods based on hierarchical models tend to outperform algorithms based on selecting genes by their fold changes or by non-hierarchical model-selection criteria. (The latter two approaches have comparable performance.) The posterior predictive assessment corroborates these findings.
    Conclusions: Algorithms for detecting differential gene expression may be compared by estimating each algorithm's error in predicting expression ratios, whether such ratios are defined across microarray channels or between two independent groups. According to two distinct estimators of prediction error, algorithms using hierarchical models outperform the other algorithms of the study. The fact that fold-change shrinkage performed as well as conventional model-selection criteria calls for investigating algorithms that combine the strengths of significance testing and fold-change estimation.
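
    The cross-validation idea is straightforward to sketch. The following toy example (simulated data, with a simple shrinkage rule standing in for the paper's hierarchical empirical-Bayes estimators) scores two detection rules by their error in predicting held-out expression ratios.

    # Illustrative sketch (not the paper's code): compare a raw fold-change
    # estimator to a shrunken one by cross-validated prediction error.
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes, n_reps = 2000, 8
    true = np.where(rng.random(n_genes) < 0.05,          # ~5% genes truly DE
                    rng.normal(0, 2, n_genes), 0.0)
    data = true[:, None] + rng.normal(0, 1, (n_genes, n_reps))  # log-ratios

    train, test = data[:, :4], data[:, 4:]               # split the replicates
    fold_change = train.mean(axis=1)                     # raw per-gene estimate

    # Simple James-Stein-style shrinkage toward zero, a stand-in for the
    # hierarchical-model estimators compared in the paper:
    var = train.var(axis=1, ddof=1) / train.shape[1]
    shrunk = fold_change * np.maximum(0, 1 - var / (fold_change**2 + var))

    for name, est in [("raw fold change", fold_change), ("shrunken", shrunk)]:
        pred_err = np.mean((test - est[:, None]) ** 2)   # cross-validated MSE
        print(f"{name}: prediction error = {pred_err:.3f}")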

    VerdictDB: Universalizing Approximate Query Processing

    Despite 25 years of research in academia, approximate query processing (AQP) has had little industrial adoption. One of the major causes of this slow adoption is the reluctance of traditional vendors to make radical changes to their legacy codebases, and the preoccupation of newer vendors (e.g., SQL-on-Hadoop products) with implementing standard features. Additionally, the few AQP engines that are available are each tied to a specific platform and require users to completely abandon their existing databases, an unrealistic expectation given the infancy of AQP technology. Therefore, we argue that a universal solution is needed: a database-agnostic approximation engine that will widen the reach of this emerging technology across various platforms. Our proposal, called VerdictDB, uses a middleware architecture that requires no changes to the backend database and thus can work with all off-the-shelf engines. Operating at the driver level, VerdictDB intercepts analytical queries issued to the database and rewrites each into another query that, if executed by any standard relational engine, yields sufficient information for computing an approximate answer. VerdictDB uses the returned result set to compute an approximate answer and error estimates, which are then passed on to the user or application. However, lack of access to the query execution layer introduces significant challenges in terms of generality, correctness, and efficiency. This paper shows how VerdictDB overcomes these challenges and delivers up to 171× speedup (18.45× on average) for a variety of existing engines, such as Impala, Spark SQL, and Amazon Redshift, while incurring less than 2.6% relative error. VerdictDB is open-sourced under the Apache License. Comment: Extended technical report of the paper that appeared in Proceedings of the 2018 International Conference on Management of Data, pp. 1461-1476. ACM, 2018.
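
    To illustrate the middleware idea on a toy scale: intercept an aggregate query, answer it from a pre-built uniform sample, and attach a normal-approximation error bound. The table names and the rewrite below are ours; VerdictDB's actual rewrites are far more general.

    # Toy sketch of driver-level approximation (not VerdictDB's real rewrite):
    # answer AVG() from a 1% uniform sample table with a 95% error bound.
    import math
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (price REAL)")
    conn.executemany("INSERT INTO sales VALUES (?)",
                     [(float(i % 100),) for i in range(100_000)])
    # Build a ~1% uniform sample once, ahead of query time.
    conn.execute("""CREATE TABLE sales_sample AS
                    SELECT * FROM sales WHERE abs(random()) % 100 = 0""")

    def approx_avg(column: str, table: str) -> tuple:
        """Rewrite AVG(column) to run on the sample; return (estimate, 95% CI)."""
        n, mean, var = conn.execute(
            f"SELECT COUNT({column}), AVG({column}), "
            f"AVG({column}*{column}) - AVG({column})*AVG({column}) "
            f"FROM {table}_sample").fetchone()
        return mean, 1.96 * math.sqrt(var / n)   # normal-approximation bound

    est, err = approx_avg("price", "sales")
    print(f"AVG(price) ≈ {est:.2f} ± {err:.2f}")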

    Linear regression for numeric symbolic variables: an ordinary least squares approach based on Wasserstein Distance

    In this paper we present a linear regression model for modal symbolic data. The observed variables are histogram variables according to the definition given in the framework of Symbolic Data Analysis, and the parameters of the model are estimated using the classic least squares method. An appropriate metric is introduced in order to measure the error between the observed and the predicted distributions; in particular, the Wasserstein distance is proposed. Some properties of this metric are exploited to predict the response variable as a direct linear combination of other independent histogram variables. Measures of goodness of fit are discussed. An application to real data corroborates the proposed method.
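
    The key computational fact behind the method is that, for one-dimensional distributions, the L2 Wasserstein distance equals the L2 distance between quantile functions, which is what makes a least squares treatment tractable. A minimal sketch (grid size and example data are ours):

    # L2 Wasserstein distance between 1-D samples via quantile functions.
    import numpy as np

    def wasserstein2(x: np.ndarray, y: np.ndarray, m: int = 200) -> float:
        """L2 Wasserstein distance computed on an m-point probability grid."""
        t = (np.arange(m) + 0.5) / m          # common probability grid
        qx, qy = np.quantile(x, t), np.quantile(y, t)
        return float(np.sqrt(np.mean((qx - qy) ** 2)))

    rng = np.random.default_rng(1)
    obs = rng.normal(0.0, 1.0, 500)           # "observed" distribution
    pred = rng.normal(0.3, 1.2, 500)          # "predicted" distribution
    print(f"W2(observed, predicted) = {wasserstein2(obs, pred):.3f}")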

    A unifying principle underlying the extracellular field potential spectral responses in the human cortex

    Electrophysiological mass potentials show complex spectral changes upon neuronal activation. However, it is unknown to what extent these complex band-limited changes are interrelated or, alternatively, reflect separate neuronal processes. To address this question, intracranial electrocorticographic (ECoG) responses were recorded in patients engaged in visuomotor tasks. We found that in the 10- to 100-Hz frequency range there was a significant reduction in the exponent $\chi$ of the $1/f^{\chi}$ component of the spectrum associated with neuronal activation. In a minority of electrodes showing particularly high activations, the exponent reduction was associated with specific band-limited power modulations: the emergence of a high-gamma (80-100 Hz) peak and a decrease in the alpha (9-12 Hz) peak. Importantly, the peaks' heights were correlated with the $1/f^{\chi}$ exponent on activation. Control simulations ruled out the possibility that the change in the $1/f^{\chi}$ exponent was a consequence of the analysis procedure. These results reveal a new global, cross-frequency (10-100 Hz) neuronal process reflected in a significant reduction of the power spectrum slope of the ECoG signal.
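
    For concreteness, the $1/f^{\chi}$ exponent over 10-100 Hz can be estimated by a log-log linear fit to the power spectrum; the sketch below does this on a synthetic $1/f$-type signal (signal construction and fit details are ours, not the paper's pipeline).

    # Estimate the 1/f^chi exponent of a signal over 10-100 Hz.
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(2)
    fs, n = 1000, 60 * 1000                    # 60 s sampled at 1 kHz
    # Synthesize 1/f-type noise by shaping white noise in the Fourier domain.
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    spectrum[1:] /= freqs[1:]                  # amplitude ~ 1/f => power ~ 1/f^2
    signal = np.fft.irfft(spectrum, n)

    f, pxx = welch(signal, fs=fs, nperseg=4 * fs)
    band = (f >= 10) & (f <= 100)              # fit only the 10-100 Hz range
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
    print(f"estimated chi = {-slope:.2f}")     # ~2 for this synthetic signal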

    Quantifying Robotic Swarm Coverage

    In the field of swarm robotics, the design and implementation of spatial density control laws has received much attention, with less emphasis being placed on performance evaluation. This work fills that gap by introducing an error metric that provides a quantitative measure of coverage for use with any control scheme. The proposed error metric is continuously sensitive to changes in the swarm distribution, unlike commonly used discretization methods. We analyze the theoretical and computational properties of the error metric and propose two benchmarks to which error metric values can be compared. The first uses the realizable extrema of the error metric to compute the relative error of an observed swarm distribution. We also show that the error metric extrema can be used to help choose the swarm size and effective radius of each robot required to achieve a desired level of coverage. The second benchmark compares the observed distribution of error metric values to the probability density function of the error metric when robot positions are randomly sampled from the target distribution. We demonstrate the utility of this benchmark in assessing the performance of stochastic control algorithms. We prove that the error metric obeys a central limit theorem, develop a streamlined method for performing computations, and place the standard statistical tests used here on a firm theoretical footing. We provide rigorous theoretical development, computational methodologies, numerical examples, and MATLAB code for both benchmarks. Comment: To appear in Springer series Lecture Notes in Electrical Engineering (LNEE). This book contribution is an extension of our ICINCO 2018 conference paper arXiv:1806.02488. 27 pages, 8 figures, 2 tables.
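
    As a hedged illustration of one plausible reading of such a metric (the paper's exact definition and normalization may differ), the sketch below estimates, by Monte Carlo, the L2 difference between a target density on the unit square and the swarm's coverage density, with each robot covering a disk of effective radius r.

    # Monte Carlo estimate of an L2 coverage error on the unit square.
    import numpy as np

    rng = np.random.default_rng(3)

    def swarm_density(pts, robots, r):
        """Each robot contributes 1/(N*pi*r^2) on its disk of radius r, so the
        swarm density integrates to ~1 when disks stay inside the domain."""
        d2 = ((pts[:, None, :] - robots[None, :, :]) ** 2).sum(-1)
        return (d2 <= r * r).sum(axis=1) / (len(robots) * np.pi * r * r)

    def coverage_error(robots, target_pdf, r, n_mc=50_000):
        pts = rng.random((n_mc, 2))               # uniform samples, unit square
        diff = swarm_density(pts, robots, r) - target_pdf(pts)
        return float(np.sqrt((diff ** 2).mean())) # Monte Carlo L2 error

    def uniform_pdf(pts):
        return np.ones(len(pts))                  # uniform target density

    robots = rng.random((100, 2))                 # 100 robots placed at random
    print(f"L2 error vs uniform target: "
          f"{coverage_error(robots, uniform_pdf, r=0.05):.3f}")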

    Electrostatic and electrokinetic contributions to the elastic moduli of a driven membrane

    We discuss the electrostatic contribution to the elastic moduli of a cell or artificial membrane placed in an electrolyte and driven by a DC electric field. The field drives ion currents across the membrane, through specific channels, pumps or natural pores. In steady state, charges accumulate in the Debye layers close to the membrane, modifying the membrane elastic moduli. We first study a model of a membrane of zero thickness, later generalizing this treatment to allow for a finite thickness and finite dielectric constant. Our results clarify and extend the results presented in [D. Lacoste, M. Cosentino Lagomarsino, and J. F. Joanny, Europhys. Lett. 77, 18006 (2007)] by providing a physical explanation for a destabilizing term proportional to $k_\perp^3$ in the fluctuation spectrum, which we relate to a nonlinear ($E^2$) electro-kinetic effect called induced-charge electro-osmosis (ICEO). Recent studies of ICEO have focused on electrodes and polarizable particles, where an applied bulk field is perturbed by capacitive charging of the double layer and drives flow along the field axis toward surface protrusions; in contrast, we predict "reverse" ICEO flows around driven membranes, due to curvature-induced tangential fields within a non-equilibrium double layer, which hydrodynamically enhance protrusions. We also consider the effect of incorporating the dynamics of a spatially dependent concentration field for the ion channels. Comment: 22 pages, 10 figures. Under review for EPJ
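
    To fix ideas, the generic structure of the fluctuation spectrum described above can be written schematically; the coefficients below are placeholders, not the paper's derived expressions.

    % Schematic inverse fluctuation spectrum (Sigma, Gamma, kappa are
    % placeholder coefficients, not the paper's derived expressions):
    \[
      \langle |h_{k_\perp}|^{2} \rangle^{-1}
      \propto \Sigma\, k_\perp^{2} - \Gamma\, k_\perp^{3} + \kappa\, k_\perp^{4},
    \]
    % where Sigma plays the role of a surface tension, kappa of a bending
    % modulus, and the negative k_perp^3 term (Gamma > 0) is the destabilizing,
    % ICEO-related contribution discussed in the abstract.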