Experimental and Numerical Analysis of Single Phase Flow in a micro T-junction
This paper was presented at the 4th Micro and Nano Flows Conference (MNF2014), held at University College London, UK. The conference was organised by Brunel University and supported by the Italian Union of Thermofluiddynamics, IPEM, the Process Intensification Network, the Institution of Mechanical Engineers, the Heat Transfer Society, HEXAG (the Heat Exchange Action Group), the Energy Institute, ASME Press, the LCN (London Centre for Nanotechnology), UCL (University College London), UCL Engineering, and the International NanoScience Community (www.nanopaprika.eu).
In this work the fluid-dynamic behaviour of a micro T-junction has been investigated both numerically and experimentally for low Reynolds numbers (Re < 14) with water as the working fluid. The velocity profiles within the T-junction have been determined experimentally by means of micro Particle Image Velocimetry (μPIV). The experimental data have been compared with the numerical results obtained with a 3D model implemented in the Comsol Multiphysics® environment for incompressible, isothermal, laminar flows with constant properties. The comparison between the experimental and the numerical data shows very good agreement: in the central region of the T-junction, where the velocity profiles of the inlet branches interact, the maximum difference is less than 5.8% when different flow rates are imposed at the inlets (ratio 1:2) and less than 4.4% when the same flow rate is imposed at both inlets (1:1). Since the estimated uncertainty of the experimental velocity is about 3%, this result can be considered very good; it demonstrates that no significant scaling effects influence the liquid mixing at low Reynolds numbers (Re < 14) and that the behaviour of the micro T-junction can be considered conventional. The detailed analysis of the evolution of the velocity profile within the central region of the mixer has made it possible to determine where the fully developed laminar profile is reached (for instance, 260 mm from the centre of the T-junction when a maximum water flow rate of 8 ml/h is considered).
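As a sanity check on the laminar regime quoted above, a short back-of-the-envelope calculation confirms that the maximum flow rate of 8 ml/h corresponds to Re well below 14 for water. Note that the abstract does not state the channel dimensions, so the 300 μm square cross-section used here is an assumed, purely illustrative geometry:

```python
# Reynolds number for water at the maximum flow rate of 8 ml/h.
# The 300 um square cross-section is an ASSUMED, illustrative geometry;
# the abstract does not give the actual channel dimensions.
rho = 998.0        # water density, kg/m^3
mu = 1.0e-3        # water dynamic viscosity, Pa*s
side = 300e-6      # assumed channel side, m
area = side**2     # cross-sectional area, m^2
d_h = side         # hydraulic diameter of a square channel, m

q = 8e-6 / 3600.0          # 8 ml/h converted to m^3/s
v = q / area               # mean velocity, m/s
re = rho * v * d_h / mu    # Reynolds number
```

Under these assumptions re comes out around 7, consistent with the Re < 14 regime reported for the experiments.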
On the computation of Wasserstein barycenters
The Wasserstein barycenter is an important notion in the analysis of high-dimensional data, with a broad range of applications in applied probability, economics, and statistics, in particular clustering and image processing. In this paper, we state a general version of the equivalence of the Wasserstein barycenter problem to the n-coupling problem. As a consequence, the coupling to the sum principle (characterizing solutions to the n-coupling problem) provides a novel criterion for the explicit characterization of barycenters. Based on this criterion, we provide as a main contribution the simple-to-implement iterative swapping algorithm (ISA) for computing barycenters. The ISA is a completely non-parametric algorithm which provides a sharp image of the support of the barycenter and has a quadratic time complexity, comparable to other well-established algorithms designed to compute barycenters. The algorithm can also be applied to more complex optimization problems such as the k-barycenter problem.
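The abstract does not spell out the ISA's details, but the flavour of a swapping-based barycenter computation can be sketched for the simplest case: two empirical measures of equal size with squared Euclidean cost, where the barycenter of an optimal coupling is the set of midpoints of coupled points. The function name `swap_barycenter` and the 2-swap local search below are hypothetical illustrations, not the paper's exact algorithm:

```python
import numpy as np

def swap_barycenter(X, Y, sweeps=50):
    """Toy swapping heuristic (illustrative, NOT the paper's ISA):
    couple two equal-size empirical measures X, Y under squared cost
    by local 2-swaps, then return the midpoints as barycenter support."""
    m = len(X)
    perm = np.arange(m)  # current assignment: X[i] <-> Y[perm[i]]

    def cost(i, j):
        # squared cost of pairing X[i] with Y[perm[j]]
        return np.sum((X[i] - Y[perm[j]]) ** 2)

    for _ in range(sweeps):
        improved = False
        for i in range(m):
            for j in range(i + 1, m):
                # swap the two assignments if that lowers total cost
                if cost(i, i) + cost(j, j) > cost(i, j) + cost(j, i):
                    perm[i], perm[j] = perm[j], perm[i]
                    improved = True
        if not improved:
            break
    # midpoints of coupled points = barycenter support (2 marginals)
    return 0.5 * (X + Y[perm])
```

Each sweep is quadratic in the number of support points, which is in the spirit of the quadratic complexity mentioned above; in one dimension the 2-swap local optimum recovers the monotone (optimal) coupling.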
Centers of probability measures without the mean
In recent years, the notion of mixability has been developed with applications to operations research, optimal transportation, and quantitative finance. An n-tuple of distributions is said to be jointly mixable if there exist n random variables following these distributions and adding up to a constant, called the center, with probability one. When the n distributions are identical, we speak of complete mixability. If each distribution has a finite mean, the center is obviously the sum of the means. In this paper, we investigate the set of centers of completely and jointly mixable distributions that do not have a finite mean. In addition to several results, we show the (possibly counterintuitive) fact that, for each (Formula presented.), there exist n standard Cauchy random variables adding up to a constant C if and only if (Formula presented.).
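The simplest instance of the phenomenon described above can be checked directly: for n = 2, the pair (X, -X) with X standard Cauchy is jointly mixable with center 0, even though neither variable has a mean, since the standard Cauchy distribution is symmetric. A minimal numerical illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_cauchy(100_000)  # X ~ standard Cauchy (no mean exists)
y = -x                            # -X is also standard Cauchy, by symmetry
s = x + y                         # jointly mixable pair with center C = 0
```

Here `s` is identically zero with probability one, so 0 is a center of the pair despite the means being undefined.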
B4DS @ PRELEARN: Ensemble method for prerequisite learning
In this paper we describe the methodologies we proposed to tackle the EVALITA 2020 shared task PRELEARN. We propose both a methodology based on gated recurrent units and one using more classical word embeddings together with ensemble methods. Our goal in choosing these approaches is twofold: on one side, we wish to see how much of the prerequisite information is present within the pages themselves; on the other, we would like to assess how much the information from the rest of Wikipedia can help in identifying this type of relation. This second approach is particularly useful for extension to new entities close to the ones in the corpus provided for the task but not actually present in it. With these methodologies we reached second position in the challenge.
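The abstract does not detail the ensemble, so the following is only a generic sketch of the word-embedding-plus-ensemble idea: concatenate embeddings of a concept pair and classify the prerequisite relation with a soft-voting ensemble. The embeddings, the labelling rule, and the choice of base models here are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
# Toy stand-ins for concept-pair features: concatenation of two
# hypothetical 16-dimensional word embeddings (not the task's real data).
X = rng.normal(size=(400, 32))
# Synthetic "prerequisite" label from an invented linear rule,
# used only so the sketch is runnable end to end.
y = (X[:, :16].sum(axis=1) > X[:, 16:].sum(axis=1)).astype(int)

# Soft-voting ensemble: average predicted class probabilities of
# two different base classifiers.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X[:300], y[:300])
acc = ensemble.score(X[300:], y[300:])
```

Soft voting lets a confident base model outweigh an uncertain one, which is one common reason ensembles of heterogeneous classifiers outperform their members on tasks like this.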
AGN counts at 15um. XMM observations of the ELAIS-S1-5 sample
Context: The counts of galaxies and AGN in the mid infra-red (MIR) bands are
important instruments for studying their cosmological evolution. However, the
classic spectral line ratios techniques can become misleading when trying to
properly separate AGN from starbursts or even from apparently normal galaxies.
Aims: We use X-ray band observations to discriminate AGN activity in
previously classified MIR-selected starburst galaxies and to derive updated
AGN1 and (Compton thin) AGN2 counts at 15 um.
Methods: XMM observations of the ELAIS-S1 15um sample down to flux limits
~2x10^-15 erg cm^-2 s^-1 (2-10 keV band) were used. We classified as AGN all
those MIR sources with an unabsorbed 2-10 keV X-ray luminosity higher than
~10^42 erg/s.
Results: We find that at least about 13(+/-6) per cent of the previously
classified starburst galaxies harbor an AGN. According to these figures, we
provide an updated estimate of the counts of AGN1 and (Compton thin) AGN2 at 15
um. It turns out that at least 24% of the extragalactic sources brighter than
0.6 mJy at 15 um are AGN (~13% contribution to the extragalactic background
produced at fluxes brighter than 0.6 mJy).
Comment: Accepted for publication in A&A.
Extremal Dependence Concepts
The probabilistic characterization of the relationship between two or more random variables calls for a notion of dependence. Dependence modeling leads to mathematical and statistical challenges, and recent developments in extremal dependence concepts have drawn a lot of attention to probability and its applications in several disciplines. The aim of this paper is to review various concepts of extremal positive and negative dependence, including several recently established results, reconstruct their history, link them to probabilistic optimization problems, and provide a list of open questions in this area. While the concept of extremal positive dependence is agreed upon for random vectors of arbitrary dimensions, various notions of extremal negative dependence arise when more than two random variables are involved. We review existing popular concepts of extremal negative dependence given in the literature and introduce a novel notion which, in a general sense, includes the existing ones as particular cases. Even if much of the literature on dependence is focused on positive dependence, we show that negative dependence plays an equally important role in the solution of many optimization problems. While the most popular tool used nowadays to model dependence is that of a copula function, in this paper we use the equivalent concept of a set of rearrangements. This is not only for historical reasons. Rearrangement functions describe the relationship between random variables in a completely deterministic way, allow a deeper understanding of dependence itself, and have several advantages in the approximation of solutions in a broad class of optimization problems.
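The rearrangement viewpoint mentioned above can be illustrated concretely for two samples: sorting both vectors the same way gives the comonotonic (extremal positive) rearrangement, which maximizes the variance of the sum, while sorting them in opposite orders gives the countermonotonic (extremal negative, n = 2) rearrangement, which minimizes it. A minimal sketch with invented data:

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 5.0, 2.0])
y = np.array([40.0, 10.0, 30.0, 50.0, 20.0])

# Comonotonic rearrangement: pair smallest with smallest
# (extremal positive dependence) -> maximal variance of the sum.
s_pos = np.sort(x) + np.sort(y)

# Countermonotonic rearrangement: pair smallest with largest
# (extremal negative dependence for n = 2) -> minimal variance of the sum.
s_neg = np.sort(x) + np.sort(y)[::-1]
```

When y happens to equal c - x, the countermonotonic sum is constant, which is exactly joint mixability for two marginals; the deterministic pairing of order statistics is what makes rearrangements convenient in the optimization problems the paper discusses.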
Copulas, credit portfolios, and the broken heart syndrome
David X. Li is professor of Finance at the Shanghai Advanced Institute of Finance (SAIF). For more than two decades, he worked at leading financial institutions in the areas of product development, risk management, asset/liability management, and investment analytics. He was the chief risk officer for China International Capital Corporation (CICC) Ltd, head of credit derivative research and analytics at Citigroup and Barclays Capital, and head of modeling for AIG Investments.
David has a PhD degree in Statistics from the University of Waterloo, Master's degrees in Economics, Finance, and Actuarial Science, and a Bachelor's degree in Mathematics. David is currently an Associate Editor for the North American Actuarial Journal, an adjunct professor at the University of Waterloo, a senior research fellow at the Global Risk Institute in Toronto, and a senior advisor to the Risk Management Institute at the National University of Singapore. David was one of the pioneers in credit derivatives. His seminal work using copula functions for credit portfolio modeling has been widely cited by academic research, broadly used by practitioners for credit portfolio trading, risk management, and rating, and well covered by the media (Wall Street Journal, Financial Times, Nikkei, and CBC News).