    The ALMA Frontier Fields Survey - IV. Lensing-corrected 1.1 mm number counts in Abell 2744, MACSJ0416.1-2403 and MACSJ1149.5+2223

    [abridged] Characterizing the number counts of faint, dusty star-forming galaxies is currently a challenge even for deep, high-resolution observations in the FIR-to-mm regime. They are predicted to account for approximately half of the total extragalactic background light at those wavelengths. Searching for dusty star-forming galaxies behind massive galaxy clusters benefits from strong lensing, which enhances their measured emission while increasing spatial resolution. Derived number counts depend, however, on mass reconstruction models that properly constrain these clusters. We estimate the 1.1 mm number counts along the lines of sight of three galaxy clusters, Abell 2744, MACSJ0416.1-2403 and MACSJ1149.5+2223, which are part of the ALMA Frontier Fields Survey. We perform detailed simulations to correct these counts for lensing effects. We use several publicly available lensing models for the galaxy clusters to derive the intrinsic flux densities of our sources. We perform Monte Carlo simulations of the number counts for a detailed treatment of the uncertainties in the magnifications and adopted source redshifts. We find an overall agreement among the number counts derived for the different lens models, despite their systematic variations regarding source magnifications and effective areas. Our number counts span ~2.5 dex in demagnified flux density, from several mJy down to tens of $\mu$Jy. Our number counts are consistent with recent estimates from deep ALMA observations at a $3\sigma$ level. Below $\approx 0.1$ mJy, however, our cumulative counts are lower by $\approx 1$ dex, suggesting a flattening in the number counts. In our deepest ALMA mosaic, we estimate number counts for intrinsic flux densities $\approx 4$ times fainter than the rms level. This highlights the potential of probing the sub-10 $\mu$Jy population in larger samples of galaxy cluster fields with deeper ALMA observations.
    Comment: 19 pages, 14 figures, 3 tables. Accepted for publication in A&A
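
    For illustration only, the following is a minimal Monte Carlo sketch of the demagnification-and-counting step described above. Every number in it (flux densities, magnifications, their errors, and the effective area) is invented, and the paper's actual analysis additionally propagates source-redshift uncertainties and the differences between the published lens models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical detections: observed 1.1 mm flux densities [mJy], best-fit
# magnifications from a lens model, and an assumed 25% magnification error.
s_obs  = np.array([1.8, 0.9, 0.45, 0.30])   # mJy
mu     = np.array([2.1, 5.0, 12.0, 30.0])   # magnification
mu_err = 0.25 * mu

area_deg2 = 3 * 4.2 / 3600.0        # assumed effective source-plane area of 3 mosaics
flux_bins = np.logspace(-2, 1, 16)  # demagnified flux thresholds [mJy]

n_mc = 10_000
counts = np.zeros((n_mc, flux_bins.size))
for i in range(n_mc):
    # Draw one magnification realization per source (truncated at mu >= 1).
    mu_i = np.clip(rng.normal(mu, mu_err), 1.0, None)
    s_int = s_obs / mu_i            # demagnified (intrinsic) flux densities
    # Cumulative counts N(>S) per deg^2 for this realization.
    counts[i] = np.array([(s_int > s).sum() for s in flux_bins]) / area_deg2

lo, med, hi = np.percentile(counts, [16, 50, 84], axis=0)
for s, a, b, c in zip(flux_bins, lo, med, hi):
    print(f"S > {s:7.3f} mJy : N = {b:8.1f} (+{c - b:.1f}/-{b - a:.1f}) deg^-2")
```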

    A Vertical PRF Architecture for Microblog Search

    In microblog retrieval, query expansion can be essential to obtain good search results due to the short size of queries and posts. Since information in microblogs is highly dynamic, an up-to-date index coupled with pseudo-relevance feedback (PRF) using an external corpus has a higher chance of retrieving more relevant documents and improving ranking. In this paper, we focus on the research question: how can we reduce the computational cost of query expansion while maintaining the same retrieval precision as standard PRF? We therefore propose to accelerate the query expansion step of pseudo-relevance feedback. The hypothesis is that using an expansion corpus organized into verticals leads to a more efficient query expansion process and improved retrieval effectiveness. Thus, the proposed query expansion method uses a distributed search architecture and resource selection algorithms to provide an efficient query expansion process. Experiments on the TREC Microblog datasets show that the proposed approach can match or outperform standard PRF in MAP and NDCG@30, with a computational cost that is three orders of magnitude lower.
    Comment: To appear in ICTIR 201
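
    As a rough illustration of the kind of process being accelerated, here is a toy sketch of vertical selection followed by frequency-based query expansion. All corpus contents, function names and parameters are hypothetical; the paper's method uses full resource-selection algorithms and relevance-model weighting rather than raw term counts.

```python
from collections import Counter

# Toy "verticals": each is a list of documents (token lists). In the paper these
# would be shards of an external expansion corpus behind a distributed search layer.
verticals = {
    "sports":   [["cup", "final", "goal", "team"], ["team", "coach", "injury"]],
    "politics": [["election", "vote", "poll"], ["senate", "vote", "bill"]],
}

def select_vertical(query, verticals):
    """Crude resource selection: pick the vertical with the most query-term hits."""
    def hits(docs):
        return sum(t in doc for t in query for doc in docs)
    return max(verticals, key=lambda name: hits(verticals[name]))

def expand_query(query, verticals, k_docs=10, k_terms=5):
    """PRF-style expansion: add the most frequent terms from top pseudo-relevant docs."""
    docs = verticals[select_vertical(query, verticals)][:k_docs]
    tf = Counter(t for doc in docs for t in doc if t not in query)
    return query + [t for t, _ in tf.most_common(k_terms)]

print(expand_query(["vote", "results"], verticals))
```

    Restricting expansion to a single selected vertical is what keeps the feedback step cheap: only a small shard of the external corpus is searched instead of the whole collection.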

    A parallel algorithm for Hamiltonian matrix construction in electron-molecule collision calculations: MPI-SCATCI

    Construction and diagonalization of the Hamiltonian matrix is the rate-limiting step in most low-energy electron–molecule collision calculations. Tennyson (J Phys B, 29 (1996) 1817) implemented a novel algorithm for Hamiltonian construction which took advantage of the structure of the wavefunction in such calculations. This algorithm is re-engineered to make use of modern computer architectures, and the use of appropriate diagonalizers is considered. Test calculations demonstrate that significant speed-ups can be gained using multiple CPUs. This opens the way to calculations which consider higher collision energies, larger molecules and/or more target states. The methodology, which is implemented as part of the UK molecular R-matrix codes (UKRMol and UKRMol+), can also be used for studies of bound molecular Rydberg states, photoionisation and positron-molecule collisions.
    Comment: Write-up of the computer program MPI-SCATCI. Computer Physics Communications, in press
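
    The sketch below is not the MPI-SCATCI algorithm itself, merely the generic pattern it relies on: distributing the construction of Hamiltonian matrix elements over MPI ranks and assembling the matrix for diagonalization. It assumes the third-party mpi4py package, and the matrix-element function is a placeholder for the expensive configuration-interaction integrals.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 200  # hypothetical Hamiltonian dimension

def matrix_element(i, j):
    # Placeholder for the expensive integral evaluation; symmetric in (i, j).
    return np.exp(-abs(i - j)) + (i == j) * i

# Each rank builds its share of rows (cyclic distribution balances the load).
rows = range(rank, N, size)
local = {i: np.array([matrix_element(i, j) for j in range(N)]) for i in rows}

# Root gathers the blocks, assembles the dense symmetric matrix and diagonalizes it.
gathered = comm.gather(local, root=0)
if rank == 0:
    H = np.empty((N, N))
    for block in gathered:
        for i, row in block.items():
            H[i] = row
    print("lowest eigenvalues:", np.linalg.eigvalsh(H)[:5])
```

    Run with, for example, `mpiexec -n 4 python build_hamiltonian.py`; in a production code the matrix would typically stay distributed and be passed to a parallel eigensolver rather than gathered on one rank.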

    Distributed Edge Connectivity in Sublinear Time

    We present the first sublinear-time algorithm for a distributed message-passing network to compute its edge connectivity $\lambda$ exactly in the CONGEST model, as long as there are no parallel edges. Our algorithm takes $\tilde O(n^{1-1/353}D^{1/353}+n^{1-1/706})$ time to compute $\lambda$ and a cut of cardinality $\lambda$ with high probability, where $n$ and $D$ are the number of nodes and the diameter of the network, respectively, and $\tilde O$ hides polylogarithmic factors. This running time is sublinear in $n$ (i.e. $\tilde O(n^{1-\epsilon})$) whenever $D$ is. Previous sublinear-time distributed algorithms can solve this problem either (i) exactly only when $\lambda=O(n^{1/8-\epsilon})$ [Thurimella PODC'95; Pritchard, Thurimella, ACM Trans. Algorithms'11; Nanongkai, Su, DISC'14] or (ii) approximately [Ghaffari, Kuhn, DISC'13; Nanongkai, Su, DISC'14]. To achieve this we develop and combine several new techniques. First, we design the first distributed algorithm that can compute a $k$-edge connectivity certificate for any $k=O(n^{1-\epsilon})$ in time $\tilde O(\sqrt{nk}+D)$. Second, we show that by combining the recent distributed expander decomposition technique of [Chang, Pettie, Zhang, SODA'19] with techniques from the sequential deterministic edge connectivity algorithm of [Kawarabayashi, Thorup, STOC'15], we can decompose the network into a sublinear number of clusters with small average diameter and without any mincut separating a cluster (except the 'trivial' ones). Finally, by extending the tree packing technique from [Karger STOC'96], we can find the minimum cut in time proportional to the number of components. As a byproduct of this technique, we obtain an $\tilde O(n)$-time algorithm for computing exact minimum cut for weighted graphs.
    Comment: Accepted at the 51st ACM Symposium on Theory of Computing (STOC 2019)
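
    To make the certificate idea concrete, here is a sequential, simplified sketch of a sparse k-edge-connectivity certificate built from k edge-disjoint maximal spanning forests (the classical construction behind the cited certificate results); the paper's contribution is computing such a certificate distributedly in roughly $\tilde O(\sqrt{nk}+D)$ CONGEST rounds, which this sketch does not attempt. The toy graph at the end is invented.

```python
def spanning_forest(n, edges):
    """Return a maximal spanning forest of an n-node graph via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    forest = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((u, v))
    return forest

def connectivity_certificate(n, edges, k):
    """Union of k edge-disjoint maximal spanning forests.

    Every cut of the original graph keeps at least min(its size, k) edges in the
    certificate, so cuts of size <= k, including the minimum cut, are preserved.
    """
    remaining, certificate = list(edges), []
    for _ in range(k):
        forest = spanning_forest(n, remaining)
        certificate += forest
        chosen = set(forest)
        remaining = [e for e in remaining if e not in chosen]
    return certificate

# Toy graph: two triangles joined by two edges (edge connectivity 2).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3), (0, 5)]
print(connectivity_certificate(6, edges, k=2))
```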

    A Simulation Method for the Computation of the E

    We propose a set of numerical methods for the computation of the frequency-dependent effective primary wave velocity of heterogeneous rocks. We assume the rocks' internal microstructure is given by micro-computed tomography images. In the low/medium frequency regime, we propose to solve the acoustic equation in the frequency domain by a Finite Element Method (FEM). We employ a Perfectly Matched Layer to truncate the computational domain and we show the need to repeat the domain a sufficient number of times to obtain accurate results. To make this problem computationally tractable, we equip the FEM with non-fitting meshes and we precompute multiple blocks of the stiffness matrix. In the high-frequency range, we solve the eikonal equation with a Fast Marching Method. Numerical results confirm the validity of the proposed methods and illustrate the effect of density, porosity, and the size and distribution of the pores on the effective compressional wave velocity.
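
    As a hedged illustration of the high-frequency step only, the sketch below estimates an effective P-wave velocity by fast marching through a synthetic porous microstructure. It assumes the third-party package scikit-fmm; the grid, voxel size and wave speeds are invented, and the paper's low/medium-frequency FEM solver with Perfectly Matched Layers is not reproduced here.

```python
import numpy as np
import skfmm  # scikit-fmm: fast marching solver for the eikonal equation

rng = np.random.default_rng(0)
n, dx = 200, 1e-6                   # 200x200 grid of 1-micron voxels (assumed)
v_matrix, v_pore = 4000.0, 1500.0   # assumed P-wave speeds [m/s]

# Synthetic microstructure: ~20% of voxels are fluid-filled pores.
pores = rng.random((n, n)) < 0.20
speed = np.where(pores, v_pore, v_matrix)

# Zero level set just inside the left edge acts as the plane-wave source.
x = np.arange(n) * dx
phi = np.tile(x - 0.5 * dx, (n, 1))
t = skfmm.travel_time(phi, speed, dx=dx)

# Effective velocity: propagation distance over mean first-arrival time at the right edge.
v_eff = (x[-1] - x[0]) / t[:, -1].mean()
print(f"effective P-wave velocity ~ {v_eff:.0f} m/s")
```

    Increasing the pore fraction or enlarging the pores lowers the first-arrival velocity, which is the qualitative trend the paper quantifies against density, porosity and pore-size distribution.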

    Feeling crowded yet?: Crowd simulations for VR

    With advances in virtual reality technology and its multiple applications, the need for believable, immersive virtual environments is increasing. Even though current computer graphics methods allow us to develop highly realistic virtual worlds, the main element failing to enhance presence is autonomous groups of human inhabitants. A great number of crowd simulation techniques have emerged in the last decade, but critical details in the crowd's movements and appearance do not meet the standards necessary to convince VR participants that they are present in a real crowd. In this paper, we review recent advances in the creation of immersive virtual crowds and discuss areas that require further work to turn these simulations into more fully immersive and believable experiences.