
    Predicting Scientific Success Based on Coauthorship Networks

    We address the question of to what extent the success of scientific articles is due to social influence. Analyzing a data set of over 100,000 publications from the field of Computer Science, we study how centrality in the coauthorship network differs between authors who have highly cited papers and those who do not. We further show that a machine learning classifier, based only on coauthorship network centrality measures at the time of publication, can predict with high precision whether an article will be highly cited five years after publication. By this we provide quantitative insight into the social dimension of scientific publishing, challenging the perception of citations as an objective, socially unbiased measure of scientific success. Comment: 21 pages, 2 figures, incl. Supplementary Material
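    The centrality features the abstract describes can be sketched in a few lines. The following is a minimal toy illustration, not the paper's pipeline: the edge list is hypothetical, and a simple threshold rule stands in for the trained classifier.

    ```python
    from collections import defaultdict

    def degree_centrality(edges):
        """Normalized degree centrality of an undirected coauthorship graph."""
        adj = defaultdict(set)
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        n = len(adj)
        return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

    # Hypothetical coauthorship edges: each pair shares at least one paper.
    edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]
    centrality = degree_centrality(edges)

    # Illustrative threshold rule standing in for the paper's trained classifier.
    def predict_highly_cited(author, threshold=0.5):
        return centrality.get(author, 0.0) >= threshold

    print(centrality["A"])            # 0.75
    print(predict_highly_cited("A"))  # True
    ```

    In practice one would compute several centrality measures (degree, betweenness, eigenvector, ...) per author at publication time and feed them as features to a standard classifier.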

    Space-based Aperture Array For Ultra-Long Wavelength Radio Astronomy

    The past decade has seen the rise of various radio astronomy arrays, particularly for low-frequency observations below 100 MHz. These developments have been primarily driven by interesting and fundamental scientific questions, such as studying the dark ages and the epoch of re-ionization by detecting the highly red-shifted 21 cm line emission. However, Earth-based radio astronomy below frequencies of 30 MHz is severely restricted due to man-made interference, ionospheric distortion and the almost complete non-transparency of the ionosphere below 10 MHz. Therefore, this narrow spectral band remains possibly the last unexplored frequency range in radio astronomy. A straightforward solution to study the universe at these frequencies is to deploy a space-based antenna array far away from Earth's ionosphere. Various studies in the past were principally limited by technology and computing resources; however, current processing and communication trends indicate otherwise. We briefly present the achievable science cases, and discuss the system design for selected scenarios, such as extra-galactic surveys. An extensive discussion is presented on various sub-systems of the potential satellite array, such as the radio astronomical antenna design, the on-board signal processing, communication architectures and joint space-time estimation of the satellite network. In light of a scalable array, and to avert a single point of failure, we propose both centralized and distributed solutions for the ULW space-based array. We highlight the benefits of various deployment locations and summarize the technological challenges for future space-based radio arrays. Comment: Submitted

    Two-stage Denoising Diffusion Model for Source Localization in Graph Inverse Problems

    Source localization is the inverse problem of graph information dissemination and has broad practical applications. However, the inherent intricacy and uncertainty in information dissemination pose significant challenges, and the ill-posed nature of the source localization problem further exacerbates them. Recently, deep generative models, particularly diffusion models inspired by classical non-equilibrium thermodynamics, have made significant progress. While diffusion models have proven to be powerful in solving inverse problems and producing high-quality reconstructions, applying them directly to source localization is infeasible for two reasons. Firstly, it is impossible to calculate the posterior disseminated results on a large-scale network for iterative denoising sampling, which would incur enormous computational costs. Secondly, in the existing methods for this field, the training data themselves are ill-posed (many-to-one); thus, simply transferring the diffusion model would only lead to local optima. To address these challenges, we propose a two-stage optimization framework, the source localization denoising diffusion model (SL-Diff). In the coarse stage, we devise source proximity degrees as the supervision signals to generate coarse-grained source predictions. This efficiently initializes the next stage, significantly reducing its convergence time and calibrating the convergence process. Furthermore, the introduction of cascade temporal information in this training method transforms the many-to-one mapping relationship into a one-to-one relationship, addressing the ill-posed problem. In the fine stage, we design a diffusion model for the graph inverse problem that can quantify the uncertainty in the dissemination. The proposed SL-Diff yields excellent prediction results within a reasonable sampling time in extensive experiments
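    The "source proximity degree" supervision signal of the coarse stage can be pictured as a score that decays with graph distance from a candidate source. The sketch below is an illustrative stand-in under that assumption; SL-Diff's exact definition may differ, and the toy graph and decay factor are hypothetical.

    ```python
    from collections import deque

    def proximity_degrees(adj, sources, decay=0.5):
        """BFS-based proximity score: nodes nearer a seed source score higher.
        Illustrative stand-in for a coarse-stage supervision signal."""
        dist = {s: 0 for s in sources}
        queue = deque((s, 0) for s in sources)
        while queue:
            v, d = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = d + 1
                    queue.append((u, d + 1))
        return {v: decay ** d for v, d in dist.items()}

    # Small toy dissemination graph with node 0 as the seed source.
    adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    scores = proximity_degrees(adj, sources=[0])
    print(scores)  # {0: 1.0, 1: 0.5, 2: 0.5, 3: 0.25}
    ```

    Such dense per-node targets give the coarse model a well-posed regression signal even when many source configurations explain the same observed spread.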

    Technical benefits and cultural barriers of networked Autonomous Undersea Vehicles

    Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 44-45). The research presented in this thesis examines the technical benefits of using a collaborative network of Autonomous Undersea Vehicles (AUVs) in place of individual vehicles. Benefits could be achieved in the areas of reduced power consumption, improved positional information and improved acoustic communication bandwidth. However, the current culture of AUV development may impede this approach. The thesis uses the Object Process Methodology (OPM) and principles of System Architecture to trace the value of an AUV system from the scientist who benefits from the data to the vehicle itself. Sections 3 and 4 outline the needs for an AUV system as they currently exist and describe the key physics-based limitations of operations. Section 5 takes a broader look at the system goal as data delivery, not just the deployment of a vehicle, and introduces the concept of networked AUVs. Section 6 describes a potential evolution of networked AUVs toward increasing autonomy and collaboration. Finally, Section 7 examines AUV development cultures that could impede, or foster, networked vehicles. by Patrick L. Wineman. S.M.

    A Comparative Analysis of Human Behavior Prediction Approaches in Intelligent Environments

    Behavior modeling has multiple applications in the intelligent environment domain. It has been used in different tasks, such as the stratification of different pathologies, the prediction of user actions and activities, or the modeling of energy usage. Specifically, behavior prediction can be used to forecast the future evolution of users and to identify behaviors that deviate from expected conduct. In this paper, we propose the use of embeddings to represent user actions, and study and compare several behavior prediction approaches. We test multiple model architectures (LSTMs, CNNs, GCNs, and transformers) to ascertain the best approach to using embeddings for behavior modeling, and also evaluate multiple embedding retrofitting approaches. To do so, we use the Kasteren dataset for intelligent environments, which is one of the most widely used datasets in the areas of activity recognition and behavior modeling. This work was carried out with the financial support of FuturAAL-Ego (RTI2018-101045-A-C22) and FuturAAL-Context (RTI2018-101045-B-C21), granted by the Spanish Ministry of Science, Innovation and Universities
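    The next-action prediction task the abstract describes has a trivial frequency baseline that any learned model (LSTM, CNN, GCN, transformer) would be expected to beat. The sketch below is that baseline only; the action sequences are hypothetical, merely in the style of the Kasteren dataset's activity labels.

    ```python
    from collections import Counter, defaultdict

    def train_bigram(sequences):
        """Count action-to-next-action transitions across daily sequences."""
        nxt = defaultdict(Counter)
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                nxt[a][b] += 1
        return nxt

    def predict_next(model, action):
        """Most frequent follower of `action`, or None if the action is unseen."""
        return model[action].most_common(1)[0][0] if model[action] else None

    # Hypothetical daily action sequences (Kasteren-style labels).
    days = [
        ["use_toilet", "prepare_breakfast", "leave_house"],
        ["use_toilet", "prepare_breakfast", "get_drink", "leave_house"],
        ["use_toilet", "take_shower", "leave_house"],
    ]
    model = train_bigram(days)
    print(predict_next(model, "use_toilet"))  # prepare_breakfast
    ```

    Embedding-based models replace the discrete action symbols with dense vectors, letting the predictor generalize across actions that occur in similar contexts.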

    Bioinformatics Methods For Studying Intra-Host and Inter-Host Evolution Of Highly Mutable Viruses

    Reproducibility and robustness of genomic tools are two important factors in assessing the reliability of bioinformatics analyses. Assessment based on these criteria requires repeating experiments across lab facilities, which is usually costly and time consuming. In this study we propose methods that generate computational replicates, allowing the reproducibility of genomic tools to be assessed. We analyzed three different groups of genomic tools: DNA-seq read alignment tools, structural variant (SV) detection tools and RNA-seq gene expression quantification tools. We tested these tools with different technical replicate data. We observed that while some tools were impacted by the technical replicate data, others remained robust. We also observed the importance of the choice of read alignment tool for SV detection. On the other hand, we found that the RNA-seq quantification tools we chose (Kallisto and Salmon) were not affected by the shuffled data but were affected by reverse-complement data. Using these findings, our proposed method may help biomedical communities advise on the robustness and reproducibility of genomic tools and help them choose the most appropriate tools for their needs. Furthermore, this study gives genomic tool developers insight into the importance of a good balance between technical improvements and reliable results
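    The two computational replicate types the abstract mentions, shuffled reads and reverse-complemented reads, are straightforward to generate. The helper below is a hypothetical sketch of that idea, not the study's actual code, and operates on plain read strings rather than FASTQ records.

    ```python
    import random

    _COMP = str.maketrans("ACGTN", "TGCAN")

    def reverse_complement(read):
        """Reverse complement of a DNA read."""
        return read.translate(_COMP)[::-1]

    def technical_replicates(reads, seed=0):
        """Two computational replicates of a read set: a shuffled copy (same
        reads, permuted order) and a reverse-complemented copy."""
        shuffled = list(reads)
        random.Random(seed).shuffle(shuffled)
        revcomp = [reverse_complement(r) for r in reads]
        return shuffled, revcomp

    reads = ["AACG", "TTGA", "GGCT"]
    shuffled, revcomp = technical_replicates(reads)
    print(sorted(shuffled) == sorted(reads))  # True: same reads, new order
    print(revcomp[0])                         # CGTT
    ```

    A robust tool should report (near-)identical alignments or quantifications on the shuffled replicate, since read order carries no biological information; sensitivity to the reverse-complement replicate reveals strand-handling assumptions.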

    A Survey on Graph Representation Learning Methods

    Graph representation learning has been a very active research area in recent years. The goal of graph representation learning is to generate graph representation vectors that capture the structure and features of large graphs accurately. This is especially important because the quality of the graph representation vectors affects their performance in downstream tasks such as node classification, link prediction and anomaly detection. Many techniques have been proposed for generating effective graph representation vectors. The two most prevalent categories of graph representation learning are graph embedding methods that do not use graph neural networks (GNNs), which we denote non-GNN-based graph embedding methods, and GNN-based methods. Non-GNN graph embedding methods are based on techniques such as random walks, temporal point processes and neural network learning methods. GNN-based methods, on the other hand, apply deep learning to graph data. In this survey, we provide an overview of these two categories and cover the current state-of-the-art methods for both static and dynamic graphs. Finally, we explore some open and ongoing research directions for future work
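    The random-walk family of non-GNN embedding methods starts by sampling truncated walks over the graph; these walks form the "corpus" that a DeepWalk-style method then feeds to a skip-gram model. The sketch below shows only the walk-sampling step, on a hypothetical toy graph; the skip-gram training itself is omitted.

    ```python
    import random

    def random_walks(adj, num_walks=2, walk_len=4, seed=0):
        """Sample truncated random walks from every node, num_walks times."""
        rng = random.Random(seed)
        walks = []
        for _ in range(num_walks):
            for start in adj:
                walk = [start]
                while len(walk) < walk_len:
                    nbrs = adj[walk[-1]]
                    if not nbrs:
                        break
                    walk.append(rng.choice(nbrs))
                walks.append(walk)
        return walks

    # Hypothetical toy graph as an adjacency-list dict.
    adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
    walks = random_walks(adj)
    print(len(walks))  # 8 walks: num_walks passes over 4 start nodes
    ```

    Treating each walk as a "sentence" of node IDs, skip-gram then places nodes that co-occur on walks near each other in the embedding space, which is what makes the resulting vectors useful for node classification and link prediction.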