
    Modeling Persistent Trends in Distributions

    We present a nonparametric framework to model a short sequence of probability distributions that vary due both to underlying effects of sequential progression and to confounding noise. To distinguish between these two types of variation and estimate the sequential-progression effects, our approach leverages an assumption that these effects follow a persistent trend. This work is motivated by the recent rise of single-cell RNA-sequencing experiments over a brief time course, which aim to identify genes relevant to the progression of a particular biological process across diverse cell populations. While classical statistical tools focus on scalar-response regression or order-agnostic differences between distributions, it is desirable in this setting to consider both the full distributions and the structure imposed by their ordering. We introduce a new regression model for ordinal covariates where responses are univariate distributions and the underlying relationship reflects consistent changes in the distributions over increasing levels of the covariate. This concept is formalized as a "trend" in distributions, which we define as an evolution that is linear under the Wasserstein metric. Implemented via a fast alternating projections algorithm, our method exhibits numerous strengths in simulations and analyses of single-cell gene expression data. Comment: To appear in: Journal of the American Statistical Association
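
    The Wasserstein-linear notion of a trend has a concrete form for univariate distributions: the Wasserstein-2 geodesic between two such distributions linearly interpolates their quantile functions, so a linear trend corresponds to quantile functions that change linearly with the ordinal index. The sketch below illustrates that idea by regressing empirical quantiles on the time index; it is only a simplified illustration under that assumption, not the paper's alternating-projections estimator, and the fitted quantile curves are not forced to remain monotone.

        import numpy as np

        def fit_wasserstein_linear_trend(samples_by_time, qs=np.linspace(0.01, 0.99, 99)):
            """Fit a linear-in-index trend to a short sequence of univariate samples.

            For 1-D distributions, linearity under the Wasserstein-2 metric corresponds
            to quantile functions that change linearly with the ordinal index, so each
            empirical quantile is regressed on the index. This is only a simplified
            sketch of the idea, not the paper's alternating-projections estimator.
            """
            t = np.arange(len(samples_by_time), dtype=float)
            Q = np.array([np.quantile(s, qs) for s in samples_by_time])  # (T, n_quantiles)
            X = np.column_stack([np.ones_like(t), t])
            coef, *_ = np.linalg.lstsq(X, Q, rcond=None)                 # (2, n_quantiles)
            return qs, coef  # coef[1] is the per-quantile rate of change

        # Toy example: distributions whose mean drifts upward over four time points.
        rng = np.random.default_rng(0)
        samples = [rng.normal(loc=0.5 * k, scale=1.0, size=200) for k in range(4)]
        qs, coef = fit_wasserstein_linear_trend(samples)
        print("average quantile slope:", coef[1].mean())  # close to the true drift of 0.5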

    On cross-domain social semantic learning

    Approximately 2.4 billion people are now connected to the Internet, generating massive amounts of data through laptops, mobile phones, sensors and other electronic devices or gadgets. Not surprisingly, then, ninety percent of the world's digital data was created in the last two years. This massive explosion of data provides a tremendous opportunity to study, model and improve the conceptual and physical systems from which the data is produced. It also permits scientists to test pre-existing hypotheses in various fields with large-scale experimental evidence. Thus, developing computational algorithms that automatically explore this data is the holy grail of the current generation of computer scientists. Making sense of this data algorithmically can be a complex process, specifically for two reasons. Firstly, the data is generated by different devices, capturing different aspects of information, and resides in different web resources/platforms on the Internet. Therefore, even if two pieces of data bear singular conceptual similarity, their generation, format and domain of existence on the web can make them seem considerably dissimilar. Secondly, since humans are social creatures, the data often possesses inherent but murky correlations, primarily caused by direct or indirect social interactions. This drastically alters what algorithms must now achieve, necessitating intelligent comprehension of the underlying social nature and semantic contexts within the disparate domain data and a quantifiable way of transferring knowledge gained from one domain to another. Finally, the data is often encountered as a stream and not as static pages on the Internet; therefore, we must learn, and re-learn, as the stream propagates. The main objective of this dissertation is to develop learning algorithms that can identify specific patterns in one domain of data which can consequently augment predictive performance in another domain. The research explores the existence of specific data domains which can function in synergy with one another and, more importantly, proposes models to quantify the synergetic information transfer among such domains. We include large-scale data from various domains in our study: social media data from Twitter, multimedia video data from YouTube, video search query data from Bing Videos, natural-language search queries from the web, Internet resources in the form of web logs (blogs), and spatio-temporal social trends from Twitter. Our work presents a series of solutions to address the key challenges in cross-domain learning, particularly in the field of social and semantic data. We propose the concept of bridging media from disparate sources by building a common latent topic space, which represents one of the first attempts toward answering sociological problems using cross-domain (social) media. This allows information transfer between social and non-social domains, fostering real-time socially relevant applications. We also engineer a concept network from the semantic web, called semNet, that can assist in identifying concept relations and modeling information granularity for robust natural-language search. Further, by studying spatio-temporal patterns in this data, we can discover categorical concepts that stimulate collective attention within user groups. Includes bibliographical references (pages 210-214)
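
    One way to picture the "common latent topic space" used to bridge domains: fit a single topic model over a vocabulary shared by both corpora, so that documents from either side live in the same low-dimensional topic coordinates and can be compared directly. The sketch below does this with TF-IDF plus NMF; the toy corpora, topic count, and cosine comparison are illustrative placeholders rather than the dissertation's actual models (which also involve semNet and streaming updates).

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import NMF

        # Two tiny, hypothetical corpora standing in for the social and video domains.
        tweets = ["new phone camera is amazing", "loving the concert tonight"]
        video_titles = ["smartphone camera review", "live concert highlights"]

        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(tweets + video_titles)   # one shared vocabulary
        topics = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(X)

        social, video = topics[: len(tweets)], topics[len(tweets):]

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        # Cross-domain similarity matrix in the shared topic space.
        print([[round(cosine(s, v), 2) for v in video] for s in social])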

    Rethinking Privacy and Security Mechanisms in Online Social Networks

    With billions of users, Online Social Networks (OSNs) are among the largest-scale communication applications on the Internet. OSNs enable users to easily access local and worldwide news, share information publicly, and interact with friends. On the negative side, OSNs are also abused by spammers to distribute ads or malicious information such as scams and fraud, and even to manipulate public political opinion. Having achieved significant commercial success with a large amount of user information, OSNs do treat the security and privacy of their users seriously and provide several mechanisms to reinforce account security and information privacy. However, the efficacy of those measures is either not thoroughly validated or needs improvement. In light of cyber criminals and potential privacy threats on OSNs, this dissertation focuses on evaluating and improving OSN user privacy configurations, account security protection mechanisms, and trending-topic security. We first examine the effectiveness of OSN privacy settings in protecting user privacy. For each privacy configuration, we propose a corresponding scheme to reveal the target user's basic profile and connection information, starting from some leaked connections on the user's homepage. Based on the dataset we collected on Facebook, we calculate the privacy exposure in each privacy setting type and measure the accuracy of our privacy inference schemes with different amounts of public information. The evaluation results show that (1) a user's private basic profile can be inferred with high accuracy and (2) a significant portion of connections can be revealed based on even a small number of directly leaked connections. Secondly, we propose a behavioral-profile-based method to detect compromised OSN user accounts in a timely manner. Specifically, we propose eight behavioral features to portray a user's social behavior; a user's statistical distributions of those feature values comprise its behavioral profile. Based on the sample data we collected from Facebook, we observe that each user's activities are highly likely to conform to its behavioral profile while two different users' profiles tend to diverge from each other, which can be employed for compromise detection. The evaluation results show that the more complete and accurate a user's behavioral profile, the more accurately compromise can be detected. Finally, we investigate the manipulation of OSN trending topics. Based on the dataset we collected from Twitter, we reveal the manipulation of trending and a suspected spamming infrastructure. We then measure how accurately five factors (popularity, coverage, transmission, potential coverage, and reputation) can predict trending using an SVM classifier. We further study the interaction patterns between authenticated accounts and malicious accounts in trending. Lastly, we demonstrate through simulation the threats that compromised accounts and Sybil accounts pose to trending, and discuss countermeasures against trending manipulation
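
    A minimal sketch of the behavioral-profile idea, under stated assumptions: the abstract does not enumerate the eight behavioral features, so the two used here (posting hour and post length) are hypothetical stand-ins, and the log-likelihood scoring rule is a simple placeholder for the dissertation's detection method. Each user's profile is a smoothed histogram per feature, and new activity scoring well below the profile is flagged as suspicious.

        import numpy as np

        def build_profile(hours, lengths, length_bins=(0, 20, 50, 100, 200, 1000)):
            """Build a per-user profile as smoothed histograms of two placeholder features."""
            hour_hist = np.bincount(hours, minlength=24) + 1.0          # Laplace smoothing
            len_hist = np.histogram(lengths, bins=length_bins)[0] + 1.0
            return hour_hist / hour_hist.sum(), len_hist / len_hist.sum(), length_bins

        def score(profile, hours, lengths):
            """Average log-likelihood of new activity under the profile; low => suspicious."""
            hour_p, len_p, bins = profile
            len_idx = np.clip(np.digitize(lengths, bins) - 1, 0, len(len_p) - 1)
            return (np.log(hour_p[hours]) + np.log(len_p[len_idx])).mean()

        rng = np.random.default_rng(1)
        profile = build_profile(rng.integers(8, 23, 500), rng.integers(10, 120, 500))
        print("own activity  :", round(score(profile, rng.integers(8, 23, 20), rng.integers(10, 120, 20)), 2))
        print("odd-hours spam:", round(score(profile, rng.integers(0, 5, 20), rng.integers(150, 400, 20)), 2))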

    Predicting Successful Memes using Network and Community Structure

    We investigate the predictability of successful memes using their early spreading patterns in the underlying social networks. We propose and analyze a comprehensive set of features and develop an accurate model to predict the future popularity of a meme given its early spreading patterns. Our paper provides the first comprehensive comparison of existing predictive frameworks. We categorize our features into three groups: influence of early adopters, community concentration, and characteristics of the adoption time series. We find that features based on community structure are the most powerful predictors of future success. We also find that the early popularity of a meme is not a good predictor of its future popularity, contrary to common belief. Our methods outperform other approaches, particularly in the task of detecting very popular or unpopular memes. Comment: 10 pages, 6 figures, 2 tables. Proceedings of the 8th AAAI Intl. Conf. on Weblogs and Social Media (ICWSM 2014)
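
    One family of community-structure features can be illustrated directly: how concentrated a meme's early adopters are within network communities. The sketch below computes a few such summary statistics (number of communities reached, entropy of the adopter distribution over communities, and the largest community's share); the community labels and example memes are hypothetical, and the paper's full feature set and classifier are not reproduced here.

        import math
        from collections import Counter

        def community_concentration(adopter_communities):
            """Summarize how concentrated early adopters are across communities."""
            counts = Counter(adopter_communities)
            n = sum(counts.values())
            probs = [c / n for c in counts.values()]
            return {
                "num_communities": len(counts),
                "entropy": -sum(p * math.log(p) for p in probs),  # low = concentrated
                "top_community_share": max(probs),
            }

        # Early adopters of two memes, labelled by the community each adopter belongs to.
        spreads_widely = ["c1", "c2", "c3", "c1", "c4", "c5"]   # reaches many communities
        stays_local    = ["c1", "c1", "c1", "c2", "c1", "c1"]   # trapped in one community
        print(community_concentration(spreads_widely))
        print(community_concentration(stays_local))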

    Event detection, tracking, and visualization in Twitter: a mention-anomaly-based approach

    The ever-growing number of people using Twitter makes it a valuable source of timely information. However, detecting events in Twitter is a difficult task, because tweets that report interesting events are overwhelmed by a large volume of tweets on unrelated topics. Existing methods focus on the textual content of tweets and ignore the social aspect of Twitter. In this paper we propose MABED (i.e. mention-anomaly-based event detection), a novel statistical method that relies solely on tweets and leverages the creation frequency of dynamic links (i.e. mentions) that users insert in tweets to detect significant events and estimate the magnitude of their impact over the crowd. MABED also differs from the literature in that it dynamically estimates the period of time during which each event is discussed, rather than assuming a predefined fixed duration for all events. The experiments we conducted on both English and French Twitter data show that the mention-anomaly-based approach leads to more accurate event detection and improved robustness in the presence of noisy Twitter content. Qualitatively speaking, we find that MABED helps with the interpretation of detected events by providing clear textual descriptions and precise temporal descriptions. We also show how MABED can help in understanding users' interests. Furthermore, we describe three visualizations designed to facilitate efficient exploration of the detected events. Comment: 17 pages
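
    The core intuition can be sketched in a few lines: for a candidate term, count how many tweets containing it also create a mention (@user) in each time window, and flag windows where that count far exceeds what the term's overall mention rate would predict. The scoring rule below is a simple standardized-deviation placeholder, not MABED's actual impact-magnitude estimate or its dynamic estimation of event duration.

        import numpy as np

        def mention_anomaly(tweets_per_window, mentions_per_window):
            """Per-window anomaly of mention-carrying tweets for one term.

            'expected' assumes mentions follow the term's global mention rate; the
            score is the standardized excess over that expectation (a placeholder
            for MABED's anomaly measure).
            """
            tweets = np.asarray(tweets_per_window, dtype=float)
            mentions = np.asarray(mentions_per_window, dtype=float)
            expected = tweets * (mentions.sum() / tweets.sum())
            return (mentions - expected) / np.sqrt(expected + 1e-9)

        # Hourly counts for one term: a burst of mention-carrying tweets in window 3.
        tweets_w   = [40, 45, 42, 120, 50, 44]
        mentions_w = [ 5,  6,  5,  70,  7,  6]
        print(np.round(mention_anomaly(tweets_w, mentions_w), 1))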

    On-the-fly tracing for data-centric computing : parallelization, workflow and applications

    As data-centric computing becomes the trend in science and engineering, more and more hardware systems, as well as middleware frameworks, are emerging to handle the intensive computations associated with big data. At the programming level, it is crucial to have corresponding programming paradigms for dealing with big data. Although MapReduce is now a well-known programming model for data-centric computing, where parallelization is completely replaced by partitioning the computing task through data, not all programs, particularly those using statistical computing and data mining algorithms with interdependence, can be re-factorized in such a fashion. On the other hand, many traditional automatic parallelization methods put an emphasis on formalism and may not achieve optimal performance with the given limited computing resources. In this work we propose a cross-platform programming paradigm, called "on-the-fly data tracing", to provide source-to-source transformation, where the same framework also provides the functionality of workflow optimization on larger applications. Using a big-data approximation, computations related to large-scale data input are identified in the code and workflow, and a simplified core dependence graph is built based on the computational load, taking big data into account. The code can then be partitioned into sections for efficient parallelization; at the workflow level, optimization can be performed by adjusting the scheduling for big-data considerations, including the I/O performance of the machine. Regarding each unit in both source code and workflow as a model, this framework enables model-based parallel programming that matches the available computing resources. The dissertation presents the techniques used in model-based parallel programming, the design of the software framework for both parallelization and workflow optimization, and its implementations in multiple programming languages. The following experiments are then performed to validate the framework: i) benchmarking of parallelization speed-up using typical examples in data analysis and machine learning (e.g., naive Bayes, k-means), and ii) three real-world applications in data-centric computing that illustrate the framework's efficiency: pattern detection from hurricane and storm-surge simulations, road traffic flow prediction, and text mining from social media data. The applications illustrate how to build scalable workflows with the framework, along with the resulting performance enhancements
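
    The "simplified core dependence graph" can be pictured as a small DAG of code or workflow units connected by data dependences, where units whose dependences are all satisfied can run in parallel. The sketch below builds such a graph and schedules it in greedy waves; the unit names and the wave scheduler are illustrative placeholders, and the framework's big-data weighting of nodes and I/O-aware scheduling are omitted.

        deps = {                       # unit -> units it depends on (hypothetical workflow)
            "load": [],
            "clean": ["load"],
            "stats": ["clean"],
            "train": ["clean"],
            "report": ["stats", "train"],
        }

        def parallel_waves(deps):
            """Group units into waves; all units in a wave can execute in parallel."""
            remaining, done, waves = dict(deps), set(), []
            while remaining:
                ready = [u for u, d in remaining.items() if set(d) <= done]
                if not ready:
                    raise ValueError("cyclic dependence graph")
                waves.append(ready)
                done.update(ready)
                for u in ready:
                    del remaining[u]
            return waves

        print(parallel_waves(deps))    # [['load'], ['clean'], ['stats', 'train'], ['report']]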

    Application of Lidar to 3D Structural Mapping

    The rugged, densely forested terrain of the West Virginia Appalachian Valley and Ridge Province has made it difficult for field-based studies to agree on the structure of the highly deformed Silurian-Devonian cover strata. In this study, we demonstrate a 3D approach to geologic mapping utilizing the structural information revealed in a "bare-earth" 1-m Lidar DEM of the Smoke Hole Canyon. The completed 3D map was integrated with kinematic forward modeling carried out in MOVE™ to provide information on the parameters required to form the major structures observed. Additionally, land surface attributes generated using geomorphometric analysis of the Lidar allowed for better mapping of smaller-scale structures and specific outcrops. Our kinematic reconstructions and 3D models show that the Cave Mountain Anticline can be produced with Trishear-style deformation for fault-propagation folds. Also, the depth to detachment of the Cave Mountain thrust shallows along strike to the north and south, indicating that lateral ramps are required from the Martinsburg to the Juniata Formation. Additionally, steeper backlimb dips indicate that the thrust ramp angle increases from 30° to 45° moving southward. To the north, the Cave Mountain Anticline splits into two, indicating that the main thrust plane is branching at depth, resulting in an imbricated-thrust geometry. Kinematic reconstructions of the shorter-wavelength folds west of Cave Mountain show that they can be modeled using detachment folding algorithms; however, they must form above a shallower detachment than previously interpreted, within the shales of the Silurian Rose Hill Formation
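
    As a small illustration of the kind of geomorphometric derivative used to pick out structural features on a bare-earth Lidar DEM, the sketch below computes slope in degrees from elevation gradients. The synthetic DEM and 1-m cell size are placeholders; an actual workflow would read the Lidar-derived raster (for example with a GIS package) and compute additional attributes beyond slope.

        import numpy as np

        def slope_degrees(dem, cell_size=1.0):
            """Slope (degrees) from an elevation grid, assuming a square cell size."""
            dz_dy, dz_dx = np.gradient(dem, cell_size)
            return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

        # Synthetic ridge-like surface standing in for a real 1-m Lidar DEM tile.
        x, y = np.meshgrid(np.arange(200), np.arange(200))
        dem = 100.0 - 0.02 * (x - 100) ** 2
        print("max slope (deg):", round(slope_degrees(dem).max(), 1))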