
    Inferring Network Topology from Complex Dynamics

    Inferring network topology from dynamical observations is a fundamental problem pervading research on complex systems. Here, we present a simple, direct method to infer the structural connection topology of a network, given an observation of one collective dynamical trajectory. The general theoretical framework is applicable to arbitrary network dynamical systems described by ordinary differential equations. No interference (external driving) is required, and the type of dynamics is not restricted in any way. In particular, the observed dynamics may be arbitrarily complex: stationary, invariant, or transient; synchronous or asynchronous; and chaotic or periodic. Presupposing knowledge of the functional form of the dynamical units and of the coupling functions between them, we present an analytical solution to the inverse problem of finding the network topology. Robust reconstruction is achieved from any sufficiently long generic observation of the system. We extend our method to simultaneously reconstruct both the entire network topology and all parameters appearing linearly in the system's equations of motion. Reconstruction of network topology and system parameters is viable even in the presence of substantial external noise. Comment: 11 pages, 4 figures
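    Since the coupling topology enters the equations of motion linearly once the unit and coupling functions are known, the inverse problem reduces to one linear regression per node. The following toy sketch (the dynamics, function choices, and parameter values are illustrative assumptions, not the paper's setup) recovers a random coupling matrix from a single simulated trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy system: diffusive-style coupling on a random network,
#   dx_i/dt = -x_i + sum_j J_ij * tanh(x_j)
n = 5
J_true = rng.random((n, n)) * (rng.random((n, n)) < 0.4)
np.fill_diagonal(J_true, 0.0)

# Simulate one collective trajectory with explicit Euler.
dt, steps = 1e-2, 2000
x = rng.standard_normal(n)
traj = np.empty((steps, n))
for t in range(steps):
    traj[t] = x
    x = x + dt * (-x + J_true @ np.tanh(x))

# Inverse problem: with f(x_i) = -x_i and g = tanh known, each row of J
# enters linearly. Estimate dx/dt by finite differences and solve a
# least-squares problem per node.
dxdt = (traj[1:] - traj[:-1]) / dt   # finite-difference derivatives
G = np.tanh(traj[:-1])               # regressors g(x_j)
rhs = dxdt + traj[:-1]               # move the known local term to the left
J_est, *_ = np.linalg.lstsq(G, rhs, rcond=None)
J_est = J_est.T                      # row i holds node i's incoming couplings

print(np.max(np.abs(J_est - J_true)))  # reconstruction error (small)
```

    A generic transient already makes the regressor matrix well-conditioned here; no external driving is needed, consistent with the abstract's claim.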

    Assembling a sociology of numbers


    The influence of date of flowering on certain seed and fiber properties of pope cotton

    This study was undertaken to investigate the effect of date of flowering, with subsequent attendant environmental conditions, on certain seed and fiber properties.

    Task irrelevant external cues can influence language selection in voluntary object naming: evidence from Hindi-English bilinguals

    We examined whether external cues, such as other agents' actions, can influence the choice of language during voluntary and cued object naming in bilinguals across three experiments. Hindi–English bilinguals first saw a cartoon waving at a color patch. They were then asked either to name a picture in the language of their choice (voluntary block) or to name it in the instructed language (cued block). The colors waved at by the cartoon were also the colors used as language cues (Hindi or English). We compared the influence of the cartoon's choice of color on naming when speakers had to indicate their choice explicitly before naming (Experiment 1) as opposed to when they named directly on seeing the pictures (Experiments 2 and 3). Results showed that participants chose the language indicated by the cartoon a greater number of times (Experiments 1 and 3). Speakers also switched to the language primed by the cartoon significantly more often (Experiments 1 and 2). These results suggest that choices leading to voluntary action, as in the case of object naming, can be influenced significantly by external non-linguistic cues. Importantly, these symbolic influences can operate even when other agents are merely indicating their choices and are not interlocutors in bilingual communication.

    Hybrid Models with Deep and Invertible Features

    We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow). An attractive property of our model is that both p(features), the density of the features, and p(targets | features), the predictive distribution, can be computed exactly in a single feed-forward pass. We show that our hybrid model, despite the invertibility constraints, achieves similar accuracy to purely predictive models. Moreover, the generative component remains a good model of the input features despite the hybrid optimization objective. This offers additional capabilities such as detection of out-of-distribution inputs and enabling semi-supervised learning. The availability of the exact joint density p(targets, features) also allows us to compute many quantities readily, making our hybrid model a useful building block for downstream applications of probabilistic deep learning. Comment: ICML 2019
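    A minimal sketch of the idea, assuming a toy elementwise affine flow and a logistic head (the paper uses deep normalizing flows; all parameter values below are hypothetical stand-ins): one forward pass yields both the exact feature density, via the change-of-variables formula, and the predictive distribution.

```python
import numpy as np

# Hypothetical "hybrid model": an elementwise affine flow z = (x - b) / s
# (invertible, with a tractable Jacobian) feeding a logistic-regression head.
s = np.array([2.0, 0.5])   # flow scales (assumed fitted)
b = np.array([1.0, -1.0])  # flow shifts (assumed fitted)
w = np.array([1.5, -2.0])  # classifier weights on the features z
c = 0.1                    # classifier bias

def forward(x):
    """One feed-forward pass returns BOTH quantities the abstract mentions:
    log p(x) by change of variables under a standard-normal base distribution,
    and p(y=1 | x) from the linear head on the same features."""
    z = (x - b) / s
    # log p(x) = log N(z; 0, I) + log |det dz/dx|, with det dz/dx = prod(1/s)
    log_px = -0.5 * np.sum(z**2 + np.log(2 * np.pi)) - np.sum(np.log(s))
    p_y = 1.0 / (1.0 + np.exp(-(w @ z + c)))
    return log_px, p_y

log_px, p_y = forward(np.array([0.5, 0.3]))
print(log_px, p_y)
```

    Because the flow is invertible, no density information is lost in computing the features, which is what makes the exact joint p(targets, features) available.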

    Research into polymeric insulating materials for high voltage outdoor insulators.


    Do Deep Generative Models Know What They Don't Know?

    A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence, as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper, we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus our analysis on flow-based generative models in particular, since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood. Comment: ICLR 2019
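    The location-and-variance explanation can be illustrated with a deliberately simple density model (a diagonal Gaussian, not the paper's flow models): a data set with smaller variance that sits inside the bulk of the training distribution receives higher average likelihood than held-out in-distribution data, mirroring the CIFAR-10/SVHN finding.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins: broad "in-distribution" data vs. a narrower "OOD" set
# concentrated near the same center (dimensions chosen arbitrarily).
d = 100
in_train = rng.normal(0.0, 1.0, size=(5000, d))   # plays the role of CIFAR-10
ood      = rng.normal(0.0, 0.5, size=(5000, d))   # plays the role of SVHN

# Fit a diagonal-Gaussian density model to the training data.
mu = in_train.mean(axis=0)
var = in_train.var(axis=0)

def avg_loglik(x):
    """Average log-likelihood of rows of x under the fitted Gaussian."""
    return np.mean(-0.5 * ((x - mu) ** 2 / var
                           + np.log(2 * np.pi * var)).sum(axis=1))

in_test = rng.normal(0.0, 1.0, size=(5000, d))
print(avg_loglik(ood) > avg_loglik(in_test))  # True: OOD scores higher
```

    Per dimension, the expected log-likelihood is roughly -0.5(0.25 + log 2π) for the narrow set versus -0.5(1 + log 2π) for held-out training-like data, so the gap grows linearly with dimension; this is the location/variance effect, not a flaw in likelihood computation.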

    ArguCast: a system for online multi-forecasting with gradual argumentation

    Judgmental forecasting is a form of forecasting which employs (human) users to make predictions about specified future events. Judgmental forecasting has been shown to perform better than quantitative methods for forecasting, e.g. when historical data is unavailable or causal reasoning is needed. However, it has a number of limitations, arising from users' irrationality and cognitive biases. To mitigate against these phenomena, we leverage computational argumentation, a field which excels in the representation and resolution of conflicting knowledge and human-like reasoning, and propose novel ArguCast frameworks (ACFs) and the novel online system ArguCast, integrating ACFs. ACFs and ArguCast accommodate multi-forecasting by allowing multiple users to debate multiple forecasting predictions simultaneously, each potentially admitting multiple outcomes. Finally, we propose a novel notion of user rationality in ACFs based on votes on arguments in ACFs, allowing the filtering out of irrational opinions before obtaining group forecasting predictions by means commonly used in judgmental forecasting.
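    As a rough, hypothetical sketch of the filtering idea (the actual ACF semantics and rationality notion are defined in the paper), one might discard users whose argument votes are internally inconsistent before averaging the remaining forecasts:

```python
# Hypothetical toy ACF: arguments with an attack relation, users who vote
# on arguments and submit probabilistic forecasts. A user is deemed
# "rational" here (an illustrative criterion, not the paper's) if they
# never upvote both an argument and one of its attackers.
attacks = {("a2", "a1")}            # argument a2 attacks argument a1

votes = {
    "u1": {"a1": 1, "a2": 0},       # consistent
    "u2": {"a1": 1, "a2": 1},       # upvotes an argument and its attacker
    "u3": {"a1": 0, "a2": 1},       # consistent
}
forecasts = {"u1": 0.8, "u2": 0.2, "u3": 0.6}

def is_rational(user_votes):
    """Reject users who upvote both sides of an attack."""
    return not any(user_votes.get(x) == 1 and user_votes.get(y) == 1
                   for (x, y) in attacks)

rational = [u for u, v in votes.items() if is_rational(v)]
group_forecast = sum(forecasts[u] for u in rational) / len(rational)
print(rational, group_forecast)     # ['u1', 'u3'] 0.7
```

    The aggregation step (a plain mean here) is where the "means commonly used in judgmental forecasting" mentioned in the abstract would plug in.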