    A psychoacoustic model of harmonic cadences: a preliminary report

    This report presents a psychoacoustically derived computational model of the perceived distance between any two major or minor triads, the degree of activity created by any given pair of triads, and the cadential effectiveness of three-triad progressions. It also provides statistical analyses of the ratings given by thirty-five participants for the "similarity" and "fit" of triads in a pair, and the "cadential effectiveness" of three-triad progressions. Multiple regressions show that the model provides highly significant predictions of the experimentally obtained ratings. Finally, it is argued that because the model is based upon psychoacoustic axioms, it is likely that the regression equations represent true causal models. As such, the computational model and its associated theory question the plausibility of theoretical approaches to tonality that use only long-term memory and statistical features, as well as those approaches based upon symmetrical geometrical structures such as the torus. It is hoped that the psychoacoustic approach proposed here may herald not only the return of psychoacoustic approaches to tonal music theory, but also the exploration of the tonal possibilities offered by non-standard tunings and non-harmonic timbres.

    Generalized Independent Noise Condition for Estimating Causal Structure with Latent Variables

    We investigate the challenging task of learning causal structure in the presence of latent variables, including locating latent variables and determining their quantity, and identifying causal relationships among both latent and observed variables. To address this, we propose a Generalized Independent Noise (GIN) condition for linear non-Gaussian acyclic causal models that incorporate latent variables, which establishes the independence between a linear combination of certain measured variables and some other measured variables. Specifically, for two observed random vectors Y and Z, GIN holds if and only if ω⊺Y and Z are independent, where ω is a non-zero parameter vector determined by the cross-covariance between Y and Z. We then give necessary and sufficient graphical criteria of the GIN condition in linear non-Gaussian acyclic causal models. Roughly speaking, GIN implies the existence of an exogenous set S relative to the parent set of Y (w.r.t. the causal ordering), such that S d-separates Y from Z. Interestingly, we find that the independent noise condition (i.e., if there is no confounder, causes are independent of the residual derived from regressing the effect on the causes) can be seen as a special case of GIN. With such a connection between GIN and latent causal structures, we further leverage the proposed GIN condition, together with a well-designed search procedure, to efficiently estimate Linear, Non-Gaussian Latent Hierarchical Models (LiNGLaHs), where latent confounders may also be causally related and may even follow a hierarchical structure. We show that the underlying causal structure of a LiNGLaH is identifiable in light of GIN conditions under mild assumptions. Experimental results show the effectiveness of the proposed approach.
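
    The GIN condition described above can be checked numerically. The sketch below is a minimal illustration, not the authors' implementation: it assumes ω can be taken from the left null space of the empirical cross-covariance matrix between Y and Z (via SVD), and it substitutes a crude correlation-with-nonlinear-features check for a proper independence test such as HSIC or KCI. The toy latent-confounder example and all variable names are illustrative only.

        import numpy as np

        def estimate_omega(Y, Z):
            # Empirical cross-covariance between Y (n x dY) and Z (n x dZ).
            Yc = Y - Y.mean(axis=0)
            Zc = Z - Z.mean(axis=0)
            cov_yz = Yc.T @ Zc / (len(Y) - 1)
            # Assumption: take omega as the left-singular vector with the
            # smallest singular value, so that omega^T Cov(Y, Z) is ~ 0.
            U, _, _ = np.linalg.svd(cov_yz)
            return U[:, -1]

        def gin_statistic(Y, Z):
            # Surrogate "noise" term omega^T Y.
            omega = estimate_omega(Y, Z)
            e = (Y - Y.mean(axis=0)) @ omega
            e = (e - e.mean()) / e.std()
            # Crude independence proxy: max absolute correlation between the
            # surrogate noise and a few nonlinear features of Z.
            feats = np.column_stack([Z, Z**2, np.tanh(Z)])
            feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)
            return float(np.max(np.abs(e @ feats) / len(e)))

        rng = np.random.default_rng(0)
        n = 5000
        L = rng.gamma(2.0, 1.0, size=n)                      # non-Gaussian latent confounder
        Y = np.column_stack([2.0 * L, -1.0 * L]) + rng.laplace(size=(n, 2))
        Z = (0.5 * L + rng.laplace(size=n)).reshape(-1, 1)
        print("GIN statistic (near zero suggests GIN holds):", gin_statistic(Y, Z))

    In this toy example ω⊺Y cancels the shared latent confounder, so the statistic should stay close to zero.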

    Identifiable Latent Polynomial Causal Models Through the Lens of Change

    Causal representation learning aims to unveil latent high-level causal representations from observed low-level data. One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability. A recent breakthrough explores identifiability by leveraging the change of causal influences among latent causal variables across multiple environments (Liu et al., 2022). However, this progress rests on the assumption that the causal relationships among latent causal variables adhere strictly to linear Gaussian models. In this paper, we extend the scope of latent causal models to involve nonlinear causal relationships, represented by polynomial models, and general noise distributions conforming to the exponential family. Additionally, we investigate the necessity of imposing changes on all causal parameters and present partial identifiability results when some of them remain unchanged. Further, we propose a novel empirical estimation method, grounded in our theoretical findings, that enables learning consistent latent causal representations. Our experimental results, obtained from both synthetic and real-world data, validate our theoretical contributions concerning identifiability and consistency.
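
    As a concrete, purely illustrative picture of this setting, the sketch below generates data from a two-variable latent causal model in which z2 depends on z1 through a polynomial whose coefficients change across environments, while the observations are a fixed nonlinear mixture of the latents. The specific coefficients, noise distributions, and mixing function are assumptions made for the example, not details from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_environment(coeffs, n=2000):
            # Latent causal model z1 -> z2 with a polynomial mechanism; the
            # coefficients differ per environment. Noise terms are drawn from
            # exponential-family distributions chosen arbitrarily here.
            z1 = rng.normal(size=n)
            z2 = (coeffs[0] + coeffs[1] * z1 + coeffs[2] * z1**2
                  + rng.gamma(shape=2.0, scale=0.5, size=n))
            z = np.column_stack([z1, z2])
            # Fixed (environment-independent) nonlinear mixing to observations.
            A = np.array([[1.0, 0.5], [-0.3, 1.2]])
            y = z @ A.T
            return np.tanh(y) + 0.1 * y

        # Each environment changes only the causal coefficients of z1 -> z2.
        coeff_sets = [[0.0, 1.0, 0.5], [1.0, -0.8, 0.2], [-0.5, 0.3, 1.1]]
        environments = [sample_environment(c) for c in coeff_sets]
        print([env.shape for env in environments])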

    Identification of Nonlinear Latent Hierarchical Models

    Identifying latent variables and causal structures from observational data is essential to many real-world applications involving biological data, medical data, and unstructured data such as images and languages. However, this task can be highly challenging, especially when observed variables are generated by causally related latent variables and the relationships are nonlinear. In this work, we investigate the identification problem for nonlinear latent hierarchical causal models in which observed variables are generated by a set of causally related latent variables, and some latent variables may not have observed children. We show that the identifiability of causal structures and latent variables (up to invertible transformations) can be achieved under mild assumptions: on causal structures, we allow for multiple paths between any pair of variables in the graph, which relaxes latent tree assumptions in prior work; on structural functions, we permit general nonlinearity and multi-dimensional continuous variables, alleviating existing work's parametric assumptions. Specifically, we first develop an identification criterion in the form of novel identifiability guarantees for an elementary latent variable model. Leveraging this criterion, we show that both causal structures and latent variables of the hierarchical model can be identified asymptotically by explicitly constructing an estimation procedure. To the best of our knowledge, our work is the first to establish identifiability guarantees for both causal structures and latent variables in nonlinear latent hierarchical models.
    Comment: NeurIPS 202
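
    For intuition about what such a hierarchy looks like, the toy sketch below simulates a root latent with no directly observed children, two intermediate latents driven nonlinearly by the root, and observed variables generated nonlinearly from the intermediate latents. The graph, functions, and noise levels are assumptions made for illustration, not the paper's formal model class.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 3000

        # Root latent: causally related to the other latents but with no
        # observed children of its own.
        l0 = rng.uniform(-1.0, 1.0, size=n)

        # Intermediate latents, nonlinear functions of the root plus noise.
        l1 = np.sin(2.0 * l0) + 0.3 * rng.normal(size=n)
        l2 = l0**3 - l0 + 0.3 * rng.normal(size=n)

        # Observed variables, nonlinear functions of the intermediate latents.
        x1 = np.tanh(l1) + 0.1 * rng.normal(size=n)
        x2 = l1**2 + 0.5 * l1 + 0.1 * rng.normal(size=n)
        x3 = np.exp(0.5 * l2) + 0.1 * rng.normal(size=n)
        x4 = np.cos(l2) + l2 + 0.1 * rng.normal(size=n)

        X = np.column_stack([x1, x2, x3, x4])   # only the leaf children are measured
        print(X.shape)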

    A Survey on Causal Discovery Methods for Temporal and Non-Temporal Data

    Causal Discovery (CD) is the process of identifying the cause-effect relationships among the variables from data. Over the years, several methods have been developed, primarily based on the statistical properties of data, to uncover the underlying causal mechanism. In this study, we introduce the common terminology of causal discovery and provide a comprehensive discussion of the approaches designed to identify causal edges in different settings. We further discuss some of the benchmark datasets available for evaluating the performance of causal discovery algorithms, tools available for readily performing causal discovery, and the common metrics used to evaluate these methods. Finally, we conclude by presenting the common challenges involved in CD and discussing its applications in multiple areas of interest.
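
    Of the evaluation metrics such surveys typically cover, the structural Hamming distance (SHD) is among the most common. The sketch below is a minimal version for directed graphs given as adjacency matrices; note that conventions differ on whether a reversed edge counts as one error (as here) or two, and the example graphs are made up.

        import numpy as np

        def shd(true_adj, est_adj):
            # Structural Hamming distance between two directed graphs given as
            # binary adjacency matrices (adj[i, j] == 1 means an edge i -> j).
            # Each unordered pair with a missing, extra, or reversed edge
            # contributes one error.
            true_adj = np.asarray(true_adj, dtype=int)
            est_adj = np.asarray(est_adj, dtype=int)
            d = true_adj.shape[0]
            errors = 0
            for i in range(d):
                for j in range(i + 1, d):
                    t = (true_adj[i, j], true_adj[j, i])
                    e = (est_adj[i, j], est_adj[j, i])
                    if t != e:
                        errors += 1
            return errors

        true_adj = [[0, 1, 1],
                    [0, 0, 1],
                    [0, 0, 0]]
        est_adj = [[0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]]            # one edge reversed relative to the truth
        print(shd(true_adj, est_adj))    # 1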

    Linear Causal Disentanglement via Interventions

    Causal disentanglement seeks a representation of data involving latent variables that relate to one another via a causal model. A representation is identifiable if both the latent model and the transformation from latent to observed variables are unique. In this paper, we study observed variables that are a linear transformation of a linear latent causal model. Data from interventions are necessary for identifiability: if one latent variable is missing an intervention, we show that there exist distinct models that cannot be distinguished. Conversely, we show that a single intervention on each latent variable is sufficient for identifiability. Our proof uses a generalization of the RQ decomposition of a matrix that replaces the usual orthogonal and upper triangular conditions with analogues depending on a partial order on the rows of the matrix, with the partial order determined by the latent causal model. We corroborate our theoretical results with a method for causal disentanglement that accurately recovers a latent causal model.
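
    The standard RQ decomposition that the paper generalizes can be computed from an ordinary QR factorization; a small sketch for square matrices, using NumPy only, is shown below. The partial-order analogues of the orthogonality and triangularity conditions described in the abstract are not reproduced here.

        import numpy as np

        def rq(A):
            # RQ decomposition of a square matrix: A = R @ Q with R upper
            # triangular and Q orthogonal, obtained by applying NumPy's QR
            # factorization to a row-reversed, transposed copy of A.
            P = np.eye(A.shape[0])[::-1]      # row-reversal permutation
            Q_tilde, R_tilde = np.linalg.qr((P @ A).T)
            R = P @ R_tilde.T @ P             # upper triangular factor
            Q = P @ Q_tilde.T                 # orthogonal factor
            return R, Q

        A = np.random.default_rng(0).normal(size=(4, 4))
        R, Q = rq(A)
        print(np.allclose(R @ Q, A),            # reconstructs A
              np.allclose(Q @ Q.T, np.eye(4)),  # Q is orthogonal
              np.allclose(R, np.triu(R)))       # R is upper triangular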

    Nonparametric Identifiability of Causal Representations from Unknown Interventions

    We study causal representation learning, the task of inferring latent causal variables and their causal relations from high-dimensional functions ("mixtures") of the variables. Prior work relies on weak supervision, in the form of counterfactual pre- and post-intervention views or temporal structure; places restrictive assumptions, such as linearity, on the mixing function or latent causal model; or requires partial knowledge of the generative process, such as the causal graph or the intervention targets. We instead consider the general setting in which both the causal model and the mixing function are nonparametric. The learning signal takes the form of multiple datasets, or environments, arising from unknown interventions in the underlying causal model. Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities that we show to be irresolvable from interventional data. We study the fundamental setting of two causal variables and prove that the observational distribution and one perfect intervention per node suffice for identifiability, subject to a genericity condition. This condition rules out spurious solutions that involve fine-tuning of the intervened and observational distributions, mirroring similar conditions for nonlinear cause-effect inference. For an arbitrary number of variables, we show that two distinct paired perfect interventions per node guarantee identifiability. Further, we demonstrate that the strengths of causal influences among the latent variables are preserved by all equivalent solutions, rendering the inferred representation appropriate for drawing causal conclusions from new data. Our study provides the first identifiability results for the general nonparametric setting with unknown interventions, and elucidates what is possible and impossible for causal representation learning without more direct supervision.
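
    To make the two-variable setting concrete, the sketch below generates one observational dataset and one dataset per perfect intervention, all pushed through the same fixed nonlinear mixing; a learner would receive only the three mixed datasets, without being told which environment corresponds to which intervention. The mechanisms, noise distributions, and mixing map are illustrative assumptions, not constructions from the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 2000

        def mix(z1, z2):
            # Fixed nonlinear mixing shared by all environments (an arbitrary
            # smooth map chosen for illustration).
            return np.column_stack([np.tanh(z1 + 0.5 * z2) + 0.1 * z1,
                                    z2 + 0.3 * np.sin(z1)])

        def observational():
            z1 = rng.laplace(size=n)
            z2 = np.sin(z1) + 0.5 * rng.laplace(size=n)    # mechanism z1 -> z2
            return mix(z1, z2)

        def do_z1():
            z1 = rng.uniform(-2.0, 2.0, size=n)            # perfect intervention on z1
            z2 = np.sin(z1) + 0.5 * rng.laplace(size=n)    # z2's mechanism unchanged
            return mix(z1, z2)

        def do_z2():
            z1 = rng.laplace(size=n)
            z2 = rng.uniform(-2.0, 2.0, size=n)            # z2 cut off from its parent
            return mix(z1, z2)

        datasets = [observational(), do_z1(), do_z2()]     # intervention targets unknown to the learner
        print([d.shape for d in datasets])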

    A semiotic analysis of the genetic information

    Terms loaded with informational connotations are often employed to refer to genes and their dynamics. Indeed, genes are usually perceived by biologists as basically ‘the carriers of hereditary information.’ Nevertheless, a number of researchers consider such talk inadequate and ‘just metaphorical,’ thus expressing skepticism about the use of the term ‘information’ and its derivatives in biology as a natural science. First, because the meaning of that term in biology is not as precise as it is, for instance, in the mathematical theory of communication. Second, because it seems to refer to a purported semantic property of genes without theoretically clarifying whether any genuinely intrinsic semantics is involved. Biosemiotics, a field that attempts to analyze biological systems as semiotic systems, makes it possible to advance the understanding of the concept of information in biology. From the perspective of Peircean biosemiotics, we develop here an account of genes as signs, including a detailed analysis, not previously carried out in this field of research, of two fundamental processes in the genetic information system (transcription and protein synthesis). Furthermore, we propose an account of information based on Peircean semiotics and apply it to our analysis of transcription and protein synthesis.