
    Modeling Mental Qualities

    Conscious experiences are characterized by mental qualities, such as those involved in seeing red, feeling pain, or smelling cinnamon. The standard framework for modeling mental qualities represents them via points in geometrical spaces, where distances between points inversely correspond to degrees of phenomenal similarity. This paper argues that the standard framework is structurally inadequate and develops a new framework that is more powerful and flexible. The core problem for the standard framework is that it cannot capture precision structure: for example, consider the phenomenal contrast between seeing an object as crimson in foveal vision versus merely as red in peripheral vision. The solution I favor is to model mental qualities using regions, rather than points. I explain how this seemingly simple formal innovation not only provides a natural way of modeling precision, but also yields a variety of further theoretical fruits: it enables us to formulate novel hypotheses about the space and structures of mental qualities, formally differentiate two dimensions of phenomenal similarity, generate a quantitative model of the phenomenal sorites, and define a measure of discriminatory grain. A noteworthy consequence is that the structure of the mental qualities of conscious experiences is fundamentally different from the structure of the perceptible qualities of external objects.
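    To make the contrast between point-based and region-based models concrete, here is a minimal Python sketch. The box representation, the two-dimensional quality space, and the numbers are illustrative assumptions, not the paper's formalism: a region's size stands in for imprecision, and similarity is read off (inversely) from the distance between regions.

```python
import numpy as np

# Minimal sketch (assumed illustration, not the paper's formalism): model a
# mental quality as a region -- here an axis-aligned box -- in a quality space.
# Precision corresponds to region size; similarity is inversely related to
# the distance between regions.

class QualityRegion:
    def __init__(self, lower, upper):
        self.lower = np.asarray(lower, dtype=float)  # per-dimension lower bounds
        self.upper = np.asarray(upper, dtype=float)  # per-dimension upper bounds

    def imprecision(self):
        """Larger regions encode less precise qualities (coarser grain)."""
        return float(np.prod(self.upper - self.lower))

    def distance(self, other):
        """Smallest gap between the two boxes (0 if they overlap)."""
        gap = np.maximum(0.0, np.maximum(self.lower - other.upper,
                                         other.lower - self.upper))
        return float(np.linalg.norm(gap))

# A determinate hue seen foveally (small region) versus the same stimulus seen
# peripherally (large region): roughly the same location, different precision.
crimson_foveal = QualityRegion([0.90, 0.10], [0.92, 0.12])
red_peripheral = QualityRegion([0.80, 0.00], [1.00, 0.30])
print(crimson_foveal.imprecision(), red_peripheral.imprecision())
print(crimson_foveal.distance(red_peripheral))
```

    On this toy picture the foveal and peripheral qualities occupy overlapping locations but differ in region size, which is exactly the kind of precision structure a single point cannot encode.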

    The Microstructure of Experience

    I argue that experiences can have microphenomenal structures, where the macrophenomenal properties we introspect are realized by non-introspectible microphenomenal properties. After explaining what it means to ascribe a microstructure to experience, I defend the thesis against its principal philosophical challenge, discuss how the thesis interacts with other philosophical issues about experience, and consider our prospects for investigating the microphenomenal realm.

    A Framework for Modeling Subgrid Effects for Two-Phase Flows in Porous Media

    In this paper, we study upscaling for two-phase flows in strongly heterogeneous porous media. Upscaling a hyperbolic convection equation is known to be very difficult due to the presence of nonlocal memory effects. Even for a linear hyperbolic equation with a shear velocity field, the upscaled equation involves a nonlocal history-dependent diffusion term, which is not amenable to computation. By performing a systematic multiscale analysis, we derive coupled equations for the average and the fluctuations for the two-phase flow. The homogenized equations for the coupled system are obtained by projecting the fluctuations onto a suitable subspace. This projection corresponds exactly to averaging along streamlines of the flow. Convergence of the multiscale analysis is verified numerically. Moreover, we show how to apply this multiscale analysis to upscale two-phase flows in practical applications.
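    The reduction mentioned above, that the projection onto the chosen subspace amounts to averaging along streamlines, is easiest to see for a shear velocity field, where streamlines are lines of constant y. The minimal numpy sketch below (the synthetic saturation field, grid sizes, and shear setup are assumptions for illustration, not the paper's scheme) splits a fine-scale field into its streamline average and the remaining fluctuation.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's full multiscale analysis):
# for a shear velocity field u = (u1(y), 0), streamlines are lines of constant
# y, so "averaging along streamlines" reduces to averaging over x at each y.

nx, ny = 200, 100
x = np.linspace(0.0, 1.0, nx)
y = np.linspace(0.0, 1.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

# A synthetic fine-scale saturation field oscillating in both directions.
saturation = 0.5 + 0.3 * np.sin(8 * np.pi * X) * np.cos(6 * np.pi * Y)

# Average along each streamline (over x for each fixed y); the averaged part
# feeds the upscaled equation, the fluctuation is what must be modeled.
streamline_average = saturation.mean(axis=0)          # shape (ny,)
fluctuation = saturation - streamline_average[None, :]

print(streamline_average.shape, float(np.abs(fluctuation).max()))
```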

    Powerful sets: a generalisation of binary matroids

    A set $S \subseteq \{0,1\}^E$ of binary vectors, with positions indexed by $E$, is said to be a \textit{powerful code} if, for all $X \subseteq E$, the number of vectors in $S$ that are zero in the positions indexed by $X$ is a power of 2. By treating binary vectors as characteristic vectors of subsets of $E$, we say that a set $S \subseteq 2^E$ of subsets of $E$ is a \textit{powerful set} if the set of characteristic vectors of sets in $S$ is a powerful code. Powerful sets (codes) include cocircuit spaces of binary matroids (equivalently, linear codes over $\mathbb{F}_2$), but much more besides. Our motivation is that, to each powerful set, there is an associated nonnegative-integer-valued rank function (by a construction of Farr), although it does not in general satisfy all the matroid rank axioms. In this paper we investigate the combinatorial properties of powerful sets. We prove fundamental results on special elements (loops, coloops, frames, near-frames, and stars), their associated types of single-element extensions, various ways of combining powerful sets to get new ones, and constructions of nonlinear powerful sets. We show that every powerful set is determined by its clutter of minimal nonzero members. Finally, we show that the number of powerful sets is doubly exponential, and hence that almost all powerful sets are nonlinear. Comment: 19 pages. This work was presented at the 40th Australasian Conference on Combinatorial Mathematics and Combinatorial Computing (40ACCMCC), University of Newcastle, Australia, Dec. 201
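    The defining condition can be checked directly by brute force for small ground sets. The Python sketch below (function names are mine, purely illustrative) tests every subset X of positions and verifies that the count of vectors vanishing on X is a power of 2; the example set is the row space of a small binary matrix, i.e. a linear code over F_2, which the abstract notes is always powerful.

```python
from itertools import combinations

# Minimal sketch of the defining condition (brute force, for small E): a set S
# of binary vectors over positions E is a powerful code if, for every X <= E,
# the number of vectors in S that are zero on all positions in X is a power of 2.

def is_power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0

def is_powerful_code(S, E):
    positions = list(range(E))
    for r in range(E + 1):
        for X in combinations(positions, r):
            count = sum(1 for v in S if all(v[i] == 0 for i in X))
            if not is_power_of_two(count):
                return False
    return True

# Example: the row space over F_2 spanned by 110 and 011 (a linear code,
# hence the cocircuit space of a binary matroid) is a powerful code.
S = [(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1)]
print(is_powerful_code(S, 3))   # True
```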

    Controlling the False Discovery Rate in Astrophysical Data Analysis

    The False Discovery Rate (FDR) is a new statistical procedure to control the number of mistakes made when performing multiple hypothesis tests, i.e. when comparing many data against a given model hypothesis. The key advantage of FDR is that it allows one to a priori control the average fraction of false rejections made (when comparing to the null hypothesis) over the total number of rejections performed. We compare FDR to the standard procedure of rejecting all tests that do not match the null hypothesis above some arbitrarily chosen confidence limit, e.g. 2 sigma, or at the 95% confidence level. When using FDR, we find a similar rate of correct detections, but with significantly fewer false detections. Moreover, the FDR procedure is quick and easy to compute and can be trivially adapted to work with correlated data. The purpose of this paper is to introduce the FDR procedure to the astrophysics community. We illustrate the power of FDR through several astronomical examples, including the detection of features against a smooth one-dimensional function, e.g. seeing the ``baryon wiggles'' in a power spectrum of matter fluctuations, and source pixel detection in imaging data. In this era of large datasets and high precision measurements, FDR provides the means to adaptively control a scientifically meaningful quantity -- the number of false discoveries made when conducting multiple hypothesis tests. Comment: 15 pages, 9 figures. Submitted to A
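    In its basic form the FDR control rule is the Benjamini-Hochberg step-up procedure (assumed here; the paper's astrophysical applications build on it): sort the p-values, find the largest k with p_(k) <= (k/m) * alpha, and reject every test with a p-value at or below that threshold. A minimal Python sketch with synthetic p-values, not any of the paper's data:

```python
import numpy as np

# Minimal sketch of the Benjamini-Hochberg step-up procedure underlying FDR
# control (assumed form). alpha is the target average fraction of false
# rejections among all rejections.

def fdr_threshold(p_values, alpha=0.05):
    """Return the p-value cut: reject every test with p <= the returned value."""
    p = np.sort(np.asarray(p_values, dtype=float))
    m = len(p)
    # Largest k (1-indexed) with p_(k) <= (k / m) * alpha.
    below = np.nonzero(p <= (np.arange(1, m + 1) / m) * alpha)[0]
    return p[below[-1]] if below.size else 0.0

# Compare with a fixed per-test cut (e.g. rejecting at the 95% level, p < 0.05).
rng = np.random.default_rng(0)
p_null = rng.uniform(size=900)                 # tests where the null is true
p_signal = rng.uniform(0.0, 1e-3, size=100)    # genuine detections
p_all = np.concatenate([p_null, p_signal])

cut = fdr_threshold(p_all, alpha=0.05)
print("FDR cut:", cut, "rejections:", int((p_all <= cut).sum()))
print("fixed p < 0.05 rejections:", int((p_all < 0.05).sum()))
```

    With the fixed cut, roughly 5% of the 900 true nulls are rejected in addition to the signals; the adaptive FDR cut keeps nearly all the genuine detections while admitting far fewer false ones.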

    Is Employment Globalizing?

    We investigate the claim that national labor markets have become more globally interconnected in recent decades. We do so by deriving estimates over time of three different notions of interconnection: (i) the share of labor demand that is export induced (i.e., all labor demand created by foreign entities buying products exported by the home country)—we provide estimates for 40 countries; (ii) the share of workers employed in sectors producing tradable goods or services—68 countries; and (iii) the ratio of the number of jobs that are either located in a tradable sector, or that are involved in producing services that are required by these tradable sectors, to all jobs in the economy, which we call the trade-linked employment share—40 countries. Our estimates lead to the conclusion that the evidence of a large increase in the interconnections between national labor markets is far weaker than commonly asserted: levels of interconnectivity, and the direction of changes over time, vary across notions of interconnection and countries. The main reasons for this are labor-displacing productivity growth in tradable sectors of each economy and the diminishing fraction of national labor forces hired into manufacturing jobs worldwide. We also discuss the implications of our results for different policy debates that each of the three measures is associated with: international coordination of macroeconomic policies (export-induced labor demand), currency devaluations (share of workers producing tradables), and education and labor protection (trade-linked share).
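    As a purely illustrative sketch of the three ratios (all numbers below are hypothetical placeholders, not estimates from the paper):

```python
# Illustrative sketch of the three notions of interconnection; every figure
# here is hypothetical and serves only to show how each share is formed.

# (i) export-induced share of labor demand
export_induced_jobs = 4.2e6          # labor demand created by foreign buyers of exports
total_labor_demand = 30.0e6
export_induced_share = export_induced_jobs / total_labor_demand

# (ii) share of workers employed in tradable sectors
tradable_sector_jobs = 7.5e6
total_employment = 30.0e6
tradable_share = tradable_sector_jobs / total_employment

# (iii) trade-linked employment share: tradable-sector jobs plus jobs producing
# services required by those tradable sectors, over all jobs in the economy
upstream_service_jobs = 3.1e6
trade_linked_share = (tradable_sector_jobs + upstream_service_jobs) / total_employment

print(f"{export_induced_share:.1%} {tradable_share:.1%} {trade_linked_share:.1%}")
```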

    Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors

    Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledge base. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 75.8%.
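    A minimal sketch of a neural-tensor-style scoring function in the spirit described above; the exact architecture, dimensions, nonlinearity, and initialization used in the paper may differ, and all parameter names here are illustrative.

```python
import numpy as np

# Minimal sketch of a neural-tensor-style scoring layer (assumed form). Each
# relation R has a bilinear tensor W_R, a standard layer V_R, a bias b_R, and
# output weights u_R; a triple (e1, R, e2) receives a scalar plausibility score.

d, k = 50, 4                                 # entity-vector dimension, tensor slices
rng = np.random.default_rng(0)

W = rng.normal(scale=0.1, size=(k, d, d))    # bilinear tensor, one d x d slice per k
V = rng.normal(scale=0.1, size=(k, 2 * d))   # standard feed-forward weights
b = np.zeros(k)
u = rng.normal(scale=0.1, size=k)

def ntn_score(e1, e2):
    """Score one (e1, relation, e2) triple against one relation's parameters."""
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)        # e1^T W^[1:k] e2
    standard = V @ np.concatenate([e1, e2]) + b
    return float(u @ np.tanh(bilinear + standard))

# Entity vectors would be initialized from unsupervised word vectors, which is
# what lets relations be queried even for entities unseen during training.
e1, e2 = rng.normal(size=d), rng.normal(size=d)
print(ntn_score(e1, e2))
```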