22 research outputs found

    Topics of Thought

    This book concerns mental states such as thinking that Obama is tall, imagining that there will be a climate change catastrophe, knowing that one is not a brain in a vat, or believing that Martina Navratilova is the greatest tennis player ever. Such states are usually understood as having intentionality, that is, as being about things or situations to which the mind is directed. The contents of such states are often taken to be propositions. The book presents a new framework for the logic of thought, so understood: an answer to the question, given that one thinks (believes, knows, etc.) something, what else must one think (ditto) as a matter of logic? This should depend on the propositions which make for the contents of the relevant thoughts. And the book defends the idea that propositions should be individuated hyperintensionally, i.e. not just by the sets of worlds at which they are true (as in standard ‘intensional’ possible worlds semantics), but also by what they are about: their topic or subject matter. Thus, the logic of thought should be ‘topic-sensitive’. After the philosophical foundations have been presented in Chapters 1 and 2, Chapter 3 develops a theory of Topic-Sensitive Intentional Modals (TSIMs): modal operators representing attitude ascriptions, which embed a topicality or subject matter constraint. Subsequent chapters explore applications ranging from mainstream epistemology (dogmatism, scepticism, fallibilism: Chapter 4) to the nature of suppositional thinking and imagination (Chapter 5), conditional belief and belief revision (Chapter 6), framing effects (Chapter 7), and probabilities and indicative conditionals (Chapter 8).
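    As a rough illustration of the kind of clause a topic-sensitive framework uses (the notation below, with a topic-assignment function t, topic inclusion ⊑, and an accessibility relation R, is assumed here for exposition and need not match the book's own formalism), a conditional attitude operator B^φψ ("having accepted φ, one believes ψ") can be evaluated at a world w by two joint conditions:

        % Worlds condition plus topicality condition:
        w \models B^{\varphi}\psi \quad\text{iff}\quad
            \forall w'\,\big(w R_{\varphi} w' \Rightarrow w' \models \psi\big)
            \;\text{ and }\;
            t(\psi) \sqsubseteq t(\varphi)

    The second conjunct is what makes the framework hyperintensional: two propositions true at exactly the same worlds can still differ in topic, and so behave differently under the operator.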

    On the design and implementation of flexible software platforms to facilitate the development of advanced graphics applications

    This thesis presents the design and implementation of a software development platform (ATLAS) which offers tools and methods that greatly simplify the construction of fairly sophisticated applications. It thus allows programmers to include advanced features in their applications with little or no extra information and effort. These features include: the splitting of the application into distinct processes that may be distributed over a network; a powerful configuration and scripting language; several tools, including an input system for easily constructing reasonable interfaces; a flexible journaling mechanism, offering fault tolerance against crashes of processes or communications; and other features designed for graphics applications, such as global data identification (addressing the problem of volatile references and supporting constraint-solving processes) and a uniform but flexible view of inputs allowing many different dialogue modes. These can be seen as related to or overlapping with CORBA or other systems like Horus or Arjuna, but none of them addresses simultaneously all the aspects included in ATLAS; more specifically, none of them offers a standardized input model, a configuration and macro language, a journaling mechanism, or support for constraint solving and parametric design. The contributions of ATLAS lie in showing how all these requirements can be addressed together, and in showing means by which this can be attained with little or no performance cost and without requiring developers to master all these techniques. Finally, the design of the ATLAS journaling system is, to our knowledge, original in solving all of its requirements simultaneously.
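    As a sketch of the journaling idea mentioned above (the class, file, and event names below are illustrative assumptions, not ATLAS's actual API): each input event is appended to a durable log as it is handled, and after a process crash the application state is rebuilt by replaying the log.

        import json, os

        class Journal:
            """Append-only event log used to recover application state after a crash."""

            def __init__(self, path):
                self.path = path

            def append(self, event):
                # One JSON record per line; flush + fsync so the record survives a crash.
                with open(self.path, "a") as f:
                    f.write(json.dumps(event) + "\n")
                    f.flush()
                    os.fsync(f.fileno())

            def replay(self, apply_event, state):
                # Rebuild state by re-applying every journaled event in order.
                if not os.path.exists(self.path):
                    return state
                with open(self.path) as f:
                    for line in f:
                        state = apply_event(state, json.loads(line))
                return state

        # Hypothetical usage: a tiny "scene" whose state is a dict of named objects.
        def apply_event(scene, event):
            if event["op"] == "create":
                scene[event["id"]] = event["params"]
            elif event["op"] == "move":
                scene[event["id"]]["pos"] = event["pos"]
            return scene

        journal = Journal("session.journal")
        journal.append({"op": "create", "id": "cube1", "params": {"pos": [0, 0, 0]}})
        journal.append({"op": "move", "id": "cube1", "pos": [1, 2, 3]})
        scene = journal.replay(apply_event, {})   # after a crash, state is recovered here

    Flushing and syncing on every append trades throughput for durability; a production system would typically batch or group-commit, which is one place where the "little or no performance cost" claim above becomes a design problem in its own right.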

    The process of inference making in reading comprehension: an ERP analysis

    Doctoral dissertation (tese de doutorado), Universidade Federal de Santa Catarina, Centro de Comunicação e Expressão, Programa de Pós-Graduação em Letras/Inglês e Literatura Correspondente.
    Much of recent research on discourse comprehension has centered on readers' ability to construct coherent mental representations of texts. To form a unified representation of a given text, a reader must be able to combine the information presented in the text with his or her background knowledge and, through the generation of inferences, construe meaning that may not be explicitly stated in the text. In this study, the process of inference making by native speakers of English while reading two different types of text was investigated using electroencephalography (EEG). Subjects read narrative and expository paragraphs and judged the plausibility of the final sentence of each four-sentence paragraph with reference to the preceding information. The analysis of the data focused on two event-related brain potential (ERP) components, the N1 and the N400, and on the accuracy of behavioral responses. N400 amplitudes revealed that exposition was more demanding than narration in terms of semantic processing, whereas behavioral data showed that subjects were more prone to generate inferences when reading exposition. Concerning the involvement of the right and left hemispheres in inference making, there were no significant differences in ERP amplitudes, although the right hemisphere showed a tendency toward greater participation when subjects were reading the last sentence of each paragraph and had to judge whether it was coherent with the previous sentences. Overall, this study suggests that the two types of text investigated are processed differently by the brain, as revealed by the nuances shown in the N1 and N400 components across the last two sentences of the paragraphs. Even though it was not possible to delineate a clear picture of the underlying brain processes, given the lack of robust results, this study is offered as one of the first of many steps toward a more complete understanding of the cognitive processes involved in discourse comprehension.
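    The thesis's exact analysis pipeline is not given in the abstract; the following is only a generic sketch of how a mean N400 amplitude is commonly extracted from epoched, baseline-corrected EEG, assuming epochs shaped (trials, channels, samples), a 300-500 ms window, and a 250 Hz sampling rate (the arrays below are simulated stand-ins, not real data).

        import numpy as np

        def mean_amplitude(epochs, sfreq, tmin_epoch, win=(0.300, 0.500)):
            """Mean ERP amplitude per channel in a time window.

            epochs: array (n_trials, n_channels, n_samples), baseline-corrected EEG.
            sfreq: sampling rate in Hz; tmin_epoch: time (s) of the first sample.
            """
            start = int(round((win[0] - tmin_epoch) * sfreq))
            stop = int(round((win[1] - tmin_epoch) * sfreq))
            erp = epochs.mean(axis=0)                 # average over trials -> ERP
            return erp[:, start:stop].mean(axis=1)    # window mean per channel

        # Illustration with simulated data:
        rng = np.random.default_rng(0)
        sfreq, tmin = 250.0, -0.2                      # epochs start 200 ms pre-stimulus
        expository = rng.normal(size=(40, 32, 300))    # 40 trials, 32 channels, 1.2 s epochs
        narrative = rng.normal(size=(40, 32, 300))
        n400_diff = mean_amplitude(expository, sfreq, tmin) - mean_amplitude(narrative, sfreq, tmin)
        print(n400_diff.shape)   # one expository-minus-narrative value per channel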

    Properties grounded in identity

    The topic of this dissertation is essential properties. The aim is to give an explication of the concept of an essential property in terms of the concept of metaphysical grounding. Along the way, the author proves several new results about formal theories of metaphysical grounding and develops a new hyperintensional theory of properties. Chapter 1 is the introduction of the thesis, in which the author motivates the problem of explicating the concept of essential properties and gives adequacy criteria for a successful explication, tracing back to Carnap. The author discusses the orthodox explication of essential properties in terms of metaphysical necessity and Fine's objections to it. He goes on to develop a new ground-theoretic explication of essential properties in an informal way, where the basic idea is that a property is essential to an object if and only if the property is metaphysically grounded in the identity or haecceity of the object. The author argues informally that the new explication provides natural solutions to the problems raised by Fine and makes it the goal of the rest of the dissertation to make the account formally precise. Chapter 2 focuses on axiomatic theories of metaphysical grounding. In particular, the author develops a new formal approach to the relation of partial ground, i.e. the relation of one truth holding partially in virtue of another. The main novelty of the chapter is the use of a grounding predicate rather than an operator to formalize statements of (partial) ground. The author develops the new theory in formal detail as a first-order theory, proves its consistency, and shows that it is a conservative extension of the well-known theory of positive truth. Moreover, the author constructs a concrete model of the theory. He then extends the framework with typed truth predicates, which allow statements about the grounds and truth of statements about the truth of other sentences. This theory, too, the author proves consistent and a conservative extension of the ramified theory of positive truth. A model construction extending the construction for the base theory is also given. Ultimately, the author discards the theory for the purposes of the dissertation, because of technical problems that arise when one tries to develop a satisfactory explication of essential properties in the framework. He leaves further development of the framework for future work and argues that further investigation could lead to interesting and fruitful discussion between formal theorists of truth and metaphysical theorists of grounding. Chapter 3 develops a logic of iterated ground, i.e. relations of ground between statements of ground, using the operator approach. The author first discusses certain conceptual distinctions in the context of metaphysical ground in general and iterated ground in particular. He argues that different conceptions of iterated ground lead to different logics of iterated ground, and goes on to develop the logic of iterated ground based on the idea that if one truth is grounded in others, then it is these grounds that ground the statement of ground itself. This view traces back to remarks by de Rosset and Litland. The logic is developed in formal detail, both syntactically and semantically, and its formal properties are investigated. To conclude the chapter, the author discusses certain problems that arise when trying to prove a completeness result for the logic and sketches a way around them.
    In Chapter 4, the author develops a new hyperintensional theory of properties, which can distinguish in natural, semantic terms between necessarily co-extensional but intuitively distinct properties. The theory is based on the idea that we can individuate properties by means of what the author calls "exemplification criteria" of an object for a property: roughly, the states of affairs that, if they obtain, explain why the object has the property. The author develops this theory both formally and informally and discusses in detail how it achieves a natural distinction between necessarily co-extensional but intuitively distinct properties. Chapter 5 is the conclusion, where the author brings the results of Chapters 3 and 4 to bear on the informal explication of essential properties in terms of metaphysical ground from the introduction. He argues that, together, the logic of iterated ground from Chapter 3 and the hyperintensional property theory of Chapter 4 allow the explication to be made formally precise in a way that satisfies the adequacy criteria for a successful explication laid out by Carnap. The author concludes with a brief discussion of possible ways of extending the results of the dissertation.
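    Schematically, and in notation assumed here only for illustration ('≺' for metaphysical ground, square brackets for facts), the central explication from Chapter 1 and the iterated-ground principle of Chapter 3 can be rendered as follows; the dissertation's own formal languages (a grounding predicate in Chapter 2, an operator in Chapter 3) differ in detail.

        % Ground-theoretic explication of essential properties (informal Chapter 1 idea):
        % P is essential to x iff x's having P is grounded in x's identity (haecceity).
        \mathrm{Ess}(P, x) \;:\leftrightarrow\; [x = x] \prec [Px]

        % Iterated ground, Chapter 3: the grounds of a truth also ground the grounding claim.
        \Gamma \prec \varphi \;\rightarrow\; \Gamma \prec (\Gamma \prec \varphi)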

    Prosody generation for text-to-speech synthesis

    The absence of convincing intonation makes current parametric speech synthesis systems sound dull and lifeless, even when trained on expressive speech data. Typically, these systems use regression techniques to predict the fundamental frequency (F0) frame-by-frame. This approach leads to overly smooth pitch contours and fails to construct an appropriate prosodic structure across the full utterance. In order to capture and reproduce larger-scale pitch patterns, we propose a template-based approach for automatic F0 generation, where per-syllable pitch-contour templates (from a small, automatically learned set) are predicted by a recurrent neural network (RNN). The use of syllable templates mitigates the over-smoothing problem and is able to reproduce pitch patterns observed in the data. The use of an RNN, paired with connectionist temporal classification (CTC), enables the prediction of structure in the pitch contour spanning the entire utterance. This novel F0 prediction system is used alongside separate LSTMs for predicting phone durations and the other acoustic features to construct a complete text-to-speech system. Later, we investigate the benefits of including long-range dependencies in duration prediction at the frame level using uni-directional recurrent neural networks. Since prosody is a supra-segmental property, we consider an alternative approach to intonation generation which exploits long-term dependencies of F0 by effective modelling of linguistic features using recurrent neural networks. For this purpose, we propose a hierarchical encoder-decoder and a multi-resolution parallel encoder, in which the encoder takes word-level and higher-level linguistic features as input and upsamples them to the phone level through a series of hidden layers; this model is integrated into a hybrid system submitted to the Blizzard Challenge workshop. We then highlight some of the issues in current approaches and outline a plan for future directions of investigation along with ongoing work.
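    As an illustration of the template-classification idea (not the thesis's actual code), the sketch below pairs a recurrent network with PyTorch's CTC loss: linguistic features go in step by step, and the network is trained to emit a shorter sequence of pitch-contour template indices drawn from a small inventory. The feature dimensionality, template count, network size, and tensor shapes are assumptions.

        import torch
        import torch.nn as nn

        N_TEMPLATES = 8               # assumed size of the learned pitch-template inventory
        N_CLASSES = N_TEMPLATES + 1   # +1 for the CTC blank symbol (index 0)
        FEAT_DIM = 40                 # assumed linguistic-feature dimensionality

        class TemplatePredictor(nn.Module):
            """Bidirectional LSTM emitting per-step scores over pitch-contour templates."""
            def __init__(self):
                super().__init__()
                self.rnn = nn.LSTM(FEAT_DIM, 64, batch_first=True, bidirectional=True)
                self.out = nn.Linear(2 * 64, N_CLASSES)

            def forward(self, x):                 # x: (batch, time, FEAT_DIM)
                h, _ = self.rnn(x)
                return self.out(h)                # (batch, time, N_CLASSES)

        model = TemplatePredictor()
        ctc = nn.CTCLoss(blank=0)

        # Toy batch: 2 utterances of 50 input steps; targets are template indices (1..8).
        feats = torch.randn(2, 50, FEAT_DIM)
        targets = torch.tensor([1, 3, 3, 2, 5, 4, 2, 7, 1, 6])   # concatenated target sequences
        target_lengths = torch.tensor([6, 4])                     # per-utterance target lengths
        input_lengths = torch.full((2,), 50, dtype=torch.long)

        log_probs = model(feats).log_softmax(dim=-1).transpose(0, 1)  # CTC wants (time, batch, classes)
        loss = ctc(log_probs, targets, input_lengths, target_lengths)
        loss.backward()
        print(float(loss))

    CTC marginalizes over alignments between the long input sequence and the shorter label sequence, so no explicit frame-to-syllable alignment is needed, which is what lets the predicted template sequence structure the contour across the whole utterance.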

    Statistical Foundations of Actuarial Learning and its Applications

    This open access book discusses the statistical modeling of insurance problems, a process which comprises data collection, data analysis and statistical model building to forecast insured events that may happen in the future. It presents the mathematical foundations behind these fundamental statistical concepts and how they can be applied in daily actuarial practice. Statistical modeling has a wide range of applications, and, depending on the application, the theoretical aspects may be weighted differently: here the main focus is on prediction rather than explanation. Starting with a presentation of state-of-the-art actuarial models, such as generalized linear models, the book then dives into modern machine learning tools such as neural networks and text recognition to improve predictive modeling with complex features. Providing practitioners with detailed guidance on how to apply machine learning methods to real-world data sets, and how to interpret the results without losing sight of the mathematical assumptions on which these methods are based, the book can serve as a modern basis for an actuarial education syllabus.
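    As a small illustration of the kind of prediction task described above (the features, portfolio, and data here are invented for the sketch and are not drawn from the book): a Poisson GLM for claim frequency, with policy exposure handled through sample weights in the style of scikit-learn's frequency-modelling examples.

        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        rng = np.random.default_rng(1)

        # Invented portfolio: driver age and vehicle power as features, exposure in policy-years.
        n = 5000
        X = np.column_stack([rng.integers(18, 80, n), rng.integers(4, 12, n)]).astype(float)
        exposure = rng.uniform(0.1, 1.0, n)
        true_rate = np.exp(-2.0 + 0.02 * (X[:, 1] - 7) - 0.01 * (X[:, 0] - 40))
        counts = rng.poisson(true_rate * exposure)

        # Model claim frequency (counts per unit exposure), weighting each policy by its exposure.
        freq = counts / exposure
        glm = PoissonRegressor(alpha=1e-4, max_iter=300).fit(X, freq, sample_weight=exposure)

        expected_claims = glm.predict(X) * exposure   # back to expected counts per policy
        print(glm.coef_, expected_claims[:5])

    The same prediction-first viewpoint carries over when the linear predictor is replaced by a neural network, which is the step the book describes when moving from classical actuarial models to machine learning tools.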