
    Using Ontologies for the Design of Data Warehouses

    Obtaining an implementation of a data warehouse is a complex task that forces designers to acquire wide knowledge of the domain; it therefore requires a high level of expertise and is prone to failure. Based on our experience, we have identified a set of situations encountered in real-world projects in which we believe the use of ontologies would improve several aspects of data warehouse design. The aim of this article is to describe several shortcomings of current data warehouse design approaches and to discuss the benefit of using ontologies to overcome them. This work is a starting point for discussing the suitability of ontologies for data warehouse design.
    Comment: 15 pages, 2 figures
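
    As a rough illustration of how an ontology could drive warehouse design, the sketch below is a minimal, hypothetical example (not the authors' method): it assumes a toy domain ontology expressed as plain Python dictionaries and derives a star-schema proposal from it. All class and table names (Sale, Product, Store, Date) are invented for illustration.

```python
# Minimal sketch: deriving a star-schema proposal from a toy domain ontology.
# The ontology format and all class names are hypothetical illustrations,
# not the approach described in the article.

ontology = {
    "Sale":    {"kind": "event",  "properties": ["amount", "quantity"],
                "related_to": ["Product", "Store", "Date"]},
    "Product": {"kind": "entity", "properties": ["name", "category"]},
    "Store":   {"kind": "entity", "properties": ["city", "region"]},
    "Date":    {"kind": "entity", "properties": ["day", "month", "year"]},
}

def propose_star_schema(onto):
    """Treat event classes as fact tables and the classes they reference as dimensions."""
    schema = {}
    for name, cls in onto.items():
        if cls["kind"] == "event":
            schema[f"fact_{name.lower()}"] = {
                "measures": cls["properties"],
                "dimension_keys": [f"{r.lower()}_id" for r in cls.get("related_to", [])],
            }
        else:
            schema[f"dim_{name.lower()}"] = {"attributes": cls["properties"]}
    return schema

if __name__ == "__main__":
    for table, spec in propose_star_schema(ontology).items():
        print(table, spec)
```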

    On the automaticity of language processing

    People speak and listen to language all the time. Given this high frequency of use, it is often suggested that at least some aspects of language processing are highly overlearned and therefore occur “automatically”. Here we critically examine this suggestion. We first sketch a framework that views automaticity as a set of interrelated features of mental processes and a matter of degree rather than a single feature that is all-or-none. We then apply this framework to language processing. To do so, we carve up the processes involved in language use according to (a) whether language processing takes place in monologue or dialogue, (b) whether the individual is comprehending or producing language, (c) whether the spoken or written modality is used, and (d) the linguistic processing level at which they occur, that is, phonology, the lexicon, syntax, or conceptual processes. This exercise suggests that while conceptual processes are relatively non-automatic (as is usually assumed), there is also considerable evidence that syntactic and lexical lower-level processes are not fully automatic. We close by discussing entrenchment as a set of mechanisms underlying automatization.

    Grammaticalization and grammar

    This paper is concerned with developing Joan Bybee's proposals regarding the nature of grammatical meaning and synthesizing them with Paul Hopper's concept of grammar as emergent. The basic question is this: How much of grammar may be modeled in terms of grammaticalization? In contradistinction to Heine, Claudi & Hünnemeyer (1991), who propose a fairly broad and unconstrained framework for grammaticalization, we try to present a fairly specific and constrained theory of grammaticalization in order to get a more precise idea of the potential and the problems of this approach. Thus, while Heine et al. (1991:25) expand, without discussion, the traditional notion of grammaticalization to the clause level, and even include non-segmental structure (such as word order), we will here adhere to a strictly 'element-bound' view of grammaticalization: where no grammaticalized element exists, there is no grammaticalization. Despite this fairly restricted concept of grammaticalization, we will attempt to corroborate the claim that essential aspects of grammar may be understood and modeled in terms of grammaticalization. The approach is essentially theoretical (practical applications will, hopefully, follow soon), and many issues are only mentioned rather than discussed in detail. The paper presupposes familiarity with the basic facts of grammaticalization and does not present any new facts.

    Abstract Fixpoint Computations with Numerical Acceleration Methods

    Static analysis by abstract interpretation aims at automatically proving properties of computer programs. To do this, an over-approximation of program semantics, defined as the least fixpoint of a system of semantic equations, must be computed. To enforce the convergence of this computation, a widening operator is used, but it may lead to coarse results. We propose a new method to accelerate the computation of this fixpoint by using standard techniques of numerical analysis. Our goal is to automatically and dynamically adapt the widening operator in order to maintain precision.
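
    To make the role of widening concrete, the sketch below shows a standard interval-domain fixpoint iteration for a loop such as `x = 0; while x < 100: x = x + 1`. It illustrates the classic widening operator that such analyses rely on (and whose coarseness motivates the paper), not the numerical acceleration method itself.

```python
# Minimal sketch of interval analysis with the classic widening operator,
# for the loop: x = 0; while x < 100: x = x + 1.
# Illustrates standard widening only, not the paper's acceleration method.

import math

def join(a, b):
    """Least upper bound of two intervals (lo, hi)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    """Classic interval widening: unstable bounds jump to +/- infinity."""
    lo = old[0] if new[0] >= old[0] else -math.inf
    hi = old[1] if new[1] <= old[1] else math.inf
    return (lo, hi)

def loop_body(x):
    """Abstract transfer function of 'assume x < 100; x = x + 1'."""
    lo, hi = x
    hi = min(hi, 99)          # assume x < 100
    return (lo + 1, hi + 1)   # x = x + 1

x = (0, 0)                    # abstract value of x before the loop
while True:
    nxt = join(x, loop_body(x))
    if nxt == x:              # post-fixpoint reached
        break
    x = widen(x, nxt)

print("invariant at loop head:", x)   # (0, inf): sound but coarse
```

    A subsequent narrowing (decreasing) iteration would recover the tighter invariant (0, 100); the paper's aim is to adapt the widening dynamically so that less precision is lost in the first place.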

    Laruelle Qua Stiegler: On Non-Marxism and the Transindividual

    Alexander R. Galloway and Jason R. LaRiviére’s article “Compression in Philosophy” seeks to pose François Laruelle’s engagement with metaphysics against Bernard Stiegler’s epistemological rendering of idealism. Identifying Laruelle as the theorist of genericity, through which mankind and the world are identified through an index of “opacity,” the authors argue that Laruelle does away with all deleterious philosophical “data.” Laruelle’s generic immanence is posed against Stiegler’s process of retention and discretization, as Galloway and LaRiviére argue that Stiegler’s philosophy seeks to reveal an enchanted natural world through the development of noesis. By further developing Laruelle and Stiegler’s Marxian projects, I seek to demonstrate the relation between Stiegler's artefaction and “compression” while simultaneously creating further bricolage between Laruelle and Stiegler. I also further elaborate on their distinct engagements with Marx, offering the mold of synthesis as an alternative to compression when considering Stiegler’s work on transindividuation. In turn, this paper surveys some of the contemporary theorists drawing from Stiegler (Yuk Hui, Alexander Wilson and Daniel Ross) and Laruelle (Anne-Françoise Schmidt, Gilles Grelet, Ray Brassier, Katerina Kolozova, John Ó Maoilearca and Jonathan Fardy) to examine political discourse regarding the posthuman and non-human, with a particular interest in Kolozova’s unified theory of standard philosophy and Capital.

    MITK-ModelFit: A generic open-source framework for model fits and their exploration in medical imaging -- design, implementation and application on the example of DCE-MRI

    Many medical imaging techniques utilize fitting approaches for quantitative parameter estimation and analysis. Common examples are pharmacokinetic modeling in DCE MRI/CT, ADC calculations and IVIM modeling in diffusion-weighted MRI, and Z-spectra analysis in chemical exchange saturation transfer MRI. Most available software tools are limited to a special purpose and do not allow for custom developments and extensions. Furthermore, they are mostly designed as stand-alone solutions using external frameworks and thus cannot easily be incorporated natively into an analysis workflow. We present a framework for medical image fitting tasks that is included in MITK, following a rigorous open-source, well-integrated and operating-system-independent policy. In terms of software engineering, the local models, the fitting infrastructure and the representation of results are abstracted and can thus easily be adapted to any model fitting task on image data, independent of image modality or model. Several ready-to-use libraries for model fitting and use cases, including fit evaluation and visualization, were implemented. Their embedding into MITK allows for easy data loading and pre- and post-processing, and thus a natural inclusion of model fitting into an overarching workflow. As an example, we present a comprehensive set of plug-ins for the analysis of DCE MRI data, which we validated on existing and novel digital phantoms, yielding competitive deviations between fit and ground truth. Providing a very flexible environment, our software mainly addresses developers of medical imaging software that includes model fitting algorithms and tools. Additionally, the framework is of high interest to users in the domain of perfusion MRI, as it offers feature-rich, freely available, validated tools to perform pharmacokinetic analysis on DCE MRI data, with both interactive and automated batch processing workflows.
    Comment: 31 pages, 11 figures. URL: http://mitk.org/wiki/MITK-ModelFi
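
    For readers unfamiliar with the kind of fitting task such a framework targets, here is a minimal, self-contained sketch of one use case mentioned above: estimating ADC from a mono-exponential fit to diffusion-weighted signals of a single voxel. It uses plain NumPy/SciPy, not the MITK-ModelFit API, and the b-values and signal values are synthetic.

```python
# Minimal sketch of an ADC fit on a single voxel, one of the fitting tasks
# the abstract mentions. Uses plain NumPy/SciPy, NOT the MITK-ModelFit API;
# b-values and signals are synthetic.

import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(b, s0, adc):
    """Mono-exponential diffusion model: S(b) = S0 * exp(-b * ADC)."""
    return s0 * np.exp(-b * adc)

# Synthetic acquisition: b-values in s/mm^2 and noisy voxel signals.
b_values = np.array([0.0, 50.0, 200.0, 400.0, 800.0])
rng = np.random.default_rng(0)
true_s0, true_adc = 1000.0, 1.0e-3
signals = mono_exponential(b_values, true_s0, true_adc) + rng.normal(0, 5, b_values.size)

# Fit S0 and ADC; initial guesses taken from the data.
popt, _ = curve_fit(mono_exponential, b_values, signals, p0=[signals[0], 1.0e-3])
s0_fit, adc_fit = popt
print(f"fitted S0  = {s0_fit:.1f}")
print(f"fitted ADC = {adc_fit:.2e} mm^2/s")
```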

    Creativity and the Brain

    A neurocognitive approach to higher cognitive functions that bridges the gap between the psychological and neural levels of description is introduced. Relevant facts about the brain, working memory and the representation of symbols in the brain are summarized. Putative brain processes responsible for problem solving, intuition, skill learning and automatization are described. The role of the non-dominant brain hemisphere in solving problems requiring insight is conjectured. Two factors seem to be essential for creativity: imagination constrained by experience, and filtering that selects the most interesting solutions. Experiments with paired-word association are analyzed in detail and evidence for stochastic resonance effects is found. Brain activity during the invention of novel words is proposed as the simplest way to understand creativity by experimental and computational means. Perspectives on computational models of creativity are discussed.

    Cyber-Virtual Systems: Simulation, Validation & Visualization

    We describe our ongoing work and views on the simulation, validation and visualization of cyber-physical systems in industrial automation during development, operation and maintenance. System models may represent an existing physical part (for example, an existing robot installation) and a software-simulated part (for example, a possible future extension). We call such systems cyber-virtual systems. In this paper, we present the existing VITELab infrastructure for visualization tasks in industrial automation. The new methodology for simulation and validation motivated in this paper integrates this infrastructure. We target scenarios where industrial sites, which may be in remote locations, are modeled and visualized from different sites anywhere in the world. Complementing the visualization work, we also concentrate on software modeling challenges related to cyber-virtual systems and on simulation, testing, validation and verification techniques for them. Software models of industrial sites require behavioural models of the sites' components, such as tools, robots, workpieces and other machinery, as well as of communication and sensor facilities. Furthermore, collaboration between sites is an important goal of our work.
    Comment: Preprint, 9th International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE 2014)
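
    To give a concrete flavour of a behavioural component model of the kind mentioned above, the following is a minimal, hypothetical sketch (not taken from the paper or from VITELab): a tiny state machine for a simulated robot station that could stand in for a not-yet-built physical extension during simulation. All state and event names are invented for illustration.

```python
# Minimal, hypothetical sketch of a behavioural model for one simulated
# component (a robot station). Not part of the paper or of VITELab;
# states and events are invented for illustration.

class RobotStationModel:
    """Simple state machine: IDLE -> PICKING -> PLACING -> IDLE."""

    TRANSITIONS = {
        ("IDLE", "workpiece_arrived"): "PICKING",
        ("PICKING", "grip_closed"):    "PLACING",
        ("PLACING", "part_released"):  "IDLE",
    }

    def __init__(self):
        self.state = "IDLE"
        self.trace = []

    def handle(self, event):
        """Apply an event; ignore events that are invalid in the current state."""
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is not None:
            self.trace.append((self.state, event, nxt))
            self.state = nxt
        return self.state

if __name__ == "__main__":
    station = RobotStationModel()
    for ev in ["workpiece_arrived", "grip_closed", "part_released"]:
        print(ev, "->", station.handle(ev))
```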

    Towards a Unified Knowledge-Based Approach to Modality Choice

    This paper advances a unified knowledge-based approach to the process of choosing the most appropriate modality, or combination of modalities, in multimodal output generation. We propose a Modality Ontology (MO) that models the knowledge needed to support the two most fundamental processes determining modality choice: modality allocation (choosing the modality or set of modalities that can best support a particular type of information) and modality combination (selecting an optimal final combination of modalities). In the proposed ontology we model the main levels which collectively determine the characteristics of each modality, as well as the specific relationships between modalities that are important for multimodal meaning making. This ontology aims to support the automatic selection of modalities, and of combinations of modalities, that are suitable to convey the meaning of the intended message.
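
    As a rough illustration of what a knowledge-based modality allocation step could look like in code, here is a minimal sketch; the modality properties, information types and scoring rule are hypothetical placeholders, not the Modality Ontology proposed in the paper.

```python
# Minimal sketch of rule-based modality allocation over a tiny knowledge base.
# The modality properties, information types and preference rule are
# hypothetical placeholders, not the Modality Ontology (MO) from the paper.

MODALITIES = {
    "speech": {"channel": "auditory", "temporal": True,  "good_for": {"alert", "instruction"}},
    "text":   {"channel": "visual",   "temporal": False, "good_for": {"instruction", "detail"}},
    "image":  {"channel": "visual",   "temporal": False, "good_for": {"spatial", "detail"}},
    "earcon": {"channel": "auditory", "temporal": True,  "good_for": {"alert"}},
}

def allocate(info_type, busy_channels=frozenset()):
    """Pick the modalities whose declared strengths cover info_type,
    preferring perceptual channels the user is not currently occupying."""
    candidates = [(name, props) for name, props in MODALITIES.items()
                  if info_type in props["good_for"]]
    # Simple preference: penalise modalities on a busy perceptual channel.
    candidates.sort(key=lambda item: item[1]["channel"] in busy_channels)
    return [name for name, _ in candidates]

if __name__ == "__main__":
    # While the user looks at a map (visual channel busy), prefer audio for alerts.
    print(allocate("alert", busy_channels={"visual"}))  # ['speech', 'earcon']
    print(allocate("detail"))                           # ['text', 'image']
```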