
    A Flexible Shallow Approach to Text Generation

    In order to support the efficient development of NL generation systems, two orthogonal methods are currently being pursued: (1) reusable, general, and linguistically motivated surface realization components, and (2) simple, task-oriented template-based techniques. In this paper we argue that, from an application-oriented perspective, the benefits of both are still limited. To improve this situation, we suggest and evaluate shallow generation methods that offer increased flexibility. Rather than reusing general resources, we advocate a close connection between domain-motivated and linguistic ontologies that supports quick adaptation to new tasks and domains. Our method is especially designed for generating reports with limited linguistic variation.
    Comment: LaTeX, 10 pages
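
The template-based half of the contrast above can be sketched in a few lines. This is a hypothetical illustration of shallow, slot-filling report generation (the template name and slots are invented, not from the paper):

```python
# Minimal sketch of template-based shallow generation: a task-specific
# template with slots filled from domain data. All names are illustrative.
TEMPLATES = {
    "stock_report": "{company} shares {direction} by {percent}% on {day}.",
}

def realize(template_id, **slots):
    """Fill a task-specific template with domain values."""
    return TEMPLATES[template_id].format(**slots)

print(realize("stock_report", company="ACME", direction="rose",
              percent=3.2, day="Monday"))
# → ACME shares rose by 3.2% on Monday.
```

The flexibility the paper argues for would sit between this rigid scheme and a full linguistically motivated realizer, e.g. by letting templates draw on an ontology instead of fixed strings.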

    CASP-DM: Context Aware Standard Process for Data Mining

    We propose an extension of the Cross Industry Standard Process for Data Mining (CRISP-DM) that addresses specific challenges of machine learning and data mining for context handling and model reuse. This new general context-aware process model is mapped onto the CRISP-DM reference model, proposing several new or enhanced outputs.

    Feature-based generation of pervasive systems architectures utilizing software product line concepts

    As the need for pervasive systems tends to increase and to dominate the computing discipline, software engineering approaches must evolve at a similar pace to facilitate the construction of such systems in an efficient manner. In this thesis, we provide a vision of a framework that helps construct software product lines for pervasive systems by devising an approach to automatically generate architectures for this domain. Using this framework, designers of pervasive systems can select a set of desired system features, and the framework automatically generates architectures that support the presence of these features. Our approach does not compromise the quality of the architecture, as we verified by comparing the generated architectures to those manually designed by human architects. As an initial step, and in order to determine the most commonly required features of the most widely known pervasive systems, we surveyed more than fifty existing architectures for pervasive systems in various domains. We captured the most essential features along with the commonalities and variabilities between them. The features were categorized according to the domain and the environment that they target: general pervasive systems, domain-specific, privacy, bridging, fault-tolerance, and context-awareness. We coupled the identified features with well-designed components, and connected the components based on the features selected by a system designer to generate an architecture. We evaluated our generated architectures against architectures designed by human architects. When metrics such as coupling, cohesion, complexity, reusability, adaptability, modularity, modifiability, packing density, and average interaction density were used to test our framework, our generated architectures were found to be comparable to, if not better than, the human-designed architectures.
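
The feature-to-architecture step described above can be sketched as a lookup plus wiring pass. The feature names below come from the categories listed in the abstract, but the component names and pipeline wiring are invented for illustration; the thesis's actual generation approach is more elaborate:

```python
# Illustrative sketch: map designer-selected features to pre-designed
# components and wire them into a candidate architecture. Component
# names and the simple pipeline connector scheme are hypothetical.
FEATURE_COMPONENTS = {
    "context-awareness": ["SensorHub", "ContextReasoner"],
    "fault-tolerance":   ["HealthMonitor", "FailoverManager"],
    "privacy":           ["PolicyEngine", "Anonymizer"],
}

def generate_architecture(selected_features):
    components, connectors = [], []
    for feature in selected_features:
        parts = FEATURE_COMPONENTS[feature]
        components.extend(parts)
        # Connect each feature's components in a simple pipeline.
        connectors += list(zip(parts, parts[1:]))
    return {"components": components, "connectors": connectors}

arch = generate_architecture(["context-awareness", "privacy"])
print(arch["connectors"])
# → [('SensorHub', 'ContextReasoner'), ('PolicyEngine', 'Anonymizer')]
```

Quality metrics such as coupling and cohesion would then be computed over the resulting component graph to compare generated and human-designed architectures.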

    Extractability Effectiveness on Software Product Line

    A software product line consists of a family of software systems. Most quality attributes are defined for single systems. When we face a family of products instead of a single system, some aspects of architecture evaluation, such as cost, time, and reusability of available assets, become more prominent. In this paper a new quality attribute for software product lines, which we call extractability, is introduced. A method for measuring extractability and the relationship between extractability and other quality attributes are also presented. Finally, the effectiveness of extractability on software product lines is evaluated in practice.
    DOI: http://dx.doi.org/10.11591/ijece.v4i1.410

    AC-Norm: Effective Tuning for Medical Image Analysis via Affine Collaborative Normalization

    Driven by the latest trend towards self-supervised learning (SSL), the paradigm of "pretraining-then-finetuning" has been extensively explored to enhance the performance of clinical applications with limited annotations. Previous literature on model finetuning has mainly focused on regularization terms and specific policy models, while the misalignment of channels between source and target models has not received sufficient attention. In this work, we revisit the dynamics of batch normalization (BN) layers and observe that the trainable affine parameters of BN serve as sensitive indicators of domain information. We therefore propose Affine Collaborative Normalization (AC-Norm) for finetuning, which dynamically recalibrates the channels in the target model according to cross-domain channel-wise correlations without adding extra parameters. Based on a single-step backpropagation, AC-Norm can also be used to measure the transferability of pretrained models. We evaluated AC-Norm against vanilla finetuning and state-of-the-art finetuning methods on transferring diverse pretrained models to diabetic retinopathy grade classification, retinal vessel segmentation, CT lung nodule segmentation/classification, CT liver-tumor segmentation, and MRI cardiac segmentation tasks. Extensive experiments demonstrate that AC-Norm consistently outperforms vanilla finetuning, by up to 4% improvement, even under significant domain shifts where state-of-the-art methods bring no gains. We also demonstrate the capability of AC-Norm for fast transferability estimation. Our code is available at https://github.com/EndoluminalSurgicalVision-IMR/ACNorm
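
The cross-domain channel-wise correlation idea can be illustrated with a loose NumPy sketch. This is not the authors' implementation (see their repository for that); the descriptor shapes and the max-correlation recalibration rule here are invented for illustration:

```python
import numpy as np

# Loose sketch: treat each channel's BN affine statistics as a
# descriptor vector, compute the cross-domain channel-wise correlation
# matrix between source and target models, and derive a recalibration
# weight per target channel. Shapes and contents are illustrative.
rng = np.random.default_rng(0)
src = rng.normal(size=(8, 16))  # 8 channels x 16 affine statistics (source)
tgt = rng.normal(size=(8, 16))  # same, for the finetuned target model

def standardize(x):
    # Zero-mean, unit-variance per channel so the dot product below
    # becomes a Pearson correlation.
    return (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-8)

corr = standardize(src) @ standardize(tgt).T / src.shape[1]  # (8, 8)

# For each target channel, take its strongest correlation with any
# source channel as a simple recalibration weight.
weights = corr.max(axis=0)
print(weights.shape)  # → (8,)
```

The appeal of this style of recalibration is that it reuses parameters the network already has (the BN affine terms), so no extra parameters are introduced.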

    Feature Engineering for Predictive Modeling using Reinforcement Learning

    Feature engineering is a crucial step in predictive modeling. It involves transforming a given feature space, typically using mathematical functions, with the objective of reducing the modeling error for a given target. However, there is no well-defined basis for performing effective feature engineering; it involves domain knowledge, intuition, and, most of all, a lengthy process of trial and error. The human attention involved in overseeing this process significantly influences the cost of model generation. We present a new framework to automate feature engineering. It is based on performance-driven exploration of a transformation graph, which systematically and compactly enumerates the space of given options. A highly efficient exploration strategy is derived through reinforcement learning on past examples.
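
The transformation-graph exploration can be sketched as follows. The transforms, score function, budget, and the epsilon-greedy policy (standing in for the learned RL policy) are all illustrative, not the paper's actual components:

```python
import math
import random

# Hypothetical sketch of transformation-graph exploration for feature
# engineering: nodes are feature sets, edges apply a transform, and an
# epsilon-greedy policy stands in for the learned RL policy.
TRANSFORMS = {"log": lambda v: math.log(abs(v) + 1.0), "square": lambda v: v * v}

def score(features):
    # Placeholder for the (negated) cross-validated modeling error.
    return -abs(sum(features) - 10.0)

def explore(root, budget=5, eps=0.3, seed=0):
    rng = random.Random(seed)
    frontier = [root]
    best = (score(root), root)
    for _ in range(budget):
        # Epsilon-greedy node selection over the transformation graph.
        node = rng.choice(frontier) if rng.random() < eps else max(frontier, key=score)
        _name, fn = rng.choice(sorted(TRANSFORMS.items()))
        child = [fn(v) for v in node]
        frontier.append(child)
        best = max(best, (score(child), child))
    return best

best_score, best_features = explore([1.0, 2.0, 3.0])
print(best_score >= score([1.0, 2.0, 3.0]))  # → True
```

In the paper's framing, reinforcement learning replaces the fixed epsilon-greedy rule with a policy trained on past feature-engineering episodes, so exploration concentrates on transformations that historically reduced error.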

    TO DESIGN NEW QUALITY MODEL FOR EVALUATING COTS COMPONENTS

    Abstract: The purpose of this paper is to build a new, ISO 9126-based quality model that describes quality characteristics for the successful assessment of COTS software and to guide industries that build COTS-based systems. Relative to the existing ISO 9126 model, some features are added and some are dropped for better evaluation of COTS components. In the proposed model, new sub-characteristics such as availability, resource utilization, and capacity are associated with the high-level characteristic efficiency (performance). Other new sub-characteristics are also added, such as scalability, reconfigurability, stability, and self-contained.
    Keywords: COTS, Quality model, ISO 9126, Component-Based Software Development (CBSD)

    NeuroPrime: a Pythonic framework for the priming of brain states in self-regulation protocols

    Due to the recent pandemic and a general boom in technology, we face growing threats of isolation, depression, fear, and information overload, among others. In turn, these affect our Self, psychologically and physically. Therefore, new tools are required to assist the regulation of this unregulated Self toward a more personalized, optimal, and healthy Self. To this end, we developed a Pythonic open-source human-computer framework for assisted priming of subjects to "optimally" self-regulate their Neurofeedback (NF) with external stimulation, such as guided mindfulness. We conducted a three-part study in which we: 1) defined the foundations of the framework and its design for priming subjects to self-regulate their NF, 2) developed an open-source version of the framework in Python, NeuroPrime, for utility, expandability, and reusability, and 3) tested the framework in neurofeedback priming versus no-priming conditions. NeuroPrime is a research toolbox developed for the simple and fast integration of advanced online closed-loop applications. More specifically, it was validated and tuned for research on priming brain states in an EEG neurofeedback setup. In this paper, we explain the key aspects of the priming framework and the NeuroPrime software, describe the design decisions, and demonstrate/validate the use of our toolbox by presenting use cases of priming brain states during a neurofeedback setup.
    Funding: MIT - Massachusetts Institute of Technology (PD/BD/114033/2015)