A Flexible Shallow Approach to Text Generation
In order to support the efficient development of NL generation systems, two orthogonal methods are currently being pursued: (1) reusable, general, and linguistically motivated surface realization components, and (2) simple, task-oriented template-based techniques. In this paper we argue that, from an application-oriented perspective, the benefits of both are still limited. To improve this situation, we suggest and evaluate shallow generation methods that offer increased flexibility. We advocate a close coupling between domain-motivated and linguistic ontologies, which supports quick adaptation to new tasks and domains, rather than the reuse of general resources. Our method is especially suited to generating reports with limited linguistic variation.
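As a rough illustration of the template-based, ontology-linked generation style described above, the following sketch keys simple string templates to domain concepts. All names, templates, and the mapping are invented for illustration; this is not the paper's implementation.

```python
# Minimal sketch of shallow, template-based report generation in which
# templates are selected through a domain-to-template mapping
# (illustrative names only, not the paper's actual resources).

TEMPLATES = {
    "temperature_report": "The temperature in {location} is {value} degrees.",
    "trend_report": "Over the past {period}, {metric} has {direction}.",
}

# A toy "domain ontology" linking domain event types to template keys.
DOMAIN_TO_TEMPLATE = {
    "WeatherReading": "temperature_report",
    "SalesTrend": "trend_report",
}

def generate(event_type: str, slots: dict) -> str:
    """Select a template via the domain mapping and fill its slots."""
    template = TEMPLATES[DOMAIN_TO_TEMPLATE[event_type]]
    return template.format(**slots)

if __name__ == "__main__":
    print(generate("WeatherReading", {"location": "Berlin", "value": 21}))
    print(generate("SalesTrend",
                   {"period": "quarter", "metric": "revenue",
                    "direction": "risen steadily"}))
```

Adapting such a system to a new domain then amounts to extending the two dictionaries rather than reusing a general-purpose realizer, which is the trade-off the paper argues for.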
CASP-DM: Context Aware Standard Process for Data Mining
We propose an extension of the Cross-Industry Standard Process for Data Mining (CRISP-DM) that addresses specific challenges of machine learning and data mining with respect to context handling and model reuse. This new, general context-aware process model is mapped onto the CRISP-DM reference model, proposing several new or enhanced outputs.
ICOPER Project - Deliverable 4.3 ISURE: Recommendations for extending effective reuse, embodied in the ICOPER CD&R
The purpose of this document is to capture the ideas and recommendations, within and beyond the ICOPER community, concerning the reuse of learning content, including appropriate methodologies as well as established strategies for remixing and repurposing reusable resources. The overall remit of this work focuses on describing the key issues related to extending effective reuse embodied in such materials. The objective of this investigation is to support the reuse of learning content whilst considering how it could be originally created and then adapted with that ‘reuse’ in mind. In these circumstances, a survey of effective reuse best practices can provide an insight into the main challenges and benefits involved in the process of creating, remixing and repurposing what we are now designating as Reusable Learning Content (RLC).
Several key issues are analysed in this report: recommendations for extending effective reuse, building upon those described in the previous related deliverables, 4.1 Content Development Methodologies and 4.2 Quality Control and Web 2.0 Technologies. The findings of the current survey provide further recommendations and strategies for using and developing this reusable learning content. In the spirit of ‘reuse’, this work also aims to serve as a foundation for the many different stakeholders and users within, and beyond, the ICOPER community who are interested in reusing learning resources.
This report draws on a variety of evidence, gathered from a qualitative survey focused on the technical and pedagogical recommendations suggested by a Special Interest Group (SIG) on the most innovative practices of new media content authors (for content authoring or modification) and course designers (for unit creation). This extended community includes a wider collection of OER specialists. The collected evidence, in the form of video and audio interviews, has also been represented as multimedia assets that are potentially helpful for learning and useful as learning content in the New Media Space (see section 4 for further details).
Section 2 of this report introduces the concept of reusable learning content and reusability. Section 3 discusses an application created by the ICOPER community to enhance the opportunities for developing reusable content. Section 4 provides an overview of the methodology used for the qualitative survey. Section 5 presents a summary of thematic findings. Section 6 highlights a list of recommendations for the effective reuse of educational content, derived from the thematic analysis described in Appendix A. Finally, section 7 summarises the key outcomes of this work.
Feature-based generation of pervasive systems architectures utilizing software product line concepts
As the need for pervasive systems continues to grow and to dominate the computing discipline, software engineering approaches must evolve at a similar pace to facilitate the construction of such systems in an efficient manner. In this thesis, we present a vision of a framework that helps in the construction of software product lines for pervasive systems by devising an approach to automatically generate architectures for this domain. Using this framework, designers of pervasive systems select a set of desired system features, and the framework automatically generates architectures that support the presence of these features. Our approach does not compromise the quality of the architecture, as we verified by comparing the generated architectures to those designed manually by human architects. As an initial step, and in order to determine the most commonly required features of the most widely known pervasive systems, we surveyed more than fifty existing architectures for pervasive systems in various domains. We captured the most essential features along with the commonalities and variabilities between them. The features were categorized according to the domain and environment they target: general pervasive systems, domain-specific, privacy, bridging, fault-tolerance, and context-awareness. We coupled the identified features with well-designed components and connected the components, based on the features selected by a system designer, to generate an architecture. We evaluated our generated architectures against architectures designed by human architects. When metrics such as coupling, cohesion, complexity, reusability, adaptability, modularity, modifiability, packing density, and average interaction density were used to test our framework, our generated architectures were found to be comparable to, if not better than, the human-designed architectures.
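A minimal sketch of the feature-to-component mapping idea described above, assuming an invented miniature feature catalog; the thesis's actual catalog (drawn from fifty-plus surveyed architectures) and its composition rules are far richer.

```python
# Illustrative sketch: selected features pull in the components that
# support them, yielding a candidate architecture (catalog invented
# for illustration).

FEATURE_COMPONENTS = {
    "context-awareness": ["ContextManager", "SensorAdapter"],
    "privacy": ["PolicyEngine", "Anonymizer"],
    "fault-tolerance": ["HealthMonitor", "FailoverController"],
}

def generate_architecture(selected_features):
    """Collect the components required by the selected features,
    deduplicated while preserving selection order."""
    components = []
    for feature in selected_features:
        for component in FEATURE_COMPONENTS.get(feature, []):
            if component not in components:
                components.append(component)
    return components

print(generate_architecture(["context-awareness", "privacy"]))
# ['ContextManager', 'SensorAdapter', 'PolicyEngine', 'Anonymizer']
```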
Extractability Effectiveness on Software Product Line
A software product line consists of a family of software systems. Most quality attributes are defined for single systems. When facing a family of products instead of a single system, some aspects of architecture evaluation, such as cost, time, and the reusability of available assets, become more prominent. In this paper a new quality attribute for software product lines, which we call extractability, is introduced. A method for measuring extractability and the relationship between extractability and other quality attributes are also presented. Finally, the effectiveness of extractability on software product lines is evaluated in practice.
DOI: http://dx.doi.org/10.11591/ijece.v4i1.410
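The abstract does not spell out the measuring method, so the following is only a hedged sketch of one plausible shape for an extractability score, assuming per-asset reuse costs; the metric, scale, and names are invented for illustration and are not the paper's definition.

```python
# Hypothetical extractability score: the proportion of a product's
# assets that can be reused, with partially adaptable assets counted
# at a discount proportional to their reuse cost.

def extractability(assets):
    """assets: list of (name, reuse_cost) pairs, where reuse_cost is
    0.0 for as-is reuse and 1.0 for a full rewrite."""
    if not assets:
        return 0.0
    return sum(1.0 - cost for _, cost in assets) / len(assets)

score = extractability([("logger", 0.0), ("ui", 0.6), ("db-layer", 0.2)])
print(f"extractability = {score:.2f}")  # 0.73
```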
AC-Norm: Effective Tuning for Medical Image Analysis via Affine Collaborative Normalization
Driven by the latest trend towards self-supervised learning (SSL), the
paradigm of "pretraining-then-finetuning" has been extensively explored to
enhance the performance of clinical applications with limited annotations.
Previous literature on model finetuning has mainly focused on regularization
terms and specific policy models, while the misalignment of channels between
source and target models has not received sufficient attention. In this work,
we revisited the dynamics of batch normalization (BN) layers and observed that
the trainable affine parameters of BN serve as sensitive indicators of domain
information. Therefore, Affine Collaborative Normalization (AC-Norm) is
proposed for finetuning, which dynamically recalibrates the channels in the
target model according to the cross-domain channel-wise correlations without
adding extra parameters. Based on a single-step backpropagation, AC-Norm can
also be utilized to measure the transferability of pretrained models. We
evaluated AC-Norm against vanilla finetuning and state-of-the-art finetuning methods on transferring diverse pretrained models to diabetic retinopathy grade classification, retinal vessel segmentation, CT lung nodule segmentation/classification, CT liver-tumor segmentation, and MRI cardiac segmentation tasks. Extensive experiments demonstrate that AC-Norm consistently outperforms vanilla finetuning by up to 4%, even under significant domain shifts where the state-of-the-art methods bring no gains. We also demonstrate the capability of AC-Norm for fast transferability estimation. Our code is available at https://github.com/EndoluminalSurgicalVision-IMR/ACNorm
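A loose sketch of the core idea, assuming PyTorch: treat each BN layer's affine parameters (gamma, beta) as a per-channel domain signature, correlate source and target signatures, and reweight target channels accordingly. This is not the authors' implementation (see the linked repository for that); the cosine-similarity correlation and the max-alignment recalibration rule below are simplified stand-ins.

```python
import torch

def channel_correlation(src_affine, tgt_affine):
    """src_affine, tgt_affine: (C, 2) tensors of per-channel (gamma, beta).
    Returns a (C, C) matrix of cosine similarities between channels."""
    src = src_affine / src_affine.norm(dim=1, keepdim=True).clamp_min(1e-8)
    tgt = tgt_affine / tgt_affine.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return tgt @ src.t()

def recalibrate(features, corr):
    """Reweight target feature channels by their best alignment with
    the source domain. features: (N, C, H, W); corr: (C, C)."""
    weights = corr.max(dim=1).values.clamp(min=0)  # (C,) alignment per channel
    return features * weights.view(1, -1, 1, 1)

# Toy usage with random stand-ins for the two models' BN parameters.
gamma_beta_src = torch.randn(64, 2)
gamma_beta_tgt = torch.randn(64, 2)
x = torch.randn(8, 64, 32, 32)
out = recalibrate(x, channel_correlation(gamma_beta_src, gamma_beta_tgt))
print(out.shape)  # torch.Size([8, 64, 32, 32])
```

Because such a correlation can be computed from parameters alone after a single backpropagation step, the same signal is what the paper exploits for fast transferability estimation.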
Feature Engineering for Predictive Modeling using Reinforcement Learning
Feature engineering is a crucial step in the process of predictive modeling.
It involves the transformation of a given feature space, typically using mathematical functions, with the objective of reducing the modeling error for a
given target. However, there is no well-defined basis for performing effective
feature engineering. It involves domain knowledge, intuition, and most of all,
a lengthy process of trial and error. The human attention involved in
overseeing this process significantly influences the cost of model generation.
We present a new framework to automate feature engineering. It is based on
performance-driven exploration of a transformation graph, which systematically and compactly enumerates the space of given options. A highly efficient exploration strategy is derived through reinforcement learning on past examples.
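To make the transformation-graph idea concrete, here is a toy, greedy version of performance-driven exploration over unary transforms. It is illustrative only: the paper learns the exploration policy with reinforcement learning and scores nodes by real model error, whereas the sketch below uses a greedy rule and a single-feature correlation proxy, and all names are invented.

```python
import numpy as np

# Nodes of the graph are feature matrices; edges apply a unary
# transformation to every column and append the result.
TRANSFORMS = {
    "log": lambda X: np.log1p(np.abs(X)),
    "square": lambda X: X ** 2,
    "sqrt": lambda X: np.sqrt(np.abs(X)),
}

def score(X, y):
    """Cheap stand-in for modeling performance: the best absolute
    correlation between any single feature and the target."""
    return max(abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1]))

def explore(X, y, budget=5):
    """Greedily add the transformed feature set that most improves the
    score, up to a fixed budget of graph expansions."""
    best_X, best_score = X, score(X, y)
    for _ in range(budget):
        candidates = [t(best_X) for t in TRANSFORMS.values()]
        scored = [(score(Xt, y), Xt) for Xt in candidates]
        new_score, new_X = max(scored, key=lambda item: item[0])
        if new_score <= best_score:
            break  # no transformation improves the proxy score
        best_X, best_score = np.hstack([best_X, new_X]), new_score
    return best_X, best_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)  # square transform helps here
_, s = explore(X, y)
print(f"best single-feature correlation after exploration: {s:.2f}")
```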
To Design a New Quality Model for Evaluating COTS Components
Abstract: The purpose of this paper is to build an ISO 9126-based quality model that describes quality characteristics for the successful assessment of COTS software and to guide industries that build COTS-based systems. Relative to the existing ISO 9126 model, some features are added and others dropped for better evaluation of COTS components. In the proposed model, new sub-characteristics such as availability, resource utilization, and capacity are associated with the high-level characteristic efficiency (performance). Further sub-characteristics are also added, such as scalability, reconfigurability, stability, and self-containedness.
Keywords: COTS, Quality model, ISO 9126, Component-Based Software Development (CBSD)
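Purely as an illustration of the model's shape, the additions named in the abstract can be written down as a characteristic-to-sub-characteristics mapping. The "unassigned additions" bucket below is a guess, since the abstract does not say which high-level characteristics those sub-characteristics attach to.

```python
# Sketch of the proposed ISO 9126-based COTS quality model additions.
COTS_QUALITY_ADDITIONS = {
    "efficiency (performance)": [
        "availability", "resource utilization", "capacity",
    ],
    "unassigned additions": [  # parent characteristics not stated
        "scalability", "reconfigurability", "stability",
        "self-containedness",
    ],
}

for characteristic, subs in COTS_QUALITY_ADDITIONS.items():
    print(f"{characteristic}: {', '.join(subs)}")
```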
NeuroPrime: a Pythonic framework for the priming of brain states in self-regulation protocols
Due to the recent pandemic and a general boom in technology, we are facing ever more threats of isolation, depression, fear, and information overload, among others. In turn, these affect our Self, psychologically and physically. Therefore, new tools are required to assist the regulation of this unregulated Self towards a more personalized, optimal, and healthy Self. To this end, we developed a Pythonic, open-source human-computer framework for the assisted priming of subjects to “optimally” self-regulate their Neurofeedback (NF) with external stimulation, such as guided mindfulness. We conducted a three-part study in which we: 1) defined the foundations of the framework and its design for priming subjects to self-regulate their NF, 2) developed an open-source version of the framework in Python, NeuroPrime, for utility, expandability, and reusability, and 3) tested the framework under neurofeedback priming versus no-priming conditions. NeuroPrime is a research toolbox developed for the simple and fast integration of advanced online closed-loop applications. More specifically, it was validated and tuned for research on priming brain states in an EEG neurofeedback setup. In this paper, we explain the key aspects of the priming framework and the NeuroPrime software, discuss the design decisions, and demonstrate and validate the use of our toolbox by presenting use cases of priming brain states during a neurofeedback setup.
MIT - Massachusetts Institute of Technology (PD/BD/114033/2015)
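To give a feel for the kind of online closed loop NeuroPrime targets, here is a minimal sketch of a priming-plus-neurofeedback cycle. All function names, the feedback thresholding rule, and the pacing are illustrative stand-ins, not the toolbox's actual API.

```python
import random
import time

def read_eeg_band_power():
    """Stand-in for an online EEG feature, e.g. alpha band power."""
    return random.uniform(0.0, 1.0)

def present_feedback(value):
    """Stand-in for visual/auditory feedback shown to the subject."""
    print(f"feedback level: {value:.2f}")

def present_priming_stimulus():
    """Stand-in for external priming, e.g. a guided-mindfulness cue."""
    print("priming cue: focus on your breathing")

def run_session(n_trials=5, priming=True, threshold=0.5):
    """One session: optionally prime, then close the loop from EEG
    feature to feedback on every trial."""
    for _ in range(n_trials):
        if priming:
            present_priming_stimulus()
        power = read_eeg_band_power()
        present_feedback(power if power > threshold else 0.0)
        time.sleep(0.1)  # pacing of the online loop

run_session()  # compare against run_session(priming=False)
```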