Change impact analysis of multi-language and heterogeneously-licensed software
Today's software systems are built with heterogeneous languages such as Java, C, C++, XML, Perl, or Python, to name a few. This introduces new challenges for both software analysis and program evolution, as programmers must cope with a variety of programming paradigms and languages. We believe there is a need for global views that support developers in effectively coping with complexity and that facilitate program comprehension and analysis of such heterogeneous systems. Furthermore, the heterogeneity of these systems is not limited to language; it also affects component licensing. Licensing is another form of heterogeneity introduced by the widespread reuse of open source code. License heterogeneity raises challenges of its own, such as how to legally combine components with different programming languages and licenses in the same system, and how a change to the software can create a license violation. In this context, we would like to develop a re-engineering tool for analysing the change impact of heterogeneously-licensed systems in a multi-language environment. First, we want to study change impact analysis in multi-language systems in general and then extend it to address the issue of licenses.
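The change-impact step described above can be sketched as reachability over a reverse dependency graph: a modified component impacts every component that transitively depends on it. The graph, the component names, and the file types below are invented for illustration; they are not from the proposed tool.

```python
from collections import deque

def impacted(dep_graph, changed):
    """Return the set of components (transitively) impacted by a change.

    dep_graph maps a component to the set of components that depend on it
    (reverse dependency edges); `changed` is the initially modified component.
    """
    seen = {changed}
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        for dependent in dep_graph.get(node, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# Hypothetical multi-language system: a C parser used by a Python analyzer
# that in turn produces an XML report.
deps = {
    "parser.c": {"analyzer.py"},
    "analyzer.py": {"report.xml"},
}
print(sorted(impacted(deps, "parser.c")))  # ['analyzer.py', 'parser.c', 'report.xml']
```

In a multi-language setting, the hard part is building `dep_graph` across language boundaries; once license metadata is attached to each node, the same traversal can flag which license obligations a change propagates to.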
Investigating Advances in the Acquisition of Secure Systems Based on Open Architecture, Open Source Software, and Software Product Lines
Naval Postgraduate School Acquisition Research Program
Software Licenses in Context: The Challenge of Heterogeneously-Licensed Systems
The prevailing approach to free/open source software and licenses has been that each system is developed, distributed, and used under the terms of a single license. But it is increasingly common for information systems and other software to be composed of components from a variety of sources, and with a diversity of licenses. This may result in possible license conflicts and organizational liability for failure to fulfill license obligations. Research and practice to date have not kept up with this sea change in software licensing arising from free/open source software development. System consumers and users consequently rely on ad hoc heuristics (or costly legal advice) to determine which license rights and obligations are in effect, often with less than optimal results; consulting services are offered to identify unknowing unauthorized use of licensed software in information systems; and researchers have shown how the choice of a (single) specific license for a product affects project success and system adoption. Legal scholars have examined how pairs of software licenses conflict, but only in simple contexts. We present an approach for understanding and modeling software licenses, as well as for analyzing conflicts among groups of licenses in realistic system contexts, and for guiding the acquisition, integration, or development of systems with free/open source components in such an environment. This work is based on an empirical analysis of representative software licenses and of heterogeneously-licensed systems. Our approach provides guidance for achieving a “best-of-breed” component strategy while obtaining desired license rights in exchange for acceptable obligations.
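The conflict analysis sketched in the abstract can be illustrated, in greatly simplified form, as a pairwise check over the licenses of a system's components. The component names and the one-entry conflict table below are assumptions for illustration (the GPL-2.0-only/Apache-2.0 incompatibility is a well-known real case); the approach described above models rights and obligations per license and per architectural context, not just pairs.

```python
from itertools import combinations

# Minimal pairwise conflict table; a real analysis derives conflicts from
# modeled rights and obligations rather than hard-coding pairs.
CONFLICTS = {frozenset({"GPL-2.0-only", "Apache-2.0"})}

def license_conflicts(components):
    """Return conflicting component pairs.

    `components` maps component name -> SPDX-style license identifier.
    """
    found = []
    for (c1, l1), (c2, l2) in combinations(components.items(), 2):
        if frozenset({l1, l2}) in CONFLICTS:
            found.append((c1, c2))
    return found

# Hypothetical heterogeneously-licensed system
system = {"kernel-mod": "GPL-2.0-only", "http-lib": "Apache-2.0", "ui": "MIT"}
print(license_conflicts(system))  # [('kernel-mod', 'http-lib')]
```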
How Many Penguins Can Hide Under an Umbrella? An Examination of How Lay Conceptions Conceal the Contexts of Free/Open Source Software
This paper examines the attention IS researchers pay to the various contexts of the Free/Open Source Software (FOSS) phenomenon. Following a selective review of the IS literature on FOSS, we highlight some of the pitfalls that FOSS research encounters in its quest for theoretical progress. We raise awareness of these pitfalls' consequences for how we propose, test, and falsify theories about the FOSS phenomenon. We conclude by proposing an agenda for future research.
LiSum: Open Source Software License Summarization with Multi-Task Learning
Open source software (OSS) licenses regulate the conditions under which users can legally reuse, modify, and distribute the software. However, many OSS licenses exist in the community, written in formal language, and they are typically long and complicated to understand. In this paper, we conducted a 661-participant online survey to investigate developers' perspectives and practices regarding OSS licenses. The user study revealed a genuine need for an automated tool to facilitate license understanding. Motivated by the user study and the fast growth of licenses in the community, we propose the first study of automated license summarization. Specifically, we released the first high-quality text summarization dataset and designed two tasks: license text summarization (LTS), which aims to generate a relatively short summary for an arbitrary license, and license term classification (LTC), which infers the attitude towards a predefined set of key license terms (e.g., Distribute). For these two tasks, we present LiSum, a multi-task learning method that helps developers overcome the obstacles of understanding OSS licenses. Comprehensive experiments demonstrated that the proposed joint training objective boosted performance on both tasks, surpassing state-of-the-art baselines by at least 5 points on the F1 scores of four summarization metrics while simultaneously achieving a 95.13% micro-average F1 score for classification. We released all the datasets, the replication package, and the questionnaires for the community.
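The multi-task idea behind LiSum can be sketched in miniature: a shared representation feeds both a summarization head and a term-classification head, and the joint training objective is simply the sum of the two task losses, so both tasks would shape the shared encoder. Everything below (the bag-of-words "encoder", the toy losses, the numbers) is an invented illustration, not the paper's actual model.

```python
import math

def encode(text, vocab):
    """Toy shared 'encoder': bag-of-words counts over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def summarization_loss(pred_scores, gold_scores):
    """Toy LTS loss: mean squared error on per-sentence salience scores."""
    return sum((p - g) ** 2 for p, g in zip(pred_scores, gold_scores)) / len(gold_scores)

def classification_loss(logit, label):
    """Toy LTC loss: binary cross-entropy for one license term (e.g. Distribute)."""
    prob = 1 / (1 + math.exp(-logit))
    return -(label * math.log(prob) + (1 - label) * math.log(1 - prob))

# Joint objective: summing the losses means gradients from both tasks would
# update the shared encoder in a real trainable model.
joint = summarization_loss([0.9, 0.2], [1.0, 0.0]) + classification_loss(2.0, 1)
print(round(joint, 4))  # 0.1519
```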
Device specialization in heterogeneous multi-GPU environments
In the last few years there have been many efforts to couple CPUs and GPUs in order to get the most from CPU-GPU heterogeneous systems. One of the main obstacles to exploiting these systems in a device-aware manner is the CPU-GPU communication bottleneck, which often makes it impossible to produce code more efficient than the GPU-only and CPU-only counterparts. As a consequence, most heterogeneous scheduling systems treat CPUs and GPUs as homogeneous nodes, opting for map-like data partitioning to employ both processing resources. We propose to study how the radical change in the connection between GPU, CPU, and memory that characterizes APUs (Accelerated Processing Units) affects the architecture of a compiler, and whether it is possible to use all these computing resources in a device-aware manner. We investigate a methodology for analyzing the devices that populate heterogeneous multi-GPU systems and for classifying general-purpose algorithms in order to perform near-optimal control-flow and data partitioning.
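The "map-like data partitioning" mentioned above can be sketched as splitting a workload into contiguous chunks proportional to each device's relative throughput. The device names and weights below are invented, not measured; real schedulers would profile devices and account for transfer costs.

```python
def partition(n_items, weights):
    """Split n_items into contiguous [start, end) chunks per device,
    proportional to the given throughput weights."""
    total = sum(weights.values())
    bounds, start = {}, 0
    devices = list(weights)
    for i, dev in enumerate(devices):
        # Last device absorbs any rounding remainder.
        end = n_items if i == len(devices) - 1 else start + round(n_items * weights[dev] / total)
        bounds[dev] = (start, end)
        start = end
    return bounds

# Hypothetical relative throughputs for one CPU and two GPUs
print(partition(1000, {"cpu": 1.0, "gpu0": 4.0, "gpu1": 3.0}))
# {'cpu': (0, 125), 'gpu0': (125, 625), 'gpu1': (625, 1000)}
```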
Querying a dozen corpora and a thousand years with Fintan
Large-scale diachronic corpus studies covering longer time periods are difficult if more than one corpus is to be consulted and, as a result, different formats and annotation schemas need to be processed and queried in a uniform, comparable, and replicable manner. We describe the application of the Flexible Integrated Transformation and Annotation eNgineering (Fintan) platform for studying word order in German using syntactically annotated corpora that represent its entire written history. Focusing on nominal dative and accusative arguments, this study hints at two major phases in the development of scrambling in modern German. Against more recent assumptions, it supports the traditional view that word order flexibility decreased over time, but it also indicates that this was a relatively sharp transition in Early New High German. The successful case study demonstrates the potential of Fintan and the underlying LLOD technology for historical linguistics, linguistic typology, and corpus linguistics. The technological contribution of this paper is to demonstrate the applicability of Fintan for querying across heterogeneously annotated corpora; previously, it had only been applied to transformation tasks. With its focus on quantitative analysis, Fintan is a natural complement to existing multi-layer technologies that focus on query and exploration.
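The core idea of querying across heterogeneously annotated corpora can be sketched as mapping each corpus's native tags onto one shared schema before running a single query. The corpus samples, tag sets, and mapping below are invented for illustration; Fintan itself works over RDF/LLOD representations.

```python
# Hypothetical mapping from two corpora's native case tags to a shared schema
SHARED = {"dat": "dative", "DAT": "dative", "acc": "accusative", "AKK": "accusative"}

def normalize(corpus):
    """Convert (word, native_case_tag) pairs to the shared annotation schema."""
    return [(w, SHARED.get(tag, tag)) for w, tag in corpus]

def count_case(corpora, case):
    """One uniform query over corpora with different native schemas."""
    return sum(tag == case for corpus in corpora for _, tag in normalize(corpus))

# Invented toy samples standing in for differently annotated corpora
old_high_german = [("demo", "DAT"), ("then", "AKK")]
modern_german   = [("dem", "dat"), ("den", "acc"), ("dem", "dat")]
print(count_case([old_high_german, modern_german], "dative"))  # 3
```

The same query function works unchanged on every corpus once normalization is in place, which is the replicability property the abstract emphasizes.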
RadOnc: An R Package for Analysis of Dose-Volume Histogram and Three-Dimensional Structural Data
Purpose/Objectives: Dose volume histogram (DVH) data are generally analyzed within the context of a treatment planning system (TPS) on a per-patient basis, with evaluation of single-plan or comparative dose distributions. However, TPS software generally cannot perform simultaneous comparative dosimetry among a cohort of patients. The same limitations apply to parallel analyses of three-dimensional structures and other clinical data. Materials/Methods: We developed a suite of tools ("RadOnc" package) using R statistical software to better compare pooled DVH data and empower analysis of structure data and clinical correlates. Representative patient data were identified among previously analyzed adult (n=13) and pediatric (n=1) cohorts, and these data were used to demonstrate the performance and functionality of the RadOnc package. Results: The RadOnc package facilitates DVH data import from the TPS and includes automated methods for DVH visualization, dosimetric parameter extraction, statistical comparison among multiple DVHs, basic three-dimensional structural processing, and visualization tools to enable customizable production of publication-quality images. Conclusions: The RadOnc package provides a potent clinical research tool with the ability to integrate robust statistical software and dosimetric data from cohorts of patients. It is made freely available to the community for its current use and remains under active development.
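The RadOnc package itself is written in R, but the kind of dosimetric parameter it extracts can be sketched in a few lines of Python: V_dose, the percentage of a structure's volume receiving at least a given dose, read off a cumulative DVH by linear interpolation. The DVH values below are an invented toy example, not clinical data.

```python
def v_dose(doses, volumes, dose):
    """V_dose: % of structure volume receiving at least `dose` Gy.

    Reads a cumulative DVH (doses ascending in Gy, volumes descending in %)
    with linear interpolation between tabulated points.
    """
    for i in range(1, len(doses)):
        if doses[i] >= dose:
            d0, d1 = doses[i - 1], doses[i]
            v0, v1 = volumes[i - 1], volumes[i]
            return v0 + (v1 - v0) * (dose - d0) / (d1 - d0)
    return 0.0  # dose exceeds the maximum tabulated dose

# Toy cumulative DVH for one structure
doses   = [0, 10, 20, 30, 40, 50]      # Gy
volumes = [100, 95, 70, 40, 10, 0]     # % of structure volume
print(v_dose(doses, volumes, 20))  # V20 = 70.0
print(v_dose(doses, volumes, 25))  # interpolated between 70 and 40 -> 55.0
```

Pooling such per-patient metrics across a cohort, which a TPS typically cannot do, is exactly the gap the abstract describes RadOnc filling.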