95,117 research outputs found
Product line architecture recovery with outlier filtering in software families: the Apo-Games case study
The software product line (SPL) approach has been widely adopted to achieve systematic reuse in families of software products. Despite its benefits, developing an SPL from scratch requires a high up-front investment. Because of that, organizations commonly create product variants through opportunistic reuse (e.g., copy-and-paste or clone-and-own). However, maintaining and evolving a large number of product variants is a challenging task. In this context, a family of products developed opportunistically is a good starting point for adopting an SPL, known as the extractive approach to SPL adoption. One of the initial phases of the extractive approach is the recovery and definition of a product line architecture (PLA) based on existing software variants, to support variant derivation and to allow customization according to customers' needs. The problem with defining a PLA from existing system variants is that some variants can become highly unrelated to their predecessors; these are known as outlier variants. Including outlier variants in PLA recovery adds effort and noise to the common structure and complicates architectural decisions. In this work, we present an automatic approach to identify and filter outlier variants during the recovery and definition of PLAs. Our approach identifies the minimum subset of cross-product architectural information needed for effective PLA recovery. To evaluate the approach, we focus on real-world variants of the Apo-Games family, recovering a PLA from 34 Apo-Games variants developed through opportunistic reuse. The results provide evidence that our automatic approach is able to identify and filter outlier variants, eliminating exclusive packages and classes without removing whole variants.
We consider that the recovered PLA can help domain experts make informed decisions in support of SPL adoption.

This research was partially funded by INES 2.0; CNPq grants 465614/2014-0 and 408356/2018-9; and FAPESB grants JCB0060/2016 and BOL2443/201
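To make the idea of outlier filtering concrete, here is a minimal sketch in Python: each variant is summarized by its set of packages, and a variant is flagged as an outlier when its mean Jaccard similarity to the rest of the family falls below a threshold. All variant and package names are invented, and this is an illustration of the general idea, not the paper's actual metric or tooling.

```python
# Hypothetical package sets per variant (illustrative names only).
variants = {
    "apo-clock":  {"core", "ui", "timer"},
    "apo-dice":   {"core", "ui", "random"},
    "apo-puzzle": {"core", "ui", "grid"},
    "apo-server": {"net", "db", "auth"},   # structurally unrelated variant
}

def jaccard(a, b):
    """Similarity of two package sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def outliers(variants, threshold=0.2):
    """Flag variants whose mean similarity to all others is below threshold."""
    flagged = set()
    for name, pkgs in variants.items():
        sims = [jaccard(pkgs, p) for n, p in variants.items() if n != name]
        if sum(sims) / len(sims) < threshold:
            flagged.add(name)
    return flagged

flagged = outliers(variants)
```

With these toy sets, only the structurally unrelated variant is flagged, so the remaining variants can feed the PLA recovery without its exclusive packages adding noise.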
Recovering Architectural Variability of a Family of Product Variants
A Software Product Line (SPL) applies pre-planned, systematic reuse of large-grained software artifacts to increase software productivity and reduce development cost. The idea of an SPL is to analyze the business domain of a family of products to identify the parts that are common to the products and those that vary between them. However, it is common for companies to develop, in an ad hoc manner (e.g., clone-and-own), a set of products that share some functionalities and differ in others. Thus, many recent research contributions propose re-engineering existing product variants into an SPL. Nevertheless, these contributions mostly focus on managing variability at the requirements level. Very few address variability at the architectural level, despite its major importance. Starting from this observation, we propose in this paper an approach to reverse engineer the architecture of a set of product variants. Our goal is to identify the variability and dependencies among architectural-element variants at the architectural level. Our work relies on Formal Concept Analysis (FCA) to analyze the variability. To validate the proposed approach, we experimented on two families of open-source product variants, Mobile Media and Health Watcher. The results show that our approach is able to identify the architectural variability and the dependencies among architectural elements.
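As a rough illustration of this kind of FCA-style analysis, the sketch below builds a toy formal context of product variants (objects) against architectural elements (attributes) and derives the common core, the variable elements, and simple "requires" dependencies. The element names are invented and the logic is a simplification, not the authors' implementation.

```python
# Toy formal context: product variants x architectural elements.
products = {
    "MobileMedia_v1": {"MediaCtrl", "PhotoView", "Persistence"},
    "MobileMedia_v2": {"MediaCtrl", "PhotoView", "Persistence", "SMSXfer"},
    "MobileMedia_v3": {"MediaCtrl", "VideoView", "Persistence", "SMSXfer"},
}

# Common (mandatory) elements: present in every variant.
common = set.intersection(*products.values())

# Variable elements: present in some variants but not all.
variable = set.union(*products.values()) - common

def requires(a, b):
    """Element a requires b if every variant containing a also contains b."""
    holders = [p for p, els in products.items() if a in els]
    return all(b in products[p] for p in holders)

# Dependencies among variable elements only (mandatory ones are trivial).
deps = {(a, b) for a in variable for b in variable
        if a != b and requires(a, b)}
```

On this toy context, the media controller and persistence layer come out as the common core, while the viewers and transfer feature are variable, with one implication between them.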
Assessing the Complexity of a Recovered Design and its Potential Redesign Alternatives
Organised by: Cranfield University

Reverse engineering techniques are applied to generate a part model where there is no existing documentation or it is no longer up to date. To facilitate the reverse engineering tasks, a modular, multi-perspective design recovery framework has been developed. An evaluation of the product and feature complexity characteristics can readily be extracted from the design recovery framework by using a modification of a rapid complexity assessment tool. The results from this tool provide insight into the original design and assist with the evaluation of potential alternatives and risks, as illustrated by the case study.

Mori Seiki – The Machine Tool Company
Analysis of Software Binaries for Reengineering-Driven Product Line Architecture: An Industrial Case Study
This paper describes a method for recovering software architectures from a set of similar (but unrelated) software products in binary form. One intention is to drive refactoring into software product lines by combining architecture recovery with runtime binary analysis and existing clustering methods. Using our runtime binary analysis, we create graphs that capture the dependencies between different software parts. These are clustered into smaller component graphs that group software parts with high interaction into larger entities. The component graphs serve as a basis for further software product line work. In this paper, we concentrate on the analysis part of the method and the graph clustering. We apply the graph clustering method to a real application in the context of automation/robot configuration software tools.

Comment: In Proceedings FMSPLE 2015, arXiv:1504.0301
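A minimal sketch of the clustering step described above: given a weighted dependency graph whose edge weights count observed interactions between binary modules, group modules connected by high-interaction edges into components. The module names, weights, and threshold are invented, and union-find over thresholded edges is a deliberately simple stand-in for the paper's clustering method.

```python
from collections import defaultdict

# Hypothetical dependency graph from runtime tracing:
# edge weight = number of observed calls between two binary modules.
edges = {
    ("ui.dll", "core.dll"): 120,
    ("core.dll", "io.dll"): 95,
    ("ui.dll", "plot.dll"): 4,
    ("plot.dll", "io.dll"): 2,
}

def cluster(edges, threshold):
    """Union-find over edges whose interaction count reaches threshold."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for (a, b), weight in edges.items():
        find(a)
        find(b)                            # register both endpoints
        if weight >= threshold:
            parent[find(a)] = find(b)      # merge high-interaction modules

    groups = defaultdict(set)
    for node in parent:
        groups[find(node)].add(node)
    return sorted(groups.values(), key=len, reverse=True)

components = cluster(edges, threshold=50)
```

Here the strongly interacting modules collapse into one candidate component, while the weakly coupled plotting module stays separate, mirroring how component graphs would seed later product line work.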
A single-photon sampling architecture for solid-state imaging
Advances in solid-state technology have enabled the development of silicon
photomultiplier sensor arrays capable of sensing individual photons. Combined
with high-frequency time-to-digital converters (TDCs), this technology opens up
the prospect of sensors capable of recording with high accuracy both the time
and location of each detected photon. Such a capability could lead to
significant improvements in imaging accuracy, especially for applications
operating with low photon fluxes such as LiDAR and positron emission
tomography.
The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating
on the low photon flux setting, this paper leverages results from group testing
and proposes an architecture for a highly efficient readout of pixels using
only a small number of TDCs, thereby also reducing both cost and power
consumption. The design relies on a multiplexing technique based on binary
interconnection matrices. We provide optimized instances of these matrices for
various sensor parameters and give explicit upper and lower bounds on the
number of TDCs required to uniquely decode a given maximum number of
simultaneous photon arrivals.
To illustrate the strength of the proposed architecture, we note a typical digitization result for a 120×120 photodiode sensor on a 30 µm × 30 µm pitch with a 40 ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings.

Comment: 24 pages, 3 figures, 5 tables
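The group-testing idea behind the readout can be sketched with a toy binary interconnection matrix: each pixel is wired to a small subset of TDC lines, a firing pixel triggers all of its lines, and decoding reports the pixels whose full line set fired. The 6-pixel/4-TDC wiring below is invented for illustration; the paper's optimized matrices are far larger and guarantee unique recovery of up to 4 simultaneous arrivals, which this tiny design does not.

```python
# Toy multiplexing: 6 pixels share 4 TDC lines. Pixel p is wired to
# the TDC lines in wiring[p] (every 2-subset of {0,1,2,3}).
wiring = {
    0: {0, 1}, 1: {0, 2}, 2: {0, 3},
    3: {1, 2}, 4: {1, 3}, 5: {2, 3},
}

def fired_tdcs(active_pixels):
    """A firing pixel triggers every TDC line it is wired to."""
    fired = set()
    for p in active_pixels:
        fired |= wiring[p]
    return fired

def decode(fired):
    """Naive group-testing decode: a pixel is a candidate iff all of
    its TDC lines fired."""
    return {p for p, lines in wiring.items() if lines <= fired}
```

Because no pixel's line set contains another's, any single arrival decodes uniquely; with two or more simultaneous arrivals this tiny matrix can become ambiguous (e.g. pixels 0 and 5 together light all four lines), which is exactly the failure mode the paper's disjunct matrix constructions are designed to rule out up to a chosen arrival count.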
Designing Software Architectures As a Composition of Specializations of Knowledge Domains
This paper summarizes our experimental research and software development activities in designing robust, adaptable and reusable software architectures. Several years ago, based on our previous experiences in object-oriented software development, we made the following assumption: "A software architecture should be a composition of specializations of knowledge domains". To verify this assumption we carried out three pilot projects. In addition to applying some popular domain analysis techniques such as use cases, we identified the invariant compositional structures of the software architectures and the related knowledge domains. Knowledge domains define the boundaries of the adaptability and reusability capabilities of software systems. Next, the knowledge domains were mapped to object-oriented concepts. We found that some aspects of knowledge could not be directly modeled in terms of object-oriented concepts. In this paper we describe our approach, the pilot projects, the problems we experienced, and the solutions adopted for realizing the software architectures. We conclude the paper with the lessons learned from this experience.
Generalized Inpainting Method for Hyperspectral Image Acquisition
A recently designed hyperspectral imaging device enables multiplexed
acquisition of an entire data volume in a single snapshot thanks to
monolithically-integrated spectral filters. Such an agile imaging technique
comes at the cost of a reduced spatial resolution and the need for a
demosaicing procedure on its interleaved data. In this work, we address both
issues and propose an approach inspired by recent developments in compressed
sensing and analysis sparse models. We formulate our superresolution and
demosaicing task as a 3-D generalized inpainting problem. Interestingly, the
target spatial resolution can be adjusted for mitigating the compression level
of our sensing. The reconstruction procedure uses a fast greedy method called
Pseudo-inverse IHT. We also show in simulations that a random arrangement of the spectral filters on the sensor is preferable to a regular mosaic layout, as it improves the quality of the reconstruction. The efficiency of our technique is demonstrated through numerical experiments on both synthetic and real data as acquired by the snapshot imager.

Comment: Keywords: Hyperspectral, inpainting, iterative hard thresholding, sparse models, CMOS, Fabry-Pérot
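For readers unfamiliar with the greedy family the reconstruction belongs to, here is a minimal sketch of plain iterative hard thresholding (IHT) on a toy compressed-sensing problem: recover an s-sparse signal from a few random linear measurements by alternating a gradient step with truncation to the s largest entries. All problem sizes and the step size are illustrative, and this is the textbook IHT iteration, not the paper's Pseudo-inverse IHT variant or its 3-D inpainting operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: y = A @ x_true with x_true s-sparse (sizes illustrative).
n, m, s = 32, 20, 2
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing operator
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                   # measurements

def iht(A, y, s, iters=300, step=0.5):
    """Gradient step on 0.5*||y - Ax||^2, then keep the s largest entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))       # gradient descent step
        small = np.argsort(np.abs(x))[:-s]       # all but the s largest
        x[small] = 0.0                           # hard threshold
    return x

x_hat = iht(A, y, s)
```

The hard-thresholding step is what keeps every iterate exactly s-sparse; the paper's variant replaces the plain gradient step with a pseudo-inverse-based update to speed up convergence on its structured inpainting operator.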
The global field of multi-family offices: An institutionalist perspective
We apply the notion of the organisational field to internationally operating multi-family offices. These organisations specialise in the preservation of the fortunes of enterprising and geographically dispersed families. They provide their services across generations and countries. Based on secondary data on Bloomberg's Top 50 Family Offices, we show that they constitute a global organisational field comprising two clusters of homogeneity. Clients may decide between two different configurations of activities, depending on their preferences regarding asset management, resource management, family management, and service architecture. The findings also reveal that multi-family offices make relatively similar value propositions all over the world. The distinctiveness of the clusters within the field is not driven by the embeddedness of the multi-family offices in different national environments or by their varying degrees of international experience. Rather, it is weakly affected by two of four possible value propositions, namely the exclusiveness and the transparency of services.