Improving Object-Oriented Programming by Integrating Language Features to Support Immutability
Developers today consider Object-Oriented Programming (OOP) the de facto general-purpose programming paradigm. While successful, OOP is not without problems. In 1994, Gamma et al. published a book with a set of 23 design patterns addressing recurring problems found in OOP software. These patterns are well known in industry and are taught in universities as part of software engineering curricula. Despite their usefulness in solving recurring problems, these design patterns introduce a certain complexity in their implementation, and that complexity depends on the features available in the implementation language. In this thesis, we aim to decrease this complexity by focusing on the problems that design patterns attempt to solve and the language features that can be used to solve them. Thus, we investigate the impact of specific language features on OOP and contribute guidelines to improve OOP language design.
We first perform a mapping study to catalogue the language features that have been proposed in the literature to improve design pattern implementations. From those features, we focus on investigating the impact of immutability-related features on OOP.
We then perform an exploratory study measuring the impact of introducing immutability in OOP software with the objective of establishing the advantages and drawbacks of using immutability in the context of OOP. Results indicate that immutability may produce more granular and easier-to-understand programs.
We also perform an experiment to measure the impact of new language features added to the C# language for better immutability support. Results show that these language features facilitate developers' tasks when implementing immutability in OOP.
We then present a new design pattern aimed at solving a problem with method overriding in the context of immutable hierarchies of objects. We discuss the impact of language features on implementations of this pattern by comparing implementations in different programming languages, including Clojure, Java, and Kotlin.
Finally, we implement these language features as a language extension to Common Lisp and discuss their usage.
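To make the immutability theme above concrete, here is a minimal sketch in Java (one of the languages the thesis compares) of immutability-by-default via records, where "updates" produce copies instead of mutating state. The `Point` type and `withX` method are invented for illustration and are not taken from the thesis.

```java
// Minimal sketch of immutability-by-default (Java 16+ records):
// fields are final, there are no setters, and "updates" return
// fresh instances instead of mutating the receiver.
record Point(int x, int y) {
    // Copy-producing update: the original Point is left untouched.
    Point withX(int newX) {
        return new Point(newX, y);
    }
}

public class ImmutabilityDemo {
    public static void main(String[] args) {
        Point p = new Point(1, 2);
        Point q = p.withX(5);              // new object; p is unchanged
        System.out.println(p + " " + q);   // prints Point[x=1, y=2] Point[x=5, y=2]
    }
}
```

The copy-producing "wither" style shown here is one common way mainstream OOP languages express updates on immutable objects.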
Assessment of Octave’s OO features based on GoF patterns
This thesis aims to evaluate the object-oriented (OO) features of the Octave programming language through the implementation of the popular Gang-of-Four (GoF) design patterns. The study explores fundamental principles of OO, including modularity, inheritance, encapsulation, polymorphism, and abstraction, and investigates how these concepts are supported by Octave. The research is conducted by implementing two complete collections of the GoF patterns originally coded in Java and then analyzing the quality of the resulting implementations. The evaluation is based on comparisons with their Java counterparts with regard to modularity and flexible module composition. To our knowledge, no study of this nature has been conducted on Octave. This thesis is intended to contribute to a better understanding of Octave's current OO capabilities and limitations, as well as its potential as a tool for developing complex software systems.
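For context on the kind of Java baseline such comparisons start from, here is a small sketch of one of the 23 GoF patterns, Strategy, which relies on the OO principles the thesis evaluates (polymorphism and encapsulation). The names are invented for illustration and are not drawn from the thesis's pattern collections.

```java
// Strategy (GoF): encapsulate interchangeable algorithms behind a
// common interface so clients can swap them at run time.
interface Greeting {
    String greet(String name);
}

class FormalGreeting implements Greeting {
    public String greet(String name) { return "Good day, " + name + "."; }
}

class CasualGreeting implements Greeting {
    public String greet(String name) { return "Hey " + name + "!"; }
}

public class StrategyDemo {
    public static void main(String[] args) {
        Greeting g = new CasualGreeting();  // strategy chosen at run time
        System.out.println(g.greet("Ada")); // prints Hey Ada!
    }
}
```

Re-coding a pattern like this in another language exposes exactly the kind of feature gaps (interfaces, dynamic dispatch, class syntax) that such an evaluation measures.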
Modeling Deception for Cyber Security
In the era of software-intensive, smart, and connected systems, the growing power and sophistication of cyber attacks pose increasing challenges to software security. The reactive posture of traditional security mechanisms, such as anti-virus and intrusion detection systems, has not been sufficient to combat the wide range of advanced persistent threats that currently jeopardize systems' operation. To mitigate these threats, more active defensive approaches are necessary. Such approaches rely on the concept of actively hindering and deceiving attackers. Deceptive techniques provide an additional layer of defense by thwarting attackers' advances through the manipulation of their perceptions. Manipulation is achieved through the use of deceitful responses, feints, misdirection, and other falsehoods in a system. Of course, such deception mechanisms may produce side effects that must be handled. Current methods for planning deception chiefly attempt to transplant military deception to cyber deception, providing only high-level instructions that largely ignore deception as part of the software security development life cycle. Consequently, little practical guidance is provided on how to engineer deception-based techniques for defense. This PhD thesis contributes a systematic approach to specify and design cyber deception requirements, tactics, and strategies. The approach consists of (i) a multi-paradigm modeling approach for representing deception requirements, tactics, and strategies, (ii) a reference architecture to support the integration of deception strategies into system operation, and (iii) a method to guide engineers in deception modeling. A tool prototype, a case study, and an experimental evaluation show encouraging results for the application of the approach in practice. Finally, a conceptual coverage mapping was developed to assess the expressivity of the deception modeling language created.
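One of the deceptive tactics named in the abstract above, deceitful responses, can be sketched as follows. This is an invented toy, not the thesis's architecture: a lookup endpoint answers a suspicious request with a plausible decoy record instead of an error, wasting the attacker's effort while revealing nothing real.

```java
// Hedged sketch of a "deceitful response" tactic: suspicious input
// gets a plausible but fake answer rather than an error message.
// The heuristic and the record formats are illustrative only.
public class DeceptiveEndpoint {
    static boolean looksSuspicious(String user) {
        // Toy heuristic standing in for real intrusion signals
        // (e.g., an IDS alert or a tripped honeytoken).
        return user.contains("' OR ");
    }

    static String lookup(String user) {
        if (looksSuspicious(user)) {
            // Feint: a decoy record that keeps the attacker engaged.
            return "{\"user\":\"guest\",\"role\":\"viewer\"}";
        }
        return "{\"user\":\"" + user + "\",\"role\":\"member\"}";
    }

    public static void main(String[] args) {
        System.out.println(lookup("alice"));
        System.out.println(lookup("x' OR 1=1"));
    }
}
```

Even this toy shows the side-effect problem the abstract mentions: a legitimate user who trips the heuristic would silently receive wrong data, so such mechanisms need careful handling.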
Emerging approaches for data-driven innovation in Europe: Sandbox experiments on the governance of data and technology
Europe’s digital transformation of the economy and society is one of the priorities of the current Commission and is framed by the European strategy for data. This strategy aims to create a single market for data through the establishment of a common European data space, based in turn on domain-specific data spaces in strategic sectors such as environment, agriculture, industry, health, and transportation. Acknowledging the key role that emerging technologies and innovative approaches for data sharing and use can play in making European data spaces a reality, this document presents a set of experiments that explore emerging technologies and tools for data-driven innovation and that delve into the socio-technical factors and forces at play in data-driven innovation. The experimental results shed light on lessons learned and practical recommendations towards the establishment of European data spaces.
Open-source software product line extraction processes: the ArgoUML-SPL and Phaser cases
Software Product Lines (SPLs) are rarely developed from scratch. Commonly, they emerge from one product when there is a need to create tailored variants, or from existing variants created in an ad hoc way once their separate maintenance and evolution become challenging. Despite the vast literature on re-engineering systems into SPLs and related technical approaches, there is a lack of detailed analysis of the process itself and the effort involved. In this paper, we provide and analyze empirical data on the extraction processes of two open-source case studies, namely ArgoUML and Phaser. Both cases emerged from the transition of a monolithic system into an SPL. The analysis relies on information mined from the version control history of their respective source-code repositories and on discussions with developers who took part in the process. Unlike previous works, which focused mostly on the structural results of the final SPL, the contribution of this study is an in-depth characterization of the processes. With this work, we aim to provide a deeper understanding of the strategies for SPL extraction and their implications. Our results indicate that the source-code changes can range from almost a fourth to over half of the total lines of code. Developers may or may not use branching strategies for feature extraction. Additionally, the problems faced during the extraction process may be due to lack of tool support, the complexity of managing feature dependencies, and issues with feature constraints. We have made the datasets and analysis scripts of both case studies publicly available to serve as a baseline for extractive SPL adoption research and practice.
This research was partially funded by CNPq, grant no. 408356/2018-9; FAPPR, grant no. 51435; and FAPERJ PDR-10 Fellowship 202073/2020. Open access funding provided by Johannes Kepler University Linz.
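Feature extraction of the kind studied above typically ends with optional behavior guarded by a variability mechanism. The sketch below is a hedged illustration of that idea only; the class, flag, and feature names are invented and are not quoted from ArgoUML-SPL or Phaser.

```java
// Hedged sketch of a simple variability mechanism after feature
// extraction: an optional feature's behavior sits behind a flag,
// so product variants can include or exclude it at build time.
public class DiagramExporter {
    // In a real SPL this would come from the variant's build
    // configuration rather than being hard-coded here.
    static final boolean FEATURE_PDF_EXPORT = true;

    public String export(String diagram) {
        if (FEATURE_PDF_EXPORT) {
            return "pdf:" + diagram;   // optional feature included
        }
        return "png:" + diagram;       // base behavior only
    }

    public static void main(String[] args) {
        System.out.println(new DiagramExporter().export("classes")); // prints pdf:classes
    }
}
```

Flags like this are the simplest mechanism; the feature-dependency and feature-constraint problems the paper reports arise when many such guarded features interact.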
Volume II Acquisition Research Creating Synergy for Informed Change, Thursday 19th Annual Acquisition Research Proceedings
Proceedings. Approved for public release; distribution is unlimited.
Automatic generation of software interfaces for supporting decisionmaking processes. An application of domain engineering & machine learning
Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction.
Information dashboards offer a software solution for analyzing large volumes of data visually to identify patterns and relations and to make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools.
This Ph.D. thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others.
Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards.
In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and it has been the backbone of the subsequent generative pipeline for these tools.
The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and it has served as a framework to conceptualize and instantiate information dashboards in different domains.
In terms of practical applications, the focus has been placed on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system.
Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model in theoretical and practical dimensions and domains is itself a result. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
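The model-to-code step described above can be sketched in miniature: a dashboard is first captured as a model, then a transformation emits the final artifact. The record types and output format below are invented for illustration; the thesis's meta-model is far richer than this.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hedged sketch of model-driven dashboard generation: the model
// abstracts the dashboard's components, and a model-to-text
// transformation turns it into a concrete artifact (here, HTML).
record Visualization(String kind, String dataField) {}
record DashboardModel(String title, List<Visualization> charts) {}

public class DashboardGenerator {
    // Model-to-text transformation: one div per visualization.
    static String generate(DashboardModel m) {
        String body = m.charts().stream()
            .map(v -> "  <div class=\"" + v.kind() + "\" data-field=\""
                      + v.dataField() + "\"></div>")
            .collect(Collectors.joining("\n"));
        return "<h1>" + m.title() + "</h1>\n" + body;
    }

    public static void main(String[] args) {
        DashboardModel m = new DashboardModel("Employment KPIs",
            List.of(new Visualization("bar-chart", "hires")));
        System.out.println(generate(m));
    }
}
```

Because the generator only reads the model, tailoring a dashboard to a new user or domain reduces to building a different model instance, which is the personalization payoff the thesis pursues.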