46 research outputs found

    The Love/Hate Relationship with the C Preprocessor: An Interview Study

    Get PDF
    The C preprocessor has received strong criticism in academia, among others regarding separation of concerns, error proneness, and code obfuscation, but is widely used in practice. Many (mostly academic) alternatives to the preprocessor exist, but have not been adopted in practice. Since developers continue to use the preprocessor despite all criticism and research, we ask how practitioners perceive the C preprocessor. We performed interviews with 40 developers, used grounded theory to analyze the data, and cross-validated the results with data from a survey among 202 developers, repository mining, and results from previous studies. In particular, we investigated four research questions related to why the preprocessor is still widely used in practice, common problems, alternatives, and the impact of undisciplined annotations. Our study shows that developers are aware of the criticism the C preprocessor receives, but use it nonetheless, mainly for portability and variability. Many developers indicate that they regularly face preprocessor-related problems and preprocessor-related bugs. The majority of our interviewees do not see any current C-native technologies that can entirely replace the C preprocessor. However, developers tend to mitigate problems with guidelines, even though those guidelines are not enforced consistently. We report the key insights gained from our study and discuss implications for practitioners and researchers on how to better use the C preprocessor to minimize its negative impact

    Un-preprocessing: Extended CPP that works with your tools

    Get PDF

    An approach to safely evolve preprocessor-based C program families.

    Get PDF
    Since the 1970s, the C preprocessor has been widely used in practice in a number of projects, including Apache, Linux, and Libssh, to tailor systems to different platforms and application scenarios. In academia, however, the preprocessor has received strong criticism since at least the early 90s. Researchers have criticized its lack of separation of concerns, its proneness to introduce subtle errors, and its obfuscation of the source code. To better understand the problems of using the C preprocessor, taking the perception of developers into account, we conducted 40 interviews and a survey among 202 developers. We found that developers deal with three common problems in practice: configuration-related bugs, combinatorial testing, and code comprehension. Developers aggravate these problems when using undisciplined directives (i.e., bad smells regarding preprocessor use), which are preprocessor directives that do not respect the syntactic structure of the source code.
To safely evolve preprocessor-based program families, we proposed strategies to detect configuration-related bugs and bad smells, and a set of 14 refactorings to remove bad smells. To better deal with exponential configuration spaces, our strategies use variability-aware analysis, which considers the entire set of possible configurations, and sampling, which allows reuse of C tools that consider only one configuration at a time to detect bugs. To propose a suitable sampling algorithm, we compared 10 algorithms with respect to effort (i.e., the number of configurations to test) and bug-detection capability (i.e., the number of bugs detected in the sampled configurations). Based on the results, we proposed a sampling algorithm with a useful balance between effort and bug-detection capability. We performed empirical studies using a corpus of 40 real-world C systems. We detected 128 configuration-related bugs, submitted 43 patches for bugs that had not yet been fixed, and developers accepted 65% of the patches. The results of our survey show that most developers prefer to use the refactored (i.e., disciplined) version of the code instead of the original code with undisciplined directives. Furthermore, developers accepted 21 (75%) out of 28 patches submitted to refactor undisciplined into disciplined directives. Our work presents useful findings for C developers during their development tasks, contributing to minimize the chances of introducing configuration-related bugs and bad smells, improve code comprehension, and guide developers in performing combinatorial testing.
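    To make "undisciplined" concrete, the invented example below shows a directive that cuts across statement structure, together with one possible disciplined refactoring of the kind the 14 refactorings perform; the function names and the CHECK_BOUNDS macro are hypothetical.

```c
/* Undisciplined: the #ifdef wraps only the if-condition, not a complete
   statement, so the file does not parse without first running cpp. */
void clear_first(int *data, int n) {
    int i = 0;
#ifdef CHECK_BOUNDS
    if (i < n)
#endif
    {
        data[i] = 0;
    }
}

/* Disciplined: each branch of the directive encloses complete statements,
   at the cost of duplicating the assignment. */
void clear_first_refactored(int *data, int n) {
    int i = 0;
#ifdef CHECK_BOUNDS
    if (i < n) {
        data[i] = 0;
    }
#else
    data[i] = 0;
#endif
}
```

    Once directives align with syntactic units like this, both variability-aware parsing and the sampling-based reuse of single-configuration tools become simpler.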

    Game Development using Design-by-Contract.

    Get PDF

    The Role of Complex Constraints in Feature Modeling: Master’s Thesis

    Get PDF
    Feature modeling is a method to compactly capture the commonality and variability of a software product line. Multiple feature modeling languages have been proposed that have evolved over the last decades to become more expressive in syntax and semantics. Most of today's languages can use arbitrary propositional formulas in cross-tree constraints, denoted as complex constraints, a mechanism enabling complete expressiveness. However, many of today's publications and feature-model applications target older, less expressive languages, due to their history and long domination in the product-line community. We present a study on the importance of complex constraints in feature modeling. Furthermore, to build a bridge between feature models using complex constraints and methods lacking support for complex constraints, we present a sound refactoring of complex constraints, discuss preconditions that must be met, and conduct empirical experiments on real-world feature models to evaluate its usefulness and scalability.
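    As a rough illustration of the distinction (feature names invented): simple constraint languages offer only binary requires/excludes edges between features, whereas a complex constraint is any other propositional formula over feature selections.

```c
#include <stdbool.h>

/* Simple cross-tree constraints, expressible in older languages: */
bool requires_(bool a, bool b) { return !a || b; }   /* a requires b */
bool excludes(bool a, bool b) { return !(a && b); }  /* a excludes b */

/* A complex constraint relating three features in one formula,
   (SSL && Compression) -> !Embedded,
   which no single requires/excludes edge can express. */
bool complex_ok(bool ssl, bool compression, bool embedded) {
    return !(ssl && compression) || !embedded;
}
```

    The refactoring studied in the thesis eliminates such formulas so that tools restricted to simple constraints can still process the model; one known approach introduces auxiliary abstract features to encode the formula structurally.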

    Semi-Automatic Deduction of Feature Localization during Software Development: Master's Thesis

    Get PDF
    Despite extensive research on software product lines in the last decades, ad-hoc clone-and-own development is still the dominant way of introducing variability to software systems. Therefore, the same issues for which software product lines were developed in the first place are still prevalent in clone-and-own development: fixing bugs consistently throughout clones and avoiding duplicate implementation effort is extremely difficult, as similarities and differences between variants are unknown. To remedy this, we enhance clone-and-own development with techniques from product-line engineering for targeted variant synchronisation, such that domain knowledge can be integrated stepwise and without obligation. Contrary to retroactive feature-mapping recovery (e.g., mining) techniques, we infer feature-to-code mappings directly during software development, when concrete domain knowledge is present. In this thesis, we focus on the first step towards targeted synchronisation between variants: the recording of feature mappings. By letting developers specify which feature they are working on, we derive feature mappings directly during software development. We ensure syntactic validity of feature mappings and variant synchronisation by implementing disciplined annotations through abstract syntax trees. To bridge the mismatch between change classification in the implementation and abstract layers, we synthesise semantic edits on abstract syntax trees. We show that our derivation can be used to reproduce variability-related real-world code changes, and we compare it to the feature-mapping derivation of the projectional variation control system VTS by Stanciulescu et al.
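    A minimal sketch of the recorded outcome (names invented, and only illustrating the idea, not the tool's actual output format): if the developer declares they are working on feature LOGGING while adding a statement, the derived disciplined annotation wraps the complete statement node of the abstract syntax tree, never a fragment of it.

```c
#include <stdio.h>

void save(const char *path) {
#ifdef LOGGING  /* derived feature-to-code mapping for the added statement */
    fprintf(stderr, "saving %s\n", path);
#endif
    /* ... feature-independent save logic ... */
}
```

    Because the annotation encloses a whole syntax-tree node, propagating or removing the feature across clones is a well-defined tree edit rather than a textual patch.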

    Sharpening Your Tools: Updating bulk_extractor for the 2020s

    Full text link
    Bulk_extractor is a high-performance digital forensics tool written in C++. Between 2018 and 2022 we updated the program from C++98 to C++17, performed a complete code refactoring, and adopted a unit test framework. The new version typically runs with 75% more throughput than the previous version, which we attribute to improved multithreading. We provide lessons and recommendations for other digital forensics tool maintainers.

    Self-Organizing Software Architectures

    Get PDF
    Examining engineering productivity is a source of improvement for the state of software engineering. We present two approaches to improving productivity: bottom-up modeling and self-configuring software components. Productivity, measured as the ability to produce correctly working software features with limited resources, is improved by performing fewer wasteful activities and by concentrating on the activities required to build sustainable software development organizations. Bottom-up modeling is a way to combine improved productivity with agile software engineering. Instead of focusing on tools and up-front planning, the models emerge as the requirements for the product are unveiled during a project. The idea is to make the modeling formalisms strong enough to be employed in code generation and as runtime models. This brings the benefits of model-driven engineering to agile projects, where such benefits have been rare. Self-configuring components are a development of bottom-up modeling: the notion of a source model is extended to incorporate the software entities themselves. Using computational reflection and introspection, dependent components of the software can be automatically updated to reflect changes in their dependencies. This improves maintainability, making software changes faster. The thesis contains a number of case studies explaining ways of applying the presented techniques. In addition to the case studies, an empirical validation with test subjects is presented to show the usefulness of the techniques.

    ImageJ2: ImageJ for the next generation of scientific image data

    Full text link
    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats, to scripting languages, to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained such that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.