
    Challenges in Validating FLOSS Configuration

    Developers invest much effort into validating configuration during startup of free/libre and open source software (FLOSS) applications. Nevertheless, hardly any tools exist to validate configuration files and detect misconfigurations earlier. This paper aims to understand the challenges of providing better tools for configuration validation. We use a mixed methodology: (1) we analyzed 2,683 run-time configuration accesses in the source code of 16 applications comprising 50 million lines of code; (2) we conducted a questionnaire survey completed by 162 FLOSS contributors. We report our experiences of building up a FLOSS community that tackles these issues by unifying configuration validation with an external configuration access specification. We discovered that information necessary for validation is often missing in the applications and that FLOSS developers dislike dependencies on external packages for such validations.
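The external-specification idea above can be illustrated with a minimal sketch: a hypothetical specification maps each configuration key to a type and a validity predicate, and a validator reports misconfigurations before the application starts. The spec format and all names here are illustrative assumptions, not the paper's actual tooling.

```python
# Hypothetical external configuration specification: each key maps to an
# expected type and a validity predicate. Illustrative only.
SPEC = {
    "port":    {"type": int, "check": lambda v: 1 <= v <= 65535},
    "workers": {"type": int, "check": lambda v: v >= 1},
    "host":    {"type": str, "check": lambda v: len(v) > 0},
}

def validate(config: dict) -> list:
    """Return a list of misconfiguration messages (empty if valid)."""
    errors = []
    for key, rule in SPEC.items():
        if key not in config:
            errors.append(f"missing key: {key}")
        elif not isinstance(config[key], rule["type"]):
            errors.append(f"{key}: expected {rule['type'].__name__}")
        elif not rule["check"](config[key]):
            errors.append(f"{key}: value {config[key]!r} out of range")
    return errors
```

Because the spec lives outside the application, such checks can run before startup, catching misconfigurations earlier than in-application validation code.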

    Rasa-ptbr-boilerplate: a FLOSS project that enables Brazilian Portuguese chatbot development by non-experts

    Undergraduate final project (Trabalho de Conclusão de Curso), Universidade de Brasília, Faculdade UnB Gama (FGA), Software Engineering, 2019. Chatbots have the ability to talk to people by imitating human behavior. Currently, chatbots are able to perform simple tasks, such as answering questions about a particular context, as well as complex tasks, such as complete home management. However, the development of a chatbot project requires a full team of many experts, which can consume time and resources. It is common for chatbot projects to have similar software requirements and to differ only in the domain of the specific solution, which could result in the reuse of open source software (OSS) related to chatbots. In this work, we examine how chatbot projects can benefit from reuse at the project level (black-box reuse). We show that it is possible to strategically combine the architecture and dialogues with the use of the CRISP-DM process model in new contexts and for new conversational purposes. The main contribution of this work is the presentation of a chatbot project called Rasa-ptbr-boilerplate, with configurations and integrations of technologies aimed at reuse, so that non-specialists are able to develop a chatbot as a black box.

    Raising the ClaSS of Streaming Time Series Segmentation

    Ubiquitous sensors today emit high-frequency streams of numerical measurements that reflect properties of human, animal, industrial, commercial, and natural processes. Shifts in such processes, e.g. those caused by external events or internal state changes, manifest as changes in the recorded signals. The task of streaming time series segmentation (STSS) is to partition the stream into consecutive variable-sized segments that correspond to states of the observed processes or entities. The partitioning operation itself must be performant enough to cope with the input frequency of the signals. We introduce ClaSS, a novel, efficient, and highly accurate algorithm for STSS. ClaSS assesses the homogeneity of potential partitions using self-supervised time series classification and applies statistical tests to detect significant change points (CPs). In our experimental evaluation using two large benchmarks and six real-world data archives, we found ClaSS to be significantly more precise than eight state-of-the-art competitors. Its space and time complexity is independent of segment sizes and linear only in the sliding window size. We also provide ClaSS as a window operator with an average throughput of 538 data points per second for the Apache Flink streaming engine.
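ClaSS itself scores candidate change points with self-supervised time series classification; the sketch below only illustrates the simpler underlying idea of scanning a sliding window for the best split, with a crude normalized mean-difference score standing in for the classification-based score. Function names and the margin parameter are illustrative assumptions.

```python
import statistics

def score_split(window, i):
    """Score candidate change point i by the absolute difference of the
    two segment means, normalized by the window's standard deviation
    (a crude stand-in for ClaSS's self-supervised classification score)."""
    left, right = window[:i], window[i:]
    pooled = statistics.pstdev(window) or 1.0
    return abs(statistics.fmean(left) - statistics.fmean(right)) / pooled

def best_change_point(window, margin=5):
    """Return the offset with the highest split score, keeping a margin
    so both candidate segments contain enough points."""
    candidates = range(margin, len(window) - margin)
    return max(candidates, key=lambda i: score_split(window, i))
```

In a streaming setting this scan would run per window as new points arrive, with a statistical test deciding whether the best-scoring split is a significant change point.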

    An environment for sustainable research software in Germany and beyond: current state, open challenges, and call for action

    Research software has become a central asset in academic research. It optimizes existing research methods and enables new ones, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable in order to understand, replicate, reproduce, and build upon existing research or to conduct new research effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports sustainability. Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal of ensuring that software-driven research is valid, reproducible, and sustainable, and that software is recognized as a first-class citizen in research. This paper is the outcome of two workshops run in Germany in 2019: deRSE19, the first International Conference of Research Software Engineers in Germany, and a dedicated DFG-supported follow-up workshop in Berlin.

    Automated Implementation of Windows-related Security-Configuration Guides

    Hardening is the process of configuring IT systems to ensure the security of the systems' components and the data they process or store. The complexity of contemporary IT infrastructures, however, renders manual security hardening and maintenance a daunting task. In many organizations, security-configuration guides expressed in SCAP (the Security Content Automation Protocol) are used as a basis for hardening, but these guides by themselves provide no means for automatically implementing the required configurations. In this paper, we propose an approach to automatically extract the relevant information from publicly available security-configuration guides for Windows operating systems using natural language processing. In a second step, the extracted information is verified against the available settings stored in the Windows Administrative Template files, in which the majority of Windows configuration settings are defined. We show that our implementation of this approach can extract and implement 83% of the rules without any manual effort and 96% with minimal manual effort. Furthermore, we conduct a study with 12 state-of-the-art guides consisting of 2,014 rules with automatic checks and show that our tooling can implement at least 97% of them correctly. We have thus significantly reduced the effort of securing systems based on existing security-configuration guides.
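As a rough illustration of the extraction step, the sketch below pulls a setting name and target value out of a guide rule written in one fixed phrasing. The real approach uses natural language processing and verifies the result against ADMX Administrative Template files; the regular expression and the rule wording here are assumptions for demonstration only.

```python
import re

# Hypothetical fixed phrasing of a guide rule: set '<setting>' to <value>.
# Real guides vary far more, which is why the paper applies NLP instead.
RULE_PATTERN = re.compile(
    r"set ['\"](?P<setting>[^'\"]+)['\"] to (?P<value>\w+)",
    re.IGNORECASE,
)

def extract_setting(rule_text):
    """Return (setting, value) if the rule matches the pattern, else None."""
    m = RULE_PATTERN.search(rule_text)
    return (m.group("setting"), m.group("value")) if m else None
```

A second stage would then look the extracted setting name up in the Administrative Template definitions to verify it exists before applying the value.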

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible, and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error-prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
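A minimal sketch of the metrics-based flavour of such detection, with illustrative metrics and thresholds that are assumptions of this sketch (not Marinescu's published detection strategies or the paper's heuristics): a data class is flagged when its public interface is dominated by accessors, and a god class when it is both complex and reaches into other classes' data.

```python
def looks_like_data_class(n_accessors, n_public_methods):
    """Flag a class whose public interface is dominated by getters/setters.
    The 0.8 accessor ratio is an illustrative threshold."""
    if n_public_methods == 0:
        return False
    return n_accessors / n_public_methods > 0.8

def looks_like_god_class(weighted_method_count, foreign_data_accesses):
    """Flag a large, complex class that accesses other classes' data.
    Both cutoffs are illustrative, not published values."""
    return weighted_method_count > 47 and foreign_data_accesses > 5
```

In a real tool these metrics would be computed from the parsed source (e.g. the Eclipse JDT AST), and the two detectors would typically be reported together, since the smells tend to co-occur.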

    The experience as a document: designing for the future of collaborative remembering in digital archives

    How does it feel when we remember together online? Who gets to say what is worth remembering? To understand how the user experience of participation affects the formation of collective memories in online environments, it is first important to consider how the notion of memory has been transformed under the influence of the digital revolution. I aim to contribute to the field of User Experience (UX) research by theorizing on the felt experience of users from a memory perspective, taking into consideration aspects linked to both personal and collective memories in the context of connected environments. Harassment and hate speech in connected conversational environments are especially targeted at women and underprivileged communities, which has become a problem for digital archives of vernacular creativity (Burgess, J. E. 2007) such as YouTube, Twitter, Reddit, and Wikipedia. An evaluation of the user experience of underprivileged communities in creative archives such as Wikipedia indicates the urgency of building a feminist space where women and queer folks can focus on knowledge production and learning without being harassed. The theoretical models and designs that I propose are the result of a series of prototype tests and case studies focused on cognitive tools for a mediated human memory operating inside transactive memory systems. With them, I aim to imagine the means by which feminist protocols for UX design and research can assist in the building and maintenance of the archive as a safe/brave space. Working with perspectives from media theory, memory theory, and gender studies, and centering the user experience of participation for women, queer folks, people of colour (POC), and other vulnerable and underrepresented communities as the main focus of inquiry, my research takes an interdisciplinary approach to interrogate how online misogyny and other forms of abuse are perceived by communities placed outside the center of hegemonic normativity, and how the user experience of online abuse affects the formation of collective memories in online environments.