
    Adopting Automated Bug Assignment in Practice: A Longitudinal Case Study at Ericsson

    The continuous inflow of bug reports is a considerable challenge in large development projects. Inspired by contemporary work on mining software repositories, we designed a prototype bug assignment solution based on machine learning in 2011-2016. The prototype evolved into an internal Ericsson product, TRR, in 2017-2018. TRR's first bug assignment without human intervention happened in April 2019. Our study evaluates the adoption of TRR within its industrial context at Ericsson. Moreover, we investigate 1) how TRR performs in the field, 2) what value TRR provides to Ericsson, and 3) how TRR has influenced the ways of working. We conduct an industrial case study combining interviews with TRR stakeholders, minutes from sprint planning meetings, and bug tracking data. The data analysis includes thematic analysis, descriptive statistics, and Bayesian causal analysis. TRR is now an incorporated part of the bug assignment process. Considering the abstraction levels of the telecommunications stack, high-level modules are more positive, while low-level modules experienced some drawbacks. On average, TRR automatically assigns 30% of the incoming bug reports with an accuracy of 75%. Auto-routed TRs are resolved around 21% faster within Ericsson, and TRR has saved highly seasoned engineers many hours of work. Indirect effects of adopting TRR include process improvements, process awareness, increased communication, and higher job satisfaction. TRR has saved time at Ericsson, but the adoption of automated bug assignment was more intricate compared to similar endeavors reported from other companies. We primarily attribute the difference to the very large size of the organization and the complex products. Key facilitators in the successful adoption include a gradual introduction, product champions, and careful stakeholder analysis.
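    To make the idea of ML-based bug routing concrete, the sketch below shows a minimal, hypothetical triage pipeline in Python using scikit-learn: report text is vectorised, a linear classifier predicts an owning module, and a confidence threshold decides between auto-routing and manual triage. It illustrates the general technique only, not TRR or Ericsson's implementation; all data and names are invented.

```python
# Hypothetical sketch of confidence-thresholded bug routing (not TRR).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: bug report text and the module that resolved it.
reports = ["baseband crash after handover", "GUI freezes on login",
           "handover timer expires too early", "login page renders incorrectly"]
modules = ["radio", "oss-ui", "radio", "oss-ui"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(reports, modules)

def triage(report_text, threshold=0.5):
    """Auto-route only when the classifier is confident enough."""
    confidence = router.predict_proba([report_text])[0].max()
    if confidence >= threshold:          # in practice the threshold is tuned
        return router.predict([report_text])[0]
    return None                          # fall back to manual assignment

print(triage("crash during handover in baseband"))
```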

    Learning preferences for personalisation in a pervasive environment

    With ever-increasing accessibility to technological devices, services and applications, there is also an increasing burden on the end user to manage and configure such resources. This burden will continue to grow as the vision of pervasive environments, with ubiquitous access to a plethora of resources, continues to become a reality. It is key that appropriate mechanisms to relieve the user of such burdens are developed and provided. These mechanisms include personalisation systems that can adapt resources on behalf of the user in an appropriate way based on the user's current context and goals. The key knowledge base of many personalisation systems is the set of user preferences that indicate which adaptations should be performed under which contextual situations. This thesis investigates the challenges of developing a system that can learn such preferences by monitoring user behaviour within a pervasive environment. Based on the findings of related work and experience from EU project research, several key design requirements for such a system are identified. These requirements are used to drive the design of a system that can learn accurate and up-to-date preferences for personalisation in a pervasive environment. A standalone prototype of the preference learning system has been developed. In addition, the preference learning system has been integrated into a pervasive platform developed through an EU research project. The preference learning system is fully evaluated in terms of its machine learning performance and also its utility in a pervasive environment with real end users.
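    As a rough illustration of the kind of preference learning described above (not the thesis prototype), the Python sketch below mines simple context-to-action rules from a log of observed user behaviour by keeping the most frequent action per context; a personalisation system could then apply the matching rule on the user's behalf. All contexts and actions are invented.

```python
# Hypothetical sketch of learning preference rules from behaviour logs.
from collections import Counter, defaultdict

history = [  # (observed context, action the user chose)
    ({"location": "office", "time": "morning"}, "phone=silent"),
    ({"location": "office", "time": "morning"}, "phone=silent"),
    ({"location": "home", "time": "evening"}, "lights=dim"),
    ({"location": "office", "time": "morning"}, "phone=loud"),
]

counts = defaultdict(Counter)
for context, action in history:
    counts[frozenset(context.items())][action] += 1

# The most frequent action per context becomes a candidate preference rule.
preferences = {ctx: acts.most_common(1)[0][0] for ctx, acts in counts.items()}

def personalise(context):
    """Return the learned action for this context, if any rule matches."""
    return preferences.get(frozenset(context.items()))

print(personalise({"location": "office", "time": "morning"}))  # phone=silent
```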

    Co-design of Security Aware Power System Distribution Architecture as Cyber Physical System

    The modern smart grid involves deep integration between measurement nodes, communication systems, artificial intelligence, power electronics, and distributed resources. On one hand, this type of integration can dramatically improve grid performance and efficiency, but on the other, it can also introduce new types of vulnerabilities to the grid. To obtain the best performance while minimizing the risk of vulnerabilities, the physical power system must be designed as a security-aware system. In this dissertation, an interoperability and communication framework for microgrid control and cyber-physical system enhancements is designed and implemented, taking into account cyber and physical security aspects. The proposed data-centric interoperability layer provides a common data bus and a resilient control network for seamless integration of distributed energy resources. In addition, a synchronized measurement network and advanced metering infrastructure were developed to provide real-time monitoring for active distribution networks. A hybrid hardware/software testbed environment was developed to represent the smart grid as a cyber-physical system through hardware- and software-in-the-loop simulation methods. It also provides a flexible interface for remote integration and experimentation with attack scenarios. The work in this dissertation utilizes communication technologies to enhance the performance of DC microgrids and distribution networks by extending the application of GPS synchronization to DC networks. GPS synchronization allows distributed DC-DC converters to operate as an interleaved converter system. Along with GPS synchronization, a carrier-extraction synchronization technique was developed to improve the system's security and reliability in the case of GPS signal spoofing or jamming. To improve the integration of the microgrid with the utility system, new synchronization and islanding detection algorithms were developed. The developed algorithms overcome problems of SCADA- and PMU-based islanding detection methods, such as communication failures and frequency-stability issues. In addition, a real-time energy management system with online optimization was developed to manage the energy resources within the microgrid. Security and privacy were also addressed at both the cyber and physical levels. For the physical design, two techniques were developed to address physical privacy issues by changing the current and electromagnetic signatures. For the cyber level, a security mechanism for IEC 61850 GOOSE messages was developed to address the security shortcomings in the standard.
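    One mechanism mentioned above, interleaving distributed DC-DC converters from a shared GPS time base, can be sketched in a few lines. The Python snippet below is purely illustrative (function and parameter names are assumptions, not the dissertation's code): each converter derives its PWM carrier phase from the common GPS timestamp, offset by k/N of the switching period, so the converters stay interleaved without exchanging any messages.

```python
# Illustrative sketch of GPS-based interleaving of N distributed converters.
def carrier_phase(gps_time_s, f_sw_hz, converter_index, n_converters):
    """Phase (0..1) of converter k's PWM carrier at a GPS-aligned instant."""
    period = 1.0 / f_sw_hz
    offset = converter_index * period / n_converters   # interleaving offset k*T/N
    return ((gps_time_s + offset) % period) / period

# Three converters switching at 20 kHz: sampled at the same GPS instant,
# their carriers are spaced one third of a period apart.
for k in range(3):
    print(k, round(carrier_phase(12.000010, 20e3, k, 3), 3))
```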

    Valuing diversity and establishing an approach to supporting excluded groups

    Minority students and minority employees in Higher Engineering Education experience inequality. For academic staff, these inequalities affect their personal development and career progression. For engineering education to continue to grow and to thrive as a professional discipline, we must encourage diversity within both the student and staff populations. This paper cautions against a simplistic notion of diversity; rather, a truly diverse culture within engineering is needed, one in which there is diversity of opportunity, diversity of thought, and diversity of experience. To enable a more inclusive environment to flourish, we must understand the scale of the inequalities that exist. However, this paper demonstrates that there are significant limitations to the current diversity data within the UK, which leave room for under-reporting and over-generalising. In addition, there are cultural challenges that further increase the likelihood of non-disclosure and a lack of self-reporting. This paper proposes that further research is needed into the true extent of the lack of diversity within engineering, and describes one example of a ‘thought experiment’ conducted by the researchers to start unpacking the data and highlighting the scale of the issue.

    public class Graphic_Design implements Code { // Yes, but how? }: An investigation towards bespoke Creative Coding programming courses in graphic design education

    Situated at the intersection of graphic design, computer science, and pedagogy, this dissertation investigates how programming is taught within graphic design education. The research adds to the understanding of the process, practice, and challenges associated with introducing an audience of visually inclined practitioners—who are often guided by instinct—to the formal and unforgiving world of syntax, algorithms, and logic. Motivating the research is a personal desire to contribute towards the development of bespoke, contextualized syllabi specifically designed to accommodate how graphic designers learn, understand, and use programming as an integral skill in their vocational practice.

    The initial literature review identifies a gap that needs to be filled to increase both practical and theoretical knowledge within the interdisciplinary field of computational graphic design. This gap concerns a lack of solid, empirically based epistemological frameworks for teaching programming to non-programmers in a visual context, partly caused by a dichotomy in the traditional pedagogical practices associated with teaching programming and graphic design, respectively. Based on this gap, the overarching research question posed in this dissertation is: “How should programming ideally be taught to graphic designers to account for how they learn and how they intend to integrate programming into their vocational practice?”

    A mixed-methods approach using both quantitative and qualitative analyses is taken to answer the research questions. The three papers comprising the dissertation are all built on individual hypotheses that are subsequently used to define three specific research questions.

    Paper 1 performs a quantitative mapping of contemporary introductory programming courses taught in design schools to establish a broader understanding of their structure and content. The paper concludes that most courses are planned to favor programming concepts rather than graphic design concepts. The paper's findings can serve as a point of departure for a critical discussion among researchers and educators regarding the integration of programming in graphic design education.

    Paper 2 quantitatively assesses how the learning style profile of graphic design students compares with that of students in technical disciplines. The paper identifies a number of significant differences that call for a variety of pedagogic and didactic strategies to be employed by educators to effectively teach programming to graphic designers. Based on the results, specific recommendations are given.

    Paper 3 proposes a hands-on, experiential pedagogic method specifically designed to introduce graphic design students to programming. The method relies on pre-existing commercial graphic design specimens to contextualize programming within a domain familiar to graphic designers. The method was tested on the target audience, and observations on its use are reported. Qualitative evaluation of student feedback suggests the method is effective and well received.

    Additionally, twenty-four heuristics are provided that elaborate and extend the papers' findings by interweaving other relevant and influential sources encountered during the research project. Together, the literature review, the three papers, and the heuristics provide comprehensive and valuable theoretical and practical insights for both researchers and educators regarding key aspects of introducing programming as a creative practice in graphic design education.
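    As a hypothetical flavour of the specimen-based exercises discussed above (not material from the dissertation's actual courses), the Python sketch below recreates a simple grid-based poster layout in code and writes it to an SVG file, the kind of small, visual, rule-driven task that introduces loops and coordinates in a graphic design context.

```python
# Hypothetical creative-coding exercise: recreate a poster grid as SVG.
cols, rows, cell, gap = 4, 6, 80, 10
width = cols * cell + (cols + 1) * gap
height = rows * cell + (rows + 1) * gap

shapes = []
for r in range(rows):
    for c in range(cols):
        x = gap + c * (cell + gap)
        y = gap + r * (cell + gap)
        fill = "#e63946" if (r + c) % 3 == 0 else "#1d3557"  # simple visual rhythm
        shapes.append(f'<rect x="{x}" y="{y}" width="{cell}" height="{cell}" fill="{fill}"/>')

svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
       + "".join(shapes) + "</svg>")

with open("poster_grid.svg", "w") as f:
    f.write(svg)
```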

    Continuous Rationale Management

    Continuous Software Engineering (CSE) is a software life cycle model open to frequent changes in requirements or technology. During CSE, software developers continuously make decisions on the requirements and design of the software or the development process. They establish essential decision knowledge, which they need to document and share so that it supports the evolution of and changes to the software. The management of decision knowledge is called rationale management. Rationale management provides an opportunity to support the change process during CSE. However, rationale management is not well integrated into CSE. The overall goal of this dissertation is to provide workflows and tool support for continuous rationale management. The dissertation contributes an interview study with practitioners from industry, which investigates rationale management problems, current practices, and features that practitioners consider beneficial for continuous rationale management. Problems of rationale management in practice are threefold: First, documenting decision knowledge is intrusive to the development process and requires additional effort. Second, the large amount of distributed decision knowledge documentation is difficult to access and use. Third, the documented knowledge can be of low quality, e.g., outdated, which impedes its use. The dissertation contributes a systematic mapping study on recommendation and classification approaches to treat the rationale management problems. The major contribution of this dissertation is a validated approach for continuous rationale management consisting of the ConRat life cycle model extension and the comprehensive ConDec tool support. To reduce intrusiveness and additional effort, ConRat integrates rationale management activities into existing workflows, such as requirements elicitation, development, and meetings. ConDec integrates into standard development tools instead of providing a separate tool. ConDec enables lightweight capturing and use of decision knowledge from various artifacts and reduces the developers' effort through automatic text classification, recommendation, and nudging mechanisms for rationale management. To enable access to and use of distributed decision knowledge documentation, ConRat defines a knowledge model of decision knowledge and other artifacts. ConDec instantiates the model as a knowledge graph and offers interactive knowledge views with useful tailoring, e.g., transitive linking. To operationalize high quality, ConRat introduces the rationale backlog, a definition of done for knowledge documentation, and metrics for intra-rationale completeness and for decision coverage of requirements and code. ConDec implements these agile concepts for rationale management, together with a knowledge dashboard. ConDec also supports consistent changes through change impact analysis. The dissertation shows the feasibility, effectiveness, and user acceptance of ConRat and ConDec in six case study projects in an industrial setting. In addition, it comprehensively analyses the rationale documentation created in the projects. The validation indicates that ConRat and ConDec benefit CSE projects. Based on this dissertation, continuous rationale management should become a standard part of CSE, like automated testing or continuous integration.
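    To illustrate one of the metrics mentioned above, the Python sketch below computes a simplified decision coverage over a tiny, invented knowledge graph: a requirement counts as covered if a documented decision is reachable within a maximum link distance. This is an assumption-laden toy, not ConDec's data model or implementation.

```python
# Illustrative sketch of a decision coverage metric over a knowledge graph.
from collections import deque

graph = {  # undirected links between knowledge elements (invented data)
    "REQ-1": ["DEC-1"], "REQ-2": ["WORKITEM-7"], "REQ-3": [],
    "DEC-1": ["REQ-1"], "WORKITEM-7": ["REQ-2", "DEC-2"], "DEC-2": ["WORKITEM-7"],
}

def covered(element, max_distance=2):
    """True if a decision (DEC-*) is reachable within max_distance links."""
    seen, queue = {element}, deque([(element, 0)])
    while queue:
        node, dist = queue.popleft()
        if node.startswith("DEC-"):
            return True
        if dist < max_distance:
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
    return False

requirements = [e for e in graph if e.startswith("REQ-")]
coverage = sum(covered(r) for r in requirements) / len(requirements)
print(f"decision coverage: {coverage:.0%}")   # 2 of 3 requirements -> 67%
```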

    Compile-time support for thread-level speculation

    One of the main concerns of computer science is the study of the parallel capabilities of both programs and the processors that execute them. Several reasons make the development of techniques for automatic code parallelization highly desirable, among them the immense number of existing sequential programs already written, the complexity of parallel programming languages, and the expertise required to parallelize a code. However, the automatic parallelization mechanisms currently implemented in commercial compilers are unable to parallelize most loops in a code [1], due to the data dependences between them [2]. It is therefore necessary to search for new techniques, such as speculative parallelization [3-5], that exploit the potential parallel capabilities of current multiprocessor hardware and architectures. However, this and other techniques require the manual intervention of experienced programmers. Before offering alternative solutions, the parallelization capabilities of commercial compilers were evaluated, exposing the limitations of the automatic parallelization mechanisms they implement. The study reveals that these mechanisms only achieve a 19% speedup on average for the SPEC CPU2006 benchmarks [6], a result significantly lower than that obtained by speculative parallelization techniques [7]. However, speculative parallelization requires extensive manual modification of the code by programmers. This thesis addresses this problem by defining a new OpenMP clause [8], called "speculative", which allows the programmer to indicate which variables may lead to a dependence violation. In addition, this thesis proposes a compile-time system that, using the information about variable accesses provided by the OpenMP clauses, automatically adds all the code needed to manage the speculative execution of a program. This frees the programmer from modifying the code by hand, avoiding possible errors and a tedious task. The code generated by our system links against the speculative parallel execution library developed by Estebanez, García-Yagüez, Llanos and Gonzalez-Escribano [9,10].
    Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos)
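    The proposed clause itself cannot be shown without the thesis' compiler support, but the underlying idea of speculative loop parallelization can be sketched in plain Python. In the hypothetical example below (function names and the conflict-detection scheme are illustrative, not the thesis' runtime library [9,10]), loop chunks run in parallel against buffered state, writes are committed in the original order, and a chunk that read a location written by an earlier chunk is squashed and re-executed.

```python
# Hypothetical software sketch of thread-level speculation.
from concurrent.futures import ThreadPoolExecutor

def spec_parallel_for(shared, chunks, body):
    def run(chunk):
        snapshot = list(shared)              # speculative private view of memory
        reads, writes = set(), {}
        def load(i):
            reads.add(i)
            return writes.get(i, snapshot[i])
        def store(i, value):
            writes[i] = value
        for it in chunk:
            body(it, load, store)
        return reads, writes

    with ThreadPoolExecutor() as pool:       # run all chunks speculatively
        results = list(pool.map(run, chunks))

    committed = set()                        # locations written by earlier chunks
    for chunk, (reads, writes) in zip(chunks, results):
        if reads & committed:                # dependence violation: squash chunk
            reads, writes = run(chunk)       # re-execute on the committed state
        for i, value in writes.items():      # commit writes in original order
            shared[i] = value
        committed.update(writes)

data = list(range(16))
def body(i, load, store):
    # every eighth iteration reads its left neighbour, creating a
    # cross-chunk dependence that forces the second chunk to be squashed
    extra = load(i - 1) if i % 8 == 0 and i > 0 else 0
    store(i, 2 * load(i) + extra)

spec_parallel_for(data, [range(0, 8), range(8, 16)], body)
print(data)   # matches a sequential execution of the loop
```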