    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers representing current work in the community, organized across four process axes of traceability practice. The sessions covered topics including Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.

    Transparency in Complex Computational Systems

    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s..

    Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS

    GROMACS is a widely used package for biomolecular simulation, and over the last two decades it has evolved from small-scale efficiency to advanced heterogeneous acceleration and multi-level parallelism targeting some of the largest supercomputers in the world. Here, we describe some of the ways we have been able to realize this through the use of parallelization on all levels, combined with a constant focus on absolute performance. Release 4.6 of GROMACS uses SIMD acceleration on a wide range of architectures, GPU offloading acceleration, and both OpenMP and MPI parallelism within and between nodes, respectively. The recent work on acceleration made it necessary to revisit the fundamental algorithms of molecular simulation, including the concept of neighbor searching, and we discuss the present and future challenges we see for exascale simulation, in particular a very fine-grained task parallelism. We also discuss the software management, code peer review, and continuous integration testing required for a project of this complexity. (Comment: EASC 2014 conference proceedings)
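
    The hybrid scheme described above (MPI parallelism between nodes, OpenMP threads within a node) can be illustrated with a minimal sketch. This is not GROMACS code: the particle count, the placeholder interaction kernel, and all variable names are illustrative assumptions, and only the standard MPI and OpenMP calls are real.

        /* Minimal sketch of hybrid MPI+OpenMP parallelism (not GROMACS code). */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        #define N_LOCAL 1024   /* particles owned by this rank (illustrative) */

        int main(int argc, char **argv)
        {
            int provided, rank, nranks;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            double local_energy = 0.0;

            /* Intra-node parallelism: OpenMP threads share the particle loop. */
            #pragma omp parallel for reduction(+:local_energy)
            for (int i = 0; i < N_LOCAL; i++) {
                local_energy += 1.0e-3 * i;   /* placeholder for a real interaction kernel */
            }

            /* Inter-node parallelism: combine partial results across MPI ranks. */
            double total_energy = 0.0;
            MPI_Allreduce(&local_energy, &total_energy, 1, MPI_DOUBLE,
                          MPI_SUM, MPI_COMM_WORLD);

            if (rank == 0)
                printf("ranks=%d total_energy=%f\n", nranks, total_energy);

            MPI_Finalize();
            return 0;
        }

    A sketch like this compiles with an MPI wrapper plus the OpenMP flag (e.g. mpicc -fopenmp); production codes such as GROMACS add domain decomposition, neighbor lists, SIMD kernels, and GPU offload on top of this skeleton.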

    Knowledge Tracing: A Review of Available Technologies

    As a student modeling technique, knowledge tracing is widely used by various intelligent tutoring systems to infer and trace an individual’s knowledge state during the learning process. In recent years, various models have been proposed to obtain accurate and easy-to-interpret results. To make sense of the wide knowledge tracing (KT) modeling landscape, this paper conducts a systematic review to provide a detailed and nuanced discussion of relevant KT techniques from the perspective of assumptions, data, and algorithms. The results show that most existing KT models consider only a fragment of the assumptions that relate to the knowledge components within items and students’ cognitive processes. Almost all types of KT models take “quiz data” as input, although such data are insufficient to reflect a clear picture of students’ learning process. Dynamic Bayesian networks, logistic regression, and deep learning are the main algorithms used by various knowledge tracing models. Some open issues are identified based on the analysis of the reviewed works, and potential future research directions are discussed.
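
    As a concrete example of the dynamic-Bayesian-network family mentioned above, classic Bayesian Knowledge Tracing (BKT) updates the probability that a student has mastered a skill after each answer. The sketch below is a minimal illustration; the slip, guess, transit, and prior values are made-up assumptions, not figures from the review.

        /* Minimal sketch of a classic Bayesian Knowledge Tracing (BKT) update. */
        #include <stdio.h>

        /* Posterior probability of mastery after observing one answer,
         * followed by the chance of learning the skill at this step. */
        static double bkt_update(double p_mastery, int correct,
                                 double slip, double guess, double transit)
        {
            double posterior;
            if (correct)
                posterior = p_mastery * (1.0 - slip) /
                            (p_mastery * (1.0 - slip) + (1.0 - p_mastery) * guess);
            else
                posterior = p_mastery * slip /
                            (p_mastery * slip + (1.0 - p_mastery) * (1.0 - guess));
            return posterior + (1.0 - posterior) * transit;
        }

        int main(void)
        {
            double p = 0.2;                 /* prior probability of mastery (illustrative) */
            int answers[] = {1, 0, 1, 1};   /* toy sequence of quiz responses */
            for (int i = 0; i < 4; i++) {
                p = bkt_update(p, answers[i], 0.1, 0.2, 0.15);
                printf("after item %d: P(mastered)=%.3f\n", i + 1, p);
            }
            return 0;
        }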

    A Review of Data Mining in Personalized Education: Current Trends and Future Prospects

    Personalized education, tailored to individual student needs, leverages educational technology and artificial intelligence (AI) in the digital age to enhance learning effectiveness. The integration of AI in educational platforms provides insights into academic performance, learning preferences, and behaviors, optimizing the personal learning process. Driven by data mining techniques, it not only benefits students but also provides educators and institutions with tools to craft customized learning experiences. To offer a comprehensive review of recent advancements in personalized educational data mining, this paper focuses on four primary scenarios: educational recommendation, cognitive diagnosis, knowledge tracing, and learning analysis. This paper presents a structured taxonomy for each area, compiles commonly used datasets, and identifies future research directions, emphasizing the role of data mining in enhancing personalized education and paving the way for future exploration and innovation. (Comment: 25 pages, 5 figures)
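
    To make one of the four scenarios above concrete, cognitive diagnosis is commonly built on item response theory. The sketch below shows a one-parameter (Rasch) response model; the ability and difficulty values are illustrative assumptions, not data from the survey.

        /* Minimal sketch of a one-parameter IRT (Rasch) model for cognitive diagnosis. */
        #include <math.h>
        #include <stdio.h>

        /* Probability that a student with ability theta answers an item of
         * difficulty b correctly. */
        static double rasch_prob(double theta, double b)
        {
            return 1.0 / (1.0 + exp(-(theta - b)));
        }

        int main(void)
        {
            double theta = 0.5;                       /* estimated student ability (illustrative) */
            double difficulty[] = {-1.0, 0.0, 1.2};   /* toy item difficulties */
            for (int i = 0; i < 3; i++)
                printf("item %d: P(correct)=%.3f\n", i + 1,
                       rasch_prob(theta, difficulty[i]));
            return 0;
        }

    In practice the ability and difficulty parameters are estimated from students' response logs rather than fixed by hand; compile with -lm for the math library.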

    Diversities in Diversity: Exploring Moroccan Migrants’ Livelihood in Genoa

    It is a largely accepted idea that complexity and recent global phenomena have generated a multi-layered diversification process in Western societies. Migration phenomena are largely responsible for this process, both in receiving European societies and in the original sending countries. Migration has been and continues to be a ubiquitous human experience. Yet, while this fact has aided the understanding of the world as something other than a mosaic of distinct cultural spaces with clearly demarcated borders, it has not decreased the incomprehension, fear and suspicion with which non-European migrants are often greeted within the industrialised cities of Europe. This article deals with one aspect of this process that seems to be quite underestimated in the media, public opinion and academia: the idea that “ethnicity” can be approached, explored and investigated as a heterogeneous and multi-faceted form of diversity itself. This is what can be defined as “diversities within diversity”. Starting from the presentation of empirical research conducted in Genoa, it will be possible to analyse these phenomena at two different levels, namely in terms of methods and of methodology. By focusing on the idea of livelihood and employing an approach based on “Tracing” techniques, different ways of acting and being a Moroccan migrant in Genoa will be revealed, presented and discussed. This method newly integrates quantitative and qualitative information. It allows us to analyse the experience of livelihood in a way that reveals the simultaneous existence of many underlying, invisible and unconscious social constructions as well as visible, concrete and conscious expressions of everyday life. Disclosing how the same people in the same local context produce different “adaptive” strategies and lifestyles will make it possible to outline a potential conceptual and methodological framework of reference based on an open/close principle. In this case, the ideas of openness and closeness are understood as a dialectical, two-sided process: it is not only a matter of how systems can be defined as open or closed in themselves, but also of how the encounter and interplay of many different systems (the generation of diversity) establish the conditions and limits within which different individuals can reproduce their culture as social actors (the production of diversities). After discussing the methodological implications of this approach, it will be possible to draw some final theoretical considerations. If we believe that new ways of investigating social phenomena are determinant in the way we describe, analyse, explain and understand their complexity, we should recognize that not only may theory generate and define what we call social reality, but also vice versa. Approaching the world out there in new ways might result in rethinking and adjusting the conceptual taxonomies that guide social scholars in their attempts to grasp social reality. This principle becomes crucial if we want the social sciences to be heuristically oriented, in other words if we want to develop the capacity to hand back sound analytical readings and comparisons of social phenomena as well as useful recommendations for policy makers.
    Keywords: Migration, Italy, Morocco, Methodology, Tracing, Open/close Model

    The role of concurrency in an evolutionary view of programming abstractions

    In this paper we examine how concurrency has been embodied in mainstream programming languages. In particular, we rely on the evolutionary metaphor borrowed from biology to discuss major historical landmarks and crucial concepts that shaped the development of programming languages. We examine the general development process, occasionally delving deeper into particular languages, trying to uncover evolutionary lineages related to specific programming traits. We mainly focus on concurrency, discussing the different abstraction levels involved in present-day concurrent programming and emphasizing the fact that they correspond to different levels of explanation. We then comment on the role of theoretical research in the quest for suitable programming abstractions, recalling the importance of changing the working framework and the way of looking at things every so often. This paper is not meant to be a survey of modern mainstream programming languages: it would be very incomplete in that sense. It aims instead at making a number of observations and connecting them under an evolutionary perspective, in order to grasp a unifying, but not simplistic, view of the programming language development process.
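
    As a minimal illustration of the lowest abstraction level such a discussion covers, the sketch below uses raw shared-memory threads with an explicit lock; higher-level abstractions such as actors, futures, or data-parallel constructs exist precisely to hide this kind of detail. The thread count and workload are illustrative assumptions, not material from the paper.

        /* Minimal sketch of low-level shared-memory concurrency with explicit locking. */
        #include <pthread.h>
        #include <stdio.h>

        #define N_THREADS 4
        #define N_INCREMENTS 100000

        static long counter = 0;
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *worker(void *arg)
        {
            (void)arg;
            for (int i = 0; i < N_INCREMENTS; i++) {
                pthread_mutex_lock(&lock);    /* the programmer manages synchronization by hand */
                counter++;
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t threads[N_THREADS];
            for (int i = 0; i < N_THREADS; i++)
                pthread_create(&threads[i], NULL, worker, NULL);
            for (int i = 0; i < N_THREADS; i++)
                pthread_join(threads[i], NULL);
            printf("counter=%ld (expected %d)\n", counter, N_THREADS * N_INCREMENTS);
            return 0;
        }

    Compile with gcc -pthread; without the mutex the increments race, which is exactly the kind of detail higher-level concurrency abstractions are designed to take off the programmer's hands.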