533 research outputs found

    On the minimum leaf number of cubic graphs

    The minimum leaf number $\mathrm{ml}(G)$ of a connected graph $G$ is defined as the minimum number of leaves of the spanning trees of $G$. We present new results concerning the minimum leaf number of cubic graphs: we show that if $G$ is a connected cubic graph of order $n$, then $\mathrm{ml}(G) \le \frac{n}{6} + \frac{1}{3}$, improving on the best known result in [Inf. Process. Lett. 105 (2008) 164-169] and proving the conjecture in [Electron. J. Graph Theory and Applications 5 (2017) 207-211]. We further prove that if $G$ is also 2-connected, then $\mathrm{ml}(G) \le \frac{n}{6.53}$, improving on the best known bound in [Math. Program., Ser. A 144 (2014) 227-245]. We also present new conjectures concerning the minimum leaf number of several types of cubic graphs, together with examples showing that the bounds in the conjectures are best possible.
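    To make the definition concrete, here is a minimal brute-force sketch in Python (our own illustration, unrelated to the paper's proof techniques): it computes $\mathrm{ml}(G)$ for $K_4$, the smallest connected cubic graph, by enumerating all spanning trees and counting their leaves.

```python
from itertools import combinations

def is_spanning_tree(n, edges):
    """True iff the n-1 given edges connect vertices 0..n-1 without a cycle."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:       # adding this edge would close a cycle
            return False
        parent[ru] = rv
    return True            # n-1 acyclic edges on n vertices form a spanning tree

def minimum_leaf_number(n, edge_list):
    """ml(G): minimum number of leaves over all spanning trees of G."""
    best = None
    for tree in combinations(edge_list, n - 1):
        if not is_spanning_tree(n, tree):
            continue
        deg = [0] * n
        for u, v in tree:
            deg[u] += 1
            deg[v] += 1
        leaves = sum(1 for d in deg if d == 1)
        best = leaves if best is None else min(best, leaves)
    return best

# K4, the smallest connected cubic graph: every Hamiltonian path is a
# spanning tree with exactly two leaves, so ml(K4) = 2.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(minimum_leaf_number(4, k4))  # -> 2
```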

    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community organized across four process axes of traceability practice. The sessions covered Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and Benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which envisions traceability that is always present, built into the engineering process, and that has "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.

    A Requirements-Based Exploration of Open-Source Software Development Projects – Towards a Natural Language Processing Software Analysis Framework

    Open source projects do have requirements; they are, however, mostly informal text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming and, for large projects, error-prone. Automated analysis of natural language requirements, even if only partial, would be of great benefit. Towards that end, I describe the design and validation of an automated natural language requirements classifier for open source software development projects. I compare two strategies for recognizing requirements in open forums of software features. The results suggest that classifying text at the forum post aggregation and sentence aggregation levels may be effective, and initial results suggest that it can reduce the effort required to analyze requirements of open source software development projects.

    Software development organizations and communities currently employ a large number of software development techniques and methodologies. This complexity is compounded by a wide range of software project types and development environments. The resulting lack of consistency in the software development domain leads to one important challenge that researchers encounter while exploring this area: specificity, i.e., the increased difficulty of maintaining a consistent unit of measure or analysis approach across a wide variety of software development projects and environments. The problem of specificity is most prominently exhibited in an area of software development characterized by dynamic evolution, a unique development environment, and a relatively young history of research when compared to traditional software development: the open-source domain. While performing research on open source and the associated communities of developers, one encounters the same challenge of specificity in requirements engineering research as in the case of closed-source software development. Whether research is aimed at performing longitudinal or cross-sectional analyses, or attempts to link requirements to other aspects of software development projects and their management, specificity calls for a flexible analysis tool capable of adapting to the needs and specifics of the explored context.

    This dissertation covers the design, implementation, and evaluation of a model, a method, and a software tool comprising a flexible software development analysis framework. These design artifacts use a rule-based natural language processing approach and are built to meet the specifics of a requirements-based analysis of software development projects in the open-source domain. This research follows the principles of design science research as defined by Hevner et al. and includes the stages of problem awareness, suggestion, development, evaluation, and results and conclusion (Hevner et al. 2004; Vaishnavi and Kuechler 2007). The long-term goal of the research stream stemming from this dissertation is to propose a flexible, customizable, requirements-based natural language processing software analysis framework which can be adapted to meet the research needs of multiple different types of domains or different categories of analyses.
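    The dissertation describes a rule-based NLP approach, but its actual rules are not reproduced here; the sketch below is a hypothetical Python illustration of sentence-level requirement recognition in forum posts, with regex cues (modal verbs, feature-request phrasing) chosen purely for the example.

```python
# Hypothetical rule-based requirements classifier for open-source forum
# text. The cue patterns are assumptions for illustration, not the
# dissertation's rule set.
import re

REQUIREMENT_CUES = [
    r"\b(shall|should|must|needs? to|has to)\b",
    r"\bit would be (nice|great|useful) if\b",
    r"\b(add|support|provide|allow)\b.*\b(option|feature|ability)\b",
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in REQUIREMENT_CUES]

def classify_sentence(sentence: str) -> bool:
    """Return True if the sentence looks like a candidate requirement."""
    return any(p.search(sentence) for p in PATTERNS)

def classify_post(post: str) -> list[str]:
    """Sentence-level aggregation: keep the sentences flagged as requirements."""
    sentences = re.split(r"(?<=[.!?])\s+", post)
    return [s for s in sentences if classify_sentence(s)]

post = ("The installer crashed on Ubuntu. The tool should support "
        "incremental backups. It would be great if exports allowed CSV.")
print(classify_post(post))   # flags the second and third sentences
```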

    Hypohamiltonian and almost hypohamiltonian graphs

    This dissertation is structured as follows. In Chapter 1, we give a short historical overview and define fundamental concepts. Chapter 2 contains a clear narrative of the progress made towards finding the smallest planar hypohamiltonian graph, with all of the necessary theoretical tools and techniques, especially Grinberg's Criterion. Consequences of this progress are distributed over all sections and form the leitmotif of this dissertation. Chapter 2 also treats girth restrictions and hypohamiltonian graphs in the context of crossing numbers. Chapter 3 is a thorough discussion of the newly introduced almost hypohamiltonian graphs and their connection to hypohamiltonian graphs. Once more, the planar case plays an exceptional role. At the end of the chapter, we study almost hypotraceable graphs and Gallai's problem on longest paths. The latter leads to Chapter 4, wherein the connections between hypohamiltonicity and various problems related to longest paths and longest cycles are presented. Chapter 5 introduces and studies non-hamiltonian graphs in which every vertex-deleted subgraph is traceable, a class encompassing hypohamiltonian and hypotraceable graphs. We end with an outlook in Chapter 6, where we present a selection of open problems enriched with comments and partial results.
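    For readers new to the central definition: a graph is hypohamiltonian if it has no Hamiltonian cycle, yet deleting any single vertex yields a Hamiltonian graph. Below is a minimal brute-force sketch in Python (our own illustration) that verifies this for the Petersen graph, the classical smallest example on 10 vertices; the backtracking search is only viable at this small scale.

```python
# Verify hypohamiltonicity by brute force: no Hamiltonian cycle in G,
# but a Hamiltonian cycle in every vertex-deleted subgraph G - v.
def hamiltonian_cycle_exists(vertices, adj):
    verts = sorted(vertices)
    if len(verts) < 3:
        return False
    start = verts[0]
    def extend(path, visited):
        if len(path) == len(verts):
            return start in adj[path[-1]]      # can we close the cycle?
        for w in adj[path[-1]]:
            if w in vertices and w not in visited:
                if extend(path + [w], visited | {w}):
                    return True
        return False
    return extend([start], {start})

def is_hypohamiltonian(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    full = set(range(n))
    if hamiltonian_cycle_exists(full, adj):
        return False
    return all(hamiltonian_cycle_exists(full - {v}, adj) for v in range(n))

# Petersen graph: outer 5-cycle, inner pentagram, plus five spokes.
petersen = ([(i, (i + 1) % 5) for i in range(5)] +          # outer cycle
            [(5 + i, 5 + (i + 2) % 5) for i in range(5)] +  # inner pentagram
            [(i, i + 5) for i in range(5)])                 # spokes
print(is_hypohamiltonian(10, petersen))   # -> True
```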

    Algorithmic Graph Theory

    The main focus of this workshop was on mathematical techniques needed for the development of efficient solutions and algorithms for computationally difficult graph problems. The techniques studied at the workshop included the probabilistic method and randomized algorithms, approximation and optimization, structured families of graphs, and approximation algorithms for large problems. The workshop Algorithmic Graph Theory was attended by 46 participants, many of them young researchers. In 15 survey talks, an overview of recent developments in Algorithmic Graph Theory was given; these were supplemented by 10 shorter talks and by two special sessions.

    Liquid Clocks - Refinement Types for Time-Dependent Stream Functions

    The concept of liquid clocks introduced in this paper is a significant step towards a more precise compile-time framework for the analysis of synchronous and polychronous languages. Compiling languages such as Lustre or SIGNAL indeed involves a number of static analyses of programs before they can be synthesized into executable code, e.g., synchronicity class characterization, clock assignment, static scheduling, or causality analysis. These analyses often amount to undecidable problems, necessitating abstractions of programs to provide sound yet incomplete analyses. Such abstractions unfortunately often lead to the rejection of programs that could very well be synthesized into deterministic code, provided abstraction refinement steps could be applied for more accurate analysis. To reduce the false negatives occurring during the compilation process, we leverage recent advances in type theory (the definition of decidable classes of value-dependent type systems) and in formal verification (the development of efficient SAT/SMT solvers) to provide a type-theoretic approach that treats all the above analyses as type inference problems. To simplify the exposition of our new approach in this paper, we define a refinement type system for a minimalistic, synchronous, stream-processing language to concisely represent, analyse, and verify logical and quantitative properties of programs expressed as stream-processing data-flow networks. Our type system provides a new framework to represent logical time (clocks) and scheduling properties, and to describe their relations with stream values and, possibly, other quanta. We show how to analyze synchronous stream-processing programs (à la Lustre, SIGNAL) to enable the analyses involved in compiling such programs. We also prove the soundness of our type system and elaborate on the adaptability of this core framework by outlining its extensibility to specific models of computation and other quanta.
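    To make the clock intuition concrete, here is a toy Python sketch (emphatically not the paper's refinement type system): streams carry boolean clocks, a pointwise operator demands equal clocks, and sampling derives a subclock. A liquid-clock checker would discharge such constraints statically, as value-dependent predicates handed to an SMT solver, rather than raising errors at run time.

```python
# Toy dynamic model of the clock calculus behind synchronous languages.
# All names here are our own illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Stream:
    clock: tuple[bool, ...]    # True at the ticks where the stream is present
    values: tuple[int, ...]    # one value per True tick

class ClockError(TypeError):
    pass

def zip_add(a: Stream, b: Stream) -> Stream:
    """Pointwise addition; rejected unless both operands share one clock."""
    if a.clock != b.clock:
        raise ClockError("operands are not synchronous")
    return Stream(a.clock, tuple(x + y for x, y in zip(a.values, b.values)))

def when(a: Stream, cond: tuple[bool, ...]) -> Stream:
    """Sample `a` at the ticks where `cond` holds: the result gets a subclock."""
    assert len(cond) == len(a.values)
    new_vals = tuple(v for v, keep in zip(a.values, cond) if keep)
    it = iter(cond)                      # thin the True positions of a.clock
    new_clock = tuple(next(it) if tick else False for tick in a.clock)
    return Stream(new_clock, new_vals)

x = Stream((True, True, True, True), (1, 2, 3, 4))
y = Stream((True, True, True, True), (10, 20, 30, 40))
print(zip_add(x, y).values)              # (11, 22, 33, 44)
half = when(x, (True, False, True, False))
# zip_add(x, half) raises ClockError: the clocks differ, which a clock
# calculus (or a liquid-clock type checker) would report at compile time.
```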

    Configuring and Assembling Information Retrieval based Solutions for Software Engineering Tasks.

    Information Retrieval (IR) approaches are used to leverage the textual or unstructured data generated during the software development process to support various software engineering (SE) tasks (e.g., concept location, traceability link recovery, change impact analysis). Two of the most important steps in applying IR techniques to SE tasks are preprocessing the corpus and configuring the IR technique; these steps can significantly influence the outcome and the amount of effort developers have to spend on these maintenance tasks. We present the use of Genetic Algorithms (GAs) to automatically configure and assemble an IR process to support SE tasks. The approach, named IR-GA, determines the (near) optimal solution to be used for each step of the IR process without requiring any training. We applied IR-GA to three different SE tasks, and the results of the study indicate that IR-GA outperforms approaches previously used in the literature and does not significantly differ from an ideal upper bound that could be achieved by a supervised approach or a combinatorial approach.
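    As a sketch of the general idea (not the paper's actual chromosome encoding or fitness, which IR-GA defines without training data), here is a generic genetic algorithm over hypothetical IR configuration genes; the fitness function is a placeholder where a real run would build and score the configured pipeline.

```python
# Generic GA over IR pipeline configurations. Gene values and fitness
# below are hypothetical stand-ins for illustration only.
import random

GENES = {
    "stemming":  [None, "porter", "snowball"],
    "stopwords": [False, True],
    "weighting": ["boolean", "tf", "tfidf"],
    "model":     ["vsm", "lsi_50", "lsi_100", "lsi_200"],
}
KEYS = list(GENES)

def random_config():
    return {k: random.choice(v) for k, v in GENES.items()}

def crossover(a, b):
    cut = random.randrange(1, len(KEYS))           # one-point crossover
    return {k: (a if i < cut else b)[k] for i, k in enumerate(KEYS)}

def mutate(cfg, rate=0.2):
    return {k: (random.choice(GENES[k]) if random.random() < rate else v)
            for k, v in cfg.items()}

def fitness(cfg):
    # Placeholder: deterministic pseudo-score per configuration. A real run
    # would execute the IR pipeline described by `cfg` and score retrieval.
    rng = random.Random(str(sorted(cfg.items())))
    return rng.random()

def ir_ga(pop_size=20, generations=30):
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]               # keep the best quarter
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

print(ir_ga())   # best configuration found under the placeholder fitness
```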