
    Understory light, regeneration, and browsing effects in irregular structures created by partial harvesting in coast redwood stands

    Regeneration of commercial species is central to the long-term success of multiaged management for wood production. We used a replicated uneven-aged silviculture experiment to study regeneration by stump sprouting (Chapter 1) and planted seedlings (Chapter 2). In Chapter 1, we present relationships between understory light, varying overstory tree retention, and growth of coast redwood (Sequoia sempervirens) and tanoak (Notholithocarpus densiflorus) stump sprouts initiated by group selection (GS) and single-tree selection harvesting. First, we quantified understory light throughout this 20 ha experiment comparing four silvicultural treatments repeated at four sites. Then, we related understory light to post-treatment stand density and treatment type (i.e., complete harvest in a 1 ha (2.5 acre) GS opening, low-density dispersed retention (LD), and either aggregated (HA) or dispersed (HD) high-density retention). Finally, we quantified height increment of stump sprouts in response to understory light, treatment type, and other candidate variables influencing growth of stump sprout regeneration after partial harvesting. Mean and maximum understory light did not differ significantly between the high-density treatments; however, the HD treatment had lower minimum light levels than the HA treatment. At all light levels, the dominant sprout within clumps of redwood stump sprouts generally grew faster than the dominant tanoak sprout within tanoak sprout clumps. Differences in sprout height growth between the high-density aggregated and dispersed treatments were minimal. In the LD treatments, redwood stump sprouts outperformed tanoak sprouts by the greatest margin. Regeneration of redwood and tanoak was most rapid in high light within GS openings. In Chapter 2, we studied how the incidence of animal browsing and mortality of planted seedlings related to multiaged treatment type, stand, and site variables. Deer browsing of planted seedlings was a pervasive problem.
Incidence of browsing differed among seedling species, treatment types, and positions on the landscape (elevation or distance to watercourse). Coast Douglas-fir (Pseudotsuga menziesii var. menziesii) seedlings were preferred by browsers over redwood seedlings in this study. Browsing was recorded most often in GS treatments, followed by LD, HA, and HD treatments; in higher-density treatments, browsing was less likely. As distance to watercourse and elevation increased, the probability of browsing diminished for both species. Like browsing, survival of planted seedlings was largely dependent on position on the landscape. Seedlings planted on a southwest aspect had the lowest survival rates, while seedlings planted on a northeast aspect had nearly complete survival, regardless of species. Overall, Douglas-fir seedlings had higher mortality rates than redwood. Mortality was highest in GS, followed by HA and HD treatments, and was lowest in LD treatments. Seedling survival exhibited a rise-peak-fall pattern with increasing stand density, and this pattern was most distinct on southwest-facing slopes. In general, dispersed treatments outperformed aggregated and GS treatments in maximizing survival and minimizing the occurrence of browsing. These results inform forest managers implementing a conversion toward multiaged management in coast redwood stands receiving partial harvesting without site preparation or herbicide treatment of re-sprouting hardwoods. Presumably, a reduction in belowground competition from hardwood control would enhance survival of planted seedlings. However, any enhancement of seedling growth and vigor may result in elevated browsing activity.
Specific recommendations for management include planting extra seedlings on southern slopes and in lower-density stands such as group selection openings (in anticipation of elevated mortality), and implementing seedling protection measures (e.g., shelters, repellent, fencing) near watercourses, where browsing occurs most often.
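The reported direction of the landscape effects (browsing probability falling with elevation and distance to watercourse) can be illustrated with a simple logistic model. This is only a sketch of the relationship's shape; the function name and all coefficients below are hypothetical, not fitted values from the study:

```python
import math

def browsing_probability(elevation_m, dist_to_water_m,
                         b0=2.0, b_elev=-0.004, b_dist=-0.01):
    """Hypothetical logistic model: the probability of deer browsing
    declines as elevation and distance to the watercourse increase.
    Coefficients are illustrative only, not estimates from the study."""
    z = b0 + b_elev * elevation_m + b_dist * dist_to_water_m
    return 1.0 / (1.0 + math.exp(-z))

# Low on the slope, near a watercourse: high browsing risk.
p_near = browsing_probability(elevation_m=50, dist_to_water_m=10)
# High on the slope, far from water: much lower risk.
p_far = browsing_probability(elevation_m=400, dist_to_water_m=300)
```

Under these illustrative coefficients, `p_near` is roughly 0.85 while `p_far` drops below 0.1, matching the qualitative pattern the abstract reports.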

    Explainability as a non-functional requirement: challenges and recommendations

    Software systems are becoming increasingly complex. Their ubiquitous presence makes users more dependent on their correctness in many aspects of daily life. As a result, there is a growing need to make software systems and their decisions more comprehensible, with more transparency in software-based decision making. Transparency is therefore becoming increasingly important as a non-functional requirement. However, the abstract quality aspect of transparency needs to be better understood and related to mechanisms that can foster it. The integration of explanations into software has often been discussed as a solution to mitigate system opacity. Yet, an important first step is to understand user requirements in terms of explainable software behavior: Are users really interested in software transparency and are explanations considered an appropriate way to achieve it? We conducted a survey with 107 end users to assess their opinion on the current level of transparency in software systems and what they consider to be the main advantages and disadvantages of embedded explanations. We assess the relationship between explanations and transparency and analyze its potential impact on software quality. As explainability has become an important issue, researchers and professionals have been discussing how to deal with it in practice. While there are differences of opinion on the need for built-in explanations, understanding this concept and its impact on software is a key step for requirements engineering. Based on our research results and on the study of existing literature, we offer recommendations for the elicitation and analysis of explainability and discuss strategies for the practice. © 2020, The Author(s)

    What Works Better? A Study of Classifying Requirements

    Classifying requirements into functional requirements (FR) and non-functional ones (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute an approach for preprocessing requirements that standardizes and normalizes requirements before applying classification algorithms. Further, we report on how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, or Naive Bayes for the sub-classification of NFRs.
    Comment: 7 pages, the 25th IEEE International Conference on Requirements Engineering (RE'17)

    GIMO: A multi-objective anytime rule mining system to ease iterative feedback from domain experts

    Data extracted from software repositories is used intensively in Software Engineering research, for example, to predict defects in source code. In our research in this area, with data from open source projects as well as an industrial partner, we noticed several shortcomings of conventional data mining approaches for classification problems: (1) Domain experts’ acceptance is of critical importance, and domain experts can provide valuable input, but it is hard to use this feedback. (2) Evaluating the quality of the model is not simply a matter of calculating AUC or accuracy. Instead, there are multiple objectives of varying importance with hard-to-quantify trade-offs. Furthermore, the performance of the model cannot be evaluated on a per-instance level in our case, because it shares aspects with the set cover problem. To overcome these problems, we take a holistic approach and develop a rule mining system that simplifies iterative feedback from domain experts and can incorporate the domain-specific evaluation needs. A central part of the system is a novel multi-objective anytime rule mining algorithm. The algorithm is based on the GRASP-PR meta-heuristic but extends it with ideas from several other approaches. We successfully applied the system in the industrial context. In the current article, we focus on the description of the algorithm and the concepts of the system. We make an implementation of the system available. © 2020 The Author
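The multi-objective evaluation described above, where no single metric such as AUC decides which rule is best, can be illustrated with a Pareto-dominance filter. This is a generic sketch of the concept, not the GIMO or GRASP-PR algorithm, and the rule strings and scores are invented for illustration:

```python
def dominates(a, b):
    """True if objective tuple a dominates b: at least as good in every
    objective (higher is better) and strictly better in at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(rules):
    """Keep only rules whose objective scores no other rule dominates."""
    return [r for r in rules
            if not any(dominates(other[1], r[1])
                       for other in rules if other is not r)]

# Hypothetical defect-prediction rules scored on (precision, coverage).
rules = [
    ("lines_changed > 100 -> defect", (0.80, 0.30)),
    ("author_is_new -> defect",       (0.60, 0.50)),
    ("has_tests = no -> defect",      (0.55, 0.25)),  # dominated by the rule above
]
front = pareto_front(rules)  # the first two rules survive
```

An anytime miner can maintain such a front incrementally, so a domain expert can inspect the current trade-offs at any point and steer the search, which is the style of iterative feedback loop the abstract describes.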