
    Ways to get work done: a review and systematisation of simplification practices in the LCA literature

    Purpose: Within the field of life cycle assessment (LCA), simplifications are a response to the practical restrictions in the context of a study. In the 1990s, simplifications were part of a debate on streamlining within LCA. Since then, many studies have been published on simplifying LCA, but little attention has been paid to systematising the available approaches. Moreover, despite being pervasive during the making of LCA studies, simplifications often remain invisible in the final results. This paper therefore reviews the literature on simplification in LCA in order to systematise the approaches found today.

    Methods: A review of the LCA simplification literature was conducted. The systematic search and selection process led to a sample of 166 publications. During the review phase, the conceptual contributions to the simplification discourse were evaluated, and a dataset of 163 entries listing these contributions was created. An empirically grounded analysis led to the generative development of a systematisation of simplifications according to their underlying simplifying logic.

    Results and discussion: Five simplifying logics were identified: exclusion, inventory data substitution, qualitative expert judgment, standardisation and automation. Together, these simplifying logics inform 13 simplification strategies. The identified logics represent approaches to handling the complexities of product systems and the expectations of the users of LCA results with the resources available to the analyst. Each simplification strategy is discussed with regard to its main applications and challenges.

    Conclusions: This paper provides a first systematisation of the different simplification logics frequently applied in LCA since the original streamlining discussion. The presented terminology can help make communication about simplification more explicit and transparent, which is important for the credibility of LCA. Despite the pervasiveness of simplification in LCA, there is relatively little research on simplification per se, making further research that describes simplification as a practice and analyses simplifications methodologically desirable.

    Bridging the gap between assessment and action: recommendations for the effective use of LCA in the building process

    Environmental life cycle assessment (LCA) is gaining popularity in the built environment. LCA promotes, among other things, the efficient use of natural resources and a reduction of carbon emissions through quantification of material and energy inputs and emissions across the building life cycle. Thereby, LCA aspires to contribute to SDG12 on ensuring sustainable consumption and production patterns. Despite these high ambitions, the actual influence of LCA in construction projects is often modest: the mere application of LCA methodology in a building project is insufficient to produce a more environmentally friendly building. To better understand the practical conditions under which an LCA may induce change in a building project, we propose to analyse the use of LCA from a processual perspective. This paper presents a case study of a building product development project in which a processual perspective is applied to LCA. Using a longitudinal ethnographic methodology, key actors are followed through environmentally relevant episodes as the building project matures. A progressive LCA quantifies the potential environmental impact of the project as it progresses through the stages of the building process. Based on the learnings from this study, recommendations are presented to support the effective use of LCA in sustainable building practices and to contribute to SDG12 on sustainable consumption and production patterns.

    Is there a morally relevant difference between lying and misleading?

    According to a widely held view, it is morally better (or less bad) to mislead another person than to lie to them. This view has recently been criticised at length by Bernard Williams and Jennifer Saul: according to them, our moral preference for misleading rests on a mistake and cannot be sustained on closer examination. In the first part of the paper, I try to show, against this, that in some cases there are indeed moral reasons to mislead rather than to lie. Central to this is an expressive and relationship-oriented analysis of misleading: the central thesis is that with misleading, unlike with lying, we can at least in some cases express that we respect the other person and care about continuing a trusting relationship with them. In the second part, however, I show that the moral preference for misleading is not justified in all cases: first, because the respect for other persons expressed by misleading is not always appropriate, and second, because misleading cannot in all situations express respect for the other person and an interest in a trusting relationship with them.

    Doing Justice to Patients with Dementia in ICU Triage


    Cell type-specific expression of endogenous cardiac Troponin I antisense RNA in the neonatal rat heart

    Since the number of detected natural antisense RNAs is growing, investigations of the expression patterns of antisense RNA are becoming more important. As we focused our work on naturally occurring antisense transcripts in human and rat heart tissues, we were interested in the question of whether the expression patterns of antisense and sense RNA can vary between different cell types of the same tissue. In our previous analysis of total neonatal rat heart tissue, we demonstrated the co-expression of both cTnI RNA species in this tissue. Here we investigated the expression of antisense and sense RNA quantitatively in neonatal cardiomyocytes (NCMs) and neonatal cardiac fibroblasts (NCFs). Using northern blotting as well as RT-PCR, we detected natural antisense and sense RNA transcripts of cTnI in NCM and NCF, implying that these transcripts are co-expressed in both cell types. The absolute amounts of the RNA transcripts were higher in NCM. Both RNA species showed identical sizes in the northern blot. Quantification by real-time PCR revealed a higher relative level of natural antisense RNA in NCF compared to NCM, which points to a cell type-specific expression of sense and antisense RNA. Our observations suggest that antisense RNA transcription may contribute to a cell type-specific regulation of the cTnI gene.
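    The abstract does not name the quantification model used for the real-time PCR comparison; assuming the common 2^-ΔΔCt (Livak) approach to relative quantification, the antisense-versus-sense comparison across cell types could be sketched as follows (all Ct values below are hypothetical, not the study's data):

```python
def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^-ddCt: expression of a target transcript normalised to a
    reference transcript, relative to a calibrator sample."""
    d_ct = ct_target - ct_ref              # normalise target to reference
    d_ct_cal = ct_target_cal - ct_ref_cal  # same normalisation in calibrator
    return 2.0 ** -(d_ct - d_ct_cal)

# Hypothetical Ct values: antisense cTnI normalised to sense cTnI,
# NCF sample relative to NCM as calibrator
ncf_vs_ncm = fold_change(ct_target=24.0, ct_ref=18.0,
                         ct_target_cal=26.0, ct_ref_cal=18.0)
# A value > 1 would indicate a higher relative antisense level in NCF
```

    The method assumes roughly equal amplification efficiencies for both transcripts; when efficiencies differ, an efficiency-corrected model would be needed instead.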

    Evaluating human enhancements: the importance of ideals

    Is it necessary to have an ideal of perfection in mind to identify and evaluate true biotechnological human "enhancements", or can one do without? To answer this question we suggest employing the distinction between ideal and non-ideal theory found in the debate in political philosophy about theories of justice: the distinctive views about whether one needs an idea of a perfectly just society when assessing the current situation and recommending steps to increase justice. In this paper we argue that evaluating human enhancements from a non-ideal perspective has some serious shortcomings, which can be avoided by endorsing an ideal approach. Our argument starts from a definition of human enhancement as improvement, which can be understood in two ways. The first approach is backward-looking and assesses improvements with regard to a status quo ante. The second, forward-looking approach evaluates improvements with regard to their proximity to a goal or according to an ideal. After outlining the limitations of an exclusively backward-looking view (non-ideal theory), we answer possible objections against a forward-looking view (ideal theory). Ultimately, we argue that the human enhancement debate would lack some important moral insights if a forward-looking view of improvement were not taken into consideration.

    Detections of whale vocalizations by simultaneously deployed bottom-moored and deep-water mobile autonomous hydrophones

    Funding for this work was provided by the Living Marine Resources Program (N39430-14-C-1435 and N39430-14-C-1434), the Office of Naval Research (N00014-15-1-2142, N00014-10-1-0534, and N00014-13-1-0682), and NOAA’s Southwest Fisheries Science Center. SF was supported by the National Science and Engineering Graduate Fellowship.

    Advances in mobile autonomous platforms for oceanographic sensing, including gliders and deep-water profiling floats, have provided new opportunities for passive acoustic monitoring (PAM) of cetaceans. However, there are few direct comparisons of these mobile autonomous systems to more traditional methods, such as stationary bottom-moored recorders. Cross-platform comparisons are necessary to enable interpretation of results across historical and contemporary surveys that use different recorder types, and to identify potential biases introduced by the platform. Understanding tradeoffs across recording platforms informs best practices for future cetacean monitoring efforts. This study directly compares the PAM capabilities of a glider (Seaglider) and a deep-water profiling float (QUEphone) to a stationary seafloor system (High-frequency Acoustic Recording Package, or HARP) deployed simultaneously over a two-week period in the Catalina Basin, California, United States. Two HARPs were deployed 4 km apart while a glider and a deep-water float surveyed within 20 km of the HARPs. Acoustic recordings were analyzed for the presence of multiple cetacean species, including beaked whales, delphinids, and minke whales. Variation in acoustic occurrence at 1-min (beaked whales only), hourly, and daily scales was examined. The number of minutes, hours, and days with beaked whale echolocation clicks was variable across recorders, likely due to differences in the noise floor of each recording system, the spatial distribution of the recorders, and the short detection radius of such a high-frequency, directional signal type. Delphinid whistles and clicks were prevalent across all recorders, at levels that may have masked beaked whale vocalizations. The number and timing of hours and days with minke whale boing sounds were nearly identical across recorder types, as expected given the relatively long propagation distance of boings. This comparison provides evidence that gliders and deep-water floats record cetaceans at detection rates similar to those of traditional stationary recorders at a single point. The spatiotemporal scale over which these single-hydrophone systems record sounds is highly dependent on the acoustic features of the sound source. Additionally, these mobile platforms provide improved spatial coverage, which may be critical for species that produce calls that propagate only over short distances, such as beaked whales.

    Performance of ECG-based seizure detection algorithms strongly depends on training and test conditions

    Objective: To identify non-EEG-based signals and algorithms for the detection of motor and non-motor seizures in people lying in bed during video-EEG (VEEG) monitoring, and to test whether these algorithms work in freely moving people during mobile EEG recordings.

    Methods: Data of three groups of adult people with epilepsy (PwE) were analyzed. Group 1 underwent VEEG with additional devices (accelerometry, ECG, electrodermal activity); group 2 underwent VEEG; and group 3 underwent mobile EEG recordings, both including one-lead ECG. All seizure types were analyzed. Feature extraction and machine-learning techniques were applied to develop seizure detection algorithms. Performance was expressed as sensitivity, precision, F1 score, and false positives per 24 hours.

    Results: The algorithms were developed in group 1 (35 PwE, 33 seizures) and achieved the best results (F1 score 56%, sensitivity 67%, precision 45%, false positives 0.7/24 hours) when ECG features alone were used, with no improvement from including accelerometry and electrodermal activity. In group 2 (97 PwE, 255 seizures), this ECG-based algorithm largely matched that performance (F1 score 51%, sensitivity 39%, precision 73%, false positives 0.4/24 hours). In group 3 (30 PwE, 51 seizures), the same ECG-based algorithm fell short of the performance achieved in groups 1 and 2 (F1 score 27%, sensitivity 31%, precision 23%, false positives 1.2/24 hours). ECG-based algorithms were also trained separately on the data of groups 2 and 3 and tested on the data of the other groups, yielding maximal F1 scores between 8% and 26%.

    Significance: Our results suggest that algorithms based on ECG features alone can provide clinically meaningful performance for automatic detection of all seizure types. Our study also underscores that the circumstances under which such algorithms were developed and the selection of the training and test data sets need to be considered, and that they limit the application of such systems to unseen patient groups behaving in different conditions.
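    The reported sensitivity, precision, and F1 score are standard detection metrics; a minimal sketch of how they relate (the counts below are hypothetical, not the study's data):

```python
def detection_metrics(tp, fp, fn):
    """Sensitivity (recall), precision, and F1 from detection counts."""
    sensitivity = tp / (tp + fn)   # detected seizures / all true seizures
    precision = tp / (tp + fp)     # detected seizures / all alarms raised
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f1

# Hypothetical counts: 22 true detections, 27 false alarms, 11 missed seizures
sens, prec, f1 = detection_metrics(tp=22, fp=27, fn=11)
```

    Because F1 is the harmonic mean, it is pulled toward the weaker of precision and sensitivity, which is why a higher-precision but lower-sensitivity result (as in group 2) can yield an F1 similar to a more balanced one (as in group 1).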

    Field study on requirements engineering: investigation of artefacts, project parameters, and execution strategies

    Context: Requirements Engineering (RE) is a critical discipline mostly driven by uncertainty, since it is influenced by the customer domain and by the development process model used. Volatile project environments restrict the choice of methods and the decision about which artefacts to produce in RE.

    Objective: We aim to investigate RE processes in successful project environments to discover characteristics and strategies that allow us to elaborate RE tailoring approaches in the future.

    Method: We perform a field study on a set of projects at one company. First, we investigate by content analysis which RE artefacts were produced in each project and to what extent. Second, we perform a qualitative analysis of semi-structured interviews to discover project parameters that relate to the produced artefacts. Third, we use cluster analysis to infer artefact patterns and probable RE execution strategies, which are the responses to specific project parameters. Fourth, we investigate by statistical tests the effort spent in each strategy in relation to the effort spent on change requests, to evaluate the efficiency of the execution strategies.

    Results: We identified three artefact patterns and corresponding execution strategies. Each strategy covers different project parameters that impact the creation of certain artefacts. The effort analysis shows that the strategies have no significant differences in their effort and efficiency.

    Conclusions: In contrast to our initial assumption that an increased effort in requirements engineering lowers the probability of change requests or project failures in general, our results show no statistically significant difference between the efficiency of the strategies. In addition, it turned out that many parameters considered to be among the main causes of project failures can be successfully handled. Hence, practitioners can apply the artefact patterns and related project parameters to tailor the RE process according to individual project characteristics.
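    The abstract does not specify which clustering algorithm was used to infer the artefact patterns; as a hedged sketch of the general idea, projects could be grouped by artefact-completeness vectors with a tiny k-means (the data, feature choice, and k below are hypothetical):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over equal-length tuples of numbers."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Recompute centroids as cluster means (keep old centroid if empty)
        centroids = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return clusters

# Hypothetical artefact-completeness vectors (fraction of two artefact types
# produced per project); two clear groups should emerge
projects = [(0.9, 0.8), (0.85, 0.75), (0.1, 0.2), (0.15, 0.1)]
patterns = kmeans(projects, k=2)
```

    Each resulting cluster would correspond to one artefact pattern, whose member projects can then be inspected for shared project parameters.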