
    Microbial Enzyme Biotechnology to Reach Plastic Waste Circularity: Current Status, Problems and Perspectives

    The accumulation of synthetic plastic waste in the environment has become a global concern. Microbial enzymes (purified or as whole-cell biocatalysts) represent emerging biotechnological tools for waste circularity; they can depolymerize materials into reusable building blocks, but their contribution must be considered within the context of present waste management practices. This review reports on the prospects of biotechnological tools for plastic bio-recycling within the framework of plastic waste management in Europe. Available biotechnology tools can support polyethylene terephthalate (PET) recycling. However, PET represents only ≈7% of unrecycled plastic waste. Polyurethanes, the principal unrecycled waste fraction, together with other thermosets and more recalcitrant thermoplastics (e.g., polyolefins), are the next plausible targets for enzyme-based depolymerization, even though this process is currently effective only on ideal polyester-based polymers. To extend the contribution of biotechnology to plastic circularity, optimization of collection and sorting systems should be considered to feed chemoenzymatic technologies for the treatment of more recalcitrant and mixed polymers. In addition, new bio-based technologies with a lower environmental impact than present approaches should be developed to depolymerize (available or new) plastic materials, which should be designed both for the required durability and for susceptibility to the action of enzymes.

    Barbero-Immirzi field in canonical formalism of pure gravity

    The Barbero-Immirzi (BI) parameter is promoted to a field and a canonical analysis is performed when it is coupled with a Nieh-Yan topological invariant. It is shown that, in the effective theory, the BI field is a canonical pseudoscalar minimally coupled with gravity. This framework is argued to be more natural than that of the usual Holst action. Potential consequences for inflation and the quantum theory are briefly discussed. Comment: 10 pages.
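    For orientation, the Nieh-Yan density referred to above is conventionally written as follows (a sketch from the standard first-order-gravity literature, not reproduced from this paper; normalization conventions vary by author):

    ```latex
    % Nieh-Yan topological density in tetrad variables:
    % e^a tetrad one-forms, T^a torsion two-forms, R_{ab} curvature two-forms.
    N = T^a \wedge T_a - e^a \wedge e^b \wedge R_{ab}
    % Promoting the Barbero-Immirzi parameter to a field \beta(x) adds,
    % up to an overall normalization, the coupling
    S_{NY} \sim \int \beta(x)\, N .
    ```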

    Taking Arduino to the Internet of things: the ASIP programming model

    Micro-controllers such as Arduino are widely used by makers of all kinds worldwide. Their popularity has been driven by Arduino’s simplicity of use and the large number of sensors and libraries available to extend the basic capabilities of these controllers. The last decade has witnessed a surge of software engineering solutions for “the Internet of Things”, but in several cases these solutions require computational resources more advanced than simple, resource-limited micro-controllers. Surprisingly, although micro-controllers are the basic ingredients of complex hardware–software systems, there does not seem to be a simple and flexible way to (1) extend the basic capabilities of micro-controllers, and (2) coordinate inter-connected micro-controllers in “the Internet of Things”. Indeed, new capabilities are added on a per-application basis, and interactions are mainly limited to bespoke, point-to-point protocols that target the hardware I/O rather than the services provided by this hardware. In this paper we present the Arduino Service Interface Programming (ASIP) model, a new model that addresses the issues above by (1) providing a “Service” abstraction to easily add new capabilities to micro-controllers, and (2) providing support for networked boards using a range of strategies, including socket connections, bridging devices, MQTT-based publish–subscribe messaging, discovery services, etc. We provide an open-source implementation of the code running on Arduino boards and client libraries in Java, Python, Racket and Erlang. We show how ASIP enables the rapid development of non-trivial applications (coordination of input/output on distributed boards and implementation of a line-following algorithm for a remote robot), and we assess the performance of ASIP in several ways, both quantitative and qualitative.
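    The core idea of a “Service” abstraction can be illustrated with a minimal sketch (the class names, message format, and `DistanceService` example below are hypothetical placeholders, not ASIP’s actual protocol or API): clients address named services rather than raw hardware I/O, so new capabilities plug in without changing the dispatch logic.

    ```python
    # Hypothetical sketch of a service abstraction for a micro-controller.
    # All names and the "TAG:payload" message format are illustrative only.

    class Service:
        tag = "?"
        def handle(self, payload: str) -> str:
            raise NotImplementedError

    class DistanceService(Service):
        """Wraps an (imaginary) distance sensor behind a service interface."""
        tag = "D"
        def __init__(self, read_cm):
            self.read_cm = read_cm  # injected hardware-read function (stubbed here)
        def handle(self, payload: str) -> str:
            return f"{self.tag}:{self.read_cm():.1f}"

    class Board:
        """Dispatches incoming 'TAG:payload' messages to registered services."""
        def __init__(self):
            self.services = {}
        def register(self, svc: Service):
            self.services[svc.tag] = svc
        def dispatch(self, msg: str) -> str:
            tag, _, payload = msg.partition(":")
            return self.services[tag].handle(payload)

    board = Board()
    board.register(DistanceService(read_cm=lambda: 42.0))  # stubbed sensor
    print(board.dispatch("D:read"))  # -> "D:42.0"
    ```

    Adding another capability (say, a motor service) would then only require registering a new `Service` subclass, which is the extensibility point the model describes.
    
    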

    Effectiveness of Radiomic ZOT Features in the Automated Discrimination of Oncocytoma from Clear Cell Renal Cancer

    Background: Benign renal tumors, such as renal oncocytoma (RO), can be erroneously diagnosed as malignant renal cell carcinomas (RCC) because of their similar imaging features. Computer-aided systems leveraging radiomic features can be used to better discriminate benign renal tumors from malignant ones. The purpose of this work was to build a machine learning model to distinguish RO from clear cell RCC (ccRCC). Method: We collected CT images of 77 patients, with 30 cases of RO (39%) and 47 cases of ccRCC (61%). Radiomic features were extracted both from the tumor volumes identified by the clinicians and from the tumor’s zone of transition (ZOT). We used a genetic algorithm to perform feature selection, identifying the most descriptive set of features for the tumor classification. We built a decision tree classifier to distinguish between ROs and ccRCCs. We proposed two versions of the pipeline: in the first, feature selection was performed before the splitting of the data, while in the second, feature selection was performed after, i.e., on the training data only. We evaluated the efficiency of the two pipelines in cancer classification. Results: The ZOT features were found to be the most predictive by the genetic algorithm. The pipeline with feature selection performed on the whole dataset obtained an average ROC AUC score of 0.87 ± 0.09. The second pipeline, in which feature selection was performed on the training data only, obtained an average ROC AUC score of 0.62 ± 0.17. Conclusions: The obtained results confirm the effectiveness of ZOT radiomic features in capturing renal tumor characteristics. We showed that there is a significant difference in the performance of the two proposed pipelines, highlighting how some already published radiomic analyses could be too optimistic about the real generalization capabilities of the models.
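    The gap between the two pipelines is a textbook data-leakage effect, and it can be reproduced on pure noise. The sketch below is illustrative, not the paper’s code: it uses scikit-learn with univariate feature selection and a decision tree (the study used a genetic algorithm) on random “radiomic” features, showing that selecting features on the full dataset before cross-validation inflates the estimated AUC.

    ```python
    # Illustrative leakage demo on pure-noise features (not the paper's pipeline).
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(77, 500))          # noise stand-in for radiomic features
    y = np.array([0] * 30 + [1] * 47)       # 30 RO vs 47 ccRCC, as in the study
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

    # Pipeline 1 (leaky): feature selection sees the whole dataset, test folds included.
    X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
    auc_leaky = cross_val_score(DecisionTreeClassifier(random_state=0),
                                X_sel, y, cv=cv, scoring="roc_auc").mean()

    # Pipeline 2 (correct): selection is refit inside each training fold only.
    model = make_pipeline(SelectKBest(f_classif, k=10),
                          DecisionTreeClassifier(random_state=0))
    auc_proper = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()

    print(auc_leaky, auc_proper)  # leaky AUC is clearly higher, despite pure noise
    ```

    Since the features here carry no signal at all, any AUC above chance in the leaky pipeline is attributable entirely to the selection step having seen the test labels.
    
    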

    The chemical composition of White Dwarfs as a test of convective efficiency during core He-burning

    Pulsating white dwarfs provide constraints on the evolution of progenitor stars. We revise He-burning stellar models, with particular attention to core convection and to its connection with the nuclear reactions powering energy generation and chemical evolution. Theoretical results are compared to the available measurements for the variable white dwarf GD 358, which indicate a rather large abundance of central oxygen. We show that the attempt to constrain the relevant nuclear reaction rate by means of the white dwarf composition is faced with a large degree of uncertainty related to evaluating the efficiency of convection-induced mixing. By combining the uncertainty of the convection theory with the error on the relevant reaction rate, we derive that the present theoretical prediction for the central oxygen mass fraction in white dwarfs varies between 0.3 and 0.9. Unlike previous claims, we find that models taking into account semiconvection and a moderate 12C(α,γ)16O reaction rate are able to account for a high central oxygen abundance. The 12C(α,γ)16O rate used in these models agrees with the one recently obtained in laboratory experiments (Kunz et al. 2002). On the other hand, when semiconvection is inhibited, as in the case of classical models (bare Schwarzschild criterion) or in models with mechanical overshoot, an extremely high rate of the 12C(α,γ)16O reaction is needed to account for a large oxygen production. Finally, we show that the apparent discrepancy between our result and those reported in previous studies depends on the method used to avoid the convective runaways (the so-called breathing pulses), which are usually encountered in modeling the late stages of the core He-burning phase. Comment: 19 pages, 4 figures, accepted for publication in The Astrophysical Journal.

    Sex difference and intra-operative tidal volume: Insights from the LAS VEGAS study

    BACKGROUND: One key element of lung-protective ventilation is the use of a low tidal volume (VT). A sex difference in the use of low tidal volume ventilation (LTVV) has been described in critically ill ICU patients. OBJECTIVES: The aim of this study was to determine whether a sex difference in the use of LTVV also exists in operating room patients, and if present, what factors drive this difference. DESIGN, PATIENTS AND SETTING: This is a post hoc analysis of LAS VEGAS, a 1-week worldwide observational study in adults requiring intra-operative ventilation during general anaesthesia for surgery in 146 hospitals in 29 countries. MAIN OUTCOME MEASURES: Women and men were compared with respect to use of LTVV, defined as a VT of 8 ml kg-1 or less of predicted bodyweight (PBW). A VT was deemed 'default' if the set VT was a round number. A mediation analysis assessed which factors may explain the sex difference in use of LTVV during intra-operative ventilation. RESULTS: This analysis includes 9864 patients, of whom 5425 (55%) were women. A default VT was often set, both in women and men; the mode VT was 500 ml. Median [IQR] VT was higher in women than in men (8.6 [7.7 to 9.6] vs. 7.6 [6.8 to 8.4] ml kg-1 PBW, P < 0.001). Compared with men, women were twice as likely not to receive LTVV [68.8 vs. 36.0%; relative risk ratio 2.1 (95% CI 1.9 to 2.1), P < 0.001]. In the mediation analysis, patients' height and actual body weight (ABW) explained 81% and 18% of the sex difference in use of LTVV, respectively; it was not explained by the use of a default VT. CONCLUSION: In this worldwide cohort of patients receiving intra-operative ventilation during general anaesthesia for surgery, women received a higher VT than men during intra-operative ventilation. The risk of a woman not receiving LTVV during surgery was double that of a man. Height and ABW were the two mediators of the sex difference in use of LTVV. TRIAL REGISTRATION: The study was registered at Clinicaltrials.gov, NCT01601223.
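    The height mediation is easy to see numerically. The sketch below uses the widely used Devine/ARDSNet predicted-body-weight formula (an assumption here; the study’s exact formula may differ): because PBW is driven by height, a default 500 ml VT lands above the 8 ml kg-1 PBW threshold for a typical woman but below it for a typical man.

    ```python
    # Sketch: why a "default" 500 ml VT fails the LTVV threshold more often
    # in women. PBW formula assumed (Devine/ARDSNet), not taken from the paper.

    def pbw_kg(height_cm: float, female: bool) -> float:
        """Predicted body weight: 45.5 (women) / 50.0 (men) + 0.91*(height - 152.4)."""
        base = 45.5 if female else 50.0
        return base + 0.91 * (height_cm - 152.4)

    def is_ltvv(vt_ml: float, height_cm: float, female: bool, threshold: float = 8.0) -> bool:
        """True if VT is at or below `threshold` ml per kg of PBW."""
        return vt_ml / pbw_kg(height_cm, female) <= threshold

    # Same default 500 ml tidal volume, illustrative typical heights:
    print(is_ltvv(500, 178, female=False))  # man:   PBW ~73.3 kg -> ~6.8 ml/kg -> True
    print(is_ltvv(500, 165, female=True))   # woman: PBW ~57.0 kg -> ~8.8 ml/kg -> False
    ```

    This is consistent with the finding that height (via PBW) explains most of the sex difference, while the default-VT habit itself does not.
    
    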

    Sediment core analysis using artificial intelligence

    Subsurface stratigraphic modeling is crucial for a variety of environmental, societal, and economic challenges. However, the need for specific sedimentological skills in sediment core analysis may constitute a limitation. Methods based on machine learning and deep learning can play a central role in automating this time-consuming procedure. In this work, using a robust dataset of high-resolution digital images from continuous sediment cores of Holocene age that reflect a wide spectrum of continental to shallow-marine depositional environments, we outline a novel deep-learning-based approach to perform automatic semantic segmentation directly on core images, leveraging the power of convolutional neural networks. To optimize the interpretation process and maximize scientific value, we use six sedimentary facies associations as target classes in lieu of ineffective classification methods based solely on lithology. We propose an automated model that can rapidly characterize sediment cores, providing immediate guidance for stratigraphic correlation and subsurface reconstructions.
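    Semantic segmentation here means assigning one of the six facies-association classes to every pixel of the core image. The toy sketch below (NumPy only, no CNN; the class names are hypothetical placeholders, not the paper’s classes) shows the final step of such a pipeline: converting a per-pixel score map into a facies label map and per-class coverage fractions.

    ```python
    # Toy sketch of the post-CNN step of semantic segmentation on a core image.
    # Class names are illustrative placeholders, not the study's facies associations.
    import numpy as np

    FACIES = ["alluvial", "swamp", "lagoon", "beach-ridge", "prodelta", "offshore"]

    rng = np.random.default_rng(1)
    H, W = 4, 3
    logits = rng.normal(size=(H, W, len(FACIES)))   # stand-in for CNN output scores

    label_map = logits.argmax(axis=-1)              # per-pixel class index, shape (H, W)

    # Fraction of the imaged core assigned to each facies association:
    fractions = np.bincount(label_map.ravel(), minlength=len(FACIES)) / (H * W)
    print(dict(zip(FACIES, fractions.round(2))))
    ```

    In a real pipeline the `logits` tensor would come from the trained network, and the label map would feed directly into stratigraphic correlation.
    
    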