839 research outputs found

    To Utilize or Not to Utilize? A Life-Cycle Assessment Perspective on Carbon Dioxide Utilization

    Global warming is mainly driven by anthropogenic emissions of carbon dioxide. Utilizing this carbon dioxide instead of emitting it to the atmosphere therefore intuitively seems to be the best means to mitigate climate change. However, chemically converting the inert carbon dioxide molecule usually requires energy. Since the production of this energy leads to greenhouse gas emissions, the intuitive benefits of carbon dioxide utilization cannot be taken for granted. In this work, we present pathways for the chemical conversion of carbon dioxide and assess their potential environmental benefits based on life-cycle assessment (LCA). LCA quantifies the environmental impacts along the full life cycle from cradle to grave and across all categories of environmental impact. LCA thereby avoids problem shifting along the life cycle and between impact categories. The application of LCA to carbon dioxide utilization raises methodological issues which will be reviewed and critically discussed. For CO2 capture, we discuss the methodological issues in assigning a carbon footprint to captured CO2 serving as chemical feedstock and provide recommendations to ensure that LCA fosters the environmentally most beneficial decisions. Available CO2 sources for utilization are analyzed and ranked according to their environmental impacts. For the subsequent chemical conversion of the feedstock CO2, we illustrate the different conversion classes by highlighting novel developments from catalysis and chemistry. In particular, we discuss the production of novel fuels combining CO2 with hydrogen from renewable energy and the production of novel CO2-based polymers. The different character of these routes will be discussed from an LCA perspective. The product scope is then expanded. To overcome the high data requirements of classical life-cycle assessment, we move to the in silico screening of the environmental potential of novel carbon dioxide utilization pathways. For this purpose, we introduce a short-cut LCA method which provides a quick indication of whether, and when, to utilize CO2 as a chemical feedstock from an environmental perspective.
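The central trade-off of the abstract can be reduced to a back-of-envelope screen (an illustrative sketch only, not the short-cut LCA method the work introduces; all parameter names are hypothetical):

```python
def net_co2_benefit(co2_bound_kg, energy_kwh, grid_intensity_kg_per_kwh,
                    avoided_conventional_kg):
    """Net climate benefit of a CO2-utilization pathway, in kg CO2-eq.

    Positive means the CO2 bound in the product plus the emissions avoided by
    displacing conventional production outweigh the emissions caused by
    supplying the conversion energy.  Illustrative only; a full LCA covers
    all impact categories from cradle to grave.
    """
    caused = energy_kwh * grid_intensity_kg_per_kwh
    avoided = co2_bound_kg + avoided_conventional_kg
    return avoided - caused
```

With a carbon-intensive energy supply the `caused` term can dominate, which is exactly the sense in which the intuitive benefit of utilization cannot be taken for granted.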

    The 190th birthday of Adolf Fick

    Adolf Fick’s work represents in many ways an important starting point for modern scientific research on diffusion. Diffusion itself is a slow process that takes a long time to progress. In this talk, we aim to discuss the progress of diffusion science. For this purpose, we present a highly subjective review of the study of diffusion since the times of Adolf Fick. Our focus is on mutual diffusion in liquids, which is at the heart of many processes in (bio)chemical systems. Here, diffusion is often the rate-limiting step and thus decisive for overall process performance.
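For context, the two laws Fick formulated (given here in their standard one-dimensional textbook form, not quoted from the talk itself) are:

```latex
% Fick's first law: diffusive flux is proportional to the concentration gradient
J = -D \,\frac{\partial c}{\partial x}
% Fick's second law: follows from the first law and mass conservation
\frac{\partial c}{\partial t} = D \,\frac{\partial^{2} c}{\partial x^{2}}
```

where $J$ is the diffusive flux, $c$ the concentration, and $D$ the (mutual) diffusion coefficient.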

    Estimating general motion and intensity from event cameras

    Robotic vision algorithms have become widely used in many consumer products, enabling technologies such as autonomous vehicles, drones, and augmented reality (AR) and virtual reality (VR) devices, to name a few. These applications require vision algorithms to work in real-world environments with extreme lighting variations and fast-moving objects. However, robotic vision applications often rely on standard video cameras, which face severe limitations in fast-moving scenes or under bright light sources that diminish image quality with artefacts such as motion blur or over-saturation. To address these limitations, the body of work presented here investigates the use of alternative sensor devices which mimic the superior perception properties of human vision. Such silicon retinas were proposed by neuromorphic engineering, and we focus here on one such biologically inspired sensor, the event camera, which offers a new camera paradigm for real-time robotic vision. The camera provides a high measurement rate, low latency, high dynamic range, and a low data rate. The signal of the camera is composed of a stream of asynchronous events at microsecond resolution. Each event indicates when an individual pixel registers a logarithmic intensity change exceeding a pre-set threshold. Using this novel signal has proven to be very challenging in most computer vision problems, since common vision methods require synchronous absolute intensity information. In this thesis, we present for the first time a method to reconstruct an image and estimate motion from an event stream without additional sensing or prior knowledge of the scene. This method is based on coupled estimation of both motion and intensity, which enables event-based analysis that was previously only possible with severe limitations. We also present the first machine learning algorithm for event-based unsupervised intensity reconstruction which does not depend on an explicit motion estimate and reveals finer image details. This learning approach does not rely on event-to-image examples, but learns from standard camera images which are not coupled to the event data. In experiments we show that the learned reconstruction improves upon our handcrafted approach. Finally, we combine our learned approach with motion estimation methods and show that the improved intensity reconstruction also significantly improves the motion estimation results. We hope that our work in this thesis bridges the gap between the event signal and images, and that it opens event cameras to practical solutions that overcome the current limitations of frame-based cameras in robotic vision.
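The event-generation principle described above (a pixel fires when its log intensity moves by a threshold from a reference level) can be made concrete with a small frame-based simulation. All names and the threshold value are illustrative; real event cameras operate asynchronously per pixel rather than on discrete frames:

```python
import numpy as np

def generate_events(log_frames, timestamps, threshold=0.2):
    """Simulate an event stream from a sequence of log-intensity frames.

    An event (x, y, t, polarity) is emitted whenever a pixel's log intensity
    has moved by at least `threshold` from that pixel's reference level.
    Illustrative sketch, not the thesis's sensor model.
    """
    ref = log_frames[0].astype(float).copy()   # per-pixel reference log intensity
    events = []
    for frame, t in zip(log_frames[1:], timestamps[1:]):
        diff = frame - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((int(x), int(y), t, polarity))
            # move the reference by an integer number of threshold steps
            steps = int(np.abs(diff[y, x]) // threshold)
            ref[y, x] += polarity * threshold * steps
    return events
```

Running this on two frames that differ by more than the threshold at a single pixel yields a single event of the corresponding polarity at that pixel.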

    Simultaneous Optical Flow and Intensity Estimation from an Event Camera

    Event cameras are bio-inspired vision sensors which mimic retinas to measure per-pixel intensity changes rather than outputting an actual intensity image. This proposed paradigm shift away from traditional frame cameras offers significant potential advantages: namely avoiding high data rates, dynamic range limitations, and motion blur. Unfortunately, however, established computer vision algorithms cannot be applied directly to event cameras. Methods proposed so far to reconstruct images, estimate optical flow, track a camera, and reconstruct a scene come with severe restrictions on the environment or on the motion of the camera, e.g. allowing only rotation. Here, we propose, to the best of our knowledge, the first algorithm to simultaneously recover the motion field and brightness image while the camera undergoes generic motion through any scene. Our approach employs minimisation of a cost function that contains the asynchronous event data as well as spatial and temporal regularisation within a sliding-window time interval. Our implementation relies on GPU optimisation and runs in near real-time. In a series of examples, we demonstrate the successful operation of our framework, including in situations where conventional cameras suffer from dynamic range limitations and motion blur.
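To illustrate the kind of coupling between events, intensity, and flow that such a cost function encodes, here is a didactic data term under the linearised brightness-constancy model. This is a sketch of the general idea, not the paper's actual cost function, and all names are illustrative:

```python
import numpy as np

def event_data_term(grad_L, flow, events, C=0.2):
    """Squared residual linking events, intensity gradient, and optical flow.

    Under brightness constancy, an event of polarity p at pixel (x, y) after
    time dt implies  grad_L[y, x] . flow[y, x] * dt ~= -p * C,  where C is
    the contrast threshold.  Illustrative sketch only.
    """
    cost = 0.0
    for x, y, dt, p in events:
        predicted = grad_L[y, x] @ flow[y, x] * dt   # predicted log-intensity change
        cost += (predicted + p * C) ** 2             # zero when the event is explained
    return cost
```

In a full method a term of this kind would be minimised jointly over the intensity and flow fields, together with spatial and temporal regularisers, over a sliding window of events.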

    Предисловие (Preface)
