
    In-situ comparison of the NOy instruments flown in MOZAIC and SPURT

    Two aircraft instruments for the measurement of total odd nitrogen (NOy) were compared side by side aboard a Learjet 35A in April 2003 during a campaign of the AFO2000 project SPURT (Spurengastransport in der Tropopausenregion). The instruments, albeit employing the same measurement principle (gold converter and chemiluminescence), had different inlet configurations. The ECO-Physics instrument operated by ETH-Zürich in SPURT had the gold converter mounted outside the aircraft, whereas the instrument operated by FZ-Jülich in the European project MOZAIC III (Measurements of ozone, water vapour, carbon monoxide and nitrogen oxides aboard Airbus A340 in-service aircraft) employed a Rosemount probe with 80 cm of FEP tubing connecting the inlet to the gold converter. The NOy concentrations during the flight ranged between 0.3 and 3 ppb. The two data sets were compared in a blind fashion, and each team followed its normal operating procedures. On average, the measurements agreed within 7%, i.e. within the combined uncertainty of the two instruments. This puts an upper limit on potential losses of HNO3 in the Rosemount inlet of the MOZAIC instrument. Larger transient deviations were observed during periods after calibrations and when the aircraft entered the stratosphere. The time lag of the MOZAIC instrument observed in these instances is in accordance with the time constant of the MOZAIC inlet line determined in the laboratory for HNO3.
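A blind side-by-side comparison like this reduces to a symmetric agreement metric between two co-located time series. The sketch below shows one such metric (relative deviation normalised by the pairwise mean, so neither instrument is taken as the reference); the NOy readings are invented for illustration and are not data from the campaign:

```python
import numpy as np

def mean_relative_deviation(a, b):
    """Mean relative deviation between two co-located time series.

    Deviations are normalised by the pairwise mean, so neither
    instrument is treated as the reference standard."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pair_mean = 0.5 * (a + b)
    return float(np.mean(np.abs(a - b) / pair_mean))

# Hypothetical NOy readings (ppb) in the 0.3-3 ppb range of the flight.
spurt = np.array([0.30, 0.95, 1.80, 2.40, 3.00])
mozaic = np.array([0.32, 0.90, 1.75, 2.50, 2.90])
dev = mean_relative_deviation(spurt, mozaic)  # here about 4%, within 7%
```

The metric is symmetric in its arguments, which matches the blind-comparison setup where neither team's instrument is the reference.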

    Procedure to Approximately Estimate the Uncertainty of Material Ratio Parameters due to Inhomogeneity of Surface Roughness

    Roughness parameters that characterize contacting surfaces with regard to friction and wear are commonly stated without uncertainties, or with an uncertainty taking into account only a very limited number of aspects, such as repeatability or reproducibility (homogeneity) of the specimen. This makes it difficult to discriminate between different values of single roughness parameters. Therefore, uncertainty assessment methods are required that take all relevant aspects into account. In the literature this is scarcely performed, and examples specific to parameters used in friction and wear are not yet given. We propose a procedure to derive the uncertainty from a single profile employing a statistical method that is based on the statistical moments of the amplitude distribution and the autocorrelation length of the profile. To show the possibilities and the limitations of this method we compare the uncertainty derived from a single profile with that derived from a high-statistics experiment.
    Comment: submitted to Meas. Sci. Technol., 12 figures
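The single-profile idea can be sketched as follows: the autocorrelation length sets an effective number of independent samples, and the amplitude moments then give a standard uncertainty for a roughness parameter. This is a minimal illustration of that general approach (applied here to the RMS roughness Rq on a synthetic profile), not the paper's actual procedure for material ratio parameters:

```python
import numpy as np

def autocorrelation_length(z, dx):
    """Lag at which the normalised autocorrelation first drops below 1/e."""
    z = np.asarray(z, float) - np.mean(z)
    acf = np.correlate(z, z, mode="full")[len(z) - 1:]
    acf /= acf[0]
    below = np.where(acf < 1.0 / np.e)[0]
    return below[0] * dx if below.size else len(z) * dx

def rq_with_uncertainty(z, dx):
    """RMS roughness Rq plus a moment-based standard uncertainty.

    The profile is treated as n_eff independent samples, with n_eff
    fixed by the autocorrelation length; var(Rq^2) follows from the
    4th central moment of the amplitude distribution."""
    z0 = np.asarray(z, float) - np.mean(z)
    rq = np.sqrt(np.mean(z0 ** 2))
    lac = autocorrelation_length(z, dx)
    n_eff = max(len(z0) * dx / (2.0 * lac), 1.0)
    m4 = np.mean(z0 ** 4)
    var_rq2 = (m4 - rq ** 4) / n_eff
    u_rq = np.sqrt(var_rq2) / (2.0 * rq)  # propagate through the sqrt
    return rq, u_rq

rng = np.random.default_rng(0)
profile = rng.normal(0.0, 0.5, 2000)  # synthetic profile heights (um)
rq, u = rq_with_uncertainty(profile, dx=1.0)
```

For a strongly correlated profile the same amplitude statistics would yield a much larger uncertainty, because n_eff shrinks with growing autocorrelation length.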

    HCU400: An Annotated Dataset for Exploring Aural Phenomenology Through Causal Uncertainty

    The way we perceive a sound depends on many aspects: its ecological frequency, acoustic features, typicality, and most notably, its identified source. In this paper, we present the HCU400: a dataset of 402 sounds ranging from easily identifiable everyday sounds to intentionally obscured artificial ones. It aims to lower the barrier for the study of aural phenomenology as the largest available audio dataset to include an analysis of causal attribution. Each sample has been annotated with crowd-sourced descriptions, as well as familiarity, imageability, arousal, and valence ratings. We extend existing calculations of causal uncertainty, automating and generalizing them with word embeddings. Upon analysis we find that individuals provide less polarized emotion ratings as a sound's source becomes increasingly ambiguous; individual ratings of familiarity and imageability, on the other hand, diverge as uncertainty increases despite a clear negative trend on average.
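Causal uncertainty measures of this kind are typically entropies over the distribution of source labels that listeners assign to a sound. The toy version below groups identical answers by exact string match; the dataset's contribution is to merge near-synonymous answers with word embeddings before computing the entropy, a step omitted here for self-containment:

```python
from collections import Counter
import math

def causal_uncertainty(labels):
    """Shannon entropy (bits) of the distribution of source labels.

    A toy stand-in for the embedding-based measure: answers are
    grouped by exact string match, whereas the HCU400 pipeline merges
    near-synonyms via word embeddings before computing entropy."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

clear = ["dog", "dog", "dog", "dog"]              # unambiguous source
obscure = ["wind", "ghost", "machine", "static"]  # ambiguous source
```

An easily identified sound yields 0 bits (everyone agrees), while four distinct answers yield log2(4) = 2 bits, the maximum for four responses.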

    Interactive digital art

    In this paper, we present DNArt in general, our work in DNArt’s lab, including a detailed presentation of the first artwork that came out of our lab in September 2011, entitled “ENCOUNTERS #3”, and the use of DNArt for digital art conservation. Research into the use of DNArt for digital art conservation is currently conducted by the Netherlands Institute for Media Art (Nederlands Instituut voor Mediakunst, NIMk). The paper describes this research and presents preliminary results. At the end, it offers the reader the possibility to participate in DNArt’s development.

    Towards automated visual flexible endoscope navigation

    Background:
    The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier to the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research.
    Methods:
    A systematic literature search was performed using three general search terms in two medical–technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included.
    Results:
    Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date.
    Conclusions:
    Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
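Of the two techniques named above, lumen centralization is the simpler to sketch: the lumen usually appears as the darkest region in the endoscopic image, so a steering vector can point from the image centre toward the centroid of the darkest pixels. The frame, threshold choice, and steering convention below are illustrative assumptions, not any reviewed system's algorithm:

```python
import numpy as np

def lumen_steering_vector(gray):
    """Lumen centralisation sketch: steer toward the centroid of the
    darkest image region, taken here as the lumen."""
    gray = np.asarray(gray, float)
    # Simple fixed fraction between min and max; real systems need
    # robust thresholding plus handling of bubbles and motion blur.
    thresh = gray.min() + 0.25 * (gray.max() - gray.min())
    ys, xs = np.nonzero(gray <= thresh)
    cy, cx = ys.mean(), xs.mean()
    h, w = gray.shape
    return cx - w / 2.0, cy - h / 2.0  # (dx, dy) offset from centre

# Synthetic frame: bright tissue everywhere, dark "lumen" at top-left.
frame = np.full((100, 100), 200.0)
frame[10:30, 10:30] = 5.0
dx, dy = lumen_steering_vector(frame)  # both negative: steer up-left
```

Visual odometry, the other technique, instead tracks image features across frames to estimate the endoscope's own motion.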

    Fast Monte Carlo Simulation for Patient-specific CT/CBCT Imaging Dose Calculation

    Recently, X-ray imaging dose from computed tomography (CT) or cone beam CT (CBCT) scans has become a serious concern. Patient-specific imaging dose calculation has been proposed for the purpose of dose management. While Monte Carlo (MC) dose calculation can be quite accurate for this purpose, it suffers from low computational efficiency. In response to this problem, we have successfully developed an MC dose calculation package, gCTD, on GPU architecture under the NVIDIA CUDA platform for fast and accurate estimation of the X-ray imaging dose received by a patient during a CT or CBCT scan. Techniques have been developed particularly for the GPU architecture to achieve high computational efficiency. Dose calculations using CBCT scanning geometry in a homogeneous water phantom and a heterogeneous Zubal head phantom have shown good agreement between gCTD and EGSnrc, indicating the accuracy of our code. In terms of improved efficiency, it is found that gCTD attains a speed-up of ~400 times in the homogeneous water phantom and ~76.6 times in the Zubal phantom compared to EGSnrc. As for absolute computation time, imaging dose calculation for the Zubal phantom can be accomplished in ~17 sec with an average relative standard deviation of 0.4%. Though our gCTD code has been developed and tested in the context of CBCT scans, with simple modification of geometry it can be used for assessing imaging dose in CT scans as well.
    Comment: 18 pages, 7 figures, and 1 table
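The MC principle behind such codes can be illustrated with a deliberately stripped-down model: photons travel free path lengths drawn from an exponential distribution and deposit energy at their first interaction. This toy ignores scatter, spectra, and patient geometry entirely, so it is a sketch of the statistical method, not of gCTD or EGSnrc:

```python
import numpy as np

def mc_depth_dose(n_photons, mu, depth, n_bins, rng):
    """Toy Monte Carlo depth-dose curve in a homogeneous slab.

    Photons attenuate with coefficient mu (1/cm) and deposit all
    energy at the first interaction site; full codes such as
    gCTD/EGSnrc additionally transport scattered photons and
    secondary electrons."""
    # Free path lengths are exponential with mean 1/mu.
    paths = rng.exponential(1.0 / mu, size=n_photons)
    interacted = paths[paths < depth]  # the rest exit the slab
    hist, _ = np.histogram(interacted, bins=n_bins, range=(0.0, depth))
    return hist / n_photons  # fraction of incident energy per depth bin

rng = np.random.default_rng(42)
dose = mc_depth_dose(100_000, mu=0.2, depth=10.0, n_bins=10, rng=rng)
```

Because each photon history is independent, this inner loop parallelises naturally, which is what makes GPU implementations so effective for MC dose calculation.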

    Eddy covariance raw data processing for CO2 and energy fluxes calculation at ICOS ecosystem stations

    Eddy covariance is a powerful technique to estimate the surface–atmosphere exchange of different scalars at the ecosystem scale. The EC method is central to the ecosystem component of the Integrated Carbon Observation System (ICOS), a monitoring network for greenhouse gases across the European continent. The data processing sequence applied to the collected raw data is complex, and multiple robust options for the different steps are often available. For ICOS and similar networks, the standardisation of methods is essential to avoid methodological biases and improve comparability of the results. We introduce here the steps of the processing chain applied to the eddy covariance data of ICOS stations for the estimation of final CO2, water and energy fluxes, including the calculation of their uncertainties. The selected methods are discussed against valid alternative options in terms of suitability and respective drawbacks and advantages. The main challenge is to warrant standardised processing for all stations in spite of the large differences in e.g. ecosystem traits and site conditions.
    The main achievement of the ICOS eddy covariance data processing is making CO2 and energy flux results as comparable and reliable as possible, given the current micrometeorological understanding and the generally accepted state-of-the-art processing methods.
    Sabbatini, Simone; Mammarella, Ivan; Arriga, Nicola; Fratini, Gerardo; Graf, Alexander; Hörtnagl, Lukas; Ibrom, Andreas; Longdoz, Bernard; Mauder, Matthias; Merbold, Lutz; Metzger, Stefan; Montagnani, Leonardo; Pitacco, Andrea; Rebmann, Corinna; Sedlák, Pavel; Šigut, Ladislav; Vitale, Domenico; Papale, Dario
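At the core of the processing chain sits the eddy covariance calculation itself: a flux is the covariance of vertical wind fluctuations and scalar concentration fluctuations over an averaging period. The sketch below shows only that core step on synthetic data; the actual ICOS chain adds despiking, coordinate rotation, time-lag compensation, spectral corrections, and uncertainty estimation around it:

```python
import numpy as np

def ec_flux(w, c):
    """Eddy covariance flux as cov(w', c') over one averaging period.

    Reynolds decomposition with simple block averaging; the ICOS
    processing chain applies many additional corrections before and
    after this step."""
    w = np.asarray(w, float)
    c = np.asarray(c, float)
    wp = w - w.mean()  # fluctuations about the block mean
    cp = c - c.mean()
    return float(np.mean(wp * cp))

# Synthetic 30-min block at 10 Hz with a built-in w-c correlation.
rng = np.random.default_rng(1)
n = 18_000
w = rng.normal(0.0, 0.3, n)                     # vertical wind (m/s)
c = 400.0 + 2.0 * w + rng.normal(0.0, 1.0, n)   # correlated scalar
flux = ec_flux(w, c)                            # close to 2 * var(w)
```

Standardising exactly how each of the surrounding corrections is applied is what makes fluxes from different ICOS stations comparable.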
