
    MicroPoem: experimental investigation of birch pollen emissions

    Diseases due to aeroallergens have increased steadily over the last decades and affect more and more people. Adequate protective and pre-emptive measures require both a reliable assessment of the production and release of various pollen species and the forecasting of their atmospheric dispersion. Pollen forecast models, which may be based either on statistical knowledge or on full physical transport and dispersion modeling, can provide pollen forecasts with full spatial coverage. Such models are currently being developed in many countries. The most important shortcoming of these pollen transport systems is the description of emissions, namely the dependence of the emission rate on physical processes such as turbulent exchange or mean transport, and on biological processes such as ripening (temperature) and readiness for release. The quantification of pollen emissions and the determination of the governing mesoscale and micrometeorological factors are therefore the subject of the present project MicroPoem, which includes experimental field work as well as numerical modeling. The overall goal of the project is to derive an emission parameterization based on meteorological parameters, eventually leading to improved pollen forecasts. In order to have a well-defined source location, an isolated birch stand was chosen for the set-up of a 'natural tracer experiment', which was conducted during the birch pollen season in spring 2009. The site was located in a broad valley, where a mountain-plains wind system usually becomes effective during clear-weather periods. This allowed us to assume a rather persistent wind direction and considerable wind speed during both day and night. Several micrometeorological towers were operated upwind and downwind of this reference source, and an array of 26 pollen traps was laid out to observe the spatio-temporal variability of pollen concentrations. Additionally, the lower boundary layer was probed by means of a sodar and a tethered balloon system (also yielding a pollen concentration profile). In the present contribution a project overview is given and first results are presented. Emphasis is put on the relative performance of different sampling technologies and the corresponding relative calibration in the lab and in the field. The concentration distribution downwind of the birch stand exhibits significant spatial (and temporal) variability. Small-scale numerical dispersion modeling will be used to infer the emission characteristics that best explain the observed concentration patterns.
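
    As a rough illustration of the kind of meteorology-driven emission parameterization the project aims at (this functional form is an assumption for illustration, not the parameterization derived in MicroPoem), one may think of a maximal emission rate modulated by dimensionless biological and turbulence factors:

        \dot{E}(t) \;=\; E_{\max}\; f_T\!\big(T(t)\big)\; f_{\mathrm{RH}}\!\big(\mathrm{RH}(t)\big)\; \frac{u_*(t)}{u_{*,\mathrm{ref}}}

    Here f_T and f_RH take values in [0, 1] and describe ripening and readiness for release, while the friction velocity u_* represents turbulent exchange.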

    Group invariance principles for causal generative models

    The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms. The interpretation of independence and the way it is utilized, however, vary across these methods. Our aim in this paper is to propose a group-theoretic framework for ICM that unifies and generalizes these approaches. In our setting, the cause-mechanism relationship is assessed by comparing it against a null hypothesis through the application of random generic group transformations. We show that the group-theoretic view provides a very general tool for studying the structure of data-generating mechanisms, with direct applications to machine learning.
    Comment: 16 pages, 6 figures
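
    A minimal sketch of the null-hypothesis idea for a linear mechanism, assuming random rotations as the group; the trace-based contrast below is one known ICM statistic and the variable names are hypothetical, so this is not necessarily the construction used in the paper:

        import numpy as np
        from scipy.stats import ortho_group

        def trace_contrast(A, Sigma_x):
            # Vanishes in expectation when the mechanism A is "generic" with
            # respect to the cause covariance Sigma_x (the ICM postulate).
            m, d = A.shape
            return (np.log(np.trace(A @ Sigma_x @ A.T) / m)
                    - np.log(np.trace(A @ A.T) / m)
                    - np.log(np.trace(Sigma_x) / d))

        def icm_group_test(A, Sigma_x, n_draws=1000, seed=0):
            # Null distribution: apply random group elements (here rotations)
            # to the mechanism and recompute the contrast each time.
            rng = np.random.default_rng(seed)
            d = A.shape[1]
            observed = trace_contrast(A, Sigma_x)
            null = np.array([trace_contrast(A @ ortho_group.rvs(d, random_state=rng),
                                            Sigma_x)
                             for _ in range(n_draws)])
            p_value = float(np.mean(np.abs(null) >= abs(observed)))
            return observed, p_value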

    Telling cause from effect in deterministic linear dynamical systems

    Inferring a cause from its effect using observed time series data is a major challenge in the natural and social sciences. Assuming the effect is generated from the cause through a linear system, we propose a new approach based on the hypothesis that nature chooses the "cause" and the "mechanism that generates the effect from the cause" independently of each other. We therefore postulate that the power spectrum of the cause time series is uncorrelated with the square of the transfer function of the linear filter generating the effect. While most causal discovery methods for time series rely mainly on noise, our method relies on asymmetries of the power spectral density that can be exploited even in the context of deterministic systems. We describe mathematical assumptions in a deterministic model under which the causal direction is identifiable with this approach. We also discuss the method's performance under the additive noise model and its relationship to Granger causality. Experiments show encouraging results on synthetic as well as real-world data. Overall, this suggests that the postulate of independence of cause and mechanism is a promising principle for causal inference on empirical time series.
    Comment: This article is under review for a peer-reviewed conference
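
    A naive reading of this postulate can be sketched as follows, assuming Welch estimates of the spectra; the paper's actual estimator and identifiability conditions are more careful than this illustration:

        import numpy as np
        from scipy.signal import welch, csd

        def spectral_score(x, y, fs=1.0, nperseg=256):
            # Correlation between the putative cause's power spectrum S_xx and
            # the squared gain |H|^2 of the estimated filter from x to y; the
            # ICM postulate says this should be small in the causal direction.
            _, Sxx = welch(x, fs=fs, nperseg=nperseg)
            _, Sxy = csd(x, y, fs=fs, nperseg=nperseg)
            gain2 = np.abs(Sxy / Sxx) ** 2
            return abs(np.corrcoef(Sxx, gain2)[0, 1])

        def infer_direction(x, y, **kw):
            # Prefer the direction with the weaker spectrum/gain coupling.
            sxy, syx = spectral_score(x, y, **kw), spectral_score(y, x, **kw)
            return "x -> y" if sxy <= syx else "y -> x"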

    A General Hilbert Space Approach to Framelets

    In arbitrary separable Hilbert spaces it is possible to define multiscale methods of constructive approximation based on product kernels, restricting their choice in certain ways. These wavelet techniques already have filtering and localization properties, and they are applicable in many areas due to their generalized definition. However, they lack detailed information about their stability and redundancy, which are frame properties. In this work, frame conditions are therefore introduced for approximation methods based on product kernels. In order to guarantee stability and redundancy, the choice of the product kernel ansatz function has to be restricted. Taking into account the kernel conditions for both multiscale and frame approximations, one is able to define wavelet frames (= framelets), which inherit the approximation properties of both techniques and provide a more precise tool for multiscale analysis than ordinary wavelets.
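
    For reference, the frame property invoked here is the standard two-sided stability condition (notation mine): a family {g_k} in a separable Hilbert space H is a frame if there exist constants 0 < A ≤ B < ∞ such that

        A\,\|f\|^{2} \;\le\; \sum_{k} \big|\langle f, g_k \rangle_{H}\big|^{2} \;\le\; B\,\|f\|^{2} \qquad \text{for all } f \in H.

    The lower bound guarantees stability (no f is annihilated by the analysis map), while the upper bound controls the redundancy of the family.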

    Harmonic Spline-Wavelets on the 3-dimensional Ball and their Application to the Reconstruction of the Earth's Density Distribution from Gravitational Data at Arbitrarily Shaped Satellite Orbits

    We introduce splines for the approximation of harmonic functions on a 3-dimensional ball. These splines are combined with a multiresolution concept. More precisely, at each step of improving the approximation we add more data and, at the same time, reduce the hat-width of the spline basis functions used. Finally, a convergence theorem is proved. One possible application, which is discussed in detail, is the reconstruction of the Earth's density distribution from gravitational data obtained at a satellite orbit. This is an exponentially ill-posed problem in which only the harmonic part of the density can be recovered, since its orthogonal complement generates the zero potential. Whereas classical approaches use a truncated singular value decomposition (TSVD), with well-known disadvantages such as the non-localizing character of the spherical harmonics used and the bandlimitedness of the solution, modern regularization techniques use wavelets, allowing a localized reconstruction via convolutions with kernels that are only essentially large in the region of interest. The essential remaining drawback of the TSVD and wavelet approaches is that the integrals (i.e. the inner product in the case of a TSVD and the convolution in the case of wavelets) are calculated on a spherical orbit, which is not given in reality. Thus, simplifying modelling assumptions, which certainly introduce a modelling error, have to be made. The splines introduced here have the important advantage that the given data need not be located on a sphere but may be (almost) arbitrarily distributed in the outer space of the Earth. This includes, in particular, the possibility of mixing data from different satellite missions (different orbits, different derivatives of the gravitational potential) in the calculation of the Earth's density distribution. Moreover, the approximating splines can be calculated at varying resolution scales, where the differences for increasing the resolution can be computed with the introduced spline-wavelet technique.
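
    For context, TSVD regularization has the following standard form (notation mine, not specific to this paper): if the forward operator T has the singular system (\sigma_n, v_n, u_n) with T v_n = \sigma_n u_n, the truncated solution of T f = g is

        f_N \;=\; \sum_{n=1}^{N} \frac{\langle g, u_n \rangle}{\sigma_n}\, v_n ,

    where the truncation index N discards the small singular values that would amplify data noise; "exponentially ill-posed" means the \sigma_n decay exponentially fast.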

    Smoothing of Piecewise Linear Paths

    We present an anytime-capable, fast, deterministic greedy algorithm for smoothing piecewise linear paths consisting of connected linear segments. With this method, path points with only a small influence on the path geometry (i.e. aligned or nearly aligned points) are successively removed. Due to the removal of less important path points, the computational and memory requirements of the paths are reduced, and traversing the path is accelerated. Our algorithm can be used in many different applications, e.g. sweeping, path finding, programming by demonstration in a virtual environment, or 6D CNC milling. The algorithm handles points with positional and orientational coordinates of arbitrary dimension.
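
    A minimal sketch of such a greedy removal loop for positional coordinates of arbitrary dimension (illustrative only; the paper's algorithm also handles orientational coordinates, and its exact influence measure may differ):

        import numpy as np

        def smooth_path(points, tol):
            # Greedily drop the interior point whose removal perturbs the path
            # least, as long as that perturbation stays below `tol`.
            pts = [np.asarray(p, dtype=float) for p in points]

            def deviation(i):
                # Distance of pts[i] from the segment joining its neighbours.
                a, b, p = pts[i - 1], pts[i + 1], pts[i]
                ab = b - a
                denom = float(np.dot(ab, ab))
                t = 0.0 if denom == 0.0 else min(max(np.dot(p - a, ab) / denom, 0.0), 1.0)
                return float(np.linalg.norm(p - (a + t * ab)))

            while len(pts) > 2:
                i = min(range(1, len(pts) - 1), key=deviation)
                if deviation(i) > tol:
                    break      # anytime property: the current path is always valid
                del pts[i]     # aligned or nearly aligned point: remove it
            return pts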

    The Cost Impact of Spam Filters: Measuring the Effect of Information System Technologies in Organizations

    More than 70% of global e-mail traffic consists of unsolicited commercial direct marketing, also known as spam. Dealing with spam incurs high costs for organizations, prompting them to try to reduce spam-related costs by installing spam filters. Using modern econometric methods to reduce the selection bias of installing a spam filter, we exploit a unique data setting implemented at a German university to measure the costs associated with spam and the cost savings of spam filters. The applied methodological framework can easily be transferred to estimate the effect of other IS technologies (e.g., SAP) implemented in organizations. Our findings indicate that central IT costs are of little relevance, since the majority of spam costs stem from employees who spend working time identifying and deleting spam. The working-time losses caused by spam are approximately 1,200 minutes per employee per year; these costs could be reduced by roughly 35% through the installation of a spam filter mechanism. The individual efficiency of a spam filter installation depends on the amount of spam that is received and on the level of knowledge about spam.
    Keywords: propensity score matching, treatment effects, spam filter, spam
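
    To illustrate the econometric framework, here is a toy nearest-neighbour propensity-score-matching sketch; all variable names are hypothetical and this is not the authors' estimation code:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def att_psm(X, has_filter, minutes_lost):
            # Propensity score: probability of having a spam filter given the
            # covariates X (e.g. spam volume, knowledge about spam).
            ps = LogisticRegression(max_iter=1000).fit(X, has_filter).predict_proba(X)[:, 1]
            t = np.flatnonzero(has_filter == 1)
            c = np.flatnonzero(has_filter == 0)
            # Match each treated unit to the control unit with the closest
            # propensity score (nearest neighbour, with replacement).
            matches = c[np.abs(ps[c][None, :] - ps[t][:, None]).argmin(axis=1)]
            # Average treatment effect on the treated: mean outcome difference.
            return float(np.mean(minutes_lost[t] - minutes_lost[matches]))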

    The kinematics of the diffuse ionized gas in NGC 4666

    The global properties of the interstellar medium, with processes such as infall and outflow of gas and a large-scale circulation of matter, and their consequences for star formation and chemical enrichment, are important for the understanding of galaxy evolution. In this paper we studied the kinematics and morphology of the diffuse ionized gas (DIG) in the disk and in the halo of the star-forming spiral galaxy NGC 4666 to derive information about its kinematical properties. In particular, we searched for infalling and outflowing ionized gas. We determined the surface brightness, radial velocity, and velocity dispersion of the warm ionized gas via high-spectral-resolution (R ~ 9000) Fabry-Pérot interferometry. This allows the determination of the global velocity field and the detection of local deviations from it. We calculated models of the DIG distribution and its kinematics for comparison with the measured data. In this way we determined fundamental parameters such as the inclination and the scale height of NGC 4666, and established the need for an additional gas component to fit our observed data. We found individual areas, especially along the minor axis, with gas components reaching into the halo, which we interpret as an outflowing component of the diffuse ionized gas. As the main result of our study, we were able to determine that the vertical structure of the DIG distribution in NGC 4666 is best modeled with two components of ionized gas, a thick and a thin disk with 0.8 kpc and 0.2 kpc scale height, respectively. Therefore, the enhanced star formation in NGC 4666 drives an outflow and also maintains a thick ionized gas layer reminiscent of the Reynolds layer in the Milky Way.
    Comment: 12 pages, 10 figures, 3 tables
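
    The two-component vertical structure can be summarized by a double-exponential layer model; the exponential form is the usual assumption for DIG layers, and only the two scale heights come from the abstract:

        n_{\mathrm{DIG}}(z) \;=\; n_{\mathrm{thin}}\, e^{-|z|/h_{\mathrm{thin}}} \;+\; n_{\mathrm{thick}}\, e^{-|z|/h_{\mathrm{thick}}}, \qquad h_{\mathrm{thin}} \approx 0.2\ \mathrm{kpc}, \quad h_{\mathrm{thick}} \approx 0.8\ \mathrm{kpc}.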

    On the efficiency and correction of vertically oriented blunt bioaerosol samplers in moving air

    The aspiration efficiency of vertically and wind-oriented Air-O-Cell samplers was investigated in a field study using the pollen of hazel, sweet chestnut, and birch. Collected pollen numbers were compared to measurements of a Hirst-type Burkard spore trap. The discrepancy between pollen counts is substantial in the case of vertical orientation. The results indicate a strong influence of wind velocity and of the inlet orientation relative to the freestream on the aspiration efficiency. Various studies have reported inertial effects on aerosol motion as a function of wind velocity. The measurements were compared to a physically based model for the limiting case of vertical blunt samplers. Additionally, a simple linear model based on pollen counts and wind velocity was developed. Both correction models notably reduce the error of vertically oriented samplers, whereas only the physically based model can be used on independent datasets. The study also addressed the precision error of the instruments used, which was substantial for both sampler types.
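
    A minimal sketch of the simple linear correction mentioned above; the abstract does not give the exact functional form or coefficients, so the two-regressor form below is an assumption:

        import numpy as np

        def fit_linear_correction(counts_raw, wind, counts_ref):
            # Ordinary least squares for  counts_ref ≈ a + b*counts_raw + c*wind,
            # relating vertical-sampler counts and wind speed to reference counts.
            A = np.column_stack([np.ones_like(counts_raw), counts_raw, wind])
            coef, *_ = np.linalg.lstsq(A, counts_ref, rcond=None)
            return coef  # (a, b, c)

        def apply_linear_correction(coef, counts_raw, wind):
            a, b, c = coef
            return a + b * counts_raw + c * wind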