
    On the Computation Power of Name Parameterization in Higher-order Processes

    Parameterization extends higher-order processes with the capability of abstraction (akin to that in the lambda-calculus), and is known to enhance expressiveness. This paper focuses on the parameterization of names, i.e. a construct that maps a name to a process, in the higher-order setting. We provide two results concerning its computational capacity. First, name parameterization yields a complete model, in the sense that it can express an elementary interactive model with built-in recursive functions. Second, we compare name parameterization with the well-known pi-calculus and provide two encodings between them.
    Comment: In Proceedings ICE 2015, arXiv:1508.0459
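    As a purely illustrative sketch (not the paper's calculus), the following Python data types show where name parameterization sits in a higher-order process syntax: alongside sending and receiving processes, an abstraction NameAbs maps a name to a process and can later be applied to a concrete channel name. All constructor and field names here are invented for exposition.

        from dataclasses import dataclass
        from typing import Union

        @dataclass
        class Name:              # a channel name
            label: str

        @dataclass
        class Send:              # a<P>.0 : send process P on channel a
            chan: Name
            payload: "Proc"

        @dataclass
        class Recv:              # a(X).Q : receive a process into process variable X
            chan: Name
            var: str
            body: "Proc"

        @dataclass
        class NameAbs:           # <x>P : name parameterization, maps a name x to a process P
            param: str
            body: "Proc"

        @dataclass
        class NameApp:           # F[a] : apply a name abstraction to a concrete name
            fun: "Proc"
            arg: Name

        Proc = Union[Send, Recv, NameAbs, NameApp]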

    High-Performance Inference Graph Convolutional Networks for Skeleton-Based Action Recognition

    Recently, significant achievements have been made in skeleton-based human action recognition with the emergence of graph convolutional networks (GCNs). However, the state-of-the-art (SOTA) models used for this task focus on constructing ever more complex higher-order connections between joint nodes to describe skeleton information, which leads to complex inference and high computational cost, reducing the models' practicality. To address the slow inference speed caused by overly complex model structures, we introduce re-parameterization and over-parameterization techniques to GCNs and propose two novel high-performance inference graph convolutional networks, HPI-GCN-RP and HPI-GCN-OP. HPI-GCN-RP applies re-parameterization to GCNs to achieve higher inference speed with competitive model performance. HPI-GCN-OP further uses over-parameterization to bring a significant performance improvement at a slight cost in inference speed. Experimental results on two skeleton-based action recognition datasets demonstrate the effectiveness of our approach. Our HPI-GCN-OP achieves an accuracy of 93% on the cross-subject split of the NTU-RGB+D 60 dataset and 90.1% on the cross-subject benchmark of the NTU-RGB+D 120 dataset, and is 4.5 times faster than HD-GCN at the same accuracy.
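    Structural re-parameterization is typically realized by folding training-time branches into a single equivalent operator for inference; the canonical step is fusing a convolution with its following batch normalization. The PyTorch-style sketch below shows that generic folding step only; it is not the HPI-GCN code and makes no claim about the paper's exact architecture.

        import torch
        import torch.nn as nn

        def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
            """Fold BatchNorm statistics into the preceding convolution (inference only)."""
            fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                              stride=conv.stride, padding=conv.padding,
                              dilation=conv.dilation, groups=conv.groups, bias=True)
            # scale = gamma / sqrt(running_var + eps), applied per output channel
            scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
            fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
            conv_bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
            fused.bias.data = bn.bias.data + (conv_bias - bn.running_mean) * scale
            return fused

    After fusion the network executes a single convolution per branch, which is where the inference-speed gain of re-parameterized models generally comes from.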

    A statistical–numerical aerosol parameterization scheme

    A new modal aerosol parameterization scheme, the statistical–numerical aerosol parameterization (SNAP), was developed for studying aerosol processes and aerosol–cloud interactions in regional or global models. SNAP applies statistical fitting to numerical results to generate accurate parameterization formulas without sacrificing details of the growth kernel. Processes considered in SNAP include fundamental aerosol processes as well as processes related to aerosol–cloud interactions. Comparison of SNAP with numerical solutions, analytical solutions, and binned aerosol model simulations showed that the new method performs well, with accuracy higher than that of the high-order numerical quadrature technique and with much less computation time. The SNAP scheme has been implemented in regional air quality models, producing results very close to those of binned-size schemes or numerical quadrature schemes.
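    The core idea, fitting a closed-form formula to offline numerical solutions of an aerosol process so the formula can replace the expensive integration at run time, can be sketched generically. The functional form, variable names, and coefficients below are placeholders, not the actual SNAP formulas.

        import numpy as np
        from scipy.optimize import curve_fit

        # Placeholder "training data": offline numerical solutions of some growth
        # rate as a function of modal mean diameter d (um) and number concentration
        # n (cm^-3). In SNAP these would come from the full growth kernel.
        d = np.logspace(-2, 1, 50)
        n = np.full_like(d, 1000.0)
        rate_numerical = 3.0e-4 * d**1.7 * n          # stand-in for the numerical solution

        def fitted_rate(X, a, p):
            """Hypothetical parameterization formula: rate = a * d**p * n."""
            d, n = X
            return a * d**p * n

        popt, _ = curve_fit(fitted_rate, (d, n), rate_numerical, p0=(1e-4, 1.5))
        a_fit, p_fit = popt   # the fitted constants become the run-time parameterization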

    A comprehensive parameterization of heterogeneous ice nucleation of dust surrogate: laboratory study with hematite particles and its application to atmospheric models

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, constrained by identical experimental conditions, is important for accurately simulating the ice nucleation processes in cirrus clouds. The ice nucleation active surface-site density (ns) of hematite particles, used as a proxy for atmospheric dust particles, was derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions. These conditions were achieved by continuously changing the temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T lower than -60 °C revealed that higher RHice was necessary to maintain a constant ns, whereas T may have played a significant role in ice nucleation at T higher than -50 °C. We implemented the new hematite-derived ns parameterization, which agrees well with previous AIDA measurements of desert dust, in two conceptual cloud models to investigate their sensitivity to the new parameterization, in comparison with existing ice nucleation schemes, for simulating cirrus cloud properties. Our results show that the new AIDA-based parameterization leads to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in lower-temperature regions. Our cloud simulations suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties, such as cloud longevity and initiation, than implied by previous parameterizations.
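    In the surface-site-density framework, a parameterized ns(T, RHice) is converted into an ice crystal number by weighting it with the available aerosol surface area, N_ice = N_aer * (1 - exp(-ns * A_particle)). The sketch below illustrates only that conversion; the ns form and coefficients are invented placeholders, not the hematite-derived fit of the paper.

        import numpy as np

        def n_s(T_celsius, rh_ice):
            # Placeholder surface-site density [m^-2]: increases toward colder T and
            # higher RH_ice. Coefficients are invented for illustration, NOT the
            # AIDA/hematite-derived parameterization.
            return 1.0e6 * np.exp(-0.2 * (T_celsius + 36.0)) * np.clip(rh_ice - 1.0, 0.0, None)

        def ice_crystal_number(n_aer, area_per_particle, T_celsius, rh_ice):
            """ns formulation: N_ice = N_aer * (1 - exp(-n_s * A_particle))."""
            return n_aer * (1.0 - np.exp(-n_s(T_celsius, rh_ice) * area_per_particle))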

    Universal Non-perturbative Functions for SIDIS and Drell-Yan Processes

    We update the well-known BLNY fit to low-transverse-momentum Drell-Yan lepton pair production in hadronic collisions by including the constraints from semi-inclusive hadron production in deep inelastic scattering (SIDIS) measured by the HERMES and COMPASS experiments. We follow the Collins-Soper-Sterman (CSS) formalism with the b_*-prescription. A universal non-perturbative form factor associated with the transverse momentum dependent quark distributions is found in the analysis, with a new functional form different from that of BLNY. This releases the tension between the BLNY fit to the Drell-Yan data and the SIDIS data from HERMES/COMPASS in the CSS resummation formalism.
    Comment: 19 pages, 11 figures; the fit was updated with running effects of \alpha_{s}, \alpha_{em}, N_f; the conclusion remains unchanged; more discussion of the results is included
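    For orientation, the b_*-prescription regulates the large-b (non-perturbative) region of the CSS b-space resummation, schematically

        b_* = \frac{b}{\sqrt{1 + b^2/b_{\max}^2}}, \qquad
        W(b, Q) = W_{\mathrm{pert}}(b_*, Q)\, e^{-S_{\mathrm{NP}}(b, Q)},

    where the BLNY-type fits take a quadratic-in-b non-perturbative factor, roughly S_NP(b,Q) = [g_1 + g_2 \ln(Q/2Q_0) + g_1 g_3 \ln(100 x_1 x_2)]\, b^2; the abstract's point is that the updated SIDIS-constrained fit replaces this with a different functional form determined in the paper.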

    Testing the meson cloud in the nucleon in Drell-Yan processes

    We discuss the present status of the \bar u-\bar d asymmetry in the nucleon and analyze the quantities which are best suited to verify the asymmetry. We find that the Drell-Yan asymmetry is insensitive to the valence quark distributions and very sensitive to the flavour asymmetry of the sea. We compare the predictions of the meson cloud model with different experimental data, including the Fermilab E772 data and recent data of the NA51 Collaboration at CERN, and make predictions for the planned Drell-Yan experiments.
    Comment: written in ReVTeX, 26 pages + 10 PS figures
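    As a reminder of the quantity involved (standard definitions, not the paper's specific numerical results): the Drell-Yan asymmetry is built from proton-proton and proton-neutron cross sections, and at leading order the Drell-Yan cross section is a sum of quark-antiquark products, which is why the asymmetry probes \bar d - \bar u while the valence distributions largely cancel,

        A_{DY} = \frac{\sigma^{pp} - \sigma^{pn}}{\sigma^{pp} + \sigma^{pn}}, \qquad
        \frac{d\sigma^{DY}}{dx_1\, dx_2} \propto \sum_q e_q^2 \left[ q(x_1)\,\bar q(x_2) + \bar q(x_1)\, q(x_2) \right].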

    Simulating model uncertainty of subgrid-scale processes by sampling model errors at convective scales

    Ideally, perturbation schemes in ensemble forecasts should be based on the statistical properties of the model errors. Often, however, the statistical properties of these model errors are unknown. In practice, the perturbations are pragmatically modelled and tuned to maximize the skill of the ensemble forecast. In this paper a general methodology is developed to diagnose the model error, linked to a specific physical process, based on a comparison between a target and a reference model. Here, the reference model is a configuration of the ALADIN (Aire Limitée Adaptation Dynamique Développement International) model with a parameterization of deep convection. This configuration is also run with the deep-convection parameterization scheme switched off, degrading the forecast skill. The model error is then defined as the difference in the energy and mass fluxes between the reference model with scale-aware deep-convection parameterization and the target model without deep-convection parameterization. In the second part of the paper, the diagnosed model-error characteristics are used to stochastically perturb the fluxes of the target model by sampling the model errors from a training period in such a way that the distribution and the vertical and multivariate correlation within a grid column are preserved. By perturbing the fluxes it is guaranteed that the total mass, heat and momentum are conserved. The tests, performed over the period 11–20 April 2009, show that the ensemble system with the stochastic flux perturbations combined with the initial condition perturbations not only outperforms the target ensemble, where deep convection is not parameterized, but for many variables it even performs better than the reference ensemble (with the scale-aware deep-convection scheme). The introduction of the stochastic flux perturbations reduces the small-scale erroneous spread while increasing the overall spread, leading to a more skillful ensemble. The impact is largest in the upper troposphere, with substantial improvements compared to other state-of-the-art stochastic perturbation schemes. At lower levels the improvements are smaller or neutral, except for temperature, where the forecast skill is degraded.
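    A minimal sketch of the column-wise sampling idea: drawing whole grid-column error profiles from a training archive preserves the vertical and multivariate correlations by construction. Array shapes, variable names, and the synthetic archive below are illustrative only, not the ALADIN implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in for the archive of diagnosed flux errors over a training period:
        # shape (n_samples, n_levels, n_vars), e.g. vars = (mass, heat, momentum) fluxes.
        error_archive = rng.standard_normal((500, 60, 3))

        def perturb_fluxes(target_fluxes):
            """Add one randomly drawn archived error profile to each grid column.

            target_fluxes has shape (n_columns, n_levels, n_vars). Sampling whole
            columns keeps the vertical and multivariate correlation of the diagnosed
            model error within each column intact."""
            idx = rng.integers(error_archive.shape[0], size=target_fluxes.shape[0])
            return target_fluxes + error_archive[idx]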