    Process behavior and product quality in fertilizer manufacturing using continuous hopper transfer pan granulation—Experimental investigations

    Fertilizers are commonly used to improve soil quality in both conventional and organic agriculture. One such fertilizer is dolomite, for which soil application in granulated form is advantageous. These granules are commonly produced from ground dolomite powder in continuous pan transfer granulators. During production, the granulator’s operation parameters affect the granules’ properties and thereby also the overall performance of the fertilizer. To ensure product granules of given specifications and an efficient overall production, process control and intensification approaches based on mathematical models can be applied. However, such models require high-quality quantitative experimental data describing the effects of the process operation parameters on the granule properties. Therefore, in this article, such data is presented for a lab-scale experimental setup. Investigations were carried out into how variations in binder spray rate, binder composition, feed powder flow rate, pan inclination angle, and angular velocity affect particle size distribution, mechanical stability, and humidity. Furthermore, in contrast to existing work, samples of both pan granules and product granules are analyzed, and the influence of operation parameter variations on the differences between the two, also known as trajectory separation, is described quantitatively. The results indicate an increase in the average particle size with an increasing ratio of binder flow rate to feed rate, increasing binder concentration, and increasing pan inclination angle. Compressive strength varied significantly depending on the operating parameters, and significant differences in properties were observed between the product and the intermediate (pan) samples. In fact, for some operation parameters, e.g., the binder feed rate, the magnitude of the separation effect strongly depends on the specific value of that parameter. The presented data will enable future mathematical modeling of the pan granulation process, e.g., using the framework of population balance equations.
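    As context for the modeling outlook in the last sentence, a one-dimensional population balance for the granule number density n(v, t) over granule volume v might, in a minimal sketch, take the standard aggregation form (generic textbook notation; the article itself does not give the equations):

        \frac{\partial n(v,t)}{\partial t}
            = \frac{1}{2} \int_0^v \beta(v-u, u)\, n(v-u, t)\, n(u, t)\, \mathrm{d}u
            - n(v,t) \int_0^\infty \beta(v, u)\, n(u, t)\, \mathrm{d}u
            + \dot{n}_{\mathrm{feed}}(v) - \dot{n}_{\mathrm{out}}(v, t)

    Here \beta is an aggregation kernel, and the last two terms represent the continuous powder/binder feed and product withdrawal; calibrating \beta and the flow terms against operation parameters is exactly where data such as that reported here would enter.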

    On Convergence Properties of Shannon Entropy

    Convergence properties of Shannon entropy are studied. In the differential setting, it is shown that weak convergence of probability measures, or convergence in distribution, is not enough for convergence of the associated differential entropies. A general result for the desired differential entropy convergence is provided, taking into account both compactly and non-compactly supported densities. Convergence of differential entropy is also characterized in terms of the Kullback-Leibler discriminant for densities with fairly general supports, and it is shown that convergence in variation of probability measures guarantees such convergence under an appropriate boundedness condition on the densities involved. Results for the discrete setting are also provided, allowing for infinitely supported probability measures, by taking advantage of the equivalence between weak convergence and convergence in variation in this setting.

    Comment: Submitted to IEEE Transactions on Information Theory
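    For reference, the quantities involved are the standard ones (definitions only; the convergence results are the paper's contribution). The differential entropy of a density f and the Kullback-Leibler discriminant between densities f and g are

        h(f) = -\int f(x) \log f(x)\, \mathrm{d}x,
        \qquad
        D(f \,\|\, g) = \int f(x) \log \frac{f(x)}{g(x)}\, \mathrm{d}x.

    The negative result says that weak convergence f_n \Rightarrow f alone does not force h(f_n) \to h(f); for instance, oscillating densities such as f_n(x) = 1 + \cos(2\pi n x) on [0, 1] converge in distribution to the uniform density, while their entropies stay constant at a value strictly below h(\mathrm{uniform}) = 0.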

    Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path

    We consider the problem of finding a near-optimal policy in continuous-space, discounted Markovian decision problems, given the trajectory of some behaviour policy. We study the policy iteration algorithm where, in successive iterations, the action-value functions of the intermediate policies are obtained by picking a function from some fixed function set (chosen by the user) that minimizes an unbiased finite-sample approximation to a novel loss function that upper-bounds the unmodified Bellman-residual criterion. The main result is a finite-sample, high-probability bound on the performance of the resulting policy that depends on the mixing rate of the trajectory, the capacity of the function set as measured by a novel capacity concept that we call the VC-crossing dimension, the approximation power of the function set, and the discounted-average concentrability of the future-state distribution. To the best of our knowledge, this is the first theoretical reinforcement learning result for off-policy control learning over continuous state spaces using a single trajectory.
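    The modified loss and the VC-crossing analysis are the paper's own and are not reproduced here. Purely as an illustration of the algorithmic skeleton, the following Python sketch runs policy iteration on a single stored trajectory, evaluating each policy by minimizing the plain (unmodified, hence biased in stochastic environments) squared Bellman residual over a linear function set. All names (traj, phi, actions) and parameter values are assumptions.

        import numpy as np

        def fitted_policy_iteration(traj, phi, actions, gamma=0.99, n_iters=10):
            """Sketch: policy iteration with squared-Bellman-residual
            minimization over the linear set {Q_w(s,a) = w . phi(s,a)}.
            traj: list of (s, a, r, s_next) from one behaviour trajectory."""
            dim = phi(traj[0][0], traj[0][1]).shape[0]
            w = np.zeros(dim)

            def greedy(s, weights):
                # Policy improvement: action maximizing the current Q estimate.
                return max(actions, key=lambda a: phi(s, a) @ weights)

            for _ in range(n_iters):
                w_pi = w.copy()  # freeze the policy being evaluated
                A, b = [], []
                for s, a, r, s_next in traj:
                    # residual_t(w) = w.phi(s,a) - r - gamma * w.phi(s', pi(s'))
                    A.append(phi(s, a) - gamma * phi(s_next, greedy(s_next, w_pi)))
                    b.append(r)
                # Least squares minimizes sum_t residual_t(w)^2 in closed form.
                w, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)

            return lambda s: greedy(s, w)  # greedy policy w.r.t. the final Q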

    Regularized fitted Q-iteration: application to planning

    We consider planning in a Markovian decision problem, i.e., the problem of finding a good policy given access to a generative model of the environment. We propose to use fitted Q-iteration with penalized (or regularized) least-squares regression as the regression subroutine to address the problem of controlling model complexity. The algorithm is presented in detail for the case when the function space is a reproducing kernel Hilbert space underlying a user-chosen kernel function. We derive bounds on the quality of the solution and argue that data-dependent penalties can lead to almost optimal performance. A simple example is used to illustrate the benefits of using a penalized procedure.
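    The paper's algorithm and bounds are stated for a reproducing kernel Hilbert space; as an illustrative sketch only (names, kernel, and parameter values are assumptions, not the authors' notation), fitted Q-iteration with a kernel ridge regression subroutine, whose penalty lam * ||f||_H^2 is the model-complexity control discussed above, can be set up as follows.

        import numpy as np

        def rbf_kernel(X, Y, sigma=1.0):
            # Gaussian kernel matrix between the rows of X and Y.
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma**2))

        def regularized_fqi(S, A, R, S_next, actions, gamma=0.99, lam=1e-2,
                            n_iters=20):
            """Sketch: fitted Q-iteration with one kernel ridge regressor per
            discrete action. S, A, R, S_next are samples from a generative
            model; actions is the finite action set."""
            models = {}
            for a in actions:
                idx = np.where(A == a)[0]
                K = rbf_kernel(S[idx], S[idx])
                models[a] = (idx, K, np.zeros(len(idx)))  # rows, Gram, dual coefs

            def Q(states, a):
                idx, _, alpha = models[a]
                return rbf_kernel(states, S[idx]) @ alpha

            for _ in range(n_iters):
                # Regression targets: one application of the Bellman optimality
                # operator to the current Q estimate.
                y = R + gamma * np.max([Q(S_next, a) for a in actions], axis=0)
                for a in actions:
                    idx, K, _ = models[a]
                    n = len(idx)
                    # Kernel ridge: alpha = (K + lam * n * I)^{-1} y_a, i.e. the
                    # penalized least-squares fit in the RKHS of rbf_kernel.
                    alpha = np.linalg.solve(K + lam * n * np.eye(n), y[idx])
                    models[a] = (idx, K, alpha)

            return Q  # act greedily: argmax_a Q(s, a)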

    Measurement of the W boson mass

    Mathematical Modelling Method Application for Optimisation of Catalytic Reforming Process

    The application of a mathematical modelling method to the monitoring of the catalytic reforming unit of the Komsomolsk oil refinery is proposed. The mathematical model-based system “Catalyst's Control”, which takes into account both the physical and chemical mechanisms of the hydrocarbon mixture conversion reactions and the catalyst deactivation, was used for monitoring the catalytic reforming installation. The models created can be used for optimization and prediction of the operating parameters of the reforming process (octane number, reactor outlet temperature, and yield). It is shown that operating the catalyst at its optimal activity allows increasing product output at a constant level of production costs, and provides information about the working efficiency of the Pt-Re catalyst.
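    The abstract does not reproduce the model equations. Purely as a hypothetical illustration of the kind of deactivation term such a system accounts for, a lumped first-order activity law combined with an Arrhenius rate might look as follows (all constants and names here are invented for the sketch, not taken from the “Catalyst's Control” system).

        import numpy as np

        R_GAS = 8.314  # universal gas constant, J/(mol*K)

        def catalyst_activity(t_days, k_d=0.002):
            """Relative activity under first-order deactivation: a(t) = exp(-k_d t).
            k_d is a hypothetical deactivation constant (1/day)."""
            return np.exp(-k_d * np.asarray(t_days))

        def lumped_rate(T_kelvin, t_days, k0=1.0e5, E=80e3, k_d=0.002):
            """Activity-corrected Arrhenius rate for a lumped reforming reaction:
            r = a(t) * k0 * exp(-E / (R * T)). k0 and E are placeholders."""
            return catalyst_activity(t_days, k_d) * k0 * np.exp(-E / (R_GAS * T_kelvin))

        # As activity decays, holding conversion (and thus octane number) constant
        # requires raising the reactor outlet temperature; this trade-off between
        # output and catalyst life is what such a monitoring system tracks.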

    Data production models for the CDF experiment

    The data production for the CDF experiment is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production models that exploit advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm has been migrated to the SAM (Sequential Access to data via Metadata) data handling system. The software and hardware of the CDF production farms have been successful in providing large computing and data throughput capacity to the experiment.

    Comment: 8 pages, 9 figures; presented at HPC Asia 2005, Beijing, China, Nov 30 - Dec 3, 2005

    Factors influencing awareness of community-based shorebird conservation projects in Australia

    We examine the awareness of potential volunteers (n = 360) living near nine community-based shorebird conservation projects. About half of the people sampled (54%) were unaware of the nearest project. Awareness of interviewees varied substantially among projects (28-78%). Apart from gaining awareness of projects through membership of natural history groups (43%), many respondents heard of projects through friends and relatives (20%), rather than through media such as newspapers (14%) and television (2.3%). We demonstrate that community-based projects can be quantitatively and critically assessed for awareness. The use of rapid, cost-effective assessments of awareness levels has application in many conservation projects.

    Data processing model for the CDF experiment

    The data processing model for the CDF experiment is described. Data processing reconstructs events from parallel data streams taken with different combinations of physics event triggers and further splits the events into specialized physics datasets. The design of the processing control system faces strict requirements on bookkeeping records, which trace the status of data files and event contents during processing and storage. The computing architecture was updated to meet the mass data flow of the Run II data collection, recently upgraded to a maximum rate of 40 MByte/sec. The data processing facility consists of a large cluster of Linux computers, with data movement managed by the CDF data handling system to a multi-petabyte Enstore tape library. The latest processing cycle has achieved a stable speed of 35 MByte/sec (3 TByte/day). It can be readily scaled by increasing CPU and data-handling capacity as required.

    Comment: 12 pages, 10 figures, submitted to IEEE-TNS
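    The quoted figures are mutually consistent, as a quick back-of-the-envelope check (simple arithmetic, not taken from the paper) shows:

        35\ \mathrm{MByte/s} \times 86400\ \mathrm{s/day} \approx 3.0\ \mathrm{TByte/day},

    while the 40 MByte/sec maximum of the upgraded data logging corresponds to about 3.5 TByte/day, so the achieved processing speed keeps pace with all but peak-rate data taking.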