
    Freeze-drying modeling and monitoring using a new neuro-evolutive technique

    This paper focuses on the design of a black-box model for the freeze-drying process of pharmaceuticals. A new methodology based on a self-adaptive differential evolution scheme, combined with a back-propagation algorithm as a local search method, is applied to the simultaneous structural and parametric optimization of the model, which is represented by a neural network. Using the model of the freeze-drying process, both the temperature and the residual ice content in the product vs. time can be determined off-line, given the values of the operating conditions (the temperature of the heating shelf and the pressure in the drying chamber). This makes it possible to determine whether the maximum temperature allowed by the product is exceeded and when sublimation drying is complete, thus providing a valuable tool for recipe design and optimization. In addition, the black-box model can be applied to monitor the freeze-drying process: in this case, the measurement of product temperature is used as an input variable of the neural network in order to provide in-line estimation of the state of the product (temperature and residual amount of ice). Various examples are presented and discussed, pointing out the strengths of the tool.
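    The combination described above can be sketched in a few lines. The following is a toy illustration, not the paper's method: it evolves the weights of a tiny neural network with plain DE/rand/1/bin on synthetic data, whereas the paper uses a self-adaptive scheme that also optimizes the network structure and adds back-propagation as a local search. All data, sizes and constants below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for freeze-drying records: inputs are
# (shelf temperature, chamber pressure, time), outputs are (product
# temperature, residual ice fraction), all synthetic and pre-scaled.
X = rng.uniform(-1, 1, size=(64, 3))
Y = np.tanh(X @ rng.normal(size=(3, 2)))      # surrogate "process" to emulate

HIDDEN = 5
DIM = 3 * HIDDEN + HIDDEN * 2                 # flat weight-vector length

def mlp(w, X):
    # Unpack a flat weight vector into a 3-HIDDEN-2 network and evaluate it.
    W1 = w[:3 * HIDDEN].reshape(3, HIDDEN)
    W2 = w[3 * HIDDEN:].reshape(HIDDEN, 2)
    return np.tanh(X @ W1) @ W2

def mse(w):
    return float(np.mean((mlp(w, X) - Y) ** 2))

# Plain DE/rand/1/bin over the weights; the paper's scheme self-adapts
# F and CR and refines candidates with back-propagation, both omitted here.
NPOP, F, CR = 30, 0.7, 0.9
pop = rng.normal(size=(NPOP, DIM))
fit = np.array([mse(p) for p in pop])
init_best = fit.min()
for _ in range(200):
    for i in range(NPOP):
        idx = rng.choice([j for j in range(NPOP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        trial = np.where(rng.random(DIM) < CR, a + F * (b - c), pop[i])
        f = mse(trial)
        if f < fit[i]:
            pop[i], fit[i] = trial, f
```

    Because selection is greedy, the best fitness can only improve over the generations; the paper's contribution is making this search efficient enough to also decide the network structure.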

    A survey on tidal analysis and forecasting methods for Tsunami detection

    Accurate analysis and forecasting of tidal level are very important tasks for human activities in oceanic and coastal areas. They can be crucial in catastrophic situations like occurrences of tsunamis, in order to provide rapid alerts to the populations involved and to save lives. Conventional tidal forecasting methods are based on harmonic analysis, using the least squares method to determine the harmonic parameters. However, a large number of parameters and long-term measured data are required for precise tidal level predictions with harmonic analysis. Furthermore, traditional harmonic methods rely on models based on the analysis of astronomical components, and they can be inadequate when the contribution of non-astronomical components, such as the weather, is significant. Other alternative approaches have been developed in the literature in order to deal with these situations and provide predictions with the desired accuracy, with respect also to the length of the available tidal record. These methods include standard high-pass or band-pass filtering techniques, although the relatively deterministic character and large amplitude of tidal signals make special techniques, like artificial neural networks and wavelet transform analysis methods, more effective. This paper is intended to provide the communities of both researchers and practitioners with a broadly applicable, up-to-date coverage of tidal analysis and forecasting methodologies that have proven to be successful in a variety of circumstances, and that hold particular promise for success in the future. Classical and novel methods are reviewed in a systematic and consistent way, outlining their main concepts and components, similarities and differences, advantages and disadvantages.
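    The conventional harmonic method mentioned above reduces to ordinary linear least squares: the tide is modelled as a mean level plus a cosine/sine pair per constituent, and the coefficients are solved from the record. The sketch below uses the standard M2 and S2 constituent speeds, but the tide record itself is synthetic.

```python
import numpy as np

# Harmonic tidal model h(t) = a0 + sum_k [A_k cos(w_k t) + B_k sin(w_k t)],
# fitted by linear least squares. M2 and S2 speeds are the standard values
# in degrees per hour; amplitudes and phases below are made up.
omegas = np.deg2rad([28.9841042, 30.0])            # rad/hour: M2, S2
t = np.arange(0.0, 24 * 30, 1.0)                   # 30 days of hourly levels
h = (1.2 + 0.5 * np.cos(omegas[0] * t - 0.3)
         + 0.2 * np.cos(omegas[1] * t + 1.1))      # synthetic tide record

# Design matrix: constant column plus a cos/sin pair per constituent.
cols = [np.ones_like(t)]
for w in omegas:
    cols += [np.cos(w * t), np.sin(w * t)]
G = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(G, h, rcond=None)

# Amplitude of each constituent: sqrt(A_k^2 + B_k^2).
amps = [float(np.hypot(coef[1 + 2 * k], coef[2 + 2 * k]))
        for k in range(len(omegas))]
```

    With a noise-free record the fit recovers the mean level and the constituent amplitudes exactly; the survey's point is that real records need many constituents, long records, and still miss non-astronomical (e.g. meteorological) contributions.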

    Data Assimilation by Artificial Neural Networks for an Atmospheric General Circulation Model

    Numerical weather prediction (NWP) uses atmospheric general circulation models (AGCMs) to predict weather based on current weather conditions. The process of entering observational data into the mathematical model to generate accurate initial conditions is called data assimilation (DA); it combines observation, forecasting, and filtering steps. This paper presents an approach for employing artificial neural networks (NNs) to emulate the local ensemble transform Kalman filter (LETKF) as a method of data assimilation. The assimilation experiment uses the Simplified Parameterizations PrimitivE-Equation Dynamics (SPEEDY) model, an atmospheric general circulation model (AGCM), with synthetic observational data simulating the locations of meteorological balloons. For the data assimilation scheme, supervised NNs, namely multilayer perceptron (MLP) networks, are applied. After the training process, the method, henceforth called MLP-DA, acts as a data assimilation function. The NNs were trained with data from the first 3 months of 1982, 1983, and 1984. The experiment is performed for January 1985, running a data assimilation cycle using MLP-DA with synthetic observations. The numerical results demonstrate the effectiveness of the NN technique for atmospheric data assimilation. The results of the NN analyses are very close to the results of the LETKF analyses: the differences in the monthly averages of the absolute temperature analyses are of order 10⁻². The simulations show that the major advantage of MLP-DA is better computational performance, since the analyses have similar quality: the assimilation cycle with MLP-DA is 90 times faster than the LETKF assimilation cycle, with the mean analyses used to run the forecast experiment.
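    The core idea above, learning the observation-to-analysis mapping from a filter's own output, can be shown on a scalar toy problem. This is not the paper's setup: a fixed-gain Kalman-style update stands in for the LETKF, and a linear fit stands in for the MLP; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Teacher: a scalar Kalman-style update x_a = x_b + K * (y - x_b) with a
# fixed gain, standing in for the LETKF analysis step (toy assumption).
K = 0.6
xb = rng.normal(size=1000)                     # background states
y = xb + rng.normal(scale=0.5, size=1000)      # synthetic observations
xa = xb + K * (y - xb)                         # "filter" analyses to emulate

# Emulator: fit the analysis as a function of (background, observation),
# the same supervised-learning idea as MLP-DA, minus the hidden layers.
G = np.column_stack([xb, y])
w, *_ = np.linalg.lstsq(G, xa, rcond=None)
xa_nn = G @ w                                  # emulated analyses
```

    The fit recovers the update weights (1 - K) and K exactly here, because the teacher is linear; the paper's point is that once trained, evaluating the emulator is far cheaper than running the ensemble filter itself.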

    Data-driven Soft Sensors in the Process Industry

    In the last two decades, Soft Sensors have established themselves as a valuable alternative to traditional means for the acquisition of critical process variables, process monitoring and other tasks related to process control. This paper discusses characteristics of process industry data which are critical for the development of data-driven Soft Sensors. These characteristics are common to a large number of process industry fields, such as the chemical, bioprocess and steel industries. The focus of this work is on data-driven Soft Sensors because of their growing popularity, already demonstrated usefulness and huge, though not yet completely realised, potential. A comprehensive selection of case studies covering the three most important Soft Sensor application fields, a general introduction to the most popular Soft Sensor modelling techniques, and a discussion of some open issues in Soft Sensor development and maintenance and their possible solutions are the main contributions of this work.
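    In its simplest data-driven form, a Soft Sensor is a regression model that infers a hard-to-measure quality variable from easy-to-measure secondary variables. The sketch below uses synthetic data and a plain linear model; real process data bring the issues the paper discusses (noise, missing values, drift, collinearity).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy soft sensor: infer a lab-measured quality variable from four routinely
# measured process variables (e.g. temperatures, flows). Data are synthetic;
# the "true" coefficients below are an assumption of this sketch.
X = rng.normal(size=(200, 4))                       # secondary measurements
beta = np.array([0.8, -0.3, 0.0, 0.5])
y = X @ beta + rng.normal(scale=0.1, size=200)      # noisy lab target

# Train on the first 150 samples, evaluate on the held-out 50.
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]
w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
rmse = float(np.sqrt(np.mean((Xte @ w - yte) ** 2)))
```

    On this clean data the held-out error sits near the noise floor; Soft Sensor maintenance, in the paper's sense, is what keeps such a model valid once the process drifts away from its training data.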

    Development of bent-up triangular tab shear transfer (BTTST) enhancement in cold-formed steel (CFS)-concrete composite beams

    Cold-formed steel (CFS) sections have been recognised as an important contributor to environmentally responsible and sustainable structures in developed countries, and CFS framing is considered a sustainable 'green' construction material for low-rise residential and commercial buildings. However, there is still a lack of data and information on the behaviour and performance of CFS beams in composite construction. The use of CFS has been limited to structural roof trusses and a host of non-structural applications. One of the limiting features of CFS is the thinness of its section (usually between 1.2 and 3.2 mm thick), which makes it susceptible to torsional, distortional, lateral-torsional, lateral-distortional and local buckling. Hence, a reasonable solution is to resort to composite construction of a structural CFS section and a reinforced concrete deck slab, which minimises the distance from the neutral axis to the top of the deck and reduces the compressive bending stress in the CFS sections. Also, arranging two CFS channel sections back-to-back restores symmetry and suppresses lateral-torsional and, to a lesser extent, lateral-distortional buckling. The two-fold advantages promised by the system promote the use of CFS sections in a wider range of structural applications. An efficient and innovative floor system of built-up CFS sections acting compositely with a concrete deck slab was developed to provide an alternative composite system for floors and roofs in buildings. The system, called the Precast Cold-Formed Steel-Concrete Composite System, is designed to rely on composite action between the CFS sections and a reinforced concrete deck, where shear forces between them are effectively transmitted via an innovative shear transfer enhancement mechanism called the bent-up triangular tab shear transfer (BTTST). The study mainly comprises two major components, i.e. experimental and theoretical work.
Experimental work involved small-scale and large-scale laboratory testing. Sixty-eight push-out test specimens and fifteen large-scale CFS-concrete composite beam specimens were tested in this programme. In the small-scale tests, a push-out test was carried out to determine the strength and behaviour of the shear transfer enhancement between the CFS and concrete. Four major parameters were studied: the compressive strength of concrete, the CFS strength, the dimensions (size and angle) of the BTTST and the CFS thickness. The results from the push-out tests were used to develop an expression to predict the shear capacity of the innovative shear transfer enhancement mechanism, BTTST, in CFS-concrete composite beams. The value of the shear capacity was used to calculate the theoretical moment capacity of CFS-concrete composite beams, and the theoretical moment capacities were used to validate the large-scale test results. The large-scale test specimens were tested using a four-point-load bending test. The push-out test results show that specimens employing BTTST achieved higher shear capacities than those relying only on the natural bond between cold-formed steel and concrete, and than specimens with the Lakkavalli and Liu bent-up tab (LYLB). Load capacities for push-out test specimens with BTTST are 91% to 135% higher than those of the equivalent control specimens; compared to LYLB specimens, the increment is 12% to 16%. In addition, the shear capacity of BTTST also increases with the dimensions (size and angle) of BTTST, the thickness of CFS and the concrete compressive strength. An equation was developed to determine the shear capacity of BTTST, and its predictions are in good agreement with the observed test values. The average absolute difference between the test values and the predicted values was found to be 8.07%, and the mean test/predicted ratio of the equation is 0.9954.
The standard deviation (σ) and the coefficient of variation (CV) for the proposed equation were 0.09682 and 9.7%, respectively. The proposed equation is recommended for the design of BTTST in CFS-concrete composite beams. In the large-scale testing, specimens employing BTTST showed increased strength capacities and reduced deflections. The moment capacities Mu,exp for all specimens are above Mu,theory and show good agreement, with calculated ratios above 1.00. It was also found that the strength capacities of CFS-concrete composite beams increase with the dimensions (size and angle) of BTTST, the thickness of CFS and the concrete compressive strength, and that a CFS-concrete composite beam can be practically designed with partial shear connection for equal moment capacity by reducing the number of BTTST. It is concluded that the proposed BTTST shear transfer enhancement in CFS-concrete composite beams provides sufficient strength and is feasible. Finally, a standard table of characteristic resistance, Ptab, of BTTST in normal-weight concrete was also developed to simplify the design calculation of CFS-concrete composite beams.
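    As a quick consistency check, the reported coefficient of variation follows directly from the reported mean test/predicted ratio and standard deviation:

```python
# CV = sigma / mean, using the statistics reported in the abstract.
mean_ratio = 0.9954
sigma = 0.09682
cv = sigma / mean_ratio   # about 0.0973, i.e. the reported 9.7%
```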

    Method for hyperspectral imagery exploitation and pixel spectral unmixing

    An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels is presented. The hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is then fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel; using the robust filter solution as the starting point of the genetic algorithm speeds up its evolution. After the accurate abundance estimate is obtained, the procedure moves to the next pixel, using the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for that pixel with the robust filter, and again refining it efficiently with the genetic algorithm. This iteration continues until all pixels in the hyperspectral image cube have been processed.
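    The alternation described above can be sketched on a linear mixing model. This is a loose illustration, not the paper's algorithm: a least-squares solve stands in for the robust/Kalman filter step, a simple mutate-and-select loop stands in for the genetic algorithm, and the endmember matrix and pixels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear mixing model: pixel spectrum y = M @ a, with abundances a >= 0
# that sum to one. M (10 bands x 3 endmembers) is synthetic.
M = rng.uniform(size=(10, 3))

def project(a):
    # Clip onto the abundance simplex: nonnegative, summing to one.
    a = np.clip(a, 0.0, None)
    s = a.sum()
    return a / s if s > 0 else np.full_like(a, 1.0 / a.size)

def refine(y, seed, iters=300, step=0.05):
    # Evolutionary polishing, seeded from the filter / previous-pixel estimate.
    best, best_err = seed, float(np.linalg.norm(M @ seed - y))
    for _ in range(iters):
        cand = project(best + rng.normal(scale=step, size=best.shape))
        err = float(np.linalg.norm(M @ cand - y))
        if err < best_err:
            best, best_err = cand, err
    return best

# Two neighbouring pixels with known abundances (assumed, for checking).
true_a = [np.array([0.6, 0.3, 0.1]), np.array([0.5, 0.3, 0.2])]
pixels = [M @ a for a in true_a]

# Filter-style seed for the first pixel, then alternate seed -> refine,
# carrying each pixel's estimate forward as the next pixel's starting point.
est = project(np.linalg.lstsq(M, pixels[0], rcond=None)[0])
results = []
for y in pixels:
    est = refine(y, est)
    results.append(est)
```

    Carrying the previous pixel's abundances forward exploits spatial smoothness, which is exactly why the filter step can converge in one iteration and why seeding the evolutionary search from it is cheap.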