5,766 research outputs found

    Improved approximation of arbitrary shapes in DEM simulations with multi-spheres

    DEM simulations were originally designed for spherical particles only, yet most real particles are anything but spherical. To address this problem, the multi-sphere method was developed: it makes it possible to clump several spheres together to build complex shape structures. The proposed algorithm offers a novel method for creating multi-sphere clumps for given arbitrary shapes. In particular, the use of modern clustering algorithms from the field of computational intelligence achieves satisfactory results. The clustering is embedded in an optimisation algorithm that uses a pre-defined criterion. A largely unaided algorithm with only a few inputs and hyperparameters is able to approximate arbitrary shapes
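    As a rough illustration of the clump-building idea, the sketch below samples a shape as an interior point cloud, clusters the points, and wraps each cluster in a bounding sphere. The k-means step merely stands in for the paper's computational-intelligence clustering, and the sphere count and radius rule are illustrative choices rather than the authors' criterion.

```python
# Minimal sketch of a multi-sphere clump approximation, assuming the shape is
# given as a dense point cloud of its interior (here: a box sampled uniformly).
# k-means and the "max distance to centre" radius rule are illustrative stand-ins.
import numpy as np
from sklearn.cluster import KMeans

def multi_sphere_clump(points: np.ndarray, n_spheres: int):
    """Cluster interior points and wrap each cluster in a bounding sphere."""
    km = KMeans(n_clusters=n_spheres, n_init=10, random_state=0).fit(points)
    spheres = []
    for i, centre in enumerate(km.cluster_centers_):
        cluster = points[km.labels_ == i]
        radius = np.linalg.norm(cluster - centre, axis=1).max()
        spheres.append((centre, radius))
    return spheres

# Example: approximate a 4 x 1 x 1 box with 5 spheres.
rng = np.random.default_rng(0)
box = rng.uniform([0, 0, 0], [4, 1, 1], size=(5000, 3))
for centre, radius in multi_sphere_clump(box, n_spheres=5):
    print(np.round(centre, 2), round(radius, 2))
```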

    Cutting parameters optimisation in milling: expert machinist knowledge versus soft computing method

    In traditional machining operations, cutting parameters are usually selected prior to machining according to machining handbooks and the user's experience. However, this method tends to be conservative and sub-optimal, since part accuracy and the avoidance of machining failures prevail over machining process efficiency. In this paper, a comparison is conducted between traditional cutting parameter optimisation by an expert machinist and an experimental optimisation procedure based on Soft Computing methods. The proposed methodology increases machining performance by 6.1% and improves the understanding of the machining operation through the use of an Adaptive Neuro-Fuzzy Inference System
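    The sketch below illustrates the general flavour of such a comparison: a simple surrogate roughness model and a constrained grid search stand in for the paper's ANFIS-based procedure, and all coefficients, limits and the "handbook" parameter choice are invented for illustration only.

```python
# Illustrative sketch: maximise material removal rate subject to a surface
# roughness limit, then compare against a conservative handbook-style choice.
# The surrogate, its coefficients and the limits are made-up numbers, not the
# paper's ANFIS model or its data.
import numpy as np

def predicted_roughness(speed_rpm, feed_mm_rev):
    """Hypothetical surrogate for surface roughness Ra (um); coefficients invented."""
    return 0.8 + 8.0 * feed_mm_rev - 0.0002 * speed_rpm

def material_removal_rate(speed_rpm, feed_mm_rev, depth_mm=1.5, width_mm=8.0):
    """MRR (mm^3/min) = table feed (mm/min) x axial depth x radial width."""
    return feed_mm_rev * speed_rpm * depth_mm * width_mm

speeds = np.linspace(2000, 8000, 61)
feeds = np.linspace(0.05, 0.30, 26)
best = max(
    ((s, f) for s in speeds for f in feeds
     if predicted_roughness(s, f) <= 1.2),          # roughness limit (um), assumed
    key=lambda p: material_removal_rate(*p),
)
handbook = (3000, 0.10)                              # conservative handbook-style choice
gain = material_removal_rate(*best) / material_removal_rate(*handbook) - 1
print(f"optimised parameters: {best}, MRR gain over handbook: {gain:.1%}")
```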

    On the dialog between experimentalist and modeler in catchment hydrology

    The dialog between experimentalist and modeler in catchment hydrology has been minimal to date. The experimentalist often has a highly detailed yet highly qualitative understanding of dominant runoff processes; thus there is often much more information content on the catchment than we use for calibration of a model. While modelers often appreciate the need for 'hard data' for the model calibration process, there has been little thought given to how modelers might access this 'soft' or process knowledge. We present a new method where soft data (i.e., qualitative knowledge from the experimentalist that cannot be used directly as exact numbers) are made useful through fuzzy measures of model-simulation and parameter-value acceptability. We developed a three-box lumped conceptual model for the Maimai catchment in New Zealand, a particularly well-studied process-hydrological research catchment. The boxes represent the key hydrological reservoirs that are known to have distinct groundwater dynamics, isotopic composition and solute chemistry. The model was calibrated against hard data (runoff and groundwater levels) as well as a number of criteria derived from the soft data (e.g., percent new water and reservoir volume). We achieved very good fits for the three-box model when optimizing the parameter values against runoff only (Reff = 0.93). However, parameter sets obtained in this way generally showed a poor goodness-of-fit for other criteria, such as the simulated new-water contributions to peak runoff. Inclusion of soft-data criteria in the model calibration process resulted in lower Reff values (around 0.84 when including all criteria) but led to better overall performance, as judged by the experimentalist's view of catchment runoff dynamics. The model performance with respect to soft data (for instance, the new-water ratio) increased significantly, and parameter uncertainty was reduced by 60% on average with the introduction of the soft-data multi-criteria calibration. We argue that accepting lower model efficiencies for runoff is 'worth it' if one can develop a more 'real' model of catchment behavior. The use of soft data is an approach to formalize this exchange between experimentalist and modeler and to more fully utilize the information content from experimental catchments
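    A minimal sketch of the multi-criteria idea is given below: runoff is scored with a Nash-Sutcliffe efficiency (the paper's Reff), while a soft criterion such as the new-water fraction is scored with a fuzzy acceptability measure. The trapezoid breakpoints, the equal weighting and the toy data are assumptions for illustration, not values from the paper.

```python
# Sketch of combining a hard-data score (Nash-Sutcliffe efficiency on runoff)
# with a fuzzy acceptability score for one soft criterion (new-water fraction).
import numpy as np

def nash_sutcliffe(obs, sim):
    """Reff = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def fuzzy_acceptability(value, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 inside [b, c], linear ramps between."""
    if value <= a or value >= d:
        return 0.0
    if b <= value <= c:
        return 1.0
    return (value - a) / (b - a) if value < b else (d - value) / (d - c)

def combined_score(obs_q, sim_q, sim_new_water_fraction):
    reff = nash_sutcliffe(obs_q, sim_q)
    # Assumed soft knowledge: roughly 30-50% new water at peak flow is acceptable.
    soft = fuzzy_acceptability(sim_new_water_fraction, 0.20, 0.30, 0.50, 0.65)
    return 0.5 * reff + 0.5 * soft   # equal weights, assumed

obs = [1.2, 3.4, 5.1, 2.8, 1.5]   # toy observed runoff series
sim = [1.0, 3.1, 5.6, 2.6, 1.4]   # toy simulated runoff series
print(round(combined_score(obs, sim, sim_new_water_fraction=0.42), 3))
```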

    A user-dependent approach to the perception of high-level semantics of music


    Investigation on soft computing techniques for airport environment evaluation systems

    Spatial and temporal information exists widely in engineering fields, especially in airport environmental management systems. The airport environment is influenced by many different factors, and uncertainty is a significant part of the system. Decision support that takes this kind of spatial and temporal information and uncertainty into account is crucial for airport-environment-related engineering planning and operation. Geographical information systems and computer-aided design are two powerful tools for supporting spatial and temporal information systems. However, present geographical information systems and computer-aided design software are still too general to address the special features of the airport environment, especially uncertainty. In this thesis, a series of parameters and methods for neural-network-based knowledge discovery and training improvement are put forward, such as the relative strength of effect, a dynamic state-space search strategy and a compound architecture. [Continues.]
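    As a loose illustration of a "relative strength of effect" style analysis, the sketch below propagates gradients through a tiny hand-initialised network to rank the influence of each input on the output. The thesis defines its own RSE formulation, so this gradient-based sensitivity is only a stand-in, and the weights and inputs are arbitrary example values.

```python
# Gradient-based input sensitivity on a tiny one-hidden-layer network, used
# here as a stand-in for the thesis's relative-strength-of-effect measure.
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden tanh layer, linear scalar output."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def input_sensitivity(x, W1, b1, W2, b2):
    """d(output)/d(input) via the chain rule through the tanh layer."""
    _, h = mlp_forward(x, W1, b1, W2, b2)
    return (W2 * (1.0 - h ** 2)) @ W1      # shape: (n_inputs,)

rng = np.random.default_rng(1)
n_in, n_hidden = 4, 6                       # e.g. noise, NOx, traffic volume, wind
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=n_hidden), 0.0

x = np.array([0.3, 0.8, 0.5, 0.1])          # example (scaled) environmental inputs
s = np.abs(input_sensitivity(x, W1, b1, W2, b2))
relative_strength = s / s.sum()              # normalised influence of each input
print(np.round(relative_strength, 3))
```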

    Positive region: An enhancement of partitioning attribute based rough set for categorical data

    Datasets containing multi-valued attributes are often involved in several domains, such as pattern recognition, machine learning and data mining, and data partitioning is required in such cases. Selecting a partitioning attribute is part of the clustering process for the whole data set that is specified for further processing. Prominent rough set-based approaches already exist for grouping objects and handling uncertain data; they use the indiscernibility of attributes and a mean roughness measure to perform attribute partitioning. Nevertheless, most algorithms for selecting the partitioning attribute for categorical data in clustering datasets are incapable of optimal partitioning. These indiscernibility and mean roughness measures, moreover, require the calculation of the lower approximation, which is less accurate and expensive to compute; this restricts the growth of the set of attributes and neglects the data found within the boundary region. This paper presents a new concept called Positive Region-Based Mean Dependency (PRD), which calculates attribute dependency. PRD defines a positive region-based mean dependency measure to determine the mean dependency of the attributes that is suitable for categorical datasets. By avoiding the lower approximation, PRD is an optimal substitute for the conventional dependency measure in partitioning attribute selection. In contrast to traditional RST partitioning methods, the proposed method can be employed as a measure of data output uncertainty and as a tailback for larger and multiple data clustering. The performance of the presented method is evaluated and compared with the Information-Theoretical Dependence Roughness (ITDR) and Maximum Indiscernible Attribute (MIA) algorithms
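    The sketch below illustrates the positive-region dependency idea on a toy categorical dataset: for each candidate attribute, equivalence classes are formed and those contained entirely within a class of another attribute are counted, giving a dependency degree that is then averaged over the remaining attributes. Averaging in this way is an illustrative reading of PRD, not the paper's exact formulation, and the dataset is invented.

```python
# Positive-region dependency degree for categorical attributes (rough set style):
# gamma(cond -> dec) = |POS_cond(dec)| / |U|, where POS is the union of cond
# equivalence classes that lie entirely inside one dec equivalence class.
from collections import defaultdict

def equivalence_classes(rows, attr):
    classes = defaultdict(set)
    for i, row in enumerate(rows):
        classes[row[attr]].add(i)
    return list(classes.values())

def dependency(rows, cond_attr, dec_attr):
    dec_classes = equivalence_classes(rows, dec_attr)
    pos = sum(len(c) for c in equivalence_classes(rows, cond_attr)
              if any(c <= d for d in dec_classes))
    return pos / len(rows)

data = [  # toy categorical dataset: colour, shape, size
    {"colour": "red",   "shape": "round",  "size": "big"},
    {"colour": "red",   "shape": "round",  "size": "big"},
    {"colour": "blue",  "shape": "square", "size": "small"},
    {"colour": "blue",  "shape": "round",  "size": "small"},
    {"colour": "green", "shape": "square", "size": "big"},
]

attrs = ["colour", "shape", "size"]
for a in attrs:
    # Mean dependency of attribute a on the remaining attributes (illustrative).
    mean_dep = sum(dependency(data, a, b) for b in attrs if b != a) / (len(attrs) - 1)
    print(a, round(mean_dep, 2))
```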