    Granular-ball computing: an efficient, robust, and interpretable adaptive multi-granularity representation and computation method

    Human cognition operates on a "global-first" mechanism, prioritizing information processing at coarse granularity. This mechanism inherently possesses an adaptive multi-granularity description capability, which yields computational traits such as efficiency, robustness, and interpretability. Most existing computational methods, by contrast, rely on analysis at the finest, single granularity, which makes them less efficient, robust, and interpretable, and is an important reason for the current lack of interpretability in neural networks. Multi-granularity granular-ball computing employs granular-balls of varying sizes to adaptively represent and envelop the sample space, and then learns on these granular-balls. Because the number of coarse-grained granular-balls is smaller than the number of sample points, granular-ball computing is more efficient. Moreover, the inherent coarse-grained nature of granular-balls reduces susceptibility to fine-grained sample disturbances, enhancing robustness. The multi-granularity construction of granular-balls generates topological structures and coarse-grained descriptions, naturally improving interpretability. Granular-ball computing has been applied successfully across diverse AI domains, fostering innovative theoretical methods including granular-ball classifiers, clustering techniques, neural networks, rough sets, and evolutionary computing, and has notably improved the efficiency, noise robustness, and interpretability of traditional methods. Overall, granular-ball computing is a rare theoretical approach in AI that can adaptively and simultaneously enhance efficiency, robustness, and interpretability. This article surveys the main application landscapes of granular-ball computing, aiming to equip future researchers with references and insights to refine and expand this promising theory.
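    As a rough illustration of the construction step described above, the sketch below recursively splits a labeled sample set with 2-means until every ball is sufficiently label-pure, then summarizes each ball by its center, mean radius, and majority label. It is a minimal sketch of the general idea only: the purity threshold, the minimum ball size, and the use of scikit-learn's KMeans are illustrative assumptions, not the specific procedures of the surveyed methods.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def _ball(X, y):
        """Summarize a sample set as (center, mean radius, majority label)."""
        center = X.mean(axis=0)
        radius = np.linalg.norm(X - center, axis=1).mean()
        return center, radius, np.bincount(y).argmax()

    def granular_balls(X, y, min_purity=0.95, min_size=4):
        """Recursively 2-means split (X, y) until every ball is pure enough."""
        counts = np.bincount(y)
        if len(X) <= min_size or counts.max() / counts.sum() >= min_purity:
            return [_ball(X, y)]
        parts = KMeans(n_clusters=2, n_init=10).fit_predict(X)
        if len(set(parts)) < 2:  # degenerate split: stop here
            return [_ball(X, y)]
        return [b for k in (0, 1)
                for b in granular_balls(X[parts == k], y[parts == k],
                                        min_purity, min_size)]

    # Toy usage: two hypothetical Gaussian classes in the plane.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] > 0).astype(int)
    print(len(granular_balls(X, y)), "granular-balls cover", len(X), "samples")
    ```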

    Attribute reduction algorithm based on cognitive model of granular computing in inconsistent decision information systems

    This article explores a new method of attribute reduction in inconsistent decision information systems. By analyzing the connection between attribute reduction theory and cognitive science, an attribute reduction algorithm based on a cognitive model of granular computing is proposed. Algorithm analysis and numerical experiments show the validity of the proposed attribute reduction algorithm. The method can be applied to both consistent and inconsistent information systems. The proposed model also provides a new model and way of thinking for studying the connection between human cognition and conception, and is useful for the development of cognitive models.
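    The abstract does not detail the cognitive-model algorithm itself, but the classical rough-set machinery it builds on can be sketched. The snippet below computes the positive-region dependency degree and greedily grows a reduct until it matches the dependency of the full attribute set; that stopping rule is what lets the same code handle inconsistent tables, where the full-set dependency stays below 1. The greedy strategy and all names here are illustrative assumptions, not the paper's algorithm.

    ```python
    from collections import defaultdict

    def partition(rows, attrs):
        """Group row indices into blocks by their values on the given attributes."""
        blocks = defaultdict(list)
        for i, row in enumerate(rows):
            blocks[tuple(row[a] for a in attrs)].append(i)
        return blocks.values()

    def dependency(rows, decisions, attrs):
        """Fraction of rows whose block is decision-consistent (the positive region)."""
        if not attrs:
            return 0.0
        pos = sum(len(b) for b in partition(rows, attrs)
                  if len({decisions[i] for i in b}) == 1)
        return pos / len(rows)

    def greedy_reduct(rows, decisions):
        """Add the attribute that raises dependency most until the full set's
        dependency is reached; in an inconsistent table that target is below 1."""
        all_attrs = list(range(len(rows[0])))
        target = dependency(rows, decisions, all_attrs)
        reduct, current = [], 0.0
        while current < target:
            best = max((a for a in all_attrs if a not in reduct),
                       key=lambda a: dependency(rows, decisions, reduct + [a]))
            reduct.append(best)
            current = dependency(rows, decisions, reduct)
        return reduct

    # Toy inconsistent table: the last two rows agree on attributes but not decisions.
    rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "hot"), ("rain", "hot")]
    dec = ["no", "yes", "yes", "no"]
    print(greedy_reduct(rows, dec))  # -> [1, 0]
    ```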

    Examples of Artificial Perceptions in Optical Character Recognition and Iris Recognition

    This paper assumes the hypothesis that human learning is perception based and, consequently, that the learning process and perceptions should not be represented and investigated independently or modeled in different simulation spaces. To keep the analogy between artificial and human learning, the former is assumed here to be based on artificial perception. Hence, instead of applying or developing a Computational Theory of (human) Perceptions, we choose to mirror human perceptions in a numeric (computational) space as artificial perceptions and to analyze the interdependence between artificial learning and artificial perception in the same numeric space, using one of the simplest tools of Artificial Intelligence and Soft Computing, namely the perceptron. As practical applications, we work through two examples: Optical Character Recognition and Iris Recognition. In both cases a simple Turing test shows that artificial perceptions of the difference between two characters and between two irides are fuzzy, whereas the corresponding human perceptions are, in fact, crisp.

    Comment: 5th Int. Conf. on Soft Computing and Applications (Szeged, HU), 22-24 Aug 201
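    For concreteness, here is a toy version of the perceptron tool the paper relies on, trained to separate two invented 3x3 "characters". The glyphs and parameters are assumptions made for illustration; they are not the paper's actual OCR or iris features.

    ```python
    import numpy as np

    def train_perceptron(X, y, epochs=50, lr=1.0):
        """Classic perceptron rule: nudge weights on every misclassified sample."""
        Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias input
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            for xi, yi in zip(Xb, y):  # labels yi are in {-1, +1}
                if yi * (w @ xi) <= 0:
                    w += lr * yi * xi
        return w

    # Two invented 3x3 "characters": a vertical bar vs. a horizontal bar.
    bar_v = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0], dtype=float)
    bar_h = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=float)
    w = train_perceptron(np.stack([bar_v, bar_h]), np.array([+1, -1]))
    print(np.sign(np.hstack([bar_v, 1.0]) @ w))  # +1.0: classified as the vertical bar
    ```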

    Forecasting of financial data: a novel fuzzy logic neural network based on error-correction concept and statistics

    First, this paper investigates the effect of good and bad news on volatility in the BUX return time series using asymmetric ARCH models. Then, the forecasting accuracy of models based on statistical (stochastic) methods, machine learning methods, and a soft/granular RBF network is investigated. To forecast the high-frequency financial data, we apply statistical ARMA and asymmetric GARCH-class models. A novel RBF network architecture is proposed based on the incorporation of an error-correction mechanism, which improves the forecasting ability of feed-forward neural networks. These proposed modelling approaches and SVM models are applied to predict the high-frequency time series of the BUX stock index. We find that it is possible to enhance forecast accuracy and achieve significant risk reduction in managerial decision making by applying intelligent forecasting models based on the latest information technologies. On the other hand, we show that statistical GARCH-class models can identify the presence of leverage effects and react to good and bad news.
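    The abstract does not specify the error-correction architecture, so the following is only one plausible reading of the idea: a plain RBF network fitted on lag vectors, plus a scalar correction term regressed on the previous step's residual and added to the next forecast. The function names, the toy random-walk series, and all hyperparameters are assumptions for illustration, not the paper's model.

    ```python
    import numpy as np

    def gaussian_design(X, centers, width):
        """N x K matrix of Gaussian RBF activations."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    def fit_forecaster(series, n_lags=4, n_centers=12, width=1.5, seed=0):
        """Stage 1: RBF net on lag vectors (random centers, ridge-solved output
        weights). Stage 2: a scalar error-correction gain fitted on the lag-1
        autocorrelation of the residuals, echoing the error-correction concept."""
        rng = np.random.default_rng(seed)
        X = np.array([series[i - n_lags:i] for i in range(n_lags, len(series))])
        t = series[n_lags:]
        centers = X[rng.choice(len(X), n_centers, replace=False)]
        H = gaussian_design(X, centers, width)
        w = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_centers), H.T @ t)
        resid = t - H @ w
        beta = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
        return centers, width, w, beta, resid[-1]

    # One-step-ahead forecast = RBF output + beta * last observed error.
    series = np.cumsum(np.random.default_rng(1).normal(size=300))  # toy "price" path
    centers, width, w, beta, last_err = fit_forecaster(series)
    x_new = series[-4:]
    y_hat = gaussian_design(x_new[None, :], centers, width)[0] @ w + beta * last_err
    print(round(float(y_hat), 3))
    ```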

    An Approach for the Empirical Validation of Software Complexity Measures

    Software metrics are widely accepted tools for controlling and assuring software quality. A large number of software metrics covering a variety of content can be found in the literature; however, most of them are not adopted in industry because they are seen as irrelevant to needs and are unsupported, and the major reason behind this is improper empirical validation. This paper tries to identify possible root causes of the improper empirical validation of software metrics. A practical model for the empirical validation of software metrics is proposed that addresses these root causes. The model is validated by applying it to recently proposed and well-known metrics.
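    One common ingredient of empirical validation is checking that a metric actually tracks an external quality attribute. A minimal sketch, assuming hypothetical per-module complexity scores and post-release defect counts, is a rank correlation test:

    ```python
    from scipy.stats import spearmanr

    # Hypothetical data: a complexity score and post-release defect count per
    # module. A real validation would use a measured industrial dataset.
    complexity = [3, 15, 7, 22, 4, 18, 9, 30, 5, 12]
    defects    = [0,  4, 1,  6, 0,  3, 2,  9, 1,  2]

    rho, p = spearmanr(complexity, defects)
    print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
    # A strong, significant monotone association is one piece of empirical
    # evidence that the metric reflects an external quality attribute.
    ```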

    Data granulation by the principles of uncertainty

    Research in granular modeling has produced a variety of mathematical models, such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets, which are all suitable for characterizing the so-called information granules. Modeling the uncertainty of the input data is recognized as a crucial aspect of information granulation. Moreover, uncertainty is a well-studied concept in many mathematical settings, such as probability theory, fuzzy set theory, and possibility theory. This suggests that an appropriate quantification of the uncertainty expressed by the information granule model could be used to define an invariant property to be exploited in practical situations of information granulation. In this perspective, a procedure of information granulation is effective if the uncertainty conveyed by the synthesized information granule is in a monotonically increasing relation with the uncertainty of the input data. In this paper, we present a data granulation framework that elaborates on the principles of uncertainty introduced by Klir. Since uncertainty is a mesoscopic descriptor of systems and data, these principles can be applied regardless of the input data type and the specific mathematical setting adopted for the information granules. The proposed framework is conceived (i) to offer a guideline for the synthesis of information granules and (ii) to build a groundwork for comparing and quantitatively judging different data granulation procedures. To provide a suitable case study, we introduce a new data granulation technique based on the minimum sum of distances, which is designed to generate type-2 fuzzy sets. We analyze the procedure by performing different experiments on two distinct data types: feature vectors and labeled graphs. Results show that the uncertainty of the input data is suitably conveyed by the generated type-2 fuzzy set models.

    Comment: 16 pages, 9 figures, 52 references
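    As a toy instance of the invariant the framework demands, the sketch below synthesizes a simple interval granule (the shortest interval covering a fixed fraction of 1-D samples), scores it with a Hartley-like log-width uncertainty, and checks that more dispersed input data yields a more uncertain granule. Interval granules and the log-width measure are simplifying stand-ins for the paper's type-2 fuzzy sets and Klir's uncertainty measures.

    ```python
    import numpy as np

    def interval_granule(samples, coverage=0.8):
        """Synthesize an interval information granule: the shortest interval
        that covers the requested fraction of the samples."""
        xs = np.sort(samples)
        k = int(np.ceil(coverage * len(xs)))
        widths = xs[k - 1:] - xs[:len(xs) - k + 1]
        i = int(np.argmin(widths))
        return xs[i], xs[i + k - 1]

    def nonspecificity(granule):
        """Hartley-like uncertainty of an interval granule: log of its width."""
        a, b = granule
        return np.log1p(b - a)

    # The framework's invariant: more uncertain input data should yield a more
    # uncertain granule. Check monotonicity against increasing input spread.
    rng = np.random.default_rng(0)
    for sigma in (0.5, 1.0, 2.0, 4.0):
        g = interval_granule(rng.normal(0.0, sigma, 1000))
        print(f"sigma={sigma:>3}: granule uncertainty = {nonspecificity(g):.2f}")
    ```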

    Rehabilitation robot cell for multimodal standing-up motion augmentation

    The paper presents a robot cell for multimodal standing-up motion augmentation, aimed at augmenting the standing-up capabilities of impaired or paraplegic subjects. The setup incorporates a rehabilitation robot device, a functional electrical stimulation system, measurement instrumentation, and a cognitive feedback system. To control the standing-up process, a novel approach was developed that integrates the voluntary activity of the person into the control scheme of the rehabilitation robot. Simulation results demonstrate the possibility of "patient-driven" robot-assisted standing-up training. Moreover, to extend the system's capabilities, audio cognitive feedback is used to guide the subject throughout the rising motion. The feedback is generated with a granular synthesis method for displaying high-dimensional, dynamic data; the principle of operation and an example sonification during standing-up are presented. In this manner, by integrating the cognitive feedback and "patient-driven" actuation systems, an effective motion augmentation system is proposed in which motion coordination is under the voluntary control of the user.

    Granular synthesis for display of time-varying probability densities

    We present a method for displaying time-varying probabilistic information to users via an asynchronous granular synthesis technique. We extend the basic synthesis technique to include distributions over waveform source, spatial position, pitch, and time inside waveforms. To enhance the synthesis in interactive contexts, we "quicken" the display by integrating predictions of user behaviour into the sonification. This includes summing the derivatives of the distribution during exploration of static densities, and using Monte Carlo sampling to predict future user states in nonlinear dynamic systems. These techniques can be used to improve user performance in continuous control systems and in the interactive exploration of high-dimensional spaces. The technique provides feedback on users' potential goals and their progress toward achieving them; modulating the feedback with quickening can help shape the user's actions toward achieving these goals. We have applied these techniques to a simple nonlinear control problem as well as to the sonification of on-line probabilistic gesture recognition. We are applying these displays to mobile, gestural interfaces, where visual display is often impractical. The granular synthesis approach is theoretically elegant and easily applied in contexts where dynamic probabilistic displays are required.
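    A minimal sketch of the asynchronous granular display idea, covering only the pitch dimension: grains are scattered at random onsets, and each grain's pitch is drawn from the density at that moment, so a concentrated density sounds focused while a diffuse one sounds noisy. The grain shape, rates, and pitch mapping are assumptions, not the paper's full scheme (which also distributes waveform source, spatial position, and time inside waveforms).

    ```python
    import numpy as np

    SR = 22050  # audio sample rate (Hz)

    def grain(freq, dur=0.04):
        """One grain: a short Hann-windowed sine burst."""
        t = np.arange(int(SR * dur)) / SR
        return np.hanning(len(t)) * np.sin(2 * np.pi * freq * t)

    def sonify(density, total=2.0, rate=120, base=220.0, span=660.0, seed=0):
        """Asynchronous granular display of a time-varying density:
        density[k, j] ~ p(x_j at time step k). Each grain's pitch is sampled
        from the density at its onset time."""
        rng = np.random.default_rng(seed)
        n_steps, n_x = density.shape
        out = np.zeros(int(SR * total) + SR)  # headroom for the last grain
        for onset in np.sort(rng.uniform(0, total, int(rate * total))):
            k = min(int(onset / total * n_steps), n_steps - 1)
            p = density[k] / density[k].sum()
            j = rng.choice(n_x, p=p)          # sample a state from p(. | t)
            g = grain(base + span * j / (n_x - 1))
            i = int(onset * SR)
            out[i:i + len(g)] += g
        return out / np.abs(out).max()

    # Toy density: a Gaussian bump whose mean sweeps across the state space.
    xs = np.linspace(0, 1, 64)
    ts = np.linspace(0, 1, 100)
    density = np.exp(-((xs[None, :] - ts[:, None]) ** 2) / 0.01)
    audio = sonify(density)  # write with e.g. scipy.io.wavfile to listen
    ```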