16,991 research outputs found
Temporal Data Modeling and Reasoning for Information Systems
Temporal knowledge representation and reasoning is a major research field in Artificial
Intelligence, in Database Systems, and in Web and Semantic Web research. The ability to
model and process time and calendar data is essential for many applications like appointment
scheduling, planning, Web services, temporal and active database systems, adaptive
Web applications, and mobile computing applications. This article aims at three complementary
goals: first, to provide a general background in temporal data modeling
and reasoning approaches; second, to serve as an orientation guide for further specific
reading; and third, to point to new application fields and research perspectives on temporal
knowledge representation and reasoning in the Web and Semantic Web.
Data granulation by the principles of uncertainty
Research in granular modeling has produced a variety of mathematical models,
such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets,
which are all suitable to characterize the so-called information granules.
Modeling of the input data uncertainty is recognized as a crucial aspect in
information granulation. Moreover, uncertainty is a well-studied concept in
many mathematical settings, such as those of probability theory, fuzzy set
theory, and possibility theory. This fact suggests that an appropriate
quantification of the uncertainty expressed by the information granule model
could be used to define an invariant property, to be exploited in practical
situations of information granulation. In this perspective, a procedure of
information granulation is effective if the uncertainty conveyed by the
synthesized information granule is in a monotonically increasing relation with
the uncertainty of the input data. In this paper, we present a data granulation
framework that elaborates on the principles of uncertainty introduced by
Klir. Since uncertainty is a mesoscopic descriptor of systems and data, such
principles can be applied regardless of the input data type and the
specific mathematical setting adopted for the information granules. The
proposed framework is conceived (i) to offer a guideline for the synthesis of
information granules and (ii) to build a groundwork to compare and
quantitatively judge different data granulation procedures. To provide a
suitable case study, we introduce a new data granulation technique based on the
minimum sum of distances, which is designed to generate type-2 fuzzy sets. We
analyze the procedure by performing different experiments on two distinct data
types: feature vectors and labeled graphs. Results show that the uncertainty of
the input data is suitably conveyed by the generated type-2 fuzzy set models.
Comment: 16 pages, 9 figures, 52 references
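The effectiveness criterion above (granule uncertainty growing monotonically with input uncertainty) can be illustrated with a minimal stdlib sketch. It uses interval granules and their width as a Hartley-like nonspecificity measure; the interval model and the toy Gaussian samples are illustrative stand-ins, not the paper's type-2 fuzzy procedure.

```python
import random

def interval_granule(data):
    """Summarize a sample as an interval information granule [min, max]."""
    return (min(data), max(data))

def nonspecificity(granule):
    """Hartley-like nonspecificity of an interval granule: its width."""
    lo, hi = granule
    return hi - lo

# Two toy input samples whose uncertainty (spread) differs.
random.seed(0)
narrow = [random.gauss(0.0, 0.5) for _ in range(200)]
wide = [random.gauss(0.0, 2.0) for _ in range(200)]

u_narrow = nonspecificity(interval_granule(narrow))
u_wide = nonspecificity(interval_granule(wide))

# Effective granulation: the more uncertain input yields
# the more uncertain granule.
assert u_narrow < u_wide
```

Any other granule model (fuzzy, rough, shadowed) would slot in the same way, provided its uncertainty measure respects this monotonic relation.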
Approximations from Anywhere and General Rough Sets
Not all approximations arise from information systems. The problem of fitting
approximations, subjected to some rules (and related data), to information
systems in a rough scheme of things is known as the \emph{inverse problem}. The
inverse problem is more general than the duality (or abstract representation)
problems and was introduced by the present author in her earlier papers. From
the practical perspective, a few (as opposed to one) theoretical frameworks may
be suitable for formulating the problem itself. \emph{Granular operator spaces}
have recently been introduced and investigated by the present author in
the context of antichain-based and dialectical semantics for
general rough sets. The nature of the inverse problem is examined from
number-theoretic and combinatorial perspectives in a higher order variant of
granular operator spaces and some necessary conditions are proved. The results
and the novel approach would be useful in a number of unsupervised and
semi-supervised learning contexts and algorithms.
Comment: 20 pages. Scheduled to appear in IJCRS'2017 LNCS Proceedings, Springer
"Possible Definitions of an 'A Priori' Granule in General Rough Set Theory" by A. Mani
We introduce an abstract framework for general rough set theory from a mereological perspective and consider possible concepts of 'a priori' granules and granulation in the same. The framework is ideal for relaxing many of the relatively superfluous set-theoretic axioms and for improving the semantics of many relation-based, cover-based and dialectical rough set theories. This is a relatively simplified presentation of a section in three different recent research papers by the present author.
Stochastic information granules extraction for graph embedding and classification
Graphs are data structures able to efficiently describe real-world systems and, as such, have been extensively used in recent years by many branches of science, including machine learning engineering. However, the design of efficient graph-based pattern recognition systems is bottlenecked by the intrinsic problem of how to properly match two graphs. In this paper, we investigate a granular computing approach for the design of a general-purpose graph-based classification system. The overall framework relies on the extraction of meaningful pivotal substructures, on top of which an embedding space can be built and in which the classification can be performed without limitations. Due to its importance, we address whether information can be preserved by performing stochastic extraction on the training data instead of performing an exhaustive extraction procedure, which is likely to be unfeasible for large datasets. Tests on benchmark datasets show that stochastic extraction can lead to a meaningful set of pivotal substructures with a much lower memory footprint and overall computational burden, making the proposed strategies suitable also for dealing with big datasets.
Baldini, Luca; Martino, Alessio; Rizzi, Antonello
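The stochastic-extraction idea can be sketched in a few lines of stdlib Python: rather than enumerating every substructure, a fixed number of candidates is sampled from the training graphs and reused as the alphabet of a symbolic-histogram embedding. Graphs are reduced here to multisets of labeled edges, a deliberate oversimplification of the paper's pivotal substructures; all names below are illustrative.

```python
import random

def stochastic_alphabet(train_graphs, n_samples, seed=42):
    """Sample candidate pivotal substructures (here: labeled edges)
    instead of enumerating all of them exhaustively."""
    rng = random.Random(seed)
    pool = [e for g in train_graphs for e in g]
    return sorted(set(rng.choices(pool, k=n_samples)))

def embed(graph, alphabet):
    """Symbolic-histogram embedding: count each pivotal substructure."""
    return [sum(1 for e in graph if e == s) for s in alphabet]

# Toy graphs as multisets of labeled edges (u_label, v_label).
g1 = [("C", "C"), ("C", "O"), ("C", "O")]
g2 = [("C", "N"), ("C", "C")]

alphabet = stochastic_alphabet([g1, g2], n_samples=4)
v1, v2 = embed(g1, alphabet), embed(g2, alphabet)
# v1 and v2 now live in a fixed-dimension space and can feed
# any standard classifier.
```

The memory/accuracy trade-off the paper studies corresponds to choosing `n_samples` well below the size of the exhaustive substructure set.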
Hierarchical wireless framework for real-time collaborative generation and distribution of telemetry data
This project introduces a novel multidisciplinary approach, combining Vehicular Ad Hoc Networks and Granular Computing, to the data processing and information generation problem in large urban traffic systems. It addresses the challenge of real-time information generation and dissemination in such systems by designing and investigating a hierarchical real-time information framework. The research work is complemented by the design and development of a simulator for such a system, which provides a simulation environment for the model developed. The proposed multidisciplinary hierarchical real-time information processing and dissemination framework utilises results from two different areas of study: Vehicular Ad Hoc Networks (VANETs) and Granular Computing. Furthermore, a new geographically constrained VANET topology for information generation is proposed, simulated and investigated.
Granular Support Vector Machines Based on Granular Computing, Soft Computing and Statistical Learning
With the emergence of biomedical informatics, Web intelligence, and E-business, new challenges are arising for knowledge discovery and data mining modeling problems. In this dissertation work, a framework named Granular Support Vector Machines (GSVM) is proposed to systematically and formally combine statistical learning theory, granular computing theory and soft computing theory to address challenging predictive data modeling problems effectively and/or efficiently, with a specific focus on binary classification problems. In general, GSVM works in three steps. Step 1 is granulation, to build a sequence of information granules from the original dataset or from the original feature space. Step 2 is modeling Support Vector Machines (SVMs) in some of these information granules when necessary. Finally, step 3 is aggregation, to consolidate the information in these granules at a suitable abstraction level. A good granulation method for finding suitable granules is crucial for modeling a good GSVM. Under this framework, many different granulation algorithms, including the GSVM-CMW (cumulative margin width) algorithm, the GSVM-AR (association rule mining) algorithm, a family of GSVM-RFE (recursive feature elimination) algorithms, the GSVM-DC (data cleaning) algorithm and the GSVM-RU (repetitive undersampling) algorithm, are designed for binary classification problems with different characteristics. The empirical studies in the biomedical domain and many other application domains demonstrate that the framework is promising. As a preliminary step, this dissertation work will be extended in the future to build a Granular Computing based Predictive Data Modeling framework (GrC-PDM), with which hybrid adaptive intelligent data mining systems can be created for high-quality prediction.
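The three GSVM steps can be sketched with stdlib Python. Here a median split stands in for the granulation step and a nearest-centroid rule stands in for the per-granule SVM; both are hypothetical simplifications of the dissertation's GSVM-* algorithms, and the toy dataset is invented for illustration.

```python
import random

random.seed(0)
# Toy 2-D binary dataset: class 0 near (0, 0), class 1 near (3, 3).
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)] \
  + [(random.gauss(3, 1), random.gauss(3, 1)) for _ in range(100)]
y = [0] * 100 + [1] * 100

# Step 1: granulation -- split the input space at the median of
# feature 0 (a stand-in for GSVM's granulation algorithms).
split = sorted(x[0] for x in X)[len(X) // 2]

def granule_of(x):
    return 0 if x[0] < split else 1

# Step 2: model a local classifier inside each granule (a
# nearest-centroid rule stands in for the per-granule SVM).
models = {}
for g in (0, 1):
    idx = [i for i, x in enumerate(X) if granule_of(x) == g]
    centroids = {}
    for c in (0, 1):
        pts = [X[i] for i in idx if y[i] == c]
        if pts:
            centroids[c] = tuple(sum(v) / len(pts) for v in zip(*pts))
    models[g] = centroids

# Step 3: aggregation -- route each point to its granule's local model.
def predict(x):
    cents = models[granule_of(x)]
    return min(cents,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

acc = sum(predict(x) == t for x, t in zip(X, y)) / len(X)
```

On this toy data the per-granule models recover the class structure; the dissertation's point is that the quality of Step 1 largely determines the quality of the whole pipeline.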
An Advanced Conceptual Diagnostic Healthcare Framework for Diabetes and Cardiovascular Disorders
Data mining, along with emerging computing techniques, has profoundly influenced the healthcare industry. Researchers have used different data mining and Internet of Things (IoT) techniques to develop automated solutions for diabetes and heart patients. However, a more advanced and unified solution is still needed, one that can offer a therapeutic opinion to individual diabetic and cardiac patients. Therefore, a smart data mining and IoT (SMDIoT) based advanced healthcare system for the proficient management of diabetes and cardiovascular diseases is proposed here. The hybridization of data mining and IoT with other emerging computing techniques is expected to give an effective and economical solution to diabetes and cardiac patients. SMDIoT hybridizes the ideas of data mining, the Internet of Things, chatbots, contextual entity search (CES), bio-sensors, semantic analysis and granular computing (GC). The bio-sensors of the proposed system assist in getting the current and precise status of the concerned patients so that, in case of an emergency, the needful medical assistance can be provided. The novelty lies in the hybrid framework and the adequate support of chatbots, granular computing, contextual entity search and semantic analysis. The practical implementation of this system is very challenging and costly; however, it appears to be a more effective and economical solution for diabetes and cardiac patients.
Comment: 11 pages