An Application on Text Classification Based on Granular Computing
Machine learning is key to text classification. A granular computing approach to machine learning is applied to learning classification rules by considering two basic issues: concept formation and the identification of concept relationships. In this paper, we concentrate on selecting a single granule at each step to construct a granule network. A classification rule induction method is proposed
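The abstract stops short of the method's details; as a rough illustration of the one-granule-at-a-time idea, the toy sketch below greedily picks the granule (here, a binary feature) whose covered documents are purest, emits a rule, and removes the covered documents. The dataset, feature names, and purity criterion are all invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of one-granule-at-a-time rule induction.
# A granule is identified with a binary feature; we repeatedly pick the
# granule whose covered examples are purest, emit a rule, and drop the
# covered examples (a simple sequential-covering loop).

def purity(covered, label):
    labels = [y for _, y in covered]
    return labels.count(label) / len(labels) if labels else 0.0

def induce_rules(examples, features):
    rules = []
    remaining = list(examples)
    while remaining:
        best = None
        for f in features:
            covered = [(x, y) for x, y in remaining if x.get(f)]
            if not covered:
                continue
            for label in set(y for _, y in covered):
                p = purity(covered, label)
                if best is None or p > best[0]:
                    best = (p, f, label, covered)
        if best is None:
            break  # no feature covers anything further
        _, f, label, covered = best
        rules.append((f, label))
        remaining = [e for e in remaining if e not in covered]
    return rules

# Invented toy corpus: word-presence features -> topic label.
docs = [
    ({"ball": 1, "vote": 0}, "sports"),
    ({"ball": 1, "vote": 0}, "sports"),
    ({"ball": 0, "vote": 1}, "politics"),
]
print(induce_rules(docs, ["ball", "vote"]))
# → [('ball', 'sports'), ('vote', 'politics')]
```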
An Advanced Conceptual Diagnostic Healthcare Framework for Diabetes and Cardiovascular Disorders
Data mining, together with emerging computing techniques, has profoundly influenced the healthcare industry. Researchers have combined various data mining and Internet of Things (IoT) techniques to build automated solutions for diabetes and heart patients. However, a more advanced and unified solution is still needed, one that can offer a therapeutic opinion to individual diabetic and cardiac patients. Therefore, a smart data mining and IoT (SMDIoT) based advanced healthcare system for diabetes and cardiovascular diseases is proposed here. The hybridization of data mining and IoT with other emerging computing techniques is expected to yield an effective and economical solution for diabetes and cardiac patients. SMDIoT hybridizes the ideas of data mining, Internet of Things, chatbots, contextual entity search (CES), bio-sensors, semantic analysis and granular computing (GC). The bio-sensors of the proposed system help obtain the current and precise status of the patients concerned, so that in an emergency the necessary medical assistance can be provided. The novelty lies in the hybrid framework and the adequate support of chatbots, granular computing, contextual entity search and semantic analysis. The practical implementation of this system is very challenging and costly; however, it appears to be a more effective and economical solution for diabetes and cardiac patients.

Comment: 11 pages
Classifying sequences by the optimized dissimilarity space embedding approach: a case study on the solubility analysis of the E. coli proteome
We evaluate a version of the recently-proposed classification system named
Optimized Dissimilarity Space Embedding (ODSE) that operates in the input space
of sequences of generic objects. The ODSE system has been originally presented
as a classification system for patterns represented as labeled graphs. However,
since ODSE is founded on the dissimilarity space representation of the input
data, the classifier can be easily adapted to any input domain where it is
possible to define a meaningful dissimilarity measure. Here we demonstrate the
effectiveness of the ODSE classifier for sequences by considering an
application dealing with the recognition of the solubility degree of the
Escherichia coli proteome. Solubility, or analogously aggregation propensity,
is an important property of protein molecules, which is intimately related to
the mechanisms underlying the chemico-physical process of folding. Each protein
of our dataset is initially associated with a solubility degree and it is
represented as a sequence of symbols, denoting the 20 amino acid residues. The
computational results obtained here, which, we stress, were achieved with no context-dependent tuning of the ODSE system, confirm the validity and generality of the ODSE-based approach for structured data classification.

Comment: 10 pages, 49 references
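The core dissimilarity-space idea in the abstract, stripped of ODSE's optimization stage, can be sketched in a few lines: each symbol sequence is embedded as a vector of edit distances to a small set of prototype sequences, after which any ordinary vector classifier applies (here a 1-NN). The prototypes, sequences, and labels below are invented toy data, and the dissimilarity measure (Levenshtein) is an assumption for illustration.

```python
# Minimal sketch of dissimilarity-space embedding for sequences:
# embed(seq) = vector of edit distances to fixed prototype sequences,
# then classify the embedded vectors with a 1-nearest-neighbour rule.
# ODSE additionally optimizes the representation, which this omits.

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def embed(seq, prototypes):
    return [edit_distance(seq, p) for p in prototypes]

def nn_classify(vec, train_vecs, train_labels):
    d = [sum((u - v) ** 2 for u, v in zip(vec, w)) for w in train_vecs]
    return train_labels[d.index(min(d))]

# Invented toy data: short symbol strings with made-up labels.
prototypes = ["AAAA", "GGGG"]
train = [("AAAT", "soluble"), ("AGAA", "soluble"), ("GGGC", "insoluble")]
vecs = [embed(s, prototypes) for s, _ in train]
labels = [lab for _, lab in train]
print(nn_classify(embed("AAGA", prototypes), vecs, labels))  # → soluble
```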
Examples of Artificial Perceptions in Optical Character Recognition and Iris Recognition
This paper assumes the hypothesis that human learning is perception based,
and consequently, the learning process and perceptions should not be
represented and investigated independently or modeled in different simulation
spaces. In order to keep the analogy between the artificial and human learning,
the former is assumed here as being based on the artificial perception. Hence,
instead of choosing to apply or develop a Computational Theory of (human)
Perceptions, we choose to mirror the human perceptions in a numeric
(computational) space as artificial perceptions and to analyze the
interdependence between artificial learning and artificial perception in the
same numeric space, using one of the simplest tools of Artificial Intelligence
and Soft Computing, namely the perceptrons. As practical applications, we
choose to work around two examples: Optical Character Recognition and Iris
Recognition. In both cases a simple Turing test shows that artificial
perceptions of the difference between two characters and between two irides are
fuzzy, whereas the corresponding human perceptions are, in fact, crisp.

Comment: 5th Int. Conf. on Soft Computing and Applications (Szeged, HU), 22-24 Aug 201
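Since the abstract names the perceptron as its tool, a self-contained toy example may help: the sketch below trains a single perceptron to separate two 3x3 pixel "characters". The bitmaps and labels are invented for illustration; they are not the paper's OCR or iris data.

```python
# Toy perceptron separating two invented 3x3 bitmap "characters"
# (a bar-like 'I' vs a box-like 'O'), using the classic
# mistake-driven perceptron update rule.

def train_perceptron(samples, epochs=20, lr=1.0):
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:              # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                 # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

I = [0, 1, 0,
     0, 1, 0,
     0, 1, 0]   # vertical bar
O = [1, 1, 1,
     1, 0, 1,
     1, 1, 1]   # hollow box
w, b = train_perceptron([(I, +1), (O, -1)])
print(predict(w, b, I), predict(w, b, O))  # → 1 -1
```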