5 research outputs found
Development of ontological knowledge representation: learning hydrocarbons with double bonds at the secondary level
This paper presents the development of an ontological knowledge organization and representation, and explains how applying appropriate visualization methods to it can lead to meaningful learning. We have applied systemic diagrams (SD) as a method of visualizing ontological knowledge organization. Seven ontological models for "Hydrocarbons with double bonds", following the development from concept map to systemic diagram, are constructed. The chemical properties of alkenes in particular are elaborated and represented as a final systemic diagram (SDf). [AJCE, 3(2), June 2013]
A Secure Multi-Layer e-Document Method for Improving e-Government Processes
Abstract: In recent years, there has been tremendous growth in e-Government services due to advances in Information Communication Technology and in the number of citizens engaging in e-Government transactions. In government administration, processing the different types of documents is very time consuming, and there are many data-input problems. There is also a need to satisfy citizens' requests to retrieve government information and to link these requests to build an online document without asking the citizen to input the data more than once. To provide an e-Government service that is easy to access, fast, and secure, the e-Document plays an important role in the management and interoperability of e-Government systems. To meet these challenges, this paper presents a Secure Multi-Layer e-Application (SMeA) method for improving e-Government processes. This method involves five steps: (i) identifying an e-Template; (ii) building a SMeA; (iii) mapping the data; (iv) processing the e-Application; and (v) approving the e-Application. The first step involves requirements analysis and the last four involve data analysis for building a SMeA. To demonstrate its usefulness, we applied SMeA to a case study of an application for a licence to set up a new business in Vietnam.
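The five SMeA steps read as a linear pipeline, which could be sketched roughly as below. Everything here is an illustrative assumption: the function names, the dict-based e-Application format, and the approval rule are invented for the sketch and are not the paper's actual implementation.

```python
# Hypothetical sketch of the five SMeA steps as a linear pipeline.

def identify_template(request):
    # Step (i): requirements analysis selects an e-Template for the request.
    return {"template": request["service"], "fields": ["name", "address"]}

def build_smea(template):
    # Step (ii): wrap the template in a secure multi-layer e-Application
    # (layers are placeholders for the paper's security layers).
    return {"layers": ["citizen", "agency"], **template}

def map_data(app, citizen_data):
    # Step (iii): fill the template fields from data the citizen entered once.
    app["data"] = {f: citizen_data[f] for f in app["fields"]}
    return app

def process_application(app):
    # Step (iv): agency-side processing; here simply "all fields present".
    app["processed"] = all(v for v in app["data"].values())
    return app

def approve_application(app):
    # Step (v): final approval decision.
    app["status"] = "approved" if app["processed"] else "rejected"
    return app

request = {"service": "new-business-licence"}
citizen = {"name": "A. Tran", "address": "Hanoi"}
app = approve_application(process_application(
    map_data(build_smea(identify_template(request)), citizen)))
```

The point of the sketch is only the data flow: the citizen's input is captured once in step (iii) and reused by the later steps, which is the "without asking the citizen to input the data more than once" requirement from the abstract.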
Representación computacional del lenguaje natural escrito (Computational Representation of Written Natural Language)
When humans read, or hear, words, they immediately relate them to a concept. This is possible due to the information already stored in the brain and also to humans' ability to select, process, and associate such information with words. However, for a computer, natural language text is only a sequence of bits that does not convey any meaning on its own, unless properly processed. A computer interprets this bit sequence by modeling the processing that takes place in human minds, namely structuring and linking the text with previously stored information. During this process, as well as when describing its results, the text is represented using various formal structures that permit automatic processing, interpretation, and comparison of information. In this paper, we present a detailed description of these structures.
Concept Mining: A Conceptual Understanding based Approach
Due to the rapid daily growth of information, there is a
considerable need to extract and discover valuable knowledge from
data sources such as the World Wide Web. Most common
techniques in text mining are based on the statistical analysis of a
term, either a word or a phrase. These techniques treat documents as
bags of words and pay no attention to the meaning of the document
content. In addition, statistical analysis of term frequency
captures the importance of a term within a document only. However,
two terms can have the same frequency in their documents while one
term contributes more to the meaning of its sentences than the other.
Therefore, there is a pressing need for a model that
captures the meaning of linguistic utterances in a formal structure.
The underlying model should identify the terms that capture the
semantics of the text. In this case, the model can capture the terms that
represent the concepts of a sentence, which in turn leads to discovering the
topic of the document.
A new concept-based model is introduced that analyzes terms at the sentence,
document, and corpus levels, rather than the traditional analysis at the
document level only. The concept-based model can effectively
discriminate between terms that are unimportant to the sentence
semantics and terms that hold the concepts representing the
sentence meaning.
The proposed model consists of a concept-based statistical analyzer, a
conceptual ontological graph representation, a concept extractor, and a
concept-based similarity measure. Each term that contributes to the
sentence semantics is assigned two different weights, one by the
concept-based statistical analyzer and one by the conceptual ontological
graph representation. These two weights are combined into a new
weight, and the concepts with the maximum combined weights are selected
by the concept extractor. The similarity between documents is then
calculated with a new concept-based similarity measure. The
proposed similarity measure takes full advantage of the
concept analysis measures at the sentence, document, and corpus
levels in calculating the similarity between documents.
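The weighting-and-extraction pipeline described above could be sketched roughly as follows. The abstract does not specify the combination formula or the similarity measure's exact form, so the product-style combination, the function names, and the cosine-style similarity below are all assumptions for illustration only.

```python
import math

def combine_weights(stat_weight, graph_weight):
    # Hypothetical combination of the two weights assigned to a term by the
    # statistical analyzer and the ontological graph; the actual formula
    # is not given in the abstract, so a simple product is assumed.
    return stat_weight * graph_weight

def top_concepts(term_weights, k=3):
    """Concept-extractor step: pick the k terms with the maximum
    combined weights."""
    ranked = sorted(term_weights.items(), key=lambda kv: kv[1], reverse=True)
    return [term for term, _ in ranked[:k]]

def concept_similarity(doc_a, doc_b):
    """Cosine similarity over concept-weight vectors, as a stand-in for
    the paper's concept-based similarity measure."""
    terms = set(doc_a) | set(doc_b)
    dot = sum(doc_a.get(t, 0.0) * doc_b.get(t, 0.0) for t in terms)
    na = math.sqrt(sum(w * w for w in doc_a.values()))
    nb = math.sqrt(sum(w * w for w in doc_b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Example: "engine" scores high under both weightings, so it survives
# extraction; the stopword-like "the" does not.
stat = {"engine": 0.9, "fuel": 0.4, "the": 0.1}
graph = {"engine": 0.8, "fuel": 0.5, "the": 0.05}
combined = {t: combine_weights(stat[t], graph[t]) for t in stat}
```

The design point the abstract is making survives even in this toy form: a term must matter under both analyses (statistical and ontological) to receive a high combined weight, which filters out terms that are frequent but semantically unimportant.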
Large sets of experiments using the proposed concept-based model on
different datasets in text clustering, categorization, and retrieval
are conducted. The experiments provide an extensive comparison
between traditional weighting and the concept-based weighting
obtained by the concept-based model. Experimental results in text
clustering, categorization, and retrieval demonstrate a substantial
enhancement of quality using: (1) concept-based term frequency
(tf), (2) conceptual term frequency (ctf), (3) the concept-based
statistical analyzer, (4) the conceptual ontological graph, and (5) the
concept-based combined model.
In text clustering, the evaluation relies on two
quality measures: the F-measure and the Entropy. In text
categorization, the evaluation relies on three quality
measures: the micro-averaged F1, the macro-averaged F1, and the error
rate. In text retrieval, the evaluation relies on three
quality measures: precision at 10 retrieved documents P(10), the
binary preference measure (bpref), and the mean uninterpolated average
precision (MAP). All of these quality measures improve when the
newly developed concept-based model is used to enhance the quality
of text clustering, categorization, and retrieval.
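The retrieval measures named above, P(10) and MAP, have standard definitions that can be sketched briefly; this is a minimal illustration of those standard formulas, not code from the work itself (bpref is omitted for brevity).

```python
def precision_at_k(ranked, relevant, k=10):
    """P(k): fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def average_precision(ranked, relevant):
    """Uninterpolated AP: sum the precision at the rank of each relevant
    document retrieved, then divide by the total number of relevant docs."""
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP: mean of AP over all queries; runs is a list of
    (ranked_list, relevant_set) pairs."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```

For example, a ranking ["d1", "d2", "d3", "d4"] with relevant set {"d1", "d3"} scores AP = (1/1 + 2/3) / 2 = 5/6, since precision is measured at ranks 1 and 3 where the relevant documents appear.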