
    The potential of semantic paradigm in warehousing of big data

    Big data hold analytical potential that was hard to realize with previously available technologies. After new storage paradigms intended for big data, such as NoSQL databases, emerged, traditional systems were pushed out of focus. Current research concentrates on reconciling the two paradigms at different levels, or on replacing one with the other. Similarly, the emergence of NoSQL databases has begun to push traditional (relational) data warehouses out of research and even practical focus. Data warehousing is known for its strict modelling process, which captures the essence of the business processes. For that reason, a mere integration layer to bridge the NoSQL gap is not enough; the issue has to be addressed at a higher level of abstraction, during the modelling phase. NoSQL databases generally lack a clear, unambiguous schema, which makes comprehending their contents difficult and their integration and analysis harder. This motivates the use of semantic web technologies to enrich NoSQL database contents with additional meaning and context. This paper reviews the application of semantics in data integration and data warehousing and analyses its potential for integrating NoSQL data with traditional data warehouses, with some focus on document stores. It also proposes directions for future work on the modelling phases of big data warehouses.
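
    As a rough illustration of the kind of semantic enrichment mentioned in this abstract, the sketch below maps one schema-less document-store record to RDF triples with rdflib. The EX vocabulary, field names and sample record are assumptions made here for demonstration only, not the paper's own implementation.

```python
# A minimal sketch (not the paper's method) of making a schema-less
# document-store record's meaning explicit as RDF triples.
# The EX vocabulary and the field names are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/sales#")  # hypothetical vocabulary

def document_to_rdf(doc_id: str, doc: dict) -> Graph:
    """Map one JSON-like document to triples under the EX vocabulary."""
    g = Graph()
    g.bind("ex", EX)
    subject = EX[doc_id]
    g.add((subject, RDF.type, EX.Order))             # assumed document type
    for field, value in doc.items():
        g.add((subject, EX[field], Literal(value)))  # one triple per field
    return g

if __name__ == "__main__":
    order = {"customer": "ACME", "amount": 1250.0, "region": "EU"}
    print(document_to_rdf("order-42", order).serialize(format="turtle"))
```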

    Semantic analysis in the automation of ER modelling through natural language processing


    Integrating intelligent methodological and tutoring assistance in a CASE platform: The PANDORA experience

    Database design involves aspects as different as conceptual and logical modelling knowledge and domain understanding, so it takes considerable effort to abstract the real world and represent it through a data model. CASE tools emerged to automate the database development process, and these platforms try to help the database designer in the different design phases. Nevertheless, such tools are frequently mere diagrammers: they do not fully implement the design methodology they are supposed to support, and they do not offer intelligent methodological advice to novice designers. This paper introduces the PANDORA tool (an acronym of Platform for Database Development and Learning via Internet), which is being developed in a research project that tries to mitigate some of the deficiencies observed in several CASE tools by defining methods and techniques for database development that are useful to both students and practitioners. Specifically, this work focuses on two PANDORA components: the Conceptual Modelling and Learning Support subsystems.

    Automatic domain-specific learning: towards a methodology for ontology enrichment

    At the current rate of technological development, in a world where enormous amounts of data are constantly created and in which the Internet is the primary means of information exchange, there is a need for tools that help to process, analyze and use that information. However, while the growth of information creates many opportunities for social and scientific advances, it has also highlighted the difficulty of extracting meaningful patterns from massive data. Ontologies have been claimed to play a major role in the processing of large-scale data, as they serve as universal models of knowledge representation, and they are being studied as a possible solution to this problem. This paper presents a method for the automatic expansion of ontologies based on the exploitation of corpus and terminological data. The proposed "ontology enrichment method" (OEM) consists of a sequence of tasks aimed at automatically classifying an input keyword under its corresponding node within a target ontology. Results prove that the method can be successfully applied to the automatic classification of specialized units into a reference ontology.

    Financial support for this research was provided by the DGI, Spanish Ministry of Education and Science, grant FFI2011-29798-C0201.

    Ureña Gómez-Moreno, P.; Mestre-Mestre, E. M. (2017). Automatic domain-specific learning: towards a methodology for ontology enrichment. LFE. Revista de Lenguas para Fines Específicos, 23(2), 63-85. http://hdl.handle.net/10251/148357
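
    The classification step named in the abstract, attaching an input keyword to its corresponding node of a target ontology, can be pictured with the minimal sketch below, which simply ranks node glosses by TF-IDF cosine similarity to the keyword's corpus context. The node labels, glosses and example sentence are invented for illustration; the actual OEM involves a longer sequence of corpus and terminological tasks.

```python
# A minimal sketch of keyword-to-node classification by lexical similarity.
# Node glosses and the corpus snippet are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical target-ontology nodes with short textual glosses.
ontology_nodes = {
    "Device": "electronic hardware device apparatus instrument sensor",
    "Procedure": "method process technique surgical operation",
    "Substance": "chemical compound drug material reagent",
}

def classify_keyword(keyword_context: str) -> str:
    """Return the node whose gloss is most similar to the keyword's corpus context."""
    labels = list(ontology_nodes)
    texts = [ontology_nodes[label] for label in labels] + [keyword_context]
    matrix = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(matrix[len(labels)], matrix[:len(labels)]).ravel()
    return labels[sims.argmax()]

print(classify_keyword("the stent is an implantable device inserted via a catheter"))
```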

    Semi-Automated Development of Conceptual Models from Natural Language Text

    The process of converting natural language specifications into conceptual models requires detailed analysis of natural language text, and designers frequently make mistakes when undertaking this transformation manually. Although many approaches have been used to help designers translate natural language text into conceptual models, each approach has its limitations. One of the main limitations is the lack of a domain-independent ontology that can be used as a repository of entities and relationships to guide the transition from natural language processing to a conceptual model. Such an ontology is not currently available because it would be very difficult and time-consuming to produce. This thesis proposes a semi-automated system, called SACMES, for mapping natural language text into conceptual models. SACMES combines a linguistic approach with an ontological approach and human intervention to achieve the task. It learns from the natural language specifications that it processes, stores what it learns in a conceptual model ontology and a user history knowledge database, and then uses the stored information to improve performance and reduce the need for human intervention. The evaluation conducted on SACMES demonstrates that (1) designers create better conceptual models with the system than without it, and (2) the performance of the system improves as it processes more natural language requirements, so the need for human intervention decreases. These advantages may, however, be improved further by developing the learning and retrieval techniques used by the system.
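
    A toy illustration, far simpler than SACMES itself, of how recurring sentence patterns in a specification can yield candidate entities and relationships for a conceptual model; the patterns and example sentences below are invented for demonstration.

```python
# A toy pattern-based extractor of entity/relationship candidates.
# Patterns and sentences are illustrative assumptions, not SACMES itself.
import re

# "Each department has many employees" -> ("Department", "1:N", "Employees")
PATTERN = re.compile(
    r"(?:each|every|an|a)\s+(\w+)\s+"
    r"(?:has|contains|includes)\s+"
    r"(?:(one or more|many|an|a)\s+)?(\w+)",
    re.IGNORECASE,
)

def extract_candidates(specification: str):
    """Yield (entity, cardinality, entity) triples suggested by the text."""
    for owner, quantifier, owned in PATTERN.findall(specification):
        cardinality = "1:N" if quantifier.lower() in ("many", "one or more") else "1:1"
        yield owner.capitalize(), cardinality, owned.capitalize()

spec = "Each department has many employees. Every employee has an address."
for candidate in extract_candidates(spec):
    print(candidate)
```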

    Defining and Classifying Learning Outcomes: A case study

    The Bologna Process set out to globalize higher education, creating a unified architecture that strengthens higher education and deepens the interconnection of higher-education policy spaces around the world, and in Europe in particular. The aim of this work is to present a model for identifying and classifying skills and learning outcomes, based on the official documents of the course units (syllabus and assessment components) of a Higher Education course. We believe that the adoption of this model by different institutions will contribute to the interoperability of learning outcomes, thus enhancing the mobility of teachers and students within the EHEA (European Higher Education Area) and third countries.

    Intelligent machine for ontological representation of massive pedagogical knowledge based on neural networks

    Higher education is increasingly adopting free learning management systems (LMS). The main objective behind integrating such systems is to automate online educational processes for the benefit of all the actors who use them. These processes are developed by integrating and implementing learning scenarios similar to those of traditional learning systems. LMS produce big data traces emerging from actors' interactions in online learning, yet adequate instruments for representing the knowledge extracted from such large traces are lacking. In this context, this research aims to transform the big data produced by interactions into big knowledge that can be used in MOOCs by actors at a given learning level within a given learning domain, be it formal or informal. To achieve this objective, we adopt ontological approaches, namely mapping, learning and enrichment, together with artificial-intelligence approaches that are relevant to our research context. In this paper, we propose three interconnected algorithms for a better ontological representation of learning actors' knowledge, relying heavily on artificial-intelligence approaches throughout the stages of this work. To verify the validity of our contribution, we carry out an experiment on an example of knowledge sources.
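
    One plausible reading of the trace-to-knowledge step is sketched below under assumptions of this summary rather than the paper's three algorithms: raw LMS interaction traces are aggregated into per-learner feature vectors, and a small neural network assigns a learning level that could then populate an ontology. The trace fields, features, labels and training data are invented.

```python
# A minimal sketch: LMS interaction traces -> per-learner features -> level.
# Trace fields, features, labels and training data are illustrative assumptions.
from collections import Counter
from sklearn.neural_network import MLPClassifier

VERBS = ["view", "post", "quiz"]  # assumed interaction types

def traces_to_features(traces):
    """Count each interaction type per learner, in a fixed verb order."""
    counts = Counter((t["actor"], t["verb"]) for t in traces)
    actors = sorted({t["actor"] for t in traces})
    return actors, [[counts[(actor, verb)] for verb in VERBS] for actor in actors]

# Tiny synthetic training set: interaction counts and expert-assigned levels.
X_train = [[30, 1, 2], [5, 0, 0], [20, 8, 6], [2, 0, 1], [15, 4, 4]]
y_train = ["intermediate", "beginner", "advanced", "beginner", "intermediate"]
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
model.fit(X_train, y_train)

traces = ([{"actor": "learner-1", "verb": "view"}] * 25
          + [{"actor": "learner-1", "verb": "quiz"}] * 5)
actors, X = traces_to_features(traces)
print(dict(zip(actors, model.predict(X))))  # level assigned to each learner
```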

    Towards a clinical trial ontology using a concern-oriented approach

    To reduce costs and improve the quality of research in clinical trials (CTs), a more systematic approach to CT automation is needed in order to reinforce interoperability at the various levels of the research process. A conceptual model of CTs has been developed for this purpose. At the base of every modelling approach are partitioning criteria that allow us to master the complexity of the universe to be modelled. In this report we introduce an original analysis method, based on stakeholder concerns, for partitioning the conceptual domain of CTs into stakeholder-oriented sub-domains. The stakeholders' mental representations relating to each concern are identified as clusters of concepts linked to other concepts, and each cluster is regarded as the rational basis for the corresponding concern. The concepts found in the rational bases populate the universe of discourse specific to each stakeholder and make up the stakeholders' vocabulary. Some concepts are shared with other stakeholders, while others are specific to a single stakeholder; some concepts are specific to CTs, while others are medical or general concepts. In this way a concern-oriented ontology for CTs can be created. The method is illustrated using subject selection criteria, one component of a CT design, but it can be applied to any other component of the CT protocol. The taxonomy of the CT concept vocabulary and the network of the corresponding rational bases provide a possible structure for software development, especially if a solution based on service-oriented architectures is adopted.
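
    A toy rendering of the concern-oriented idea described above: each stakeholder concern is modelled as a cluster of concepts (its rational basis), and intersecting the clusters separates shared clinical-trial vocabulary from stakeholder-specific terms. The concerns and concept sets below are invented for illustration.

```python
# A toy model of concern-oriented partitioning: rational bases as concept sets.
# The concerns and concepts are invented examples, not the report's ontology.
rational_bases = {
    "investigator:eligibility": {"subject", "inclusion criterion", "exclusion criterion", "diagnosis"},
    "sponsor:recruitment": {"subject", "enrolment target", "site", "inclusion criterion"},
    "regulator:safety": {"subject", "adverse event", "diagnosis", "protocol deviation"},
}

# Concepts shared by every concern form the common CT vocabulary.
shared = set.intersection(*rational_bases.values())

for concern, concepts in rational_bases.items():
    print(concern, "specific:", sorted(concepts - shared))
print("shared vocabulary:", sorted(shared))
```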

    Research in progress: report on the ICAIL 2017 doctoral consortium

    This paper arose out of the 2017 International Conference on AI and Law (ICAIL) doctoral consortium. Five students presented their Ph.D. work, and each of them has contributed a section to this paper. The paper offers a view of the topics currently engaging students and shows the diversity of their interests and influences.