
    The Johannine Gospel in Gnostic Exegesis: Heracleon's Commentary on John [review] / Pagels, Elaine H.


    Developing a Generic Predictive Computational Model using Semantic Data Pre-Processing with Machine Learning Techniques and its Application for Stock Market Prediction Purposes

    In this paper, we present a Generic Predictive Computational Model (GPCM) and apply it by building a Use Case for the FTSE 100 index forecasting. This involves the mining of heterogeneous data based on semantic methods (ontology), graph-based methods (knowledge graphs, graph databases) and advanced Machine Learning methods. The main focus of our research is data pre-processing aimed at a more efficient selection of input features. The GPCM model pipeline's cycles involve the propagation of the (initially raw) data to the Graph Database structured by an ontology and regular updates of the features' weights in the Graph Database by the feedback loop from the Machine Learning Engine. The Graph Database queries output the most valuable features that, in turn, serve as the input for the Machine Learning-based prediction. The end-product of this process is fed back to the Graph Database to update the weights. We report on practical experiments evaluating the effectiveness of the GPCM application in forecasting the FTSE 100 index. The underlying dataset contains multiple parameters related to predicting time-series data, where Long Short-Term Memory (LSTM) is known to be one of the most efficient machine learning methods. The most challenging task here has been to overcome the known restrictions of LSTM, which is capable of analysing one input parameter only. We solved this problem by combining several parallel LSTMs, a Concatenation unit, which merges the LSTMs' outputs (into a time-series matrix), and a Linear Regression Unit, which produces the final result
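    The parallel-encoder pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: each per-parameter LSTM is stood in for by a simple exponential-moving-average encoder (a hypothetical stand-in), the Concatenation unit merges the encoder outputs into one feature vector, and the Linear Regression unit produces the final scalar forecast.

    ```python
    import numpy as np

    def encode_series(series, alpha=0.3):
        """Stand-in for one per-parameter LSTM: encodes a 1-D series
        as the last 5 values of its exponential moving average."""
        ema = series[0]
        out = []
        for x in series:
            ema = alpha * x + (1 - alpha) * ema
            out.append(ema)
        return np.array(out[-5:])

    def predict(parallel_series, weights):
        # One encoder per input parameter, run independently ("in parallel")
        encodings = [encode_series(s) for s in parallel_series]
        # Concatenation unit: merge encoder outputs into a single feature vector
        features = np.concatenate(encodings)
        # Linear Regression unit: final scalar forecast
        return features @ weights

    rng = np.random.default_rng(0)
    series_list = [rng.standard_normal(30) for _ in range(3)]  # 3 input parameters
    w = rng.standard_normal(15)  # 3 encoders x 5 features each
    print(predict(series_list, w))
    ```

    In the actual GPCM, the encoders would be trained LSTMs and the regression weights would be fitted to the FTSE 100 target; the sketch only shows how the three units compose.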

    The Digital Health Evidence Generator


    Live Demonstration of the PITHIA e-Science Centre

    PITHIA-NRF (Plasmasphere Ionosphere Thermosphere Integrated Research Environment and Access services: a Network of Research Facilities) is a four-year project funded by the European Commission's H2020 programme to integrate data, models and physical observing facilities for further advancing European research capacity in this area. A central point of PITHIA-NRF is the PITHIA e-Science Centre (PeSC), a science gateway that provides access to distributed data sources and prediction models to support scientific discovery. As the project reached its half-way point in March 2023, the first official prototype of the e-Science Centre was released. This live demonstration will provide an overview of the current status and capabilities of the PeSC, highlighting the underlying ontology and metadata structure, the registration process for models and datasets, the ontology-based search functionalities and the interaction methods for executing models and processing data. One of the main objectives of the PeSC is to enable scientists to register their Data Collections, which can be raw or higher-level datasets as well as prediction models, using a standard metadata format and a domain ontology. For these purposes, PITHIA builds on the results of the ESPAS FP7 project by adopting and modifying its ontology and metadata specification. The project utilises the ISO 19156 standard on Observations and Measurements (O&M) to describe Data Collections in an XML format that is widely used within the research community. Following the standard, Data Collections refer to other XML documents, such as Computations that a model used to derive the results, Acquisitions describing how the data was collected, Instruments that were used during the data collection process, or Projects that were responsible for the data/model. Within the XML documents, specific keywords of the Space Physics ontology can be used to describe the various elements.
For example, Observed Property can be Field, Particle, Wave, or Mixed, at the top level. When preparing the XML metadata file, only these values are accepted for validation. Once described in XML format, Data Collections can be published in the PeSC and searched using the ontology-based search engine. Besides large and typically changing/growing Data Collections, the PeSC also supports the registration of Catalogues. These are smaller sets of data, originating from a Data Collection and related to specific events, e.g. volcano eruptions. Catalogue Data Subsets can be assigned DOIs to be referenced in publications and provide a permanent set of data for reproducibility. In addition to publication and search, the PeSC also provides several mechanisms for interacting with Data Collections, e.g. executing a model or downloading subsets of the data. In the current version, two of the four planned interaction methods are implemented: accessing the Data Collection by a direct link and interacting with it via an API and an automatically generated GUI. Data Collections can either be hosted by the local provider or can be deployed on EGI cloud computing resources. The development of the PeSC is still work in progress. Authentication and authorisation are currently being implemented using EGI Check-in and the PERUN Attribute Management System. Further interaction mechanisms enabling local execution and dynamic deployment in the cloud will also be added in the near future. The main screen of the PeSC is illustrated in Figure 1. The source code is open and available on GitHub.
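    The keyword validation described above, where only the ontology's accepted terms pass, can be sketched with a few lines of Python. The element name `observedProperty` and the flat document shape are hypothetical simplifications; the real PeSC metadata follows the ISO 19156 O&M XML schemas.

    ```python
    import xml.etree.ElementTree as ET

    # Top-level Observed Property terms listed in the abstract
    OBSERVED_PROPERTY_TERMS = {"Field", "Particle", "Wave", "Mixed"}

    def validate_observed_properties(xml_text):
        """Return the list of observedProperty values (in a simplified,
        hypothetical metadata document) that are not accepted ontology terms."""
        root = ET.fromstring(xml_text)
        return [el.text for el in root.iter("observedProperty")
                if el.text not in OBSERVED_PROPERTY_TERMS]

    doc = """<dataCollection>
      <observedProperty>Wave</observedProperty>
      <observedProperty>Plasma</observedProperty>
    </dataCollection>"""
    print(validate_observed_properties(doc))  # → ['Plasma']
    ```

    An empty returned list means every keyword in the document is drawn from the accepted vocabulary, so the file would pass this particular check.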

    Semantic Data Pre-Processing for Machine Learning Based Bankruptcy Prediction Computational Model

    This paper studies a Bankruptcy Prediction Computational Model (BPCM model) – a comprehensive methodology for evaluating companies' bankruptcy level, which combines storing, structuring and pre-processing of raw financial data using semantic methods with machine learning analysis techniques. Raw financial data are interconnected, diverse, often potentially inconsistent, and open to duplication. The main goal of our research is to develop data pre-processing techniques where ontologies play a central role. We show how ontologies are used to extract and integrate information from different sources, prepare data for further processing, and enable communication in natural language. Using ontology, we give meaning to the disparate and raw business data, build logical relationships between data in various formats and sources, and establish relevant context. Our Ontology of Bankruptcy Prediction (OBP Ontology), which provides a conceptual framework for companies' financial analysis, is built in the widely established Protégé environment. An OBP Ontology can be effectively described with a graph database. A graph database expands the capabilities of traditional databases by tackling the interconnected nature of economic data and providing graph-based structures to store information, allowing the effective selection of the most relevant input features for the machine learning algorithm. To create and manage the BPCM Graph Database (Graph DB), we use the Neo4j environment and the Neo4j query language, Cypher, to perform feature selection on the structured data. Selected key features are used for the Machine Learning Engine – a supervised MLP Neural Network with a Sigmoid activation function. The programming of this component is performed in Python. We illustrate the approach and advantages of semantic data pre-processing by applying it to a representative use case
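    The final stage of the pipeline above, an MLP with sigmoid activations consuming graph-selected features, can be sketched as a forward pass. This is a minimal illustrative sketch, not the paper's trained network: the layer sizes, random weights, and the idea that exactly six features come out of the graph-database queries are all assumptions.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def mlp_forward(x, W1, b1, W2, b2):
        """One hidden layer with sigmoid activations and a sigmoid output,
        yielding a bankruptcy-risk score in (0, 1)."""
        h = sigmoid(W1 @ x + b1)      # hidden layer
        return sigmoid(W2 @ h + b2)   # output layer

    rng = np.random.default_rng(1)
    x = rng.random(6)  # hypothetical: 6 features selected via Cypher queries
    W1, b1 = rng.standard_normal((4, 6)), np.zeros(4)
    W2, b2 = rng.standard_normal((1, 4)), np.zeros(1)
    score = mlp_forward(x, W1, b1, W2, b2).item()
    print(score)  # a probability-like score strictly between 0 and 1
    ```

    In a supervised setting the weights would be fitted to labelled bankrupt/solvent companies (e.g. by backpropagation); the sketch only shows how the selected features flow through the network.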

    EnAbled: A Psychology Profile based Academic Compass to Build and Navigate Students' Learning Paths

    In the modern educational environment, students are faced with a plethora of different options in their learning journey during the University years. To help them make optimal choices among all these options, ones that best correspond to their individuality, we have conducted a research project "Enabled: Educational Network Amplifying Learning Experience" (EnAbled). The project aims at "mapping" these choices to personal preferences and individual learning styles. We allow students to either self-assess their profiles or use the Lumina Psychological Traits of Behavioral Preferences tests. We argue that this approach will be beneficial not only to the students but also to the academics assisting them in the preparation and delivery of modules, providing them with more insight into what and how teaching is delivered

    CV20019

    This report provides the main results of the 2020 underwater television survey on the 'Labadie, Jones and Cockburn Banks' ICES assessment area; Functional Unit 20-21. The 2020 survey was multi-disciplinary in nature, collecting UWTV and other ecosystem data. A total of 97 UWTV stations were completed at 6 nm intervals over a randomised isometric grid design. The mean burrow density was 0.102 burrows/m2, compared with 0.06 burrows/m2 in 2019. The 2020 geostatistical abundance estimate was 1020 million, a 65% increase on the abundance from 2019, with a CV of 5%, which is well below the upper limit of 20% recommended by SGNEPS 2012. Low to medium densities were observed throughout the ground. Using the 2020 estimate of abundance and updated stock data, the catches in 2021 that correspond to the F ranges in the EU multi-annual plan for Western Waters are between 1682 and 1710 tonnes (assuming that discard rates and fishery selection patterns do not change from the average of 2017–2019). One species of sea-pen (Virgularia mirabilis) was recorded as present at the stations surveyed. Trawl marks were observed at 36% of the stations surveyed

    Science Gateways with Embedded Ontology-based E-learning Support

    Science gateways are widely utilised in a range of scientific disciplines to provide user-friendly access to complex distributed computing infrastructures. The traditional approach in science gateway development is to concentrate on this simplified resource access and provide scientists with a graphical user interface to conduct their experiments and visualise the results. However, as user communities behind these gateways are growing and opening their doors to less experienced scientists or even to the general public as "citizen scientists", there is an emerging need to extend these gateways with training and learning support capabilities. This paper describes a novel approach showing how science gateways can be extended with embedded e-learning support using an ontology-based learning environment called Knowledge Repository Exchange and Learning (KREL). The paper also presents a prototype implementation of a science gateway for analysing earthquake data and demonstrates how the KREL can extend this gateway with ontology-based embedded e-learning support