
    Development and Validation of a Rule-based Time Series Complexity Scoring Technique to Support Design of Adaptive Forecasting DSS

    Evidence from forecasting research gives reason to believe that understanding time series complexity can enable the design of adaptive forecasting decision support systems (FDSSs) that positively support forecasting behaviors and the accuracy of outcomes. Yet such FDSS design capabilities have not been formally explored, because no systematic approach to identifying series complexity exists. This study describes the development and validation of a rule-based complexity scoring technique (CST) that generates a complexity score for a time series using 12 rules that rely on 14 features of the series. The rule-based schema was developed on 74 series and validated on 52 holdback series, using well-accepted forecasting methods as benchmarks. A supporting experimental validation was conducted with 14 participants who generated 336 structured judgmental forecasts for sets of series classified as simple or complex by the CST. Benchmark comparisons validated the CST by confirming, as hypothesized, that forecasting accuracy was lower for series scored by the technique as complex than for those scored as simple. The study concludes with a comprehensive framework for the design of FDSS that can integrate the CST to adaptively support forecasters under varied conditions of series complexity. The framework is founded on the concepts of restrictiveness and guidance and offers specific recommendations on how these elements can be built into FDSS to support forecasting under varying series complexity.
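
    The abstract does not enumerate the CST's 12 rules or 14 features, so the sketch below is a hypothetical miniature of the general idea: compute a handful of series features, let each satisfied rule add a point to a score, and threshold the score into simple/complex. The feature choices, thresholds, and cutoff are all assumptions, not the paper's.

```python
# Hypothetical miniature of a rule-based time series complexity score.
# Features, thresholds, and the simple/complex cutoff are illustrative.
import numpy as np

def series_features(series: np.ndarray) -> dict:
    """A few simple features (stand-ins for the 14 used by the CST)."""
    diffs = np.diff(series)
    slope = np.polyfit(np.arange(len(series)), series, 1)[0]
    return {
        "cv": np.std(series) / (abs(np.mean(series)) + 1e-9),   # relative variability
        "trend": abs(slope) / (np.std(series) + 1e-9),          # normalized trend strength
        "lag1": np.corrcoef(series[:-1], series[1:])[0, 1],     # lag-1 autocorrelation
        "reversals": np.mean(np.sign(diffs[:-1]) != np.sign(diffs[1:])),  # direction changes
    }

def complexity_score(series: np.ndarray) -> int:
    """Each satisfied rule adds one point; a higher score means more complex."""
    f = series_features(series)
    rules = [
        f["cv"] > 0.5,         # high relative variability
        f["trend"] < 0.1,      # weak or absent trend
        f["lag1"] < 0.3,       # little serial structure to extrapolate
        f["reversals"] > 0.6,  # frequent direction changes
    ]
    return sum(rules)

series = np.array([10.2, 9.8, 11.5, 8.9, 12.3, 9.1, 10.7, 9.5])
print("complex" if complexity_score(series) >= 2 else "simple")
```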

    A conceptual framework for intelligent real-time information processing

    By combining artificial intelligence concepts with the human information processing model of Rasmussen, a conceptual framework was developed for real-time artificial intelligence systems which provides a foundation for system organization, control, and validation. The approach is based on a description of system processing in terms of an abstraction hierarchy of states of knowledge. The states of knowledge are organized along one dimension which corresponds to the extent to which the concepts are expressed in terms of the system inputs or in terms of the system response. Thus organized, the useful states form a generally triangular shape, with the sensors and effectors forming the lower two vertices and the fully evaluated set of courses of action forming the apex. Within the triangle boundaries are numerous processing paths which shortcut the detailed processing by connecting incomplete levels of analysis to partially defined responses. Shortcuts at different levels of abstraction include reflexes, sensory-motor control, rule-based behavior, and satisficing. This approach was used in the design of a real-time tactical decision aiding system and in defining an intelligent aiding system for transport pilots.
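
    As a rough illustration of the shortcut idea, the sketch below wires three levels of such a hierarchy so that a response is produced by the cheapest level that can handle the input, escalating toward full evaluation at the apex only when lower levels fail. The level names, mappings, and dispatch order are invented for illustration and are not from the paper.

```python
# Illustrative sketch of shortcut paths in an abstraction hierarchy.
# All mappings and names below are hypothetical.

def reflex(percept):
    # Lowest-level shortcut: a sensor pattern wired directly to an action.
    return {"obstacle": "brake"}.get(percept)

def rule_based(situation):
    # Mid-level shortcut: a recognized situation mapped to a stored procedure.
    return {"engine_hot": "reduce_throttle"}.get(situation)

def full_evaluation(options):
    # Apex: evaluate all candidate courses of action and pick the best.
    return max(options, key=lambda plan: plan["utility"])["action"]

def respond(percept, situation, options):
    # Try the cheapest adequate path first, escalating only when needed.
    return reflex(percept) or rule_based(situation) or full_evaluation(options)

print(respond("clear", "engine_hot", [{"action": "divert", "utility": 0.7}]))
# -> "reduce_throttle": the rule-based shortcut fires before full evaluation
```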

    A Framework For Refining Text Classification and Object Recognition from Academic Articles

    With the widespread use of the internet, it has become increasingly crucial to extract specific information from vast numbers of academic articles efficiently. Data mining techniques are generally employed to solve this issue. However, data mining for academic articles is challenging because it requires automatically extracting specific patterns from documents with complex, unstructured layouts. Current data mining methods for academic articles employ rule-based (RB) or machine learning (ML) approaches. However, rule-based methods incur a high coding cost for articles with complex typesetting, while purely machine-learning methods require costly annotation work for the complex content types within a paper. Furthermore, using machine learning alone can lead to cases where patterns easily recognized by rule-based methods are mistakenly extracted. To overcome these issues, and from the perspective of analyzing the standard layout and typesetting used in a given publication, we emphasize implementing specific methods for specific characteristics of academic articles. We have developed a novel Text Block Refinement Framework (TBRF), a hybrid of machine learning and rule-based schemes. We used the well-known ACL proceedings articles as experimental data for the validation experiment. The experiment shows that our approach achieved over 95% classification accuracy and 90% detection accuracy for tables and figures. Comment: This paper has been accepted at The International Symposium on Innovations in Intelligent Systems and Applications 2023 (INISTA 2023).
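
    A minimal sketch of the hybrid scheme's control flow, assuming (as the abstract suggests) that rules claim the layout patterns they can match unambiguously and a trained classifier handles the rest. The regex patterns and the `ml_model` interface are placeholders, not TBRF's actual rules or model.

```python
# Hedged sketch of a rule-first, ML-fallback text block classifier.
# Patterns and the classifier interface are illustrative stand-ins.
import re

RULES = [
    (re.compile(r"^\d+(\.\d+)*\s+[A-Z]"), "section_heading"),          # "3.1 Method"
    (re.compile(r"^(Figure|Fig\.)\s*\d+[:.]", re.I), "figure_caption"),
    (re.compile(r"^Table\s*\d+[:.]", re.I), "table_caption"),
]

def classify_block(text: str, ml_model=None) -> str:
    for pattern, label in RULES:              # rule-based pass first
        if pattern.match(text.strip()):
            return label
    if ml_model is not None:                  # ML fallback for ambiguous blocks
        return ml_model.predict([text])[0]    # assumes a scikit-learn-style API
    return "body_text"

print(classify_block("Table 2: Detection accuracy on ACL articles."))  # table_caption
print(classify_block("We evaluate the framework on ..."))              # body_text
```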

    Cloud enabled data analytics and visualization framework for health-shocks prediction

    In this paper, we present a data analytics and visualization framework for health-shocks prediction based on a large-scale health informatics dataset. The framework is developed using cloud computing services based on Amazon Web Services (AWS), integrated with geographical information systems (GIS) to facilitate big data capture, storage, indexing, and visualization through smart devices for different stakeholders. In order to develop a predictive model for health-shocks, we collected a unique dataset from 1,000 households in rural and remotely accessible regions of Pakistan, focusing on health, social, economic, environmental, and healthcare-accessibility factors. We used the collected data to generate a predictive model of health-shocks using a fuzzy rule summarization technique, which provides stakeholders with interpretable linguistic rules that explain the causal factors affecting health-shocks. The evaluation of the proposed system in terms of the interpretability and accuracy of the generated data models for classifying health-shocks shows promising results: the fuzzy model achieves above 89% prediction accuracy under k-fold cross-validation of the data samples.
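
    The sketch below shows what one interpretable fuzzy rule of the kind described might look like. The linguistic variables, membership functions, and thresholds are hypothetical; the paper's actual rules were derived from the survey data by fuzzy rule summarization.

```python
# Hypothetical example of a single interpretable fuzzy rule for
# health-shock risk. Variables and thresholds are invented.
def low(x, lo, hi):
    """Membership in 'low': 1 below lo, 0 above hi, linear in between."""
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

def high(x, lo, hi):
    """Membership in 'high' is the complement of 'low'."""
    return 1.0 - low(x, lo, hi)

def health_shock_risk(income, distance_to_clinic_km):
    # Rule: IF income is LOW AND clinic distance is HIGH THEN risk is HIGH.
    # AND is taken as the minimum of the memberships (Mamdani-style conjunction).
    return min(low(income, 5000, 20000), high(distance_to_clinic_km, 5, 30))

print(f"risk = {health_shock_risk(income=6000, distance_to_clinic_km=25):.2f}")
# -> risk = 0.80: the household strongly matches the high-risk rule
```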

    Semantic model-driven framework for validating quality requirements of Internet of Things streaming data

    The rise of the Internet of Things (IoT) has provided platforms for real-time, data-driven services underpinning reactive applications and Smart City innovations. However, IoT streaming data are known to suffer from quality problems, which degrade the performance and accuracy of IoT-based reactive services and smart applications. This research investigates the suitability of a semantic approach to the run-time validation of IoT streaming data for quality problems. To realise this aim, Semantic IoT Streaming Data Validation and its framework (SISDaV) are proposed. The approach combines semantic query and reasoning technologies with semantic rules defined over established relationships with external data sources, taking into account specific run-time events that can influence stream quality. The work specifically targets quality issues relating to inconsistency, plausibility, and incompleteness in IoT streaming data. In particular, the investigation covers various RDF stream processing and rule-based reasoning techniques, as well as the effect of RDF serialisation formats on the reasoning process. The contributions of the work include a hierarchy of IoT data stream quality problems; a lightweight, evolving Smart Space and Sensor Measurement Ontology; generic time-aware validation rules; and the SISDaV framework, a unified semantic rule-based validation system for RDF-based IoT streaming data that combines a popular RDF stream processing system with generic, enhanced time-aware rules. The semantic validation process ensures that the raw streaming values produced by IoT nodes conform to the IoT streaming data quality requirements and the expected values, through a set of generic continuous validation rules realised by extending the popular Jena rule syntax with a time element. A comparative evaluation assesses the effectiveness and efficiency of SISDaV across serialised RDF data formats of differing expressivity, with results interpreted using relevant statistical estimates and performance metrics. The evaluation confirms the feasibility of the framework: the semantic validation process can be contained within the interval between sensor-node reads, and the framework provides additional capabilities that are missing from most related state-of-the-art RDF stream processing systems. The approach thus satisfies the main research objectives identified by the study.
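
    SISDaV expresses its continuous validation rules in an extended Jena syntax; as a language-neutral illustration of the same idea, the sketch below applies plausibility, completeness, and consistency checks to successive sensor readings, with a time element (the expected read interval) built into the rules. Field names, thresholds, and the rule set are assumptions, not SISDaV's actual rules.

```python
# Illustrative time-aware validation rules for a sensor stream.
# Thresholds, field names, and rules are hypothetical.
from datetime import datetime, timedelta

MAX_GAP = timedelta(seconds=30)  # assumed read interval of the sensor node

def validate(reading, previous):
    """Return the quality problems detected for one stream reading."""
    problems = []
    v = reading.get("value")
    if v is None:
        problems.append("incomplete: missing value")                  # incompleteness
    elif not -40.0 <= v <= 60.0:
        problems.append("implausible: outside sensor range")          # plausibility
    if previous is not None:
        if reading["ts"] - previous["ts"] > MAX_GAP:
            problems.append("incomplete: missed reads")               # time-aware rule
        elif v is not None and abs(v - previous["value"]) > 10.0:
            problems.append("inconsistent: spike vs. previous read")  # consistency
    return problems

prev = {"ts": datetime(2024, 1, 1, 12, 0, 0), "value": 21.5}
curr = {"ts": datetime(2024, 1, 1, 12, 0, 25), "value": 38.0}
print(validate(curr, prev))  # ['inconsistent: spike vs. previous read']
```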

    UK phenomics platform for developing and validating electronic health record phenotypes: CALIBER

    Objective: Electronic health records (EHRs) are a rich source of information on human diseases, but the information is variably structured, fragmented, curated using different coding systems, and collected for purposes other than medical research. We describe an approach for developing, validating, and sharing reproducible phenotypes from national structured EHR in the United Kingdom, with applications for translational research. Materials and Methods: We implemented a rule-based phenotyping framework with up to 6 approaches of validation. We applied our framework to a sample of 15 million individuals in a national EHR data source (population-based primary care, all ages) linked to hospitalization and death records in England. Data comprised continuous measurements (for example, blood pressure), medication information, and coded diagnoses, symptoms, procedures, and referrals, recorded using 5 controlled clinical terminologies: (1) Read (primary care, a subset of SNOMED-CT [Systematized Nomenclature of Medicine Clinical Terms]), (2) International Classification of Diseases, Ninth and Tenth Revisions (secondary care diagnoses and cause of mortality), (3) Office of Population Censuses and Surveys Classification of Surgical Operations and Procedures, Fourth Revision (hospital surgical procedures), and (4) dm+d prescription codes. Results: Using the CALIBER phenotyping framework, we created algorithms for 51 diseases, syndromes, biomarkers, and lifestyle risk factors, and provide up to 6 validation approaches. The EHR phenotypes are curated in the open-access CALIBER Portal (https://www.caliberresearch.org/portal) and have been used by 40 national and international research groups in 60 peer-reviewed publications. Conclusions: We describe a UK EHR phenomics approach within the CALIBER EHR data platform, with initial evidence of validity and use, as an important step toward international use of UK EHR data for health research.
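
    Concretely, a rule-based phenotype of this kind can be thought of as code lists across terminologies plus simple logic over linked records. The sketch below is an invented miniature in that style; the codes and logic are illustrative and are not a CALIBER algorithm.

```python
# Invented miniature of a rule-based EHR phenotype: code lists across
# terminologies plus simple logic over a linked patient record.
MI_PHENOTYPE = {
    "read": {"G30..", "G301."},   # primary care Read codes (illustrative)
    "icd10": {"I21", "I22"},      # hospital ICD-10 prefixes (illustrative)
}

def has_phenotype(patient: dict, phenotype: dict) -> bool:
    # Rule: any qualifying primary care code, OR any hospital diagnosis
    # whose ICD-10 code starts with a listed prefix.
    if phenotype["read"] & set(patient.get("read_codes", [])):
        return True
    return any(code.startswith(prefix)
               for code in patient.get("icd10_codes", [])
               for prefix in phenotype["icd10"])

patient = {"read_codes": ["H33.."], "icd10_codes": ["I219"]}
print(has_phenotype(patient, MI_PHENOTYPE))  # True, via the ICD-10 arm
```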

    Implementation of an intelligent control system

    A laboratory testbed facility constructed at NASA LeRC for the development of an Intelligent Control System (ICS) for reusable rocket engines is described. The framework of the ICS consists of a hierarchy of control and diagnostic functions. The traditional high-speed, closed-loop controller resides at the lowest level of the ICS hierarchy. Above this level reside the diagnostic functions that identify engine faults. The ICS top level consists of the coordination function, which manages the interaction between an expert system and a traditional control system. The purpose of the testbed is to demonstrate the feasibility of the ICS concept by implementing the ICS as the primary controller in a simulation of the Space Shuttle Main Engine (SSME). The functions of the ICS implemented in the testbed are as follows: an SSME dynamic simulation with selected fault mode models, a reconfigurable controller, a neural network for sensor validation, a model-based failure detection algorithm, a rule-based failure detection algorithm, a diagnostic expert system, an intelligent coordinator, and a user interface that provides a graphical representation of the events occurring within the testbed. The diverse nature of the ICS has led to a distributed architecture consisting of specialized hardware and software for the implementation of the various functions; the testbed is made up of five different computer systems. These individual computers are discussed, along with the schemes used to implement the various ICS components, the communication between computers, and the timing and synchronization between components.
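
    The coordination function sits above the diagnostic and control layers; the sketch below illustrates one plausible arbitration loop in that spirit. Module internals, sensor names, and limits are hypothetical and are not drawn from the testbed.

```python
# Hypothetical sketch of coordinator arbitration between sensor
# validation, rule-based fault detection, and controller reconfiguration.
def rule_based_fault_check(sensors: dict) -> list:
    faults = []
    if sensors["chamber_pressure"] > 3300:   # psi, illustrative limit
        faults.append("overpressure")
    if sensors["turbine_temp"] > 1050:       # K, illustrative limit
        faults.append("turbine_overtemp")
    return faults

def coordinate(sensors: dict, sensor_valid: bool) -> str:
    # The coordinator arbitrates between diagnostics and control:
    # invalid sensors -> fall back; faults -> reconfigure; else nominal.
    if not sensor_valid:
        return "use_analytical_redundancy"   # e.g., a model-based estimate
    faults = rule_based_fault_check(sensors)
    if faults:
        return f"reconfigure_controller({','.join(faults)})"
    return "nominal_closed_loop"

print(coordinate({"chamber_pressure": 3400, "turbine_temp": 900}, sensor_valid=True))
# -> reconfigure_controller(overpressure)
```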

    A knowledge-based approach to information extraction for semantic interoperability in the archaeology domain

    The paper presents a method for automatic semantic indexing of archaeological grey-literature reports using empirical (rule-based) Information Extraction techniques in combination with domain-specific knowledge organization systems. Performance is evaluated via the Gold Standard method. The semantic annotation system (OPTIMA) performs the tasks of Named Entity Recognition, Relation Extraction, Negation Detection and Word Sense Disambiguation using hand-crafted rules and terminological resources for associating contextual abstractions with classes of the standard ontology (ISO 21127:2006) CIDOC Conceptual Reference Model (CRM) for cultural heritage and its archaeological extension, CRM-EH, together with concepts from English Heritage thesauri and glossaries.

    Relation Extraction performance benefits from a syntax-based definition of relation extraction patterns derived from domain-oriented corpus analysis. The evaluation also shows clear benefits from the use of assistive NLP modules for word-sense disambiguation, negation detection and noun-phrase validation, together with controlled thesaurus expansion.

    The semantic indexing results demonstrate the capacity of rule-based Information Extraction techniques to deliver interoperable semantic abstractions (semantic annotations) with respect to the CIDOC CRM and archaeological thesauri. Major contributions include the recognition of relevant entities using shallow-parsing NLP techniques driven by a complementary use of ontological and terminological domain resources, and the empirical derivation of context-driven relation extraction rules for the recognition of semantic relationships in phrases of unstructured text. The semantic annotations have proven capable of supporting semantic query, document study and cross-searching via the ontology framework.
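
    As a toy example of context-driven, rule-based relation extraction with negation detection, the sketch below maps a surface pattern over text to an ontology property and suppresses negated statements. The pattern, entity classes, and property name are illustrative assumptions rather than OPTIMA's actual rules.

```python
# Toy rule-based relation extraction with negation detection.
# The pattern and the CRM-EH-style property name are illustrative.
import re

# Pattern: a deposit-like phrase, a depositional verb, then a find-like phrase.
RELATION_RULE = re.compile(
    r"(?P<context>(pit|ditch|layer)\s+\w+)\s+contained\s+(?P<find>[\w\s]+?)(\.|,|$)",
    re.I,
)
NEGATION = re.compile(r"\b(no|not|without)\b", re.I)

def extract_relations(sentence: str) -> list:
    m = RELATION_RULE.search(sentence)
    if not m or NEGATION.search(sentence):   # skip negated statements
        return []
    # Map the surface pattern to an ontology property (hypothetical name).
    return [(m.group("context"), "EHP_contained", m.group("find").strip())]

print(extract_relations("Pit 112 contained a sherd of Roman pottery."))
print(extract_relations("Ditch 7 contained no dating evidence."))  # [] (negated)
```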