
    Knowledge-infused and Consistent Complex Event Processing over Real-time and Persistent Streams

    Emerging applications in the Internet of Things (IoT) and Cyber-Physical Systems (CPS) present novel challenges to Big Data platforms for performing online analytics. Ubiquitous sensors in IoT deployments generate data streams at high velocity that carry information from a variety of domains and accumulate to large volumes on disk. Complex Event Processing (CEP) is recognized as an important real-time computing paradigm for analyzing continuous data streams. However, existing work on CEP is largely limited to relational query processing, exposing two distinct gaps in query specification and execution: (1) infusing the relational query model with higher-level knowledge semantics, and (2) seamless query evaluation across temporal spaces that span past, present, and future events. Closing these gaps enables accessible analytics over data streams with properties from different disciplines, and helps span the velocity (real-time) and volume (persistent) dimensions. In this article, we introduce a Knowledge-infused CEP (X-CEP) framework that provides domain-aware knowledge query constructs along with temporal operators that allow end-to-end queries to span real-time and persistent streams. We translate this query model into efficient query execution over online and offline data streams, proposing several optimizations to mitigate the overheads introduced by evaluating semantic predicates and by accessing high-volume historic data streams. The proposed X-CEP query model and execution approaches are implemented in our prototype semantic CEP engine, SCEPter. We validate our query model using domain-aware CEP queries from a real-world Smart Power Grid application, and experimentally analyze the benefits of our optimizations for executing these queries, using event streams from a campus-microgrid IoT deployment. Comment: 34 pages, 16 figures, accepted in Future Generation Computer Systems, October 27, 201
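    The core idea of a query spanning persistent (past) and real-time (present) streams can be illustrated with a toy sliding-window pattern match. Everything here is a hypothetical stand-in: the event schema (`ts`, `kw`), the `over_threshold` predicate, and the `detect_surges` matcher are illustrative only and do not reflect X-CEP's actual query language or SCEPter's API.

```python
from collections import deque
from datetime import datetime, timedelta

def over_threshold(event, limit=5.0):
    # Stand-in for a semantic predicate over a domain attribute.
    return event["kw"] > limit

def detect_surges(events, window=timedelta(minutes=5), count=3):
    """Yield a timestamp whenever `count` threshold-crossing readings
    fall within one sliding time window."""
    recent = deque()
    for ev in events:
        if not over_threshold(ev):
            continue
        recent.append(ev["ts"])
        # Drop readings that fell out of the window.
        while recent and ev["ts"] - recent[0] > window:
            recent.popleft()
        if len(recent) >= count:
            yield ev["ts"]

# Past (persistent) and present (real-time) events treated as one
# timestamp-ordered iterable -- the "end-to-end" span the query model targets.
t0 = datetime(2024, 1, 1)
historic = [{"ts": t0 + timedelta(minutes=i), "kw": 6.0} for i in range(2)]
live = [{"ts": t0 + timedelta(minutes=3), "kw": 7.5}]
alerts = list(detect_surges(historic + live))  # one alert, at t0 + 3 min
```

    The point of the sketch is that the matcher is indifferent to whether events were replayed from disk or arrived live, which is what lets a single query span both temporal spaces.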

    Promoting Students’ Reading Proficiency through Reciprocal Technique

    Teaching English at the high-school level (Junior High School and Senior High School) emphasizes reading and writing skills, targeting the final examination (UN). Consequently, students need to sharpen their reading and writing abilities. This also holds, without exception, for students in the non-formal senior high school program known as Kejar Paket C. Reading skill can be improved through more reading activities across various types of text; such activities build students' vocabularies and their knowledge of more text topics. Besides, the techniques, methods, and strategies teachers use to deliver the material also contribute to students' progress in reading. Given that students in Kejar Paket C come from varied backgrounds, are mostly from low economic levels, and are often less motivated to study, teachers should take greater care to present the material through an appropriate and effective technique. Carter (2001) describes a technique in which the teacher and students engage in dialogue, resulting in students learning how to construct meaning when they are placed in must-read situations (tests or assignments). This was experimental research conducted to find the difference in students' reading proficiency when taught using a technique called Reciprocal. The study found that the technique contributed significantly to improving students' reading proficiency after they were taught using Reciprocal. In brief, the Reciprocal technique is an effective way to improve students' reading proficiency.

    Analytical, Theoretical and Empirical Advances in Genome-Scale Algorithmics

    Ever-increasing amounts of complex biological data continue to come online daily. Examples include proteomic, transcriptomic, genomic and metabolomic data generated by a plethora of high-throughput methods. Accordingly, fast and effective data processing techniques are increasingly in demand. This dissertation addresses that need through an investigation of various algorithmic alternatives and enhancements to routine and traditional procedures in common use. In the analysis of gene co-expression data, for example, differential measures of entropy and variation are studied as augmentations to mere differential expression. These novel metrics are shown to help elucidate disease-related genes in wide assortments of case/control data. In a more theoretical spirit, limits on the worst-case behavior of density-based clustering methods are studied. It is proved, for instance, that the well-known paraclique algorithm, under proper tuning, can be guaranteed never to produce subgraphs with density less than 2/3. Transformational approaches to efficient algorithm design are also considered. Classic graph search problems are mapped to and from well-studied versions of satisfiability and integer linear programming. In so doing, regions of the input space are classified for which such transforms are effective alternatives to direct graph optimizations. In all these efforts, practical implementations are emphasized in order to advance the boundary of effective computation.
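    The density guarantee mentioned above is easy to state concretely: the edge density of a subgraph on vertex set V is |E| / C(|V|, 2), and the claim is that a properly tuned paraclique never emits a subgraph with density below 2/3. A minimal sketch of checking that bound on a hypothetical output subgraph (the example graph is invented, not taken from the dissertation):

```python
from itertools import combinations

def density(vertices, edges):
    """Edge density of an induced subgraph: |E| / C(|V|, 2)."""
    v = set(vertices)
    possible = len(v) * (len(v) - 1) // 2
    present = sum(1 for a, b in edges if a in v and b in v)
    return present / possible if possible else 1.0

# Hypothetical 5-vertex subgraph: a clique with two edges removed,
# roughly what a paraclique-style relaxation might emit.
verts = {1, 2, 3, 4, 5}
edges = [(a, b) for a, b in combinations(sorted(verts), 2)]
edges.remove((1, 2))
edges.remove((3, 4))

d = density(verts, edges)  # 8 of 10 possible edges -> 0.8
assert d >= 2 / 3          # the bound proved for properly tuned paraclique
```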

    How Do Welcome Statements Differ from Mission Statements?: The Salience of Genre

    In this analysis, we sought to identify key linguistic properties of mission statements and to explain how these properties function toward managerial purposes. Data included a target corpus of 920 community college mission statements (47,943 words), a domain-specific corpus of 632 “welcome statements” published on websites by community college presidents (173,534 words), and a general reference corpus extracted from the Corpus of Contemporary American English (16.53 million words) (Davies, 2017). We used specialized corpus linguistics software to generate standardized word frequencies and to tag each corpus for parts of speech. We then identified the words and parts of speech that were statistically underrepresented and overrepresented in mission statements relative to each reference corpus. Finally, we interpreted the findings using Fairclough’s (2003) framework for the analysis of genre.
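    Over- and underrepresentation of a word across two corpora is commonly measured with normalized frequencies and a keyness statistic such as Dunning's log-likelihood. The abstract does not name the software or statistic used, so this is a generic sketch with invented toy counts (the real corpora are 47,943 and 173,534 words):

```python
import math
from collections import Counter

def per_million(counts, total):
    """Standardize raw counts to frequency per million words."""
    return {w: c * 1_000_000 / total for w, c in counts.items()}

def log_likelihood(a, total_a, b, total_b):
    """Dunning's log-likelihood keyness for a word seen `a` times in
    corpus A (size total_a) and `b` times in corpus B (size total_b)."""
    e1 = total_a * (a + b) / (total_a + total_b)  # expected count in A
    e2 = total_b * (a + b) / (total_a + total_b)  # expected count in B
    ll = 0.0
    if a:
        ll += a * math.log(a / e1)
    if b:
        ll += b * math.log(b / e2)
    return 2 * ll

# Toy counts (hypothetical), not drawn from the study's corpora.
mission = Counter({"access": 12, "welcome": 1})
welcome = Counter({"access": 9, "welcome": 40})
tm, tw = 1000, 3000  # toy corpus sizes in words

pm = per_million(mission, tm)
ll = log_likelihood(mission["welcome"], tm, welcome["welcome"], tw)
# ll above ~3.84 corresponds to p < .05 on one degree of freedom,
# flagging "welcome" as overrepresented in the welcome-statement corpus.
```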

    Improving Hypoglycemia Protocol Compliance through Nursing Education

    Background: Nurses hold a vital role in glucose management to ensure safe, high-quality outcomes for hospitalized patients. Assessing serum and point-of-care glucose results and bringing abnormal results to the attention of the healthcare team can help maintain optimal management. Despite challenges to hypoglycemia protocol compliance, informed nurses can advocate effectively for their patients. Understanding insulin action and the effective use of evidence-based guidelines/protocols can help nurses promote optimal patient outcomes. Common barriers to glucose control and education on current best practices in the acute care setting were reviewed. Purpose: To improve hypoglycemia protocol compliance through education in the acute care setting, and to advance the Healthy People 2020 goals of reducing the disease and economic burden of diabetes and improving the quality of life for all persons with diabetes. Design and methods: A retrospective chart review of hypoglycemic episodes analyzed nursing behavior in rechecking blood sugars per hospital protocol, and education was implemented to improve compliance with the hypoglycemia management protocol. Conclusion: Thirteen registered nurses (N = 13) participated in the hypoglycemia protocol compliance training and pre-survey. A two-sample t-test was used at the end of the implementation to determine statistical significance between pre-survey and post-survey mean scores. Compliance with the hypoglycemia protocol is a quality measure at this healthcare system. Improving nursing staff compliance with the hypoglycemia protocol is likely to decrease cost and length of stay, improve the quality of patient care, and prevent avoidable deaths.
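    The comparison of pre- and post-survey means with a two-sample t-test can be sketched with Welch's t statistic, which does not assume equal variances. The scores below are hypothetical placeholders for 13 respondents, not the study's data, and the study may have used the pooled-variance variant instead:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic for independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = math.sqrt(va / na + vb / nb)                # standard error of the difference
    return (mean(sample_b) - mean(sample_a)) / se

# Hypothetical pre/post survey scores for 13 nurses.
pre  = [62, 70, 58, 65, 60, 68, 55, 72, 64, 59, 66, 61, 63]
post = [78, 85, 74, 80, 76, 83, 70, 88, 79, 75, 82, 77, 80]

t = welch_t(pre, post)
# A t value well above ~2 with these sample sizes indicates a
# statistically significant gain in post-survey mean scores.
```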

    Multipartite Graph Algorithms for the Analysis of Heterogeneous Data

    The explosive growth in the rate of data generation in recent years threatens to outpace the growth in computer power, motivating the need for new, scalable algorithms and big data analytic techniques. No field may be more emblematic of this data deluge than the life sciences, where technologies such as high-throughput mRNA arrays and next generation genome sequencing are routinely used to generate datasets of extreme scale. Data from experiments in genomics, transcriptomics, metabolomics and proteomics are continuously being added to existing repositories. A goal of exploratory analysis of such omics data is to illuminate the functions and relationships of biomolecules within an organism. This dissertation describes the design, implementation and application of graph algorithms, with the goal of seeking dense structure in data derived from omics experiments in order to detect latent associations between often heterogeneous entities, such as genes, diseases and phenotypes. Exact combinatorial solutions are developed and implemented, rather than relying on approximations or heuristics, even when problems are exceedingly large and/or difficult. Datasets on which the algorithms are applied include time series transcriptomic data from an experiment on the developing mouse cerebellum, gene expression data measuring acute ethanol response in the prefrontal cortex, and the analysis of a predicted protein-protein interaction network. A bipartite graph model is used to integrate heterogeneous data types, such as genes with phenotypes and microbes with mouse strains. The techniques are then extended to a multipartite algorithm to enumerate dense substructure in multipartite graphs, constructed using data from three or more heterogeneous sources, with applications to functional genomics. Several new theoretical results are given regarding multipartite graphs and the multipartite enumeration algorithm. In all cases, practical implementations are demonstrated to expand the frontier of computational feasibility.
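    The bipartite case of "dense substructure" can be made concrete with maximal bicliques: fully connected pairs of vertex subsets from the two sides, such as a gene set sharing a phenotype set. The brute-force enumerator and the gene/phenotype labels below are a toy illustration only; the dissertation's exact algorithms are designed for far larger instances.

```python
from itertools import combinations

def maximal_bicliques(edges, min_left=2, min_right=2):
    """Naive enumeration of maximal bicliques in a bipartite graph
    given as (left, right) edge pairs. Exponential in the left side;
    fine for toy instances, not for genome-scale data."""
    left = sorted({l for l, _ in edges})
    adj = {l: {r for ll, r in edges if ll == l} for l in left}
    found = set()
    for k in range(min_left, len(left) + 1):
        for group in combinations(left, k):
            shared = set.intersection(*(adj[l] for l in group))
            if len(shared) < min_right:
                continue
            # Maximal only if no further left vertex covers all of `shared`.
            if any(shared <= adj[l] for l in left if l not in group):
                continue
            found.add((group, tuple(sorted(shared))))
    return found

# Hypothetical genes x phenotypes edge list.
E = [("g1", "p1"), ("g1", "p2"), ("g2", "p1"), ("g2", "p2"),
     ("g3", "p2"), ("g3", "p3")]
cliques = maximal_bicliques(E)  # {(("g1", "g2"), ("p1", "p2"))}
```

    Extending this idea to three or more vertex classes (e.g. genes, phenotypes, and strains) is what the multipartite enumeration algorithm generalizes.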