60 research outputs found

    Turbulent Flow and Combustion in Homogeneous Charge Compression Ignition Engines with Detailed Chemical Kinetics

    Homogeneous Charge Compression Ignition (HCCI) engines have the potential to achieve higher thermal efficiency and lower emissions than conventional Internal Combustion (IC) engines. However, controlling HCCI combustion is critical to realizing these advantages. In this dissertation, an integrated numerical solver (the CKL solver) has been developed by coupling the original KIVA-3V solver with CHEMKIN and Large Eddy Simulation. The solver has been validated against available experimental results, and has been employed to evaluate the combustion performance of the innovative HCCI combustion strategy with the Internal Mixing and Reformation (IMR) chamber proposed in the present study. The results show that: (1) the CKL solver provides detailed information on HCCI combustion, including turbulent flow structures, temperature fields, concentration fields of all species involved, including emissions (NOx, CO, HC), engine performance (indicated mean effective pressure (IMEP), heat release rate (HRR), thermal efficiency), and spray-flow interactions; (2) the CKL solver predicts averaged pressure, IMEP, thermal efficiency, emissions and HRR in good agreement with the corresponding experimental data, demonstrating that it can be applied to practical engineering problems, with accuracy, depending on intake temperature, of 5-10% for IMEP and 1-7.5% for peak pressure; (3) the functions of the IMR chamber have been demonstrated and evaluated, showing that IMR is a promising combustion strategy that merits further investigation.
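
    The abstract reports accuracy figures for IMEP, which has a standard definition independent of the CKL solver: net indicated work per cycle divided by displaced volume. A minimal sketch of that definition, with made-up cycle data and trapezoidal integration (purely illustrative, not part of the dissertation's solver):

```scala
object ImepDemo {
  // IMEP = (closed-cycle integral of p dV) / V_d, computed here by
  // trapezoidal integration over matched pressure/volume samples.
  def imep(p: Seq[Double], v: Seq[Double]): Double = {
    require(p.length == v.length && p.length >= 2)
    val work = (1 until p.length)
      .map(i => 0.5 * (p(i) + p(i - 1)) * (v(i) - v(i - 1)))
      .sum
    work / (v.max - v.min) // displaced volume
  }

  def main(args: Array[String]): Unit = {
    // Idealized rectangular cycle: expand at p = 2 from V = 1 to 2,
    // compress back at p = 1. Net work = 1, displaced volume = 1.
    val p = Seq(2.0, 2.0, 1.0, 1.0, 2.0)
    val v = Seq(1.0, 2.0, 2.0, 1.0, 1.0)
    println(imep(p, v)) // 1.0
  }
}
```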

    Personalized News Recommender using Twitter

    Online news reading has become a widely popular way to read news articles from sources around the globe. With the enormous number of articles available, users are easily swamped by information of little interest to them. News recommender systems help users find interesting articles by presenting them according to individual interests rather than in order of occurrence. In this thesis, we present our research on developing a personalized news recommendation system with the help of the popular micro-blogging service Twitter. News articles are ranked by popularity, which is identified from tweets in Twitter's public timeline. In addition, user profiles are built from each user's interests, and articles are ranked by matching their characteristics against the profile. Combining these two approaches, we present a hybrid news recommendation model that recommends interesting news stories to the user based on both their popularity and their relevance to the user profile.
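
    The hybrid model described above blends tweet-derived popularity with relevance to the user profile. A minimal sketch of such a blended score, where the weighting `alpha` and the Jaccard term similarity are illustrative assumptions, not taken from the thesis:

```scala
object HybridRankDemo {
  case class Article(title: String, popularity: Double, terms: Set[String])

  // Assumed similarity measure: Jaccard overlap between article terms
  // and the terms in the user profile.
  def similarity(terms: Set[String], profile: Set[String]): Double =
    if (terms.isEmpty && profile.isEmpty) 0.0
    else (terms & profile).size.toDouble / (terms | profile).size

  // Hybrid score: weighted blend of popularity and profile relevance.
  def score(a: Article, profile: Set[String], alpha: Double): Double =
    alpha * a.popularity + (1 - alpha) * similarity(a.terms, profile)

  def main(args: Array[String]): Unit = {
    val profile = Set("scala", "parsing")
    val articles = List(
      Article("Celebrity news", popularity = 0.9, terms = Set("celebrity")),
      Article("Parser tricks", popularity = 0.3, terms = Set("scala", "parsing")))
    // A relevant but less popular story can outrank a popular one.
    val ranked = articles.sortBy(a => -score(a, profile, alpha = 0.4))
    ranked.foreach(a => println(a.title))
  }
}
```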

    Specialising Parsers for Queries

    Many software systems consist of data processing components that analyse large datasets to gather information and learn from them. Often, only part of the data is relevant for analysis, so data processing systems include an initial preprocessing step that filters out the unwanted information. While efficient data analysis techniques and methodologies are accessible to non-expert programmers, data preprocessing tends to be forgotten or, worse, ignored, despite the real performance gains that efficient preprocessing can deliver. Implementations of the preprocessing step traditionally trade modularity for performance: the modular version separates parsing the raw data from filtering it, but is slow because intermediate objects are created during execution; the efficient version is a low-level implementation that interleaves parsing and querying. In this dissertation we demonstrate a principled and practical technique for converting the modular, maintainable program into its interleaved, efficient counterpart. Key to achieving this objective is the removal, or deforestation, of intermediate objects in a program execution. We first show that by encoding data types using Böhm-Berarducci encodings (often referred to as Church encodings), and combining these with partial evaluation for function composition, we achieve deforestation. This allows us to implement optimisations themselves as libraries, with minimal dependence on an underlying optimising compiler. Next we illustrate the applicability of this approach to parsing and preprocessing queries. The approach is general enough to cover top-down and bottom-up parsing techniques, and deforestation of pipelines of operations on lists and streams. We finally present a set of transformation rules that, given a parser for a nested data format and a query on the structure, produce a parser specialised for the query. As a result we preserve the modularity of writing parsers and queries separately while also minimising resource usage. These transformation rules combine deforested implementations of both libraries to yield an efficient, interleaved result.
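
    The Böhm-Berarducci (Church) encoding mentioned above can be sketched in a few lines of Scala. The names below are illustrative, not taken from the dissertation; the point is that `map` and `filter` defined directly on the encoded fold compose without allocating intermediate lists:

```scala
object ChurchListDemo {
  // A Böhm-Berarducci-encoded list: a list is represented by its fold,
  // i.e. by how it consumes a "cons" and a "nil".
  trait CList[A] { def fold[R](cons: (A, R) => R, nil: R): R }

  def fromSeq[A](xs: Seq[A]): CList[A] = new CList[A] {
    def fold[R](cons: (A, R) => R, nil: R): R = xs.foldRight(nil)(cons)
  }

  // map and filter rewrite the fold itself: composing them builds no
  // intermediate data structure (deforestation).
  def map[A, B](l: CList[A])(f: A => B): CList[B] = new CList[B] {
    def fold[R](cons: (B, R) => R, nil: R): R =
      l.fold[R]((a, r) => cons(f(a), r), nil)
  }

  def filter[A](l: CList[A])(p: A => Boolean): CList[A] = new CList[A] {
    def fold[R](cons: (A, R) => R, nil: R): R =
      l.fold[R]((a, r) => if (p(a)) cons(a, r) else r, nil)
  }

  // Materialize only at the very end of the pipeline.
  def toList[A](l: CList[A]): List[A] =
    l.fold[List[A]]((a, r) => a :: r, Nil)

  def main(args: Array[String]): Unit = {
    val pipeline = filter(map(fromSeq(1 to 10))(_ * 2))(_ % 3 == 0)
    println(toList(pipeline)) // List(6, 12, 18) — input traversed once
  }
}
```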

    SAGE: Sequential Attribute Generator for Analyzing Glioblastomas using Limited Dataset

    While deep learning approaches have shown remarkable performance in many imaging tasks, most of these methods rely on the availability of large quantities of data. Medical image data, however, is scarce and fragmented. Generative Adversarial Networks (GANs) have recently been very effective in handling such datasets by generating more data. If the datasets are very small, however, GANs cannot learn the data distribution properly, resulting in less diverse or low-quality results. One such limited dataset concerns the concurrent gain of chromosomes 19 and 20 (19/20 co-gain), a mutation with positive prognostic value in Glioblastomas (GBM). In this paper, we detect imaging biomarkers for this mutation to streamline the extensive and invasive prognosis pipeline. Since the mutation is relatively rare and the dataset correspondingly small, we propose a novel generative framework, the Sequential Attribute GEnerator (SAGE), that generates detailed tumor imaging features while learning from a limited dataset. Experiments show that not only does SAGE generate higher-quality tumors than a standard Deep Convolutional GAN (DC-GAN) and a Wasserstein GAN with Gradient Penalty (WGAN-GP), it also captures the imaging biomarkers accurately.

    Accelerating parser combinators with macros

    Parser combinators provide an elegant way of writing parsers: parser implementations closely follow the structure of the underlying grammar, while accommodating interleaved host-language code for data processing. However, the host-language features used for composition introduce substantial overhead, which leads to poor performance. In this paper, we present a technique to systematically eliminate this overhead. We use Scala macros to analyse the grammar specification at compile time and remove the composition overhead, leaving behind an efficient top-down, recursive-descent parser. We compare our macro-based approach to a staging-based approach using the LMS framework, and provide an experience report in which we discuss the advantages and drawbacks of both methods. Our library outperforms Scala's standard parser combinators on a set of benchmarks by an order of magnitude, and is 2x faster than code generated by LMS.
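
    As background to the abstract above, a minimal, unoptimised parser-combinator sketch (deliberately naive, and unrelated to the paper's macro-based library) shows how the combinators mirror the grammar and where the composition overhead the paper eliminates comes from:

```scala
object ParserDemo {
  // A parser is a function from (input, position) to an optional
  // (result, next position) — each combinator adds a closure layer,
  // which is exactly the overhead the paper's macros remove.
  type Parser[A] = (String, Int) => Option[(A, Int)]

  def char(c: Char): Parser[Char] = (s, i) =>
    if (i < s.length && s(i) == c) Some((c, i + 1)) else None

  def digit: Parser[Int] = (s, i) =>
    if (i < s.length && s(i).isDigit) Some((s(i) - '0', i + 1)) else None

  // Sequencing: run p, then feed its result into the next parser.
  def flatMap[A, B](p: Parser[A])(f: A => Parser[B]): Parser[B] =
    (s, i) => p(s, i).flatMap { case (a, j) => f(a)(s, j) }

  def map[A, B](p: Parser[A])(f: A => B): Parser[B] =
    (s, i) => p(s, i).map { case (a, j) => (f(a), j) }

  // Grammar "digit '+' digit", evaluated to the sum: the combinator
  // composition follows the grammar structure directly.
  val sum: Parser[Int] =
    flatMap(digit)(a => flatMap(char('+'))(_ => map(digit)(b => a + b)))

  def main(args: Array[String]): Unit =
    println(sum("3+4", 0)) // Some((7,3))
}
```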

    Fold-based fusion as a library: a generative programming pearl

    Fusion is a program optimisation technique commonly implemented using special-purpose compiler support. In this paper, we present an alternative approach, implementing fold-based fusion as a standalone library. We use staging to compose operations on folds; the operations are partially evaluated away, yielding code that does not construct unnecessary intermediate data structures. The technique extends to partitioning and grouping of collections.
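
    A minimal, non-staged sketch of the fold-as-a-value idea (names assumed, not the paper's API): packaging a fold's step function, initial state and final extraction as one object lets several aggregations be zipped together and run in a single pass over the data:

```scala
object FoldDemo {
  trait Fold[A, B] { self =>
    type S                       // internal accumulator type
    def init: S
    def step(s: S, a: A): S
    def done(s: S): B
    def run(xs: Iterable[A]): B = done(xs.foldLeft(init)(step))

    // Combine two folds into one: the input is traversed only once,
    // producing both results — a library-level form of fusion.
    def zip[C](that: Fold[A, C]): Fold[A, (B, C)] = new Fold[A, (B, C)] {
      type S = (self.S, that.S)
      def init = (self.init, that.init)
      def step(s: S, a: A) = (self.step(s._1, a), that.step(s._2, a))
      def done(s: S) = (self.done(s._1), that.done(s._2))
    }
  }

  val sum: Fold[Int, Int] = new Fold[Int, Int] {
    type S = Int
    def init = 0
    def step(s: Int, a: Int) = s + a
    def done(s: Int) = s
  }

  val count: Fold[Int, Int] = new Fold[Int, Int] {
    type S = Int
    def init = 0
    def step(s: Int, a: Int) = s + 1
    def done(s: Int) = s
  }

  def main(args: Array[String]): Unit =
    println(sum.zip(count).run(List(1, 2, 3, 4))) // (10,4) in one pass
}
```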

    What are the Odds? Probabilistic programming in Scala

    Probabilistic programming is a powerful high-level paradigm for probabilistic modeling and inference. We present Odds, a small domain-specific language (DSL) for probabilistic programming, embedded in Scala. Odds provides first-class support for random variables and probabilistic choice, while reusing Scala's abstraction and modularity facilities for composing probabilistic computations and for executing deterministic program parts. Odds accurately represents possibly dependent random variables using a probability monad that models committed choice. This monadic representation of probabilistic models can be combined with a range of inference procedures. We present engines for exact inference, rejection sampling and importance sampling with look-ahead, but other types of solvers are conceivable as well. We evaluate Odds on several non-trivial probabilistic programs from the literature and we demonstrate how the basic probabilistic primitives can be used to build higher-level abstractions, such as rule-based logic programming facilities, using advanced Scala features.
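
    A probability monad of the general kind the abstract describes can be sketched as a weighted list of outcomes; this is an illustrative toy for exact inference, not the Odds implementation:

```scala
object ProbDemo {
  // A distribution is a list of (value, probability) branches.
  // flatMap threads probabilistic choice, so dependent random
  // variables compose with ordinary for-comprehensions.
  case class Dist[A](branches: List[(A, Double)]) {
    def flatMap[B](f: A => Dist[B]): Dist[B] =
      Dist(for ((a, p) <- branches; (b, q) <- f(a).branches)
        yield (b, p * q))
    def map[B](f: A => B): Dist[B] =
      Dist(branches.map { case (a, p) => (f(a), p) })
    // Exact inference: total mass of branches satisfying a predicate.
    def prob(pred: A => Boolean): Double =
      branches.collect { case (a, p) if pred(a) => p }.sum
  }

  def flip(p: Double): Dist[Boolean] =
    Dist(List((true, p), (false, 1 - p)))

  def main(args: Array[String]): Unit = {
    // Probability that at least one of two fair coin flips is heads.
    val atLeastOne = for (a <- flip(0.5); b <- flip(0.5)) yield a || b
    println(atLeastOne.prob(b => b)) // 0.75
  }
}
```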

    Commercialisation of eHealth Innovations in the Market of UK Healthcare Sector: A Framework for Sustainable Business Model.

    This is the peer reviewed version of the following article: Festus Oluseyi Oderanti, and Feng Li, ‘Commercialization of eHealth innovations in the market of the UK healthcare sector: A framework for a sustainable business model’, Psychology & Marketing, Vol. 35 (2): 120-137, February 2018, which has been published in final form at https://doi.org/10.1002/mar.21074. Under embargo until 10 January 2020. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.

    Demographic trends with extended life expectancy are placing increasing pressure on the UK's state-funded healthcare budgets. eHealth innovations are expected to open new avenues for cost-effective and safe methods of care, to enable elderly people to live independently in their own homes, and to assist governments in coping with these demographic challenges. However, despite heavy investment in these innovations, large-scale deployment of eHealth continues to face significant obstacles, and the lack of sustainable business models (BMs) is widely regarded as among the greatest barriers. Through various empirical methods, including facilitated workshops, case studies of relevant organizations, and user groups, this paper investigates why the private market for eHealth innovations has proved difficult to establish, and develops a framework for sustainable BMs that could eliminate barriers to the commercialization of eHealth innovation. The results suggest that, to achieve sustainable commercialization, BM frameworks and innovation diffusion characteristics should be considered complements, not substitutes. Peer reviewed.

    ExPASy: SIB bioinformatics resource portal

    ExPASy (http://www.expasy.org) has a worldwide reputation as one of the main bioinformatics resources for proteomics. It has now evolved into an extensible and integrative portal providing access to many scientific resources, databases and software tools in different areas of the life sciences. Scientists can now seamlessly access a wide range of resources in many different domains, such as proteomics, genomics, phylogeny/evolution, systems biology, population genetics and transcriptomics. The individual resources (databases, web-based and downloadable software tools) are hosted in a ‘decentralized' way by different groups of the SIB Swiss Institute of Bioinformatics and partner institutions. Specifically, a single web portal provides a common entry point to a wide range of resources developed and operated by different SIB groups and external institutions. The portal features a search function across ‘selected' resources, and the availability and usage of resources are monitored. The portal is aimed at both expert users and people who are not familiar with a specific domain in the life sciences. The new web interface provides, in particular, visual guidance for newcomers to ExPASy.