2,037 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF

    CrunchGPT: A chatGPT assisted framework for scientific machine learning

    Full text link
    Scientific Machine Learning (SciML) has recently advanced across many areas of computational science and engineering. The objective is to integrate data and physics seamlessly, without the need to employ elaborate and computationally taxing data assimilation schemes. However, preprocessing, problem formulation, code generation, postprocessing and analysis remain time consuming and may prevent SciML from achieving wide applicability in industrial settings and in digital twin frameworks. Here, we integrate the various stages of SciML under the umbrella of ChatGPT to formulate CrunchGPT, which plays the role of a conductor orchestrating the entire SciML workflow based on simple prompts from the user. Specifically, we present two examples that demonstrate the potential use of CrunchGPT in optimizing airfoils in aerodynamics and in obtaining flow fields in various geometries in interactive mode, with emphasis on the validation stage. To demonstrate the flow of CrunchGPT and create an infrastructure that can facilitate a broader vision, we built a webapp-based guided user interface that includes options for a comprehensive summary report. The overall objective is to extend CrunchGPT to handle diverse problems in computational mechanics, design, optimization and controls, and general scientific computing tasks involved in SciML, using it not only as a research assistant tool but also as an educational tool. While the examples here focus on fluid mechanics, future versions will target solid mechanics and materials science, geophysics, systems biology and bioinformatics.
    Comment: 20 pages, 26 figures
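
    The listing does not include CrunchGPT's code. As a minimal sketch of the orchestration idea described above (an LLM acting as the conductor of a staged SciML workflow), consider the following, in which call_llm and every stage function are hypothetical placeholders rather than the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowState:
    """Carries the user's prompt and the artifacts produced by each stage."""
    prompt: str
    artifacts: dict = field(default_factory=dict)

def call_llm(instruction: str, context: str) -> str:
    # Placeholder: a real system would call a ChatGPT-style API here.
    return f"[LLM output for: {instruction} | {context[:40]}]"

def formulate_problem(state: WorkflowState) -> None:
    state.artifacts["spec"] = call_llm(
        "Turn this request into a solver specification:", state.prompt)

def generate_code(state: WorkflowState) -> None:
    state.artifacts["code"] = call_llm(
        "Generate solver code for this specification:", state.artifacts["spec"])

def validate(state: WorkflowState) -> None:
    state.artifacts["report"] = call_llm(
        "Validate and summarize these results:", str(state.artifacts))

def run(prompt: str) -> dict:
    # The "conductor": walk the pipeline stages in order for one prompt.
    state = WorkflowState(prompt)
    for stage in (formulate_problem, generate_code, validate):
        stage(state)
    return state.artifacts

print(run("Optimize a NACA 2412 airfoil for lift-to-drag ratio."))
```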

    20th SC@RUG 2023 proceedings 2022-2023

    Get PDF

    Local Editing in Lempel-Ziv Compressed Data

    Get PDF
    This thesis explores the problem of editing data while it is compressed by a variant of Lempel-Ziv compression. We show that the random-access properties of LZ-End compression allow random edits, and we present the first algorithm to achieve this. The thesis goes on to adapt the LZ-End parsing so that the random-access property becomes a local-access property, which carries tighter memory bounds. Furthermore, the new parsing allows a much improved algorithm for editing the compressed data.
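
    For context, LZ-End (the parsing the thesis builds on) restricts each phrase to copy a source that ends exactly at an earlier phrase boundary; this restriction is what makes random access, and hence editing, tractable. Below is a naive quadratic sketch in Python, for illustration only: practical parsers use suffix-based data structures, and this is not the thesis's editing algorithm.

```python
def lz_end_parse(s: str):
    """Naive LZ-End parser (after Kreft & Navarro). Each phrase is the
    longest prefix of the unparsed suffix that also occurs in the parsed
    text ending exactly at a previous phrase boundary, plus one literal
    character. Quadratic time; for illustration only."""
    phrases = []  # (source phrase index or None, copy length, literal)
    ends = []     # end position (exclusive) of each parsed phrase
    i = 0
    while i < len(s):
        best_len, best_src = 0, None
        for p, b in enumerate(ends):
            # longest L with s[b-L:b] == s[i:i+L], keeping one literal char
            for L in range(min(b, len(s) - i - 1), best_len, -1):
                if s[b - L:b] == s[i:i + L]:
                    best_len, best_src = L, p
                    break
        phrases.append((best_src, best_len, s[i + best_len]))
        i += best_len + 1
        ends.append(i)
    return phrases

def lz_end_decode(phrases) -> str:
    """Invert lz_end_parse; sources lie entirely in already-written output."""
    out, ends = [], []
    for src, length, lit in phrases:
        if length:
            b = ends[src]
            out.extend(out[b - length:b])
        out.append(lit)
        ends.append(len(out))
    return "".join(out)

assert lz_end_decode(lz_end_parse("abracadabra")) == "abracadabra"
```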

    Academic writing for IT students

    Get PDF
    This textbook is intended for Master's and PhD Information Technology students (B1-C1 level of English proficiency). It gives instructions on how to write a research paper in English, together with relevant exercises. The peculiarities of each section of a paper are presented. The exercises are based on real scientific materials taken from peer-reviewed journals. The subject area covers a wide scope of different Information Technology domains.

    IMAGINING, GUIDING, PLAYING INTIMACY: A Theory of Character Intimacy Games

    Get PDF
    Within the landscape of Japanese media production, and video game production in particular, there is a niche comprising video games centered on establishing, developing, and fulfilling imagined intimate relationships with anime-manga characters. This niche, although very significant in production volume and lifespan, is left unexplored or underexplored; when it is not, it is subsumed within the scope of wider anime-manga media. This obscures the nature of such video games, which are alternatively identified with descriptors including, but not limited to, ‘visual novel’, ‘dating simulator’ and ‘adult computer game’. As games centered on developing intimacy with characters, they present specific ensembles of narrative content, aesthetics and software mechanics. These ensembles are aimed at eliciting in users what are, for all intents and purposes, parasocial phenomena towards the game’s characters. In other words, these software products encourage players to develop affective and bodily responses towards characters. They are set up in a way that is coherent with shared, circulating scripts for sexual and intimate interaction, in order to guide players’ imaginative action. This study defines such games as ‘character intimacy games’: video game software whose traversal is contingent on players knowingly establishing, developing, and fulfilling intimate bonds with fictional characters. To do so, however, players must recognize themselves as playing that type of game and as looking to develop that kind of response towards the game’s characters. Character intimacy games are contingent upon players developing affective and bodily responses, and thus presume that players are, at the very least, non-hostile towards their development. This study takes Japanese character intimacy games as its corpus and operates at the intersection of communication studies, AMO studies and game studies. It articulates a research approach based on the double need to approach single works of significance amidst a general scarcity of scholarly background on the subject. It juxtaposes data-driven approaches derived from fan-curated databases (The Visual Novel Database and Erogescape - Erogē Hyōron Kūkan) with a purpose-created ludo-hermeneutic process. By observing character intimacy games through fan-curated data and building ludo-hermeneutics on the resulting ontology, this study argues that character intimacy games are video games where traversal is contingent on players knowingly establishing, developing, and fulfilling intimate bonds with fictional characters and recognizing themselves as doing so. To produce these conditions, the assemblage of software mechanics and narrative content in such games facilitates intimacy between player and characters. This is, ultimately, conducive to the emergence of parasocial phenomena. Parasocial phenomena, in turn, are deployed as an integral assumption regarding player activity within the game’s wider assemblage of narrative content and software mechanics.

    Optimizing scientific communication: the role of relative clauses as markers of complexity in English and German scientific writing between 1650 and 1900

    Get PDF
    The aim of this thesis is to show that both scientific English and scientific German became increasingly optimized for scientific communication between 1650 and 1900 by adapting their usage of relative clauses as markers of grammatical complexity. While the lexico-grammatical changes in terms of features and their frequency distribution in scientific writing during this period are well documented, the present work is interested in the underlying factors driving these changes and how they affect efficient scientific communication. As the scientific register emerges and evolves, it continuously adapts to the changing communicative needs posed by extra-linguistic pressures arising from the scientific community and its achievements. We assume that, over time, scientific language maintains communicative efficiency by balancing lexico-semantic expansion with a reduction in (lexico-)grammatical complexity on different linguistic levels. This is based on the idea that linguistic complexity affects processing difficulty and, in turn, communicative efficiency. To achieve optimization, complexity is adjusted on the level of lexico-grammar, which is related to expectation-based processing cost, and of syntax, which is linked to working-memory-based processing cost. We conduct five corpus-based studies comparing English and German scientific writing to general language. The first two investigate the development of relative clauses in terms of lexico-grammar, measuring the paradigmatic richness and syntagmatic predictability of relativizers as indicators of expectation-based processing cost. The results confirm that both levels undergo a reduction in complexity over time. The other three studies focus on the syntactic complexity of relative clauses, investigating syntactic intricacy, locality, and accessibility. Results show that intricacy and locality decrease, leading to lower grammatical complexity and thus mitigating memory-based processing cost. However, accessibility is not a factor in complexity reduction over time. Our studies reveal a register-specific diachronic complexity reduction in scientific language in both lexico-grammar and syntax. The cross-linguistic comparison shows that English is more advanced in its register-specific development, while German lags behind due to the later establishment of the vernacular as a language of scientific communication.
    This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 232722074 – SFB 110.
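
    The expectation-based processing cost mentioned above is standardly operationalized as surprisal, -log2 p(word | context): the less predictable a relativizer is in its context, the costlier it is assumed to be to process. A toy sketch with an add-one-smoothed bigram estimate follows; the corpus, smoothing and vocabulary size are illustrative assumptions, not the thesis's actual corpora or models.

```python
import math
from collections import Counter

# Toy corpus; a real study would use historical scientific corpora.
corpus = ("the method which we use and the result which follows "
          "a method that fails").split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(prev: str, word: str, vocab: int = 1000) -> float:
    """-log2 p(word | prev) in bits; higher means less predictable."""
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)  # add-one smoothing
    return -math.log2(p)

print(surprisal("method", "which"))  # relativizer attested in this context
print(surprisal("use", "which"))     # relativizer unseen in this context
```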

    Advances in automatic terminology processing: methodology and applications in focus

    Get PDF
    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.
    The information and knowledge era in which we are living creates challenges in many fields, and terminology is no exception. The challenges include an exponential growth in the number of specialised documents available, in which terms are presented, and in the number of newly introduced concepts and terms, which are already beyond our (manual) capacity. A promising solution to this ‘information overload’ would be to employ automatic or semi-automatic procedures that enable individuals and/or small groups to efficiently build high-quality terminologies from their own resources, closely reflecting their individual objectives and viewpoints. Automatic terminology processing (ATP) techniques have already proved to be quite reliable and can save human time in terminology processing. However, they are not without weaknesses, one of which is that these techniques often treat terms as independent lexical units satisfying some criteria, when terms are, in fact, integral parts of a coherent system (a terminology). This observation is supported by the discussion of the notion of terms and terminology and by the review of existing ATP approaches presented in this thesis. In order to overcome the aforementioned weakness, we propose a novel ATP methodology which is able to extract a terminology as a whole. The proposed methodology is based on knowledge patterns automatically extracted from glossaries, which we consider valuable but overlooked resources. These automatically identified knowledge patterns are used to extract terms, their relations and their descriptions from corpora. The extracted information can facilitate the construction of a terminology as a coherent system. The study also discusses applications of ATP and describes an experiment in which ATP is integrated into a new NLP application: multiple-choice test item generation. The successful integration of the system shows that ATP is a viable technology and should be exploited more by other NLP applications.
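
    As a minimal illustration of the knowledge-pattern idea, a single hard-coded Hearst-style "X is a Y" pattern can pull candidate terms and is-a relations from text. This stands in for the patterns the thesis extracts automatically from glossaries; it is not the thesis's actual method.

```python
import re

# One crude lexico-syntactic pattern: "<term> is a/an <hypernym phrase>".
IS_A = re.compile(
    r"\b([\w-]+(?:\s+[\w-]+)?)\s+is\s+an?\s+([\w-]+(?:\s+[\w-]+){0,2})",
    re.IGNORECASE,
)

def extract_relations(text: str):
    """Return (term, hypernym phrase) candidates matched by the pattern."""
    return [(m.group(1), m.group(2)) for m in IS_A.finditer(text)]

sample = ("A parser is a program that analyses syntax. "
          "Tokenization is a preprocessing step.")
print(extract_relations(sample))
# [('A parser', 'program that analyses'), ('Tokenization', 'preprocessing step')]
```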

    Northeastern Illinois University, Academic Catalog 2023-2024

    Get PDF
    https://neiudc.neiu.edu/catalogs/1064/thumbnail.jp