
    Pristup specifikaciji i generisanju proizvodnih procesa zasnovan na inženjerstvu vođenom modelima [An approach to the specification and generation of production processes based on model-driven engineering]

    Get PDF
    In this thesis, we present an approach to production process specification and generation based on the model-driven paradigm, aiming to increase the flexibility of factories and to respond more efficiently to the challenges that emerged in the era of Industry 4.0. To formally specify production processes and their variations in the Industry 4.0 environment, we created a novel domain-specific modeling language whose models are machine-readable. The language can model production processes that are independent of any particular production system, so that the same process models can be reused across different production systems, as well as process models tailored to a specific production system. To automatically transform system-specific production process models into instructions to be executed by production system resources, we created an instruction generator. We also created generators for manufacturing documentation, which automatically transform production process models into manufacturing documents of different types. The proposed approach, domain-specific modeling language, and software solution contribute to introducing factories into the digital transformation process. As factories must rapidly adapt to new products and their variations in the era of Industry 4.0, production must be led dynamically, with instructions sent automatically to factory resources depending on the products to be created on the shop floor. The proposed approach contributes to creating such a dynamic environment in contemporary factories, as it allows instructions to be generated automatically from process models and sent to resources for execution. Additionally, with numerous different products and their variations, keeping the required manufacturing documentation up to date becomes challenging; the proposed approach automates this task and thus significantly reduces process designers' time.
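
    As an illustration of the idea, the following minimal Python sketch shows a toy, machine-readable process model and an instruction generator of the kind described above. The schema, capability names, and resource identifiers are assumptions made for illustration, not the thesis's actual metamodel.

    # A system-independent process model: ordered steps with abstract capabilities.
    process_model = {
        "product": "housing-v2",
        "steps": [
            {"id": 1, "capability": "drill", "params": {"diameter_mm": 5}},
            {"id": 2, "capability": "assemble", "params": {"part": "cover"}},
        ],
    }

    # Binding of abstract capabilities to the resources of one concrete
    # production system; a different factory would supply a different binding.
    resource_binding = {"drill": "cnc-01", "assemble": "robot-arm-03"}

    def generate_instructions(model, binding):
        """Transform a system-bound process model into executable instructions."""
        for step in model["steps"]:
            yield {"resource": binding[step["capability"]],
                   "action": step["capability"], **step["params"]}

    for instruction in generate_instructions(process_model, resource_binding):
        print(instruction)  # in the approach above, these are sent to shop-floor resources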

    Unifying context with labeled property graph: A pipeline-based system for comprehensive text representation in NLP

    Get PDF
    Extracting valuable insights from vast amounts of unstructured digital text presents significant challenges across diverse domains. This research addresses this challenge by proposing a novel pipeline-based system that generates domain-agnostic and task-agnostic text representations. The proposed approach leverages labeled property graphs (LPG) to encode contextual information, facilitating the integration of diverse linguistic elements into a unified representation. The proposed system enables efficient graph-based querying and manipulation by addressing the crucial aspects of comprehensive context modeling and fine-grained semantics. The effectiveness of the proposed system is demonstrated through the implementation of NLP components that operate on LPG-based representations. Additionally, the proposed approach introduces specialized patterns and algorithms to enhance specific NLP tasks, including nominal mention detection, named entity disambiguation, event enrichment, event participant detection, and temporal link detection. The evaluation of the proposed approach, using the MEANTIME corpus of manually annotated documents, provides encouraging results and valuable insights into the system's strengths. The proposed pipeline-based framework serves as a solid foundation for future research, aiming to refine and optimize LPG-based graph structures to generate comprehensive and semantically rich text representations, addressing the challenges associated with efficient information extraction and analysis in NLP.
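
    To make the representation concrete, the following is a minimal sketch of encoding a sentence as a labeled property graph with networkx. The node and edge labels and their properties are illustrative assumptions, not the paper's actual schema.

    import networkx as nx

    g = nx.MultiDiGraph()

    # Token nodes carry linguistic properties; mention and event nodes add semantics.
    for i, (tok, pos) in enumerate([("Acme", "PROPN"), ("acquired", "VERB"), ("Beta", "PROPN")]):
        g.add_node(f"t{i}", label="Token", text=tok, pos=pos, index=i)

    g.add_node("m0", label="Mention", kind="ORG")  # "Acme"
    g.add_node("m1", label="Mention", kind="ORG")  # "Beta"
    g.add_node("e0", label="Event", type="acquisition")

    g.add_edge("m0", "t0", label="SPANS")
    g.add_edge("m1", "t2", label="SPANS")
    g.add_edge("e0", "t1", label="TRIGGER")
    g.add_edge("e0", "m0", label="PARTICIPANT", role="buyer")
    g.add_edge("e0", "m1", label="PARTICIPANT", role="target")

    # Graph-based querying: list the participants of every event.
    for u, v, data in g.edges(data=True):
        if data.get("label") == "PARTICIPANT":
            print(g.nodes[u]["type"], data["role"], "->", g.nodes[v]["kind"])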

    NURSING AND MIDWIFERY STUDENTS’ LENS: CONNECTING THEORETICAL KNOWLEDGE WITH CLINICAL PRACTICE: AN INTERPRETATIVE PHENOMENOLOGICAL STUDY

    Get PDF
    Aim: To explore and critically analyse the strategies employed by final-year BSc pre-registration nursing and midwifery students at an inner London university to connect theoretical knowledge with clinical practice, in order to promote their learning and professional development. Background: Navigating the theory-practice gap has been a significant challenge for nursing and midwifery students. While there are many perspectives from academics and clinicians, how theoretical knowledge is connected with clinical practice is rarely discussed and studied from the students’ perspective. Design: Interpretative phenomenological analysis was used to understand nursing and midwifery students’ experiences of connecting theoretical knowledge with clinical practice. Rather than attempting to establish objective truth, this thesis focused on participants’ subjective experiences. Method: This study employed a qualitative research design. The data were obtained using semi-structured interviews and analysed using an inductive approach. The study population comprised 12 pre-registration nursing and midwifery students (n = 12) enrolled on Bachelor of Science programmes. Findings: Four themes emerged: (1) Complexity of embodied knowledge; (2) Sensing the meaning of personal and professional learning; (3) Demographic attributes and self-understanding; (4) Sense-making of COVID-19. Conclusion: The process by which pre-registration nursing and midwifery students connect theoretical knowledge with clinical practice is complex and multifaceted. It intersects with other factors and cannot be understood in isolation. This interconnectedness necessitates a thorough examination of all the variables involved.

    A Zero-shot and Few-shot Study of Instruction-Finetuned Large Language Models Applied to Clinical and Biomedical Tasks

    Full text link
    We evaluate four state-of-the-art instruction-tuned large language models (LLMs) -- ChatGPT, Flan-T5 UL2, Tk-Instruct, and Alpaca -- on a set of 13 real-world clinical and biomedical natural language processing (NLP) tasks in English, such as named-entity recognition (NER), question answering (QA), and relation extraction (RE). Our overall results demonstrate that the evaluated LLMs begin to approach the performance of state-of-the-art models in zero- and few-shot scenarios for most tasks, performing particularly well on QA, even though they have never seen examples from these tasks before. However, we observed that on the classification and RE tasks the LLMs fall below what can be achieved with a model specifically trained for the medical field, such as PubMedBERT. Finally, we noted that no LLM outperforms all the others on all the studied tasks, with some models being better suited for certain tasks than others. Comment: Under review process.
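
    For readers unfamiliar with the setup, here is a hedged sketch of the kind of zero-shot prompt such evaluations typically use for clinical NER; the template and label set are assumptions, not the paper's exact prompts or scoring scripts.

    def zero_shot_ner_prompt(text, labels=("Drug", "Disease", "Dosage")):
        """Build a zero-shot NER prompt; a few-shot variant would prepend solved examples."""
        return ("Extract all entities of types " + ", ".join(labels)
                + " from the text below. Answer as lines of the form TYPE: span.\n\n"
                + f"Text: {text}\nEntities:")

    print(zero_shot_ner_prompt("The patient received 5 mg of warfarin for atrial fibrillation."))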

    La traduzione specializzata all’opera per una piccola impresa in espansione: la mia esperienza di internazionalizzazione in cinese di Bioretics© S.r.l. [Specialized translation at work for a small expanding company: my experience of internationalization into Chinese for Bioretics© S.r.l.]

    Get PDF
    Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another domain has emerged to propel these unifying drives: Artificial Intelligence, with its technologies aiming to implement human cognitive abilities in machinery. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is precisely within this project that this dissertation was conceived. Its purpose is to present the translation and localization project from English into Chinese of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website, and part of the installation and use manual of the Aliquis© framework software, its flagship product. This dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market in both Italy and China; Chapter 3 provides the theoretical foundations for every aspect related to specialized translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.

    Untersuchung von Performanzveränderungen auf Quelltextebene [Investigating performance changes at the source-code level]

    Get PDF
    Changes to the source code of a software system may alter its performance. To prevent regressions and to verify the effect of source code changes that are expected to improve performance, it is necessary both to measure the performance impact of source code changes and to understand the runtime behaviour of the source code constructs involved. Specifying benchmarks or load tests to detect regressions requires immense manual effort, and understanding the changes often requires further experiments. This thesis develops the Peass approach (Performance analysis of software systems). Peass is based on the assumption that performance changes can be identified by measuring the performance of unit tests. Peass consists of (1) a regression test selection method, which determines between which commits performance may have changed, based on static source code analysis and analysis of the runtime behaviour; (2) a method for transforming unit tests into performance tests and for statistically reliable and reproducible performance measurement; and (3) a method for supporting the diagnosis of the root causes of performance changes. The Peass approach thus makes it possible to automatically examine performance changes that are measurable under the workload of unit tests. The validity of the approach is evaluated by showing that (1) typical performance problems in artificial test cases and (2) real performance changes tagged by developers can be found by Peass. A case study in an ongoing software development project further shows that Peass is able to detect relevant performance changes.
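
    The statistical core of such measurement-based comparison can be sketched as follows: compare repeated timings of a unit test before and after a commit, remove outliers, and test for a significant difference. The outlier threshold and the choice of Welch's t-test are illustrative assumptions, not necessarily the exact Peass configuration.

    import statistics
    from scipy import stats

    def trim_outliers(samples, z=3.0):
        """Drop measurements more than z standard deviations from the mean."""
        mean, sd = statistics.mean(samples), statistics.stdev(samples)
        return [s for s in samples if abs(s - mean) <= z * sd]

    def performance_changed(before, after, alpha=0.01):
        """Welch's t-test on cleaned samples; True indicates a likely change."""
        before, after = trim_outliers(before), trim_outliers(after)
        _, p = stats.ttest_ind(before, after, equal_var=False)
        return p < alpha

    old = [10.2, 10.4, 10.1, 10.3, 10.2, 10.5, 10.3, 10.2]  # ms per execution
    new = [11.8, 11.9, 12.1, 11.7, 12.0, 11.8, 12.2, 11.9]
    print(performance_changed(old, new))  # True: a likely regression between the commits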

    Into the Single Cell Multiverse: an End-to-End Dataset for Procedural Knowledge Extraction in Biomedical Texts

    Full text link
    Many of the most commonly explored natural language processing (NLP) information extraction tasks can be thought of as evaluations of declarative knowledge, or fact-based information extraction. Procedural knowledge extraction, i.e., breaking down a described process into a series of steps, has received much less attention, perhaps in part due to the lack of structured datasets that capture the knowledge extraction process from end to end. To address this unmet need, we present FlaMBé (Flow annotations for Multiverse Biological entities), a collection of expert-curated datasets across a series of complementary tasks that capture procedural knowledge in biomedical texts. This dataset is inspired by the observation that one ubiquitous source of procedural knowledge described as unstructured text is the methodology sections of academic papers. The workflows annotated in FlaMBé are from texts in the burgeoning field of single cell research, a research area that has become notorious for the number of software tools and complexity of workflows used. Additionally, FlaMBé provides, to our knowledge, the largest manually curated named entity recognition (NER) and disambiguation (NED) datasets for tissue/cell type, a fundamental biological entity that is critical for knowledge extraction in the biomedical research domain. Beyond providing a valuable dataset to enable further development of NLP models for procedural knowledge extraction, automating the process of workflow mining also has important implications for advancing reproducibility in biomedical research. Comment: Submitted to NeurIPS 2023 Datasets and Benchmarks Track.
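
    As a rough illustration of procedural-knowledge extraction, the snippet below represents a methods sentence as an ordered workflow with entity links; the field names and the ontology identifier are assumptions for illustration, not FlaMBé's actual annotation schema.

    sentence = "We filtered low-quality cells with Scanpy and clustered them using the Leiden algorithm."

    # Procedural knowledge: the described process broken down into ordered steps.
    workflow = [
        {"step": 1, "tool": "Scanpy", "operation": "filter", "object": "low-quality cells"},
        {"step": 2, "tool": "Leiden", "operation": "cluster", "object": "cells"},
    ]

    # NED links surface forms to canonical entities, e.g. Cell Ontology terms.
    ned_links = {"cells": "CL:0000000"}  # the Cell Ontology root term, as an example
    for step in workflow:
        print(step["step"], step["tool"], step["operation"], step["object"])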

    Referring to discourse participants in Ibero-Romance languages

    Get PDF
    Synopsis: This volume brings together contributions by researchers focusing on personal pronouns in Ibero-Romance languages, going beyond the well-established variable of expressed vs. non-expressed subjects. While factors such as agreement morphology, topic shift, and contrast or emphasis have been argued to account for variable subject expression, several corpus studies on Ibero-Romance languages have shown that the expression of subject pronouns goes beyond these traditionally established factors and is also subject to considerable dialectal variation. One of the factors affecting the choice and expression of personal pronouns or other referential devices is whether the construction is used personally or impersonally. The use and emergence of new impersonal constructions, and eventually of new (im)personal pronouns, as well as the variation found in the expression of human impersonality in different Ibero-Romance language varieties, is another interesting research area that has gained ground in recent years. In addition to variable subject expression, similar methods and theoretical approaches have been applied to study the expression of objects. Finally, reference to the addressee(s) using different address pronouns and other address forms is an important field of study that is closely connected to the variable expression of pronouns. The present book sheds light on all these aspects of reference to discourse participants. The volume contains contributions with a strong empirical background, applying various methods to both written and spoken corpus data from Ibero-Romance languages. The focus on discourse participants highlights the special properties of first and second person referents and the factors affecting them, which are often different from those of the anaphoric third person. The chapters are organized into three thematic sections: (i) Variable expression of subjects and objects, (ii) Between personal and impersonal, and (iii) Reference to the addressee.

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    Get PDF
    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after the dissemination of the fourth volume in 2015, in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered. The first part of this book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged since the appearance of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
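
    As a taste of the PCR rules discussed in the volume, here is a minimal Python sketch of the two-source PCR5 combination on a two-element frame. It is an intuition aid under simplifying assumptions, not the volume's reference code (which is provided in Matlab and in the Silx-Furtif RUST library).

    from itertools import product

    def pcr5(m1, m2):
        """Combine two basic belief assignments keyed by frozenset focal elements."""
        combined = {}
        for x, y in product(m1, m2):
            if x & y:  # conjunctive consensus on the intersection
                combined[x & y] = combined.get(x & y, 0.0) + m1[x] * m2[y]
            else:      # total conflict: redistribute proportionally (PCR5)
                combined[x] = combined.get(x, 0.0) + m1[x] ** 2 * m2[y] / (m1[x] + m2[y])
                combined[y] = combined.get(y, 0.0) + m2[y] ** 2 * m1[x] / (m1[x] + m2[y])
        return combined

    A, B = frozenset("A"), frozenset("B")
    result = pcr5({A: 0.6, B: 0.4}, {A: 0.2, B: 0.8})
    print({"".join(k): round(v, 4) for k, v in result.items()})  # masses still sum to 1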

    Construction contract risk identification based on knowledge-augmented language model

    Full text link
    Contract review is an essential step in construction projects to prevent potential losses. However, current methods for reviewing construction contracts lack effectiveness and reliability, leading to time-consuming and error-prone processes. While large language models (LLMs) have shown promise in revolutionizing natural language processing (NLP) tasks, they struggle with domain-specific knowledge and specialized issues. This paper presents a novel approach that leverages LLMs with construction contract knowledge to emulate the process of contract review by human experts. Our tuning-free approach incorporates construction contract domain knowledge to enhance language models for identifying construction contract risks. The use of natural language in building the domain knowledge base facilitates practical implementation. We evaluated our method on real construction contracts and achieved solid performance. Additionally, we investigated how large language models employ logical thinking during the task, and we provide insights and recommendations for future research.
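
    A tuning-free, knowledge-augmented prompt of the kind the paper describes might look like the sketch below; the knowledge entries, the keyword-overlap retrieval heuristic, and the prompt wording are illustrative assumptions.

    knowledge_base = [
        "Unlimited liability clauses are high-risk for the contractor.",
        "Payment terms longer than 90 days increase cash-flow risk.",
        "Clauses shifting ground-condition risk to the contractor need review.",
    ]

    def retrieve(clause, kb, k=2):
        """Rank knowledge entries by naive keyword overlap with the clause."""
        overlap = lambda entry: len(set(clause.lower().split()) & set(entry.lower().split()))
        return sorted(kb, key=overlap, reverse=True)[:k]

    def build_prompt(clause, kb):
        """Prepend retrieved domain knowledge to the review instruction."""
        context = "\n".join("- " + entry for entry in retrieve(clause, kb))
        return ("Domain knowledge:\n" + context + "\n\nClause: " + clause
                + "\nIdentify any contract risks and explain briefly.")

    print(build_prompt("The contractor bears unlimited liability for delays.", knowledge_base))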