447 research outputs found
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Digital agriculture platforms: Understanding innovations in rural finance and logistics in Uganda’s agrifood sector
This work is part of the CGIAR Research Initiatives on Rethinking Markets. CGIAR launched Rethinking Markets with national and international partners to leverage markets and value-chains to address nutrition, livelihoods, and environmental concerns in food systems, at national and subnational levels in seven countries in Africa, Asia, and Latin America. Other CGIAR centers participating in Rethinking Markets are: International Water Management Institute (IWMI), The Alliance of Bioversity International and the International Center for Tropical Agriculture (Alliance Bioversity-CIAT), International Institute of Tropical Agriculture (IITA), International Maize and Wheat Improvement Center (CIMMYT), International Center for Agricultural Research in the Dry Areas (ICARDA), and WorldFish. Agriculture is the mainstay of Uganda’s economy, contributing about 25% of the GDP, a third of the export earnings and almost all the country’s food requirements. Yet, the sector still faces various challenges that affect production and the income derived from it. Systemic issues impact smallholder farmers' livelihoods across markets, land, skills, and capital, with cross-cutting social exclusion issues. Effective application of digital agricultural technologies has emerged as a catalyst in addressing productivity and efficiency challenges and enhancing inclusiveness in agri-food systems. Digital technologies have shown potential to address bottlenecks in access to extension services, marketing systems, suitable financial products, reliable weather information, transport services and logistics, as well as supply chain management. Scaling of digital agricultural technologies in Uganda is critical for improving productivity and addressing challenges in the agricultural sector. However, for scaling to be undertaken effectively and inclusively, there is a need to address the barriers that limit the use of digital innovations for some populations.
However, the issues surrounding scaling and inclusivity of digital services are not well understood. This study therefore sought to contribute to bridging this knowledge gap through an assessment of the existing digitally enabled innovative cross-value chain services to gain insights into how the services are addressing inefficiencies, creating opportunities for improving efficiency and inclusiveness as well as identifying promising innovations for scaling. Specifically, the study focused on innovations in finance and logistics for value chains. For finance, the study specifically looked at digital payments, credit, and insurance, while for logistics, the focus was on supply chain management, transportation, traceability, digital platforms for e-commerce, and (cold) storage across value chains.
The study used a qualitative approach to collect data in two phases: the first phase involved conducting an inception workshop followed by key informant interviews with 39 service providers and enablers. The second phase involved case studies using 12 Focus Group Discussions (FGDs), 12 individual interviews and a validation workshop. The findings show potential for digital agricultural innovations to address some of the challenges. Several benefits are associated with the current use of digital innovations, including linking value chain actors with enabling services (e.g., insurance companies, banks, government); increased access to markets; access to extension and advisory services, including market and weather information; access to quality agri-inputs and tractor services; and services such as credit and savings, agri-insurance, agri-trucking, heating, ventilation and air conditioning (HVAC), and warehousing. The use of digital innovations has, for instance, enabled access to agri-insurance and digital loans or credit, which was not possible before. However, there are some challenges with the use of digital finance and logistics services. For instance, the study identified access challenges stemming from low awareness and limited information (owing to limited digital literacy), social norms and power relations that exclude some sections of the population, such as women and youth, from access, and technology design issues that need to be addressed for effective and inclusive uptake and scaling. Additionally, the ICT infrastructure in Uganda is unevenly distributed, with significant gaps between rural and urban connectivity. Communication infrastructure (e.g., network coverage and broadband services) is established in urban centers, but rural areas have poor or no connectivity. Limited access to electricity is a major cause of the discrepancies in urban–rural Internet use and mobile phone penetration rates in Uganda.
Yet mobile technology is at the heart of the digital transformation in Uganda, as in most parts of sub-Saharan Africa. The study also identified some promising innovations that offer opportunities for scaling, following prioritization by stakeholders of the most significant challenges to scaling (low awareness of services and lack of information to support farmers in making a case for investing in digital services). These innovations have active SMEs and start-ups engaged in them and include digital input supplies and payments bundled with agronomic advisories, e-marketplaces for outputs bundled with digital payment services and logistics, and agricultural logistics services involving transport and warehousing services.
A Comprehensive Study Of Bills Of Materials For Software Systems
Software Bills of Materials (SBOMs) have emerged as tools to facilitate the management of software dependencies, vulnerabilities, licenses, and the supply chain. Significant effort has been devoted to increasing SBOM awareness and developing SBOM formats and tools. Despite this effort, recent studies have shown that SBOMs are still an early technology not yet adequately adopted in practice, mainly due to limited SBOM tooling and a lack of industry consensus on SBOM content, tool usage, and practical benefits. Expanding on previous research, this paper reports a comprehensive study that first investigates the current challenges stakeholders encounter when creating and using SBOMs. The study surveyed 138 practitioners belonging to five groups of stakeholders (practitioners familiar with SBOMs, members of critical open source projects, AI/ML practitioners, experts of cyber-physical systems, and legal professionals), using differentiated questionnaires. We interviewed eight survey respondents to gather further insights about their experience. We identified fourteen major challenges facing the creation and use of SBOMs, including those related to the material included in SBOMs, deficiencies in SBOM tools, SBOM maintenance and verification, and domain-specific challenges. We propose and discuss six actionable solutions to the identified challenges and present the major avenues for future research and development. We hope these solutions can be adopted by the community to improve SBOM formats, tools, and adoption, and thus enable the full potential of SBOMs.
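As an illustration of the kind of material an SBOM records, the sketch below assembles a minimal bill of materials following the CycloneDX JSON shape (one of the widely used SBOM formats); the component names, versions and licenses are invented for illustration only:

```python
import json

def make_sbom(components):
    """Assemble a minimal SBOM dictionary in the CycloneDX JSON shape."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "version": 1,
        "components": [
            {
                # Each entry records a dependency with its version and license,
                # the core data SBOM consumers need for vulnerability and
                # license management.
                "type": "library",
                "name": name,
                "version": version,
                "licenses": [{"license": {"id": license_id}}],
            }
            for name, version, license_id in components
        ],
    }

# Hypothetical dependencies of an application, for illustration only.
sbom = make_sbom([
    ("example-parser", "2.1.0", "Apache-2.0"),
    ("example-logger", "1.4.3", "MIT"),
])
print(json.dumps(sbom, indent=2))
```

Real-world SBOMs carry far more detail (package URLs, hashes, provenance), which is part of why the surveyed practitioners report tooling and content-consensus challenges.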
Large Language Models for Software Engineering: A Systematic Literature Review
Large Language Models (LLMs) have significantly impacted numerous domains, notably including Software Engineering (SE). Nevertheless, a well-rounded understanding of the application, effects, and possible limitations of LLMs within SE is still in its early stages. To bridge this gap, our systematic literature review takes a deep dive into the intersection of LLMs and SE, with a particular focus on understanding how LLMs can be exploited in SE to optimize processes and outcomes. Through a comprehensive review approach, we collect and analyze a total of 229 research papers from 2017 to 2023 to answer four key research questions (RQs). In RQ1, we categorize and provide a comparative analysis of different LLMs that have been employed in SE tasks, laying out their distinctive features and uses. For RQ2, we detail the methods involved in data collection, preprocessing, and application in this realm, shedding light on the critical role of robust, well-curated datasets for successful LLM implementation. RQ3 allows us to examine the specific SE tasks where LLMs have shown remarkable success, illuminating their practical contributions to the field. Finally, RQ4 investigates the strategies employed to optimize and evaluate the performance of LLMs in SE, as well as the common techniques related to prompt optimization. Armed with insights drawn from addressing the aforementioned RQs, we sketch a picture of the current state of the art, pinpointing trends, identifying gaps in existing research, and flagging promising areas for future study.
Machine Learning Algorithm for the Scansion of Old Saxon Poetry
Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deal with Old Saxon or Old English. This project aims to be a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. In evaluation, the algorithm reached 97% accuracy and a 99% weighted average for precision, recall and F1 score. In addition, we tested the model with some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
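The support-weighted averages reported above can be reproduced from gold and predicted label sequences; the sketch below computes per-class precision, recall and F1 and their support-weighted means in plain Python, on invented metrical-pattern labels rather than the Heliand annotations:

```python
from collections import Counter

def weighted_prf(gold, pred):
    """Per-class precision/recall/F1, averaged weighted by class support."""
    classes = sorted(set(gold))
    support = Counter(gold)
    n = len(gold)
    wp = wr = wf = 0.0
    for c in classes:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        weight = support[c] / n  # class share of the gold labels
        wp += weight * prec
        wr += weight * rec
        wf += weight * f1
    return wp, wr, wf

# Toy verse-type labels (Sievers-style A/B/C types), purely illustrative.
gold = ["A", "A", "B", "C", "A", "B"]
pred = ["A", "A", "B", "C", "B", "B"]
p, r, f = weighted_prf(gold, pred)
```

Support weighting matters here because metrical types are unevenly distributed in a corpus, so an unweighted macro average would over-represent rare patterns.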
Computational acquisition of knowledge in small-data environments: a case study in the field of energetics
The UK’s defence industry is accelerating its implementation of artificial intelligence, including expert systems and natural language processing (NLP) tools designed to supplement human analysis. This thesis examines the limitations of NLP tools in small-data environments (common in defence) in the defence-related energetic-materials domain. A literature review identifies the domain-specific challenges of developing an expert system (specifically an ontology). The absence of domain resources such as labelled datasets and, most significantly, the preprocessing of text resources are identified as challenges. To address the latter, a novel preprocessing pipeline specifically tailored for the energetic-materials domain is developed, and its effectiveness is evaluated.
The interface between using NLP tools in data-limited environments to supplement or entirely replace human analysis is examined in a study of the subjective concept of importance. A methodology for directly comparing the ability of NLP tools and experts to identify important points in a text is presented. Results show that the study participants exhibit little agreement, even on which points in the text are important. The NLP tools, the expert (the author of the text being examined) and the participants only agree on general statements. However, as a group, the participants agreed with the expert. In data-limited environments, the extractive-summarisation tools examined cannot effectively identify the important points in a technical document as an expert would.
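As a much-simplified stand-in for the extractive-summarisation tools discussed, the following sketch scores each sentence by the average document-wide frequency of its content words and extracts the top-scoring ones; it is illustrative only (including the toy stopword list and example text) and not one of the tools evaluated in the thesis:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real tools use much larger ones.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it", "some"}

def extractive_summary(text, k=1):
    """Pick the k sentences whose content words are most frequent overall."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # Stable sort: ties keep document order.
    ranked = sorted(sentences, key=score, reverse=True)
    return ranked[:k]

doc = ("Energetic materials release energy rapidly. "
       "Energetic materials require careful handling. "
       "Some papers discuss unrelated topics.")
summary = extractive_summary(doc, k=1)
```

Frequency-based scoring of this kind captures topical centrality but not the expert's notion of importance, which is precisely the gap the study above documents.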
A methodology for classifying journal articles by the technology readiness level (TRL) of the described technologies in a data-limited environment is proposed. Techniques to overcome challenges with using real-world data, such as class imbalances, are investigated. A methodology to evaluate the reliability of human annotations is presented. Analysis identifies a lack of agreement and consistency in the expert evaluation of document TRL.
Fostering the entrepreneur-opportunity nexus in entrepreneurship education - a design science approach
The doctoral thesis uses the Design Science Research approach to investigate key concepts in entrepreneurship education and subsequently develops, tests and evaluates a course design for opportunity recognition in an academic setting at the Karlsruhe Institute of Technology (KIT). Starting with a systematic literature review on entrepreneurial competences published in 2020 (Tittel and Terzidis, 2020), 57 critical entrepreneurial competences were compiled and categorized into an entrepreneurial competence framework. In addition, a conceptual definition of competence and entrepreneurial competence was developed and presented to the scientific community.
A qualitative study with 26 experts, including five entrepreneurship lecturers, ten entrepreneurs, seven consultants and four company experts, was conducted to validate the list of competences identified in the recent academic literature. The interviews were analyzed using the text and content analysis framework proposed by Mayring (2014). As a result, the experts confirmed 39 of the initial entrepreneurial competences, and 22 new competences were identified through inductive coding of the interviews. On this basis, implications for developing entrepreneurial education were derived and proposed. Both studies identified business idea generation and opportunity recognition as critical entrepreneurial competences and highly relevant concepts for entrepreneurship. Therefore, a pedagogical intervention was developed, tested and evaluated in 12 entrepreneurship courses at the KIT.
A bibliometric analysis was performed to find scientific evidence of and relevant associations between Ikigai and entrepreneurship. Using the Ikigai (生き甲斐) framework, a traditional Japanese concept for a "life worth living", the four key pillars (what you love, what you are good at, what the world needs, what you can be paid for) were operationalized and implemented in the pedagogical setting. The opportunity recognition course framework was then quantitatively evaluated with a structural equation model (SEM), following the approach proposed by Hair et al. (2021). As a result, the fit between personal values and the business idea was found to significantly influence the desirability of the business idea. Subsequent interviews with the student teams reveal that the perceived profitability of the business idea also plays a crucial role in the perceived desirability of the business idea developed in class.
Managing healthcare transformation towards P5 medicine (Published in Frontiers in Medicine)
Health and social care systems around the world are facing radical organizational, methodological and technological paradigm changes to meet the requirements for improving quality and safety of care as well as the efficiency and efficacy of care processes. In doing so, they are trying to manage the challenges of ongoing demographic change towards aging, multi-diseased societies, the development of human resources, growing consumerism in health and social services, medical and biomedical progress, and exploding costs for health-related R&D as well as health services delivery. Furthermore, they intend to achieve sustainability of global health systems by transforming them towards intelligent, adaptive and proactive systems focusing on health and wellness with optimized quality and safety outcomes.
The outcome is a transformed health and wellness ecosystem combining the approaches of translational medicine, 5P medicine (personalized, preventive, predictive, participative precision medicine) and digital health towards ubiquitous personalized health services realized independent of time and location. It considers individual health status, conditions, and genetic and genomic dispositions in the personal social, occupational, environmental and behavioural context, thus turning health and social care from reactive to proactive. This requires advancing communication and cooperation among business actors from different domains (disciplines) with different methodologies, terminologies/ontologies, education, skills and experiences, from the data level (data sharing) to the concept/knowledge level (knowledge sharing). The challenge here is understanding and formally as well as consistently representing the world of sciences and practices, i.e. multidisciplinary and dynamic systems in variable contexts, to enable mapping between the different disciplines, methodologies, perspectives, intentions, languages, etc. Given a framework that represents multi-domain ecosystems and their development process dynamically, use-case-specifically and in a context-aware manner, systems, models and artefacts can be consistently represented, harmonized and integrated. The response to this problem is the formal representation of health and social care ecosystems through a system-oriented, architecture-centric, ontology-based and policy-driven model and framework, addressing all domains and development process views contributing to the system and context in question.
Accordingly, this Research Topic addresses this change towards 5P medicine. Specifically, areas of interest include, but are not limited to:
• A multidisciplinary approach to the transformation of health and social systems
• Success factors for sustainable P5 ecosystems
• AI and robotics in transformed health ecosystems
• Transformed health ecosystems challenges for security, privacy and trust
• Modelling digital health systems
• Ethical challenges of personalized digital health
• Knowledge representation and management of transformed health ecosystems
Table of Contents:
04 Editorial: Managing healthcare transformation towards P5 medicine (Bernd Blobel and Dipak Kalra)
06 Transformation of Health and Social Care Systems—An Interdisciplinary Approach Toward a Foundational Architecture (Bernd Blobel, Frank Oemig, Pekka Ruotsalainen and Diego M. Lopez)
26 Transformed Health Ecosystems—Challenges for Security, Privacy, and Trust (Pekka Ruotsalainen and Bernd Blobel)
36 Success Factors for Scaling Up the Adoption of Digital Therapeutics Towards the Realization of P5 Medicine (Alexandra Prodan, Lucas Deimel, Johannes Ahlqvist, Strahil Birov, Rainer Thiel, Meeri Toivanen, Zoi Kolitsi and Dipak Kalra)
49 EU-Funded Telemedicine Projects – Assessment of, and Lessons Learned From, in the Light of the SARS-CoV-2 Pandemic (Laura Paleari, Virginia Malini, Gabriella Paoli, Stefano Scillieri, Claudia Bighin, Bernd Blobel and Mauro Giacomini)
60 A Review of Artificial Intelligence and Robotics in Transformed Health Ecosystems (Kerstin Denecke and Claude R. Baudoin)
73 Modeling digital health systems to foster interoperability (Frank Oemig and Bernd Blobel)
89 Challenges and solutions for transforming health ecosystems in low- and middle-income countries through artificial intelligence (Diego M. López, Carolina Rico-Olarte, Bernd Blobel and Carol Hullin)
111 Linguistic and ontological challenges of multiple domains contributing to transformed health ecosystems (Markus Kreuzthaler, Mathias Brochhausen, Cilia Zayas, Bernd Blobel and Stefan Schulz)
126 The ethical challenges of personalized digital health (Els Maeckelberghe, Kinga Zdunek, Sara Marceglia, Bobbie Farsides and Michael Rigby)
Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse
This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses.
This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and thus of groups.
In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service, whereby patterns in users’ speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018—6 November having been the date of midterm (i.e. non-Presidential) elections in the United States. The corpus is assembled from the original posts of 5,889 users, who are nominally geolocalised to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters according to the municipalities they represent reveals regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
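A minimal version of the comparison underlying such clustering might look like the following: each user is reduced to a relative-frequency vector over a fixed set of function words (a stand-in for the pervasive, largely unconscious lexical features described above), and users are compared by cosine similarity; the posts and word list are invented for illustration, not drawn from the Michigan corpus:

```python
import math
from collections import Counter

# A tiny illustrative set of function words; corpus studies use far more features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is"]

def feature_vector(text):
    """Relative frequencies of a fixed set of function words in a user's posts."""
    counts = Counter(text.lower().split())
    total = sum(counts[w] for w in FUNCTION_WORDS) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Invented user posts, for illustration only.
user_a = "the cat sat on the mat and the dog barked"
user_b = "the rain fell and the wind howled in the night"
user_c = "to be or not to be is a question"

sim_ab = cosine(feature_vector(user_a), feature_vector(user_b))
sim_ac = cosine(feature_vector(user_a), feature_vector(user_c))
```

Users with similar similarity profiles can then be grouped by any standard clustering algorithm; the point of using function words is that their distribution is hard to manipulate consciously, which is what makes the resulting measure endogenous.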
Towards Usable API Documentation
The learning and usage of an API is supported by documentation. Like source code, API documentation is itself a software product. Several research results show that bad design in API documentation can make the reuse of API features difficult. Indeed, similar to code smells, poorly designed API documentation can also exhibit 'smells'. Such documentation smells can be described as bad documentation styles that do not necessarily produce incorrect documentation but make the documentation difficult to understand and use. This thesis aims to enhance API documentation usability by addressing such documentation smells in three phases. In the first phase, we developed a catalog of five API documentation smells by consulting the literature on API documentation issues and online developer discussions. We validated their presence in the real world by creating a benchmark of 1K official Java API documentation units and conducting a survey of 21 developers. The developers confirmed that these smells hinder their productivity and called for automatic detection and fixing. In the second phase, we developed machine-learning models to detect the smells using the 1K benchmark; however, they performed poorly when evaluated on larger and more diverse documentation sources. We explored more advanced models and employed re-training and hyperparameter tuning to further improve performance. Our best-performing model, RoBERTa, achieved F1-scores of 0.71-0.93 in detecting the different smells. In the third phase, we first evaluated the feasibility and impact of fixing various smells in the eyes of practitioners. Through a second survey of 30 practitioners, we found that fixing the lazy smell was perceived as the most feasible and impactful. However, there was no universal consensus on whether and how other smells can or should be fixed.
Finally, we proposed a two-stage pipeline for fixing lazy documentation, involving additional textual description and documentation-specific code example generation. Our approach utilized a large language model, GPT-3, to generate enhanced documentation based on non-lazy examples and to produce code examples. The generated code examples were refined iteratively until they were error-free. Our technique demonstrated a high success rate, with a significant number of lazy documentation instances being fixed and error-free code examples being generated.
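The iterative refinement stage of such a pipeline can be sketched as a generate-and-check loop. In the sketch below the LLM call is replaced by a hypothetical `generate_example` stub (the thesis used GPT-3), and "error-free" is approximated by whether the candidate example compiles:

```python
def refine_code_example(generate_example, max_rounds=3):
    """Request code examples until one compiles cleanly, feeding the error back.

    `generate_example(feedback)` stands in for an LLM call; it receives the
    previous compile error as feedback, or None on the first attempt.
    """
    feedback = None
    for _ in range(max_rounds):
        candidate = generate_example(feedback)
        try:
            # A cheap stand-in for "error-free": the example must at least parse.
            compile(candidate, "<example>", "exec")
            return candidate
        except SyntaxError as err:
            feedback = str(err)  # fed back into the next generation round
    return None  # give up after max_rounds failed attempts

# Stub generator: the first attempt has a syntax error, the second is fixed.
attempts = iter(["print('hello'", "print('hello')"])
fixed = refine_code_example(lambda feedback: next(attempts))
```

A production pipeline would also execute the example and check its behavior, not just its syntax, but the loop structure (generate, check, feed the error back) is the same.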