Designing a Crowd-Based Relocation System—The Case of Car-Sharing
Car-sharing services promise environmentally sustainable and cost-efficient alternatives
to private car ownership, contributing to more sustainable mobility. However, the
challenge of balancing vehicle supply and demand needs to be addressed for further improvement of
the service. Currently, employees must relocate vehicles from low-demand to high-demand areas,
which generates extra personnel costs, driven kilometers, and emissions. This study takes a Design
Science Research (DSR) approach to develop a new way of balancing the supply and demand of
vehicles in car-sharing, namely crowd-based relocation. We base our approach on crowdsourcing, a
concept by which customers are requested to perform vehicle relocations. This paper reports on our
comprehensive DSR project on designing and instantiating a crowd-based relocation information
system (CRIS). We assessed the resulting artifact in a car-sharing simulation and conducted a real world car-sharing service system field test. The evaluation reveals that CRIS has the potential for
improving vehicle availability, increasing environmental sustainability, and reducing operational
costs. Further, the prescriptive knowledge derived in our DSR project can be used as a starting point
to improve individual parts of the CRIS and to extend its application beyond car-sharing into other
sharing services, such as power bank- or e-scooter-sharing.
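The core idea of crowd-based relocation, shifting vehicles from over-supplied to under-supplied zones, can be sketched as a small assignment routine. The greedy strategy, zone names, and function signature below are illustrative assumptions for exposition, not the CRIS design from the paper:

```python
# Hypothetical sketch: derive relocation tasks from per-zone supply and demand.
# Zones with surplus vehicles feed zones with deficits; each task could then be
# offered to customers as an incentivized crowd-relocation.

def relocation_tasks(supply, demand):
    """Greedily pair over-supplied origin zones with under-supplied destinations."""
    surplus = {z: supply[z] - demand[z] for z in supply if supply[z] > demand[z]}
    deficit = {z: demand[z] - supply[z] for z in supply if demand[z] > supply[z]}
    tasks = []
    for src, extra in sorted(surplus.items(), key=lambda kv: -kv[1]):
        for dst in sorted(deficit, key=lambda z: -deficit[z]):
            move = min(extra, deficit[dst])
            if move > 0:
                tasks.append((src, dst, move))  # relocate `move` vehicles src -> dst
                extra -= move
                deficit[dst] -= move
    return tasks

supply = {"A": 8, "B": 2, "C": 5}
demand = {"A": 3, "B": 6, "C": 5}
print(relocation_tasks(supply, demand))  # [('A', 'B', 4)]
```

In a real system, the assignment would additionally weigh customer willingness, trip detours, and incentive costs rather than pure zone imbalance.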
Situativität, Funktionalität und Vertrauen: Ergebnisse einer szenariobasierten Interviewstudie zur Erklärbarkeit von KI in der Medizin
A central requirement for the use of artificial intelligence (AI) in medicine is its explainability, i.e., the provision of addressee-oriented information about its functioning. This raises the question of how socially adequate explainability can be designed. To identify evaluation factors, we interviewed healthcare stakeholders about two scenarios: diagnostics and documentation. The scenarios vary the influence that an AI system has on decision-making through the interaction design and the amount of data processed. We present key evaluation factors for explainability at the interactional and procedural levels. Explainability must not situationally disrupt the doctor-patient conversation or call the professional role into question. At the same time, explainability functionally legitimizes an AI system as a second opinion and is central to building trust. A virtual embodiment of the AI system is advantageous for language-based explanations.
ADAM10 is expressed in human podocytes and found in urinary vesicles of patients with glomerular kidney diseases
Background: The importance of Notch signaling in the development of glomerular diseases has recently been described. We therefore analyzed the expression and activity of ADAM10, an important component of the Notch signaling complex, in podocytes.
Methods: Using Western blot, immunofluorescence, and immunohistochemistry analyses, we characterized the expression of ADAM10 in human podocytes, human urine, and human renal tissue.
Results: We present evidence that differentiated human podocytes possessed increased amounts of mature ADAM10 and released elevated levels of the L1 adhesion molecule, a well-known substrate of ADAM10. Using specific siRNA and metalloproteinase inhibitors, we demonstrate that ADAM10 is involved in the cleavage of L1 in human podocytes. Injury of podocytes enhanced the ADAM10-mediated cleavage of L1. In addition, we detected ADAM10 in urinary podocytes from patients with kidney diseases and in tissue sections of normal human kidney. Finally, we found elevated levels of ADAM10 in urinary vesicles of patients with glomerular kidney diseases.
Conclusions: The activity of ADAM10 in human podocytes may play an important role in the development of glomerular kidney diseases.
The German National Pandemic Cohort Network (NAPKON): rationale, study design and baseline characteristics
Schons M, Pilgram L, Reese J-P, et al. The German National Pandemic Cohort Network (NAPKON): rationale, study design and baseline characteristics. European Journal of Epidemiology. 2022.
The German government initiated the Network University Medicine (NUM) in early 2020 to improve national research activities on the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) pandemic. To this end, 36 German Academic Medical Centers started to collaborate on 13 projects, the largest being the National Pandemic Cohort Network (NAPKON). NAPKON's goal is to create the most comprehensive Coronavirus Disease 2019 (COVID-19) cohort in Germany. Within NAPKON, adult and pediatric patients are observed in three complementary cohort platforms (Cross-Sectoral, High-Resolution, and Population-Based) from the initial infection until up to three years of follow-up. Study procedures comprise comprehensive clinical and imaging diagnostics, quality-of-life assessment, patient-reported outcomes, and biosampling. The three cohort platforms build on four infrastructure core units (Interaction, Biosampling, Epidemiology, and Integration) and on collaborations with other NUM projects. Key components of data capture, regulatory processes, and data privacy are based on the German Centre for Cardiovascular Research. By April 01, 2022, 34 university and 40 non-university hospitals had enrolled 5298 patients, with local data quality reviews performed on 4727 (89%). 47% of patients were female, the median age was 52 years (IQR 36-62), and 50 pediatric cases were included. 44% of patients were hospitalized, 15% were admitted to an intensive care unit, and 12% died while enrolled. By April 03, 2022, 8845 visits with biosampling had been conducted in 4349 patients.
In this overview article, we summarize NAPKON's design and relevant milestones, including first study population characteristics, and outline the potential of NAPKON for German and international research activities. Trial registration: https://clinicaltrials.gov/ct2/show/NCT04768998, https://clinicaltrials.gov/ct2/show/NCT04747366, https://clinicaltrials.gov/ct2/show/NCT04679584.
CYBEREMOTIONS – Collective Emotions in Cyberspace
Emotions are an important part of most societal dynamics. As with face-to-face meetings, Internet exchanges may not only include factual information but may also elicit emotional responses: how participants feel about the subject discussed or about other group members. The development of automatic sentiment analysis has made large-scale emotion detection and analysis possible using text messages collected from the web. We present results of two years of studies performed in the EU Large-Scale Integrating Project CYBEREMOTIONS (Collective Emotions in Cyberspace). Our goal is to understand the role of collective emotions in creating, forming, and breaking up ICT-mediated communities, and to prepare the background for the next generation of emotionally intelligent ICT services. Project results have already attracted considerable attention from mass media and research journals, including Science and New Scientist. Nine project teams are organised in three layers (data, theory, and ICT output).
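Automatic sentiment analysis of web text, as used above, is often built on valence lexicons. The following is a minimal illustrative sketch; the tiny lexicon and sum-of-valences rule are assumptions for exposition, not the classifier used in CYBEREMOTIONS:

```python
# Minimal lexicon-based sentiment scorer: sum token valences and map the
# sign of the total to a polarity label. Real systems use far larger
# lexicons plus negation handling and machine-learned models.

LEXICON = {"love": 1, "great": 1, "happy": 1, "hate": -1, "awful": -1, "sad": -1}

def sentiment(message):
    """Return 'positive', 'negative', or 'neutral' for a text message."""
    tokens = (t.strip(".,!?") for t in message.lower().split())
    score = sum(LEXICON.get(t, 0) for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great community!"))  # positive
print(sentiment("What an awful, sad thread."))    # negative
```

Aggregating such per-message polarities over time is one simple way to observe collective emotional dynamics in an online community.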
Usability and User Experience of a Chatbot for Student Support
Additional material for the usability and user experience evaluation of a conversational agent.
Towards speech-based interactive post hoc explanations in explainable AI
AI-based systems offer solutions for information extraction (e.g., finding information), information transformation
(e.g., machine translation), classification (e.g., classifying news as fake or true), or decision support (e.g., providing
diagnoses and treatment proposals for medical doctors) in many real-world applications. The solutions are based on
machine learning (ML) models and are commonly offered to a large and diverse group of users, some of them experts,
many others naïve users from a large population. Nowadays, deep neural network architectures in particular are
black boxes for users and even developers [1, 4, 9] (also cp. [6]). A major goal of Explainable Artificial Intelligence (XAI)
is making complex decision-making systems more trustworthy and accountable [7, p. 2]. That is why XAI seeks to
ensure transparency, interpretability, and explainability [9].
Common to most users is that they are not able to understand the functioning of AI-based systems, i.e., those are
perceived as black boxes. Humans are confronted with the results, but they cannot comprehend what information the
system used to reach a result (interpretability), nor in which way this information was processed and
weighted (transparency). The underlying reason is that an explicit functional description of the system is missing or
even not possible in most machine-learning-(ML)-based AI systems: the function is trained by adjusting the internal
parameters, and sometimes the architecture itself is learned from a basic set of standard architectures. However, natural
language and speech-based explanations allow better explainability [1, p. 11] through interactive post hoc explanations
in the form of an informed dialog [7, p. 2]. Additionally, AI is also addressed by regulations, e.g., of the EU [2, 3], and thus
becomes even more relevant for industry and research. Here, not least the recognition of bias in AI systems' decisions
plays an important role.
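An interactive post hoc explanation dialog of the kind described above can be sketched as a simple question-answering loop over feature attributions. The toy model, the attribution values, and the question patterns below are illustrative assumptions, not a specific XAI method from the cited works:

```python
# Hypothetical sketch of an interactive post hoc explanation dialog: the user
# asks follow-up questions about a prediction and receives feature-level
# answers drawn from precomputed attributions.

ATTRIBUTIONS = {"age": 0.42, "blood_pressure": 0.35, "heart_rate": -0.08}
PREDICTION = "elevated risk"

def answer(question):
    """Map a user question to an explanation of the prediction."""
    q = question.lower()
    if "why" in q:
        # Global answer: name the most influential feature.
        top = max(ATTRIBUTIONS, key=lambda f: abs(ATTRIBUTIONS[f]))
        return f"The prediction '{PREDICTION}' is driven mainly by {top}."
    for feature, weight in ATTRIBUTIONS.items():
        if feature.replace("_", " ") in q:
            # Local answer: explain one feature's contribution.
            direction = "increased" if weight > 0 else "decreased"
            return f"{feature} {direction} the predicted risk (weight {weight:+.2f})."
    return "Could you rephrase? I can explain individual features or the overall result."

print(answer("Why this result?"))
print(answer("What about blood pressure?"))
```

A speech-based variant would put automatic speech recognition in front of `answer` and text-to-speech behind it, turning the same question-to-explanation mapping into a spoken informed dialog.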