
    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    A survey on bias in machine learning research

    Current research on bias in machine learning often focuses on fairness while overlooking the roots and causes of bias. However, bias was originally defined as a "systematic error," often caused by humans at different stages of the research process. This article aims to bridge the gap with the earlier literature on bias in research by providing a taxonomy of potential sources of bias and errors in data and models. The paper focuses on bias in machine learning pipelines: the survey analyses over forty potential sources of bias in the machine learning (ML) pipeline, providing clear examples for each. By understanding the sources and consequences of bias in machine learning, better methods can be developed for detecting and mitigating it, leading to fairer, more transparent, and more accurate ML models.

    Investigation into Photon Emissions as a Side-Channel Leakage in Two Microcontrollers: A Focus on SRAM Blocks

    Microcontrollers are extensively utilized across a diverse range of applications. However, with the escalating usage of these devices, the risk to their security and the valuable data they process correspondingly intensifies. These devices could potentially be susceptible to various security threats, with side-channel leakage standing out as a notable concern. Among the numerous types of side-channel leakage, photon emissions from active devices emerge as a potentially significant concern. These emissions, a characteristic of all semiconductor devices including microcontrollers, occur during their operation. Depending on the operating point and the internal state of the chip, these emissions can reflect the device’s internal operations. A malicious individual could therefore exploit these emissions to gain insights into the computations being performed within the device. This dissertation investigates photon emissions from the SRAM blocks of two distinct microcontrollers, utilizing a cost-effective setup. The aim is to extract information from these emissions, analyzing them as potential side-channel leakage points. In the first segment of the study, a PIC microcontroller variant is investigated. The quiescent photon emissions from the SRAM are examined. A correlation attack was successfully executed on these emissions, leading to the recovery of the AES encryption key. Furthermore, differential analysis was used to examine the location of SRAM bits. Combining this information with an image processing method, the Structural Similarity Index (SSIM), helped reveal the content of SRAM cells from photon emission images. The second segment of this study, for the first time, focuses on a RISC-V chip, examining the photon emissions of the SRAM during continuous reading. Probing the photon emissions from the row and column detectors led to the identification of a target word location capable of revealing the AES key. The content of the target row was also retrieved through the photon emissions originating from the drivers and the SRAM cells themselves. Additionally, the SSIM technique was utilized to determine the address of a targeted word in RISC-V photon emissions that cannot be analyzed through visual inspection. The insights gained from this research contribute to a deeper understanding of side-channel leakage via photon emissions and demonstrate its potency in extracting critical information from digital devices. Moreover, this information contributes to the development of innovative security measures, an aspect becoming increasingly crucial in our progressively digitized world.
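    As a rough illustration of the SSIM step described above, the sketch below compares a photon-emission image of an SRAM cell against reference templates captured with known contents and guesses the stored bit. The images, templates, and decision rule are hypothetical stand-ins for the dissertation's setup; only scikit-image's structural_similarity is a real API call.

import numpy as np
from skimage.metrics import structural_similarity as ssim

def classify_cell(emission, template_zero, template_one):
    # Guess a cell's stored bit by which known-content emission template
    # the observed photon-emission image resembles more, per SSIM.
    data_range = float(emission.max() - emission.min()) or 1.0
    s0 = ssim(emission, template_zero, data_range=data_range)
    s1 = ssim(emission, template_one, data_range=data_range)
    return 0 if s0 >= s1 else 1

# Hypothetical demo: a noisy observation of a cell storing a 1.
rng = np.random.default_rng(1)
t0 = rng.random((16, 16))
t1 = 1.0 - t0
observed = t1 + 0.05 * rng.standard_normal((16, 16))
print(classify_cell(observed, t0, t1))  # expected: 1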

    Efficient resilience analysis and decision-making for complex engineering systems

    Modern societies around the world are increasingly dependent on the smooth functioning of progressively more complex systems, such as infrastructure systems, digital systems like the internet, and sophisticated machinery. They form the cornerstones of our technologically advanced world, and their efficiency is directly related to our well-being and the progress of society. However, these important systems are constantly exposed to a wide range of threats of natural, technological, and anthropogenic origin. The emergence of global crises such as the COVID-19 pandemic and the ongoing threat of climate change have starkly illustrated the vulnerability of these widely ramified and interdependent systems, as well as the impossibility of predicting threats entirely. The pandemic, with its widespread and unexpected impacts, demonstrated how an external shock can bring even the most advanced systems to a standstill, while ongoing climate change continues to produce unprecedented risks to system stability and performance. These global crises underscore the need for systems that can not only withstand disruptions but also recover from them efficiently and rapidly. The concept of resilience and related developments encompass these requirements: analyzing, balancing, and optimizing the reliability, robustness, redundancy, adaptability, and recoverability of systems -- from both technical and economic perspectives. This cumulative dissertation therefore focuses on developing comprehensive and efficient tools for resilience-based analysis and decision-making for complex engineering systems. The newly developed resilience decision-making procedure is at the core of these developments. It is based on an adapted systemic risk measure, a time-dependent probabilistic resilience metric, and a grid search algorithm, and represents a significant innovation in that it enables decision-makers to identify an optimal balance between different types of resilience-enhancing measures while taking monetary aspects into account. Increasingly, system components exhibit significant inherent complexity, requiring them to be modeled as systems themselves; this leads to systems-of-systems with a high degree of complexity. To address this challenge, a novel methodology is derived by extending the previously introduced resilience framework to multidimensional use cases and synergistically merging it with an established concept from reliability theory, the survival signature. The new approach combines the advantages of both original components: a direct comparison of different resilience-enhancing measures from a multidimensional search space, leading to an optimal trade-off in terms of system resilience, and a significant reduction in computational effort due to the separation property of the survival signature. Once a subsystem structure has been computed -- a typically computationally expensive process -- any characterization of the probabilistic failure behavior of components can be evaluated without recomputing the structure. In reality, measurements, expert knowledge, and other sources of information are laden with multiple uncertainties. To address this, an efficient method based on the combination of the survival signature, fuzzy probability theory, and non-intrusive stochastic simulation (NISS) is proposed. The result is an efficient approach to quantifying the reliability of complex systems that takes the entire uncertainty spectrum into account.
The new approach, which synergizes the advantageous properties of its original components, achieves a significant decrease in computational effort due to the separation property of the survival signature. In addition, it attains a dramatic reduction in sample size due to the adapted NISS method: only a single stochastic simulation is required to account for uncertainties. The novel methodology not only represents an innovation in the field of reliability analysis but can also be integrated into the resilience framework. For a resilience analysis of existing systems, the consideration of continuous component functionality is essential. This is addressed in a further novel development. By introducing the continuous survival function and the concept of the Diagonal Approximated Signature as a corresponding surrogate model, the existing resilience framework can be usefully extended without compromising its fundamental advantages. In the context of the regeneration of complex capital goods, a comprehensive analytical framework is presented to demonstrate the transferability and applicability of all developed methods to complex systems of any type. The framework integrates the previously developed resilience, reliability, and uncertainty analysis methods. It provides decision-makers with a basis for identifying resilient regeneration paths in two respects: first, regeneration paths with inherent resilience, and second, regeneration paths that lead to maximum system resilience, taking into account technical and monetary factors affecting the complex capital good under analysis. In summary, this dissertation offers innovative contributions to efficient resilience analysis and decision-making for complex engineering systems. It presents universally applicable methods and frameworks that are flexible enough to accommodate system types and performance measures of any kind. This is demonstrated in numerous case studies ranging from arbitrary flow networks and functional models of axial compressors to substructured infrastructure systems with several thousand individual components.
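    The separation property mentioned above can be made concrete with a minimal sketch. Assuming a toy 2-out-of-3 system of exchangeable components (far simpler than the substructured systems treated in the dissertation), the survival signature Phi(l) is computed once by enumerating the structure function, after which system reliability can be re-evaluated for any component reliability without recomputing the structure.

from itertools import combinations
from math import comb

m = 3

def structure(working):
    # Toy structure function: the system works iff at least 2 components work.
    return len(working) >= 2

# Expensive step, done once: Phi(l) = fraction of size-l component subsets
# whose functioning keeps the system functioning.
phi = [sum(structure(set(s)) for s in combinations(range(m), l)) / comb(m, l)
       for l in range(m + 1)]

def system_survival(R):
    # Cheap, repeatable step: P(system works) for component reliability R,
    # i.e. the sum over l of Phi(l) * C(m,l) * R^l * (1-R)^(m-l).
    return sum(phi[l] * comb(m, l) * R**l * (1 - R)**(m - l)
               for l in range(m + 1))

# Different component failure models reuse the same phi without re-enumeration.
print(system_survival(0.9))  # 0.972 for the 2-out-of-3 toy system
print(system_survival(0.5))  # 0.5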

    The Efficacy of Analgesic Subdissociative Dose Ketamine in Trauma Casualties Treated by U.S. Military Special Operations Medical Professionals in a Prehospital Environment

    Research Focus. This study’s main objective was to determine the efficacy of subdissociative ketamine in reducing the pain of trauma casualties treated by U.S. military medical professionals in a prehospital environment, as evidenced by the 0–10 numeric rating scale (NRS) for pain. Research Methods. This quantitative study took a pragmatic approach integrating social cognitive theory, complemented by mixed methods drawing on qualitative phenomenology through narrative inquiry. This exploratory, retrospective, cross-sectional study, utilizing a quasi-experimental pretest-posttest design, performed secondary analysis of deidentified sample data (N = 47) recorded by U.S. Special Operations medical providers in a casualty data collection tool. Quantitative inclusion criteria were adult casualties treated by U.S. military medical professionals with ketamine in a prehospital environment, with documented injury data and both pre- and post-ketamine pain scores. Descriptive statistics were followed by inferential analyses using Shapiro-Wilk, Wilcoxon signed-rank, Spearman's rho, and Kruskal-Wallis tests. Additionally, phenomenology guided the analysis of two (n = 2) case studies; in vivo coding was used to develop themes and subthemes. Case studies collected from U.S. military medical professionals provided qualitative insight that reinforced the quantitative data and lent clinical validity to the study. Research Results/Findings. The study showed safe, efficacious use of analgesic subdissociative ketamine in prehospital trauma casualties relative to the 0–10 NRS for pain. The median reported pre-ketamine pain score was 9.0 (IQR 2); the median post-ketamine pain score was 0.0 (IQR 3). The mean total dosage of ketamine administered was 98.19 mg (SE = 9.545). Six (12.8%) casualties experienced side effects from ketamine, none of which were permanent or life-threatening. The case studies provided the human aspect of the study, reinforced the quantitative data, and provided clinical validity. Post-ketamine pain scores were better than pre-ketamine pain scores, and higher dosages of ketamine provided greater pain relief. No life-threatening adverse drug reactions were found in this study. Conclusions From Research. This study demonstrated safe, efficacious use of analgesic ketamine in prehospital trauma casualties by U.S. military special operations medical professionals relative to the 0–10 NRS for pain. The results may inform medical practitioners and policymakers regarding the efficacy of analgesic ketamine in a prehospital environment, aid informed treatment decisions for trauma casualties, and provide facts for updating and improving clinical practice guidelines and policies focused on the U.S. military. Advancing this understanding to promote better prehospital pain management guidelines, procedures, and practices is essential. Education efforts will make medical professionals aware of the importance of analgesic ketamine for trauma casualties in a prehospital environment.
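    For illustration of the paired, non-parametric comparison named above, the sketch below runs a Wilcoxon signed-rank test on synthetic pre- and post-ketamine NRS scores. The numbers are fabricated solely to show the shape of the analysis; they do not reproduce the study's data.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
pre = rng.integers(7, 11, size=47)                         # hypothetical 0-10 NRS before ketamine
post = np.clip(pre - rng.integers(5, 10, size=47), 0, 10)  # hypothetical post-ketamine scores

stat, p = wilcoxon(pre, post)                              # paired non-parametric test
print(f"W = {stat:.1f}, p = {p:.3g}")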

    Geometric Data Analysis: Advancements of the Statistical Methodology and Applications

    Data analysis has become fundamental to our society and comes in multiple facets and approaches. Nevertheless, in research and applications, the focus has primarily been on data from Euclidean vector spaces. Consequently, the majority of methods applied today are not suited to more general data types. Driven by needs from fields like image processing, (medical) shape analysis, and network analysis, more and more attention has recently been given to data from non-Euclidean spaces, particularly (curved) manifolds. This has led to the field of geometric data analysis, whose methods explicitly take the structure (for example, the topology and geometry) of the underlying space into account. This thesis contributes to the methodology of geometric data analysis by generalizing several fundamental notions from multivariate statistics to manifolds. We focus on two different viewpoints. First, we use Riemannian structures to derive a novel regression scheme for general manifolds that relies on splines of generalized Bézier curves. It can accurately model non-geodesic relationships, for example, time-dependent trends with saturation effects or cyclic trends. Since Bézier curves can be evaluated with the constructive de Casteljau algorithm, working with data from manifolds of high dimension (for example, a hundred thousand or more) is feasible. Building on the regression scheme, we further develop a hierarchical statistical model for an adequate analysis of longitudinal data in manifolds, and a method to control for confounding variables. Second, we focus on data that is not only manifold-valued but Lie group-valued, which is frequently the case in applications. We achieve this by endowing the group with an affine connection structure that is generally not Riemannian. Utilizing it, we derive generalizations of several well-known dissimilarity measures between data distributions that can be used for various tasks, including hypothesis testing. Invariance under data translations is proven, and a connection to continuous distributions is given for one measure. A further central contribution of this thesis is that it demonstrates use cases for all notions in real-world applications, particularly in problems from shape analysis in medical imaging and archaeology. We replicate or further quantify several known findings on shape changes of the femur and the right hippocampus under osteoarthritis and Alzheimer's disease, respectively. Furthermore, in an archaeological application, we obtain new insights into the construction principles of ancient sundials. Last but not least, we use the geometric structure underlying human brain connectomes to predict cognitive scores, obtaining state-of-the-art results with a sample selection procedure.
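    The constructive de Casteljau evaluation mentioned above carries over to manifolds by replacing straight-line interpolation with geodesics. The sketch below is a minimal illustration on the unit sphere (where the geodesic is a slerp) with hypothetical control points; the thesis's general Riemannian setting is not reproduced.

import numpy as np

def geodesic(p, q, t):
    # Point at parameter t on the unit-sphere geodesic from p to q (slerp).
    omega = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return p
    return (np.sin((1 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)

def de_casteljau(control_points, t):
    # Evaluate a manifold-valued Bezier curve by repeated geodesic interpolation.
    pts = [np.asarray(p, dtype=float) for p in control_points]
    while len(pts) > 1:
        pts = [geodesic(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# Hypothetical control points on the sphere S^2.
b = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]
print(de_casteljau(b, 0.5))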

    Outcome Measurement in Functional Neurological Symptom Disorder

    Outcome measurement in Functional Neurological Symptom Disorders (FNSDs) is particularly complex. Pressing questions include what kind of measure is more accurate or meaningful, and how to achieve standardisation in a clinically heterogeneous group where subjective and objective observations of the same construct may deviate. This project aimed to build on the limited knowledge of measuring outcomes in FNSDs and to address one of its inherent complexities: clinical aspects of the disorder confound the usual prioritisation of "objective" over "subjective" (or patient-rated) measures. This PhD comprised a literature review and three research studies, each using different measures to assess current status and (potential) outcomes in FNSD patients. A narrative description of systematically identified literature on stress, distress, and arousal measures in FNSD presents an overarching profile of the relationships between subjective and objective study measures. Eighteen studies (12 on functional seizures, six on other FNSDs) capturing 396 FNSD patients were included. Eleven reported no correlation between subjective and objective measures; only four reported significant correlations (r's = -0.74 to 0.59, p's < 0.05). The small number of studies and diverse methodologies limit the conclusions of this review. However, its findings underscore the importance of validating outcome measures in patients with FNSD, carefully selecting the measures most appropriate for the research objectives, and possibly combining different measures to triangulate a patient's current state, level of functioning, or disability. Study One used factor analysis and Rasch modelling to investigate the psychometric properties of a novel FNSD-specific resource-based measure developed as an outcome measure for psychological therapies (the sElf-efficacy, assertiveness, Social support, self-awareness and helpful thinking (EASE) questionnaire). A 4-factor model identified self-efficacy (SE), self-awareness/assertiveness (SA), social support (SS), and interpersonal illness burden (IIB) as relevant domains. Each latent scale fits the Rasch model after refinement of the category responses and removal of two items. With further improvement, the EASE-F has the potential to reliably measure the self-reported SE, SA, SS, and IIB constructs, which were found to be meaningful to patients with FNSD. This can identify patients with strengths and deficits in these constructs, allowing therapists to individualise interventions. Recommendations for refining future instrument versions, using the measure in clinical practice, and research in FNSDs are discussed. Study Two sought to understand urgent and emergency care (UEC) service usage patterns among FNSD patients. Retrospective FNSD patient data from 2013 to 2016 UEC records (including NHS 111 calls, ambulance services, A&E visits, and acute admissions) were used to compare FNSD UEC usage rates with those of the general population and to model rates before and after psychotherapy. FNSD patients displayed 23 to 60 times higher UEC usage than the general population. Emergency service usage rates showed a significant reduction in level (rate level change = -0.90 to -0.70, p's < 0.05) immediately after psychotherapy.
While this study was uncontrolled, and a causal relationship between psychotherapy and reduced UEC service use cannot be proven by its design, the decrease from pre-treatment service usage among FNSD patients mirrors treatment-related improvements in health status and functioning previously documented using self-reported outcome measures. Further research is warranted to elucidate features of emergency care service use by patients with FNSD, assess interventions' cost-effectiveness, and help optimise the allocation of limited health care resources. Study Three utilised a delay discounting and emotional bias task to assess whether these measures could indicate the health state of FNSD patients and to compare findings in patients with those in healthy controls. This online study collected data on cognitive-affective functioning, decision-making and, indirectly, emotion regulation, alongside self-reported health data and indicators of mood while completing the tasks. Delay discounting (DD) was steeper in patients with FNSD, indicating a preference for less subjectively valuable immediate rewards. Patients displayed priming and interference effects for angry and happy facial expressions, which differed from the interference effects observed in healthy controls (F(1,76) = 3.5, p = 0.037, η²p = 0.084). Modest associations (r's = 0.26 to 0.33, p's < 0.05) were found between the DD estimates and self-reported generalised anxiety, but not current feelings of anxiety, in FNSD. There were no correlations with indices of negative affective priming or interference. These measures did not show predictive ability for self-reported difficulty regulating emotions, anxiety, depression, or coping in FNSD. However, the fact that the DD task and self-reported constructs failed to correlate does not invalidate this objective test. The findings underscore the importance of a combined approach to outcome measurement. This project highlights the importance of a more comprehensive understanding of outcomes and of measures that capture clinically valid and meaningful health information. Given that subjective and objective measures capture different aspects of health state or function, a combination of measurement approaches will likely produce the most comprehensive understanding of patients' current state or treatment outcome. Because of the attentional, emotional, and perceptual alterations implicated in FNSD, and their variable external representations, the difference between objective and subjective measures represents an interesting observation in its own right. The size of the discrepancy between subjective and objective measures may provide additional valuable insights into the underlying pathology. Nonetheless, there is still a need for standardisation and consistency in FNSD outcome measurement and reporting. Several important factors, such as the timeframe of measures, the influence of confounding factors, and the varied presentations of any aspect of the disorder (e.g., physiological, cognitive, social, or behavioural presentations of arousal/stress), will need to be considered when designing and interpreting measurements for research or clinical analysis of this patient group.
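    As a concrete illustration of the delay-discounting measure used in Study Three, the sketch below fits the standard hyperbolic model V(D) = A / (1 + kD) to hypothetical indifference points; a larger fitted k corresponds to the steeper discounting reported in FNSD patients. The data points are invented for illustration only.

import numpy as np
from scipy.optimize import curve_fit

A = 100.0                                           # delayed reward amount
delays = np.array([1, 7, 30, 90, 180, 365], float)  # delay D in days
indiff = np.array([95, 80, 55, 35, 25, 15], float)  # hypothetical indifference points

def hyperbolic(D, k):
    # Mazur's hyperbolic discounting: subjective value of A delayed by D.
    return A / (1 + k * D)

(k_hat,), _ = curve_fit(hyperbolic, delays, indiff, p0=[0.01])
print(f"estimated discounting rate k = {k_hat:.4f}")  # larger k = steeper discounting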

    Gut-brain interactions affecting metabolic health and central appetite regulation in diabetes, obesity and aging

    The central aim of this thesis was to study the effects of gut microbiota on host energy metabolism and the central regulation of appetite. We specifically studied the interaction between gut microbiota-derived short-chain fatty acids (SCFAs), postprandial glucose metabolism, and central regulation of appetite. In addition, we studied probable determinants that affect this interaction, specifically host genetics, bariatric surgery, dietary intake, and hypoglycemic medication. First, we studied the involvement of microbiota-derived SCFAs in glucose tolerance. In an observational study, we found an association between the intestinal availability of the SCFAs acetate and butyrate and postprandial insulin and glucose responses. We then performed a clinical trial, administering acetate intravenously at a constant rate, and studied the effects on glucose tolerance and central regulation of appetite. The acetate intervention had no significant effect on these outcome measures, suggesting that the association between increased gastrointestinal SCFAs and metabolic health observed in the observational study is not paralleled by acute plasma elevations. Second, we looked at other determinants affecting gut-brain interactions in metabolic health and central appetite signaling. We first studied the relation between the microbiota and central appetite regulation in identical twin pairs discordant for BMI. We then studied the relation between microbial composition and post-surgery gastrointestinal symptoms after bariatric surgery, and we report the effects of increased protein intake on host microbiota composition and central regulation of appetite. Finally, we explored the effects of combination therapy with the GLP-1 agonist exenatide and the SGLT2 inhibitor dapagliflozin on brain responses to food stimuli.

    Monitoring Additive Manufacturing Machine Health

    Additive manufacturing (AM) allows the production of parts and goods with many benefits over more conventional manufacturing methods. AM permits more geometrically complex designs, custom and low-volume production runs, and the flexibility to produce a wide variety of parts on a single machine with reduced pre-production cost and time requirements. However, it can be difficult to determine the condition, or health, of an AM machine, since complex designs can increase the variability of part quality. With fewer parts produced, destructive testing is less desirable and statistical methods of tracking part quality may be less informative. Combined with the relatively more complex nature of AM machines, qualifying AM machines and monitoring their health to perform maintenance or repairs is a challenging task. We first present a case study that demonstrates the difficulty of monitoring the qualification of an AM machine. We then discuss some unique challenges AM presents when calibrating and measuring laser power, and we demonstrate the relative insufficiency of this method for tracking the qualification status of an AM machine and the quality of the parts produced. Next, we present a framework that reverses the directionality of monitoring AM machine health. Rather than monitoring machine subsystems and intermediate metrics reflective of part quality, we instead directly monitor part quality through a combination of witness builds and witness parts that provide observational data to define the health status of a machine. Witness builds provide more accurate data, separated from the noisy influence of parts and parameter settings, while witness parts provide more timely but less accurate data. Finally, machine health is modeled as a partially observed Markov decision process using the witness parts framework to maximize the long-term expected value per build. We show the superiority of this model by comparison with two less complex models, one that uses no witness parts and another that uses only witness builds. A case study shows the benefits of implementing the model, and a sensitivity analysis provides relevant insights and considerations.
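    As a rough sketch of the partially observed Markov decision process view described above, the example below implements the Bayesian belief update such a model performs after each build, with two hidden health states and a witness-part pass/fail observation. All transition and observation probabilities are hypothetical placeholders, not values from the dissertation.

import numpy as np

T = np.array([[0.95, 0.05],   # from healthy: mostly stays healthy
              [0.00, 1.00]])  # degraded machines stay degraded until repaired
O = np.array([[0.90, 0.10],   # P(pass | healthy), P(fail | healthy)
              [0.40, 0.60]])  # P(pass | degraded), P(fail | degraded)

def belief_update(belief, obs):
    # One build: propagate degradation, then condition on the witness result
    # (obs = 0 for a passing witness part, 1 for a failing one).
    predicted = belief @ T            # time update
    unnorm = predicted * O[:, obs]    # measurement update (Bayes' rule)
    return unnorm / unnorm.sum()

b = np.array([1.0, 0.0])              # start certain the machine is healthy
for obs in (0, 0, 1, 1):              # hypothetical witness-part results
    b = belief_update(b, obs)
    print(f"P(degraded) = {b[1]:.3f}")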