441 research outputs found
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence ("AI") and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Specialized translation at work for a small, expanding company: my experience internationalizing Bioretics© S.r.l. into Chinese
Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another force has emerged to propel these unifying drives: Artificial Intelligence, with its technologies that aim to reproduce human cognitive abilities in machines. The "Language Toolkit - Le lingue straniere al servizio dell'internazionalizzazione dell'impresa" project (roughly, "foreign languages at the service of company internationalization"), promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is precisely within this project that this dissertation has been conceived. Indeed, its purpose is to present the translation and localization project from English into Chinese of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website and part of the installation and use manual of the Aliquis© framework software, its flagship product. This dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market both in Italy and in China; Chapter 3 provides the theoretical foundations for every aspect related to Specialized Translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.
Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning
Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and
direct surgical procedures, and to track the development of bone-related diseases. This often
involves radiologists who have to annotate bones manually or in a semi-automatic way, which is
a time-consuming task. Their annotation workload can be reduced by automated segmentation
and detection of individual bones. This automation of distinct bone segmentation not only has
the potential to accelerate current workflows but also opens up new possibilities for processing
and presenting medical data for planning, navigation, and education.
In this thesis, we explored the use of deep learning for automating the segmentation of all individual bones within an upper-body CT scan. To do so, we had to find a network architecture that provides a good trade-off between the problem's high computational demands and the results' accuracy. After finding a baseline method and having enlarged the dataset, we set out to eliminate the most prevalent types of error. To this end, we introduced a novel method called binary-prediction-enhanced multi-class (BEM) inference, which separates the task into two: distinguishing bone from non-bone is conducted separately from identifying the individual bones. Both predictions are then merged, which leads to superior results. Another type of error is tackled by our developed architecture, the Sneaky-Net, which receives additional inputs with larger fields of view but at a smaller resolution. We can thus sneak more extensive areas of the input into the network while keeping the growth of additional pixels in check.
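The merging step of BEM inference can be sketched in a few lines. The function name, array shapes, and thresholding rule below are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def bem_inference(binary_probs, multiclass_probs, threshold=0.5):
    """Sketch of binary-prediction-enhanced multi-class (BEM) inference.

    binary_probs:     (H, W) probability that a voxel is bone (any class).
    multiclass_probs: (C, H, W) per-class probabilities; class 0 = background.
    Returns an (H, W) label map: 0 for non-bone, otherwise the most likely
    individual bone class among classes 1..C-1.
    """
    # Which individual bone is most likely, ignoring the background class.
    bone_labels = np.argmax(multiclass_probs[1:], axis=0) + 1
    # Keep a bone label only where the binary branch says "bone".
    is_bone = binary_probs >= threshold
    return np.where(is_bone, bone_labels, 0)
```

The point of the merge is that the binary branch decides *where* bone is, while the multi-class branch only decides *which* bone it is.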
Overall, we present a deep-learning-based method that reliably segments most of the over one hundred distinct bones present in upper-body CT scans in an end-to-end trained manner, quickly enough to be used in interactive software. Our algorithm has been included in our group's virtual reality medical image visualisation software SpectoVR, with the plan to be used as one of the puzzle pieces in surgical planning and navigation, as well as in the education of future doctors.
Weakly Supervised Learning for Breast Cancer Prediction on Mammograms in Realistic Settings
Automatic methods for early detection of breast cancer on mammography can
significantly decrease mortality. Broad uptake of those methods in hospitals is
currently hindered because the methods have too many constraints. They assume
annotations are available for single images or even regions of interest (ROIs), and
a fixed number of images per patient. Neither assumption holds in a general
hospital setting. Relaxing those assumptions results in a weakly supervised
learning setting, where labels are available per case, but not for individual
images or ROIs. Not all images taken for a patient contain malignant regions
and the malignant ROIs cover only a tiny part of an image, whereas most image
regions represent benign tissue. In this work, we investigate a two-level
multi-instance learning (MIL) approach for case-level breast cancer prediction
on two public datasets (1.6k and 5k cases) and an in-house dataset of 21k
cases. Observing that breast cancer is usually only present in one side, while
images of both breasts are taken as a precaution, we propose a domain-specific
MIL pooling variant. We show that two-level MIL can be applied in realistic
clinical settings where only case labels and a variable number of images per
patient are available. Data in realistic settings scales with continuous
patient intake, while manual annotation efforts do not. Hence, research should
focus in particular on unsupervised ROI extraction, in order to improve breast
cancer prediction for all patients.
Comment: 10 pages, 5 figures, 5 tables
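The side-aware pooling idea above can be sketched compactly. The mean-then-max scheme and all names here are illustrative assumptions, not the paper's exact pooling operator:

```python
import numpy as np

def side_aware_mil_pooling(image_scores, sides):
    """Sketch of a domain-specific MIL pooling for case-level prediction.

    image_scores: per-image malignancy scores for one case (variable length).
    sides:        'L'/'R' side label for each image.

    Images are first pooled within each breast side (mean), then the max is
    taken over sides, reflecting that cancer is usually confined to one breast.
    """
    scores = np.asarray(image_scores, dtype=float)
    side_arr = np.array(sides)
    side_scores = [scores[side_arr == s].mean() for s in sorted(set(sides))]
    return max(side_scores)  # case-level malignancy score
```

Because the pooling is defined over however many images a case contains, it naturally handles a variable number of images per patient.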
Adaptive novelty detection with false discovery rate guarantee
This paper studies the semi-supervised novelty detection problem where a set
of "typical" measurements is available to the researcher. Motivated by recent
advances in multiple testing and conformal inference, we propose AdaDetect, a
flexible method that is able to wrap around any probabilistic classification
algorithm and control the false discovery rate (FDR) on detected novelties in
finite samples without any distributional assumption other than
exchangeability. In contrast to classical FDR-controlling procedures that are
often committed to a pre-specified p-value function, AdaDetect learns the
transformation in a data-adaptive manner to focus the power on the directions
that distinguish between inliers and outliers. Inspired by the multiple testing
literature, we further propose variants of AdaDetect that are adaptive to the
proportion of nulls while maintaining the finite-sample FDR control. The
methods are illustrated on synthetic and real-world datasets,
including an application in astrophysics.
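The final calibration step of such a procedure can be sketched as follows, assuming novelty scores have already been produced by some trained classifier. This simplified conformal-p-value-plus-Benjamini-Hochberg sketch follows the general recipe described above, not the authors' released code:

```python
import numpy as np

def adadetect_bh(cal_scores, test_scores, alpha=0.1):
    """Flag novelties with finite-sample FDR control (sketch).

    cal_scores:  novelty scores of held-out "typical" (null) points.
    test_scores: novelty scores of the test points.
    Returns a boolean array marking detected novelties.
    """
    cal = np.asarray(cal_scores, dtype=float)
    test = np.asarray(test_scores, dtype=float)
    n = len(cal)
    # Conformal p-value: rank of each test score among calibration scores.
    pvals = np.array([(1 + np.sum(cal >= t)) / (n + 1) for t in test])
    # Benjamini-Hochberg step-up procedure at level alpha.
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

Exchangeability of calibration and test nulls is what makes the conformal p-values valid regardless of which classifier produced the scores.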
Are Natural Domain Foundation Models Useful for Medical Image Classification?
The deep learning field is converging towards the use of general foundation
models that can be easily adapted for diverse tasks. While this paradigm shift
has become common practice within the field of natural language processing,
progress has been slower in computer vision. In this paper we attempt to
address this issue by investigating the transferability of various
state-of-the-art foundation models to medical image classification tasks.
Specifically, we evaluate the performance of five foundation models, namely
SAM, SEEM, DINOv2, BLIP, and OpenCLIP across four well-established medical
imaging datasets. We explore different training settings to fully harness the
potential of these models. Our study shows mixed results. DINOv2 consistently
outperforms the standard practice of ImageNet pretraining. However, other
foundation models failed to consistently beat this established baseline,
indicating limitations in their transferability to medical image classification
tasks.
Comment: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024)
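One common training setting for assessing such transferability is linear probing: fitting a small classifier on frozen backbone features. A minimal sketch, using plain logistic regression on stand-in feature arrays (in practice these would be, e.g., DINOv2 embeddings; the paper's actual protocols may differ):

```python
import numpy as np

def linear_probe(train_feats, train_labels, test_feats, lr=0.1, steps=500):
    """Fit a logistic-regression head on frozen backbone features (sketch).

    train_feats/test_feats: (N, D) feature arrays from a frozen model.
    train_labels: binary labels in {0, 1}.
    Returns predicted labels for test_feats.
    """
    w = np.zeros(train_feats.shape[1])
    b = 0.0
    y = np.asarray(train_labels, dtype=float)
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(train_feats @ w + b)))  # sigmoid
        grad = p - y                                  # dLoss/dlogit
        w -= lr * train_feats.T @ grad / len(y)
        b -= lr * grad.mean()
    return (1 / (1 + np.exp(-(test_feats @ w + b))) >= 0.5).astype(int)
```

Because only the linear head is trained, differences in probe accuracy directly reflect the quality of each frozen backbone's features.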
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall, which radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
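The scanning mechanism can be summarized by the standard leaky-wave relation (a textbook approximation, not a formula taken from this paper): the main-beam angle $\theta$, measured from broadside, satisfies

\[
\sin\theta \approx \frac{\beta(\varepsilon_{r,\mathrm{LC}})}{k_0},
\]

where $\beta$ is the phase constant of the leaky mode and $k_0$ the free-space wavenumber. Applying the DC bias changes the effective permittivity $\varepsilon_{r,\mathrm{LC}}$ of the liquid crystal, which shifts $\beta$ and therefore steers the beam while the operating frequency stays fixed.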
Applications of machine learning for data-driven prevention at the population level
Healthcare costs are systematically rising, and current therapy-focused healthcare systems are not sustainable in the long run. While disease prevention is a viable instrument for reducing costs and suffering, it requires risk modeling to stratify populations, identify high-risk individuals and enable personalized interventions. In current clinical practice, however, systematic risk stratification is limited: on the one hand, for the vast majority of endpoints, no risk models exist. On the other hand, available models focus on predicting a single disease at a time, rendering predictor collection burdensome. At the same time, the density of individual patient data is constantly increasing. Especially complex data modalities, such as -omics measurements or images, may contain systemic information on future health trajectories relevant for multiple endpoints simultaneously. However, to date, this data is inaccessible for risk modeling as no dedicated methods exist to extract clinically relevant information. This study built on recent advances in machine learning to investigate the applicability of four distinct data modalities not yet leveraged for risk modeling in primary prevention. For each data modality, a neural network-based survival model was developed to extract predictive information, scrutinize performance gains over commonly collected covariates, and pinpoint potential clinical utility. Notably, the developed methodology was able to integrate polygenic risk scores for cardiovascular prevention, outperforming existing approaches and identifying benefiting subpopulations. Investigating NMR metabolomics, the developed methodology allowed the prediction of future disease onset for many common diseases at once, indicating potential applicability as a drop-in replacement for commonly collected covariates.
Extending the methodology to phenome-wide risk modeling, electronic health records were found to be a general source of predictive information with high systemic relevance for thousands of endpoints. Assessing retinal fundus photographs, the developed methodology identified diseases where retinal information most impacted health trajectories. In summary, the results demonstrate the capability of neural survival models to integrate complex data modalities for multi-disease risk modeling in primary prevention and illustrate the tremendous potential of machine learning models to disrupt medical practice toward data-driven prevention at population scale.
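A standard loss for training such neural survival models is the negative Cox log partial likelihood; the sketch below shows this common choice as an assumed simplification, since the abstract does not state the exact loss used:

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk_scores, times, events):
    """Negative Cox log partial likelihood (sketch).

    risk_scores: model outputs, one per individual (higher = riskier).
    times:       follow-up times.
    events:      1 if the endpoint occurred, 0 if censored.
    """
    r = np.asarray(risk_scores, dtype=float)
    t = np.asarray(times, dtype=float)
    e = np.asarray(events)
    loss = 0.0
    for i in np.nonzero(e)[0]:           # sum over observed events only
        at_risk = t >= t[i]              # risk set: still under follow-up
        loss -= r[i] - np.log(np.sum(np.exp(r[at_risk])))
    return loss / max(np.sum(e), 1)
```

Because the loss only compares risk scores within each risk set, it handles right-censored individuals without needing their (unobserved) event times.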
Artificial Intelligence at the Service of Medical Imaging in the Detection of Breast Tumors
Artificial intelligence is currently capable of imitating clinical reasoning in order to make a diagnosis, in particular that of breast cancer. This is possible thanks to the exponential increase in medical images. Indeed, artificial intelligence systems are used to assist doctors, not replace them. Breast cancer is a cancerous tumor that can invade and destroy nearby tissue. Therefore, early and reliable detection of this disease is a great asset for the medical field. Clinicians use medical imaging techniques to diagnose this disease. Given the drawbacks of these techniques and the diagnostic errors caused by fatigue or inexperience, this work shows how artificial intelligence methods, in particular artificial neural networks (ANNs), deep learning (DL), support vector machines (SVMs), expert systems and fuzzy logic, can be applied to breast imaging, with the aim of improving the detection of this global scourge. Finally, the proposed system is composed of two essential steps: a tumor detection phase and a diagnostic phase, the latter deciding whether the tumor is benign or malignant.
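The two-phase structure (detect, then diagnose) can be sketched as a toy pipeline. Every threshold, weight and feature below is an illustrative assumption standing in for the trained models, not the proposed system itself:

```python
import numpy as np

def detect_tumor(image, threshold=0.6):
    """Toy detection phase: flag bright regions as tumor candidates
    (a stand-in for a trained ANN/SVM detector)."""
    mask = image > threshold
    return mask if mask.any() else None

def diagnose(region_features, weights=np.array([0.8, 0.5]), bias=-1.0):
    """Toy diagnostic phase: linear score over hand-crafted region
    features (relative size, mean intensity); weights are illustrative."""
    score = region_features @ weights + bias
    return "malignant" if score > 0 else "benign"

def pipeline(image):
    """Run detection first; only detected regions reach the diagnosis step."""
    mask = detect_tumor(image)
    if mask is None:
        return "no tumor detected"
    feats = np.array([mask.mean(), image[mask].mean()])
    return diagnose(feats)
```

Separating the two phases mirrors the clinical workflow: detection narrows attention to suspicious regions, and only those regions are classified as benign or malignant.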
Visualizing a Task Performer's Gaze to Foster Observers' Performance and Learning: A Systematic Literature Review on Eye Movement Modeling Examples
Eye movement modeling examples (EMMEs) are instructional videos (e.g., tutorials) that visualize another person's gaze location while they demonstrate how to perform a task. This systematic literature review provides a detailed overview of studies on the effects of EMME to foster observers' performance and learning and highlights their differences in EMME designs. Through a broad, systematic search on four relevant databases, we identified 72 EMME studies (78 experiments). First, we created an overview of the different study backgrounds. Studies most often taught tasks from the domains of sports/physical education, medicine, aviation, and STEM areas and had different rationales for displaying EMME. Next, we outlined how studies differed in terms of participant characteristics, task types, and the design of the EMME materials, which makes it hard to infer how these differences affect performance and learning. Third, we concluded that the vast majority of the experiments showed at least some positive effects of EMME during learning, on tests directly after learning, and tests after a delay. Finally, our results provide a first indication of which EMME characteristics may positively influence learning. Future research should start to more systematically examine the effects of specific EMME design choices for specific participant populations and task types.