881 research outputs found
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence ("AI") and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to develop largely without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
One stone, two birds: A lightweight multidimensional learned index with cardinality support
Innovative learning-based structures have recently been proposed to tackle index and cardinality estimation tasks, specifically learned indexes and data-driven cardinality estimators. These structures exhibit excellent performance in capturing data distribution, making them promising for integration into AI-driven database kernels. However, accurate estimation for corner-case queries requires a large number of network parameters, resulting in higher computing costs on expensive GPUs and greater storage overhead. Additionally, implementing cardinality estimation (CE) and the learned index separately wastes storage by encoding the distribution of a single table twice. These issues pose challenges for designing AI-driven database kernels: in real database scenarios, a compact kernel is necessary to process queries within a limited storage and time budget, and directly integrating the two AI approaches would yield a heavy, complex kernel due to the large number of network parameters and the duplicated storage of distribution parameters. Our proposed CardIndex structure effectively kills two birds with one stone. It is a fast multidimensional learned index that also serves as a lightweight cardinality estimator, with parameters scaled at the KB level. Owing to its special structure and small parameter size, it can obtain both CDF and PDF information for tuples with a latency as low as 1 to 10 microseconds. For low-selectivity estimation tasks, rather than increasing the model's parameters to obtain fine-grained point density, we fully exploit our structure's characteristics and propose a hybrid estimation algorithm that provides fast and exact results.
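The dual role described above, one CDF model serving both as an index and as a cardinality estimator, can be sketched minimally. The toy class below is an illustrative one-dimensional linear CDF approximation under assumed names, not the CardIndex architecture itself:

```python
import bisect

class ToyLearnedIndex:
    """Toy 1-D learned index: a linear CDF model over sorted keys.

    Illustrative sketch only -- CardIndex is a multidimensional
    structure; this just shows how a single CDF model can serve
    both as an index and as a range-cardinality estimator.
    """

    def __init__(self, keys):
        self.keys = sorted(keys)
        self.n = len(self.keys)
        lo, hi = self.keys[0], self.keys[-1]
        self.scale = (self.n - 1) / (hi - lo) if hi > lo else 0.0
        self.lo = lo

    def cdf_position(self, key):
        # Predicted rank of `key` (the index role).
        return (key - self.lo) * self.scale

    def lookup(self, key, err=8):
        # Search only a small window around the predicted position.
        pos = int(self.cdf_position(key))
        lo = max(0, pos - err)
        hi = min(self.n, pos + err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < self.n and self.keys[i] == key else None

    def estimate_cardinality(self, low, high):
        # The estimator role: |CDF(high) - CDF(low)| tuples in range.
        return max(0.0, self.cdf_position(high) - self.cdf_position(low))

idx = ToyLearnedIndex(range(0, 1000, 2))   # keys 0, 2, ..., 998
hit = idx.lookup(500)                       # exact position of key 500
est = idx.estimate_cardinality(100, 200)    # estimated keys in [100, 200)
```

Because the same tiny model answers both questions, nothing about the table's distribution is stored twice, which is the storage redundancy the abstract targets.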
Uncertainty Quantification for Molecular Property Predictions with Graph Neural Architecture Search
Graph Neural Networks (GNNs) have emerged as a prominent class of data-driven
methods for molecular property prediction. However, a key limitation of typical
GNN models is their inability to quantify uncertainties in the predictions.
This capability is crucial for ensuring the trustworthy use and deployment of
models in downstream tasks. To that end, we introduce AutoGNNUQ, an automated
uncertainty quantification (UQ) approach for molecular property prediction.
AutoGNNUQ leverages architecture search to generate an ensemble of
high-performing GNNs, enabling the estimation of predictive uncertainties. Our
approach employs variance decomposition to separate data (aleatoric) and model
(epistemic) uncertainties, providing valuable insights for reducing them. In
our computational experiments, we demonstrate that AutoGNNUQ outperforms
existing UQ methods in terms of both prediction accuracy and UQ performance on
multiple benchmark datasets. Additionally, we utilize t-SNE visualization to
explore correlations between molecular features and uncertainty, offering
insight for dataset improvement. AutoGNNUQ has broad applicability in domains
such as drug discovery and materials science, where accurate uncertainty
quantification is crucial for decision-making.
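The variance decomposition mentioned above can be illustrated with the standard deep-ensemble recipe (a generic formulation, not AutoGNNUQ's code): each ensemble member predicts a mean and a variance; the average predicted variance gives the aleatoric (data) part, and the variance of the predicted means gives the epistemic (model) part.

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Split ensemble predictive uncertainty into aleatoric and
    epistemic components via the law of total variance.

    means, variances: arrays of shape (n_models, n_samples) holding
    each ensemble member's predicted mean and predicted variance.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    aleatoric = variances.mean(axis=0)   # E_m[sigma_m^2]: data noise
    epistemic = means.var(axis=0)        # Var_m[mu_m]: model disagreement
    total = aleatoric + epistemic
    return aleatoric, epistemic, total

# Three hypothetical ensemble members scoring two test molecules:
mu = [[1.0, 2.0], [1.2, 2.0], [0.8, 2.0]]
var = [[0.10, 0.30], [0.10, 0.30], [0.10, 0.30]]
alea, epis, tot = decompose_uncertainty(mu, var)
```

In this toy input the members agree perfectly on the second molecule, so its epistemic term is zero and all remaining uncertainty is aleatoric; disagreement on the first molecule shows up only in the epistemic term.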
Applying machine learning: a multi-role perspective
Machine (and deep) learning technologies are increasingly present in many fields. It is undeniable that many aspects of our society are empowered by such technologies: web searches, content filtering on social networks, recommendations on e-commerce websites, mobile applications, etc., in addition to academic research. Moreover, mobile devices and internet sites, e.g., social networks, support the collection and sharing of information in real time. The pervasive deployment of these technological instruments, both hardware and software, has led to the production of huge amounts of data. Such data has become increasingly unmanageable, posing challenges to conventional computing platforms and paving the way for the development and widespread use of machine and deep learning. Nevertheless, machine learning is not only a technology. Given a task, machine learning is a way of proceeding (a way of thinking), and as such can be approached from different perspectives (points of view). This, in particular, is the focus of this research. The entire work concentrates on machine learning, starting from different sources of data, e.g., signals and images, applied to different domains, e.g., Sport Science and Social History, and analyzed from different perspectives: from a non-data-scientist point of view, through tools and platforms; setting up a problem from scratch; implementing an effective application for classification tasks; and improving the user interface experience through Data Visualization and eXtended Reality. In essence, not only in a quantitative task, not only in a scientific environment, and not only from a data scientist's perspective, machine (and deep) learning can make the difference.
Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific Tensor Decomposition
Neural Network designs are quite diverse, from VGG-style to ResNet-style, and
from Convolutional Neural Networks to Transformers. Towards the design of
efficient accelerators, many works have adopted a dataflow-based, inter-layer
pipelined architecture, with a customised hardware towards each layer,
achieving ultra high throughput and low latency. The deployment of neural
networks to such dataflow architecture accelerators is usually hindered by the
available on-chip memory as it is desirable to preload the weights of neural
networks on-chip to maximise the system performance. To address this, networks
are usually compressed before the deployment through methods such as pruning,
quantization and tensor decomposition. In this paper, a framework for mapping
CNNs onto FPGAs based on a novel tensor decomposition method called Mixed-TD is
proposed. The proposed method applies layer-specific Singular Value
Decomposition (SVD) and Canonical Polyadic Decomposition (CPD) in a mixed
manner, achieving gains of 1.73x to 10.29x in throughput per DSP compared to state-of-the-art CNNs.
Our work is open-sourced: https://github.com/Yu-Zhewen/Mixed-TD
Comment: accepted by FPL2023
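The SVD half of such layer-specific compression can be sketched generically (a plain truncated-SVD factorization under assumed shapes, not the Mixed-TD method itself): a layer's weight matrix W is replaced by two thin factors, shrinking the parameters that must be preloaded on-chip.

```python
import numpy as np

def svd_compress(W, rank):
    """Approximate W (m x n) by factors A (m x rank) and B (rank x n).

    Parameter count drops from m*n to rank*(m + n); on a dataflow
    accelerator this reduces the on-chip weight storage per layer.
    Generic illustration, not the Mixed-TD implementation.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
A, B = svd_compress(W, rank=16)
params_before = W.size            # 64 * 64 = 4096
params_after = A.size + B.size    # 16 * (64 + 64) = 2048
```

At inference time the matrix product `x @ W` is replaced by `(x @ A) @ B`, trading a small approximation error for the halved parameter count; picking the rank (and the SVD-versus-CPD choice) per layer is exactly the design space the paper's "mixed" decomposition explores.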
Capability-Based Routes for Autonomous Vehicles
The pursuit of vehicle automation is an ongoing trend in the automotive industry. Particularly challenging is the goal of introducing driverless autonomous vehicles (AVs) into road traffic. To realize this vision, a targeted development of autonomous driving functions is essential. However, a targeted development process is only possible if the driving functions are tailored as appropriately and completely as possible to the operational design domain (ODD). Regardless of use case, all AVs have one thing in common: driving at least one route from A to B - whether simple or complex. For operational purposes, it is therefore necessary to ensure that the driving requirements (DRs) of the potential routes within the ODD do not exceed the driving capabilities (DCs) of the AVs. Currently, there is no approach that accomplishes the identification of exceeded capabilities.
This work presents a method for route-based specification of DRs and DCs for AVs. It addresses the core research question of how to identify routes with DRs that do not exceed the DCs of AVs. An initial analysis reveals the dependencies between route and DRs. Thereby, the scenery defined in the ODD is found to be a fundamental basis for the specification of behavioral requirements as part of the DRs. In combination with the applicable traffic rules, the scenery elements define the behavioral limits for AVs. These limits are specifically extracted and classified as behavioral demands from the scenery using an analysis of these combinations. To enable a route-based specification of DRs, the behavioral demands are modeled as behavior spaces and transformed into a generic map representation - the Behavior-Semantic Scenery Description (BSSD).
Based on the BSSD, a method is developed that generates behavioral requirements based on the route-constrained concatenation of behavior spaces. As a result, in addition to the method itself, the associated behavioral requirements are available as a basis for the route-based specification of DRs and DCs. Constraints for the specification are defined by the developed concept for the matching of DRs and DCs. It is shown that the DRs are strongly dependent on the geometry and properties of the scenery elements, so that equal behavioral requirements do not necessarily imply equal DRs. These dependencies are used for the specification, enabling the definition of matching criteria for a selection of DRs and corresponding DCs. To realize the matching, a capability-based route search is developed and implemented. The route search incorporates all elaborated results of the work, enabling the whole approach to be evaluated by applying it to a real road network. The evaluation shows that identifying feasible routes for AVs based on the scenery is possible, and it reveals which hurdles, arising from identified deficits, still have to be overcome.
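The core matching idea, a route is feasible only if no behavior space along it demands more than the AV's capabilities provide, can be sketched with a deliberately simplified data model (the names, attributes and levels below are hypothetical; the BSSD itself is far richer):

```python
# Hypothetical minimal model of capability-based route filtering:
# each behavior space on a route carries per-attribute driving
# demands (DRs); a route is feasible only if every demand is
# covered by the AV's corresponding driving capability (DC).

from dataclasses import dataclass, field

@dataclass
class BehaviorSpace:
    name: str
    demands: dict = field(default_factory=dict)  # attribute -> required level

def route_feasible(route, capabilities):
    """True if no behavior space on the route exceeds the AV's DCs."""
    for space in route:
        for attr, level in space.demands.items():
            if capabilities.get(attr, 0) < level:
                return False
    return True

def feasible_routes(routes, capabilities):
    # Capability-based route search over candidate routes.
    return [r for r in routes if route_feasible(r, capabilities)]

urban = [BehaviorSpace("intersection", {"turn": 2, "yield": 1})]
highway = [BehaviorSpace("merge", {"speed": 3, "lane_change": 2})]
av_caps = {"turn": 2, "yield": 1, "speed": 2, "lane_change": 2}
ok = feasible_routes([urban, highway], av_caps)
```

Here only the urban route survives the filter, because the merge space demands a speed capability level the hypothetical AV lacks; this is the "identification of exceeded capabilities" the work formalizes over real scenery elements and traffic rules.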
Does the definition of a novel environment affect the ability to detect cryptic genetic variation?
Anthropogenic change exposes populations to environments that have been rare or entirely absent from their evolutionary past. Such novel environments are hypothesized to release cryptic genetic variation, a hidden store of variance that can fuel evolution. However, support for this hypothesis is mixed. One possible reason is a lack of clarity in what is meant by "novel environment", an umbrella term encompassing conditions with potentially contrasting effects on the exposure or concealment of cryptic variation. Here, we use a meta-analysis approach to investigate changes in the total genetic variance of multivariate traits in ancestral versus novel environments. To determine whether the definition of a novel environment could explain the mixed support for a release of cryptic genetic variation, we compared absolute novel environments, those not represented in a population's evolutionary past, to extreme novel environments, those involving frequency or magnitude changes to environments present in a population's ancestry. Despite sufficient statistical power, we detected no broad-scale pattern of increased genetic variance in novel environments, and found that the type of novel environment did not explain any significant variation in effect sizes. When effect sizes were partitioned by experimental design, we found increased genetic variation in studies based on broad-sense measures of variance, and decreased variation in narrow-sense studies, in support of previous research. Therefore, the source of genetic variance, not the definition of a novel environment, was key to understanding environment-dependent genetic variation, highlighting non-additive genetic variance as an important component of cryptic genetic variation and an avenue for future research.
- …