Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Optimizing Program Efficiency with Loop Unroll Factor Prediction
Loop unrolling is a well-established code transformation technique that can improve the runtime performance of a program. The key benefit of unrolling a loop is that it typically requires fewer instruction executions than the original loop. However, determining the optimal unroll factor is a critical concern. This paper presents a novel method for predicting the optimal unroll factor for a given program. Specifically, a dataset is constructed that records the execution times of several programs at varying loop unroll factors. The programs are sourced from benchmarks such as PolyBench and Shootout, among others. Similarity measures between an unseen program and the existing programs are computed, and the three most similar programs are identified. The unroll factor that yielded the greatest reduction in execution time for those programs is selected as the candidate for the unseen program. Experimental results demonstrate that the proposed method improves the performance of the training programs by approximately 13%, 18%, 19%, and 21% for unroll factors of 2, 4, 6, and 8, respectively. For the unseen programs, the speedup is approximately 37.7% across five programs.
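To make the transformation itself concrete (an illustrative sketch, not code from the paper; the function and array names are hypothetical), here is a simple reduction loop next to the same loop unrolled by a factor of 4:

```c
#include <stddef.h>

/* Original loop: one add per iteration, paying the loop overhead
   (compare, branch, increment) on every element. */
double sum_rolled(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Unrolled by a factor of 4: the loop overhead is paid once per four
   elements. A remainder loop handles n not divisible by 4. */
double sum_unrolled4(const double *a, size_t n) {
    double s = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; i++)          /* remainder */
        s += a[i];
    return s;
}
```

The trade-off the paper's predictor navigates is visible here: larger factors amortize more overhead but increase code size and register pressure, so the best factor varies per program.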
Closed-loop control system and hardware-aware compilation protocols for quantum simulation with neutral atoms in optical trap arrays
Quantum materials offer tremendous potential for advancing electronic devices beyond traditional semiconductor-based technologies. Understanding the dynamics of these materials requires the use of quantum simulators. Quantum simulators are controlled many-body quantum systems that mimic the dynamics of a targeted quantum system. The three key
features of a quantum simulator are controllability, scalability, and interactability. Controllability denotes the ability to address an individual quantum system. Scalability refers to extending this control to multiple quantum systems while maintaining their interconnectivity with a polynomial increase in resources. Interactability, on the other hand, denotes the capability to establish strong tunable interactions between a pair of quantum systems.
This thesis addresses the challenges of attaining controllability and scalability within the current Noisy Intermediate-Scale Quantum (NISQ) era, characterized by limited and error-prone qubits, for a neutral atom-based quantum simulator.
The constraints in qubit interconnectivity necessitate additional swap gates for operations between non-adjacent qubits, increasing errors. To reduce these gate-based errors, we improve qubit interconnectivity by displacing atoms during simulation, thus enhancing our simulator’s scalability. We compare approaches with and without atom displacement analytically and numerically, employing metrics such as circuit fidelity and quantum volume. Our analysis introduces a novel metric for comparing compilation protocols that incorporate atom displacement. Additionally, we establish an inequality involving this metric to compare operational protocols with and without atom displacement. We conclude from our quantum volume study that protocols assisted by atom displacement can achieve a quantum volume of 2^7, a significant improvement over the 2^6 attainable without atom displacement, assuming a state-of-the-art two-qubit gate infidelity of 5e-3 and an atom displacement infidelity of 1.8e-4.
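A back-of-the-envelope model illustrates why swap overhead matters (a simplified nearest-neighbour picture, not the thesis's metric or protocol): on a 1D chain, a two-qubit gate between distant qubits must be preceded by SWAPs, each typically decomposing into three native two-qubit gates, so fidelity decays quickly with distance:

```python
def swaps_for_gate(q1: int, q2: int) -> int:
    """SWAP gates needed to make two qubits adjacent on a 1D chain
    with nearest-neighbour connectivity (illustrative model only)."""
    return max(abs(q1 - q2) - 1, 0)

def circuit_fidelity(distance: int, gate_infidelity: float = 5e-3) -> float:
    """Rough fidelity of one logical two-qubit gate between qubits at the
    given chain distance: each SWAP costs three native two-qubit gates,
    plus one for the gate itself."""
    n_native = 3 * swaps_for_gate(0, distance) + 1
    return (1.0 - gate_infidelity) ** n_native
```

In this toy model, displacing atoms so that interacting qubits become adjacent removes the SWAP overhead entirely, which is the intuition behind the displacement-assisted protocols above.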
Implementing a dedicated closed-loop control and acquisition system showcases our simulator’s controllability. The system integrates machine learning techniques to automate experiment composition, execution, and analysis, resulting in faster and automated control parameter optimization. A practical demonstration of this optimization is conducted through imaging an atomic cloud composed of Rb-87 atoms, the first step in undertaking quantum simulations with neutral atom arrays.
The research presented in this thesis contributes to the understanding and advancement of quantum simulators, paving the way for the development of new devices based on quantum materials.
Self-Supervised Shape and Appearance Modeling via Neural Differentiable Graphics
Inferring 3D shape and appearance from natural images is a fundamental challenge in computer vision. Despite recent progress using deep learning methods, a key limitation is the availability of annotated training data, as acquisition is often very challenging and expensive, especially at a large scale. This thesis proposes to incorporate physical priors into neural networks that allow for self-supervised learning.
As a result, easy-to-access unlabeled data can be used for model training. In particular, novel algorithms in the context of 3D reconstruction and texture/material synthesis are introduced, where only image data is available as supervisory signal.
First, a method that learns to reason about 3D shape and appearance solely from unstructured 2D images, achieved via differentiable rendering in an adversarial fashion, is proposed.
As shown next, learning from videos significantly improves 3D reconstruction quality. To this end, a novel ray-conditioned warp embedding is proposed that aggregates pixel-wise features from multiple source images.
Addressing the challenging task of disentangling shape and appearance, first a method that enables 3D texture synthesis independent of shape or resolution is presented. For this purpose, 3D noise fields of different scales are transformed into stationary textures. The method is able to produce 3D textures, despite only requiring 2D textures for training.
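The multi-scale noise idea can be sketched with classical procedural noise (a toy illustration of combining noise fields of different scales into a stationary texture; the thesis's method instead learns these transformations from 2D data):

```python
import math
import random

def value_noise_3d(x, y, z, seed=0):
    """Trilinearly interpolated value noise on an integer lattice:
    a deterministic, stationary scalar field defined everywhere in 3D."""
    def lattice(ix, iy, iz):
        # Deterministic pseudo-random value in [0, 1) per lattice point.
        return random.Random(hash((ix, iy, iz, seed))).random()
    x0, y0, z0 = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    val = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx)
                     * (fy if dy else 1 - fy)
                     * (fz if dz else 1 - fz))
                val += w * lattice(x0 + dx, y0 + dy, z0 + dz)
    return val

def texture(x, y, z, octaves=3):
    """Sum noise fields of different scales into one stationary texture value."""
    return sum(value_noise_3d(x * 2**o, y * 2**o, z * 2**o, seed=o) / 2**o
               for o in range(octaves))
```

Because the field is defined at every 3D point, it can be evaluated at any resolution and on any surface, which is the shape- and resolution-independence property described above.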
Lastly, the surface characteristics of textures under different illumination conditions are modeled in the form of material parameters. To this end, a self-supervised approach is proposed that has access not to material parameters but only to flash images. Similar to the previous method, random noise fields are reshaped into material parameters, which are conditioned to replicate the visual appearance of the input under matching lighting.
GPU-based Architecture Modeling and Instruction Set Extension for Signal Processing Applications
The modeling of embedded systems attempts to estimate performance and costs prior to implementation. Early-stage predictions of performance and power dissipation reduce the need for more costly late-stage design modifications. Workload modeling is an approach in which an abstract application is evaluated against an abstract architecture. The challenge in modeling is the balance between fidelity and simplicity, where fidelity refers to the correctness of the predictions and simplicity relates to the simulation time of the model and its ease of comprehension for the developer. A model named GSLA for performance and power modeling is presented, which extends existing architecture modeling by including GPUs as parallel processing elements. The performance model showed an average fidelity of 93% and the power model an average fidelity of 84% across several application measurements. The GSLA model is very simple: it requires only two parameters, which can be obtained by automated scripts.
Besides the modeling, this thesis addresses lower-level signal processing system improvements by proposing Instruction Set Architecture (ISA) extensions for RISC-V processors. A vehicle classifier neural network model was used as a case study, in which the benefit of Bit Manipulation Instructions (BMI) is shown. The result is a new PopCount instruction extension that is verified in the ETISS simulator. The PopCount extension of the RISC-V ISA yielded more than a twofold performance improvement for the vehicle classifier application. In addition, the design flow for adding a new instruction extension on a reconfigurable platform is presented.
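To illustrate the kind of operation a dedicated PopCount instruction accelerates (an illustrative sketch, not the thesis's RISC-V extension itself), here is a portable software population count next to the compiler builtin that typically maps to a single hardware instruction when one is available:

```c
#include <stdint.h>

/* Portable population count: clears the lowest set bit each iteration,
   so it loops once per set bit (Kernighan's method). */
int popcount_sw(uint32_t x) {
    int count = 0;
    while (x) {
        x &= x - 1;   /* clear lowest set bit */
        count++;
    }
    return count;
}

/* With hardware support (e.g. a dedicated popcount instruction),
   the compiler builtin lowers to a single instruction instead of a loop. */
int popcount_hw(uint32_t x) {
    return __builtin_popcount(x);
}
```

Binarized neural network inference, as in the vehicle classifier case study, reduces dot products to XNOR followed by popcount, so replacing the software loop with one instruction directly shortens the inner loop.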
The GPU modeling and the RISC-V ISA extension add new features to the state of the art: they improve modeling capability and reduce execution costs on signal processing platforms.
GPT-4 Technical Report
We report the development of GPT-4, a large-scale, multimodal model which can
accept image and text inputs and produce text outputs. While less capable than
humans in many real-world scenarios, GPT-4 exhibits human-level performance on
various professional and academic benchmarks, including passing a simulated bar
exam with a score around the top 10% of test takers. GPT-4 is a
Transformer-based model pre-trained to predict the next token in a document.
The post-training alignment process results in improved performance on measures
of factuality and adherence to desired behavior. A core component of this
project was developing infrastructure and optimization methods that behave
predictably across a wide range of scales. This allowed us to accurately
predict some aspects of GPT-4's performance based on models trained with no
more than 1/1,000th the compute of GPT-4.
Comment: 100 pages
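The predictable-scaling claim can be illustrated with a toy power-law fit (hypothetical data points and functional form, not OpenAI's actual methodology or numbers): fit loss = a * C^(-b) on small-compute runs, then extrapolate to a much larger budget:

```python
import math

# Hypothetical (compute, loss) pairs from small training runs.
runs = [(1e18, 3.10), (1e19, 2.60), (1e20, 2.18)]

# Fit log(loss) = log(a) - b * log(C) by ordinary least squares.
xs = [math.log(c) for c, _ in runs]
ys = [math.log(l) for _, l in runs]
n = len(runs)
mx, my = sum(xs) / n, sum(ys) / n
b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
log_a = my + b * mx

def predict_loss(compute: float) -> float:
    """Extrapolate the fitted power law to a larger compute budget."""
    return math.exp(log_a - b * math.log(compute))
```

The point of the abstract's claim is that fits like this one, made on runs a thousand times cheaper, matched the large model's measured behavior.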
Design, Implementation, and Automation of a Risk Management Approach for Man-at-the-End Software Protection
The last years have seen an increase in Man-at-the-End (MATE) attacks against
software applications, both in number and severity. However, software
protection, which aims at mitigating MATE attacks, is dominated by fuzzy
concepts and security-through-obscurity. This paper presents a rationale for
adopting and standardizing the protection of software as a risk management
process according to the NIST SP800-39 approach. We examine the relevant
constructs, models, and methods needed for formalizing and automating the
activities in this process in the context of MATE software protection. We
highlight the open issues that the research community still has to address. We
discuss the benefits that such an approach can bring to all stakeholders. In
addition, we present a Proof of Concept (PoC) decision support system that
instantiates many of the discussed constructs, models, and methods and automates
many activities in the risk analysis methodology for the protection of
software. Despite being a prototype, the PoC's validation with industry experts
indicated that several aspects of the proposed risk management process can
already be formalized and automated with our existing toolbox and that it can
actually assist decision-making in industrially relevant settings.
Comment: Preprint submitted to Computers & Security. arXiv admin note: substantial text overlap with arXiv:2011.0726
CP-BCS: Binary Code Summarization Guided by Control Flow Graph and Pseudo Code
Automatically generating function summaries for binaries is an extremely
valuable but challenging task, since it involves translating the execution
behavior and semantics of the low-level language (assembly code) into
human-readable natural language. However, most current work on understanding assembly code is oriented towards generating function names, which often contain numerous abbreviations and remain confusing. To bridge this gap, we focus on generating complete summaries for binary functions, especially for stripped binaries (which in practice lack symbol tables and debug information). To fully
exploit the semantics of assembly code, we present a control flow graph and
pseudo code guided binary code summarization framework called CP-BCS. CP-BCS
utilizes a bidirectional instruction-level control flow graph and pseudo code
that incorporates expert knowledge to learn the comprehensive binary function
execution behavior and logic semantics. We evaluate CP-BCS on 3 different
binary optimization levels (O1, O2, and O3) for 3 different computer
architectures (X86, X64, and ARM). The evaluation results demonstrate CP-BCS is
superior and significantly improves the efficiency of reverse engineering.
Comment: EMNLP 2023 Main Conference
Understanding Quantum Technologies 2022
Understanding Quantum Technologies 2022 is a creative-commons ebook that
provides a unique 360-degree overview of quantum technologies from science and
technology to geopolitical and societal issues. It covers quantum physics
history, quantum physics 101, gate-based quantum computing, quantum computing
engineering (including quantum error corrections and quantum computing
energetics), quantum computing hardware (all qubit types, including quantum
annealing and quantum simulation paradigms, history, science, research,
implementation and vendors), quantum enabling technologies (cryogenics, control
electronics, photonics, components fabs, raw materials), quantum computing
algorithms, software development tools and use cases, unconventional computing
(potential alternatives to quantum and classical computing), quantum
telecommunications and cryptography, quantum sensing, quantum technologies
around the world, quantum technologies societal impact and even quantum fake
sciences. The main audience comprises computer science engineers, developers and IT
specialists as well as quantum scientists and students who want to acquire a
global view of how quantum technologies work, and particularly quantum
computing. This version is an extensive update to the 2021 edition published in
October 2021.
Comment: 1132 pages, 920 figures, Letter format
Quantum Simulation for High Energy Physics
This is the first time that Quantum Simulation for High Energy Physics (HEP) has been studied in the U.S. decadal particle-physics community planning; in fact, until recently it was not considered a mainstream topic in the community. This speaks to the remarkable rate of growth of this subfield
over the past few years, stimulated by the impressive advancements in Quantum
Information Sciences (QIS) and associated technologies over the past decade,
and the significant investment in this area by the government and private
sectors in the U.S. and other countries. High-energy physicists have quickly
identified problems of importance to our understanding of nature at the most
fundamental level, from tiniest distances to cosmological extents, that are
intractable with classical computers but may benefit from quantum advantage.
They have initiated, and continue to carry out, a vigorous program in theory,
algorithm, and hardware co-design for simulations of relevance to the HEP
mission. This community whitepaper is an attempt to bring this exciting yet challenging area of research into the spotlight and to elaborate on what the
promises, requirements, challenges, and potential solutions are over the next
decade and beyond.
Comment: This is a whitepaper prepared for the topical groups CompF6 (Quantum computing), TF05 (Lattice Gauge Theory), and TF10 (Quantum Information Science) within the Computational Frontier and Theory Frontier of the U.S. Community Study on the Future of Particle Physics (Snowmass 2021). 103 pages and 1 figure