
    Applications of Deep Learning Models in Financial Forecasting

    In financial markets, deep learning techniques have sparked a revolution, reshaping conventional approaches and amplifying predictive capabilities. This thesis explored the applications of deep learning models to unravel insights and methodologies aimed at advancing financial forecasting. The crux of the research problem lies in applying predictive models within financial domains characterised by high volatility and uncertainty. The thesis investigated advanced deep learning methodologies in the context of financial forecasting, addressing the challenges posed by the dynamic nature of financial markets. These challenges were tackled by exploring a range of techniques, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), autoencoders (AEs), and variational autoencoders (VAEs), along with approaches such as encoding financial time series into images. Together with transfer learning and generative modelling, these methodologies offered a comprehensive toolkit for extracting meaningful insights from financial data. The present work investigated the practicality of a deep learning CNN-LSTM model within the Directional Change (DC) framework to predict significant DC events, a task crucial for timely decision-making in financial markets. Furthermore, the potential of autoencoders and variational autoencoders to enhance financial forecasting accuracy and remove noise from financial time series data was explored. Leveraging their capacity within financial time series, these models offered promising avenues for improved data representation and subsequent forecasting. To further contribute to financial prediction capabilities, a deep multi-model was developed that harnessed the power of pre-trained computer vision models.
This innovative approach aimed to predict the VVIX, exploiting the cross-disciplinary synergy between computer vision and financial forecasting. By integrating knowledge from these domains, novel insights into the prediction of market volatility were provided.
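The image-encoding idea mentioned above can be illustrated with one common encoding from the time-series literature, the Gramian Angular Summation Field (GASF). This is a minimal sketch of the general technique, not necessarily the exact encoding used in the thesis:

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a 1-D time series as a Gramian Angular Summation Field image.

    Each value is rescaled to [-1, 1], mapped to an angle via arccos,
    and pixel (i, j) of the image is cos(phi_i + phi_j).
    """
    s = np.asarray(series, dtype=float)
    # Min-max rescale to [-1, 1] so arccos is well defined
    s = 2 * (s - s.min()) / (s.max() - s.min()) - 1
    phi = np.arccos(np.clip(s, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

# Illustrative price series (made-up values)
prices = [100.0, 101.5, 99.8, 102.3, 103.1, 101.0]
img = gramian_angular_field(prices)
print(img.shape)  # (6, 6) image, ready to feed to a CNN
```

The resulting symmetric image preserves temporal correlations in its pixel structure, which is what lets 2-D convolutional models (including pre-trained computer vision networks) be applied to 1-D financial series.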

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Guided rewriting and constraint satisfaction for parallel GPU code generation

    Graphics Processing Units (GPUs) are notoriously hard to optimise for manually due to their scheduling and memory hierarchies. What is needed are good automatic code generators and optimisers for such parallel hardware. Functional approaches such as Accelerate, Futhark and LIFT leverage a high-level algorithmic Intermediate Representation (IR) to expose parallelism and abstract the implementation details away from the user. However, producing efficient code for a given accelerator remains challenging. Existing code generators depend either on user input to choose a subset of hard-coded optimisations, or on automated exploration of the implementation search space. The former suffers from a lack of extensibility, while the latter is too costly due to the size of the search space. A hybrid approach is needed, where a space of valid implementations is built automatically and explored with the aid of human expertise. This thesis presents a solution combining user-guided rewriting and automatically generated constraints to produce high-performance code. The first contribution is an automatic tuning technique to find a balance between performance and memory consumption. Leveraging its functional patterns, the LIFT compiler is empowered to infer tuning constraints and limit the search to valid tuning combinations only. Next, the thesis reframes parallelisation as a constraint satisfaction problem. Parallelisation constraints are extracted automatically from the input expression, and a solver is used to identify valid rewritings. The constraints truncate the search space to valid parallel mappings only by capturing the scheduling restrictions of the GPU in the context of a given program. A synchronisation barrier insertion technique is proposed to prevent data races and improve the efficiency of the generated parallel mappings.
The final contribution of this thesis is the guided rewriting method, where the user encodes a design space of structural transformations using high-level IR nodes called rewrite points. These strongly typed pragmas express macro rewrites and expose design choices as explorable parameters. The thesis proposes a small set of reusable rewrite points to achieve tiling, cache locality, data reuse and memory optimisation. A comparison with the vendor-provided handwritten kernels of the ARM Compute Library and the TVM code generator demonstrates the effectiveness of this thesis' contributions. With convolution as a use case, LIFT-generated direct and GEMM-based convolution implementations are shown to perform on par with the state-of-the-art solutions on a mobile GPU. Overall, this thesis demonstrates that a functional IR lends itself well to user-guided and automatic rewriting for high-performance code generation.
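The idea of reframing parallelisation as constraint satisfaction can be sketched with a toy example: enumerating assignments of parallel loops to GPU thread dimensions under simplified work-group limits. The loop sizes, limits, and brute-force solver below are illustrative placeholders, not LIFT's actual constraint system:

```python
from itertools import permutations

# Hypothetical loop nest: (name, trip count) pairs to be mapped in parallel.
loops = [("i", 16), ("j", 8), ("k", 4)]

# Simplified GPU scheduling restrictions (illustrative numbers, not a real device):
MAX_DIM = {"x": 1024, "y": 1024, "z": 64}  # per-dimension work-group limits
MAX_THREADS = 1024                          # total threads per work-group

def valid_mappings(loops):
    """Enumerate loop-to-thread-dimension assignments satisfying the
    constraints -- i.e. the search space truncated to valid parallel mappings."""
    total = 1
    for _, n in loops:
        total *= n
    if total > MAX_THREADS:
        return []  # a real compiler would instead rewrite the program, e.g. by tiling
    out = []
    for dims in permutations("xyz", len(loops)):
        # Constraint: each loop's trip count fits its assigned dimension's limit
        if all(n <= MAX_DIM[d] for (_, n), d in zip(loops, dims)):
            out.append({name: d for (name, _), d in zip(loops, dims)})
    return out

mappings = valid_mappings(loops)
print(len(mappings))  # 6: all permutations are valid for these small trip counts
```

Instead of exploring every implementation, a solver working over such constraints only ever visits mappings that respect the device's scheduling restrictions, which is what makes the hybrid human-guided search tractable.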

    New perspectives on A.I. in sentencing: Human decision-making between risk assessment tools and the protection of human rights

    The aim of this thesis is to investigate a field that until a few years ago was foreign to and distant from the penal system. The purpose of this undertaking is to account for the role that technology could play in the Italian criminal law system. More specifically, this thesis attempts to scrutinize a very intricate phase of adjudication: after deciding on an individual's liability, a judge must decide on the severity of the penalty. This type of decision implies a prognostic assessment that looks to the future. It is precisely in this field of prognostic assessment that, as has already happened in the United States, risk assessment instruments and processes are inserted into the pre-trial but also the decision-making phase. In this contribution, we attempt to describe the current state of this field, trying, as a matter of method, to select the most relevant or most used tools. Using comparative and qualitative methods, the uses of some of these instruments in the supranational legal system are analyzed. Focusing attention on the Italian system, an attempt was made to investigate the nature of an individual's ‘social dangerousness’ (pericolosità sociale) and capacity to commit offences, types of assessment that are fundamental in the Italian system because they are part of various types of decisions, including the choice of the best sanctioning treatment. Attention was turned to this latter field because it is believed that the judge does not always have the time, the means and the ability to assess all the elements of a subject and identify the best 'individualizing' treatment in order to fully realize the function of Article 27, paragraph 3 of the Italian Constitution.

    Transfer Learning of Deep Learning Models for Cloud Masking in Optical Satellite Images

    Remote sensing sensors onboard Earth observation satellites provide a great opportunity to monitor our planet at high spatial and temporal resolutions. Nevertheless, to process this ever-growing amount of data, we need to develop fast and accurate models adapted to the specific characteristics of the data acquired by each sensor. For optical sensors, detecting the clouds present in the image is an unavoidable first step for most land and ocean applications. Although detecting bright and opaque clouds is relatively easy, automatically identifying thin semi-transparent clouds or distinguishing clouds from snow or bright surfaces is much more challenging. In addition, in the current scenario where the number of sensors in orbit is constantly growing, developing methodologies to transfer models across different satellite data is a pressing need. Hence, the overarching goal of this thesis is to develop accurate cloud detection models that exploit the different properties of satellite images, and to develop methodologies to transfer those models across different sensors. The four contributions of this thesis are stepping stones in that direction. In the first contribution, "Multitemporal cloud masking in the Google Earth Engine", we implemented a lightweight multitemporal cloud detection model that runs on the Google Earth Engine platform and outperforms the operational models for Landsat-8. The second contribution, "Transferring deep learning models for Cloud Detection between Landsat-8 and Proba-V", is a case study of transferring a deep learning based cloud detection algorithm from Landsat-8 (30m resolution, 12 spectral bands and very good radiometric quality) to Proba-V, which has a lower 333m resolution, only four bands and a less accurate radiometric quality. The third paper, "Cross sensor adversarial domain adaptation of Landsat-8 and Proba-V images for cloud detection", proposes a learning-based domain adaptation transformation of Proba-V images to resemble those taken by Landsat-8, with the objective of transferring products designed on Landsat-8 to Proba-V. Finally, the fourth contribution, "Towards global flood mapping onboard low cost satellites with machine learning", tackles cloud and flood water detection simultaneously with a single deep learning model, implemented to run onboard a CubeSat (ϕSat-I) with an AI accelerator chip. The model is trained on Sentinel-2 images and transferred to the ϕSat-I camera. It was launched in June 2021 onboard the Wild Ride D-Orbit mission to test its performance in space.
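One simple starting point for such a cross-sensor transfer is to reuse a network trained on 12-band Landsat-8 input by re-initialising only its first convolutional layer for a 4-band sensor, averaging the pretrained kernels of spectrally related bands so the rest of the network transfers unchanged. The band grouping and weight shapes below are illustrative assumptions, not the thesis's actual procedure:

```python
import numpy as np

# Pretrained first-layer conv weights, shape (out_channels, in_bands, k, k):
# 12 Landsat-8-like input bands; we want a model accepting 4 Proba-V-like bands.
rng = np.random.default_rng(0)
w_l8 = rng.normal(size=(32, 12, 3, 3))

# Hypothetical grouping of the 12 source bands into 4 target bands
# (indices are illustrative, not the real Landsat-8/Proba-V correspondence).
band_groups = [[1, 2], [3], [4, 5], [6, 7]]

def adapt_first_layer(weights, groups):
    """Build a 4-band first layer by averaging the pretrained kernels of each
    group of spectrally related source bands."""
    return np.stack(
        [weights[:, g, :, :].mean(axis=1) for g in groups], axis=1
    )

w_pv = adapt_first_layer(w_l8, band_groups)
print(w_pv.shape)  # (32, 4, 3, 3)
```

After this adaptation the network accepts 4-band input and can be fine-tuned on a small amount of target-sensor data, which is far cheaper than retraining from scratch.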

    Security and Privacy of Resource Constrained Devices

    The thesis aims to present a comprehensive and holistic overview of cybersecurity and privacy & data protection aspects related to IoT resource-constrained devices. Chapter 1 introduces the current technical landscape by providing a working definition and architecture taxonomy of ‘Internet of Things’ and ‘resource-constrained devices’, coupled with a threat landscape where each specific attack is linked to a layer of the taxonomy. Chapter 2 lays down the theoretical foundations for an interdisciplinary approach and a unified, holistic vision of cybersecurity, safety and privacy justified by the ‘IoT revolution’ through the so-called infraethical perspective. Chapter 3 investigates whether and to what extent the fast-evolving European cybersecurity regulatory framework addresses the security challenges brought about by the IoT by allocating legal responsibilities to the right parties. Chapters 4 and 5 focus, on the other hand, on ‘privacy’, understood by proxy to include EU data protection. In particular, Chapter 4 addresses three legal challenges brought about by ubiquitous IoT data and metadata processing to the EU privacy and data protection legal frameworks, i.e., the ePrivacy Directive and the GDPR. Chapter 5 casts light on the risk management tool enshrined in EU data protection law, that is, the Data Protection Impact Assessment (DPIA), and proposes an original DPIA methodology for connected devices, building on the CNIL (French data protection authority) model.

    Cornwall's Border: Celtic Frontier or Anglicised Territory?

    Cornwall has had a long history of difference compared to the experience of other English counties. As landscape and identity have interwoven, the river Tamar has represented a clear divide between Cornwall and the rest of the United Kingdom, undoubtedly an important facet of the Cornish identity. Whilst it has functioned as a historic and symbolic break in the landscape, the ‘borderlands’ of the Tamar have begun to emerge in the civic society of the South-West in their own right, as the experience of living close to the border has changed and opportunities for investment, protection and prosperity have emerged. This thesis therefore seeks to explore the impact of the bordering and re-bordering process on Cornwall and, more specifically, East Cornwall. Through this thesis we can explore the sub-national border, an area of border studies that is far less developed. Reflecting on residents' daily interactions with neighbouring Plymouth and Devon, built on historic connections, we see how life in East Cornwall differs from the rest of the county. An interdisciplinary approach considering the political, cultural, and socio-economic history of these communities, particularly focused on post-19th-century life but also drawing on precedents from earlier examples, shows how divergence has grown across parts of the borders. Local communities on both banks of the River Tamar have struggled to make their voices heard, some advocating for, others challenging, the construction and re-organisation of cultural and political borders. Cornish studies has traditionally focused on Cornwall as a whole, defining its distinctive sense of place and identity as a Celtic nation and a constitutional part of the Celtic fringe in the context of the British State.
This thesis, building on the growing body of more micro-historical, localised histories within Cornwall, seeks to challenge the orthodox narrative that has found West Cornwall, the subject of most of these intra-Cornwall studies, to be ‘more Cornish’. Unearthing new narratives about the ‘forgotten corner’ of Cornwall, amongst other parts of East Cornwall, not only disputes the homogeneity of Cornwall and Cornish identity but also brings to light the shared heritage amongst these more rural communities. Through border studies, we can explore how competitive territory, overlapping jurisdictions and the implications of social mobility have changed over time and, in doing so, reshaped perceptions of the border. The field also recognizes that border politics will continue to be reshaped and, in doing so, alter the relationships and territories they define. Looking towards Cornwall’s future, this thesis reflects on how it is evolving amidst a backdrop of devolution, de-centralization, and threats to the British constitution. This has implications for Cornish identity, which may be multiple identities, in a more globalized world changing rapidly for those living near borders. Awarded degree of Master of Philosophy (MPhil).

    A Process for the Restoration of Performances from Musical Errors on Live Progressive Rock Albums

    In the course of my practice of producing live progressive rock albums, a significant challenge has emerged: how to repair performance errors while retaining the intended expressive performance. Using a practice-as-research methodology, I develop a novel process, Error Analysis and Performance Restoration (EAPR), to restore a performer’s intention where an error was assessed to have been made. In developing this process, within the context of my practice, I investigate: the nature of live albums and the groups to which I am accountable, a definition of performance errors, an examination of their causes, and the existing literature on these topics. In presenting EAPR, I demonstrate, drawing from existing research, a mechanism by which originally intended performances can be extracted from recorded errors. The EAPR process exists as a conceptual model; each album has a specific implementation to address the needs of that album and the currently available technology. Restoration techniques are developed as part of this implementation. EAPR is developed and demonstrated through my work restoring performances on a front-line commercial live release, the Creative Submission Album. The specific EAPR implementation I design for it is laid out, and detailed examples of its techniques are demonstrated.