
    Alʔilbīrī’s Book of the rational conclusions. Introduction, Critical Edition of the Arabic Text and Materials for the History of the Ḫawāṣṣic Genre in Early Andalus

    Full text link
    [eng] The Book of the rational conclusions, written perhaps at some point in the 10th century by a physician from Ilbīrah (Andalus), is a multi-section medical pandect. The author brings together, from a variety of sources, materials dealing with drug-handling, natural philosophy, therapeutics, the medical applications of the specific properties of things, a regimen, and a dispensatory. This dissertation comprises three parts. First, the transmission of the text, its contents, and its possible context are discussed. Then a critical edition of the Arabic text is offered. Last, but certainly not least, the subject of the specific properties is approached from several points of view. The analysis of Section III of the original book leads to an exploration of the early Andalusī assimilation of this epistemic tradition and to the establishment of a well-defined textual family in which our text must be inscribed. On the other hand, the very concept of ‘specific property’ is often misconstrued and usually treated as synonymous with magic and superstition. Upon closer inspection, however, the alleged irrationality of the knowledge of these properties appears to be largely the result of anachronistic interpretation. As a complement to this research and as an illustration of the genre, a sample from an ongoing integral commentary on this section of the book is presented.
    [cat] The Book of the rational conclusions, by an unknown physician from Ilbīrah (Andalus), was probably compiled during the second half of the 10th century. It is a rudimentary yet remarkably complete kunnāš (an epistemic genre often described as a ‘medical encyclopaedia’) in which the author gathers materials borrowed (often literally and without attribution) from several genres. The book opens with a section on apothecary practice (a manual for apothecaries of sorts) but then focuses on the different branches of medicine. Following some philosophical prolegomena, the author copies, with minimal linguistic adaptation, an entire treatise on therapeutics, then another on the medical applications of the specific properties of things, a series of fragments related to dietetics (a regimen, in traditional terms) and, finally, a collection of medical recipes. Each of these sections shows clear intertextual links that point to an intense effort to synthesise the various traditions allied to medicine in caliphal Andalus. The text is, in fact, a superb object on which to apply the methodology of textual and source criticism. The critical edition incorporates the chronological dimension into the apparatus, which thus becomes a contextualising element. As for the study of the sources, while it remains secondary throughout the first part of this dissertation, it takes on an almost absolute prominence in the third part, especially in the chapter devoted to the individual analysis of each passage collected in the section on the specific properties of things.

    Advances in machine learning algorithms for financial risk management

    Get PDF
    In this thesis, three novel machine learning techniques are introduced to address distinct yet interrelated challenges in financial risk management. Together they offer a comprehensive strategy, beginning with the precise classification of credit risks, advancing through the nuanced forecasting of financial asset volatility, and ending with the strategic optimisation of financial asset portfolios.

    Firstly, a Hybrid Dual-Resampling and Cost-Sensitive technique is proposed to combat the prevalent issue of class imbalance in financial datasets, particularly in credit risk assessment. The key step is the creation of heuristically balanced datasets. A resampling technique based on Gaussian mixture modelling generates a synthetic minority class from the minority-class data, while k-means clustering is applied to the majority class. Feature selection is then performed using the Extra Tree Ensemble technique, and a cost-sensitive logistic regression model is applied to predict the probability of default on the heuristically balanced datasets. The results underscore the effectiveness of the proposed technique, which outperforms other imbalanced-preprocessing approaches. This advancement in credit risk classification lays a solid foundation for understanding individual financial behaviours, a crucial first step in the broader context of financial risk management.

    Building on this foundation, the thesis then explores the forecasting of financial asset volatility, a critical aspect of understanding market dynamics. A novel model is proposed that combines a Triple Discriminator Generative Adversarial Network with a continuous wavelet transform. The model can decompose a volatility time series into signal-like and noise-like frequency components, allowing the separate detection and monitoring of non-stationary volatility data. The network comprises a wavelet transform component (continuous and inverse wavelet transforms), an auto-encoder component made up of encoder and decoder networks, and a Generative Adversarial Network consisting of triple Discriminator and Generator networks. During training, the model employs an ensemble of losses: an unsupervised loss derived from the Generative Adversarial Network component, a supervised loss, and a reconstruction loss. Data from nine financial assets are employed to demonstrate the effectiveness of the proposed model. This approach not only enhances our understanding of market fluctuations but also bridges the gap between individual credit risk assessment and macro-level market analysis.

    Finally, the thesis proposes a novel technique for portfolio optimisation: a model-free reinforcement learning strategy that takes historical Low, High, and Close prices of assets as input and outputs asset weights. A deep Capsule Network simulates the investment strategy, reallocating the different assets to maximise the expected return on investment through deep reinforcement learning. To provide more stability in an online training process, a Markov Differential Sharpe Ratio reward function is proposed as the reinforcement learning objective. Additionally, a Multi-Memory Weight Reservoir is introduced to facilitate the learning and optimisation of the computed asset weights, helping to sequentially re-balance the portfolio throughout a specified trading period. Feeding the insights gained from volatility forecasting into this strategy reflects the interconnected nature of the financial markets. Comparative experiments with other models demonstrate that the proposed technique achieves superior results on risk-adjusted reward performance measures.

    In a nutshell, this thesis not only addresses individual challenges in financial risk management but also integrates them into a comprehensive framework: from enhancing the accuracy of credit risk classification, through improving the forecasting and understanding of market volatility, to optimising investment strategies. Collectively, these methodologies show the potential of machine learning to improve financial risk management.
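    The credit-risk pipeline described in the abstract lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of that idea with scikit-learn: a Gaussian mixture model oversamples the minority class, k-means condenses the majority class, extra-trees ranks features, and a class-weighted logistic regression predicts default. The 1:1 balancing target, the number of mixture components, and all function names are assumptions for illustration, not the thesis's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

def heuristically_balance(X_min, X_maj, seed=0):
    """Build a heuristically balanced dataset: a GMM fitted on the
    minority class draws synthetic minority samples, and k-means
    condenses the majority class to the same number of centroids.
    The 1:1 target ratio and 3 mixture components are assumptions."""
    n_target = 2 * len(X_min)  # minority class doubled via synthesis
    gmm = GaussianMixture(n_components=3, random_state=seed).fit(X_min)
    X_syn, _ = gmm.sample(n_target - len(X_min))
    km = KMeans(n_clusters=n_target, n_init=10, random_state=seed).fit(X_maj)
    X_bal = np.vstack([X_min, X_syn, km.cluster_centers_])
    y_bal = np.array([1] * n_target + [0] * n_target)
    return X_bal, y_bal

def fit_default_model(X_bal, y_bal, top_k=10, seed=0):
    """Extra-trees feature ranking followed by a cost-sensitive
    (class-weighted) logistic regression for default probability."""
    et = ExtraTreesClassifier(n_estimators=200, random_state=seed)
    et.fit(X_bal, y_bal)
    keep = np.argsort(et.feature_importances_)[::-1][:top_k]
    clf = LogisticRegression(class_weight="balanced", max_iter=1000)
    clf.fit(X_bal[:, keep], y_bal)
    return clf, keep
```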
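    For the reinforcement learning objective, a differential Sharpe ratio rewards each step by its marginal contribution to the risk-adjusted return. Below is a minimal sketch of the classic differential Sharpe ratio of Moody and Saffell, on which such a reward can be built; the Markov variant proposed in the thesis may differ in detail.

```python
class DifferentialSharpeReward:
    """Incremental (differential) Sharpe ratio reward, after Moody &
    Saffell. eta is the decay rate of the exponential moving estimates
    of the first and second moments of returns."""

    def __init__(self, eta=0.01):
        self.eta = eta
        self.A = 0.0  # EMA of returns
        self.B = 0.0  # EMA of squared returns

    def step(self, r):
        """Return the differential Sharpe ratio for the latest portfolio
        return r, then update the moment estimates."""
        dA = r - self.A
        dB = r * r - self.B
        denom = (self.B - self.A ** 2) ** 1.5
        reward = 0.0 if denom <= 0 else (self.B * dA - 0.5 * self.A * dB) / denom
        self.A += self.eta * dA
        self.B += self.eta * dB
        return reward
```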

    Multidisciplinary perspectives on Artificial Intelligence and the law

    Get PDF
    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics; and although AI was initially allowed to develop largely without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book therefore brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    A Critical Review Of Post-Secondary Education Writing During A 21st Century Education Revolution

    Get PDF
    Educational materials are effective instruments for reporting new discoveries uncovered by researchers in specific areas of academia. Higher education, like other educational institutions, relies on instructional materials to inform its practice of educating adult learners. In post-secondary education, developmental English programs are tasked with meeting the needs of dynamic populations, so there is a continuing need for research in this area to support its changing landscape. However, the majority of scholarly thought in this area centers on K-12 reading and writing, leaving the post-secondary community underserved. This study uses a qualitative content analysis to examine peer-reviewed journals from 2003-2017, developmental education websites, and a government-issued document directed toward reforming post-secondary developmental education programs. These highly relevant sources help educators find informational support for applying best practices for student success. Developmental education serves the purpose of addressing literacy gaps for students transitioning to college-level work. The findings illuminate the dearth of material offered to developmental educators. This study suggests that the field of literacy research is fragmented and highlights an apparent blind spot in the scholarly literature with regard to English writing instruction. This poses a quandary for post-secondary literacy researchers in the 21st century and establishes the necessity for the literacy research community to commit future scholarship toward equipping college educators who teach writing to underprepared adult learners.

    Further Improvements in Decoding Performance for 5G LDPC Codes Based on Modified Check-Node Unit

    Get PDF
    One of the most important units of a Low-Density Parity-Check (LDPC) decoder is the Check-Node Unit. Its main task is to find the first two minimum values among the incoming variable-to-check messages and return the check-to-variable messages. This block significantly affects both the decoding performance and the hardware implementation complexity. In this paper, we first propose a modification to the check-node update rule that introduces two optimal offset factors applied to the check-to-variable messages. We then present the Check-Node Unit hardware architecture that performs the proposed algorithm. The main objective of this work is to further improve the decoding performance of 5th Generation (5G) LDPC codes. Simulation results show that the proposed algorithm achieves substantial improvements in error-correction performance: no error floor appears down to a Bit-Error-Rate (BER) of 10^(-8), and the decoding gain reaches up to 0.21 dB over the baseline Normalized Min-Sum decoder as well as several state-of-the-art Min-Sum-based LDPC decoders.
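    As a rough illustration of the kind of check-node rule discussed here, the sketch below implements a two-offset variant of the offset Min-Sum update: the first and second minima of the incoming message magnitudes each receive their own offset before the extrinsic messages are formed. The offset values and the mapping of offsets to minima are assumptions for illustration; the paper's optimal factors and hardware architecture are not reproduced.

```python
import numpy as np

def check_node_update(v2c, beta1=0.3, beta2=0.5):
    """Two-offset Min-Sum check-node update (illustrative).

    v2c   : incoming variable-to-check messages (LLRs) at one check node
    beta1 : offset applied when the outgoing magnitude is the 1st minimum
    beta2 : offset applied when it is the 2nd minimum (edge at the argmin)
    Returns the outgoing check-to-variable messages."""
    v2c = np.asarray(v2c, dtype=float)
    mags = np.abs(v2c)
    signs = np.where(v2c < 0, -1.0, 1.0)
    total_sign = np.prod(signs)

    order = np.argsort(mags)
    i_min = order[0]
    min1, min2 = mags[order[0]], mags[order[1]]

    c2v = np.empty_like(v2c)
    for j in range(len(v2c)):
        # the extrinsic minimum excludes edge j: use min2 at the argmin
        mag, beta = (min2, beta2) if j == i_min else (min1, beta1)
        # extrinsic sign: product of all incoming signs except edge j
        c2v[j] = total_sign * signs[j] * max(mag - beta, 0.0)
    return c2v
```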

    Information Encoding for Flow Watermarking and Binding Keys to Biometric Data

    Get PDF
    Given the current level of telecommunications development, fifth-generation (5G) communication systems are expected to provide higher data rates, lower latency, and improved scalability. To ensure the security and reliability of data traffic generated by wireless sources, 5G networks must be designed to support security protocols and reliable communication applications. This paper describes the coding and processing of information during the transmission of both binary and non-binary data over nonstandard communication channels. A subclass of linear binary codes, the Varshamov-Tenengolts codes, is considered for channels with insertions and deletions of symbols. Their use is compared with Hidden Markov Model (HMM)-based systems for detecting network intrusions using flow watermarking; both provide a high true-positive rate. The principles of using Bose-Chaudhuri-Hocquenghem (BCH) codes, non-binary Reed-Solomon codes, and turbo codes, as well as concatenated code structures, to ensure noise immunity when reproducing information in Helper-Data Systems are considered. Examples are given of biometric systems built on these codes, operating on the basis of the Fuzzy Commitment Scheme (FCS) and providing FRR < 1% for authentication.
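    A Varshamov-Tenengolts code VT_a(n) is the set of binary words x of length n whose position-weighted sum satisfies sum_i i*x_i ≡ a (mod n+1); a single insertion or deletion can be corrected from this syndrome. The snippet below is only a minimal membership check, not a full insertion/deletion decoder.

```python
def vt_syndrome(bits):
    """Syndrome of a binary word for Varshamov-Tenengolts codes:
    the sum of i * x_i (positions 1-indexed), taken mod (n + 1)."""
    n = len(bits)
    return sum(i * b for i, b in enumerate(bits, start=1)) % (n + 1)

def in_vt_code(bits, a):
    """True if the word belongs to VT_a(n), the code correcting a
    single insertion or deletion of a symbol."""
    return vt_syndrome(bits) == a
```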
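    The Fuzzy Commitment Scheme mentioned above binds a key to noisy biometric data: a random codeword masks the biometric template, and the error-correcting code absorbs the measurement noise when the commitment is opened. The toy sketch below substitutes a 3-fold repetition code for the BCH or Reed-Solomon codes discussed in the paper, purely to keep the decoding step short; all names and parameters are illustrative assumptions.

```python
import hashlib
import secrets

def rep_encode(bits, r=3):
    """r-fold repetition encoding of a bit list."""
    return [b for b in bits for _ in range(r)]

def rep_decode(bits, r=3):
    """Majority-vote decoding of an r-fold repetition code."""
    return [1 if sum(bits[i * r:(i + 1) * r]) * 2 > r else 0
            for i in range(len(bits) // r)]

def commit(template, r=3):
    """Bind a fresh random key to a biometric template (a bit list whose
    length is a multiple of r). Only the key's hash and the XOR offset
    are stored, never the template itself."""
    key = [secrets.randbelow(2) for _ in range(len(template) // r)]
    offset = [w ^ c for w, c in zip(template, rep_encode(key, r))]
    return hashlib.sha256(bytes(key)).hexdigest(), offset

def open_commitment(noisy_template, digest, offset, r=3):
    """Recover the key from a noisy re-measurement; the repetition code
    absorbs up to floor(r/2) bit errors per key bit."""
    key = rep_decode([w ^ o for w, o in zip(noisy_template, offset)], r)
    return key if hashlib.sha256(bytes(key)).hexdigest() == digest else None
```

    In a real Helper-Data System the repetition code would be replaced by the BCH, Reed-Solomon, or concatenated constructions the paper analyses, which trade code rate against the biometric error rate to reach the reported FRR.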