
    Malcontented agents : from the novellas to Much Ado about Nothing and The Duchess of Malfi

    Shakespeare’s Much Ado about Nothing (c. 1598) and Webster’s The Duchess of Malfi (c. 1613) are two plays in which Matteo Bandello’s portrayal of evil agents in his novellas exerts a constant, even if not immediately obvious, influence. Remote from each other chronologically and generically, Shakespeare’s comedy and Webster’s tragedy make common use of a distinctive character-type, which has an equivalent in the Bandello source: the melancholy, embittered, and vindictive outsider known at the time, as well as by modern critics, as the malcontent (Nigri, The Origin of Malcontent). Comparing how and to what purpose each dramatist duplicated, altered, or expanded the figures he found in the source story provides insight into his way of working and informs our understanding of the plays.

    Quality data assessment and improvement in pre-processing pipeline to minimize impact of spurious signals in functional magnetic resonance imaging (fMRI)

    In recent years, the field of data quality assessment and signal denoising in functional magnetic resonance imaging (fMRI) has been evolving rapidly, and the identification and reduction of spurious signals in the pre-processing pipeline is one of the most discussed topics. In particular, subject motion and physiological signals, such as respiratory and/or cardiac pulsatility, have been shown to introduce false-positive activations in subsequent statistical analyses. Different measures for evaluating the impact of motion-related artefacts, such as frame-wise displacement and the root mean square of movement parameters, have been introduced, along with approaches for reducing these artefacts, such as linear regression of nuisance signals and scrubbing or censoring procedures. However, we identify two main drawbacks: i) the measures used to evaluate motion artefacts rely on user-dependent thresholds, and ii) each study describes and applies its own pre-processing pipeline. Few studies have analysed the effect of these different pipelines on subsequent analysis methods in task-based fMRI. The first aim of this study is to obtain a tool for motion assessment of fMRI data, based on auto-calibrated procedures, that detects outlier subjects and outlier volumes, targeted at each investigated sample to ensure homogeneity of the data with respect to motion. The second aim is to compare the impact of different pre-processing pipelines on task-based fMRI using the GLM, building on recent advances in resting-state fMRI pre-processing pipelines. Different output measures based on signal variability and task strength were used for the assessment.
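The frame-wise displacement measure mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not the study's actual tool: the six-column motion-parameter layout (three translations in mm, three rotations in radians), the 50 mm head radius, and the median-plus-MAD outlier rule are assumptions standing in for the auto-calibrated procedure the study develops.

```python
import numpy as np

def framewise_displacement(motion, radius=50.0):
    """Power-style frame-wise displacement from 6 rigid-body motion
    parameters per volume. Rotational displacements are converted to
    arc length on a sphere of `radius` mm before summing."""
    motion = np.asarray(motion, dtype=float)
    # absolute backward differences between consecutive volumes
    diffs = np.abs(np.diff(motion, axis=0))
    # convert rotation columns (3:6) from radians to mm of arc length
    diffs[:, 3:6] *= radius
    fd = diffs.sum(axis=1)
    # the first volume has no predecessor: FD is conventionally 0
    return np.concatenate([[0.0], fd])

def outlier_volumes(fd):
    """Flag volumes with a data-driven threshold (median + 3 scaled MADs),
    an illustrative stand-in for a user-independent, sample-targeted rule."""
    mad = np.median(np.abs(fd - np.median(fd)))
    thresh = np.median(fd) + 3 * 1.4826 * mad
    return np.where(fd > thresh)[0]
```

A single 1 mm translation between two volumes yields an FD of 1 mm on the two affected frame transitions, which the threshold rule then flags.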

    Sceptical responses in early modern plays : from self-knowledge to self-doubt in Marston’s The Malcontent and Middleton’s The Revenger’s Tragedy

    Defined for the first time by Sir Thomas Elyot as a «secte of Phylosophers, whiche affirmed nothynge» (1538), the term ‘scepticism’ appears in all its variants only too rarely in the drama of the period. The Chadwyck-Healey databases (Early English Books Online and Literature Online) record only four occurrences (Tomkis 1607; Jonson 1640; Cartwright 1651; Massinger 1655) in the span of time running from 1550 to 1655, although scepticism as a way of participating in, and responding to, life registers on the London stages as an increasingly popular critical attitude. Vindice’s «I’m in doubt whether I’m myself or no» is evidence of that suspension of judgment which the sceptics envisaged as the only viable answer in a world governed by the relativism of human knowledge. Against a theoretical and philosophical background that investigates the relationship between self-knowledge and scepticism, the article looks at how this early modern revival of scepticism – so profoundly influenced by the translation of Montaigne’s essays – can couple with, and go beyond, an emergent awareness of inwardness, such as the one hinted at in Marston’s The Malcontent and Middleton’s The Revenger’s Tragedy. In particular, the essay examines the interplay between the revengers’ responses to the adoption of different masks and the dictum of a philosophy which demands the deferment of any epistemological verdict. A discussion of the rhetorical strategies which best testify to the contradictions at the heart of this ontological impasse then follows.

    High Level Synthesis of Neural Network Chips

    This thesis investigates the development of a silicon compiler dedicated to generating Application-Specific Neural Network Chips (ASNNCs) from a high-level C-based behavioural specification language. The aim is to fully integrate the silicon compiler with the ESPRIT II Pygmalion neural programming environment. The integration of these two tools permits the translation of a neural network application specified in nC, Pygmalion's C-based neural programming language, into either binary (for simulation) or silicon (for execution in hardware). Several applications benefit from this approach, in particular those that require real-time execution, for which a true neural computer is required. This research comprises two major parts: extension of the Pygmalion neural programming environment to support automatic generation of neural network chips from the nC specification language, and implementation of the high-level synthesis part of the neural silicon compiler. The extension of the neural programming environment has been developed to adapt the nC language to hardware constraints and to provide the environment with a simulation tool to test the performance of the neural chips in advance. Firstly, new hardware-specific requirements have been incorporated into nC. However, special care has been taken to avoid transforming nC into a hardware-oriented language, since the system assumes minimal (or even no) knowledge of VLSI design on the part of the application developer. Secondly, a simulator for neural network hardware has been developed, which assesses how well the generated circuit will perform the neural computation. Lastly, a hardware library of neural network models associated with a target VLSI architecture has been built. The development of the neural silicon compiler focuses on the high-level synthesis part of the process.
The goal of the silicon compiler is to take nC as the input language and automatically translate it into one or more identical integrated circuits, which are specified in VHDL (the IEEE standard hardware description language) at the register transfer level. The development of the high-level synthesis comprises four major parts: firstly, compilation and software-like optimisations of nC; secondly, transformation of the compiled code into a graph-based internal representation, which has been designed as the basis for the hardware synthesis; thirdly, further transformations and hardware-like optimisations on the internal representation; and finally, creation of the neural chip's data path and control unit that implement the behaviour specified in nC. Special attention has been devoted to the creation of optimised hardware structures for ASNNCs employing both phases of neural computing on-chip: recall and learning. This is achieved through the data path and control synthesis algorithms, which adopt a heuristic approach that targets the generated hardware structure of the neural chip at a specific VLSI architecture, namely the Generic Neuron. Viability, in terms of the trade-off between silicon area and speed, has been evaluated through the automatic generation of a VHDL description for a neural chip employing the Back Propagation neural network model. This description is compared with one created manually by a hardware designer.

    A deep learning integrated Lee-Carter model

    In the field of mortality, the Lee–Carter approach can be considered the milestone for forecasting mortality rates among stochastic models. We could define a “Lee–Carter model family” that embraces all developments of this model, including its first formulation (1992), which remains the benchmark against which the performance of newer models is compared. In the Lee–Carter model, the kt parameter, describing the mortality trend over time, plays an important role in determining future mortality behavior. The ARIMA process traditionally used to model kt shows evident limitations in describing the future mortality shape. In the forecasting phase, a more plausible approach should allow for a nonlinear shape of the projected mortality rates. We therefore propose an alternative to ARIMA processes based on a deep learning technique. More precisely, in order to capture the pattern of the kt series over time more accurately, we apply a Recurrent Neural Network with a Long Short-Term Memory architecture and integrate it with the Lee–Carter model to improve its predictive capacity. The proposed approach provides significant performance in terms of predictive accuracy and also avoids the a priori selection of time chunks. Indeed, it is common practice among academics to delete the periods in which the noise is overwhelming or the data quality is insufficient. The strength of the Long Short-Term Memory network lies in its ability to handle this noise and adequately reproduce it in the forecasted trend, thanks to an architecture that takes significant long-term patterns into account.
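The supervised setup behind forecasting the kt series can be sketched as follows. This is an illustrative skeleton, not the paper's model: a least-squares linear autoregression stands in for the LSTM so the sketch stays dependency-free, while the sliding-window construction, look-back length, and recursive multi-step loop are the parts that carry over unchanged to an actual LSTM implementation.

```python
import numpy as np

def make_windows(series, lookback):
    """Slice a 1-D kt series into supervised (X, y) pairs: each row of X
    holds `lookback` consecutive values, y holds the value that follows."""
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = np.asarray(series[lookback:], dtype=float)
    return X, y

def forecast(series, lookback, horizon):
    """Recursive multi-step forecast of kt. A linear autoregression fitted
    by least squares stands in for the LSTM; each prediction is appended
    to the window and fed back in, exactly as with a recurrent model."""
    X, y = make_windows(series, lookback)
    A = np.hstack([X, np.ones((len(X), 1))])        # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    window = list(series[-lookback:])
    preds = []
    for _ in range(horizon):
        nxt = float(np.dot(coef[:-1], window) + coef[-1])
        preds.append(nxt)
        window = window[1:] + [nxt]                 # slide the window forward
    return np.array(preds)
```

On a linearly declining kt the fitted autoregression continues the trend exactly; an LSTM would be dropped in at the fitting step to capture nonlinear patterns.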

    Machine learning based detection of Kepler objects of interest

    The authors would like to thank CNPq-Brazil and the University of St Andrews for their kind support.

    Light curve analysis from Kepler spacecraft collected data

    The authors would like to thank CNPq-Brazil and the University of St Andrews for their kind support. Although scarce, previous work on the application of machine learning and data mining techniques to large corpora of astronomical data has produced promising results. For example, on the task of detecting so-called Kepler objects of interest (KOIs), a range of different ‘off the shelf’ classifiers has demonstrated outstanding performance. These rather preliminary research efforts motivate further exploration of this data domain. In the present work we focus on the analysis of threshold crossing events (TCEs) extracted from photometric data acquired by the Kepler spacecraft. We show that the task of classifying TCEs as being produced by actual planetary transits, as opposed to confounding astrophysical phenomena, is significantly more challenging than that of KOI detection, with different classifiers exhibiting vastly different performances. Nevertheless, the best-performing classifier type, the random forest, achieved excellent accuracy, predicting correctly in approximately 96% of the cases. Our results and analysis should illuminate further efforts into the development of more sophisticated, automatic techniques, and encourage additional work in the area.
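The random-forest classification step can be sketched with scikit-learn on synthetic data. Only the classifier type follows the abstract: the three summary features (transit depth, duration, signal-to-noise) and their distributions are hypothetical stand-ins for the actual TCE features, invented so the sketch is self-contained.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# hypothetical TCE summary features: [depth, duration, SNR]
planets = rng.normal(loc=[1.0, 0.5, 10.0], scale=0.3, size=(n, 3))
false_pos = rng.normal(loc=[0.2, 1.5, 3.0], scale=0.3, size=(n, 3))
X = np.vstack([planets, false_pos])
y = np.array([1] * n + [0] * n)   # 1 = planetary transit, 0 = confounder

# hold out a test split and fit a random forest, the classifier type
# the abstract reports as best-performing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On well-separated synthetic clusters the forest scores near-perfectly; the real TCE problem is harder precisely because the class distributions overlap.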

    A random forest algorithm to improve the Lee–Carter mortality forecasting: impact on q-forward

    Increased life expectancy in developed countries has led researchers to pay more attention to mortality projection in order to anticipate changes in mortality rates. Following the scheme proposed in Deprez et al. (Eur Actuar J 7(2):337–352, 2017) and extended by Levantesi and Pizzorusso (Risks 7(1):26, 2019), we propose a novel approach based on the combination of random forests and two-dimensional P-splines, allowing for accurate mortality forecasting. This approach first provides a diagnosis of the limits of the Lee–Carter mortality model by applying a random forest estimator to the ratio between the observed deaths and their estimated values given by the model, while two-dimensional P-splines are used to smooth and project the random forest estimator in the forecasting phase. Further considerations are devoted to assessing the demographic consistency of the results. The model's accuracy is evaluated by an out-of-sample test. Finally, we analyze the impact of our model on the pricing of q-forward contracts. All the analyses have been carried out for several countries using data from the Human Mortality Database and considering the Lee–Carter model.
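The core diagnostic step, fitting a random forest to the ratio between observed and model-estimated quantities as a function of age and calendar year, can be sketched on toy data. The rates and the multiplicative bias below are invented for illustration, and the two-dimensional P-spline smoothing and projection step of the paper is omitted; the sketch only shows how the ratio estimator corrects a structured error the base model misses.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

ages = np.arange(0, 100)
years = np.arange(1990, 2020)
X = np.array([(x, t) for x in ages for t in years], dtype=float)

# toy Lee-Carter style fitted rates plus a structured multiplicative
# bias that the base model does not capture
a = -8.0 + 0.08 * X[:, 0]                    # log baseline rising with age
k = -0.05 * (X[:, 1] - years[0])             # declining period effect
model_rates = np.exp(a + k)
bias = 1.0 + 0.3 * np.sin(X[:, 0] / 10.0)    # age pattern the model misses
observed = model_rates * bias

# ratio estimator: learn observed/fitted as a function of (age, year),
# then multiply back to correct the base model's fit
psi = observed / model_rates
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, psi)
corrected = model_rates * rf.predict(X)

mae_model = np.mean(np.abs(observed - model_rates))
mae_corrected = np.mean(np.abs(observed - corrected))
```

Because the bias is a smooth function of age, the forest recovers it and the corrected rates track the observed ones far more closely than the raw model fit.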

    Minority Report: the path towards a deterministic theory for the philosophy of criminal law

    In this article we start from an analysis of the film Minority Report (Steven Spielberg, 2002) in order to study the consequences of using Artificial Intelligence as a crime prevention tool. We explore the ethical issues raised by these technologies as accessories, and protagonists, of contemporary law, and we raise the challenges they pose for criminal law theory in the face of a possible total prevention of crime. Finally, we conclude with the prospects that these changes may hold for governments, their justice administration policies, and power relations in the near future of “algorithmic republics”.