2,726 research outputs found

    Finding Temporal Patterns in Noisy Longitudinal Data: A Study in Diabetic Retinopathy

    This paper describes an approach to temporal pattern mining using the concept of user-defined temporal prototypes to define the nature of the trends of interest. The temporal patterns are defined in terms of sequences of support values associated with identified frequent patterns. The prototypes are defined mathematically so that they can be mapped onto the temporal patterns. The focus for the advocated temporal pattern mining process is a large longitudinal patient database collected as part of a diabetic retinopathy screening programme. The data set is, in itself, also of interest: it is very noisy (in common with other similar medical datasets) and does not feature a clear association between specific time stamps and subsets of the data. The diabetic retinopathy application, the data warehousing and cleaning process, and the frequent pattern mining procedure (together with the application of the prototype concept) are all described in the paper. An evaluation of the frequent pattern mining process is also presented.
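    The paper's own prototype definitions are not reproduced here; as a rough sketch of the idea of matching support-value sequences against user-defined temporal prototypes, the following Python fragment uses made-up pattern names, toy support values and simple prototype shapes (increasing, decreasing, constant), all of which are illustrative assumptions rather than the paper's formulations.

        # Illustrative sketch only: toy support-value sequences for two frequent
        # patterns, matched to simple user-defined prototype trend shapes.
        import numpy as np

        support_sequences = {
            "pattern_A": np.array([0.10, 0.14, 0.19, 0.25, 0.31]),
            "pattern_B": np.array([0.40, 0.38, 0.37, 0.36, 0.35]),
        }

        t = np.linspace(0.0, 1.0, 5)          # five time stamps (epochs)
        prototypes = {
            "increasing": t,
            "decreasing": 1.0 - t,
            "constant": np.full_like(t, 0.5),
        }

        def best_prototype(seq, prototypes):
            """Assign the prototype whose shape is closest to the rescaled sequence."""
            s = (seq - seq.min()) / (seq.max() - seq.min() + 1e-12)
            distances = {name: np.linalg.norm(s - p) for name, p in prototypes.items()}
            return min(distances, key=distances.get)

        for name, seq in support_sequences.items():
            print(name, "->", best_prototype(seq, prototypes))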

    Quasi-Simultaneous Viscous-Inviscid Interaction for Three-Dimensional Turbulent Wing Flow


    New media and culture. Summary


    Internet and Democracy. Summary


    Probabilistic modeling of noise transfer characteristics in digital circuits

    Device scaling, the driving force of CMOS technology, has led to a continuous decrease in the energy level representing logic states. The resulting small noise margins, in combination with increasing problems regarding supply voltage stability and process variability, create a design conflict between efficiency and reliability. This conflict is expected to intensify in future technologies. Current research approaches on fault-tolerance architectures and countermeasures at circuit level, unfortunately, incur a significant area and energy penalty without guaranteeing the absence of errors. To overcome this problem, it appears attractive to tolerate bit errors at circuit level and employ error handling methods at higher system levels. To do this, an estimate of the bit error rate (BER) at circuit level is necessary. Due to the size of the circuits, Monte Carlo simulation suffers from impractical runtimes, so an alternative modeling scheme is proposed. The model allows a probabilistic estimation of error rates at circuit level within reasonable runtimes, taking into account statistical effects ranging from supply noise and electromagnetic coupling to process variability.
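    As a minimal sketch of the kind of circuit-level error-rate estimate the abstract refers to, assuming a simple zero-mean Gaussian model for the combined noise at a node (the noise margin and sigma values below are illustrative assumptions, not figures from the paper):

        # Tail probability of Gaussian noise exceeding the noise margin at a node;
        # a closed-form stand-in for what a Monte Carlo simulation would estimate.
        from math import erfc, sqrt

        def bit_error_rate(noise_margin_v, noise_sigma_v):
            """One-sided P(noise > noise margin) for zero-mean Gaussian noise."""
            return 0.5 * erfc(noise_margin_v / (noise_sigma_v * sqrt(2.0)))

        print(bit_error_rate(noise_margin_v=0.15, noise_sigma_v=0.04))  # about 8.8e-5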

    Money in monetary policy design: monetary cross-checking in the New-Keynesian model

    In the New-Keynesian model, optimal interest rate policy under uncertainty is formulated without reference to monetary aggregates as long as certain standard assumptions on the distributions of unobservables are satisfied. The model has been criticized for failing to explain common trends in money growth and inflation, and it has been argued that money should therefore be used as a cross-check in policy formulation (see Lucas (2007)). We show that the New-Keynesian model can explain such trends if one allows for the possibility of persistent central bank misperceptions. Such misperceptions motivate the search for policies that include additional robustness checks. In earlier work, we proposed an interest rate rule that is near-optimal in normal times but includes a cross-check with monetary information; in case of unusual monetary trends, interest rates are adjusted. In this paper, we show in detail how to derive the appropriate magnitude of the interest rate adjustment following a significant cross-check with monetary information, when the New-Keynesian model is the central bank's preferred model. The cross-check is shown to be effective in offsetting persistent deviations of inflation due to central bank misperceptions. Keywords: Monetary Policy, New-Keynesian Model, Money, Quantity Theory, European Central Bank, Policy Under Uncertainty
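    As a stylized sketch only (the functional form, threshold and adjustment coefficient below are assumptions for illustration, not the rule derived in the paper), a cross-checking interest rate rule of the kind described might look as follows:

        # Near-optimal New-Keynesian rate plus an adjustment that is switched on
        # only when a filtered money-growth signal deviates notably from target.
        def policy_rate(i_nk, money_signal, threshold=0.5, kappa=0.5):
            """i_nk: near-optimal NK interest rate (percent); money_signal: filtered
            money-growth-based inflation estimate minus target (percentage points)."""
            if abs(money_signal) > threshold:      # cross-check triggers
                return i_nk + kappa * money_signal
            return i_nk                            # normal times: unadjusted NK rule

        print(policy_rate(i_nk=3.0, money_signal=1.2))  # cross-check adjusts the rate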

    Functioning and disability in multiple sclerosis from the patient perspective

    Multiple sclerosis (MS) has a great impact on functioning and disability. The perspective of those who experience the health problem has to be taken into account to obtain an in-depth understanding of functioning and disability. The objective was to describe the areas of functioning and disability and relevant contextual factors in MS from the patient perspective. A qualitative study using focus group methodology was performed. The sample size was determined by saturation. The focus groups were digitally recorded and transcribed verbatim. The meaning condensation procedure was used for data analysis. Identified concepts were linked to International Classification of Functioning, Disability and Health (ICF) categories according to established linking rules. Six focus groups with a total of 27 participants were conducted. In total, 1327 concepts were identified and linked to 106 ICF categories of the ICF components Body Functions, Activities and Participation, and Environmental Factors. This qualitative study reports on the impact of MS on functioning and disability from the patient perspective. The participants in this study provided information about all physical aspects and areas of daily life affected by the disease, as well as the environmental factors influencing their lives.

    Quantification of the performance of iterative and non-iterative computational methods of locating partial discharges using RF measurement techniques

    Partial discharge (PD) is an electrical discharge phenomenon that occurs when the insulation material of high voltage equipment is subjected to high electric field stress. Its occurrence can be an indication of incipient failure within power equipment such as power transformers, underground transmission cable or switchgear. Radio frequency measurement methods can be used to detect and locate discharge sources by measuring the propagated electromagnetic wave arising as a result of ionic charge acceleration. An array of at least four receiving antennas may be employed to detect any radiated discharge signals, and the three-dimensional position of the discharge source can then be calculated using different algorithms. These algorithms fall into two categories: iterative or non-iterative. This paper evaluates, through simulation, the location performance of an iterative method (the standard least squares method) and a non-iterative method (the Bancroft algorithm). Simulations were carried out using (i) a "Y" shaped antenna array and (ii) a square shaped antenna array, each consisting of four antennas. The results show that PD location accuracy is influenced by the algorithm's error bound, the number of iterations and the initial values for the iterative algorithm, as well as by the antenna arrangement for both the non-iterative and iterative algorithms. Furthermore, this research proposes a novel approach for selecting adequate error bounds and numbers of iterations using results of the non-iterative method, thus removing some of the iterative method's dependencies.
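    The antenna coordinates, source position and noise level below are illustrative assumptions rather than values from the paper; the sketch only shows the general shape of an iterative least-squares time-difference-of-arrival (TDOA) location step of the kind evaluated, using a "Y" shaped four-antenna array.

        # Illustrative sketch: iterative least-squares TDOA location with a "Y" shaped
        # four-antenna array; geometry, source position and noise level are assumptions.
        import numpy as np
        from scipy.optimize import least_squares

        C = 3.0e8  # propagation speed of the radiated EM wave (m/s)

        # Reference antenna at the centre of the "Y", three arms at 120 degrees (metres).
        antennas = np.array([
            [0.0, 0.0, 0.0],
            [1.0, 0.0, 0.0],
            [-0.5, 0.87, 0.0],
            [-0.5, -0.87, 0.0],
        ])

        def range_differences(source):
            """Path-length differences (metres) relative to the reference antenna."""
            d = np.linalg.norm(antennas - source, axis=1)
            return d[1:] - d[0]

        # Simulated PD source and noisy TDOA measurements (seconds).
        true_source = np.array([2.0, 1.5, 0.8])
        rng = np.random.default_rng(0)
        measured_tdoa = range_differences(true_source) / C + rng.normal(0.0, 3e-11, size=3)

        # Iterative solution; as the paper notes, accuracy depends on the initial guess.
        initial_guess = np.array([1.0, 1.0, 1.0])
        fit = least_squares(lambda p: range_differences(p) - C * measured_tdoa, initial_guess)
        print("estimated PD source position (m):", fit.x)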