11,416 research outputs found

    Error detection and control for nonlinear shell analysis

    A problem-adaptive solution procedure for improving the reliability of finite element solutions to geometrically nonlinear shell-type problems is presented. The strategy incorporates automatic error detection and control and includes an iterative procedure that utilizes the solution at the same load step on a more refined model. Representative nonlinear shell problems are solved.
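
    The abstract describes an error-controlled load-stepping strategy. The following is a minimal sketch of that kind of loop on a toy scalar "shell" response, not the paper's algorithm: the cubic force law, the half-step comparison standing in for a refined model, and all tolerances are illustrative assumptions.

```python
# Hedged sketch (not the paper's method): problem-adaptive load stepping with
# automatic error detection and control on a toy nonlinear response.
import numpy as np

def residual(u, load, k=1.0, c=0.5):
    """Out-of-balance force for a toy cubic stiffening response."""
    return k * u + c * u**3 - load

def tangent(u, k=1.0, c=0.5):
    return k + 3.0 * c * u**2

def solve_newton(u0, load, tol=1e-10, max_iter=50):
    """Newton iteration at a fixed load level."""
    u = u0
    for _ in range(max_iter):
        r = residual(u, load)
        if abs(r) < tol:
            return u, True
        u -= r / tangent(u)
    return u, False

def adaptive_load_stepping(target_load, n_steps=4, err_tol=1e-3):
    """March to target_load; a failed error check at a step re-solves it
    with a halved increment (the 'control' part of the strategy)."""
    u, load, dload = 0.0, 0.0, target_load / n_steps
    while load < target_load - 1e-12:
        trial_load = min(load + dload, target_load)
        u_trial, converged = solve_newton(u, trial_load)
        # Error detection: compare against the solution at the same load
        # reached via two half-steps (a crude stand-in for a refined model).
        u_half, _ = solve_newton(u, load + 0.5 * (trial_load - load))
        u_ref, _ = solve_newton(u_half, trial_load)
        if converged and abs(u_trial - u_ref) < err_tol:
            u, load = u_ref, trial_load   # accept the step
        else:
            dload *= 0.5                  # control: cut the load increment
    return u

print(adaptive_load_stepping(2.0))
```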

    Segmentation of the left ventricle of the heart in 3-D+t MRI data using an optimized nonrigid temporal model

    Modern medical imaging modalities provide large amounts of information in both the spatial and temporal domains and the incorporation of this information in a coherent algorithmic framework is a significant challenge. In this paper, we present a novel and intuitive approach to combine 3-D spatial and temporal (3-D + time) magnetic resonance imaging (MRI) data in an integrated segmentation algorithm to extract the myocardium of the left ventricle. A novel level-set segmentation process is developed that simultaneously delineates and tracks the boundaries of the left ventricle muscle. By encoding prior knowledge about cardiac temporal evolution in a parametric framework, an expectation-maximization algorithm optimally tracks the myocardial deformation over the cardiac cycle. The expectation step deforms the level-set function while the maximization step updates the prior temporal model parameters to perform the segmentation in a nonrigid sense.
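
    The E-step/M-step split described above can be illustrated on a 1-D toy problem. This is a hedged sketch of the EM structure only, not the authors' level-set implementation: a per-frame boundary position plays the role of the front, and a sinusoidal model of the cardiac cycle plays the role of the parametric temporal prior; all names and weights are illustrative.

```python
# Hedged sketch of the EM loop: E-step deforms the fronts toward the image
# evidence, M-step refits the parametric temporal model.
import numpy as np

rng = np.random.default_rng(0)
T = 20                                    # frames over one cardiac cycle
phase = 2 * np.pi * np.arange(T) / T
true_boundary = 10 + 3 * np.sin(phase)    # ground-truth front position
observed = true_boundary + rng.normal(0, 0.5, T)   # noisy image evidence

# Temporal prior: boundary(t) = a + b*sin(phase) + c*cos(phase)
params = np.zeros(3)
boundary = observed.copy()

for _ in range(20):
    # E-step: move each frame's front toward the image evidence while
    # staying close to the current temporal model (weighted average).
    prior = params[0] + params[1] * np.sin(phase) + params[2] * np.cos(phase)
    boundary = 0.7 * observed + 0.3 * prior
    # M-step: refit the prior temporal model parameters to the fronts.
    A = np.column_stack([np.ones(T), np.sin(phase), np.cos(phase)])
    params, *_ = np.linalg.lstsq(A, boundary, rcond=None)

print("estimated temporal model:", np.round(params, 2))  # roughly [10, 3, 0]
```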

    An empirical study of neighbourhood decay in Kohonen's self organizing map

    In this paper, empirical results are presented which suggest that the size of the neighbourhood region, and the rate at which it decays, play a much more significant role in the learning, and especially the development, of topographic feature maps than the learning gain. Using these results as a basis, a scheme for decaying region size during SOM training is proposed. The proposed technique provides near-optimal training time, avoids the need for sophisticated learning-gain decay schemes, and precludes the need for a priori knowledge of likely training times. The scheme also has some potential uses for continuous learning.
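
    A minimal sketch of the kind of region-size decay the abstract discusses, on a 1-D SOM with a fixed learning gain. The exponential decay schedule and all constants are illustrative assumptions, not the paper's proposed scheme.

```python
# Hedged sketch: 1-D SOM trained with a decaying neighbourhood radius and a
# fixed learning gain (no gain-decay schedule).
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_steps = 20, 2000
weights = rng.uniform(0, 1, n_nodes)        # 1-D map, 1-D inputs
positions = np.arange(n_nodes)

sigma0, sigma_final = n_nodes / 2, 0.5      # initial and final region size
alpha = 0.1                                 # fixed learning gain

for t in range(n_steps):
    x = rng.uniform(0, 1)                   # training sample
    bmu = np.argmin(np.abs(weights - x))    # best-matching unit
    sigma = sigma0 * (sigma_final / sigma0) ** (t / n_steps)  # region decay
    h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))  # neighbourhood
    weights += alpha * h * (x - weights)

# Typically approximately monotone: a topographically ordered map.
print(np.round(weights, 2))
```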

    A dynamic network model with persistent links and node-specific latent variables, with an application to the interbank market

    We propose a dynamic network model where two mechanisms control the probability of a link between two nodes: (i) the existence or absence of this link in the past, and (ii) node-specific latent variables (dynamic fitnesses) describing the propensity of each node to create links. Assuming a Markov dynamics for both mechanisms, we propose an Expectation-Maximization algorithm for model estimation and inference of the latent variables. The estimated parameters and fitnesses can be used to forecast the presence of a link in the future. We apply our methodology to the e-MID interbank network for which the two linkage mechanisms are associated with two different trading behaviors in the process of network formation, namely preferential trading and trading driven by node-specific characteristics. The empirical results allow us to recognise preferential lending in the interbank market and indicate how a method that does not account for time-varying network topologies tends to overestimate preferential linkage.
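
    The two linkage mechanisms can be made concrete with a small simulation. This is a hedged sketch, not the authors' estimator: the logistic combination of link persistence and node fitnesses, and all parameter values, are illustrative assumptions.

```python
# Hedged sketch: link probability driven by (i) the link's presence at t-1
# (persistence) and (ii) latent node fitnesses; forecasting reuses the same
# probability evaluated at the last snapshot.
import numpy as np

def link_prob(prev_link, theta_i, theta_j, beta=2.0):
    """P(A_ij(t)=1 | A_ij(t-1), fitnesses) under a logistic specification."""
    z = beta * prev_link + theta_i + theta_j
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n, T = 30, 50
theta = rng.normal(-2.0, 0.7, n)            # node-specific latent fitnesses
A = (rng.random((n, n)) < 0.05).astype(int)

snapshots = []
for _ in range(T):
    P = link_prob(A, theta[:, None], theta[None, :])
    A = (rng.random((n, n)) < P).astype(int)
    np.fill_diagonal(A, 0)
    snapshots.append(A.copy())

# One-step-ahead forecast of every link from the last observed snapshot.
forecast = link_prob(snapshots[-1], theta[:, None], theta[None, :])
print("mean network density:", round(float(np.mean([S.mean() for S in snapshots])), 3))
```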

    Double Whammy - How ICT Projects are Fooled by Randomness and Screwed by Political Intent

    The cost-benefit analysis formulates the holy trinity of objectives of project management: cost, schedule, and benefits. As our previous research has shown, ICT projects deviate from their initial cost estimate by more than 10% in 8 out of 10 cases. Academic research has argued that Optimism Bias and Black Swan Blindness cause forecasts to fall short of actual costs. Firstly, optimism bias has been linked to effects of deception and delusion, which are caused by taking the inside view and ignoring distributional information when making decisions. Secondly, we argued before that Black Swan Blindness makes decision-makers ignore outlying events even when decisions and judgements are based on the outside view. Using a sample of 1,471 ICT projects with a total value of USD 241 billion, we answer the question: can we show the different effects of Normal Performance, Delusion, and Deception? We calculated the cumulative distribution function (CDF) of (actual - forecast)/forecast. Our results show that the CDF changes at two tipping points: the first transforms an exponential function into a Gaussian bell curve; the second transforms the bell curve into a power-law distribution with exponent 2. We argue that these results show that project performance up to the first tipping point is politically motivated, while project performance above the second tipping point indicates that project managers and decision-makers are fooled by random outliers, because they are blind to thick tails. We then show that Black Swan ICT projects are a significant source of uncertainty to an organisation, and one that management needs to be aware of.
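
    A hedged sketch of the core computation, the empirical CDF of relative cost overruns and a tail-exponent check. The synthetic mixture below only mimics the three regimes the abstract names; the tipping points and exponents are not the paper's estimates.

```python
# Hedged sketch: empirical CDF of (actual - forecast)/forecast and a crude
# log-log fit of the upper tail; all distributional choices are illustrative.
import numpy as np

rng = np.random.default_rng(3)
overrun = np.concatenate([
    -rng.exponential(0.05, 400),          # under-runs (politically shaped)
    rng.normal(0.10, 0.15, 900),          # normal performance around forecast
    rng.pareto(2.0, 171) + 0.5,           # thick upper tail (Black Swans)
])

x = np.sort(overrun)
cdf = np.arange(1, len(x) + 1) / len(x)

# A thick tail appears as a straight line of log(1 - CDF) against log(x).
tail = (x > 0.5) & (cdf < 1)
slope = np.polyfit(np.log(x[tail]), np.log(1 - cdf[tail]), 1)[0]
print("estimated tail exponent:", round(-slope, 2))   # roughly 2 here
```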

    Deformable face ensemble alignment with robust grouped-L1 anchors

    Many methods exist at the moment for deformable face fitting. A drawback to nearly all these approaches is that (i) they are noisy in terms of landmark positions, and (ii) the noise is biased across frames (i.e. the misalignment is toward common directions across all frames). In this paper we propose a grouped L1-norm anchored method for simultaneously aligning an ensemble of deformable face images stemming from the same subject, given noisy heterogeneous landmark estimates. Impressive alignment performance improvement and refinement is obtained using very weak initialization as "anchors".
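
    To illustrate why a grouped-L1 objective resists biased per-frame noise, the sketch below computes a grouped-L1 consensus of noisy landmarks (L2 within each landmark group, L1 across groups and frames) via IRLS/Weiszfeld iterations. This is an illustrative stand-in, not the paper's ensemble alignment algorithm; all data and constants are assumptions.

```python
# Hedged sketch: grouped-L1 landmark consensus by iteratively reweighted
# least squares (per-landmark geometric median across frames).
import numpy as np

rng = np.random.default_rng(4)
n_frames, n_landmarks = 50, 5
truth = rng.uniform(0, 100, (n_landmarks, 2))
obs = truth[None] + rng.normal(0, 1.0, (n_frames, n_landmarks, 2))
obs[:10] += np.array([8.0, -6.0])          # biased outlier frames

anchors = obs.mean(axis=0)                 # least-squares init (biased)
for _ in range(100):
    r = np.linalg.norm(obs - anchors[None], axis=2)   # per-group residuals
    w = 1.0 / np.maximum(r, 1e-6)                     # IRLS weights
    anchors = (w[..., None] * obs).sum(0) / w[..., None].sum(0)

# The grouped-L1 consensus largely ignores the biased frames, unlike the mean.
print("mean anchor error:", round(float(np.linalg.norm(anchors - truth, axis=1).mean()), 3))
```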

    Active Learning Strategies for Technology Assisted Sensitivity Review

    Government documents must be reviewed to identify and protect any sensitive information, such as personal information, before the documents can be released to the public. However, in the era of digital government documents, such as e-mail, traditional sensitivity review procedures are no longer practical, for example due to the volume of documents to be reviewed. Therefore, there is a need for new technology-assisted review protocols that integrate automatic sensitivity classification into the sensitivity review process. Moreover, to effectively assist sensitivity review, such assistive technologies must incorporate reviewer feedback to enable sensitivity classifiers to quickly learn and adapt to the sensitivities within a collection when the types of sensitivity are not known a priori. In this work, we present a thorough evaluation of active learning strategies for sensitivity review. We also present an active learning strategy that integrates reviewer feedback, from sensitive text annotations, to identify features of sensitivity that enable us to learn an effective sensitivity classifier (0.7 Balanced Accuracy) using significantly less reviewer effort, according to the sign test (p < 0.01). This approach results in a 51% reduction in the number of documents that must be reviewed to achieve the same level of classification accuracy, compared to when the approach is deployed without annotation features.
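
    A hedged sketch of the baseline family such an evaluation covers: pool-based active learning with uncertainty sampling. The annotation-feature mechanism from the paper is not reproduced here, and the dataset and classifier choices are illustrative assumptions.

```python
# Hedged sketch: pool-based active learning loop with uncertainty sampling,
# standing in for one of the strategies such an evaluation would compare.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8],
                           random_state=0)   # imbalanced: few "sensitive" docs

# Seed annotations stratified over both classes so the classifier can start.
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
labeled = list(rng.choice(pos, 10, replace=False)) + \
          list(rng.choice(neg, 10, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(20):
    clf.fit(X[labeled], y[labeled])
    # Uncertainty sampling: ask the reviewer about the document whose
    # predicted sensitivity probability is closest to 0.5.
    p = clf.predict_proba(X[pool])[:, 1]
    pick = pool[int(np.argmin(np.abs(p - 0.5)))]
    labeled.append(pick)
    pool.remove(pick)

print("balanced accuracy:", round(balanced_accuracy_score(y, clf.predict(X)), 3))
```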