30 research outputs found

    Duration Consistency Filtering for Qualitative Simulation

    We present two new qualitative reasoning formalisms and use them to construct a new type of filtering mechanism for qualitative simulators. Our new sign algebra, SR1*, facilitates reasoning about relationships among the signs of collections of real numbers. The comparison calculus, built on top of SR1*, is a general framework that can be used to qualitatively compare the behaviors of two dynamic systems, or two excerpts of the behavior of a single dynamic system in different situations. These tools enable us to improve the predictive performance of qualitative simulation algorithms. We show that qualitative simulators can make better use of their input to deduce significant amounts of qualitative information about the relative lengths of the time intervals in their output behavior predictions. Simple techniques employing concepts like symmetry, periodicity, and comparison of the circumstances during multiple traversals of the same region can be used to build a list of facts representing the deduced information about relative durations. The duration consistency filter eliminates spurious behaviors that lead to inconsistent combinations of these facts, and surviving behaviors are annotated with richer qualitative descriptions. Used in conjunction with other spurious-behavior elimination methods, this approach would increase the ability of qualitative simulators to handle more complex systems.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/41773/1/10472_2004_Article_5118515.pd
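    The abstract does not reproduce the SR1* algebra or the filter itself; the sketch below is only a minimal Python illustration of the kind of consistency check such a duration filter might perform, assuming the deduced facts are limited to "<" and "=" relations between interval durations. The function name and fact encoding are hypothetical, not taken from the paper.

    from collections import defaultdict

    def is_consistent(facts):
        """Check a set of relative-duration facts for consistency.

        facts: iterable of (a, rel, b) where rel is "<" or "=" and a, b name
        time-interval durations in a predicted behavior.  Returns False if the
        facts imply d < d for some duration d, i.e. the behavior is spurious.
        """
        # Merge durations asserted equal (tiny union-find).
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(x, y):
            parent[find(x)] = find(y)

        for a, rel, b in facts:
            if rel == "=":
                union(a, b)

        # Build a strict-ordering graph over equivalence-class representatives.
        edges = defaultdict(set)
        for a, rel, b in facts:
            if rel == "<":
                ra, rb = find(a), find(b)
                if ra == rb:          # d < d directly: inconsistent
                    return False
                edges[ra].add(rb)

        # Any cycle among "<" edges makes the fact set contradictory.
        WHITE, GRAY, BLACK = 0, 1, 2
        color = defaultdict(int)

        def has_cycle(node):
            color[node] = GRAY
            for nxt in edges[node]:
                if color[nxt] == GRAY:
                    return True
                if color[nxt] == WHITE and has_cycle(nxt):
                    return True
            color[node] = BLACK
            return False

        return not any(color[n] == WHITE and has_cycle(n) for n in list(edges))

    # Example: symmetry might yield d1 = d2, while a spurious behavior also
    # implies d1 < d2 -- that combination is rejected here.
    print(is_consistent([("d1", "=", "d2"), ("d1", "<", "d2")]))  # False
    print(is_consistent([("d1", "<", "d2"), ("d2", "<", "d3")]))  # True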

    Quantitative PET image reconstruction employing nested expectation-maximization deconvolution for motion compensation

    Bulk body motion may occur randomly during PET acquisitions, introducing blurring, attenuation-emission mismatches and, in dynamic PET, discontinuities in the measured time-activity curves between consecutive frames. Dynamic PET scans are also longer, which increases the probability of bulk motion. In this study, we propose a streamlined 3D PET motion-compensated image reconstruction (3D-MCIR) framework capable of robustly deconvolving intra-frame motion from a static or dynamic 3D sinogram. The presented 3D-MCIR methods need neither partition the data into multiple gates, as 4D MCIR algorithms do, nor access list-mode (LM) data, as LM MCIR methods do, both of which require increased computation or memory resources. The proposed algorithms can compensate for any periodic or non-periodic motion, such as cardio-respiratory or bulk motion, the latter including rolling, twisting or drifting. Inspired by the widely adopted point-spread function (PSF) deconvolution techniques for 3D PET reconstruction, we introduce an image-based 3D generalized motion deconvolution method within the standard 3D maximum-likelihood expectation-maximization (ML-EM) reconstruction framework. In particular, we first integrate a motion-blurring kernel, accounting for all tracked motion within a frame, as an additional ML-EM modeling component in image space (integrated 3D-MCIR). We then replace the integrated model component with a nested iterative Richardson-Lucy (RL) image-based deconvolution method to accelerate the ML-EM convergence rate (RL-3D-MCIR). The final method was evaluated with realistic simulations of whole-body dynamic PET data employing the XCAT phantom and real human bulk-motion profiles, the latter estimated from volunteer dynamic MRI scans. In addition, metabolic uptake rate (Ki) parametric images were generated with the standard Patlak method. Our results demonstrate significant improvements in contrast-to-noise ratio (CNR) and noise-bias performance in both dynamic and parametric images. The proposed nested RL-3D-MCIR method is implemented on the Software for Tomographic Image Reconstruction (STIR) open-source platform and is scheduled for public release.
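    As a rough illustration of the integrated image-space blurring idea described above (not the STIR implementation and not the nested RL variant), the toy Python/NumPy sketch below runs a standard ML-EM update with a motion-blurring matrix B folded into the forward model, i.e. the data are modeled as Poisson(A B x). All names, array sizes, and the random system model are assumptions made only for this example.

    import numpy as np

    def mlem_motion_integrated(y, A, B, n_iters=50, eps=1e-12):
        """Toy ML-EM reconstruction with an image-space motion-blurring kernel.

        y : measured sinogram counts, shape (n_bins,)
        A : system (projection) matrix, shape (n_bins, n_voxels)
        B : motion-blurring matrix in image space, shape (n_voxels, n_voxels),
            e.g. an average of the warps tracked within the frame.
        Models y ~ Poisson(A @ B @ x) and iterates
            x <- x / (B.T @ A.T @ 1) * (B.T @ A.T @ (y / (A @ B @ x)))
        """
        n_voxels = A.shape[1]
        x = np.ones(n_voxels)                    # uniform initial image
        sens = B.T @ (A.T @ np.ones_like(y))     # sensitivity image of the A@B model
        for _ in range(n_iters):
            y_est = A @ (B @ x) + eps            # forward-project the motion-blurred image
            ratio = y / y_est
            x *= (B.T @ (A.T @ ratio)) / (sens + eps)
        return x

    # Tiny illustration with a random system model and a 3-position motion blur.
    rng = np.random.default_rng(0)
    n_bins, n_voxels = 40, 16
    A = rng.random((n_bins, n_voxels))
    # Motion blur: average of the identity and two circular shifts of the image vector.
    shifts = [np.roll(np.eye(n_voxels), s, axis=1) for s in (0, 1, 2)]
    B = sum(shifts) / len(shifts)
    x_true = rng.random(n_voxels) * 10
    y = rng.poisson(A @ (B @ x_true)).astype(float)
    x_rec = mlem_motion_integrated(y, A, B)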

    Achieving far transfer in an integrated cognitive architecture

    Transfer is the ability to employ knowledge acquired in one task to improve performance on another. We study transfer in the context of the ICARUS cognitive architecture, which supplies diverse capabilities for execution, inference, planning, and learning. We report on an extension to ICARUS called representation mapping that transfers structured skills and concepts between disparate tasks that may not even be expressed with the same symbol set. We show that representation mapping integrates naturally into ICARUS' cognitive processing loop, resulting in a system that addresses a qualitatively new class of problems by considering the relevance of past experience to current goals.
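    The ICARUS skill formalism is not given in the abstract; the hypothetical Python sketch below only illustrates the underlying idea of representation mapping, rewriting a structured skill under a mapping between source- and target-domain predicate symbols. The skill encoding and the mapping are invented for the example and are not ICARUS syntax.

    def map_skill(skill, symbol_map):
        """Rewrite every predicate symbol in a skill using symbol_map.

        skill: dict with 'goal', 'conditions', 'actions', each a list of
               (predicate, *args) tuples.  Symbols missing from symbol_map
               are kept as-is, so partially overlapping vocabularies still
               transfer.
        """
        def rename(literal):
            pred, *args = literal
            return (symbol_map.get(pred, pred), *args)

        return {key: [rename(lit) for lit in literals]
                for key, literals in skill.items()}

    # A source-domain skill (e.g. a block-stacking task) ...
    stack_skill = {
        "goal":       [("on", "?x", "?y")],
        "conditions": [("clear", "?x"), ("clear", "?y"), ("holding", "?x")],
        "actions":    [("put-down", "?x", "?y")],
    }

    # ... carried over to a target domain that uses different predicate names.
    mapping = {"on": "loaded-on", "clear": "free", "put-down": "place"}
    print(map_skill(stack_skill, mapping))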

    Learning goal hierarchies from structured observation and expert annotations