
    Distributed-memory large deformation diffeomorphic 3D image registration

    We present a parallel distributed-memory algorithm for large deformation diffeomorphic registration of volumetric images that produces large isochoric deformations (locally volume preserving). Image registration is a key technology in medical image analysis. Our algorithm uses a partial differential equation constrained optimal control formulation. Finding the optimal deformation map requires the solution of a highly nonlinear problem that involves pseudo-differential operators, biharmonic operators, and pure advection operators both forward and backward in time. A key issue is the time to solution, which demands efficient optimization methods as well as effective utilization of high performance computing resources. To address this problem we use a preconditioned, inexact Gauss-Newton-Krylov solver. Our algorithm integrates several components: a spectral discretization in space, a semi-Lagrangian formulation in time, analytic adjoints, different regularization functionals (including volume-preserving ones), a spectral preconditioner, a highly optimized distributed Fast Fourier Transform, and a cubic interpolation scheme for the semi-Lagrangian time-stepping. We demonstrate the scalability of our algorithm on images with resolutions of up to 1024^3 on the "Maverick" and "Stampede" systems at the Texas Advanced Computing Center (TACC). The critical problem in the medical imaging application domain is strong scaling, that is, solving registration problems of a moderate size of 256^3, a typical resolution for medical images. We are able to solve the registration problem for images of this size in less than five seconds on 64 x86 nodes of TACC's "Maverick" system. Comment: accepted for publication at SC16 in Salt Lake City, Utah, USA; November 2016.
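
    The semi-Lagrangian time-stepping with cubic interpolation mentioned above can be illustrated with a minimal single-node sketch: characteristics are traced backward in time and the transported image is sampled at the off-grid departure points with cubic interpolation. This is only a conceptual Python/NumPy illustration under assumed inputs (a regular grid, a given velocity field in grid units, periodic boundaries); the paper's actual solver uses a spectral discretization and a distributed FFT across many nodes.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def semi_lagrangian_step(image, velocity, dt):
            """One backward semi-Lagrangian advection step on a periodic grid.

            image:    3D array transported by the flow
            velocity: tuple of three 3D arrays (vx, vy, vz) in grid units (assumed input)
            dt:       time-step size
            """
            grid = np.meshgrid(*[np.arange(n) for n in image.shape], indexing="ij")
            # Trace each characteristic backward: departure point = x - dt * v(x)
            departure = [g - dt * v for g, v in zip(grid, velocity)]
            # Cubic interpolation of the image at the (generally off-grid) departure points
            return map_coordinates(image, departure, order=3, mode="wrap")

        # Toy usage with a hypothetical constant velocity field on a 64^3 grid
        img = np.random.rand(64, 64, 64)
        vel = tuple(np.full(img.shape, 0.3) for _ in range(3))
        advected = semi_lagrangian_step(img, vel, dt=1.0)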

    WebFlow - A Visual Programming Paradigm for Web/Java Based Coarse Grain Distributed Computing

    We present recent work at NPAC aimed at developing WebFlow, a general purpose Web-based visual interactive programming environment for coarse grain distributed computing. We follow the 3-tier architecture with the central control and integration WebVM layer in tier-2, interacting with the visual graph editor applets in tier-1 (front-end) and the legacy systems in tier-3. WebVM is given by a mesh of Java Web servers such as Jeeves from JavaSoft or Jigsaw from MIT/W3C. All system control structures are implemented as URL-addressable servlets, which enable Web browser-based authoring, monitoring, publication, documentation, and software distribution tools for distributed computing. We view WebFlow/WebVM as a promising programming paradigm and coordination model for the exploding volume of Web/Java software, and we illustrate it in a set of ongoing application development activities.
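
    The idea of URL-addressable control structures can be sketched, purely as a conceptual analogue, with a tiny HTTP dispatcher in which each "module" is reachable under its own URL, loosely mirroring WebVM's servlet-based control structures. The module names and responses below are hypothetical, and the sketch is in Python rather than the Java servlets the abstract describes.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Hypothetical registry of URL-addressable control modules (illustrative only)
        MODULES = {
            "/status": lambda: b"all module channels connected",
            "/start":  lambda: b"compute graph started",
        }

        class ControlHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                module = MODULES.get(self.path)
                if module is None:
                    self.send_error(404, "unknown module")
                    return
                body = module()
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            # Browse to http://localhost:8080/status to invoke a module
            HTTPServer(("localhost", 8080), ControlHandler).serve_forever()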

    On the Move to Meaningful Internet Systems: OTM 2015 Workshops: Confederated International Workshops: OTM Academy, OTM Industry Case Studies Program, EI2N, FBM, INBAST, ISDE, META4eS, and MSC 2015, Rhodes, Greece, October 26-30, 2015. Proceedings

    This volume constitutes the refereed proceedings of the following 8 International Workshops: OTM Academy; OTM Industry Case Studies Program; Enterprise Integration, Interoperability, and Networking, EI2N; International Workshop on Fact Based Modeling 2015, FBM; Industrial and Business Applications of Semantic Web Technologies, INBAST; Information Systems in Distributed Environment, ISDE; Methods, Evaluation, Tools and Applications for the Creation and Consumption of Structured Data for the e-Society, META4eS; and Mobile and Social Computing for collaborative interactions, MSC 2015. These workshops were held as associated events at OTM 2015, the federated conferences "On The Move Towards Meaningful Internet Systems and Ubiquitous Computing", in Rhodes, Greece, in October 2015. The 55 full papers presented together with 3 short papers and 2 posters were carefully reviewed and selected from a total of 100 submissions. The workshops share the distributed aspects of modern computing systems; they experience the application pull created by the Internet and by the so-called Semantic Web, in particular developments of Big Data, the increased importance of security issues, and the globalization of mobile-based technologies.

    High performance data analysis via coordinated caches

    With the second run period of the LHC, high energy physics collaborations will have to face increasing computing infrastructure needs. Opportunistic resources are expected to absorb many computationally expensive tasks, such as Monte Carlo event simulation. This leaves dedicated HEP infrastructure with an increased load of analysis tasks that in turn will need to process an increased volume of data. In addition to storage capacities, a key factor for future computing infrastructure is therefore the input bandwidth available per core. Modern data analysis infrastructure relies on one of two paradigms: either data is kept on dedicated storage and accessed via the network, or it is distributed over all compute nodes and accessed locally. Dedicated storage allows the data volume to grow independently of processing capacities, whereas local access allows processing capacities to scale linearly. However, with the growing data volume and processing requirements, HEP will require both of these features. To enable adequate user analyses in the future, the KIT CMS group is merging both paradigms: popular data is spread over a local disk layer on the compute nodes, while any data is available from an arbitrarily sized background storage. This concept is implemented as a pool of distributed caches, which are loosely coordinated by a central service. A Tier 3 prototype cluster is currently being set up for performant user analyses of both local and remote data.
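
    A minimal sketch of the caching idea, under assumed file-system paths and without the actual KIT implementation details: a compute node serves reads from a node-local disk cache when possible, stages misses in from the background storage, and reports accesses to a (here trivially in-process) coordinator so that popular data ends up on the local layer.

        import collections
        import os
        import shutil

        class CoordinatedCache:
            """Node-local disk cache backed by an arbitrarily sized background store (sketch)."""

            def __init__(self, cache_dir, store_dir):
                self.cache_dir = cache_dir               # local disk layer on the compute node
                self.store_dir = store_dir               # stands in for the background storage
                self.popularity = collections.Counter()  # access counts reported to the coordinator

            def open(self, name):
                self.popularity[name] += 1               # loose coordination: report the access
                local = os.path.join(self.cache_dir, name)
                if not os.path.exists(local):            # cache miss: stage the file in locally
                    shutil.copy(os.path.join(self.store_dir, name), local)
                return open(local, "rb")

        # Hypothetical usage: cache = CoordinatedCache("/scratch/cache", "/pnfs/background")
        #                     with cache.open("dataset_042.root") as f: ...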

    OPENMENDEL: A Cooperative Programming Project for Statistical Genetics

    Statistical methods for genomewide association studies (GWAS) continue to improve. However, the increasing volume and variety of genetic and genomic data make computational speed and ease of data manipulation mandatory in future software. In our view, a collaborative effort of statistical geneticists is required to develop open source software targeted to genetic epidemiology. Our attempt to meet this need is called the OPENMENDEL project (https://openmendel.github.io). It aims to (1) enable interactive and reproducible analyses with informative intermediate results, (2) scale to big data analytics, (3) embrace parallel and distributed computing, (4) adapt to rapid hardware evolution, (5) allow cloud computing, (6) allow integration of varied genetic data types, and (7) foster easy communication between clinicians, geneticists, statisticians, and computer scientists. This article reviews and makes recommendations to the genetic epidemiology community in the context of the OPENMENDEL project. Comment: 16 pages, 2 figures, 2 tables.

    An Efficient Approach for Cost Optimization of the Movement of Big Data

    With the emergence of cloud computing, Big Data has caught the attention of many researchers in the area of cloud computing. As the Volume, Velocity and Variety (the 3 Vs) of big data are growing exponentially, dealing with them is a big challenge, especially in the cloud environment. Looking at the current trend of the IT sector, cloud computing is mainly used by service providers to host their applications. A lot of research has been done to improve the network utilization of WAN (Wide Area Network), and it has achieved considerable success over traditional LAN (Local Area Network) techniques. In dealing with this issue, however, the major questions of data movement, namely from where to where the big data should be moved and how it should be moved, have been overlooked. As the various applications generating big data are hosted in geographically distributed data centers, they individually collect large volumes of data in the form of application data as well as logs. This paper focuses on the challenge of moving big data from one data center to another. We provide an efficient algorithm for optimizing the cost of moving big data from one data center to another in an offline environment. The approach models the cloud's data centers as a graph, and results show that the adopted mechanism provides a better solution for minimizing the cost of data movement.
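
    As a minimal sketch of the graph model, and not the paper's specific algorithm, the data centers can be treated as nodes of a weighted graph whose edge weights stand for transfer costs, with a shortest-path search picking the cheapest route. The data-center names and costs below are made up for illustration.

        import heapq

        # Hypothetical transfer-cost graph between geographically distributed data centers
        COSTS = {
            "dc_eu":   {"dc_us": 4.0, "dc_asia": 9.0},
            "dc_us":   {"dc_eu": 4.0, "dc_asia": 3.0},
            "dc_asia": {"dc_eu": 9.0, "dc_us": 3.0},
        }

        def cheapest_route(graph, src, dst):
            """Dijkstra's algorithm over the data-center graph; returns (cost, path)."""
            queue, visited = [(0.0, src, [src])], set()
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == dst:
                    return cost, path
                if node in visited:
                    continue
                visited.add(node)
                for nxt, weight in graph.get(node, {}).items():
                    if nxt not in visited:
                        heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
            return float("inf"), []

        print(cheapest_route(COSTS, "dc_eu", "dc_asia"))  # (7.0, ['dc_eu', 'dc_us', 'dc_asia'])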

    'NoSQL' and electronic patient record systems: opportunities and challenges

    Research into electronic health record systems can be traced back over four decades; however, the penetration into healthcare organizations of records which incorporate more than simply basic information is relatively limited. There is a great (and largely unsatisfied) demand for effective health record systems, yet such systems are very difficult to build: data is generally stored in highly distributed states, in a diverse range of formats, as unstructured data, with access and updating achieved over online systems. Internet application design must reflect three trends in the computing landscape: (1) the growing number of users that applications must support (along with growing user performance expectations), (2) growth in the volume, range, and diversity of the data that developers must accommodate, and (3) the rise of Cloud Computing (which relies on a distributed three-tier Internet architecture). The traditional approach to data storage has generally employed Relational Database Systems; however, to address the evolving paradigm, interest has been shown in alternative database systems, including 'NoSQL' technologies, which are gaining traction in Internet-based enterprise systems. This paper considers the requirements of distributed health record systems in online applications and database systems. The analysis supports the conclusion that 'NoSQL' database systems provide a potentially useful approach to the implementation of health record systems in online applications.
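
    The appeal of schema-less storage for heterogeneous patient data can be sketched with a document database; the example below assumes a MongoDB instance reachable on localhost and uses hypothetical database, collection, and field names, so it illustrates the general 'NoSQL' approach rather than any system from the paper.

        from pymongo import MongoClient  # assumes a MongoDB server reachable on localhost

        client = MongoClient("mongodb://localhost:27017/")
        records = client.ehr_demo.patient_records       # hypothetical database and collection

        # Schema-less documents let patients carry different, unstructured fields
        # without forcing everything into one rigid relational schema.
        records.insert_one({
            "patient_id": "p-0001",
            "encounters": [{"date": "2014-03-02", "notes": "routine check-up"}],
            "imaging": {"modality": "CMR", "series": 3},
        })
        records.insert_one({
            "patient_id": "p-0002",
            "allergies": ["penicillin"],                # different fields, same collection
        })

        records.create_index("patient_id")              # index the common lookup key
        print(records.find_one({"patient_id": "p-0001"}))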

    Improved workflow for quantification of left ventricular volumes and mass using free-breathing motion corrected cine imaging.

    BACKGROUND: Traditional cine imaging for cardiac functional assessment requires breath-holding, which can be problematic in some situations. Free-breathing techniques have relied on multiple averages or real-time imaging, producing images that can be spatially and/or temporally blurred. To overcome this, methods have been developed to acquire real-time images over multiple cardiac cycles, which are subsequently motion corrected and reformatted to yield a single image series displaying one cardiac cycle with high temporal and spatial resolution. Application of these algorithms has required significant additional reconstruction time. The use of distributed computing was recently proposed as a way to improve clinical workflow with such algorithms. In this study, we have deployed a distributed computing version of motion corrected re-binning reconstruction for free-breathing evaluation of cardiac function. METHODS: Twenty-five patients and 25 volunteers underwent cardiovascular magnetic resonance (CMR) for evaluation of left ventricular end-systolic volume (ESV), end-diastolic volume (EDV), and end-diastolic mass. Measurements using motion corrected re-binning were compared to those using breath-held SSFP and to free-breathing SSFP with multiple averages, and were performed by two independent observers. Pearson correlation coefficients and Bland-Altman plots tested agreement across techniques. Concordance correlation coefficients and Bland-Altman analysis tested inter-observer variability. Total scan plus reconstruction times were tested for significant differences using a paired t-test. RESULTS: Measured volumes and mass obtained by motion corrected re-binning and by averaged free-breathing SSFP compared favorably to those obtained by breath-held SSFP (r = 0.9863/0.9813 for EDV, 0.9550/0.9685 for ESV, 0.9952/0.9771 for mass). Inter-observer variability was good, with concordance correlation coefficients between observers across all acquisition types suggesting substantial agreement. Both motion corrected re-binning and averaged free-breathing SSFP acquisition and reconstruction times were shorter than breath-held SSFP techniques (p < 0.0001). On average, motion corrected re-binning required 3 min less than breath-held SSFP imaging, a 37% reduction in acquisition and reconstruction time. CONCLUSIONS: The motion corrected re-binning image reconstruction technique provides robust cardiac imaging that can be used for quantification that compares favorably to breath-held SSFP as well as multiple-average free-breathing SSFP, but can be obtained in a fraction of the time when using cloud-based distributed computing reconstruction.
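
    The agreement statistics reported in the study (Pearson correlation and Bland-Altman analysis) can be computed in a few lines; the sketch below uses made-up EDV values purely to show the computation, not data from the paper.

        import numpy as np
        from scipy.stats import pearsonr

        def bland_altman(a, b):
            """Bias and 95% limits of agreement between two measurement series."""
            diff = np.asarray(a, float) - np.asarray(b, float)
            bias = diff.mean()
            spread = 1.96 * diff.std(ddof=1)
            return bias, bias - spread, bias + spread

        # Hypothetical EDV measurements (mL): breath-held SSFP vs. motion corrected re-binning
        edv_bh   = [150.0, 172.0, 133.0, 190.0, 161.0]
        edv_moco = [148.0, 175.0, 131.0, 188.0, 163.0]

        r, p = pearsonr(edv_bh, edv_moco)
        print(f"Pearson r = {r:.4f} (p = {p:.3g})")
        print("Bland-Altman bias and limits:", bland_altman(edv_bh, edv_moco))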