
    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly the virtualisation of resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows its available resources to be provisioned to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his or her control; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions for network virtualisation or its associated architectures. This deliverable therefore proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation takes into account an initial set of FEDERICA requirements; possible extensions of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the user requirements: it is crucial that the resulting architecture fits the demands users may have. Since this deliverable was produced at the same time as the user consultation carried out by the project activities responsible for the Use Case definitions, JRA1 has proposed a set of basic Use Cases as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, onto which they can load the protocols or applications they have developed and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
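
    The deliverable itself is descriptive, but to make the last point concrete, the sketch below uses the Python bindings of libvirt to enumerate virtual machine instances of the kind that could back a slice's processing resources. libvirt is only one plausible example of a VM management API; the abstract does not name the products that were actually evaluated, and the connection URI here is an assumption.

    # Illustrative only: enumerate VM instances via the libvirt Python API.
    import libvirt

    def list_slice_nodes(uri="qemu:///system"):
        """Print the VMs that could act as a slice's software routers
        or end nodes (hypervisor URI is an assumption)."""
        conn = libvirt.open(uri)                 # connect to the hypervisor
        try:
            for dom in conn.listAllDomains():
                state, maxmem, mem, ncpus, cputime = dom.info()
                print(f"{dom.name()}: vCPUs={ncpus}, mem={mem // 1024} MiB, "
                      f"active={bool(dom.isActive())}")
        finally:
            conn.close()

    if __name__ == "__main__":
        list_slice_nodes()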

    Dynamic finite-strain modelling of the human left ventricle in health and disease using an immersed boundary-finite element method

    Detailed models of the biomechanics of the heart are important both for developing improved interventions for patients with heart disease and for patient risk stratification and treatment planning. For instance, stress distributions in the heart affect cardiac remodelling, but such distributions are not presently accessible in patients. Biomechanical models of the heart offer detailed three-dimensional deformation, stress and strain fields that can supplement conventional clinical data. In this work, we introduce dynamic computational models of the human left ventricle (LV) that are derived from clinical imaging data obtained from a healthy subject and from a patient with a myocardial infarction (MI). Both models incorporate a detailed invariant-based orthotropic description of the passive elasticity of the ventricular myocardium along with a detailed biophysical model of active tension generation in the ventricular muscle. These constitutive models are employed within a dynamic simulation framework, based on an immersed boundary (IB) method with a finite element description of the structural mechanics, that accounts for the inertia of both the ventricular muscle and the blood. The geometry of the models is based on data obtained non-invasively by cardiac magnetic resonance (CMR) imaging. CMR data are also used to estimate the parameters of the passive and active constitutive models, which are determined so that the simulated end-diastolic and end-systolic volumes agree with the corresponding volumes determined from the CMR imaging studies. Using these models, we simulate LV dynamics from end-diastole to end-systole. The results of our simulations are in good agreement with subject-specific CMR-derived strain measurements and with earlier clinical studies on human LV strain distributions.
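
    The abstract does not reproduce the constitutive law, but one widely used invariant-based orthotropic description of passive myocardium of the kind referred to is the Holzapfel–Ogden strain-energy function; a schematic form is given below, and the paper's exact parameterisation may differ. Here I_1 is the first invariant of the right Cauchy–Green tensor, I_4f and I_4s are squared stretches along the fibre and sheet directions, I_8fs couples the two, and the fibre and sheet terms are usually active only under tension (I_4i > 1).

    % Schematic Holzapfel--Ogden law (illustrative, not necessarily the
    % paper's exact form):
    W = \frac{a}{2b}\left(e^{b(I_1-3)} - 1\right)
      + \sum_{i \in \{f,s\}} \frac{a_i}{2b_i}\left(e^{b_i(I_{4i}-1)^2} - 1\right)
      + \frac{a_{fs}}{2b_{fs}}\left(e^{b_{fs} I_{8fs}^2} - 1\right)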

    Sequential circuit design in quantum-dot cellular automata

    In this work we present a novel probabilistic modeling scheme for sequential circuit design in quantum-dot cellular automata (QCA) technology. Clocked QCA circuits possess an inherent direction of information flow, which can be effectively modeled using Bayesian networks (BNs). In sequential circuit design this presents a problem due to the presence of feedback cycles, since BNs are directed acyclic graphs (DAGs). The model presented in this work can be constructed from a logic design layout in QCA and is shown to be a dynamic Bayesian network (DBN). DBNs are powerful in modeling the higher-order spatial and temporal correlations present in most sequential circuits. The attractive feature of this graphical probabilistic model is that it not only makes the dependency relationships among nodes explicit, but also serves as a computational mechanism for probabilistic inference. We demonstrate our approach by modeling clocked QCA circuits for SR flip-flop, JK flip-flop and RAM designs.
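
    As a toy illustration of the unrolling idea (not the paper's model), the sketch below treats an SR latch as a two-slice DBN: the state Q_t depends on the inputs (S, R) and on Q_{t-1}, and each clock zone is assumed to settle to the ideal logic value with probability 1 - p_err, where p_err is a made-up polarization-error rate. Forward filtering then propagates P(Q=1) through time.

    # Toy two-slice DBN for an SR latch (illustrative assumptions only).

    def ideal_next_q(s, r, q_prev):
        """Ideal SR-latch next state: set, reset, otherwise hold."""
        if s and not r:
            return 1
        if r and not s:
            return 0
        return q_prev                   # hold (S=R=1 also treated as hold)

    def step(p_q1, s, r, p_err=0.05):
        """One time-slice update: P(Q_t=1) from P(Q_{t-1}=1)."""
        p_next = 0.0
        for q_prev, w in ((0, 1.0 - p_q1), (1, p_q1)):
            q = ideal_next_q(s, r, q_prev)
            # settles to q with prob 1 - p_err, flips with prob p_err
            p_next += w * ((1.0 - p_err) if q else p_err)
        return p_next

    p_q1 = 0.5                          # uninformative initial state
    for t, (s, r) in enumerate([(1, 0), (0, 0), (0, 1), (0, 0)]):
        p_q1 = step(p_q1, s, r)
        print(f"t={t}: S={s} R={r}  P(Q=1)={p_q1:.4f}")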

    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for compression onboard spacecraft thanks to its low computational complexity, modest memory requirements and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme suitable for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm achieves lossy and near-lossless compression, as well as any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy of the output rate and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
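
    The paper's algorithm is more elaborate, but the underlying idea of picking one quantizer per image region to meet a rate target at minimum distortion can be sketched as a Lagrangian allocation with a bisection search on the multiplier. Everything below is a generic illustration, not the CCSDS-123 extension itself: the per-block (rate, distortion) tables are assumed to be given, e.g. estimated from a model of the prediction residuals.

    # Generic sketch of per-block quantizer selection under a rate target.

    def allocate(rate, dist, target_rate, iters=50):
        """rate[b][q], dist[b][q]: rate/distortion of block b under
        quantizer q.  Returns one quantizer index per block, minimizing
        total distortion subject to total rate <= target_rate."""
        lo, hi = 0.0, 1e9                   # bracket for the multiplier
        best = None
        for _ in range(iters):
            lam = 0.5 * (lo + hi)
            # for fixed lam, blocks decouple: minimize D + lam * R per block
            choice = [min(range(len(r)), key=lambda q: d[q] + lam * r[q])
                      for r, d in zip(rate, dist)]
            total_r = sum(r[c] for r, c in zip(rate, choice))
            if total_r > target_rate:
                lo = lam                    # over budget: penalize rate harder
            else:
                best, hi = choice, lam      # feasible: spend more bits next
        return best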

    Causal hierarchy within the thalamo-cortical network in spike and wave discharges

    Background: Generalised spike wave (GSW) discharges are the electroencephalographic (EEG) hallmark of absence seizures, clinically characterised by a transitory interruption of ongoing activities and impaired consciousness, occurring during states of reduced awareness. Several theories have been proposed to explain the pathophysiology of GSW discharges and the role of the thalamus and cortex as generators. In this work we extend the existing theories by hypothesising a role for the precuneus, a brain region neglected in previous work on GSW generation but already known to be linked to consciousness and awareness. We analysed fMRI data using dynamic causal modelling (DCM) to investigate the effective connectivity between precuneus, thalamus and prefrontal cortex in patients with GSW discharges. Methodology and Principal Findings: We analysed fMRI data from seven patients affected by Idiopathic Generalized Epilepsy (IGE) with frequent GSW discharges and significant GSW-correlated haemodynamic signal changes in the thalamus, the prefrontal cortex and the precuneus. Using DCM we assessed their effective connectivity, i.e. which region drives another region. Three dynamic causal models were constructed, with GSW modelled as autonomous input to the thalamus (model A), the ventromedial prefrontal cortex (model B), and the precuneus (model C). Bayesian model comparison revealed model C (GSW as autonomous input to the precuneus) to be the best in five patients, while model A prevailed in two cases. At the group level model C dominated, and at the population level the p value of model C was ∼1. Conclusion: Our results provide strong evidence that activity in the precuneus gates GSW discharges in the thalamo-(fronto)cortical network. This study is the first demonstration of a causal link between haemodynamic changes in the precuneus, an index of awareness, and the occurrence of pathological discharges in epilepsy.
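
    Independently of the DCM specifics, the Bayesian model comparison step can be illustrated schematically: given an approximate log-evidence for each model and subject, a fixed-effects group comparison sums the log-evidences across subjects and converts them to posterior model probabilities under a flat model prior. The numbers below are placeholders, not the study's values.

    import math

    # Placeholder log-evidences: rows = subjects, columns = models A, B, C.
    log_evidence = [
        [-310.2, -312.8, -305.1],
        [-287.5, -290.0, -281.9],
        [-299.1, -298.4, -293.6],
    ]

    # Fixed-effects comparison: log-evidences sum across subjects.
    group_le = [sum(subj[m] for subj in log_evidence) for m in range(3)]

    # Posterior model probabilities under a flat prior (stable softmax).
    mx = max(group_le)
    w = [math.exp(le - mx) for le in group_le]
    post = [x / sum(w) for x in w]
    for name, p in zip("ABC", post):
        print(f"P(model {name} | data) = {p:.4f}")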

    Overview of Swallow --- A Scalable 480-core System for Investigating the Performance and Energy Efficiency of Many-core Applications and Operating Systems

    We present Swallow, a scalable many-core architecture, with a current configuration of 480 32-bit processors. Swallow is an open-source architecture, designed from the ground up to deliver scalable increases in usable computational power to allow experimentation with many-core applications and the operating systems that support them. Scalability is enabled by the creation of a tileable system with a low-latency interconnect, featuring an attractive communication-to-computation ratio and the use of a distributed memory configuration. We analyse the energy, computational and communication performance of Swallow. The system provides 240 GIPS, with each core consuming 71--193 mW, dependent on workload. Power consumption per instruction is lower than that of almost all systems of comparable scale. We also show how the use of a distributed operating system (nOS) allows the easy creation of scalable software to exploit Swallow's potential. Finally, we show two case studies: modelling neurons and the overlay of shared memory on a distributed memory system. Comment: An open-source release of the Swallow system design and code will follow, and references to these will be added at a later date.
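
    As a back-of-the-envelope check on the quoted figures (the per-core power is an assumed mid-range value within the stated 71--193 mW, not a number from the paper):

    # Rough energy-per-instruction estimate from the quoted numbers.
    cores = 480
    power_per_core_w = 0.130            # assumed mid-range of 71--193 mW
    total_gips = 240e9                  # aggregate instructions per second

    total_power_w = cores * power_per_core_w        # ~62 W
    energy_per_insn = total_power_w / total_gips    # joules per instruction
    print(f"~{energy_per_insn * 1e9:.2f} nJ per instruction")   # ~0.26 nJ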

    Enhancing Compressed Sensing 4D Photoacoustic Tomography by Simultaneous Motion Estimation

    A crucial limitation of current high-resolution 3D photoacoustic tomography (PAT) devices that employ sequential scanning is their long acquisition time. In previous work, we demonstrated how to use compressed sensing techniques to improve upon this: images with good spatial resolution and contrast can be obtained from suitably sub-sampled PAT data acquired by novel acoustic scanning systems if sparsity-constrained image reconstruction techniques such as total variation regularization are used. Now, we show how a further increase in image quality can be achieved for imaging dynamic processes in living tissue (4D PAT). The key idea is to exploit the additional temporal redundancy of the data by coupling the previously used spatial image reconstruction models with sparsity-constrained motion estimation models. While simulated data from a two-dimensional numerical phantom are used to illustrate the main properties of this recently developed joint image reconstruction and motion estimation framework, measured data from a dynamic experimental phantom are also used to demonstrate its potential for challenging, large-scale, real-world, three-dimensional scenarios. The latter only becomes feasible if a carefully designed combination of tailored optimization schemes is employed, which we describe and examine in more detail.
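
    The abstract does not state the variational model, but a joint reconstruction and motion estimation objective of the kind described typically couples a data-fidelity term, total-variation regularization on each frame, a sparsity penalty on the motion field, and a warping constraint linking consecutive frames. One generic form is sketched below; the symbols (forward operator A, frames u_t, motion fields v_t, warping operator W) are illustrative, not the paper's exact formulation.

    % Schematic joint objective (illustrative form only):
    \min_{u,\,v} \sum_t \Big[ \tfrac{1}{2}\|A u_t - f_t\|_2^2
        + \alpha\,\mathrm{TV}(u_t)
        + \beta\,\|\nabla v_t\|_1
        + \gamma\,\|u_{t+1} - \mathcal{W}(u_t, v_t)\|_1 \Big]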

    Recurrent Fully Convolutional Neural Networks for Multi-slice MRI Cardiac Segmentation

    In cardiac magnetic resonance imaging, fully automatic segmentation of the heart enables precise structural and functional measurements to be taken, e.g. from short-axis MR images of the left ventricle. In this work we propose a recurrent fully convolutional network (RFCN) that learns image representations from the full stack of 2D slices and has the ability to leverage inter-slice spatial dependencies through internal memory units. RFCN combines anatomical detection and segmentation into a single architecture that is trained end-to-end, thus significantly reducing computational time, simplifying the segmentation pipeline, and potentially enabling real-time applications. We report on an investigation of RFCN using two datasets, including the publicly available MICCAI 2009 Challenge dataset. Comparisons have been carried out between fully convolutional networks and deep restricted Boltzmann machines, including a recurrent version that leverages inter-slice spatial correlation. Our studies suggest that RFCN produces state-of-the-art results and can substantially improve the delineation of contours near the apex of the heart. Comment: MICCAI Workshop RAMBO 2016
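
    As a minimal sketch of the recurrent idea (not the authors' architecture), the PyTorch snippet below implements a convolutional GRU cell that carries a hidden state across the slice axis, so each slice's mask prediction can use inter-slice context. Channel counts and tensor shapes are arbitrary toy values.

    # Toy convolutional GRU over short-axis slices (illustrative only).
    import torch
    import torch.nn as nn

    class ConvGRUCell(nn.Module):
        def __init__(self, in_ch, hid_ch, k=3):
            super().__init__()
            p = k // 2
            self.zr = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
            self.hc = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

        def forward(self, x, h):
            # update (z) and reset (r) gates from input + previous state
            z, r = torch.sigmoid(self.zr(torch.cat([x, h], 1))).chunk(2, 1)
            h_tilde = torch.tanh(self.hc(torch.cat([x, r * h], 1)))
            return (1 - z) * h + z * h_tilde

    cell = ConvGRUCell(in_ch=16, hid_ch=16)
    head = nn.Conv2d(16, 1, kernel_size=1)      # per-pixel mask logits
    stack = torch.randn(2, 10, 16, 64, 64)      # (batch, slices, C, H, W)
    h = torch.zeros(2, 16, 64, 64)
    masks = []
    for s in range(stack.size(1)):              # iterate over the slice axis
        h = cell(stack[:, s], h)
        masks.append(torch.sigmoid(head(h)))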