NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications, including segmentation, regression, image generation and
representation learning. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission.
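For readers who want a concrete picture of such a modular pipeline, the following is a minimal sketch in plain TensorFlow/Keras, not the NiftyNet API: the function names, shapes and toy data loader are hypothetical, and it only illustrates how data loading, augmentation, a small 3D network and a Dice loss fit together in a training loop.

```python
# Illustrative sketch only: NOT the NiftyNet API. It shows the kind of modular
# pipeline described above (data loading, augmentation, network, loss) using
# plain TensorFlow/Keras building blocks. All names and shapes are hypothetical.
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    # Soft Dice loss, a common segmentation loss for medical images.
    intersection = tf.reduce_sum(y_true * y_pred)
    denom = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

def augment(volume, label):
    # Toy augmentation: random flip of a 3D volume and its label map.
    flip = tf.random.uniform([]) > 0.5
    volume = tf.cond(flip, lambda: tf.reverse(volume, axis=[0]), lambda: volume)
    label = tf.cond(flip, lambda: tf.reverse(label, axis=[0]), lambda: label)
    return volume, label

def tiny_segmentation_net():
    # Minimal 3D CNN standing in for a real architecture (e.g. a U-Net variant).
    return tf.keras.Sequential([
        tf.keras.layers.Conv3D(8, 3, padding="same", activation="relu",
                               input_shape=(32, 32, 32, 1)),
        tf.keras.layers.Conv3D(1, 1, padding="same", activation="sigmoid"),
    ])

# Synthetic stand-in for a medical image loader (e.g. NIfTI volumes on disk).
volumes = tf.random.normal([4, 32, 32, 32, 1])
labels = tf.cast(volumes > 0.5, tf.float32)
dataset = (tf.data.Dataset.from_tensor_slices((volumes, labels))
           .map(augment).batch(2))

model = tiny_segmentation_net()
optimizer = tf.keras.optimizers.Adam(1e-3)
for x, y in dataset:
    with tf.GradientTape() as tape:
        loss = dice_loss(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```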
Parameter Synthesis for Markov Models
Markov chain analysis is a key technique in reliability engineering. A
practical obstacle is that all probabilities in Markov models need to be known.
However, system quantities such as failure rates or packet loss ratios are often not, or only partially, known. This motivates considering
parametric models with transitions labeled with functions over parameters.
Whereas traditional Markov chain analysis evaluates a reliability metric for a
single, fixed set of probabilities, analysing parametric Markov models focuses
on synthesising parameter values that establish a given reliability or
performance specification φ. Examples are: what component failure rates ensure that the probability of a system breakdown stays below 0.00000001?, or which failure rates maximise reliability? This paper presents various analysis algorithms for parametric Markov chains and Markov decision processes. We focus on three problems: (a) do all parameter values within a given region satisfy φ?, (b) which regions satisfy φ and which ones do not?, and (c)
an approximate version of (b) focusing on covering a large fraction of all
possible parameter values. We give a detailed account of the various
algorithms, present a software tool realising these techniques, and report on
an extensive experimental evaluation on benchmarks that span a wide range of
applications.
Comment: 38 pages.
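As a concrete illustration of the problem setting (not of the paper's algorithms or tool), the sketch below builds a toy two-parameter Markov chain, uses sympy to derive the reachability probability as a rational function of the parameters, and then checks a region against a specification φ by naive grid sampling; the chain, the region and the threshold are invented for the example.

```python
# Minimal sketch: symbolic analysis of a tiny parametric Markov chain.
# States: s0 -> s1 with prob p, s0 -> fail with 1-p; s1 -> target with prob q,
# s1 -> s0 with 1-q. We solve for the probability of eventually reaching the
# target as a rational function of (p, q), then do a naive grid check of the
# spec "reachability >= 0.5" over a parameter region. Real parameter-synthesis
# tools give sound region verdicts; this grid check is only indicative.
import numpy as np
import sympy as sp

p, q = sp.symbols("p q", positive=True)
x0, x1 = sp.symbols("x0 x1")  # reachability probabilities from s0 and s1

# Standard linear equations for reachability in a Markov chain.
solution = sp.solve(
    [sp.Eq(x0, p * x1), sp.Eq(x1, q + (1 - q) * x0)],
    [x0, x1], dict=True)[0]
reach = sp.simplify(solution[x0])
print("Pr[reach target] =", reach)   # p*q / (1 - p*(1 - q))

# Problem (a), approximately: do all (p, q) in [0.8, 0.95]^2 satisfy reach >= 0.5?
f = sp.lambdify((p, q), reach, "numpy")
grid = np.linspace(0.8, 0.95, 50)
P, Q = np.meshgrid(grid, grid)
vals = f(P, Q)
print("min over region grid:", vals.min())
print("spec 'reach >= 0.5' holds on all sampled points:", bool(vals.min() >= 0.5))
```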
Structuring Decisions Under Deep Uncertainty
Innovative research on decision making under ‘deep uncertainty’ is underway in applied fields such as engineering and operational research, largely outside the view of normative theorists grounded in decision theory. Applied methods and tools for decision support under deep uncertainty go beyond standard decision theory in the attention that they give to the structuring of decisions. Decision structuring is an important part of a broader philosophy of managing uncertainty in decision making, and normative decision theorists can both learn from, and contribute to, the growing deep uncertainty decision support literature.
Advancing Aircraft Operations in a Net-Centric Environment with the Incorporation of Increasingly Autonomous Systems and Human Teaming
NextGen has begun the modernization of the nation's air transportation system, with goals to improve system safety, increase operational efficiency and capacity, and provide enhanced predictability, resilience and robustness. With these improvements, NextGen is poised to handle significant increases in air traffic operations, more than twice the number recorded in 2016, by 2025. NextGen is evolving toward collaborative decision-making across many agents, including automation, by use of a Net-Centric architecture, which in itself creates a very complex environment in which the navigation and operation of aircraft are to take place. An intricate environment such as this, coupled with the expected upsurge in air traffic operations, raises concern about the ability of the human-agent to both fly and manage aircraft within it. Therefore, it is both necessary and practical to begin introducing increasingly autonomous systems into the cockpit that will act independently to assist the human-agent in achieving the overall goal of NextGen. However, the straightforward technological development and implementation of intelligent machines in the cockpit is only part of what is necessary to maintain, at minimum, or improve human-agent functionality, as desired, while operating in NextGen. The full integration of Increasingly Autonomous Systems (IAS) within the cockpit can only be accomplished when the IAS works in concert with the human, building trust between the two and thereby establishing a team atmosphere. Imperative to cockpit implementation is ensuring the proper performance of the IAS, by the development team and by the human-agent with which it will be paired, when given a specific piloting, navigation, or observational task. Described in this paper are the steps taken at NASA Langley Research Center during the second and third phases of the development of an IAS, the Traffic Data Manager (TDM); its verification and validation by human-agents; and the foundational development of Human Autonomy Teaming (HAT) between the two.
Parameter Synthesis in Markov Models: A Gentle Survey
This paper surveys the analysis of parametric Markov models whose transitions
are labelled with functions over a finite set of parameters. These models are
symbolic representations of uncountably many concrete probabilistic models,
each obtained by instantiating the parameters. We consider various analysis
problems for a given logical specification φ: do all parameter instantiations within a given region of parameter values satisfy φ?, which instantiations satisfy φ and which ones do not?, and how can all
such instantiations be characterised, either exactly or approximately? We
address theoretical complexity results and describe the main ideas underlying state-of-the-art algorithms, which have made an impressive leap over the last decade and now enable the fully automated analysis of models with millions of states and thousands of parameters.
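To illustrate the approximate-coverage problem in particular, here is a naive sketch (again invented, and not one of the surveyed algorithms) that recursively partitions the parameter space of the same toy chain as in the sketch above into boxes and classifies each box against the specification; the corner-sampling classifier is only a heuristic, whereas the surveyed tools perform sound region verification (e.g. via parameter lifting).

```python
# Sketch of the approximate-coverage idea: recursively split the parameter
# space into boxes and classify each box against the spec "reachability >= 0.5"
# for the toy chain with Pr(p, q) = p*q / (1 - p*(1 - q)). Classifying by box
# corners is only a heuristic stand-in for sound region verification.
def reach(p, q):
    return p * q / (1 - p * (1 - q))

def classify(box, spec=lambda v: v >= 0.5):
    (plo, phi), (qlo, qhi) = box
    corners = [spec(reach(p, q)) for p in (plo, phi) for q in (qlo, qhi)]
    if all(corners):
        return "accept"
    if not any(corners):
        return "reject"
    return "split"

def cover(box, depth=6):
    # Returns a list of (box, verdict); undecided boxes are split until depth runs out.
    verdict = classify(box)
    if verdict != "split" or depth == 0:
        return [(box, verdict if verdict != "split" else "unknown")]
    (plo, phi), (qlo, qhi) = box
    pm, qm = (plo + phi) / 2, (qlo + qhi) / 2
    out = []
    for pb in ((plo, pm), (pm, phi)):
        for qb in ((qlo, qm), (qm, qhi)):
            out += cover((pb, qb), depth - 1)
    return out

regions = cover(((0.01, 0.99), (0.01, 0.99)))
area = lambda b: (b[0][1] - b[0][0]) * (b[1][1] - b[1][0])
total = sum(area(b) for b, _ in regions)
decided = sum(area(b) for b, v in regions if v != "unknown")
print(f"{len(regions)} boxes, {100 * decided / total:.1f}% of the region classified")
```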
Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure
This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view made of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Other important aspects when defining a new architecture are the user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced at the same time as the user consultation carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, onto which they can load the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
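Purely as a hypothetical illustration (none of these names or structures come from the deliverable), one way to picture a slice as data, i.e. a virtual network whose nodes and links map onto partitions of the physical infrastructure and onto virtual machine instances acting as software routers or end nodes, is the following sketch:

```python
# Hypothetical data model for the "slice" concept described above: a virtual
# network whose nodes and links map onto partitions of physical resources.
# All names, hosts and link identifiers are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class VirtualNode:
    name: str
    physical_host: str           # physical node hosting this virtual instance
    vm_image: str | None = None  # processing resource: VM acting as software router / end node

@dataclass
class VirtualLink:
    endpoints: tuple[str, str]   # names of the two virtual nodes it connects
    physical_path: list[str]     # physical links the virtual link is mapped onto
    bandwidth_mbps: int

@dataclass
class Slice:
    owner: str
    nodes: list[VirtualNode] = field(default_factory=list)
    links: list[VirtualLink] = field(default_factory=list)

    def physical_footprint(self) -> set[str]:
        # The physical resources this slice is carved out of (its isolation boundary).
        hosts = {n.physical_host for n in self.nodes}
        paths = {seg for l in self.links for seg in l.physical_path}
        return hosts | paths

# A researcher's slice: two software routers joined by one virtual link.
s = Slice(owner="researcher-a",
          nodes=[VirtualNode("vr1", "pop-madrid-1", vm_image="debian-router"),
                 VirtualNode("vr2", "pop-athens-2", vm_image="debian-router")],
          links=[VirtualLink(("vr1", "vr2"), ["madrid-geneva", "geneva-athens"], 100)])
print(s.physical_footprint())
```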
QuickCast: Fast and Efficient Inter-Datacenter Transfers using Forwarding Tree Cohorts
Large inter-datacenter transfers are crucial for cloud service efficiency and
are increasingly used by organizations that have dedicated wide area networks
between datacenters. A recent work uses multicast forwarding trees to reduce
the bandwidth needs and improve completion times of point-to-multipoint
transfers. Using a single forwarding tree per transfer, however, leads to poor
performance because the slowest receiver dictates the completion time for all
receivers. Using multiple forwarding trees per transfer alleviates this
concern: the average receiver could finish early; however, if done naively, bandwidth usage would also increase, and it is a priori unclear how best to partition receivers, how to construct the multiple trees, and how to determine
the rate and schedule of flows on these trees. This paper presents QuickCast, a
first solution to these problems. Using simulations on real-world network
topologies, we see that QuickCast can speed up the average receiver's
completion time by as much as while only using more
bandwidth; further, the completion time for all receivers also improves by as
much as faster at high loads.
Comment: [Extended Version] Accepted for presentation in IEEE INFOCOM 2018, Honolulu, HI.
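The core idea of partitioning receivers and giving each cohort its own forwarding tree can be sketched as follows; this is not QuickCast's algorithm (which also optimises rates and schedules and bounds the extra bandwidth used), and the topology, receiver rates and grouping rule are invented for the example.

```python
# Naive sketch of partitioning receivers into cohorts and building one
# forwarding tree per cohort, so fast receivers no longer wait for the slowest
# one. NOT QuickCast's algorithm; topology, rates and grouping are invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("src", "a"), ("a", "r1"), ("a", "r2"),
                  ("src", "b"), ("b", "r3"), ("b", "r4")])
receiver_rate_gbps = {"r1": 10, "r2": 10, "r3": 1, "r4": 1}  # bottleneck rate per receiver

def cohorts_by_rate(rates, threshold_gbps=5):
    # Toy grouping rule: one cohort of fast receivers, one of slow receivers.
    fast = [r for r, v in rates.items() if v >= threshold_gbps]
    slow = [r for r, v in rates.items() if v < threshold_gbps]
    return [c for c in (fast, slow) if c]

def forwarding_tree(graph, source, receivers):
    # Union of shortest paths from the source to each receiver in the cohort
    # (a cheap stand-in for a proper forwarding-tree computation).
    tree = nx.Graph()
    for r in receivers:
        path = nx.shortest_path(graph, source, r)
        tree.add_edges_from(zip(path, path[1:]))
    return tree

for cohort in cohorts_by_rate(receiver_rate_gbps):
    tree = forwarding_tree(G, "src", cohort)
    # Each cohort proceeds at the rate of its own slowest member, not the
    # transfer-wide slowest receiver.
    rate = min(receiver_rate_gbps[r] for r in cohort)
    print(cohort, "tree edges:", sorted(tree.edges()), "rate:", rate, "Gbps")
```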