Technical Note: Enhancing Soft Tissue Contrast and Radiation-Induced Image Changes with Dual-Energy CT for Radiation Therapy
Purpose
The purpose of this work is to investigate the use of low-energy monoenergetic decompositions obtained from dual-energy CT (DECT) to enhance image contrast and the detection of radiation-induced changes in CT textures in pancreatic cancer.
Methods
The DECT data acquired for 10 consecutive pancreatic cancer patients during routine nongated CT-guided radiation therapy (RT) using an in-room CT (Definition AS Open, Siemens Healthcare, Malvern, PA) were analyzed. With a sequential DE protocol, the scanner rapidly performs two helical acquisitions, the first at a tube voltage of 80 kVp and the second at 140 kVp. Virtual monoenergetic images across a range of energies from 40 to 140 keV were reconstructed using an image-based material decomposition. Intravenous (IV) bolus-free contrast enhancement in the patients' pancreatic tumors was measured across a spectrum of monoenergies. For treatment response assessment, changes in CT histogram features (including mean CT number (MCTN), entropy, and kurtosis) in the pancreatic tumors were measured during treatment. The results from the monoenergetic decompositions were compared with those obtained from the standard 120 kVp CT protocol for the same subjects.
Results
The monoenergetic decompositions for the 10 patients confirmed the expected enhancement of soft tissue contrast as the energy is decreased. The changes in the selected CT histogram features in the pancreas during RT delivery were amplified in the low-energy monoenergetic decompositions compared with the changes measured from the 120 kVp CTs. For the patients studied, the average reduction in the MCTN in the pancreas from the first to the last (the 28th) treatment fraction was 4.09 HU for the standard 120 kVp protocol and 11.15 HU for the 40 keV monoenergetic decomposition.
Conclusions
Low-energy monoenergetic decompositions from DECT substantially increase soft tissue contrast and amplify radiation-induced changes in CT histogram textures during RT delivery for pancreatic cancer. Quantitative DECT may therefore assist in the detection of early RT response.
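The texture measures tracked in this study are standard first-order histogram features, so they are straightforward to reproduce. As a rough illustration only (not the authors' code; the ROI arrays, bin count, and HU values below are hypothetical), a NumPy/SciPy sketch of MCTN, histogram entropy, and kurtosis over a tumor region of interest:

```python
import numpy as np
from scipy import stats

def histogram_features(roi_hu, bins=64):
    """First-order CT histogram features over a tumor ROI.

    roi_hu: 1-D array of CT numbers (HU) for voxels inside the ROI.
    Returns mean CT number (MCTN), histogram entropy, and kurtosis.
    """
    voxels = np.asarray(roi_hu, dtype=float)
    mctn = voxels.mean()
    counts, _ = np.histogram(voxels, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                          # drop empty bins before the log
    entropy = -np.sum(p * np.log2(p))     # Shannon entropy of the histogram
    kurt = stats.kurtosis(voxels)         # excess (Fisher) kurtosis
    return mctn, entropy, kurt

# Hypothetical ROI samples: fraction 1 vs. fraction 28, mimicking an MCTN drop.
rng = np.random.default_rng(0)
fx1 = rng.normal(45.0, 12.0, 5000)
fx28 = rng.normal(34.0, 12.0, 5000)
print(histogram_features(fx1))
print(histogram_features(fx28))
```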
Refining SCJ Mission Specifications into Parallel Handler Designs
Safety-Critical Java (SCJ) is a recent technology that restricts the
execution and memory model of Java in such a way that applications can be
statically analysed and certified for their real-time properties and safe use
of memory. Our interest is in the development of comprehensive and sound
techniques for the formal specification, refinement, design, and implementation
of SCJ programs, using a correct-by-construction approach. As part of this
work, we present here an account of laws and patterns that are of general use
for the refinement of SCJ mission specifications into designs of parallel
handlers used in the SCJ programming paradigm. Our notation is a combination of
languages from the Circus family, supporting state-rich reactive models with
the addition of class objects and real-time properties. Our work is a first
step to elicit laws of programming for SCJ and fits into a refinement strategy
that we have developed previously to derive SCJ programs.
Comment: In Proceedings Refine 2013, arXiv:1305.563
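SCJ itself is Java, and the contribution here is formal refinement laws rather than code; still, the mission/handler structure the laws target can be sketched. The following Python analogue is illustrative only: the class names, periods, and actions are invented, and real SCJ handlers are released by the runtime scheduler with scoped memory, not by threads and sleeps.

```python
import threading
import time

class PeriodicHandler(threading.Thread):
    """Loose Python analogue of an SCJ periodic event handler."""
    def __init__(self, name, period_s, action, releases=3):
        super().__init__(name=name)
        self.period_s = period_s
        self.action = action
        self.releases = releases

    def run(self):
        # In real SCJ the runtime releases the handler each period;
        # here a plain loop stands in for the scheduler.
        for _ in range(self.releases):
            self.action()
            time.sleep(self.period_s)

class Mission:
    """A mission installs its handlers, runs them in parallel, then ends."""
    def __init__(self, handlers):
        self.handlers = handlers

    def execute(self):
        for h in self.handlers:
            h.start()
        for h in self.handlers:
            h.join()

Mission([
    PeriodicHandler("sensor", 0.1, lambda: print("read sensor")),
    PeriodicHandler("actuator", 0.2, lambda: print("drive actuator")),
]).execute()
```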
Joint Detection and Tracking in Videos with Identification Features
Recent works have shown that combining object detection and tracking tasks, in the case of video data, results in higher performance for both tasks, but they impose a high frame rate as a strict requirement for good performance. This assumption is often violated in real-world applications, where models run on embedded devices, often at only a few frames per second.
Videos at low frame rates suffer from large object displacements. Here, re-identification features can help match detections across large displacements, but current joint detection and re-identification formulations degrade detector performance, as the two tasks conflict. In real-world applications, keeping separate detector and re-id models is often not feasible, as both memory and runtime effectively double.
Towards robust long-term tracking applicable to reduced-computational-power
devices, we propose the first joint optimization of detection, tracking and
re-identification features for videos. Notably, our joint optimization
maintains the detector performance, a typical multi-task challenge. At
inference time, we leverage detections for tracking (tracking-by-detection)
when the objects are visible, detectable and slowly moving in the image. We
leverage instead re-identification features to match objects which disappeared
(e.g. due to occlusion) for several frames or were not tracked due to fast
motion (or low frame-rate videos). Our proposed method reaches the state of the art on MOT; it ranks 1st among online trackers in the UA-DETRAC'18 tracking challenge and 3rd overall.
Comment: Accepted at Image and Vision Computing Journal
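The inference-time rule described above (geometry for visible, slowly moving objects; re-id embeddings for reappearing or fast-moving ones) can be sketched as a simple association function. This is an illustrative reading of the strategy, not the paper's implementation; the thresholds and the greedy single-track matching are assumptions.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def associate(track, det_boxes, det_embeds, iou_thr=0.5, cos_thr=0.6):
    """Match one track to a detection: geometry first, re-id as fallback.

    track: dict with 'box' (last known box) and 'embed' (appearance vector).
    Returns the index of the matched detection, or None.
    """
    if not det_boxes:
        return None
    # Tracking-by-detection: spatial overlap suffices for slow motion.
    ious = [iou(track["box"], b) for b in det_boxes]
    best = int(np.argmax(ious))
    if ious[best] >= iou_thr:
        return best
    # Fallback for occlusions, fast motion, or low frame rates:
    # appearance similarity of re-identification embeddings.
    sims = [cosine(track["embed"], e) for e in det_embeds]
    best = int(np.argmax(sims))
    return best if sims[best] >= cos_thr else None

# A far-displaced detection fails the IoU test but matches by appearance.
track = {"box": (0, 0, 10, 10), "embed": np.array([1.0, 0.0])}
print(associate(track, [(40, 40, 50, 50)], [np.array([0.9, 0.1])]))
```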
On the future of astrostatistics: statistical foundations and statistical practice
This paper summarizes a presentation for a panel discussion on "The Future of
Astrostatistics" held at the Statistical Challenges in Modern Astronomy V
conference at Pennsylvania State University in June 2011. I argue that the
emerging needs of astrostatistics may both motivate and benefit from
fundamental developments in statistics. I highlight some recent work within
statistics on fundamental topics relevant to astrostatistical practice,
including the Bayesian/frequentist debate (and ideas for a synthesis),
multilevel models, and multiple testing. As an important direction for future
work in statistics, I emphasize that astronomers need a statistical framework
that explicitly supports unfolding chains of discovery, with acquisition,
cataloging, and modeling of data not seen as isolated tasks, but rather as
parts of an ongoing, integrated sequence of analyses, with information and
uncertainty propagating forward and backward through the chain. A prototypical
example is surveying of astronomical populations, where source detection,
demographic modeling, and the design of survey instruments and strategies all
interact.
Comment: 8 pp, 2 figures. To appear in "Statistical Challenges in Modern Astronomy V" (Lecture Notes in Statistics, Vol. 209), ed. Eric D. Feigelson and G. Jogesh Babu; publication planned for Sep 2012; see http://www.springer.com/statistics/book/978-1-4614-3519-
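To make the "multilevel models" pointer concrete, here is a minimal sketch (my illustration, not from the paper) of the normal-normal hierarchy: each group-level estimate is shrunk toward a pooled mean by an amount set by its measurement noise, which is the basic mechanism that lets information propagate across a cataloged population. The pooled mean is plugged in empirically and the population scale tau is fixed by assumption.

```python
import numpy as np

def shrinkage_estimates(y, sigma, tau):
    """Posterior means in the normal-normal multilevel model:
    theta_i ~ N(mu, tau^2), y_i | theta_i ~ N(theta_i, sigma_i^2).

    Noisier measurements are pulled harder toward the pooled mean mu,
    which is estimated here by precision weighting (empirical Bayes).
    """
    y = np.asarray(y, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    w = 1.0 / sigma**2
    mu = np.sum(w * y) / np.sum(w)          # pooled (population) mean
    b = sigma**2 / (sigma**2 + tau**2)      # per-source shrinkage factor
    return b * mu + (1.0 - b) * y

# Five noisy source fluxes; the estimate for the noisiest source (sigma=2.0)
# moves the most toward the population mean.
print(shrinkage_estimates([1.2, 3.4, 2.1, 5.0, 0.7],
                          sigma=[0.5, 1.0, 0.3, 2.0, 0.8], tau=1.0))
```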
P4CEP: Towards In-Network Complex Event Processing
In-network computing using programmable networking hardware is a strong trend
in networking that promises to reduce latency and consumption of server
resources through offloading to network elements (programmable switches and
smart NICs). In particular, the data plane programming language P4 together
with powerful P4 networking hardware has spawned projects offloading services
into the network, e.g., consensus services or caching services. In this paper,
we present a novel case for in-network computing, namely, Complex Event
Processing (CEP). CEP processes streams of basic events, e.g., stemming from
networked sensors, into meaningful complex events. Traditionally, CEP has been performed on servers or overlay networks. However, we argue in this paper that CEP is a good candidate for in-network computing: performing it along the communication path avoids detouring streams to distant servers, minimizing communication latency while also exploiting the processing capabilities of novel
networking hardware. We show that it is feasible to express CEP operations in
P4 and also present a tool to compile CEP operations, formulated in our P4CEP
rule specification language, to P4 code. Moreover, we identify challenges and
problems that we encountered, pointing to future research directions for implementing full-fledged in-network CEP systems.
Comment: 6 pages. Author's version
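To ground what "CEP operations" means here: a core operator is pattern matching over event streams, e.g., "an event of kind B follows one of kind A within a time window". The sketch below shows that semantics in plain Python for readability; it is not P4, and the event kinds, window, and rule are invented (P4CEP's point is that rules of this shape can be compiled to P4 and run in the data plane).

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    ts: float  # timestamp in seconds

def sequence_rule(stream, first="TEMP_HIGH", second="SMOKE", window_s=5.0):
    """Minimal CEP sequence operator: emit a complex event whenever an
    event of kind `second` follows one of kind `first` within `window_s`."""
    pending = deque()  # open partial matches awaiting the second event
    for ev in stream:
        while pending and ev.ts - pending[0].ts > window_s:
            pending.popleft()              # expire stale partial matches
        if ev.kind == first:
            pending.append(ev)
        elif ev.kind == second and pending:
            start = pending.popleft()
            yield ("FIRE_ALERT", start.ts, ev.ts)

stream = [Event("TEMP_HIGH", 0.0), Event("NOISE", 1.0),
          Event("SMOKE", 2.5), Event("SMOKE", 9.0)]
print(list(sequence_rule(stream)))   # -> [('FIRE_ALERT', 0.0, 2.5)]
```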
Accelerated High-Resolution Photoacoustic Tomography via Compressed Sensing
Current 3D photoacoustic tomography (PAT) systems offer either high image
quality or high frame rates but are not able to deliver high spatial and
temporal resolution simultaneously, which limits their ability to image dynamic
processes in living tissue. A particular example is the planar Fabry-Perot (FP)
scanner, which yields high-resolution images but takes several minutes to
sequentially map the photoacoustic field on the sensor plane, point-by-point.
However, as the spatio-temporal complexity of many absorbing tissue structures
is rather low, the data recorded in such a conventional, regularly sampled
fashion is often highly redundant. We demonstrate that combining variational
image reconstruction methods using spatial sparsity constraints with the
development of novel PAT acquisition systems capable of sub-sampling the
acoustic wave field can dramatically increase the acquisition speed while
maintaining a good spatial resolution: First, we describe and model two general
spatial sub-sampling schemes. Then, we discuss how to implement them using the
FP scanner and demonstrate the potential of these novel compressed sensing PAT
devices through simulated data from a realistic numerical phantom and through
measured data from a dynamic experimental phantom as well as from in-vivo
experiments. Our results show that images with good spatial resolution and
contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction methods that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total
variation regularization enhanced by Bregman iterations. These novel
reconstruction strategies offer new opportunities to dramatically increase the
acquisition speed of PAT scanners that employ point-by-point sequential
scanning as well as reducing the channel count of parallelized schemes that use
detector arrays.
Comment: submitted to "Physics in Medicine and Biology"
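Stated generically, the reconstruction idea is to recover an image from randomly sub-sampled point measurements by minimizing a data-fidelity term plus a total-variation penalty. Below is a minimal NumPy sketch of that variational formulation on a toy phantom, using plain gradient descent on a smoothed TV term; it is a deliberate simplification, not the authors' method, which uses the full photoacoustic forward operator and Bregman-enhanced TV.

```python
import numpy as np

def tv_grad(x, eps=1e-3):
    """Gradient of a smoothed isotropic total-variation penalty."""
    dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences, 0 at edge
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag                 # normalized gradient field
    div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
    return -div                                 # adjoint of forward differencing

def reconstruct(mask, y, lam=0.05, step=0.2, iters=400):
    """Minimize ||x[mask] - y||^2 + lam * TV_smooth(x) by gradient descent."""
    x = np.zeros(mask.shape)
    for _ in range(iters):
        grad = np.zeros_like(x)
        grad[mask] = 2.0 * (x[mask] - y)        # data-fidelity gradient
        grad += lam * tv_grad(x)
        x -= step * grad
    return x

# Piecewise-constant phantom, "scanned" at 25% of the sensor points.
rng = np.random.default_rng(1)
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0
mask = rng.random(truth.shape) < 0.25
y = truth[mask] + 0.01 * rng.normal(size=mask.sum())
x_hat = reconstruct(mask, y)
print("mean abs error:", np.abs(x_hat - truth).mean())
```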
- …