PCA-based lung motion model
Organ motion induced by respiration may cause clinically significant
targeting errors and greatly degrade the effectiveness of conformal
radiotherapy. It is therefore crucial to be able to model respiratory motion
accurately. A recently proposed lung motion model based on principal component
analysis (PCA) has been shown to be promising on a few patients. However, there
is still a need to understand the underlying reasons why it works. In this
paper, we present a deeper and more detailed analysis of the PCA-based lung
motion model. We provide a theoretical justification of the effectiveness of
PCA in modeling lung motion. We also prove that, under certain conditions, the
PCA motion model is equivalent to the 5D motion model, which is based on the
physiology and anatomy of the lung. The modeling power of the PCA model was
tested on clinical data, and the average 3D error was found to be below 1 mm.
Comment: 4 pages, 1 figure. Submitted to the International Conference on the Use of Computers in Radiation Therapy 201
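The core construction behind such a model, extracting a low-dimensional motion basis from sampled displacement fields and representing any phase as the mean field plus a few weighted modes, can be sketched as follows. The array sizes, variable names, and random data are purely illustrative, not the paper's clinical data:

```python
import numpy as np

# Illustrative data: 10 respiratory phases, each a flattened displacement field
# (here random numbers stand in for real deformation vectors).
rng = np.random.default_rng(0)
n_phases, n_voxels = 10, 3000
fields = rng.standard_normal((n_phases, n_voxels))

# Center the snapshots and extract principal components via SVD.
mean_field = fields.mean(axis=0)
centered = fields - mean_field
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Keep the leading k spatial modes; respiratory motion is typically
# reported to be well captured by only a few components.
k = 3
components = Vt[:k]                # (k, n_voxels) spatial modes
coeffs = centered @ components.T   # per-phase coefficients

# Any phase is then approximated as mean + linear combination of the modes.
recon = mean_field + coeffs @ components
```

The low-dimensional coefficients are what a motion model would track or predict; the spatial modes stay fixed per patient.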
Parallelizing the Chambolle Algorithm for Performance-Optimized Mapping on FPGA Devices
The performance and the efficiency of recent computing platforms have been deeply influenced by the widespread adoption of hardware accelerators, such as Graphics Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs), which are often employed to support the tasks of General Purpose Processors (GPPs). One of the main advantages of these accelerators over their sequential counterparts (GPPs) is their ability to perform massively parallel computation. However, in order to exploit this competitive edge, it is necessary to extract the parallelism from the target algorithm to be executed, which is in general a very challenging task.
This concept is demonstrated, for instance, by the poor performance achieved on relevant multimedia algorithms, such as Chambolle, a well-known algorithm employed for optical flow estimation. The implementations of this algorithm that can be found in the state of the art are generally based on GPUs, but barely improve on the performance that can be obtained with a powerful GPP. In this paper, we propose a novel approach to extract the parallelism from computation-intensive multimedia algorithms, which includes an analysis of their dependency schema and an assessment of their data reuse. We then perform a thorough analysis of the Chambolle algorithm, providing a formal proof of its inner data dependencies and locality properties. Then, we exploit the considerations drawn from this analysis by proposing an architectural template that takes advantage of the fine-grained parallelism of FPGA devices. Moreover, since the proposed template can be instantiated with different parameters, we also propose a design metric, the expansion rate, to help the designer estimate the efficiency and performance of the different instances, making it possible to select the right one before the implementation phase. We finally show, by means of experimental results, how the proposed analysis and parallelization approach leads to the design of efficient and high-performance FPGA-based implementations that are orders of magnitude faster than the state-of-the-art ones.
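For reference, a sequential sketch of the computation being parallelized: the projection iteration commonly attributed to Chambolle (2004) for total-variation denoising. The per-pixel gradient and divergence updates, with their purely local data dependencies, are exactly the kind of fine-grained operations an FPGA template can map to hardware. Parameter values here are illustrative:

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Backward differences; the negative adjoint of grad.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_tv(f, lam=0.5, tau=0.125, n_iter=100):
    """Dual projection iterations for TV denoising of image f.
    tau <= 1/8 is the classical step-size bound for convergence."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)
```

Every pixel update reads only a fixed neighborhood of the previous iterate, which is the locality property the paper's dependency analysis formalizes.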
Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm
The primal-dual optimization algorithm developed in Chambolle and Pock (CP),
2011 is applied to various convex optimization problems of interest in computed
tomography (CT) image reconstruction. This algorithm allows for rapid
prototyping of optimization problems for the purpose of designing iterative
image reconstruction algorithms for CT. The primal-dual algorithm is briefly
summarized in the article, and its potential for prototyping is demonstrated by
explicitly deriving CP algorithm instances for many optimization problems
relevant to CT. An example application modeling breast CT with low-intensity
X-ray illumination is presented.
Comment: Resubmitted to Physics in Medicine and Biology. The text has been modified according to referee comments, and typos in the equations have been corrected.
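To illustrate the prototyping style described above, here is a hedged sketch of the generic Chambolle-Pock iteration for min_x F(Kx) + G(x), specialized to a toy least-squares objective (F = ½‖·−b‖², G = 0, K = A). The CT instances in the paper use different choices of F, G, and K; this is only the iteration skeleton:

```python
import numpy as np

def chambolle_pock_lsq(A, b, n_iter=500):
    """CP iterations for min_x 0.5*||Ax - b||^2, i.e. F = 0.5*||. - b||^2, G = 0.
    Step sizes satisfy sigma * tau * ||A||^2 = 1, the standard condition."""
    L = np.linalg.norm(A, 2)        # spectral norm of K = A
    sigma = tau = 1.0 / L
    theta = 1.0
    x = np.zeros(A.shape[1]); x_bar = x.copy()
    y = np.zeros(A.shape[0])
    for _ in range(n_iter):
        # Dual step: prox of sigma*F*, with F* (y) = 0.5*||y||^2 + <y, b>.
        y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
        # Primal step: prox of tau*G is the identity since G = 0.
        x_new = x - tau * (A.T @ y)
        # Over-relaxation of the primal variable.
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x
```

Swapping in a different data term or regularizer only changes the two prox steps, which is what makes the algorithm convenient for rapid prototyping.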
Human polyomavirus 6 and 7 are associated with pruritic and dyskeratotic dermatoses
ABSTRACT
Background: Human Polyomavirus 6 (HPyV6) and Human Polyomavirus 7 (HPyV7) are shed chronically from human skin. HPyV7, but not HPyV6, has been linked to a pruritic skin eruption in immunosuppressed patients.
Objective: We determined whether biopsies showing a characteristic pattern of dyskeratosis and parakeratosis might be associated with polyomavirus infection.
Methods: We screened biopsies showing "peacock plumage" histology by PCR for human polyomaviruses. Cases positive for HPyV 6 or 7 were then analyzed by immunohistochemistry, electron microscopy (EM), immunofluorescence, quantitative PCR, and complete sequencing, including unbiased, next generation sequencing (NGS).
Results: We identified three additional cases of HPyV6 or 7 skin infections. Expression of T antigen and viral capsid was abundant in lesional skin. Dual immunofluorescence staining experiments confirmed that HPyV7 primarily infects keratinocytes. High viral loads in lesional skin compared to normal skin and the identification of intact virions by both EM and NGS support a role for active viral infections in these skin diseases.
Limitation: This was a small case-series of archived materials.
Conclusion: We have found that HPyV6 and HPyV7 are associated with rare, pruritic skin eruptions with a unique histologic pattern, and we describe this entity as "HPyV6- and HPyV7-associated pruritic and dyskeratotic dermatosis" (H6PD and H7PD).
A combined first and second order variational approach for image reconstruction
In this paper we study a variational problem in the space of functions of
bounded Hessian. Our model constitutes a straightforward higher-order extension
of the well-known ROF functional (total variation minimisation), to which we add
a non-smooth second order regulariser. It combines convex functions of the
total variation and the total variation of the first derivatives. In what
follows, we prove existence and uniqueness of minimisers of the combined model
and present the numerical solution of the corresponding discretised problem by
employing the split Bregman method. The paper is furnished with applications of
our model to image denoising, deblurring as well as image inpainting. The
obtained numerical results are compared with results obtained from total
generalised variation (TGV), infimal convolution and Euler's elastica, three
other state of the art higher-order models. The numerical discussion confirms
that the proposed higher-order model competes with models of its kind in
avoiding the creation of undesirable artifacts and blocky-like structures in
the reconstructed images -- a known disadvantage of the ROF model -- while
being simple and efficiently solvable numerically.
Comment: 34 pages, 89 figures
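The general shape of such a combined model can be written down explicitly. The following is a hedged sketch using standard notation (f: data, u: reconstruction, α, β: regularisation weights); the paper's exact functional may differ in the convex functions applied to each term:

```latex
% First-order term: total variation of u (the ROF regulariser).
% Second-order term: total variation of the gradient, i.e. the bounded-Hessian term.
\min_{u}\ \frac{1}{2}\|u - f\|_{L^2}^2
  + \alpha \int_{\Omega} |Du|
  + \beta  \int_{\Omega} |D\nabla u|
```

Setting β = 0 recovers the ROF model, while the second-order term penalises jumps in the gradient and thereby suppresses the staircasing artifacts mentioned above.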
3D Fluid Flow Estimation with Integrated Particle Reconstruction
The standard approach to densely reconstruct the motion in a volume of fluid
is to inject high-contrast tracer particles and record their motion with
multiple high-speed cameras. Almost all existing work processes the acquired
multi-view video in two separate steps, utilizing either a pure Eulerian or
pure Lagrangian approach. Eulerian methods perform a voxel-based reconstruction
of particles per time step, followed by 3D motion estimation, with some form of
dense matching between the precomputed voxel grids from different time steps.
In this sequential procedure, the first step cannot use temporal consistency
considerations to support the reconstruction, while the second step has no
access to the original, high-resolution image data. Alternatively, Lagrangian
methods reconstruct an explicit, sparse set of particles and track the
individual particles over time. Physical constraints can only be incorporated
in a post-processing step when interpolating the particle tracks to a dense
motion field. We show, for the first time, how to jointly reconstruct both the
individual tracer particles and a dense 3D fluid motion field from the image
data, using an integrated energy minimization. Our hybrid Lagrangian/Eulerian
model reconstructs individual particles, and at the same time recovers a dense
3D motion field in the entire domain. Making particles explicit greatly reduces
the memory consumption and allows one to use the high-resolution input images
for matching, whereas the dense motion field makes it possible to include
physical a priori constraints and to account for the incompressibility and
viscosity of the fluid. The method exhibits greatly (~70%) improved results
over our recently published baseline, which uses two separate steps for 3D
reconstruction and motion estimation. Our results with only two time steps are
comparable to those of state-of-the-art tracking-based methods that require
much longer sequences.
Comment: To appear in International Journal of Computer Vision (IJCV)
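One common way an incompressibility prior like the one mentioned above is imposed is as a soft quadratic penalty on the divergence of the estimated flow. The sketch below (illustrative only, not the paper's exact energy term) shows such a penalty on a voxel grid:

```python
import numpy as np

def divergence_penalty(vx, vy, vz, h=1.0):
    """Quadratic penalty 0.5 * sum(div(v)^2) over a voxel grid, a common soft
    constraint for (near-)incompressible flow. vx, vy, vz are the three
    velocity components; h is the grid spacing."""
    dvx = np.gradient(vx, h, axis=0)
    dvy = np.gradient(vy, h, axis=1)
    dvz = np.gradient(vz, h, axis=2)
    div = dvx + dvy + dvz
    return 0.5 * float(np.sum(div**2))
```

A divergence-free (solenoidal) field incurs zero penalty, so adding this term to the energy steers the optimizer toward physically plausible motion without hard constraints.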
Didactic Sequence for Teaching Exponential Function
This paper presents a methodological proposal for the teaching of the exponential function, resulting from the application of a didactic sequence involving the exponential function, in which evidence of learning and the consolidation and application of mathematical concepts in problem solving were identified and analyzed. The Didactic Engineering of Michèle Artigue (1988) was used as the research methodology. As theoretical contributions that guided and enabled the development of the research, we chose the use of Mathematical Investigation in the classroom; the Didactic Sequence in the conception of Zabala (1999); the Articulated Units of Conceptual Reconstruction (UARCs) proposed by Cabral (2017); and assumptions of Vygotsky's theory. A didactic sequence composed of five UARCs was elaborated to work on the exponential function, with a view to minimizing the difficulties naturally imposed by the content to be explained. Microgenetic analysis of the verbal interactions between teacher and students was used to analyze the results of the application. The results show that the students participating in the experiment gave evidence of learning, recorded during the process, and came to a good understanding of the concepts and properties related to the topic, in addition to performing well in the activities, facts that corroborate the potential of the didactic sequence proposed herein.