STV-based Video Feature Processing for Action Recognition
In comparison to still-image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made in image processing over the last decade, with successful applications in face matching and object recognition, video-based event detection remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and the performance gain of the devised approach stem from a coefficient-factor-boosted 3D region intersection and matching mechanism developed in this research. This paper also reports an investigation into techniques for efficient STV data filtering, reducing the number of voxels (volumetric pixels) that must be processed in each operational cycle of the implemented system. The encouraging features and operational performance improvements registered in the experiments are discussed at the end.
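The paper's exact coefficient-boosted matching formulation is not given in the abstract; the following is only a generic sketch of the underlying idea, scoring the overlap of two binary spatio-temporal volumes (x, y, t) with a Jaccard-style region intersection scaled by a boosting coefficient. The function name and the `coeff` parameter are assumptions for illustration.

```python
import numpy as np

def weighted_region_intersection(stv_a, stv_b, coeff=1.0):
    """Score the overlap of two binary spatio-temporal volumes (x, y, t).

    The score is the intersection voxel count divided by the union count
    (a 3D Jaccard index), scaled by a boosting coefficient.
    """
    inter = np.logical_and(stv_a, stv_b).sum()
    union = np.logical_or(stv_a, stv_b).sum()
    if union == 0:
        return 0.0
    return coeff * inter / union

# Two toy 4x4x4 action volumes that overlap in a third of their union.
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[:2] = True        # 32 voxels
b[1:3] = True       # 32 voxels, 16 shared with a
print(weighted_region_intersection(a, b))  # 16 / 48 ≈ 0.333
```

Pre-filtering the volumes (e.g. keeping only voxels that change between frames) shrinks the voxel count each such comparison has to touch, which is the motivation for the STV data filtering the paper investigates.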
INFORMATION TECHNOLOGY FOR NEXT-GENERATION OF SURGICAL ENVIRONMENTS
Minimally invasive surgeries (MIS) are fundamentally constrained by image quality, access to the operative field, and the visualization environment on which the surgeon relies for real-time information. Although less invasive access benefits the patient, it also leads to more challenging procedures, which require better skills and training. Endoscopic surgeries rely heavily on 2D interfaces, introducing additional challenges due to the loss of depth perception, the lack of 3-dimensional imaging, and the reduction of degrees of freedom. By using state-of-the-art technology within a distributed computational architecture, it is possible to incorporate multiple sensors, hybrid display devices, and 3D visualization algorithms within a flexible surgical environment. Such environments can assist the surgeon with valuable information that goes far beyond what is currently available. In this thesis, we will discuss how 3D visualization and reconstruction, stereo displays, high-resolution display devices, and tracking techniques are key elements in the next generation of surgical environments.
Ubiquitous volume rendering in the web platform
The main thesis hypothesis is that ubiquitous volume rendering can be achieved using WebGL. The thesis enumerates the challenges that must be met to achieve that goal. The results allow web content developers to integrate interactive volume rendering within standard HTML5 web pages. Content developers only need to declare the X3D nodes that provide the rendering characteristics they desire. In contrast to systems that require specific GPU programs, the presented architecture automatically creates the GPU code required by the WebGL graphics pipeline. This code is generated directly from the X3D nodes declared in the virtual scene; therefore, content developers do not need to know about the GPU. The thesis extends previous research on web-compatible volume data structures for WebGL, ray-casting hybrid surface and volumetric rendering, progressive volume rendering, and some specific problems related to the visualization of medical datasets. Finally, the thesis contributes to the X3D standard with proposals to extend and improve the volume rendering component. The proposals are at an advanced stage towards their acceptance by the Web3D Consortium.
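The thesis's actual node set and shader templates are not given in the abstract; the sketch below only illustrates the declared-nodes-to-GPU-code idea, assembling a fragment shader from X3D volume-rendering style nodes instead of hand-writing GPU programs. The GLSL snippets and the `generate_fragment_shader` helper are hypothetical; the node names (`OpacityMapVolumeStyle`, `EdgeEnhancementVolumeStyle`) are real X3D volume rendering component nodes.

```python
# Map each declared X3D rendering-style node to a (hypothetical) GLSL snippet.
SNIPPETS = {
    "OpacityMapVolumeStyle": "color = texture2D(transferFn, vec2(density, 0.5));",
    "EdgeEnhancementVolumeStyle": "color.rgb *= edgeFactor(gradient);",
}

def generate_fragment_shader(declared_nodes):
    """Assemble WebGL fragment-shader source from the declared scene nodes."""
    body = "\n    ".join(SNIPPETS[node] for node in declared_nodes)
    return ("void main() {\n"
            "    vec4 color = vec4(0.0);\n"
            f"    {body}\n"
            "    gl_FragColor = color;\n"
            "}")

print(generate_fragment_shader(["OpacityMapVolumeStyle"]))
```

The point of such generation is the one the abstract makes: the content developer declares rendering characteristics in X3D, and the GPU code is derived from that declaration rather than written by hand.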
07291 Abstracts Collection -- Scientific Visualization
From 15.07. to 20.07.07, the Dagstuhl Seminar 07291 "Scientific Visualization" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available
Segmentation Assisted Object Distinction For Direct Volume Rendering
Ray casting is a direct volume rendering technique for visualizing 3D arrays
of sampled data. It has vital applications in medical and biological imaging.
Nevertheless, it is inherently prone to cluttered classification results: it
suffers from overlapping transfer function values and lacks a sufficiently
powerful voxel parsing mechanism for object distinction. In this work, we
propose an image-processing-based approach to enhancing the ray casting
technique's object distinction process. The ray casting architecture is
modified to accommodate object membership information generated by a
K-means-based hybrid segmentation algorithm. Object membership information is
assigned to cubical vertices in the form of ID tags. An intra-object buffer is
devised and coordinated with an inter-object buffer, allowing the otherwise
global rendering module to embed multiple local (secondary) rendering
processes. A local rendering process adds two advantageous aspects to the
global rendering module. First, depth-oriented manipulation of interpolation
and composition operations, which allows the interpolation method to be chosen
based on the number of objects present at various volumetric depths, improves
the level of detail (LOD) for desired objects, and reduces the number of
required mathematical computations. Second, localization of transfer function
design, which enables the use of binary (non-overlapping) transfer functions
for color and opacity assignment. A set of image processing techniques is
employed in the design of the K-means-based hybrid segmentation algorithm.
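The buffer coordination and interpolation details above are specific to the paper, but the core mechanism — selecting a binary, per-object transfer function via a voxel's segmentation ID tag during front-to-back compositing — can be sketched generically. The `TRANSFER` table, function name, and nearest-neighbour sampling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical per-object transfer functions: object ID -> (grey value, opacity).
# Binary (non-overlapping) assignment, as the abstract describes.
TRANSFER = {0: (0.0, 0.0),   # background: fully transparent
            1: (0.9, 0.4),   # object 1
            2: (0.3, 0.8)}   # object 2

def cast_ray(labels, start, direction, n_steps):
    """Front-to-back compositing along one ray through a labelled volume.

    `labels` holds the segmentation ID tag of each voxel, so the transfer
    function is selected per object rather than from raw intensity alone.
    """
    color, alpha = 0.0, 0.0
    pos = np.asarray(start, dtype=float)
    step = np.asarray(direction, dtype=float)
    for _ in range(n_steps):
        idx = tuple(np.round(pos).astype(int))
        if all(0 <= i < s for i, s in zip(idx, labels.shape)):
            c, a = TRANSFER[labels[idx]]
            color += (1.0 - alpha) * a * c   # front-to-back "over" operator
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                 # early ray termination
                break
        pos += step
    return color, alpha

labels = np.zeros((8, 8, 8), dtype=int)
labels[2:5, :, :] = 1                        # a slab of object 1
print(cast_ray(labels, (0, 4, 4), (1, 0, 0), 8))  # ≈ (0.706, 0.784)
```

Because the IDs partition the volume, the transfer functions never overlap, which is exactly the clutter the segmentation-assisted approach is meant to remove.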
Femtosecond-laser-irradiation-induced structural organization and crystallinity of Bi2WO6
Controlling the structural organization and crystallinity of functional oxides is key to enhancing
their performance in technological applications. In this work, we report a strong enhancement of
the structural organization and crystallinity of Bi2WO6 samples synthesized by a microwave-assisted
hydrothermal method after exposing them to femtosecond laser irradiation. X-ray diffraction, UV-vis
and Raman spectroscopies, photoluminescence emission, energy-dispersive spectroscopy, field
emission scanning electron microscopy, and transmission electron microscopy were employed to
characterize the as-synthesized samples. To complement and rationalize the experimental results,
first-principles calculations were employed to study the effects of femtosecond laser irradiation.
Structural and electronic effects induced by femtosecond laser irradiation enhance the long-range
crystallinity while decreasing the free carrier density, as takes place in the amorphous and liquid
states. These effects can be considered a clear-cut case of surface-enhanced Raman scattering.
The IceCube Neutrino Observatory V: Future Developments
Proposed enhancements of the IceCube observatory. Papers submitted by the IceCube Collaboration to the 32nd International Cosmic Ray Conference, Beijing, 2011.
A Computationally Efficient Hybrid Neural Network Architecture for Porous Media: Integrating CNNs and GNNs for Improved Permeability Prediction
Subsurface fluid flow, essential in various natural and engineered processes,
is largely governed by a rock's permeability, which describes its ability to
allow fluid passage. While convolutional neural networks (CNNs) have been
employed to estimate permeability from high-resolution 3D rock images, our
novel visualization technology reveals that they occasionally miss higher-level
characteristics, such as nuanced connectivity and flow paths, within porous
media. To address this, we propose a novel fusion model that integrates a CNN
with a graph neural network (GNN), capitalizing on graph representations
derived from a pore network model to capture intricate relational data between
pores. The permeability prediction accuracy of the fusion model is superior to
that of the standalone CNN, while its total parameter count is nearly two
orders of magnitude lower. This approach not only heralds a new frontier in
the research of digital rock property predictions, but also demonstrates
remarkable improvements in prediction accuracy and efficiency, emphasizing the
transformative potential of hybrid neural network architectures in subsurface
fluid flow research.
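The abstract does not specify the fusion architecture, so the following is only a minimal NumPy sketch of the wiring it describes: an image branch standing in for the CNN, one round of mean-neighbour message passing over the pore network standing in for the GNN, and a concatenation head producing a scalar permeability estimate. All function names, feature sizes, and the toy adjacency matrix are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_branch(volume, w):
    """Stand-in for the CNN: a global average of the voxel grid scaled by a
    weight vector, yielding a fixed-size image feature. (A real model would
    use 3D convolutions; this keeps the fusion wiring visible.)"""
    return volume.mean() * w                      # shape (d,)

def gnn_branch(node_feats, adj, w):
    """One round of mean-neighbour message passing over the pore network,
    followed by global mean pooling into a fixed-size graph feature."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    h = (adj @ node_feats) / deg                  # aggregate neighbour features
    return h.mean(axis=0) @ w                     # pool nodes -> shape (d,)

def fused_permeability(volume, node_feats, adj, params):
    """Concatenate both branch features and map them to a scalar estimate."""
    w_img, w_graph, w_head = params
    feat = np.concatenate([cnn_branch(volume, w_img),
                           gnn_branch(node_feats, adj, w_graph)])
    return float(feat @ w_head)

d = 4
params = (rng.normal(size=d),                     # image-branch weights
          rng.normal(size=(3, d)),                # graph-branch weights
          rng.normal(size=2 * d))                 # fusion head weights
volume = rng.random((8, 8, 8))                    # toy 3D rock image
node_feats = rng.random((5, 3))                   # 5 pores, 3 descriptors each
adj = np.array([[0, 1, 0, 0, 1], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0],
                [0, 0, 1, 0, 1], [1, 0, 0, 1, 0]], float)  # throat connectivity
print(fused_permeability(volume, node_feats, adj, params))
```

The parameter-count advantage the abstract reports comes from this division of labour: the graph branch encodes connectivity explicitly, so the image branch does not have to learn it from raw voxels.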
ImageJ2: ImageJ for the next generation of scientific image data
ImageJ is an image analysis program extensively used in the biological
sciences and beyond. Due to its ease of use, recordable macro language, and
extensible plug-in architecture, ImageJ enjoys contributions from
non-programmers, amateur programmers, and professional developers alike.
Enabling such a diversity of contributors has resulted in a large community
that spans the biological and physical sciences. However, a rapidly growing
user base, diverging plugin suites, and technical limitations have revealed a
clear need for a concerted software engineering effort to support emerging
imaging paradigms, to ensure the software's ability to handle the requirements
of modern science. Due to these new and emerging challenges in scientific
imaging, ImageJ is at a critical development crossroads.
We present ImageJ2, a total redesign of ImageJ offering a host of new
functionality. It separates concerns, fully decoupling the data model from the
user interface. It emphasizes integration with external applications to
maximize interoperability. Its robust new plugin framework allows everything
from image formats, to scripting languages, to visualization to be extended by
the community. The redesigned data model supports arbitrarily large,
N-dimensional datasets, which are increasingly common in modern image
acquisition. Despite the scope of these changes, backwards compatibility is
maintained such that this new functionality can be seamlessly integrated with
the classic ImageJ interface, allowing users and developers to migrate to these
new methods at their own pace. ImageJ2 provides a framework engineered for
flexibility, intended to support these requirements as well as accommodate
future needs
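ImageJ2 itself is written in Java and its real SciJava plugin API is not described in the abstract; the toy registry below only illustrates the general idea of an extensible plugin framework, where the community adds image-format readers by registration rather than by modifying the core. Every name here is hypothetical.

```python
# Hypothetical plugin registry: file extension -> reader function.
REGISTRY = {}

def register_format(extension):
    """Decorator that associates a reader plugin with a file extension."""
    def wrap(reader):
        REGISTRY[extension] = reader
        return reader
    return wrap

@register_format(".pgm")
def read_pgm(path):
    """A community-contributed reader; the core never has to know about it."""
    return f"pixels from {path}"

def open_image(path):
    """Core dispatch: find a registered plugin that handles this file."""
    for ext, reader in REGISTRY.items():
        if path.endswith(ext):
            return reader(path)
    raise ValueError(f"no plugin for {path}")

print(open_image("cells.pgm"))  # pixels from cells.pgm
```

The same registration pattern extends beyond formats — scripting languages and visualization backends can plug into analogous registries, which is the kind of decoupling the redesign describes.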