DPP-PMRF: Rethinking Optimization for a Probabilistic Graphical Model Using Data-Parallel Primitives
We present a new parallel algorithm for probabilistic graphical model
optimization. The algorithm relies on data-parallel primitives (DPPs), which
provide portable performance across hardware architectures. We evaluate results on
CPUs and GPUs for an image segmentation problem. Compared to a serial baseline,
we observe runtime speedups of up to 13X (CPU) and 44X (GPU). We also compare
our performance to a reference, OpenMP-based algorithm, and find speedups of up
to 7X (CPU).
Comment: LDAV 2018, October 2018
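The data-parallel-primitive style the abstract refers to can be illustrated with a minimal sketch (a hypothetical NumPy stand-in, not the paper's code): the computation is expressed only through map, reduce, and scan primitives, so a backend can parallelize each primitive on CPU or GPU.

```python
import numpy as np

# Minimal illustration of the DPP style: express the computation
# entirely through map / reduce / scan primitives, so a backend
# (CPU threads, GPU) can parallelize each primitive independently.
# This NumPy version is a hypothetical stand-in, not the paper's code.

def dpp_map(f, xs):
    return np.vectorize(f)(xs)          # element-wise "map" primitive

def dpp_reduce(xs):
    return np.add.reduce(xs)            # tree-style "reduce" primitive

def dpp_scan(xs):
    return np.cumsum(xs)                # inclusive "scan" primitive

# Example: sum of squares via map + reduce
xs = np.arange(5)                       # [0, 1, 2, 3, 4]
squares = dpp_map(lambda x: x * x, xs)  # [0, 1, 4, 9, 16]
total = dpp_reduce(squares)             # 30
```

The point of the idiom is that only the primitives need backend-specific implementations; code composed from them is portable by construction.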
Computing and data processing
The applications of computers and data processing to astronomy are discussed. Among the topics covered are the emerging national information infrastructure, workstations and supercomputers, supertelescopes, digital astronomy, astrophysics in a numerical laboratory, community software, archiving of ground-based observations, dynamical simulations of complex systems, plasma astrophysics, and the remote control of fourth dimension supercomputers.
Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992
Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. An SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer-class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.
LEGaTO: first steps towards energy-efficient toolset for heterogeneous computing
LEGaTO is a three-year EU H2020 project which started in December 2017. The LEGaTO project will leverage task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs and dataflow engines. The aim is to attain one order of magnitude energy savings from the edge to the converged cloud/HPC.
Peer Reviewed. Postprint (author's final draft).
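The task-based programming model the project builds on can be sketched generically (an illustrative Python stand-in using the standard library, not LEGaTO's actual toolset): work is expressed as tasks, and a runtime scheduler decides where each task runs.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of a task-based model: work is expressed as
# tasks, and a runtime scheduler maps them onto available hardware.
# This generic Python version is a stand-in, not LEGaTO's toolset.

def stage_a(x):
    return x + 1          # first pipeline stage

def stage_b(x):
    return x * 2          # stage that depends on stage_a's output

with ThreadPoolExecutor() as pool:
    # Submit independent tasks; the runtime decides where they run.
    futures = [pool.submit(stage_a, i) for i in range(4)]
    # Dependent tasks consume the results of the first stage.
    results = [pool.submit(stage_b, f.result()).result() for f in futures]
```

In a heterogeneous setting of the kind the project targets, the scheduler would additionally choose among CPU, GPU, or FPGA implementations of each task, which is where the energy savings are sought.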
Three-dimensional scanning as a means of archiving sculptures
Thesis (M. Tech. Design technology) -- Central University of Technology, Free State, 2011
This dissertation outlines a procedural scanning process using the portable ZCorporation ZScanner® 700 and provides an overview of the developments surrounding 3D scanning technologies; specifically their application for archiving Cultural Heritage sites and projects. The procedural scanning process is structured around the identification of 3D data recording variables applicable to the digital archiving of an art museum’s collection of sculptures. The outlining of a procedural 3D scanning environment supports the developing technology of 3D digital archiving in view of artefact preservation and interactive digital accessibility. Presented in this paper are several case studies that record 3D scanning variables such as texture, scale, surface detail, light and data conversion applicable to varied sculptural surfaces and form. Emphasis is placed on the procedural documentation and the anomalies associated with the physical object, equipment used, and the scanning environment.
In support of the above, the Cultural Heritage projects analysed show that portable 3D scanning could provide digital longevity and access to previously inaccessible arenas for a diverse range of digital data archiving infrastructures. Also analysed are the development of 3D data acquisition via scanning, CAD modelling, and 2D-to-3D data file conversion technologies, as well as the aesthetic effect and standards of digital archiving in terms of the artwork-viewer relationship and international practices or criteria for 3D digitizing. These projects indicate the significant use of optical 3D scanning techniques on renowned historical artefacts, emphasizing their importance, safety and effectiveness. The aim of this research is to establish that the innovation and future implications of 3D scanning could be instrumental to future technological advancement in an interdisciplinary capacity, furthering data capture and processing in various Cultural Heritage diagnostic applications.
Improving Big Data Visual Analytics with Interactive Virtual Reality
For decades, the growth and volume of digital data collection has made it
challenging to digest large volumes of information and extract underlying
structure. Coined 'Big Data', massive amounts of information have quite often
been gathered inconsistently (e.g., from many sources, of various forms, at
different rates, etc.). These factors impede the practices of not only
processing data, but also analyzing and displaying it in an efficient manner to
the user. Many efforts have been made in the data mining and visual
analytics community to create effective ways to further improve analysis and
achieve the knowledge desired for better understanding. Our approach for
improved big data visual analytics is two-fold, focusing on both visualization
and interaction. Given geo-tagged information, we are exploring the benefits of
visualizing datasets in the original geospatial domain by utilizing a virtual
reality platform. After running proven analytics on the data, we intend to
represent the information in a more realistic 3D setting, where analysts can
achieve an enhanced situational awareness and rely on familiar perceptions to
draw in-depth conclusions on the dataset. In addition, developing a
human-computer interface that responds to natural user actions and inputs
creates a more intuitive environment. Tasks can be performed to manipulate the
dataset and allow users to dive deeper upon request, adhering to desired
demands and intentions. Due to the volume and popularity of social media, we
developed a 3D tool visualizing Twitter on MIT's campus for analysis. Utilizing
emerging technologies of today to create a fully immersive tool that promotes
visualization and interaction can help ease the process of understanding and
representing big data.Comment: 6 pages, 8 figures, 2015 IEEE High Performance Extreme Computing
Conference (HPEC '15); corrected typo
Edge AI-Based Vein Detector for Efficient Venipuncture in the Antecubital Fossa
Assessing the condition and visibility of veins is a crucial step before
obtaining intravenous access in the antecubital fossa, which is a common
procedure to draw blood or administer intravenous therapies (IV therapies).
Even though medical practitioners are highly skilled at intravenous
cannulation, they usually struggle to perform the procedure in patients with
low visible veins due to fluid retention, age, overweight, dark skin tone, or
diabetes. Recently, several investigations proposed combining Near Infrared
(NIR) imaging and deep learning (DL) techniques for forearm vein segmentation.
Although they have demonstrated compelling results, their use has been rather
limited owing to the portability and precision requirements to perform
venipuncture. In this paper, we aim to contribute to bridging this gap using
three strategies. First, we introduce a new NIR-based forearm vein segmentation
dataset of 2,016 labelled images collected from 1,008 subjects with low visible
veins. Second, we propose a modified U-Net architecture that locates veins
specifically in the antecubital fossa region of the examined patient. Finally,
a compressed version of the proposed architecture was deployed inside a
bespoke, portable vein finder device after testing four common embedded
microcomputers and four common quantization modalities. Experimental results
showed that the model compressed with Dynamic Range Quantization and deployed
on a Raspberry Pi 4B board produced the best balance between execution time and
precision, running at 5.14 FPS with an Intersection over Union (IoU) of 0.957.
These results show promising performance inside a
resource-restricted low-cost device.
Comment: Accepted for publication in MICAI 2023, Part II, LNCS 1439
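The dynamic range quantization scheme the abstract reports as the best trade-off can be sketched at the level of its core arithmetic (a hypothetical NumPy illustration of the general technique, not the authors' deployment code): float32 weights are stored as int8 with a per-tensor scale and dequantized on the fly at inference time.

```python
import numpy as np

# Hypothetical sketch of dynamic range quantization: float32 weights
# are mapped to int8 plus a per-tensor scale, shrinking the model
# roughly 4x; values are dequantized back to float at inference time.
# Illustrative only -- not the authors' deployment code.

def quantize_dynamic_range(weights):
    scale = np.abs(weights).max() / 127.0    # map max magnitude to int8 range
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.0, 1.0], dtype=np.float32)
q, scale = quantize_dynamic_range(w)
w_hat = dequantize(q, scale)   # approximates w within half a scale step
```

The rounding error per weight is bounded by half the scale, which is why accuracy metrics such as IoU typically drop only slightly after this kind of compression.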