Interactive Visualization of the Largest Radioastronomy Cubes
3D visualization is an important data analysis and knowledge discovery tool; however, interactive visualization of large 3D astronomical datasets poses a challenge for many existing data visualization packages. We present a solution for interactively visualizing larger-than-memory 3D astronomical data cubes by utilizing a heterogeneous cluster of CPUs and GPUs. The system partitions the data volume into smaller sub-volumes that are distributed over the rendering workstations. GPU-based ray-casting volume rendering is performed to generate an image for each sub-volume, and these images are composited into the whole-volume output returned to the user. Datasets including the HI Parkes All Sky Survey (HIPASS - 12 GB) southern sky and the Galactic All Sky Survey (GASS - 26 GB) data cubes were used to demonstrate the framework's performance. The framework can render the GASS data cube with a maximum render time below 0.3 seconds at an output resolution of 1024 x 1024 pixels using 3 rendering workstations and 8 GPUs. The framework will scale to visualize larger datasets, even of terabyte order, if the proper hardware infrastructure is available.
Comment: 15 pages, 12 figures, Accepted New Astronomy July 201
PaPaS: A Portable, Lightweight, and Generic Framework for Parallel Parameter Studies
The current landscape of scientific research is widely based on modeling and simulation, typically with complexity in the simulation's flow of execution and parameterization properties. Execution flows are not necessarily straightforward, since they may require multiple processing tasks and iterations. Furthermore, parameter and performance studies are common approaches used to characterize a simulation, often requiring traversal of a large parameter space. High-performance computers offer practical resources, at the expense of users handling the setup, submission, and management of jobs. This work presents the design of PaPaS, a portable, lightweight, and generic workflow framework for conducting parallel parameter and performance studies. Workflows are defined using parameter files based on a keyword-value pair syntax, sparing the user the overhead of creating complex scripts to manage the workflow. A parameter set consists of any combination of environment variables, files, partial file contents, and command line arguments. PaPaS is being developed in Python 3 with support for distributed parallelization using SSH, batch systems, and C++ MPI. The PaPaS framework runs as user processes and can be used on single-node, multi-node, and multi-tenant computing systems. An example simulation using the BehaviorSpace tool from NetLogo and a matrix multiply using OpenMP are presented as parameter and performance studies, respectively. The results demonstrate that the PaPaS framework offers a simple method for defining and managing parameter studies while increasing resource utilization.
Comment: 8 pages, 6 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22--26, 2018, Pittsburgh, PA, US
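As an illustration of the keyword-value parameter-file idea described above, the following Python sketch parses a hypothetical parameter file and expands every combination of values into a command line. The file syntax, the keys, and the parse_params/expand_commands helpers are invented for illustration and are not PaPaS's actual format or API.

    # Hypothetical keyword-value parameter study: expand every combination
    # of parameter values into a concrete command line for independent runs.
    import itertools
    import shlex

    param_text = """
    program = ./simulate
    grid    = 64, 128, 256
    dt      = 0.01, 0.001
    """

    def parse_params(text):
        """Parse 'key = v1, v2, ...' lines into a dict of value lists."""
        params = {}
        for line in text.strip().splitlines():
            key, _, values = line.partition("=")
            params[key.strip()] = [v.strip() for v in values.split(",")]
        return params

    def expand_commands(params):
        """Yield one command line per point in the parameter space."""
        program = params.pop("program")[0]
        keys = sorted(params)
        for combo in itertools.product(*(params[k] for k in keys)):
            args = " ".join(f"--{k} {shlex.quote(v)}" for k, v in zip(keys, combo))
            yield f"{program} {args}"

    for cmd in expand_commands(parse_params(param_text)):
        print(cmd)  # e.g. ./simulate --dt 0.01 --grid 64

Each generated command is independent of the others, which is what makes this kind of study easy to distribute over SSH, a batch system, or MPI ranks.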
A case study using ECHO (Extraction and Classification of Homogeneous Objects) for analysis of multispectral scanner data
There are no author-identified significant results in this report.
A forestry application simulation of man-machine techniques for analyzing remotely sensed data
The typical steps in the analysis of remotely sensed data are simulated for a forestry application example. The example uses numerically oriented pattern recognition techniques and emphasizes man-machine interaction.
Application of k Means Clustering algorithm for prediction of Students Academic Performance
The ability to monitor the progress of students' academic performance is a critical issue for the academic community of higher learning. We describe a system for analyzing students' results that is based on cluster analysis and uses standard statistical algorithms to arrange score data according to the level of performance. In this paper, we also implement the k-means clustering algorithm for analyzing students' result data. The model was combined with a deterministic model to analyze the results of students of a private institution in Nigeria, providing a benchmark for monitoring the progression of students' academic performance in higher institutions so that academic planners can make effective decisions.
Comment: IEEE format, International Journal of Computer Science and Information Security, IJCSIS January 2010, ISSN 1947 5500, http://sites.google.com/site/ijcsis
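To make the clustering step concrete, the following minimal sketch runs k-means (Lloyd's algorithm) on a hypothetical list of exam scores and groups them into three performance levels. The scores, the choice of k = 3, and the kmeans helper are illustrative assumptions, not the paper's dataset or code.

    # Minimal k-means (Lloyd's algorithm) on made-up student scores;
    # the data and the three performance levels are illustrative only.
    import numpy as np

    def kmeans(scores, k=3, n_iter=100, seed=0):
        """Cluster 1D score values into k performance levels."""
        rng = np.random.default_rng(seed)
        x = np.asarray(scores, dtype=float).reshape(-1, 1)
        centers = x[rng.choice(len(x), k, replace=False)]
        for _ in range(n_iter):
            # Assign each score to its nearest centroid.
            labels = np.argmin(np.abs(x - centers.T), axis=1)
            # Recompute each centroid as the mean of its assigned scores.
            new_centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return labels, centers.ravel()

    scores = [35, 42, 48, 55, 60, 62, 68, 75, 80, 88, 91]  # hypothetical exam scores
    labels, centers = kmeans(scores, k=3)
    print(labels, np.sort(centers))  # per-student cluster and level centroids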
A dubiety-determining based model for database cumulated anomaly intrusion
The concept of Cumulated Anomaly (CA), which describes a new type of database anomaly, is addressed. A typical CA intrusion occurs when a user who is authorized to modify data records under certain constraints deliberately hides his or her intention to change data beyond those constraints across different operations and transactions, so that transactions which each appear authorized and normal accumulate into results that exceed given thresholds. Existing intrusion detection techniques are unable to deal with CAs. This paper proposes a detection model for Cumulated Anomaly, the Dubiety-Determining Model (DDM), based mainly on statistical theory and fuzzy set theory. It measures a dubiety degree, represented by a real number between 0 and 1, for each database transaction, indicating how likely the transaction is to be intrusive. The algorithms used in the DDM are introduced, and a DDM-based software architecture has been designed and implemented for monitoring database transactions. The experimental results show that the DDM method is feasible and effective.
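The abstract does not give the DDM's actual statistical or fuzzy formulas, so the sketch below only illustrates the general idea of a dubiety degree: it accumulates a user's per-transaction changes and maps the running total to a value in [0, 1] with a simple piecewise-linear, fuzzy-style membership function. The limits, the scenario, and the dubiety_degree helper are hypothetical.

    # Illustrative only: maps an accumulated change against assumed soft and
    # hard limits to a dubiety degree in [0, 1].
    def dubiety_degree(accumulated_change, soft_limit, hard_limit):
        """Return 0 below the soft limit, 1 above the hard limit, linear between."""
        if accumulated_change <= soft_limit:
            return 0.0
        if accumulated_change >= hard_limit:
            return 1.0
        return (accumulated_change - soft_limit) / (hard_limit - soft_limit)

    # Hypothetical scenario: each change looks normal on its own, but the
    # running total drifts past the constraint over several transactions.
    history = [40.0, 55.0, 60.0, 45.0]  # per-transaction changes
    total = 0.0
    for change in history:
        total += change
        score = dubiety_degree(total, soft_limit=100.0, hard_limit=250.0)
        print(f"accumulated={total:6.1f}  dubiety={score:.2f}")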