Live GPU Forensics: The Process of Recovering Video Frames from NVIDIA GPU
The purpose of this research is to apply a graphics processing unit (GPU) forensics method to recover video artifacts from an NVIDIA GPU. The tested videos are 512 x 512 (video 1) and 800 x 600 (video 2) in resolution; both use the MPEG-4 video codec. The VLC media player was used for playback in the experiment. A special program was developed using OpenCL to 1) recover patterns, i.e., frames consisting of pixel values, and 2) dump data from the GPU's global memory. The dumped data representing the video frames were located in a few simple steps, and the recovery process was successful. For the 512 x 512 video, the frames were only partially recovered, but they show enough information for a forensic investigator to determine what was viewed last. The research indicates that it is harder, but not impossible, to obtain a viewable frame from higher-resolution video.
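As a rough illustration of the dumping step, the sketch below allocates a large uninitialized OpenCL buffer and reads it back to the host; on drivers that do not zero freshly allocated memory, such a buffer can expose residual data left in GPU global memory by earlier processes. This is a minimal sketch of the general technique only; the buffer size, file name, and scanning logic are illustrative assumptions, not the authors' actual program.

```python
# Minimal sketch (assumptions: PyOpenCL installed, an NVIDIA OpenCL GPU
# present, and a driver that does not zero freshly allocated buffers).
import numpy as np
import pyopencl as cl

platform = cl.get_platforms()[0]
device = platform.get_devices(cl.device_type.GPU)[0]
ctx = cl.Context([device])
queue = cl.CommandQueue(ctx)

DUMP_BYTES = 256 * 1024 * 1024  # illustrative dump size, not from the paper

# Allocate an uninitialized buffer; residual contents of global memory may
# survive in it, which is the basis of this recovery approach.
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=DUMP_BYTES)
dump = np.empty(DUMP_BYTES, dtype=np.uint8)
cl.enqueue_copy(queue, dump, buf)
queue.finish()

dump.tofile("gpu_global_memory.dump")  # hypothetical output file name

# A frame of the 512 x 512 video would occupy 512*512*3 bytes in a packed
# 24-bit RGB layout; an investigator could slide a window of that size over
# the dump and render candidate regions as images for visual inspection.
frame_bytes = 512 * 512 * 3
print(f"dump holds up to {DUMP_BYTES // frame_bytes} candidate frames")
```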
Managing Controlled Unclassified Information in Research Institutions
In order to operate in a regulated world, researchers need to ensure
compliance with an ever-evolving landscape of information security
regulations and best practices. This work explains the concept of Controlled
Unclassified Information (CUI) and the challenges it brings to research
institutions. A survey of user perceptions showed that most researchers and
IT administrators lack a good understanding of CUI and how it relates to
other regulations, such as HIPAA, ITAR, GLBA, and FERPA. A managed research
ecosystem is introduced in this work. The workflow of this efficient and
cost-effective framework is elaborated to demonstrate how controlled
research data are processed to comply with one of the highest levels of
cybersecurity in a campus environment. Issues beyond the framework itself
are also discussed. The framework serves as a reference model for other
institutions to support CUI research. The awareness and training program
developed from this work will be shared with other institutions to build a
bigger CUI ecosystem.
Hyper-STTN: Social Group-aware Spatial-Temporal Transformer Network for Human Trajectory Prediction with Hypergraph Reasoning
Predicting crowd intents and trajectories is crucial in various real-world
applications, including service robots and autonomous vehicles.
Understanding environmental dynamics is challenging, not only due to the
complexities of modeling pair-wise spatial and temporal interactions but
also the diverse influence of group-wise interactions. To decode the
comprehensive pair-wise and group-wise interactions in crowded scenarios, we
introduce Hyper-STTN, a Hypergraph-based Spatial-Temporal Transformer
Network for crowd trajectory prediction. In Hyper-STTN, crowded group-wise
correlations are constructed using a set of multi-scale hypergraphs with
varying group sizes, captured through random-walk probability-based
hypergraph spectral convolution. Additionally, a spatial-temporal
transformer is adapted to capture pedestrians' pair-wise latent interactions
in the spatial and temporal dimensions. These heterogeneous group-wise and
pair-wise interactions are then fused and aligned through a multimodal
transformer network. Hyper-STTN outperforms other state-of-the-art baselines
and ablation models on five real-world pedestrian motion datasets.
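For readers unfamiliar with hypergraph spectral convolution, the sketch below shows a generic HGNN-style layer, X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Theta, where H is a node-by-hyperedge incidence matrix. This is a minimal sketch of the standard operation only; Hyper-STTN's random-walk probability-based construction of H and its multi-scale hypergraphs are not reproduced here, and all names and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    """Generic hypergraph spectral convolution (HGNN-style):
    X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Theta"""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X, H, edge_w=None):
        # X: (N, in_dim) node features; H: (N, E) incidence matrix
        N, E = H.shape
        if edge_w is None:
            edge_w = torch.ones(E, device=H.device)   # hyperedge weights W
        Dv = (H * edge_w).sum(dim=1).clamp(min=1e-6)  # node degrees
        De = H.sum(dim=0).clamp(min=1e-6)             # hyperedge degrees
        msg = Dv.pow(-0.5)[:, None] * H               # D_v^{-1/2} H
        # D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}: nodes aggregate over the
        # hyperedges (groups) they share, normalized by both degree matrices.
        agg = msg @ torch.diag(edge_w / De) @ msg.T
        return agg @ self.theta(X)

# Example: 6 pedestrians, 3 groups (hyperedges), 16-d features.
# H[i, j] = 1 if pedestrian i belongs to group j.
H = torch.zeros(6, 3)
H[[0, 1, 2], 0] = 1.0
H[[2, 3], 1] = 1.0
H[[3, 4, 5], 2] = 1.0
layer = HypergraphConv(16, 32)
out = layer(torch.randn(6, 16), H)   # (6, 32) group-aware node features
```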
Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition
Human state recognition is a critical topic with pervasive and important
applications in human-machine systems. Multi-modal fusion, the combination
of metrics from multiple data sources, has been shown to be a sound method
for improving recognition performance. However, while promising results have
been reported by recent multi-modal-based models, they generally fail to
leverage the sophisticated fusion strategies that would model sufficient
cross-modal interactions when producing the fusion representation; instead,
current methods rely on lengthy and inconsistent data preprocessing and feature
crafting. To address this limitation, we propose an end-to-end multi-modal
transformer framework for multi-modal human state recognition called
Husformer. Specifically, we propose to use cross-modal transformers, which
encourage one modality to reinforce itself by directly attending to latent
relevance revealed in other modalities, to fuse different modalities while
ensuring sufficient awareness of the introduced cross-modal interactions.
Subsequently, we utilize a self-attention transformer to further prioritize
contextual information in the fusion representation. Together, these two
attention mechanisms enable effective and adaptive adjustment to noise and
interruptions in multi-modal signals, both during the fusion process and
with respect to high-level features. Extensive experiments on two human
emotion corpora
(DEAP and WESAD) and two cognitive workload datasets (MOCAS and CogLoad)
demonstrate that in human state recognition, our Husformer outperforms both
state-of-the-art multi-modal baselines and the use of a single modality by a
large margin, especially when dealing with raw multi-modal signals. We also
conducted an ablation study to show the benefits of each component in
Husformer.
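The cross-modal attention the abstract describes can be sketched as follows: a target modality's features form the queries, while a source modality supplies the keys and values, so the target is reinforced by the latent relevance it finds in the source. This is a generic block in the style of multimodal transformers (e.g., MulT, on which such frameworks build); the layer sizes, names, and example shapes below are illustrative assumptions, not the released Husformer code.

```python
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    """One modality (target) attends to another (source): queries come
    from the target; keys and values come from the source modality."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.norm_ff = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, target, source):
        # target: (B, T_t, dim), source: (B, T_s, dim)
        q, kv = self.norm_q(target), self.norm_kv(source)
        attended, _ = self.attn(q, kv, kv)
        x = target + attended                 # target reinforced by source
        return x + self.ff(self.norm_ff(x))   # position-wise refinement

# Example: an EEG stream attends to a heart-rate stream (shapes illustrative).
eeg = torch.randn(8, 100, 64)   # (batch, time, dim)
hr  = torch.randn(8, 40, 64)
fused_eeg = CrossModalBlock(64)(eeg, hr)   # (8, 100, 64)
```

In a full model, one such block would run for every ordered pair of modalities, and a subsequent self-attention transformer, as the abstract describes, would prioritize contextual information in the concatenated fusion representation.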