Spread spectrum-based video watermarking algorithms for copyright protection
Digital technologies have undergone an unprecedented expansion in recent years. The consumer can
now benefit from hardware and software which was considered state-of-the-art several years
ago. The advantages offered by the digital technologies are major but the same digital
technology opens the door for unlimited piracy. Copying an analogue VCR tape was certainly
possible and relatively easy, in spite of various forms of protection, but due to the analogue
environment, the subsequent copies had an inherent loss in quality. This was a natural way of
limiting the multiple copying of video material. With digital technology, this barrier
disappears: it becomes possible to make as many copies as desired, without any loss in quality
whatsoever. Digital watermarking is one of the best available tools for fighting this threat.
The aim of the present work was to develop a digital watermarking system compliant with the
recommendations drawn by the EBU, for video broadcast monitoring. Since the watermark
can be inserted in either the spatial domain or a transform domain, this aspect was investigated and
led to the conclusion that the wavelet transform is one of the best solutions available. Since
watermarking is not an easy task, especially considering robustness under various attacks,
several techniques were employed to increase the capacity and robustness of the system:
spread-spectrum and modulation techniques to cast the watermark, powerful error correction
to protect the mark, human visual models to insert a robust mark and to ensure its invisibility.
The combination of these methods led to a major improvement, yet the system was still not
robust to several important geometrical attacks. In order to achieve this last milestone, the
system uses two distinct watermarks: a spatial domain reference watermark and the main
watermark embedded in the wavelet domain. By using this reference watermark and techniques
specific to image registration, the system is able to determine the parameters of the attack and
invert it. Once the attack has been inverted, the main watermark is recovered. The final result is a
high-capacity, blind DWT-based video watermarking system, robust to a wide range of attacks.
BBC Research & Development
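The spread-spectrum casting and blind correlation detection described above can be sketched in a few lines. This is a minimal illustration using additive pseudo-noise (PN) embedding in a generic coefficient block; the function names, per-bit PN seeding, and embedding strength `alpha` are assumptions for illustration, not the thesis's actual implementation:

```python
import numpy as np

def embed(coeffs, bits, alpha=0.2):
    """Cast each message bit over the whole coefficient block with a
    bipolar PN sequence, scaled by the embedding strength alpha."""
    marked = coeffs.astype(float).copy()
    for i, b in enumerate(bits):
        pn = np.random.default_rng(i).choice([-1.0, 1.0], size=coeffs.size)
        marked += alpha * (1.0 if b else -1.0) * pn.reshape(coeffs.shape)
    return marked

def detect(marked, n_bits):
    """Blind detection: regenerate each PN sequence and recover the bit
    from the sign of its correlation with the marked coefficients."""
    flat = marked.ravel()
    return [int(flat @ np.random.default_rng(i).choice([-1.0, 1.0], size=flat.size) > 0)
            for i in range(n_bits)]
```

Because detection only needs the PN seeds, the original unmarked coefficients are not required, which is what makes such a scheme blind.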
Implementation of User-Independent Hand Gesture Recognition Classification Models Using IMU and EMG-based Sensor Fusion Techniques
According to the World Health Organization, stroke is the third leading cause of disability. A common consequence of stroke is hemiparesis, which leads to the impairment of one side of the body and affects the performance of activities of daily living. It has been proven that targeting the motor impairments as early as possible while using wearable mechatronic devices as a robot-assisted therapy, and letting the patient be in control of the robotic system, can improve rehabilitation outcomes. However, despite the increased progress on control methods for wearable mechatronic devices, the need for a more natural interface that allows for better control remains. This work presents a user-independent gesture classification method based on a sensor fusion technique that combines surface electromyography (EMG) and an inertial measurement unit (IMU). The Myo Armband was used to measure muscle activity and motion data from healthy subjects. Participants were asked to perform 10 types of gestures in 4 different arm positions while wearing the Myo on their dominant limb. Data obtained from 22 participants were used to classify the gestures using 4 different classification methods. Finally, for each classification method, a 5-fold cross-validation method was used to test the efficacy of the classification algorithms. Overall classification accuracies in the range of 33.11%-72.1% were obtained. However, following the optimization of the gesture datasets, the overall classification accuracies increased to the range of 45.5%-84.5%. These results suggest that by using the proposed sensor fusion approach, it is possible to achieve a more natural human-machine interface that allows better control of wearable mechatronic devices during robot-assisted therapies.
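The feature-level fusion and 5-fold cross-validation described above can be sketched as follows. This is a minimal stand-in using a nearest-centroid classifier on synthetic features; it is not the paper's classifiers or data, and all names and shapes are illustrative:

```python
import numpy as np

def fuse(emg, imu):
    """Feature-level sensor fusion: concatenate per-window EMG and IMU features."""
    return np.hstack([emg, imu])

def kfold_accuracy(X, y, k=5, seed=0):
    """k-fold cross-validation with a nearest-centroid classifier."""
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # One centroid per gesture class, estimated from the training folds.
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                 for x in X[test]]
        accs.append(np.mean(np.array(preds) == y[test]))
    return float(np.mean(accs))
```

Reporting the mean accuracy over the k held-out folds, as here, is the standard way a single cross-validated figure such as 84.5% is obtained.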
Relic neutrino detection through angular correlations in inverse β-decay
Neutrino capture on beta-decaying nuclei is currently the only known
potentially viable method of detection of cosmic background neutrinos. It is
based on the idea of separation of the spectra of electrons or positrons
produced in captures of relic neutrinos on unstable nuclei from those from the
usual β-decay and requires very high energy resolution of the detector,
comparable to the neutrino mass. In this paper we suggest an alternative method
of discrimination between neutrino capture and β-decay, based on periodic
variations of angular correlations in inverse beta decay transitions induced by
relic neutrino capture. The time variations are expected to arise due to the
peculiar motion of the Sun with respect to the CνB rest frame and the
rotation of the Earth about its axis and can be observed in experiments with
both polarized and unpolarized nuclear targets. The main advantage of the
suggested method is that it does not depend crucially on the energy resolution
of detection of the produced β-particles and can be operative even if
this resolution exceeds the largest neutrino mass.
Comment: 24 pages, 1 figure. v2: title changed, section 4 modified, Appendix B
added, references added. v3: section 4 slightly expanded; eq. (B.7)
corrected. Final version to be published in JCAP
A dual role for prediction error in associative learning
Confronted with a rich sensory environment, the brain must learn
statistical regularities across sensory domains to construct causal
models of the world. Here, we used functional magnetic resonance
imaging and dynamic causal modeling (DCM) to furnish neurophysiological
evidence that statistical associations are learnt, even when
task-irrelevant. Subjects performed an audio-visual target-detection
task while being exposed to distractor stimuli. Unknown to them,
auditory distractors predicted the presence or absence of subsequent
visual distractors. We modeled incidental learning of these associations
using a Rescorla-Wagner (RW) model. Activity in primary visual
cortex and putamen reflected learning-dependent surprise: these areas
responded progressively more to unpredicted, and progressively less
to predicted visual stimuli. Critically, this prediction-error response
was observed even when the absence of a visual stimulus was
surprising. We investigated the underlying mechanism by embedding
the RW model into a DCM to show that auditory to visual connectivity
changed significantly over time as a function of prediction error. Thus,
consistent with predictive coding models of perception, associative
learning is mediated by prediction-error dependent changes in connectivity.
These results posit a dual role for prediction-error in encoding
surprise and driving associative plasticity.
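The Rescorla-Wagner model used above to quantify incidental learning reduces to a single delta-rule update per trial. A minimal sketch, in which the learning rate and the binary outcome coding are illustrative assumptions:

```python
def rescorla_wagner(outcomes, alpha=0.2, v0=0.0):
    """Delta-rule updates V <- V + alpha * (outcome - V).
    Returns per-trial associative strengths and prediction errors."""
    V, strengths, errors = v0, [], []
    for o in outcomes:
        pe = o - V           # prediction error: surprise on this trial
        V += alpha * pe      # error-driven change in associative strength
        strengths.append(V)
        errors.append(pe)
    return strengths, errors
```

Note that omitting an expected outcome yields a negative prediction error, mirroring the finding that the unexpected absence of a visual stimulus is itself surprising.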
Detection and analysis of single event upsets in noisy digital imagers with small to medium pixels
Camera sensors are shrinking, resulting in more defects seen through image analysis. Due to cosmic radiation, cameras experience both permanent defects known as hot pixels and transient defective spikes known as Single Event Upsets (SEUs). SEUs manifest themselves as random bright areas in sequential dark-frame images taken with long exposure times. In the past, it was difficult to separate SEUs from noise in dark-frame images taken with DSLRs at high sensitivity levels (ISO) and cell phone cameras at modest sensitivity levels. However, recent software improvements in this research have enabled the analysis of defect rates in noisy digital imagers by leveraging local-area and pixel-address distribution techniques. In addition, multiple experiments were performed to understand the relationship between SEUs and elevation. This study reports data from imagers with pixels ranging from 7 μm (DSLR cameras) down to 1.2 μm (cell phone cameras).
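The separation of permanent hot pixels from transient SEUs by pixel-address recurrence across a stack of dark frames can be sketched as follows; the threshold, array shapes, and the recurrence criterion are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def classify_defects(frames, threshold):
    """frames: dark-frame stack of shape (n_frames, H, W).
    A pixel above threshold in every frame is a permanent hot pixel;
    a pixel above threshold in exactly one frame is an SEU candidate."""
    above = frames > threshold
    counts = above.sum(axis=0)                # how often each address fires
    hot = np.argwhere(counts == len(frames))  # same address in all frames
    seu = np.argwhere(counts == 1)            # one-frame transient spike
    return hot, seu
```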
A Two-Tiered Correlation of Dark Matter with Missing Transverse Energy: Reconstructing the Lightest Supersymmetric Particle Mass at the LHC
We suggest that non-trivial correlations between the dark matter particle
mass and collider based probes of missing transverse energy H_T^miss may
facilitate a two-tiered approach to the initial discovery of supersymmetry and
the subsequent reconstruction of the LSP mass at the LHC. These correlations
are demonstrated via extensive Monte Carlo simulation of seventeen benchmark
models, each sampled at five distinct LHC center-of-mass beam energies,
spanning the parameter space of No-Scale F-SU(5). This construction is defined
in turn by the union of the Flipped SU(5) Grand Unified Theory, two pairs of
hypothetical TeV scale vector-like supersymmetric multiplets with origins in
F-theory, and the dynamically established boundary conditions of No-Scale
Supergravity. In addition, we consider a control sample comprising a standard
minimal Supergravity benchmark point. Led by a striking similarity between the
H_T^miss distribution and the familiar power spectrum of a black body radiator
at various temperatures, we implement a broad empirical fit of our simulation
against a Poisson distribution ansatz. We advance the resulting fit as a
theoretical blueprint for deducing the mass of the LSP, utilizing only the
missing transverse energy in a statistical sampling of >= 9 jet events.
Cumulative uncertainties central to the method subsist at a satisfactory 12-15%
level. The fact that the supersymmetric particle spectrum of No-Scale F-SU(5) has
survived the withering onslaught of early LHC data that is steadily decimating
the Constrained Minimal Supersymmetric Standard Model and minimal Supergravity
parameter spaces is a prime motivation for augmenting more conventional LSP
search methodologies with the presently proposed alternative.
Comment: JHEP version, 17 pages, 9 Figures, 2 Tables
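The maximum-likelihood fit of a Poisson ansatz to a binned distribution, the statistical core of the proposed blueprint, reduces to a count-weighted mean of the bin index. This generic sketch is not the authors' actual fitting code:

```python
import numpy as np

def fit_poisson_lambda(binned_counts):
    """MLE for the Poisson parameter: with n_i events in integer bin i,
    lambda-hat is the count-weighted mean bin index (the sample mean)."""
    bins = np.arange(len(binned_counts))
    return float((bins * binned_counts).sum() / binned_counts.sum())
```

In the paper's setting the fitted shape parameter, rather than being an end in itself, would be mapped through the simulated benchmarks to an estimate of the LSP mass.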