Digital Color Imaging
This paper surveys current technology and research in the area of digital
color imaging. In order to establish the background and lay down terminology,
fundamental concepts of color perception and measurement are first presented
using vector-space notation and terminology. Present-day color recording and
reproduction systems are reviewed along with the common mathematical models
used for representing these devices. Algorithms for processing color images for
display and communication are surveyed, and a forecast of research trends is
attempted. An extensive bibliography is provided.
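As a minimal illustration of the vector-space view of color described above (our own example, not taken from the survey): a trichromatic device acts as a linear map on stimuli, so conversion between device color spaces reduces to a 3x3 matrix multiply. The standard linear-sRGB to CIE XYZ matrix (D65 white point) serves as a concrete instance:

```python
import numpy as np

# In the vector-space view, tristimulus values are t = M @ c for a
# 3x3 device matrix M. Below is the standard linear-sRGB -> CIE XYZ
# conversion matrix for the D65 white point.
M_SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb_linear):
    """Map a linear-sRGB color vector to XYZ tristimulus values."""
    return M_SRGB_TO_XYZ @ np.asarray(rgb_linear, dtype=float)

# The reference white (R=G=B=1) maps to the D65 white point:
# X ~ 0.9505, Y = 1.0000, Z ~ 1.0890 (the row sums of the matrix).
white = rgb_to_xyz([1.0, 1.0, 1.0])
print(white)
```

Characterizing a real device amounts to estimating such a matrix (or a nonlinear refinement of it) from measurements, which is the mathematical modeling the abstract refers to.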
Multi-Focal Visual Servoing Strategies
Multi-focal vision provides two or more vision devices with different fields of view and measurement accuracies. A main advantage of this concept is the flexible allocation of these sensor resources, accounting for the current situational and task performance requirements. In particular, vision devices with large fields of view and low accuracies can be used
Steganographer Identification
Conventional steganalysis detects the presence of steganography within single
objects. In the real world, we may face a more complex scenario in which one
or more of multiple users, called actors, are guilty of using steganography;
this is typically defined as the Steganographer Identification Problem (SIP).
One might use conventional steganalysis algorithms to separate stego objects
from cover objects and then identify the guilty actors. However, the guilty
actors may be lost among a number of false alarms. To deal with the SIP, most
state-of-the-art methods use unsupervised-learning-based approaches. In these
solutions, each actor holds multiple digital objects, from which a set of
feature vectors can be extracted. Well-defined distances between these
feature sets are computed to measure the similarity between the corresponding
actors. By applying clustering or outlier detection, the most suspicious
actor(s) will be judged to be the steganographer(s). Though the SIP needs
further study, the existing works are well able to identify the
steganographer(s) when non-adaptive steganographic embedding is applied. In
this chapter, we present foundational concepts and review advanced
methodologies in SIP. This chapter is self-contained and intended as a
tutorial introducing the SIP in the context of media steganography.
Comment: A tutorial with 30 pages.
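The unsupervised pipeline sketched in the abstract (a feature set per actor, a well-defined distance between feature sets, then outlier detection) can be illustrated on synthetic data. Everything below is an illustrative assumption, not any specific published method: the "distance" is simply the Euclidean distance between feature-set means, and the guilty actor is simulated with a mean shift in feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 6 actors, each holding 20 objects summarised by
# 8-dimensional feature vectors (one row per object). The last actor's
# features are shifted, simulating the footprint of steganographic embedding.
actors = [rng.normal(0.0, 1.0, size=(20, 8)) for _ in range(5)]
actors.append(rng.normal(1.0, 1.0, size=(20, 8)))  # the steganographer

def set_distance(a, b):
    """A simple distance between two feature sets: the Euclidean
    distance between their feature means (an MMD-style proxy)."""
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))

# Pairwise distance matrix between actors.
n = len(actors)
d = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        d[i, j] = set_distance(actors[i], actors[j])

# Outlier detection: the actor farthest, on average, from all the
# others is flagged as the most suspicious.
suspect = int(d.mean(axis=1).argmax())
print("most suspicious actor:", suspect)
```

Real systems replace the toy mean-distance with richer set distances and the argmax with clustering or dedicated outlier detectors, but the structure (features per object, distances per actor pair, then a decision) is the same.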
Hand-eye calibration for rigid laparoscopes using an invariant point
PURPOSE: Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution, but one current challenge is accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. METHODS: In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. RESULTS: We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. CONCLUSION: We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
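The accuracy metric quoted above, RMS projected error for a single invariant point, can be sketched as follows. This is our own illustration under assumed conventions (a pinhole camera with intrinsics K, 4x4 tracker poses, and a hand-eye transform mapping tracker coordinates to camera coordinates); the paper's actual implementation details are not reproduced here.

```python
import numpy as np

def project(K, T_cam_world, p_world):
    """Project a 3D world point into pixels with a pinhole model."""
    p_cam = T_cam_world @ np.append(p_world, 1.0)   # homogeneous transform
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]                          # perspective division

def rms_projected_error(K, hand_eye, tracker_poses, p_world, detections):
    """RMS pixel error of the invariant point over all tracked frames.

    hand_eye:       4x4 camera-from-tracker transform being evaluated
    tracker_poses:  list of 4x4 world-from-tracker poses
    detections:     detected pixel positions of the invariant point
    """
    errs = []
    for T_world_tracker, u in zip(tracker_poses, detections):
        # Camera pose = hand-eye composed with the tracked marker pose.
        T_cam_world = hand_eye @ np.linalg.inv(T_world_tracker)
        errs.append(np.linalg.norm(project(K, T_cam_world, p_world) - u))
    return np.sqrt(np.mean(np.square(errs)))

# Sanity check with made-up intrinsics and a perfect calibration:
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
I4 = np.eye(4)
p = np.array([0.05, -0.02, 0.3])
u_true = project(K, I4, p)
err = rms_projected_error(K, I4, [I4], p, [u_true])
print(err)  # 0.0 for a perfect calibration
```

Real-time visualisation, as described in the abstract, amounts to drawing the projected point over the live video so the surgical team can see this error directly.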
Compression of interferometric radio-astronomical data
The volume of radio-astronomical data is a considerable burden in the
processing and storing of radio observations with high time and frequency
resolutions and large bandwidths. Lossy compression of interferometric
radio-astronomical data is considered to reduce the volume of visibility data
and to speed up processing.
A new compression technique named "Dysco" is introduced that consists of two
steps: a normalization step, in which grouped visibilities are normalized to
have a similar distribution; and a quantization and encoding step, which rounds
values to a given quantization scheme using a dithering scheme. Several
non-linear quantization schemes are tested and combined with different methods
for normalizing the data. Four data sets with observations from the LOFAR and
MWA telescopes are processed with different processing strategies and different
combinations of normalization and quantization. The effects of compression are
measured in the image plane.
The noise added by the lossy compression technique acts like normal system
noise. The accuracy of Dysco depends on the signal-to-noise ratio of the
data: noisy data can be compressed with a smaller loss of image quality. Data
with typical correlator time and frequency resolutions can be compressed by a
factor of 6.4 for LOFAR and 5.3 for MWA observations with less than 1% added
system noise. An implementation of the compression technique is released that
provides a Casacore storage manager and allows transparent encoding and
decoding. Encoding and decoding are faster than the read/write speed of typical
disks.
The technique can be used for LOFAR and MWA to reduce the archival space
requirements for storing observed data. Data from SKA-low will likely be
compressible by the same amount as LOFAR. The same technique can be used to
compress data from other telescopes, but a different bit-rate might be
required.
Comment: Accepted for publication in A&A. 13 pages, 8 figures. Abstract was abridged.
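The two Dysco steps named in the abstract, normalization followed by quantization with dithering, can be sketched on toy data. This is a simplified stand-in (real Dysco operates on grouped complex visibilities and tests several non-linear quantization schemes; here we use max-abs normalization and uniform quantization, both our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def compress(values, n_bits=4):
    """Step 1: normalize the group so values share a similar range.
    Step 2: round to integer levels with dithering, which turns the
    rounding error into unbiased noise instead of systematic bias."""
    scale = np.max(np.abs(values))           # per-group normalization factor
    levels = 2 ** (n_bits - 1) - 1           # e.g. 7 levels each side for 4 bits
    x = values / scale * levels
    dither = rng.uniform(0.0, 1.0, size=x.shape)
    q = np.clip(np.floor(x + dither), -levels, levels).astype(np.int8)
    return scale, q

def decompress(scale, q, n_bits=4):
    """Decode: floor(x + U[0,1)) is unbiased, so decoding is a rescale."""
    levels = 2 ** (n_bits - 1) - 1
    return q / levels * scale

data = rng.normal(0.0, 1.0, size=4096)       # stand-in for one visibility group
scale, q = compress(data)
noise = decompress(scale, q) - data
print("rms of added noise:", np.sqrt(np.mean(noise ** 2)))
```

The per-sample error behaves like independent additive noise, which is why, as the abstract states, compression loss in the image plane acts like extra system noise, and why noisier data tolerates coarser quantization.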
Review of real brain-controlled wheelchairs
This paper presents a review of the state of the art regarding wheelchairs driven by a brain-computer interface (BCI). Using a brain-controlled wheelchair (BCW), disabled users could handle a wheelchair through their brain activity, granting autonomy to move through an experimental environment. A classification is established, based on the characteristics of the BCW, such as the type of electroencephalographic (EEG) signal used, the navigation system employed by the wheelchair, the task for the participants, or the metrics used to evaluate the performance. Furthermore, these factors are compared according to the type of signal used, in order to clarify the differences among them. Finally, the trend of current research in this field is discussed, as well as the challenges that should be solved in the future.