The rotation and coma profiles of comet C/2004 Q2 (Machholz)
Aims. Rotation periods of cometary nuclei are scarce, though important when
studying the nature and origin of these objects. Our aim is to derive a
rotation period for the nucleus of comet C/2004 Q2 (Machholz). Methods. C/2004
Q2 (Machholz) was monitored using the Merope CCD camera on the Mercator
telescope at La Palma, Spain, in January 2005, during its closest approach to
Earth, implying a high spatial resolution (50km per pixel). One hundred seventy
images were recorded in three different photometric broadband filters, two blue
ones (Geneva U and B) and one red (Cousins I). Magnitudes for the comet's
optocentre were derived with very small apertures, corrected for seeing, to
isolate the contribution of the nucleus from the bright coma. Our CCD
photometry also permitted us to study the profile of the inner coma in the
different bands. Results. A rotation period for the nucleus of P = 9.1 +/- 0.2
h was derived. This period is on the short side compared to published periods
of other comets, though even shorter periods are known. Nevertheless, comparing
our results with images obtained in the narrowband CN filter, we cannot exclude
the possibility that our method sampled P/2 instead of P. Coma profiles are
also presented, and a terminal ejection velocity of the grains v_gr = 1609 +/-
48 m/s is found from the continuum profile in the I band. Comment: 11 pages, 9 figures, accepted by A&A
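The period search described here can be illustrated with a generic least-squares periodogram on synthetic photometry. The sampling, amplitude, and noise level below are invented for the sketch and are not the Mercator data:

```python
import numpy as np

# synthetic lightcurve: 170 irregularly sampled points over ~2 nights (hours)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 48, 170))
P_true = 9.1                              # hours (the period quoted in the abstract)
mag = 0.05 * np.sin(2 * np.pi * t / P_true) + 0.005 * rng.standard_normal(t.size)

def periodogram_power(t, y, periods):
    """Least-squares sinusoid fit at each trial period; power = fraction of
    variance explained, so the best period maximises the power."""
    y = y - y.mean()
    powers = []
    for P in periods:
        w = 2 * np.pi / P
        A = np.column_stack([np.sin(w * t), np.cos(w * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        powers.append(1 - np.sum((y - A @ coef) ** 2) / np.sum(y ** 2))
    return np.array(powers)

periods = np.linspace(5, 15, 2000)
best = periods[np.argmax(periodogram_power(t, mag, periods))]
```

A real analysis would also inspect the P/2 alias explicitly, since a nucleus with two similar active regions produces a double-peaked lightcurve, which is exactly the ambiguity the abstract notes.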
The Multiscale Morphology Filter: Identifying and Extracting Spatial Patterns in the Galaxy Distribution
We present here a new method, MMF, for automatically segmenting cosmic
structure into its basic components: clusters, filaments, and walls.
Importantly, the segmentation is scale independent, so all structures are
identified without prejudice as to their size or shape. The method is ideally
suited for extracting catalogues of clusters, walls, and filaments from samples
of galaxies in redshift surveys or from particles in cosmological N-body
simulations: it makes no prior assumptions about the scale or shape of the
structures. Comment: Replacement with higher resolution figures. 28 pages, 17
figures. For full resolution version see:
http://www.astro.rug.nl/~weygaert/tim1publication/miguelmmf.pd
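The morphological classification underlying a scheme like the MMF can be illustrated with the Hessian eigenvalue signature it relies on. The toy 2D field below is invented for the sketch; the actual method operates on 3D density fields over a hierarchy of scales:

```python
import numpy as np

# synthetic 2D density field: an elongated ridge (filament-like structure)
x = np.linspace(-5, 5, 101)
X, Y = np.meshgrid(x, x, indexing="ij")
field = np.exp(-(X**2) / 0.5 - (Y**2) / 20.0)   # narrow in x, extended in y

# Hessian via finite differences
dx = x[1] - x[0]
fx, fy = np.gradient(field, dx, dx)
fxx, fxy = np.gradient(fx, dx, dx)
_, fyy = np.gradient(fy, dx, dx)

# eigenvalues at the ridge centre classify the local morphology:
# in 2D a filament-like ridge has one strongly negative eigenvalue
# (curvature across the ridge) and one near zero (along the ridge)
i = j = 50
H = np.array([[fxx[i, j], fxy[i, j]],
              [fxy[i, j], fyy[i, j]]])
lam = np.sort(np.linalg.eigvalsh(H))
```

In 3D the analogous signatures (three negative eigenvalues, two negative and one small, one negative and two small) distinguish clusters, filaments, and walls; scale independence comes from repeating this at many smoothing scales and keeping the maximal response.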
Improved line/edge detection and visual reconstruction
Lines and edges provide important information for object categorization and recognition. In addition, one
brightness model is based on a symbolic interpretation of the cortical multi-scale line/edge representation. In
this paper we present an improved scheme for line/edge extraction from simple and complex cells and we illustrate
the multi-scale representation. This representation can be used for visual reconstruction, but also for nonphotorealistic
rendering. Together with keypoints and a new model of disparity estimation, a
3D wireframe representation of, e.g., faces can be obtained in the future.
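The simple/complex-cell processing referred to here can be sketched with the standard quadrature-pair (Gabor energy) model in 1D. The filter parameters below are arbitrary choices for illustration, not those of the paper's multi-scale model:

```python
import numpy as np

# 1D luminance profile with a step edge at x = 0
x = np.arange(-128, 128)
signal = (x >= 0).astype(float)

# quadrature pair of Gabor kernels (even/odd simple-cell receptive fields)
t = np.arange(-20, 21)
sigma, freq = 6.0, 0.1
env = np.exp(-t**2 / (2 * sigma**2))
even = env * np.cos(2 * np.pi * freq * t)
odd = env * np.sin(2 * np.pi * freq * t)

resp_e = np.convolve(signal, even, mode="same")
resp_o = np.convolve(signal, odd, mode="same")
energy = resp_e**2 + resp_o**2          # complex-cell (local energy) response

# locate the edge away from the array borders to avoid truncation artefacts
interior = slice(30, -30)
edge_pos = x[interior][np.argmax(energy[interior])]
```

The even/odd balance of the responses at the energy peak is what lets such schemes label the event as a line versus an edge and recover its polarity, which is the basis of the symbolic line/edge representation used for reconstruction.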
Learning to Extract Motion from Videos in Convolutional Neural Networks
This paper shows how to extract dense optical flow from videos with a
convolutional neural network (CNN). The proposed model constitutes a potential
building block for deeper architectures to allow using motion without resorting
to an external algorithm, e.g. for recognition in videos. We derive our network
architecture from signal processing principles to provide desired invariances
to image contrast, phase and texture. We constrain weights within the network
to enforce strict rotation invariance and substantially reduce the number of
parameters to learn. We demonstrate end-to-end training on only 8 sequences of
the Middlebury dataset, orders of magnitude less than competing CNN-based
motion estimation methods, and obtain comparable performance to classical
methods on the Middlebury benchmark. Importantly, our method outputs a
distributed representation of motion that allows representing multiple,
transparent motions, and dynamic textures. Our contributions on network design
and rotation invariance offer insights that are not specific to motion estimation.
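The paper's learned CNN is not reproduced here, but the classical brightness-constancy baseline such networks are compared against can be sketched in a few lines: a single global-translation, Lucas-Kanade-style estimate on a synthetic one-pixel shift (the frames below are random test data, not Middlebury):

```python
import numpy as np

rng = np.random.default_rng(1)
frame0 = rng.standard_normal((64, 64))
k = np.ones(5) / 5                       # mild smoothing so gradients are informative
for ax in (0, 1):
    frame0 = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), ax, frame0)

frame1 = np.roll(frame0, 1, axis=0)      # true motion: one pixel along axis 0

Ir, Ic = np.gradient(frame0)             # spatial derivatives (rows, columns)
It = frame1 - frame0                     # temporal derivative
# brightness constancy: It ~ -(u*Ir + v*Ic); solve the 2x2 normal equations
A = np.array([[(Ir * Ir).sum(), (Ir * Ic).sum()],
              [(Ir * Ic).sum(), (Ic * Ic).sum()]])
b = -np.array([(Ir * It).sum(), (Ic * It).sum()])
u, v = np.linalg.solve(A, b)
```

A single least-squares fit like this commits to one motion per region, which is precisely what it cannot do for the transparent and multiple motions the abstract says a distributed output representation can capture.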
The visual representation of texture
This research is concerned with texture, a source of visual information that has motivated a huge amount of psychophysical and computational research. This thesis questions how useful the accepted view of texture perception is. From a theoretical point of view, work to date has largely avoided two critical aspects of a computational theory of texture perception. Firstly, what is texture? Secondly, what is an appropriate representation for texture? This thesis argues that a task-dependent definition of texture is necessary, and
proposes a multi-local, statistical scheme for representing texture orientation.
Human performance on a series of psychophysical orientation discrimination tasks is compared to specific predictions from the scheme.
The first set of experiments investigates observers' ability to directly derive statistical estimates from texture. An analogy is reported between the way texture statistics are derived and the visual processing of spatio-luminance features.
The second set of experiments is concerned with the way texture elements are extracted
from images (an example of the generic grouping problem in vision). The use of
highly constrained experimental tasks, typically texture orientation discriminations, allows for the formulation of simple statistical criteria for setting critical parameters of the model (such as the spatial scale of analysis). It is shown that schemes based on isotropic filtering and symbolic matching do not suffice for performing this grouping, but that the
scheme proposed, based on oriented mechanisms, does.
Taken together, these results suggest a view of visual texture processing not as a
disparate collection of processes, but as a general strategy for deriving statistical representations of images common to a range of visual tasks.
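A multi-local, statistical estimate of texture orientation of the kind argued for here can be illustrated with a structure-tensor computation, one standard oriented-mechanism scheme. The grating below is synthetic and the global pooling is an assumption for the sketch, not the thesis's specific model:

```python
import numpy as np

# synthetic oriented texture: a grating at 30 degrees
theta_true = np.deg2rad(30)
x = np.arange(96)
X, Y = np.meshgrid(x, x, indexing="xy")
texture = np.sin(0.4 * (np.cos(theta_true) * X + np.sin(theta_true) * Y))

gy, gx = np.gradient(texture)            # local oriented measurements (gradients)
# structure tensor: second moments of the gradient, pooled over the image --
# a statistical summary of many local orientation estimates
Jxx, Jxy, Jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()

# dominant orientation from the double-angle representation
est = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
```

Pooling second moments rather than individual orientations is what makes the estimate statistical and multi-local: no single measurement is trusted, and the spatial extent of the pooling plays the role of the model's scale-of-analysis parameter.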