grc4f v1.0: a Four-fermion Event Generator for e+e- Collisions
grc4f is a Monte Carlo package for generating e+e- to 4-fermion processes in
the standard model. All 76 fermionic final-state processes allowed at LEP-2,
evaluated at tree level, are included in version 1.0. grc4f addresses the
event-simulation requirements of e+e- colliders such as LEP and upcoming
linear colliders. Most of the attractive features of grc4f come from its link
to the GRACE system, an automatic Feynman-diagram computation system. The
GRACE system has been used to produce the computational code for all final
states, giving a higher level of confidence in the correctness of the
calculations. Based on the
helicity amplitude calculation technique, all fermion masses can be kept finite
and helicity information can be propagated down to the final state particles.
Phase-space integration of the matrix element yields the total and
differential cross sections, after which unweighted events are generated.
Initial-state radiation (ISR) corrections are implemented in two ways: one
based on the electron structure-function formalism, and one using the parton
shower algorithm QEDPS. The latter can also be applied to final-state
radiation (FSR), although its interference with ISR is not yet taken into
account. Parton showering and hadronization of the final-state quarks are
performed through an interface to JETSET. The Coulomb correction between the
two intermediate W's, anomalous couplings, and gluon contributions in the
hadronic processes are also included.
Comment: 30 pages, LaTeX, 5 pages of PostScript figures, uuencoded
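The unweighting step mentioned above (turning the differential cross section into unit-weight events) is conventionally done by hit-or-miss sampling. The following Python sketch illustrates the generic technique only; `dsigma` is a hypothetical stand-in for the squared matrix element over a one-dimensional phase space, not part of grc4f.

```python
import numpy as np

rng = np.random.default_rng(42)

def dsigma(x):
    # Hypothetical stand-in for the differential cross section,
    # evaluated at a phase-space point x in [0, 1).
    return 1.0 + np.cos(4.0 * np.pi * x) ** 2

# Estimate the maximum weight by scanning random phase-space points.
w_max = dsigma(rng.random(100_000)).max()

def generate_unweighted(n_events):
    """Hit-or-miss: keep a candidate point with probability w / w_max."""
    events = []
    while len(events) < n_events:
        x = rng.random()
        if rng.random() * w_max <= dsigma(x):
            events.append(x)
    return np.array(events)

events = generate_unweighted(10_000)  # unit-weight events
```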
Robust model-based analysis of single-particle tracking experiments with Spot-On.
Single-particle tracking (SPT) has become an important method to bridge biochemistry and cell biology, since it allows direct observation of protein binding and diffusion dynamics in live cells. However, accurately inferring information from SPT studies is challenging due to biases in both data analysis and experimental design. To address analysis bias, we introduce 'Spot-On', an intuitive web interface. Spot-On implements a kinetic modeling framework that accounts for known biases, including molecules moving out of focus, and robustly infers diffusion constants and subpopulations from pooled single-molecule trajectories. To minimize inherent experimental biases, we implement and validate stroboscopic photo-activation SPT (spaSPT), which minimizes motion-blur bias and tracking errors. We validate Spot-On using experimentally realistic simulations and show that Spot-On outperforms other methods. We then apply Spot-On to spaSPT data from live mammalian cells spanning a wide range of nuclear dynamics and demonstrate that Spot-On consistently and robustly infers subpopulation fractions and diffusion constants.
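As a concrete illustration of the kind of kinetic model described above, the sketch below fits a two-state (bound/free) Brownian displacement distribution to a histogram of single-molecule displacements. It is a stripped-down sketch, not Spot-On's actual implementation: it ignores the out-of-focus correction, and the frame interval and parameter names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

DT = 0.01  # frame interval in seconds (illustrative value)

def p_disp(r, d_coef):
    # Radial density of 2D Brownian displacements over one frame interval:
    # p(r) = r/(2*D*dt) * exp(-r^2 / (4*D*dt))
    s2 = 2.0 * d_coef * DT
    return (r / s2) * np.exp(-r**2 / (2.0 * s2))

def two_state(r, f_bound, d_bound, d_free):
    # Mixture of a bound and a freely diffusing subpopulation.
    return f_bound * p_disp(r, d_bound) + (1.0 - f_bound) * p_disp(r, d_free)

# r_centers/density would come from a histogram of measured displacements
# (in um); here they are fabricated from the model so the sketch runs.
rng = np.random.default_rng(0)
r_centers = np.linspace(0.01, 1.5, 100)
density = two_state(r_centers, 0.3, 0.05, 2.0)
density += rng.normal(0.0, 0.01, r_centers.size)

popt, _ = curve_fit(two_state, r_centers, density,
                    p0=[0.5, 0.01, 1.0], bounds=([0, 0, 0], [1, 1, 20]))
f_bound, d_bound, d_free = popt  # subpopulation fraction and diffusion constants
```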
Study of optical techniques for the Ames unitary wind tunnel: Digital image processing, part 6
A survey of digital image processing techniques and processing systems for aerodynamic images has been conducted. These images covered many types of flows and were generated by many types of flow diagnostics, including laser vapor screens, infrared cameras, laser holographic interferometry, Schlieren, and luminescent paints. Some general digital image processing systems, imaging networks, optical sensors, and image computing chips were briefly reviewed. Possible digital imaging network systems for the Ames Unitary Wind Tunnel were explored.
Single camera pose estimation using Bayesian filtering and Kinect motion priors
Traditional approaches to upper body pose estimation using monocular vision
rely on complex body models and a large variety of geometric constraints. We
argue that this approach is inelegant and imposes a large processing burden,
and instead incorporate these constraints through priors obtained directly
from training data. A prior distribution
covering the probability of a human pose occurring is used to incorporate
likely human poses. This distribution is obtained offline, by fitting a
Gaussian mixture model to a large dataset of recorded human body poses, tracked
using a Kinect sensor. We combine this prior information with a random walk
transition model to obtain an upper body model, suitable for use within a
recursive Bayesian filtering framework. Our model can be viewed as a mixture of
discrete Ornstein-Uhlenbeck processes, in that states behave as random walks,
but drift towards a set of typically observed poses. This model is combined
with measurements of the human head and hand positions, using recursive
Bayesian estimation to incorporate temporal information. Measurements are
obtained using face detection and a simple skin colour hand detector, trained
using the detected face. The suggested model is designed with analytical
tractability in mind and we show that the pose tracking can be
Rao-Blackwellised using the mixture Kalman filter, allowing for computational
efficiency while still incorporating bio-mechanical properties of the upper
body. In addition, the use of the proposed upper body model allows reliable
three-dimensional pose estimates to be obtained indirectly for a number of
joints that are often difficult to detect using traditional object recognition
strategies. Comparisons with Kinect sensor results and the state of the art in
2D pose estimation highlight the efficacy of the proposed approach.
Comment: 25 pages, technical report, related to the Burke and Lasenby AMDO 2014
conference paper. Code sample: https://github.com/mgb45/SignerBodyPose Video:
https://www.youtube.com/watch?v=dJMTSo7-uF
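A minimal sketch of the transition model described in the abstract, a random walk that drifts toward typical poses, viewed as a discrete Ornstein-Uhlenbeck step over a Gaussian-mixture prior. The mixture means, weights, and drift/noise parameters here are illustrative assumptions, not the authors' fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Means and weights of a (hypothetical) Gaussian mixture fitted offline
# to Kinect-tracked upper-body poses.
pose_means = np.array([[0.0, 0.5, 0.9],
                       [0.4, 0.8, 1.2]])
mix_weights = np.array([0.6, 0.4])

ALPHA = 0.1   # drift strength toward the selected typical pose
SIGMA = 0.02  # random-walk noise scale

def transition(x):
    """One discrete OU step: a random walk drifting toward a sampled mode."""
    k = rng.choice(len(pose_means), p=mix_weights)
    drift = ALPHA * (pose_means[k] - x)
    return x + drift + SIGMA * rng.standard_normal(x.shape)

x = np.zeros(3)
for _ in range(100):
    x = transition(x)  # state wanders but is pulled toward typical poses
```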
Robust and real-time hand detection and tracking in monocular video
In recent years, personal computing devices such as laptops, tablets and smartphones have become ubiquitous. Moreover, intelligent sensors are being integrated into many consumer devices such as eyeglasses, wristwatches and smart televisions. With the advent of touchscreen technology, a new human-computer interaction (HCI) paradigm arose that allows users to interface with their device in an intuitive manner. Using simple gestures, such as swipe or pinch movements, a touchscreen can be used to directly interact with a virtual environment. Nevertheless, touchscreens still form a physical barrier between the virtual interface and the real world.
An increasingly popular field of research that tries to overcome this limitation is video-based gesture recognition, hand detection and hand tracking. Gesture-based interaction allows the user to directly interact with the computer in a natural manner, exploring a virtual reality using nothing but their own body language.
In this dissertation, we investigate how robust hand detection and tracking can be accomplished under real-time constraints. In the context of human-computer interaction, real-time is defined as both low latency and low complexity, such that a complete video frame can be processed before the next one becomes available. Furthermore, for practical applications, the algorithms should be robust to illumination changes, camera motion, and cluttered backgrounds in the scene. Finally, the system should be able to initialize automatically, and to detect and recover from tracking failure. We study a wide variety of existing algorithms, and propose significant improvements and novel methods to build a complete detection and tracking system that meets these requirements.
Hand detection, hand tracking and hand segmentation are related yet technically different challenges. Whereas detection deals with finding an object in a static image, tracking considers temporal information and is used to track the position of an object over time, throughout a video sequence. Hand segmentation is the task of estimating the hand contour, thereby separating the object from its background.
Detection of hands in individual video frames allows us to automatically initialize our tracking algorithm, and to detect and recover from tracking failure. Human hands are highly articulated objects, consisting of finger parts connected by joints. As a result, the appearance of a hand can vary greatly depending on the assumed hand pose. Traditional detection algorithms often assume that the appearance of the object of interest can be described using a rigid model, and therefore cannot robustly detect human hands. We therefore developed an algorithm that detects hands by exploiting their articulated nature. Instead of resorting to a template-based approach, we probabilistically model the spatial relations between the different hand parts and the centroid of the hand. Detecting hand parts, such as fingertips, is much easier than detecting a complete hand, and based on our model of the spatial configuration of hand parts, the detected parts can be used to estimate the position of the complete hand. To comply with the real-time constraints, we developed techniques to speed up the process by efficiently discarding unimportant information in the image. Experimental results show that our method is competitive with the state of the art in object detection while reducing the computational complexity by a factor of 1000. Furthermore, we showed that our algorithm can also be used to detect other articulated objects such as persons or animals, and is therefore not restricted to the task of hand detection.
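The part-based detection idea in the preceding paragraph can be sketched as centroid voting: each detected part predicts the hand centre through a learned offset. Part names and offsets below are purely illustrative; the dissertation's model is probabilistic rather than this simple average.

```python
import numpy as np

# Learned mean offsets from each hand part to the hand centroid (illustrative).
part_offsets = {
    "fingertip": np.array([0.0, 40.0]),
    "thumb_tip": np.array([-25.0, 30.0]),
}

def vote_centroid(detections):
    """detections: list of (part_name, (x, y)) pairs from a part detector.
    Each part votes for the centroid via its learned offset; here the votes
    are simply averaged, whereas a probabilistic model would combine them
    as spatial distributions."""
    votes = [np.asarray(pos, float) + part_offsets[name]
             for name, pos in detections]
    return np.mean(votes, axis=0)

centroid = vote_centroid([("fingertip", (120, 80)),
                          ("fingertip", (135, 78)),
                          ("thumb_tip", (100, 95))])
```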
Once a hand has been detected, a tracking algorithm can be used to continuously track its position over time. We developed a probabilistic tracking method that can cope with uncertainty caused by image noise, incorrect detections, changing illumination, and camera motion. Furthermore, our tracking system automatically determines the number of hands in the scene, and can cope with hands entering or leaving the video canvas. We introduced several novel techniques that greatly increase tracking robustness, and that can also be applied in domains other than hand tracking. To achieve real-time processing, we investigated several techniques to reduce the search space of the problem, and deliberately employ methods that are easily parallelized on modern hardware. Experimental results indicate that our methods outperform the state of the art in hand tracking, while providing a much lower computational complexity.
One of the methods used by our probabilistic tracking algorithm is optical flow estimation. Optical flow is defined as a 2D vector field describing the apparent velocities of objects in a 3D scene, projected onto the image plane. Optical flow is known to be used by many insects and birds to visually track objects and to estimate their ego-motion. However, most optical flow estimation methods described in the literature are either too slow to be used in real-time applications, or are not robust to illumination changes and fast motion. We therefore developed an optical flow algorithm that can cope with large displacements, and that is illumination independent.
Furthermore, we introduce a regularization technique that ensures a smooth flow-field. This regularization scheme effectively reduces the number of noisy and incorrect flow-vector estimates, while maintaining the ability to handle motion discontinuities caused by object boundaries in the scene.
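The dissertation's flow algorithm itself is not reproduced here, but the same dense-flow idea can be tried with OpenCV's Farneback implementation, whose built-in smoothing plays a role loosely analogous to the regularization just described. This is a generic substitute, not the author's method.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames (synthetic here so the sketch runs):
# a bright blob that moves 2 px to the right between frames.
prev = np.zeros((240, 320), np.uint8)
cv2.circle(prev, (160, 120), 20, 255, -1)
curr = np.roll(prev, 2, axis=1)

# Dense optical flow (positional args: pyr_scale, levels, winsize,
# iterations, poly_n, poly_sigma, flags).
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
# flow[y, x] = (dx, dy), the apparent per-pixel motion.
```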
The above methods are combined into a hand tracking framework which can be used for interactive applications in unconstrained environments. To demonstrate the possibilities of gesture-based human-computer interaction, we developed a new type of computer display. This display is completely transparent, allowing multiple users to perform collaborative tasks while maintaining eye contact. Furthermore, our display produces an image that seems to float in thin air, such that users can touch the virtual image with their hands. This floating imaging display has been showcased at several national and international events and trade shows.
The research that is described in this dissertation has been evaluated thoroughly by comparing detection and tracking results with those obtained by state-of-the-art algorithms. These comparisons show that the proposed methods outperform most algorithms in terms of accuracy, while achieving a much lower computational complexity, resulting in a real-time implementation. Results are discussed in depth at the end of each chapter. This research further resulted in an international journal publication; a second journal paper that has been submitted and is under review at the time of writing this dissertation; nine international conference publications; a national conference publication; a commercial license agreement concerning the research results; two hardware prototypes of a new type of computer display; and a software demonstrator.
Vaex: Big Data exploration in the era of Gaia
We present a new Python library called vaex, to handle extremely large
tabular datasets, such as astronomical catalogues like the Gaia catalogue,
N-body simulations or any other regular datasets which can be structured in
rows and columns. Fast computations of statistics on regular N-dimensional
grids allow analysis and visualization on the order of a billion rows per
second. We use streaming algorithms, memory mapped files and a zero memory copy
policy to allow exploration of datasets larger than memory, i.e. out-of-core
algorithms. Vaex allows arbitrary (mathematical) transformations using normal
Python expressions and (a subset of) numpy functions which are lazily evaluated
and computed when needed in small chunks, which avoids wasting RAM. Boolean
expressions (which are also lazily evaluated) can be used to explore subsets of
the data, which we call selections. Vaex uses a DataFrame API similar to that
of Pandas, a very popular library, which eases migration from Pandas.
Visualization is one of the key points of vaex, and is done using binned
statistics in 1d (e.g. histogram), in 2d (e.g. 2d histograms with colormapping)
and 3d (using volume rendering). Vaex is split into several packages:
vaex-core for the computational part, vaex-viz for visualization mostly based
on matplotlib, vaex-jupyter for visualization in the Jupyter notebook/lab based
on IPyWidgets, vaex-server for the (optional) client-server communication,
vaex-ui for the Qt-based interface, vaex-hdf5 for HDF5-based memory-mapped
storage, and vaex-astro for astronomy-related selections, transformations and
memory-mapped (column-based) FITS storage. Vaex is open source and available
under the MIT license on GitHub; documentation and other information can be
found on the main website: https://vaex.io, https://docs.vaex.io or
https://github.com/maartenbreddels/vaex
Comment: 14 pages, 8 figures, submitted to A&A, interactive version of Fig 4:
https://vaex.io/paper/fig
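A short usage sketch of the lazy expressions and selections described above, using the bundled example dataset (API as documented for recent vaex releases):

```python
import vaex

df = vaex.example()  # small example dataset shipped with vaex

# Virtual column: a lazily evaluated expression, computed in chunks,
# so no extra RAM is spent materializing it.
df['r'] = (df.x**2 + df.y**2 + df.z**2)**0.5

# A selection: a lazily evaluated boolean expression over the rows.
df.select(df.r < 10)

# Binned statistics on a regular grid (here a 1d histogram of 64 bins),
# restricted to the active selection.
counts = df.count(binby=df.r, limits=[0, 10], shape=64, selection=True)
mean_E = df.mean(df.E, selection=True)
```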
Digital pulse-shape discrimination of fast neutrons and gamma rays
Discrimination between fast neutrons and gamma rays in a liquid
scintillator detector has been investigated using digital pulse-processing
techniques. An experimental setup with a 252Cf source, a BC-501 liquid
scintillator detector, and a BaF2 detector was used to collect waveforms with a
100 Ms/s, 14-bit sampling ADC. Three identical ADCs were combined to increase
the sampling frequency to 300 Ms/s. Four different digital pulse-shape analysis
algorithms were developed and compared to each other and to data obtained with
an analogue neutron-gamma discrimination unit. Two of the digital algorithms
were based on the charge comparison method, while the analogue unit and the
other two digital algorithms were based on the zero-crossover method. Two
different figure-of-merit parameters, which quantify the neutron-gamma
discrimination properties, were evaluated for all four digital algorithms and
for the analogue data set. All of the digital algorithms gave similar or better
figure-of-merit values than what was obtained with the analogue setup. A
detailed study of the discrimination properties as a function of sampling
frequency and bit resolution of the ADC was performed. It was shown that a
sampling ADC with a bit resolution of 12 bits and a sampling frequency of 100
Ms/s is adequate for achieving an optimal neutron-gamma discrimination for
pulses having a dynamic range for deposited neutron energies of 0.3-12 MeV. An
investigation of the influence of the sampling frequency on the time resolution
was made. A FWHM of 1.7 ns was obtained at 100 Ms/s.
Comment: 26 pages, 14 figures, submitted to Nuclear Instruments and Methods in
Physics Research
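For illustration, the charge comparison method used by two of the digital algorithms amounts to a tail-to-total integral ratio per pulse; the gate lengths and pulse shape below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

FS = 100e6  # sampling frequency, 100 Ms/s (10 ns per sample)

def psd_ratio(pulse, t0, short_gate, long_gate):
    """Charge comparison: tail-to-total integral ratio of a digitized pulse.
    Neutron pulses in a liquid scintillator carry more charge in the slow
    tail, so their ratios cluster higher than those of gamma rays."""
    total = pulse[t0:t0 + long_gate].sum()
    tail = pulse[t0 + short_gate:t0 + long_gate].sum()
    return tail / total

# Synthetic two-component scintillation pulse (fast + slow decay) so the
# sketch runs; real input would be sampled ADC waveforms.
rng = np.random.default_rng(7)
t = np.arange(300) / FS
pulse = np.exp(-t / 20e-9) + 0.3 * np.exp(-t / 200e-9)
pulse += rng.normal(0.0, 0.002, t.size)

ratio = psd_ratio(pulse, t0=0, short_gate=5, long_gate=300)
# A figure of merit is then computed from the ratio histograms of many
# pulses, e.g. FOM = |mu_n - mu_g| / (FWHM_n + FWHM_g).
```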
Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey
The Gemini Planet Imager Exoplanet Survey (GPIES) is a multi-year direct
imaging survey of 600 stars to discover and characterize young Jovian
exoplanets and their environments. We have developed an automated data
architecture to process and index all data related to the survey uniformly. An
automated and flexible data processing framework, which we term the Data
Cruncher, combines multiple data reduction pipelines together to process all
spectroscopic, polarimetric, and calibration data taken with GPIES. With no
human intervention, fully reduced and calibrated data products are available
less than an hour after the data are taken to expedite follow-up on potential
objects of interest. The Data Cruncher can run on a supercomputer to reprocess
all GPIES data in a single day as improvements are made to our data reduction
pipelines. A backend MySQL database indexes all files, which are synced to the
cloud, and a front-end web server allows for easy browsing of all files
associated with GPIES. To help observers, quicklook displays show reduced data
as they are processed in real time, and chatbots on Slack post observing
information as well as reduced data products. Together, the GPIES automated
data processing architecture reduces our workload, provides real-time data
reduction, optimizes our observing strategy, and maintains a homogeneously
reduced dataset to study planet occurrence and instrument performance.
Comment: 21 pages, 3 figures, accepted in JATI
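The abstract describes the architecture rather than code, but the core watch-reduce-index loop such a pipeline implies can be sketched generically. Everything below is hypothetical: the real system uses multiple reduction pipelines, a MySQL index, cloud syncing, and Slack bots rather than this toy SQLite watcher.

```python
import sqlite3
import time
from pathlib import Path

db = sqlite3.connect("gpi_index.db")  # stand-in for the MySQL index
db.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, status TEXT)")

def reduce_frame(raw: Path) -> Path:
    # Hypothetical placeholder for the actual reduction pipelines.
    product = raw.with_suffix(".reduced.fits")
    product.write_bytes(raw.read_bytes())
    return product

def watch(raw_dir: Path, poll_s: float = 5.0):
    """Poll for new raw frames, reduce them, and index the results.
    Runs forever, like a daemonized 'Data Cruncher' would."""
    while True:
        for raw in raw_dir.glob("*.fits"):
            if raw.name.endswith(".reduced.fits"):
                continue  # skip our own output products
            if db.execute("SELECT 1 FROM files WHERE path = ?",
                          (str(raw),)).fetchone():
                continue  # already processed and indexed
            reduce_frame(raw)
            db.execute("INSERT INTO files VALUES (?, ?)", (str(raw), "reduced"))
            db.commit()
        time.sleep(poll_s)
```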