Human oocyte-derived sperm chemoattractant is a hydrophobic molecule associated with a carrier protein
Objective: To characterize the nature of the human oocyte-derived chemoattractant.
Design: Laboratory in-vitro study.
Setting: Academic research institute.
Patients: Ten healthy sperm donors. Oocyte-conditioned media from women undergoing IVF treatment due to male factor infertility.
Interventions: Sperm samples were processed by the migration–sedimentation technique. Oocyte-conditioned media were collected 2-3 h after oocyte stripping.
Main Outcome Measure(s): Sperm chemotaxis was assayed in a µ-slide chamber according to the direction of swimming relative to that of the chemical gradient.
Results: Oocyte-conditioned media treated with proteases did not lose their chemotactic activity; on the contrary, they became more active, with the activity shifted to lower concentrations. When oocyte-conditioned media were subjected to hexane extraction, chemotactic activity was found in both the hydrophobic and aqueous phases. Known mammalian sperm chemoattractants were ruled out as oocyte-derived chemoattractants.
Conclusions: Our results suggest that the oocyte-derived chemoattractant is a hydrophobic non-peptide molecule which, in an oocyte-conditioned medium, is associated with a carrier protein that enables its presence in a hydrophilic environment.
How Object Information Improves Skeleton-based Human Action Recognition in Assembly Tasks
As the use of collaborative robots (cobots) in industrial manufacturing
continues to grow, human action recognition for effective human-robot
collaboration becomes increasingly important. This ability is crucial for
cobots to act autonomously and assist in assembly tasks. Recently,
skeleton-based approaches have often been used, as they tend to generalize better to
different people and environments. However, when processing skeletons alone,
information about the objects a human interacts with is lost. Therefore, we
present a novel approach of integrating object information into skeleton-based
action recognition. We enhance two state-of-the-art methods by treating object
centers as further skeleton joints. Our experiments on the assembly dataset
IKEA ASM show that our approach improves the performance of these
state-of-the-art methods to a large extent when combining skeleton joints with
objects predicted by a state-of-the-art instance segmentation model. Our
research sheds light on the benefits of combining skeleton joints with object
information for human action recognition in assembly tasks. We analyze the
effect of the object detector on the combination for action classification and
discuss the important factors that must be taken into account.
Comment: IEEE International Joint Conference on Neural Networks (IJCNN) 202
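The core idea of the entry above, treating detected object centers as additional skeleton joints, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function name, the fixed number of object slots, and the zero-padding convention are all assumptions.

```python
import numpy as np

def augment_skeleton(joints, object_centers, num_object_slots=3):
    """Append up to `num_object_slots` object centers to a joint array.

    joints:         (J, 2) array of 2D body-joint coordinates
    object_centers: (K, 2) array of object centers, e.g. from an instance
                    segmentation model; unused slots are zero-padded so the
                    downstream skeleton model sees a fixed-size input
    """
    slots = np.zeros((num_object_slots, 2), dtype=joints.dtype)
    k = min(len(object_centers), num_object_slots)
    if k:
        slots[:k] = object_centers[:k]
    # Result has shape (J + num_object_slots, 2) and can be fed to any
    # skeleton-based action recognition model expecting joint coordinates.
    return np.concatenate([joints, slots], axis=0)
```

A skeleton-based classifier trained on such augmented inputs can then exploit where task-relevant objects are relative to the body, which is the information lost when processing skeletons alone.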
Fusing Hand and Body Skeletons for Human Action Recognition in Assembly
As collaborative robots (cobots) continue to gain popularity in industrial
manufacturing, effective human-robot collaboration becomes crucial. Cobots
should be able to recognize human actions to assist with assembly tasks and act
autonomously. To achieve this, skeleton-based approaches are often used due to
their ability to generalize across various people and environments. Although
body skeleton approaches are widely used for action recognition, they may not
be accurate enough for assembly actions where the worker's fingers and hands
play a significant role. To address this limitation, we propose a method in
which less detailed body skeletons are combined with highly detailed hand
skeletons. We investigate CNNs and transformers, the latter of which are
particularly adept at extracting and combining important information from both
skeleton types using attention. This paper demonstrates the effectiveness of
our proposed approach in enhancing action recognition in assembly scenarios.
Comment: International Conference on Artificial Neural Networks (ICANN) 202
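The attention-based fusion described above, letting a transformer combine information from body and hand skeleton tokens, can be illustrated with a minimal self-attention sketch. This is an assumption-laden toy, not the paper's architecture: one attention layer, no projections or multiple heads, and joint embeddings are taken as given.

```python
import numpy as np

def fuse_tokens(body, hands):
    """Fuse body and hand joint embeddings via plain self-attention.

    body:  (B_j, d) embeddings of coarse body-skeleton joints
    hands: (H_j, d) embeddings of detailed hand-skeleton joints
    Returns attended features of shape (B_j + H_j, d).
    """
    tokens = np.concatenate([body, hands], axis=0)
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)          # scaled dot-product
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    # Each output token is a weighted mix of body AND hand tokens, so
    # attention decides which skeleton type matters for each joint.
    return weights @ tokens
```

In a real model these attended features would be pooled and passed to an action classification head; the point of the sketch is only that attention mixes the two skeleton streams adaptively.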
A multi-modal person perception framework for socially interactive mobile service robots
In order to meet the increasing demands of mobile service robot applications, a dedicated perception module is an essential requirement for interaction with users in real-world scenarios. In particular, multi-sensor fusion and human re-identification are recognized as active research fronts. Through this paper we contribute to the topic and present a modular detection and tracking system that models the position and additional properties of persons in the surroundings of a mobile robot. The proposed system introduces a probability-based data association method that can incorporate, besides position, face and color-based appearance features in order to realize re-identification of persons when tracking gets interrupted. The system combines the results of various state-of-the-art image-based detection systems for person recognition, person identification and attribute estimation. This allows a stable estimate of a mobile robot's user, even in complex, cluttered environments with long-lasting occlusions. In our benchmark, we introduce a new measure for tracking consistency and show the improvements when face and appearance-based re-identification are combined. The tracking system was applied in a real-world application with a mobile rehabilitation assistant robot in a public hospital. The estimated states of persons are used for user-centered navigation behaviors, e.g., guiding or approaching a person, but also for realizing socially acceptable navigation in public environments.
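A probability-based data association step that combines a position likelihood with an appearance similarity, as described above, could look roughly like this. Everything here is a hypothetical sketch: the greedy matching strategy, the Gaussian position model, the dict-based track/detection structure, and all parameter values are assumptions, not the paper's method.

```python
import numpy as np

def associate(tracks, detections, pos_sigma=0.5, app_weight=0.3, gate=0.1):
    """Greedily assign each detection to its most likely track.

    tracks, detections: lists of dicts with 'pos' (2D position, np.array)
    and 'feat' (unit-norm appearance descriptor, np.array).
    Returns {detection_index: track_index}.
    """
    assignments, used = {}, set()
    for di, det in enumerate(detections):
        best, best_score = None, 0.0
        for ti, trk in enumerate(tracks):
            if ti in used:
                continue
            d2 = float(np.sum((det['pos'] - trk['pos']) ** 2))
            p_pos = np.exp(-d2 / (2 * pos_sigma ** 2))          # position likelihood
            p_app = max(float(det['feat'] @ trk['feat']), 0.0)  # appearance similarity
            score = (1 - app_weight) * p_pos + app_weight * p_app
            if score > best_score:
                best, best_score = ti, score
        if best is not None and best_score > gate:              # simple gating
            assignments[di] = best
            used.add(best)
    return assignments
```

Because the appearance term survives even when the position likelihood collapses, such a combined score lets a tracker re-identify a person after a long occlusion, which is the behavior the abstract reports improving.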
Mathematics for the exploration of requirements
The exploration of requirements is as complex as it is important in ensuring successful software production and a sound software life cycle. Increasingly, tool support is available for aiding such explorations. We use a toy example and a case study of modelling and analysing some requirements of the global assembly cache of .NET to illustrate the opportunities and challenges that mathematically founded exploration of requirements brings to the computer science and software engineering curricula.