165 research outputs found
Linkage between ABO Blood Type and Occupation: Evidence from Japanese Politicians and Athletes
In Asian countries such as Japan, South Korea, China and Taiwan, many studies on the relationship between ABO blood type and personality have been conducted. Recent estimates suggest that more than half of Japanese, Korean and Taiwanese people regard this relationship as legitimate. Therefore, when data from these countries are used in personality tests, it is theoretically difficult to eliminate the effects of the “contamination of knowledge,” even if differences are found. To avoid this issue, this study examined the linkage between ABO blood type and occupation in Japan. The results showed that personality traits corresponding to blood type appeared in the data of each of the three groups of politicians and athletes, and all differences were statistically significant, indicating a clear relationship between blood type and personality. Additionally, it is necessary to consider the influence of social background.
Recognition of Heat-Induced Food State Changes by Time-Series Use of Vision-Language Model for Cooking Robot
Cooking tasks are characterized by large changes in the state of the food,
which is one of the major challenges in robot execution of cooking tasks. In
particular, cooking using a stove to apply heat to the foodstuff causes many
special state changes that are not seen in other tasks, making it difficult to
design a recognizer. In this study, we propose a unified method by which robots
recognize changes in the cooking state, using a vision-language model that can
discriminate open-vocabulary objects in a time-series manner. We collected
data on four typical state changes in cooking using a real robot and confirmed
the effectiveness of the proposed method. We also compared the conditions and
discussed the types of natural language prompts and the image regions that are
suitable for recognizing the state changes.
Comment: Accepted at IAS18-202
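As a rough illustration of the time-series idea above, the per-frame “yes” scores returned by a VQA-style query (e.g., “Is the butter melted?”) can be smoothed and thresholded to locate the frame where the state change occurs. The sketch below assumes such scores are already available; `detect_change_frame` and the threshold are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: locating a heat-induced state change from a time series
# of per-frame VQA "yes" scores (hypothetical; a real system would obtain
# these scores by querying a vision-language model on each camera frame).

def moving_average(xs, k=3):
    """Smooth noisy per-frame scores with a trailing moving average."""
    return [sum(xs[max(0, i - k + 1):i + 1]) / len(xs[max(0, i - k + 1):i + 1])
            for i in range(len(xs))]

def detect_change_frame(yes_scores, threshold=0.5, k=3):
    """Return the first frame index whose smoothed score crosses the
    threshold, or None if no state change is detected."""
    for i, s in enumerate(moving_average(yes_scores, k)):
        if s >= threshold:
            return i
    return None

# Example: simulated scores for "Is the butter melted?" over 8 frames.
scores = [0.1, 0.2, 0.1, 0.3, 0.7, 0.8, 0.9, 0.9]
print(detect_change_frame(scores))  # -> 5
```

Smoothing before thresholding suppresses single-frame misclassifications by the model, at the cost of a small detection delay.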
Robotic Applications of Pre-Trained Vision-Language Models to Various Recognition Behaviors
In recent years, a number of models that learn the relations between vision
and language from large datasets have been released. These models perform a
variety of tasks, such as answering questions about images, retrieving
sentences that best correspond to images, and finding regions in images that
correspond to phrases. Although there are some examples, the connection between
these pre-trained vision-language models and robotics is still weak. If they
are directly connected to robot motions, they lose their versatility due to the
embodiment of the robot and the difficulty of data collection, and become
inapplicable to a wide range of bodies and situations. Therefore, in this
study, we categorize and summarize the methods to utilize the pre-trained
vision-language models flexibly and easily in a way that the robot can
understand, without directly connecting them to robot motions. We discuss how
to use these models for robot motion selection and motion planning without
re-training the models. We consider five types of methods to extract
information understandable for robots, and show the results of state
recognition, object recognition, affordance recognition, relation recognition,
and anomaly detection based on the combination of these five methods. We expect
that this study will add flexibility and ease-of-use, as well as new
applications, to the recognition behavior of existing robots.
Binary State Recognition by Robots using Visual Question Answering of Pre-Trained Vision-Language Model
Recognition of the current state is indispensable for the operation of a
robot. There are various states to be recognized, such as whether an elevator
door is open or closed, whether an object has been grasped correctly, and
whether the TV is turned on or off. Until now, these states have been
recognized by programmatically describing the state of a point cloud or raw
image, by annotating and learning images, by using special sensors, etc. In
contrast to these methods, we apply Visual Question Answering (VQA) from a
Pre-Trained Vision-Language Model (PTVLM) trained on a large-scale dataset, to
such binary state recognition. This idea allows us to intuitively describe
state recognition in language without any re-training, thereby improving the
recognition ability of robots in a simple and general way. We summarize various
techniques in questioning methods and image processing, and clarify their
properties through experiments.
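A minimal sketch of the questioning idea above: a yes/no question is put to a VQA call and the free-form answer is binarized. The `ask_vqa` interface and the word lists are assumptions for illustration, not the PTVLM's actual API.

```python
# Hedged sketch: turning a free-form VQA answer into a binary state.
# `ask_vqa(image, question) -> str` is a hypothetical stand-in for a
# pre-trained vision-language model's VQA interface.

YES_WORDS = {"yes", "open", "on", "true"}
NO_WORDS = {"no", "closed", "off", "false"}

def answer_to_bool(answer):
    """Map a VQA answer string to True/False, or None if ambiguous."""
    token = answer.strip().lower().rstrip(".")
    if token in YES_WORDS:
        return True
    if token in NO_WORDS:
        return False
    return None

def recognize_state(image, question, ask_vqa):
    """Query the model with a yes/no question and binarize its answer."""
    return answer_to_bool(ask_vqa(image, question))

# Usage with a stubbed model that always answers "Yes."
stub = lambda image, question: "Yes."
print(recognize_state(None, "Is the elevator door open?", stub))  # -> True
```

Because the question is plain language, changing what is recognized (door open, object grasped, TV on) requires only editing the string, with no re-training.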
VQA-based Robotic State Recognition Optimized with Genetic Algorithm
State recognition of objects and environment in robots has been conducted in
various ways. In most cases, this is executed by processing point clouds,
learning images with annotations, and using specialized sensors. In contrast,
in this study, we propose a state recognition method that applies Visual
Question Answering (VQA) in a Pre-Trained Vision-Language Model (PTVLM) trained
from a large-scale dataset. By using VQA, it is possible to intuitively
describe robotic state recognition in spoken language. On the other hand,
there are various possible ways to ask about the same event, and the
performance of state recognition differs depending on the question. Therefore,
in order to improve the performance of state recognition using VQA, we search
for an appropriate combination of questions using a genetic algorithm. We show
that our system can recognize not only the open/closed state of a refrigerator
door and the on/off state of a display, but also the open/closed state of a
transparent door and the state of water, which have been difficult to recognize.
Comment: Accepted at ICRA202
Individualized male dress shirt adjustments using a novel method for measuring shoulder shape
Article. INTERNATIONAL JOURNAL OF CLOTHING SCIENCE AND TECHNOLOGY, 29(2):215-225 (2017).
One year of continuous measurements of soil CH4 and CO2 fluxes in a Japanese cypress forest: Temporal and spatial variations associated with Asian monsoon rainfall
We examined the effects of Asian monsoon rainfall on CH[4] absorption of water-unsaturated forest soil. We conducted a 1-year continuous measurement of soil CH[4] and CO[2] fluxes with automated chamber systems in three plots with different soil characteristics and water content to investigate how temporal variations in CH[4] fluxes vary with the soil environment. CH[4] absorption was reduced by the “Baiu” summer rainfall event and peaked during the subsequent hot, dry period. Although CH[4] absorption and CO[2] emission typically increased as soil temperature increased, the temperature dependence of CH[4] varied more than that of CO[2], possibly due to the changing balance of activities between methanotrophs and methanogens occurring over a wide temperature range, which was strongly affected by soil water content. At short time intervals (30 min), the responses of CH[4] and CO[2] fluxes to rainfall differed for each plot. In a dry soil plot with a thick humus layer, both fluxes decreased abruptly at the peak of rainfall intensity. After rainfall, CO[2] emission increased quickly, while CH[4] absorption increased gradually. Release of accumulated CO[2] underground and the restriction and recovery of CH[4] and CO[2] exchange between soil and air determined flux responses to rainfall. In a wet soil plot and a dry soil plot with a thinner humus layer, abrupt decreases in CH[4] fluxes were not observed. Consequently, the Asian monsoon rainfall strongly influenced temporal variations in CH[4] fluxes, and the differences in flux responses to environmental factors among plots caused large variability in annual budgets of CH[4] fluxes.
Measurement of methane flux over an evergreen coniferous forest canopy using a relaxed eddy accumulation system with tuneable diode laser spectroscopy detection
Very few studies have conducted long-term observations of methane (CH4) flux over forest canopies. In this study, we continuously measured CH4 fluxes over an evergreen coniferous (Japanese cypress) forest canopy throughout 1 year, using a micrometeorological relaxed eddy accumulation (REA) system with tuneable diode laser spectroscopy (TDLS) detection. The Japanese cypress forest, which is a common forest type in warm-temperate Asian monsoon regions with a wet summer, switched seasonally between a sink and a source of CH4, probably because of competition between methanogens and methanotrophs, which are both influenced by soil conditions (e.g., soil temperature and soil moisture). At hourly to daily timescales, the CH4 fluxes were sensitive to rainfall, probably because CH4 emission increased and/or absorption decreased during and after rainfall. The observed canopy-scale fluxes showed complex behaviours beyond those expected from previous plot-scale measurements, and the CH4 fluxes changed from sink to source and vice versa.
Synergistic effect of surface phosphorylation and micro-roughness on enhanced osseointegration ability of poly(ether ether ketone) in the rabbit tibia
This study aimed to investigate the osseointegration ability of poly(ether ether ketone) (PEEK) implants with modified surface roughness and/or surface chemistry. The roughened surface was prepared by a sandblast method, and the phosphate groups on the substrates were introduced by a two-step chemical reaction. The in vitro osteogenic activity of rat mesenchymal stem cells (MSCs) on the developed substrates was assessed by measuring cell proliferation, alkaline phosphatase activity, osteocalcin expression, and bone-like nodule formation. Surface roughening alone did not improve MSC responses. However, phosphorylation of smooth substrates increased cell responses, which were further elevated in combination with surface roughening. Moreover, in a rabbit tibia implantation model, this combined surface modification significantly enhanced the bone-to-implant contact ratio and the corresponding bone-to-implant bonding strength at 4 and 8 weeks post-implantation, whereas modification of surface roughness or surface chemistry alone did not. This study demonstrates that the combination of surface roughening and chemical modification on PEEK significantly promotes cell responses and osseointegration ability in a synergistic manner, both in vitro and in vivo. Therefore, this is a simple and promising technique for improving the poor osseointegration ability of PEEK-based orthopedic/dental implants.
Semantic Scene Difference Detection in Daily Life Patroling by Mobile Robots using Pre-Trained Large-Scale Vision-Language Model
It is important for daily life support robots to detect changes in their
environment and perform tasks. In the field of anomaly detection in computer
vision, probabilistic and deep learning methods have been used to calculate the
image distance. These methods calculate distances by focusing on image pixels.
In contrast, this study aims to detect semantic changes in the daily life
environment using the current development of large-scale vision-language
models. Using its Visual Question Answering (VQA) model, we propose a method to
detect semantic changes by applying multiple questions to a reference image and
a current image and obtaining answers in the form of sentences. Unlike deep
learning-based methods in anomaly detection, this method does not require any
training or fine-tuning, is not affected by noise, and is sensitive to semantic
state changes in the real world. In our experiments, we demonstrated the
effectiveness of this method by applying it to a patrol task in a real-life
environment using a mobile robot, Fetch Mobile Manipulator. In the future, it
may be possible to add explanatory power to changes in the daily life
environment through spoken language.
Comment: Accepted to 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)
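The multi-question comparison described above can be sketched as follows: the same questions are asked of a reference image and the current image through a hypothetical `ask_vqa` interface, and questions whose normalized answers differ are reported as semantic changes. The interface and stubbed answers are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: semantic scene-difference detection by comparing VQA
# answers on a reference image and a current image.

def normalize(answer):
    """Canonicalize an answer string for comparison."""
    return answer.strip().lower().rstrip(".")

def detect_differences(reference, current, questions, ask_vqa):
    """Return (question, reference_answer, current_answer) triples for
    every question whose normalized answers differ between the images."""
    diffs = []
    for q in questions:
        ref_ans = ask_vqa(reference, q)
        cur_ans = ask_vqa(current, q)
        if normalize(ref_ans) != normalize(cur_ans):
            diffs.append((q, ref_ans, cur_ans))
    return diffs

# Stubbed model: answers keyed by (image label, question).
answers = {
    ("ref", "Is the door open?"): "No.",
    ("cur", "Is the door open?"): "Yes.",
    ("ref", "Is the light on?"): "Yes.",
    ("cur", "Is the light on?"): "Yes.",
}
stub = lambda image, q: answers[(image, q)]
print(detect_differences("ref", "cur",
                         ["Is the door open?", "Is the light on?"], stub))
# -> [('Is the door open?', 'No.', 'Yes.')]
```

Because differences are detected at the level of sentence answers rather than pixels, the comparison is unaffected by lighting or viewpoint noise that leaves the answers unchanged.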