Lifelong Ensemble Learning based on Multiple Representations for Few-Shot Object Recognition
Service robots are increasingly integrating into our daily lives to help us
with various tasks. In such environments, robots frequently encounter new
objects while working and need to learn them in an open-ended
fashion. Furthermore, such robots must be able to recognize a wide range of
object categories. In this paper, we present a lifelong ensemble learning
approach based on multiple representations to address the few-shot object
recognition problem. In particular, we form ensemble methods based on deep
representations and handcrafted 3D shape descriptors. To facilitate lifelong
learning, each approach is equipped with a memory unit for storing and
retrieving object information instantly. The proposed model is suitable for
open-ended learning scenarios where the number of 3D object categories is not
fixed and can grow over time. We have performed extensive sets of experiments
to assess the performance of the proposed approach in offline and open-ended
scenarios. For evaluation purposes, in addition to real object datasets, we
generate a large synthetic household objects dataset consisting of 27000 views
of 90 objects. Experimental results demonstrate the effectiveness of the
proposed method on online few-shot 3D object recognition tasks, as well as its
superior performance over the state-of-the-art open-ended learning approaches.
Furthermore, our results show that while ensemble learning is modestly
beneficial in offline settings, it is significantly beneficial in lifelong
few-shot learning situations. Additionally, we demonstrated the effectiveness
of our approach in both simulated and real-robot settings, where the robot
rapidly learned new categories from limited examples. A video of our
experiments is available online at: https://youtu.be/nxVrQCuYGdI.
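The abstract does not spell out how the memory units and the ensemble vote interact, so the sketch below is only one plausible reading, under the assumption that each representation keeps an instance-based memory of labelled feature vectors, scores a query by nearest-neighbour cosine similarity, and the per-representation scores are summed to pick a category. The class names, similarity measure, and voting rule are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): one instance-based memory
# unit per representation; recognition is nearest-neighbour per representation,
# combined by summing similarity scores across representations.
import numpy as np
from collections import defaultdict

class MemoryUnit:
    """Stores feature vectors per category and scores queries by cosine similarity."""
    def __init__(self):
        self.instances = defaultdict(list)  # category -> list of feature vectors

    def store(self, category, feature):
        self.instances[category].append(np.asarray(feature, dtype=float))

    def scores(self, feature):
        q = np.asarray(feature, dtype=float)
        q = q / (np.linalg.norm(q) + 1e-12)
        out = {}
        for cat, feats in self.instances.items():
            F = np.stack(feats)
            F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
            out[cat] = float(np.max(F @ q))  # similarity to the best-matching stored view
        return out

class LifelongEnsemble:
    """One memory unit per representation (e.g. deep features, 3D shape descriptors)."""
    def __init__(self, extractors):
        self.extractors = extractors  # name -> callable(object_view) -> feature vector
        self.memories = {name: MemoryUnit() for name in extractors}

    def learn(self, category, object_view):
        for name, extract in self.extractors.items():
            self.memories[name].store(category, extract(object_view))

    def recognize(self, object_view):
        votes = defaultdict(float)
        for name, extract in self.extractors.items():
            for cat, score in self.memories[name].scores(extract(object_view)).items():
                votes[cat] += score  # ensemble: sum scores across representations
        return max(votes, key=votes.get) if votes else None
```

In this reading, open-ended learning reduces to calling learn() whenever a new labelled view arrives, so new categories can be introduced at any time without retraining the feature extractors.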
Chlorine and Bromine Isotope Fractionation of Halogenated Organic Pollutants on Gas Chromatography Columns
Compound-specific chlorine/bromine isotope analysis (CSIA-Cl/Br) has become a
useful approach for degradation pathway investigation and source apportionment of
halogenated organic pollutants (HOPs). CSIA-Cl/Br is usually conducted by gas
chromatography-mass spectrometry (GC-MS), which could be negatively impacted by
chlorine and bromine isotope fractionation of HOPs on GC columns. In this
study, 31 organochlorines and 4 organobromines were systematically investigated
in terms of Cl/Br isotope fractionation on GC columns using GC-double-focusing
magnetic-sector high-resolution MS (GC-DFS-HRMS). On-column chlorine/bromine
isotope fractionation behaviors of the HOPs were explored, presenting various
isotope fractionation modes and extents. Twenty-nine HOPs exhibited inverse
isotope fractionation, and only polychlorinated biphenyl-138 (PCB-138) and
PCB-153 presented normal isotope fractionation. No observable isotope
fractionation was found for the remaining four HOPs, i.e., PCB-101,
1,2,3,7,8-pentachlorodibenzofuran, PCB-180, and 2,3,7,8-tetrachlorodibenzofuran.
The isotope fractionation extents of different HOPs varied from below the
observable threshold (0.50%) to 7.31% (PCB-18). The mechanisms of the on-column
chlorine/bromine isotope fractionation were tentatively interpreted with the
Craig-Gordon model and a modified two-film model. Inverse and normal isotope
effects may jointly contribute to the total isotope effects and thus determine
the directions and extents of the isotope fractionation. Based on the main
results of this study, proposals for CSIA-Cl/Br research are provided to
improve the precision and accuracy of CSIA-Cl/Br results. The
findings of this study will shed light on the development of CSIA-Cl/Br methods
using GC-MS techniques, and will help research that applies CSIA-Cl/Br to
investigate the environmental behaviors and pollution sources of HOPs.
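For reference, the delta notation conventionally used to report Cl/Br isotope compositions in compound-specific isotope analysis is sketched below; this is the standard convention rather than a definition quoted from the abstract, which does not specify how the on-column fractionation extents (0.50% to 7.31%) were quantified.

```latex
% Standard delta notation for compound-specific Cl/Br isotope analysis
% (conventional usage, not a definition taken from this paper):
\[
  \delta^{37}\mathrm{Cl}
    = \frac{\left({}^{37}\mathrm{Cl}/{}^{35}\mathrm{Cl}\right)_{\mathrm{sample}}}
           {\left({}^{37}\mathrm{Cl}/{}^{35}\mathrm{Cl}\right)_{\mathrm{standard}}} - 1,
  \qquad
  \delta^{81}\mathrm{Br}
    = \frac{\left({}^{81}\mathrm{Br}/{}^{79}\mathrm{Br}\right)_{\mathrm{sample}}}
           {\left({}^{81}\mathrm{Br}/{}^{79}\mathrm{Br}\right)_{\mathrm{standard}}} - 1,
\]
% with both quantities usually reported multiplied by 1000 (per mil).
```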
Fine-grained Object Categorization for Service Robots
A robot working in a human-centered environment is frequently confronted with
fine-grained objects that must be distinguished from one another. Fine-grained
visual classification (FGVC) remains a challenging problem due to large
intra-category dissimilarity and small inter-category dissimilarity.
Furthermore, flaws such as illumination effects and insufficient information
persist in fine-grained RGB datasets. We propose a novel deep mixed
multi-modality approach based on Vision Transformer (ViT) and Convolutional
Neural Network (CNN) to improve the performance of FGVC. Furthermore, we
generate two synthetic fine-grained RGB-D datasets consisting of 13 car objects
with 720 views and 120 shoes with 7200 sample views. Finally, to assess the
performance of the proposed approach, we conducted several experiments using
fine-grained RGB-D datasets. Experimental results show that our method
outperformed other baselines in terms of recognition accuracy, and achieved
93.40% and 91.67% recognition accuracy on the shoe and car datasets,
respectively. We have made the fine-grained RGB-D datasets publicly available
for the benefit of the research community.
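The abstract does not describe how the ViT and CNN branches are combined, so the following PyTorch sketch only illustrates one plausible reading: a late-fusion two-branch model in which a ViT encodes the RGB view, a CNN encodes the depth view, and the concatenated features feed a linear classifier. The class name, fusion by concatenation, and the backbone interfaces are assumptions.

```python
# Hedged sketch (not the paper's exact architecture): two-branch RGB-D model with
# late fusion by concatenation. Backbones (e.g. torchvision models with their
# classification heads removed) are supplied by the caller.
import torch
import torch.nn as nn

class MixedMultiModalFGVC(nn.Module):
    def __init__(self, vit_backbone: nn.Module, cnn_backbone: nn.Module,
                 vit_dim: int, cnn_dim: int, num_classes: int):
        super().__init__()
        self.vit = vit_backbone   # maps an RGB batch  (B, 3, H, W) -> (B, vit_dim)
        self.cnn = cnn_backbone   # maps a depth batch (B, 3, H, W) -> (B, cnn_dim)
        self.classifier = nn.Sequential(
            nn.LayerNorm(vit_dim + cnn_dim),
            nn.Linear(vit_dim + cnn_dim, num_classes),
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Late fusion: concatenate the modality-specific embeddings.
        fused = torch.cat([self.vit(rgb), self.cnn(depth)], dim=1)
        return self.classifier(fused)
```

Depth maps are often replicated to three channels or colourised so that standard RGB backbones can be reused; whether the paper does this is not stated in the abstract.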
Research on the Cascade Vehicle Detection Method Based on CNN
This paper introduces an adaptive method for detecting vehicles ahead under complex weather conditions. In vehicle detection from images captured by in-vehicle cameras, backgrounds with complicated weather, such as rainy and snowy days, increase the difficulty of target detection. To improve the accuracy and robustness of vehicle detection for driverless cars, a cascade vehicle detection method combining multi-feature fusion and a convolutional neural network (CNN) is proposed. First, local binary pattern, Haar-like, and histogram of oriented gradients features of the front vehicle are extracted; then principal component analysis dimension reduction and serial fusion are applied to the extracted features. Next, the fused features are fed into a support vector machine classifier for preliminary screening, and a CNN model validates the filtered candidates to complete the cascade detection. Finally, an integrated dataset drawn from BDD, Udacity, and other datasets is used to test the proposed method. The recall rate of 98.69% is better than that of traditional feature-based algorithms, and a recall rate of 97.32% in a complex driving environment indicates that the algorithm is robust.
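A hedged sketch of the cascade pipeline described above, not the authors' implementation: HOG and LBP features are serially fused (the Haar-like step is omitted for brevity), reduced with PCA, screened by an SVM, and surviving windows are verified by a CNN supplied by the caller. The thresholds, feature parameters, window size, and the `CascadeVehicleDetector` name are illustrative assumptions.

```python
# Sketch of a multi-feature-fusion + CNN cascade detector (assumptions noted inline).
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def fused_features(gray_window):
    """Serial fusion: concatenate HOG and LBP-histogram features of one window.

    gray_window: 2-D grayscale crop, e.g. 64x64 (the size is an assumption).
    """
    h = hog(gray_window, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray_window, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist])

class CascadeVehicleDetector:
    def __init__(self, cnn_verifier, n_components=64):
        # n_components assumes more training windows than components.
        self.pca = PCA(n_components=n_components)
        self.svm = SVC(kernel="linear", probability=True)
        self.cnn = cnn_verifier  # callable(window) -> vehicle probability

    def fit(self, windows, labels):
        X = np.stack([fused_features(w) for w in windows])
        self.svm.fit(self.pca.fit_transform(X), labels)

    def detect(self, windows, svm_thresh=0.5, cnn_thresh=0.5):
        X = self.pca.transform(np.stack([fused_features(w) for w in windows]))
        keep = self.svm.predict_proba(X)[:, 1] >= svm_thresh   # preliminary screening
        return [w for w, k in zip(windows, keep)
                if k and self.cnn(w) >= cnn_thresh]             # CNN validation stage
```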