287 research outputs found
Analysis of Influencing Factors of Tablet Consumer Satisfaction Based on Online Comment Mining
How to extract effective information that affects consumer satisfaction from online comments has become a hot issue in customer behavior research. This article is based on data mining of online comments; the research objects are the top-selling tablets on the JD platform from October to December 2018. We started by analyzing influencing factors such as goods, after-sales service, and logistics, and crawled online reviews of nearly 3,000 tablet computers from five major brands. We first used the jieba word segmentation tool to process the user comments, and used TF-IDF to weight the frequency of different words in the comments to determine the main keywords. Secondly, we set up a user sentiment dictionary to determine the sentiment index of each review, and combined the keywords and sentiment index to obtain the degree of consumer satisfaction with different influencing factors. Finally, we imported the quantified characteristic factors into Clementine 12.0 and established a Bayesian network model of customer satisfaction, thereby obtaining a ranking of the importance of each factor to product sales. To improve model robustness, we adopted a multivariate linear model to check the accuracy of the output results. Our research can not only help merchants formulate effective product and service sales strategies, but also help customers experience better products and services.
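The keyword-extraction step above can be sketched with a plain TF-IDF scoring pass. This is a minimal illustration, not the paper's pipeline: the comments below are hypothetical English tokens standing in for jieba's Chinese segmentation output, and the scoring uses the textbook TF-IDF formula rather than any particular library's smoothing.

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_k=3):
    """Rank the words of each comment by TF-IDF, as in the mining step.

    `docs` is a list of pre-tokenized comments (jieba would supply
    the tokens for Chinese text; here tokens are given directly).
    """
    n = len(docs)
    # Document frequency: number of comments containing each word.
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    keywords = []
    for doc in docs:
        tf = Counter(doc)
        scores = {w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf}
        keywords.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
    return keywords

# Hypothetical tokenized reviews touching goods/logistics factors.
comments = [
    ["screen", "bright", "battery", "good"],
    ["battery", "weak", "shipping", "fast"],
    ["screen", "dim", "shipping", "slow"],
]
print(tfidf_keywords(comments, top_k=2))
```

Words shared by every comment get an IDF of zero, so the distinctive terms ("bright", "good") surface as that comment's keywords.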
CCD photometric study of the W UMa-type binary II CMa in the field of Berkeley 33
The CCD photometric data of the EW-type binary, II CMa, which is a contact
star in the field of the middle-aged open cluster Berkeley 33, are presented.
The complete R light curve was obtained. In the present paper, using the five
CCD epochs of light minimum (three of them are calculated from Mazur et al.
(1993)'s data and two from our new data), the orbital period P was revised to
0.22919704 days. The complete R light curve was analyzed by using the 2003
version of the W-D (Wilson-Devinney) program. It is found that this is a contact
system. The high mass ratio and the low contact factor indicate that the system
has just evolved into the marginal contact stage.
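The revised orbital period above follows from the epochs of light minimum: fitting the linear ephemeris T(E) = T0 + P * E to the observed minimum times by least squares yields the period as the slope. A minimal sketch, using hypothetical minimum timings rather than the paper's five CCD epochs:

```python
def refine_period(epochs, times):
    """Least-squares fit of the linear ephemeris T(E) = T0 + P * E.

    `epochs` are integer cycle counts E, `times` the observed moments
    of light minimum (e.g. HJD offsets). Returns (T0, P).
    """
    n = len(epochs)
    mean_e = sum(epochs) / n
    mean_t = sum(times) / n
    cov = sum((e - mean_e) * (t - mean_t) for e, t in zip(epochs, times))
    var = sum((e - mean_e) ** 2 for e in epochs)
    period = cov / var          # slope = orbital period P
    t0 = mean_t - period * mean_e
    return t0, period

# Hypothetical timings generated with P ~ 0.2292 d plus small noise.
epochs = [0, 1000, 5000, 20000, 20005]
times = [0.0000, 229.2010, 1145.9990, 4583.9980, 4585.1445]
t0, period = refine_period(epochs, times)
print(round(period, 7))
```

With real data the residuals of this fit (the O-C diagram) would also reveal any period change.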
Hierarchical generative modelling for autonomous robots
Humans can produce complex whole-body motions when interacting with their
surroundings, by planning, executing and combining individual limb movements.
We investigated this fundamental aspect of motor control in the setting of
autonomous robotic operations. We approach this problem with hierarchical
generative modelling equipped with multi-level planning for autonomous task
completion, mimicking the deep temporal architecture of human motor control.
Here, temporal depth refers to the nested time scales at which successive
levels of a forward or generative model unfold, for example, delivering an
object requires a global plan to contextualise the fast coordination of
multiple local movements of limbs. This separation of temporal scales also
motivates hierarchical approaches in robotics and control. Specifically, to achieve versatile sensorimotor
control, it is advantageous to hierarchically structure the planning and
low-level motor control of individual limbs. We use numerical and physical
simulation to conduct experiments and to establish the efficacy of this
formulation. Using a hierarchical generative model, we show how a humanoid
robot can autonomously complete a complex task that necessitates a holistic use
of locomotion, manipulation, and grasping. Specifically, we demonstrate the
ability of a humanoid robot that can retrieve and transport a box, open and
walk through a door to reach the destination, approach and kick a football,
while showing robust performance in presence of body damage and ground
irregularities. Our findings demonstrated the effectiveness of using
human-inspired motor control algorithms, and our method provides a viable
hierarchical architecture for the autonomous completion of challenging
goal-directed tasks.
PolyBuilding: Polygon Transformer for End-to-End Building Extraction
We present PolyBuilding, a fully end-to-end polygon Transformer for building
extraction. PolyBuilding directly predicts vector representations of buildings
from remote sensing images. It builds upon an encoder-decoder transformer
architecture and simultaneously outputs building bounding boxes and polygons.
Given a set of polygon queries, the model learns the relations among them and
encodes context information from the image to predict the final set of building
polygons with fixed vertex numbers. Corner classification is performed to
distinguish the building corners from the sampled points, which can be used to
remove redundant vertices along the building walls during inference. A 1-d
non-maximum suppression (NMS) is further applied to reduce vertex redundancy
near the building corners. With the refinement operations, polygons with
regular shapes and low complexity can be effectively obtained. Comprehensive
experiments are conducted on the CrowdAI dataset. Quantitative and qualitative
results show that our approach outperforms prior polygonal building extraction
methods by a large margin. It also achieves a new state-of-the-art in terms of
pixel-level coverage, instance-level precision and recall, and geometry-level
properties (including contour regularity and polygon complexity).
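The 1-d NMS step described above can be sketched as a scan along the polygon's vertex sequence: a vertex survives only if its corner-classification score beats its cyclic neighbours within a window. The window size, threshold, and scores below are illustrative assumptions, not the paper's settings.

```python
def nms_1d(scores, window=2, threshold=0.5):
    """1-d non-maximum suppression along a cyclic vertex sequence.

    Keeps vertex i only if its corner score exceeds `threshold` and is
    the maximum within +/- `window` neighbours; the indexing wraps
    because the sequence traces a closed polygon.
    """
    n = len(scores)
    keep = []
    for i, s in enumerate(scores):
        if s < threshold:
            continue  # sampled point on a wall, not a corner
        neighbourhood = [scores[(i + d) % n] for d in range(-window, window + 1)]
        if s == max(neighbourhood):
            keep.append(i)
    return keep

# Hypothetical corner scores for 8 sampled polygon points.
scores = [0.9, 0.2, 0.1, 0.8, 0.3, 0.95, 0.1, 0.2]
print(nms_1d(scores))  # → [0, 5]
```

Vertex 3 scores 0.8 but is shadowed by the stronger corner at index 5 inside its window, so only the two dominant corners remain.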
Learning Motor Skills of Reactive Reaching and Grasping of Objects
Reactive grasping of objects is an essential capability for autonomous robot manipulation, yet it remains challenging to learn sensorimotor control that coordinates coherent hand-finger motions and stays robust against disturbances and failures. This work proposed a deep reinforcement learning based scheme to train feedback control policies that coordinate reaching and grasping actions in the presence of uncertainties. We formulated geometric metrics and task-oriented quantities to design the reward, which enabled efficient exploration of grasping policies. Further, to improve the success rate, we deployed key initial states of difficult hand-finger poses to train the policies to overcome potential failures caused by challenging configurations. Extensive simulation validations and benchmarks demonstrated that the learned policy was robust in grasping both static and moving objects. Moreover, the policy generated successful failure recoveries within a short time in difficult configurations and was robust to synthetic noise in the state feedback that was unseen during training.
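A reward combining geometric metrics with task-oriented terms, as described above, can be sketched in a few lines. Everything here is an illustrative assumption rather than the paper's actual reward: a dense term pulls the gripper toward the object, and sparse bonuses reward closing near the target and holding it.

```python
import math

def reach_grasp_reward(gripper_pos, object_pos, fingers_closed, holding):
    """Toy shaped reward mixing geometric and task-oriented terms.

    The weights and thresholds are hypothetical; real reward design
    would be tuned to the robot, the object set, and the task horizon.
    """
    dist = math.dist(gripper_pos, object_pos)
    reward = -dist                      # dense geometric term: approach
    if dist < 0.05 and fingers_closed:  # closing near the object
        reward += 0.5
    if holding:                         # sparse task-success bonus
        reward += 10.0
    return reward

print(reach_grasp_reward((0.0, 0.0, 0.2), (0.0, 0.0, 0.0), False, False))
```

The dense distance term gives the policy a gradient to follow during exploration, while the sparse hold bonus dominates once a grasp succeeds.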
SwinVRNN: A Data-Driven Ensemble Forecasting Model via Learned Distribution Perturbation
Data-driven approaches for medium-range weather forecasting have recently
shown extraordinary promise for ensemble forecasting, thanks to their fast
inference speed compared to traditional numerical weather prediction (NWP)
models, but their forecast accuracy can hardly match the state-of-the-art
operational ECMWF Integrated Forecasting System (IFS) model. Previous
data-driven attempts achieve ensemble forecasts using simple perturbation
methods, like initial-condition perturbation and Monte Carlo dropout. However,
they mostly suffer from unsatisfactory ensemble performance, which is arguably
attributed to the sub-optimal ways of applying perturbation. We propose a Swin
Transformer-based Variational Recurrent Neural Network (SwinVRNN), which is a
stochastic weather forecasting model combining a SwinRNN predictor with a
perturbation module. SwinRNN is designed as a Swin Transformer-based recurrent
neural network, which predicts future states deterministically. Furthermore, to
model the stochasticity in prediction, we design a perturbation module
following the Variational Auto-Encoder paradigm to learn multivariate Gaussian
distributions of a time-variant stochastic latent variable from data. Ensemble
forecasting can be easily achieved by perturbing the model features leveraging
noise sampled from the learned distribution. We also compare four categories of
perturbation methods for ensemble forecasting, i.e. fixed distribution
perturbation, learned distribution perturbation, MC dropout, and multi-model
ensemble. Comparisons on the WeatherBench dataset show that the learned distribution
perturbation method using our SwinVRNN model achieves superior forecast
accuracy and reasonable ensemble spread due to joint optimization of the two
targets. More notably, SwinVRNN surpasses operational IFS on surface variables
of 2-m temperature and 6-hourly total precipitation at all lead times up to
five days.
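The learned-distribution perturbation described above amounts to a reparameterized draw from a diagonal Gaussian whose mean and log-variance come from the VAE-style module, added to the deterministic predictor's features. A minimal pure-Python sketch with hypothetical feature and latent dimensions:

```python
import math
import random

def sample_perturbation(mu, log_var, rng):
    """Reparameterized draw z = mu + sigma * eps with eps ~ N(0, 1)."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def ensemble_forecast(features, mu, log_var, members, rng=None):
    """Build an ensemble by adding sampled noise to the model features.

    Each member perturbs the same deterministic features with an
    independent draw from the learned distribution.
    """
    rng = rng or random.Random(0)
    ensemble = []
    for _ in range(members):
        z = sample_perturbation(mu, log_var, rng)
        ensemble.append([f + zi for f, zi in zip(features, z)])
    return ensemble

features = [1.0, 2.0, 3.0]      # hypothetical deterministic features
mu = [0.0, 0.0, 0.0]            # learned mean of the latent noise
log_var = [-2.0, -2.0, -2.0]    # learned log-variance (sigma ~ 0.37)
members = ensemble_forecast(features, mu, log_var, members=5)
print(len(members), len(members[0]))
```

Because the spread of each feature is learned rather than fixed, the ensemble width can adapt to how uncertain the model is, which is the claimed advantage over fixed-distribution perturbation.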
UniNeXt: Exploring A Unified Architecture for Vision Recognition
Vision Transformers have shown great potential in computer vision tasks. Most
recent works have focused on elaborating the spatial token mixer for
performance gains. However, we observe that a well-designed general
architecture can significantly improve the performance of the entire backbone,
regardless of which spatial token mixer is equipped. In this paper, we propose
UniNeXt, an improved general architecture for the vision backbone. To verify
its effectiveness, we instantiate the spatial token mixer with various typical
and modern designs, including both convolution and attention modules. Compared
with the architecture in which they are first proposed, our UniNeXt
architecture can steadily boost the performance of all the spatial token
mixers, and narrows the performance gap among them. Surprisingly, our UniNeXt
equipped with naive local window attention even outperforms the previous
state-of-the-art. Interestingly, the ranking of these spatial token mixers also
changes under our UniNeXt, suggesting that an excellent spatial token mixer may
be stifled due to a suboptimal general architecture, which further shows the
importance of the study on the general architecture of vision backbone. All
models and code will be made publicly available.
Force-Guided High-Precision Grasping Control of Fragile and Deformable Objects Using sEMG-Based Force Prediction
Regulating contact forces with high precision is crucial for grasping and
manipulating fragile or deformable objects. We aim to utilize the dexterity of
human hands to regulate the contact forces for robotic hands and exploit human
sensory-motor synergies in a wearable and non-invasive way. We extracted force
information from the electric activities of skeletal muscles during their
voluntary contractions through surface electromyography (sEMG). We built a
regression model based on a Neural Network to predict the gripping force from
the preprocessed sEMG signals and achieved high accuracy (R2 = 0.982). Based on
the force command predicted from human muscles, we developed a force-guided
control framework, where force control was realized via an admittance
controller that tracked the predicted gripping force reference to grasp
delicate and deformable objects. We demonstrated the effectiveness of the
proposed method on a set of representative fragile and deformable objects from
daily life, all of which were successfully grasped without any damage or
deformation. Comment: 8 pages, 11 figures, to be published in IEEE Robotics and Automation
Letters. For the attached video, see https://youtu.be/0AotKaWFJD
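The reported accuracy (R2 = 0.982) is the coefficient of determination of the force-regression model. A short sketch of how R2 is computed, using hypothetical force readings rather than the paper's sEMG data:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical gripping forces (N) vs. model predictions.
force = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.1, 1.9, 3.0, 4.2, 4.9]
print(round(r_squared(force, pred), 3))  # → 0.993
```

An R2 near 1 means the predicted gripping force tracks the measured force closely enough to serve as the reference for the admittance controller.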