Investigation and Improvement Strategies of College Students' Self-cognition
Professor Howard Gardner's theory of multiple intelligences holds that human intelligence comprises at least eight abilities, including linguistic intelligence, logical-mathematical intelligence, and introspective (intrapersonal) intelligence. Introspective intelligence is an individual's recognition of his or her own behavior and psychological state. It is essential for self-understanding and for constructing an accurate sense of self, and it plays a significant role in learning, employment, and personal development. The level of self-cognition develops differently at different stages. The article compiles a questionnaire based on Campbell's characterization of self-cognition. By collating and analyzing the collected data, it outlines the basic state of self-knowledge among Chinese college students in the emerging-adult stage: their self-awareness is maturing, but their self-evaluation runs high; their emotional management skills are average, their emotional expression is poor, and their moods fluctuate; they have clear learning goals, but the paths to achieving them are unclear; they have their own value systems, but those values are immature; and their career ideals run far ahead of, and out of line with, their professional abilities. The proposed countermeasures for improving college students' self-cognition are: educate students to build good interpersonal relationships; strengthen self-awareness education and career guidance; cultivate a positive emotional climate; and guide students to pursue self-improvement through multiple channels, promoting their healthy and harmonious development.
Visual Thinking Methods and Training in Video Production
"A picture is worth a thousand words". Internet plus has brought people into the era of picture reading. Pictures and videos are everywhere. And dynamic video has the characteristics of sound, sound and documentary. It has become a popular media form for the public. Therefore, mobile phone video shooting and production are convenient, and the popularization of video production and dissemination has become inevitable. However, the creation of artistic and innovative video works requires producers to master certain visual thinking methods in addition to film montage theories and techniques. The article briefly outlines the forming process of the concept of visual thinking, and proposes advanced methods of visual thinking: intuitive method, selection method, discovery method, and inquiry method. In the process of video production, some methods of visual thinking are analyzed through a case, such as the visualization of textual information, the figuration of image, the logic of concreteness, and the systematization of logic. We have studied practical visual thinking training methods, from the three stages of video production: script creation, shooting practice, and video packaging
Scenario-Adaptive Fine-Grained Personalization Network: Tailoring User Behavior Representation to the Scenario Context
Existing methods often adjust representations adaptively only after
aggregating user behavior sequences. This coarse-grained approach to
re-weighting the entire user sequence hampers the model's ability to accurately
model the user interest migration across different scenarios. To enhance the
model's capacity to capture user interests from historical behavior sequences
in each scenario, we develop a ranking framework named the Scenario-Adaptive
Fine-Grained Personalization Network (SFPNet), which designs a kind of
fine-grained method for multi-scenario personalized recommendations.
Specifically, SFPNet comprises a series of sequentially stacked blocks named
Scenario-Tailoring Blocks. Each block first deploys a parameter
personalization unit to integrate scenario information at a coarse-grained
level by redefining fundamental features. Subsequently, we consolidate
scenario-adaptively adjusted feature representations to serve as context
information. By employing residual connection, we incorporate this context into
the representation of each historical behavior, allowing for context-aware
fine-grained customization of the behavior representations at the
scenario-level, which in turn supports scenario-aware user interest modeling.
Comment: Accepted by SIGIR 2024, 10 pages, 5 figures, 5 tables
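The abstract describes each Scenario-Tailoring Block as a parameter-personalization unit followed by a residual injection of scenario context into every historical behavior. The sketch below is one hypothetical reading of that design in PyTorch; the scale-and-shift personalization unit, the module names, and the dimensions are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a single Scenario-Tailoring Block. The
# parameter-personalization unit is assumed to be a scenario-conditioned
# scale-and-shift over the base features; all names are illustrative.
import torch
import torch.nn as nn


class ScenarioTailoringBlock(nn.Module):
    def __init__(self, feat_dim: int, scen_dim: int):
        super().__init__()
        # Parameter personalization unit: maps the scenario embedding to a
        # per-feature scale and shift (coarse-grained feature redefinition).
        self.scale = nn.Linear(scen_dim, feat_dim)
        self.shift = nn.Linear(scen_dim, feat_dim)
        # Projects the adjusted features into a context vector that is added
        # to every historical behavior via a residual connection.
        self.context_proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, base_feats, behavior_seq, scen_emb):
        # base_feats:   (batch, feat_dim)          fundamental features
        # behavior_seq: (batch, seq_len, feat_dim) historical behaviors
        # scen_emb:     (batch, scen_dim)          scenario embedding
        adjusted = base_feats * (1 + self.scale(scen_emb)) + self.shift(scen_emb)
        context = self.context_proj(adjusted).unsqueeze(1)   # (batch, 1, feat_dim)
        tailored_seq = behavior_seq + context                # residual, per behavior
        return adjusted, tailored_seq


# Usage sketch: several blocks would be stacked sequentially, as described.
block = ScenarioTailoringBlock(feat_dim=64, scen_dim=16)
feats = torch.randn(8, 64)
seq = torch.randn(8, 20, 64)
scen = torch.randn(8, 16)
feats, seq = block(feats, seq, scen)
```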
Improving Offline-to-Online Reinforcement Learning with Q Conditioned State Entropy Exploration
Studying how to fine-tune offline reinforcement learning (RL) pre-trained
policy is profoundly significant for enhancing the sample efficiency of RL
algorithms. However, directly fine-tuning pre-trained policies often results in
sub-optimal performance. This is primarily due to the distribution shift
between offline pre-training and online fine-tuning stages. Specifically, the
distribution shift limits the acquisition of effective online samples,
ultimately impacting the online fine-tuning performance. To narrow the
distribution shift between the offline and online stages, we propose Q
conditioned state entropy (QCSE) as an intrinsic reward. Specifically, QCSE
maximizes the state entropy of all samples individually, considering their
respective Q values. This approach encourages exploration of low-frequency
samples while penalizing high-frequency ones, and implicitly achieves State
Marginal Matching (SMM), thereby ensuring optimal performance and resolving the
asymptotic sub-optimality of constraint-based approaches. Additionally, QCSE
can seamlessly integrate into various RL algorithms, enhancing online
fine-tuning performance. To validate our claim, we conduct extensive
experiments, and observe significant improvements with QCSE (about 13% for CQL
and 8% for Cal-QL). Furthermore, we extend our experiments to other
algorithms, affirming the generality of QCSE.
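As a rough illustration of a Q-conditioned state-entropy intrinsic reward, the sketch below bins samples by their Q values and uses a k-nearest-neighbour particle estimate of state entropy within each bin, so sparse (low-frequency) states receive larger rewards than dense ones. The binning scheme, estimator, and function names are assumptions for illustration and may differ from the paper's exact formulation.

```python
# Hedged NumPy sketch of a Q-conditioned state-entropy intrinsic reward.
import numpy as np


def qcse_intrinsic_reward(states, q_values, k=5, n_bins=10):
    """Per-sample reward: log distance to the k-th nearest neighbour among
    states whose Q values fall in the same quantile bin (assumed scheme)."""
    states = np.asarray(states, dtype=np.float64)
    q_values = np.asarray(q_values, dtype=np.float64)
    rewards = np.zeros(len(states))
    # Condition on Q: estimate entropy separately within each Q-value bin.
    edges = np.quantile(q_values, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    bins = np.digitize(q_values, edges)
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        if len(idx) <= k:
            continue
        group = states[idx]
        # Pairwise distances inside the bin; column 0 of the sorted matrix is
        # the point itself, so column k is the k-th nearest neighbour.
        d = np.linalg.norm(group[:, None, :] - group[None, :, :], axis=-1)
        kth = np.sort(d, axis=1)[:, k]
        # Particle entropy estimate: sparse (low-frequency) regions get larger
        # rewards, dense (high-frequency) regions get smaller ones.
        rewards[idx] = np.log(kth + 1e-8)
    return rewards


# Usage sketch on random data.
r = qcse_intrinsic_reward(np.random.randn(256, 4), np.random.randn(256))
```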
Context-Former: Stitching via Latent Conditioned Sequence Modeling
Offline reinforcement learning (RL) algorithms can learn better
decision-making than the behavior policies by stitching suboptimal
trajectories into better ones. Meanwhile, Decision Transformer (DT)
abstracts RL as sequence modeling, showing competitive performance on
offline RL benchmarks. However, recent studies demonstrate that DT lacks
stitching capacity, so equipping DT with this capability is vital for further
improving its performance. To endow DT with stitching capability,
we abstract trajectory stitching as expert matching and introduce our approach,
ContextFormer, which integrates contextual information-based imitation learning
(IL) and sequence modeling to stitch sub-optimal trajectory fragments by
emulating the representations of a limited number of expert trajectories. To
validate our approach, we conduct experiments from two perspectives: 1) We
conduct extensive experiments on D4RL benchmarks under the settings of IL, and
experimental results demonstrate ContextFormer can achieve competitive
performance in multiple IL settings. 2) More importantly, we conduct a
comparison of ContextFormer with various competitive DT variants using
identical training datasets. The experimental results show ContextFormer's
superiority: it outperforms all other variants, demonstrating its strong
performance.
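One way to picture latent-conditioned stitching is sketched below: a context encoder embeds trajectory fragments into a latent, a policy conditions on that latent, and an auxiliary loss pulls offline latents toward expert latents. The GRU encoder, the MLP policy standing in for the transformer, and the specific matching loss are illustrative assumptions, not ContextFormer's actual architecture or objective.

```python
# Hedged sketch of latent-conditioned sequence modeling with expert matching.
import torch
import torch.nn as nn


class ContextEncoder(nn.Module):
    """Embeds a trajectory of concatenated (obs, act) steps into a latent."""

    def __init__(self, obs_act_dim, latent_dim):
        super().__init__()
        self.net = nn.GRU(obs_act_dim, latent_dim, batch_first=True)

    def forward(self, traj):                      # (batch, T, obs_act_dim)
        _, h = self.net(traj)
        return h[-1]                              # (batch, latent_dim)


class LatentConditionedPolicy(nn.Module):
    """MLP stand-in for the transformer policy, conditioned on the latent."""

    def __init__(self, obs_dim, act_dim, latent_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, latent):
        return self.net(torch.cat([obs, latent], dim=-1))


def contextformer_loss(policy, encoder, offline_traj, expert_traj, obs, act):
    z_offline = encoder(offline_traj)
    with torch.no_grad():
        z_expert = encoder(expert_traj)
    # Behaviour cloning conditioned on the latent, plus an assumed expert
    # matching term that pulls sub-optimal fragments toward expert latents.
    bc = ((policy(obs, z_offline) - act) ** 2).mean()
    match = ((z_offline - z_expert.mean(0, keepdim=True)) ** 2).mean()
    return bc + match


# Usage sketch with random tensors.
enc = ContextEncoder(obs_act_dim=10, latent_dim=16)
pol = LatentConditionedPolicy(obs_dim=6, act_dim=4, latent_dim=16)
loss = contextformer_loss(pol, enc,
                          offline_traj=torch.randn(8, 20, 10),
                          expert_traj=torch.randn(4, 20, 10),
                          obs=torch.randn(8, 6),
                          act=torch.randn(8, 4))
```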
Nanoporous Structure of Sintered Metal Powder Heat Exchanger in Dilution Refrigeration: A Numerical Study
We use LAMMPS to randomly pack hard spheres as a model of the heat exchanger,
where the hard spheres represent the sintered metal particles. We simulate the
heat exchanger under different sphere radii and different packing fractions of
the metal particles and study the resulting pore space. Using this simulation
method to improve the performance of the heat exchanger, we find that when the
packing fraction is 65%, the optimal sintered particle radius is 30–35 nm.
Comment: 5 pages, 3 figures, one table
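Independent of the LAMMPS packing itself, pore-space quantities like those studied here can be estimated from the resulting sphere coordinates. The sketch below computes the packing fraction and a Monte Carlo pore-size sample for a periodic box of equal-radius spheres; the synthetic input, box size, and probe count are assumptions for illustration only, not the authors' workflow.

```python
# Hedged post-processing sketch for a periodic packing of equal spheres.
import numpy as np


def packing_fraction(centres, radius, box_len):
    # Fraction of the box volume occupied by spheres (overlaps ignored).
    sphere_vol = len(centres) * (4.0 / 3.0) * np.pi * radius ** 3
    return sphere_vol / box_len ** 3


def pore_radii(centres, radius, box_len, n_probe=2000, seed=None):
    """Distance from random probe points in the pore space to the nearest
    sphere surface, using the minimum-image convention for a periodic box."""
    rng = np.random.default_rng(seed)
    probes = rng.uniform(0.0, box_len, size=(n_probe, 3))
    d = probes[:, None, :] - centres[None, :, :]
    d -= box_len * np.round(d / box_len)          # periodic wrap
    dist = np.linalg.norm(d, axis=-1).min(axis=1) - radius
    return dist[dist > 0.0]                        # keep probes outside spheres


# Usage sketch with made-up coordinates (nm); in practice the centres would
# come from the packing output rather than random, possibly overlapping points.
box = 1000.0
centres = np.random.uniform(0.0, box, size=(500, 3))
print(packing_fraction(centres, radius=32.0, box_len=box))
print(np.mean(pore_radii(centres, radius=32.0, box_len=box)))
```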
