Methodology of teaching choreographic disciplines
The purpose of the course is to provide master's students with a coherent and logically consistent system of knowledge about the didactics of training highly qualified personnel, and to set out the concepts, theoretical foundations, methods, and methodology of teaching choreographic disciplines in the system of higher education.
Robust Reflection Removal with Flash-only Cues in the Wild
We propose a simple yet effective reflection-free cue for robust reflection
removal from a pair of flash and ambient (no-flash) images. The reflection-free
cue exploits a flash-only image obtained by subtracting the ambient image from
the corresponding flash image in raw data space. The flash-only image is
equivalent to an image taken in a dark environment with only a flash on. This
flash-only image is visually reflection-free and thus can provide robust cues
to infer the reflection in the ambient image. Since the flash-only image
usually has artifacts, we further propose a dedicated model that not only
utilizes the reflection-free cue but also avoids introducing artifacts, which
helps accurately estimate reflection and transmission. Our experiments on
real-world images with various types of reflection demonstrate the
effectiveness of our model with reflection-free flash-only cues: our model
outperforms state-of-the-art reflection removal approaches by more than 5.23 dB
in PSNR. We extend our approach to handheld photography to address the
misalignment between the flash and no-flash pair. With misaligned training data
and the alignment module, our aligned model outperforms our previous version by
more than 3.19 dB in PSNR on a misaligned dataset. We also study using linear
RGB images as training data. Our source code and dataset are publicly available
at https://github.com/ChenyangLEI/flash-reflection-removal.
Comment: Extension of CVPR 2021 paper [arXiv:2103.04273], submitted to TPAMI.
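The reflection-free cue itself is a one-step computation. The sketch below (Python; the function and variable names are ours, not the authors', and it assumes a spatially aligned flash/ambient pair of linear raw images normalized to [0, 1]) shows the subtraction in raw data space described above:

```python
import numpy as np

def flash_only_image(flash_raw: np.ndarray, ambient_raw: np.ndarray) -> np.ndarray:
    """Compute the flash-only cue from an aligned flash/no-flash raw pair.

    Assumption: both inputs are linear raw data (no tone mapping or gamma),
    scaled to [0, 1]; the subtraction is only physically meaningful in this
    linear space.
    """
    diff = flash_raw.astype(np.float64) - ambient_raw.astype(np.float64)
    # With a linear sensor the flash only adds light, so negative values
    # are noise; clipping them yields the visually reflection-free image.
    return np.clip(diff, 0.0, 1.0)
```

The clipped residual noise is one of the artifacts the abstract mentions, which is why the dedicated model is designed to exploit this cue without letting its artifacts leak into the estimated transmission.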
New Perspective on Passively Quenched Single Photon Avalanche Diodes: Effect of Feedback on Impact Ionization
Single-photon avalanche diodes (SPADs) are primary devices in photon-counting systems used in quantum cryptography, time-resolved spectroscopy, and photon-counting optical communication. SPADs convert each photo-generated electron-hole pair to a measurable current via an avalanche of impact ionizations. In this paper, a stochastically self-regulating avalanche model for passively quenched SPADs is presented. The model predicts, in qualitative agreement with experiments, three important phenomena that traditional models are unable to predict. These are: (1) an oscillatory behavior of the persistent avalanche current; (2) an exponential (memoryless) decay of the probability density function of the stochastic quenching time of the persistent avalanche current; and (3) a fast collapse of the avalanche current, under strong feedback conditions, preventing the development of a persistent avalanche current. The model specifically captures the effect of the load's feedback on the stochastic avalanche multiplication, an effect believed to be key in breaking today's counting rate barrier in the 1.55 µm detection window
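To make the load-feedback idea concrete: the toy discrete-time simulation below (Python; every parameter and probability is invented for illustration, and this is not the paper's stochastic model) couples a carrier population to the voltage drop across the quenching resistor, so that a larger avalanche current lowers the overbias and with it the impact-ionization probability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented, illustrative parameters -- not fitted to any real device.
V_BIAS = 25.0              # applied bias (V)
V_BREAKDOWN = 20.0         # breakdown voltage (V)
R_LOAD = 200e3             # passive quenching (load) resistor (ohm)
DT = 1e-12                 # time step (s)
I_CARRIER = 1.6e-19 / DT   # toy current contributed by one carrier (A)
K_ION = 0.05               # ionization probability per volt of overbias
P_COLLECT = 0.2            # per-step probability a carrier is collected

def simulate(n_steps: int = 200_000) -> np.ndarray:
    """Simulate one avalanche with load feedback; returns the current trace (A)."""
    carriers = 1  # a single photo-generated electron-hole pair seeds the avalanche
    current = np.zeros(n_steps)
    for t in range(n_steps):
        i = carriers * I_CARRIER
        # Load feedback: avalanche current drops the diode voltage, which
        # lowers the overbias and hence the impact-ionization probability.
        overbias = max(V_BIAS - i * R_LOAD - V_BREAKDOWN, 0.0)
        p_ion = min(K_ION * overbias, 1.0)
        carriers += rng.binomial(carriers, p_ion) - rng.binomial(carriers, P_COLLECT)
        current[t] = i
        if carriers == 0:
            break  # the avalanche has quenched
    return current
```

With these toy numbers the population hovers where ionizations balance collections, and a random downward fluctuation eventually drives it to zero; that hitting-time picture is the intuition behind the memoryless quenching-time distribution and the feedback-driven collapse the model predicts.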
VGSG: Vision-Guided Semantic-Group Network for Text-based Person Search
Text-based Person Search (TBPS) aims to retrieve images of the target pedestrian
indicated by textual descriptions. It is essential for TBPS to extract
fine-grained local features and align them across modalities. Existing methods
utilize external tools or heavy cross-modal interaction to achieve explicit
alignment of cross-modal fine-grained features, which is inefficient and
time-consuming. In this work, we propose a Vision-Guided Semantic-Group Network
(VGSG) for text-based person search to extract well-aligned fine-grained visual
and textual features. In the proposed VGSG, we develop a Semantic-Group Textual
Learning (SGTL) module and a Vision-guided Knowledge Transfer (VGKT) module to
extract textual local features under the guidance of visual local clues. In
SGTL, in order to obtain the local textual representation, we group textual
features from the channel dimension based on the semantic cues of language
expression, which encourages similar semantic patterns to be grouped implicitly
without external tools. In VGKT, a vision-guided attention is employed to
extract visual-related textual features, which are inherently aligned with
visual cues and termed vision-guided textual features. Furthermore, we design a
relational knowledge transfer, including a vision-language similarity transfer
and a class probability transfer, to adaptively propagate information of the
vision-guided textual features to semantic-group textual features. With the
help of relational knowledge transfer, VGKT is capable of aligning
semantic-group textual features with corresponding visual features without
external tools and complex pairwise interaction. Experimental results on two
challenging benchmarks demonstrate its superiority over state-of-the-art
methods.
Comment: Accepted to IEEE TI
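For readers who want the mechanics, here is a minimal PyTorch sketch of the two ingredients (class names, shapes, and the fixed contiguous channel split are our simplifications; in VGSG the grouping follows learned semantic cues rather than a fixed split, and the knowledge-transfer losses are omitted):

```python
import torch
import torch.nn as nn

class SemanticGroupPooling(nn.Module):
    """Toy stand-in for SGTL: split text channels into groups, pool each group.

    Hypothetical simplification: a fixed contiguous split of the channel
    dimension; the real module groups channels by semantic cues.
    """
    def __init__(self, channels: int, num_groups: int):
        super().__init__()
        assert channels % num_groups == 0, "channels must divide evenly"
        self.num_groups = num_groups

    def forward(self, text_feat: torch.Tensor) -> torch.Tensor:
        # text_feat: (B, L, C) token features from a text encoder.
        b, l, c = text_feat.shape
        grouped = text_feat.view(b, l, self.num_groups, c // self.num_groups)
        # Max-pool over the sequence: one local representation per group.
        return grouped.amax(dim=1)  # (B, num_groups, C // num_groups)

class VisionGuidedAttention(nn.Module):
    """Toy stand-in for VGKT's attention: visual queries attend to text tokens."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vis_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # vis_feat: (B, Nv, D) visual local features; text_feat: (B, L, D).
        # Each output row is text re-expressed per visual query, i.e. a
        # vision-guided textual feature inherently aligned with visual cues.
        out, _ = self.attn(vis_feat, text_feat, text_feat)
        return out
```

A fuller version would add the relational knowledge transfer (vision-language similarity transfer and class-probability transfer) so that the semantic-group features inherit the alignment of the vision-guided ones without heavy cross-modal interaction.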