987 research outputs found

    Methodology of teaching choreographic disciplines

    Get PDF
The aim of the course is to provide master's students with a coherent and logically consistent system of knowledge about the didactics of training highly qualified personnel, and to present the concept, theoretical foundations, methods, and methodology of teaching choreographic disciplines in the higher education system.

    Robust Reflection Removal with Flash-only Cues in the Wild

    Full text link
We propose a simple yet effective reflection-free cue for robust reflection removal from a pair of flash and ambient (no-flash) images. The reflection-free cue exploits a flash-only image obtained by subtracting the ambient image from the corresponding flash image in raw data space. The flash-only image is equivalent to an image taken in a dark environment with only the flash on. This flash-only image is visually reflection-free and thus provides robust cues for inferring the reflection in the ambient image. Since the flash-only image usually has artifacts, we further propose a dedicated model that not only utilizes the reflection-free cue but also avoids introducing artifacts, which helps accurately estimate reflection and transmission. Our experiments on real-world images with various types of reflection demonstrate the effectiveness of our model with reflection-free flash-only cues: our model outperforms state-of-the-art reflection removal approaches by more than 5.23 dB in PSNR. We extend our approach to handheld photography to address the misalignment between the flash and no-flash pair. With misaligned training data and the alignment module, our aligned model outperforms our previous version by more than 3.19 dB in PSNR on a misaligned dataset. We also study using linear RGB images as training data. Our source code and dataset are publicly available at https://github.com/ChenyangLEI/flash-reflection-removal.
    Comment: Extension of CVPR 2021 paper [arXiv:2103.04273], submitted to TPAMI. Our source code and dataset are publicly available at http://github.com/ChenyangLEI/flash-reflection-removal
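    The core cue above is just a raw-space subtraction. Below is a minimal, hedged sketch of that step; the function name, the synthetic inputs, and the assumption of linear, pixel-aligned raw frames are illustrative and not taken from the authors' released code.

```python
# Minimal sketch of the reflection-free "flash-only" cue described above.
# Assumes two linear raw-space images of the same scene: `ambient` (no
# flash) and `flash` (flash fired), already aligned and demosaiced.
import numpy as np

def flash_only_image(flash_raw: np.ndarray, ambient_raw: np.ndarray) -> np.ndarray:
    """Subtract the ambient image from the flash image in linear raw space.

    Because raw measurements are linear in scene radiance, the difference
    keeps only the light contributed by the flash. Scenes reflected in
    glass receive (almost) no flash light, so the difference is visually
    reflection-free.
    """
    diff = flash_raw.astype(np.float64) - ambient_raw.astype(np.float64)
    return np.clip(diff, 0.0, None)  # negative residue is sensor noise

# Toy usage with synthetic data (real inputs would be raw camera frames):
ambient = np.random.rand(64, 64, 3)                 # transmission + reflection
flash = ambient + 0.5 * np.random.rand(64, 64, 3)   # extra flash-lit signal
cue = flash_only_image(flash, ambient)              # reflection-free cue
```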

    New Perspective on Passively Quenched Single Photon Avalanche Diodes: Effect of Feedback on Impact Ionization

    Get PDF
Single-photon avalanche diodes (SPADs) are primary devices in photon-counting systems used in quantum cryptography, time-resolved spectroscopy, and photon-counting optical communication. SPADs convert each photo-generated electron-hole pair into a measurable current via an avalanche of impact ionizations. In this paper, a stochastically self-regulating avalanche model for passively quenched SPADs is presented. The model predicts, in qualitative agreement with experiments, three important phenomena that traditional models are unable to predict: (1) an oscillatory behavior of the persistent avalanche current; (2) an exponential (memoryless) decay of the probability density function of the stochastic quenching time of the persistent avalanche current; and (3) a fast collapse of the avalanche current, under strong feedback conditions, preventing the development of a persistent avalanche current. The model specifically captures the effect of the load's feedback on the stochastic avalanche multiplication, an effect believed to be key in breaking today's counting-rate barrier in the 1.55 μm detection window.
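    The interplay of load feedback and stochastic multiplication described above can be illustrated with a toy branching-process simulation. This is a deliberately simplified caricature, not the paper's model: carriers branch in pairs with an ionization probability that drops as the instantaneous carrier count (the current) loads down the overvoltage. All names and parameter values below are assumptions.

```python
# Toy Monte Carlo: passive quenching as a self-regulating branching process.
import numpy as np

rng = np.random.default_rng(0)

def simulate_quench_time(p0=0.55, feedback=0.01, n0=10, max_steps=100_000):
    """Return the step at which the avalanche dies out (quenching time).

    p0:       ionization probability at full overvoltage (assumed value)
    feedback: how strongly the current suppresses the overvoltage;
              a larger value plays the role of a larger load resistor
    """
    n = n0
    for t in range(max_steps):
        if n == 0:
            return t
        # Load feedback: more current -> lower overvoltage -> lower p.
        p = p0 / (1.0 + feedback * n)
        # Each carrier ionizes with probability p, creating a new
        # electron-hole pair, so survivors branch in pairs.
        n = 2 * rng.binomial(n, p)
    return max_steps  # never quenched within the horizon: persistent

# Weak feedback: the avalanche self-regulates around a quasi-steady level.
weak = [simulate_quench_time(feedback=0.01) for _ in range(200)]
# Strong feedback: the avalanche collapses within a few steps.
strong = [simulate_quench_time(feedback=0.5) for _ in range(200)]
print("mean quench time, weak feedback:", np.mean(weak))
print("mean quench time, strong feedback:", np.mean(strong))
```

    In this caricature, weak feedback yields long-lived currents whose quenching times show a roughly exponential (memoryless) tail, while strong feedback collapses the avalanche almost immediately, loosely echoing phenomena (2) and (3) above.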

    VGSG: Vision-Guided Semantic-Group Network for Text-based Person Search

    Full text link
Text-based Person Search (TBPS) aims to retrieve images of a target pedestrian indicated by a textual description. It is essential for TBPS to extract fine-grained local features and align them across modalities. Existing methods rely on external tools or heavy cross-modal interaction to achieve explicit alignment of cross-modal fine-grained features, which is inefficient and time-consuming. In this work, we propose a Vision-Guided Semantic-Group Network (VGSG) for text-based person search that extracts well-aligned fine-grained visual and textual features. In VGSG, we develop a Semantic-Group Textual Learning (SGTL) module and a Vision-guided Knowledge Transfer (VGKT) module to extract textual local features under the guidance of visual local clues. In SGTL, to obtain local textual representations, we group textual features along the channel dimension based on the semantic cues of language expression, which encourages similar semantic patterns to be grouped implicitly without external tools. In VGKT, vision-guided attention is employed to extract visual-related textual features, which are inherently aligned with visual cues and termed vision-guided textual features. Furthermore, we design a relational knowledge transfer, including a vision-language similarity transfer and a class-probability transfer, to adaptively propagate information from the vision-guided textual features to the semantic-group textual features. With the help of relational knowledge transfer, VGKT can align semantic-group textual features with the corresponding visual features without external tools or complex pairwise interaction. Experimental results on two challenging benchmarks demonstrate its superiority over state-of-the-art methods.
    Comment: Accepted to IEEE TI
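    The two named ideas, channel-wise semantic grouping and vision-guided attention over textual tokens, can be sketched compactly. The PyTorch snippet below is a hedged illustration; the shapes, group count, and all module and function names are assumptions rather than the authors' implementation.

```python
# Hedged sketch of (1) grouping textual features along the channel
# dimension into semantic groups and (2) vision-guided attention that
# pools textual tokens with visual local features as queries.
import torch
import torch.nn as nn

class SemanticGroupTextual(nn.Module):
    """Split token features into K channel groups and pool each group."""
    def __init__(self, dim=512, groups=8):
        super().__init__()
        assert dim % groups == 0
        self.groups = groups
        self.proj = nn.Linear(dim, dim)

    def forward(self, text_feats):             # (B, L, D) token features
        B, L, D = text_feats.shape
        x = self.proj(text_feats)
        x = x.view(B, L, self.groups, D // self.groups)
        # Each channel group aggregates tokens independently, letting
        # similar semantic patterns cluster without external parsers.
        return x.mean(dim=1)                    # (B, K, D/K) group features

def vision_guided_attention(visual_locals, text_tokens):
    """Pool textual tokens with visual local features as queries.

    visual_locals: (B, N, D) local visual features (e.g. patch tokens)
    text_tokens:   (B, L, D) textual token features
    Returns (B, N, D) vision-guided textual features, one per region.
    """
    scale = visual_locals.shape[-1] ** 0.5
    attn = torch.softmax(
        visual_locals @ text_tokens.transpose(1, 2) / scale, dim=-1)  # (B,N,L)
    return attn @ text_tokens

# Toy usage with random features in place of real encoder outputs:
B, N, L, D = 2, 49, 32, 512
v = torch.randn(B, N, D)
t = torch.randn(B, L, D)
grouped = SemanticGroupTextual(D, 8)(t)        # (2, 8, 64)
guided = vision_guided_attention(v, t)         # (2, 49, 512)
```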