29 research outputs found

    Effectiveness of mobile learning languages of object-oriented programming

    Full text link
    Nowadays, traditional forms of instruction are mainly used in teaching object-oriented programming, but in today's world, where information technologies develop rapidly every day, there is a need to introduce additional methods and forms of learning. This emerging form of learning is mobile learning. The article considers the main directions and possibilities of mobile learning of object-oriented programming languages.

    FEATURES OF METHODOLOGICAL AND SUBJECT TRAINING OF FUTURE TEACHERS OF COMPUTER SCIENCE

    Full text link
    The article discusses the didactic potential of network services based on cloud technologies for use in teaching, as well as in the professional and pedagogical activities of future computer science teachers.

    Hearing Feelings: Affective Categorization of Music and Speech in Alexithymia, an ERP Study

    Get PDF
    Background: Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials. Methodology: Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing scores on alexithymia, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets. Conclusions: Our results suggest a reduced sensitivity to the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which a verbalization of emotional information is required.
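
    The N400 measure referred to in this abstract is, in essence, a mean voltage over a post-stimulus window, averaged across trials and compared between congruent and incongruent conditions. A minimal sketch of such an analysis using MNE-Python, where the file name, event codes, and channel picks are purely illustrative assumptions, not details from the paper:

    # Hypothetical sketch of an N400 mean-amplitude comparison; file name,
    # event codes, and channel names below are assumptions for illustration.
    import mne

    raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # assumed file
    events = mne.find_events(raw)
    event_id = {"congruent": 1, "incongruent": 2}  # assumed event codes

    # Epoch around target onset; baseline-correct with the pre-stimulus interval.
    epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                        baseline=(-0.2, 0.0), preload=True)

    # Average over trials to obtain one ERP per condition.
    evoked_cong = epochs["congruent"].average()
    evoked_incong = epochs["incongruent"].average()

    # Mean amplitude in a typical N400 window (300-500 ms) at centro-parietal sites.
    window = (0.3, 0.5)
    picks = ["Cz", "Pz"]  # assumed channel names
    for name, evoked in [("congruent", evoked_cong), ("incongruent", evoked_incong)]:
        amp = evoked.copy().pick(picks).crop(*window).data.mean()
        print(f"{name}: mean N400 amplitude = {amp * 1e6:.2f} µV")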

    Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    Get PDF
    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance ("Someone migged the pazing") uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
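
    The gaze measures described here (frequency and duration of looks within three time windows) amount to binning fixations by onset time and aggregating by prosody-face congruency. A minimal sketch, assuming a hypothetical fixation table whose column names and values are illustrative rather than taken from the study:

    # Illustrative sketch of time-window gaze aggregation; the fixation table,
    # column names, and labels are assumptions, not the study's actual data.
    import pandas as pd

    fixations = pd.DataFrame({
        "trial":       [1, 1, 1, 2, 2],
        "face":        ["fear", "anger", "fear", "happy", "neutral"],
        "congruent":   [True, False, True, True, False],  # face matches prosody?
        "onset_ms":    [150, 900, 1600, 300, 2700],
        "duration_ms": [400, 350, 500, 600, 450],
    })

    # Assign each fixation to one of the three analysis windows by its onset.
    bins = [0, 1250, 2500, 5000]
    labels = ["0-1250", "1250-2500", "2500-5000"]
    fixations["window"] = pd.cut(fixations["onset_ms"], bins=bins, labels=labels)

    # Total looking time and look counts per window, split by congruency.
    summary = (fixations
               .groupby(["window", "congruent"], observed=True)["duration_ms"]
               .agg(total_ms="sum", n_looks="count"))
    print(summary)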

    Electronic educational-methodical complex and features of its use in training teachers of mathematics

    No full text
    The article presents examples of an electronic educational-methodical complex for training future teachers of mathematics, connected with the strengthening of the role of computer mathematical systems, and describes the features of its use in mathematics teacher training.

    Understanding intention of movement from electroencephalograms

    No full text
    In this paper, we propose a new framework for understanding intention of movement that can be used in developing non-invasive brain-computer interfaces. The proposed method is based on extracting salient features from brain signals recorded while the subject is actually performing (or imagining) a wrist movement in different directions. Our method focuses on analysing the brain signals at the time preceding wrist movement, i.e. while the subject is preparing (or intending) to perform the movement. Feature selection and classification of the direction are performed using a wrapper method based on support vector machines (SVMs). The classification results show that we are able to discriminate the directions using features extracted from brain signals prior to movement. We then extract rules from the SVM classifiers to compare the features extracted for real and imaginary movements in an attempt to understand the mechanisms of intention of movement. Our new approach could be potentially useful in building brain-computer interfaces where a paralysed person could communicate with a wheelchair and steer it in the desired direction using a rule-based knowledge system based on understanding of the subject's intention to move through his/her brain signals.
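
    A wrapper method of the kind described evaluates candidate feature subsets by the classifier's own cross-validated performance, rather than by a filter criterion. A minimal sketch using scikit-learn's greedy forward selection around a linear SVM, with synthetic stand-in data in place of the paper's pre-movement EEG features:

    # Illustrative sketch of SVM-based wrapper feature selection; the data is a
    # synthetic stand-in for pre-movement EEG features, not the paper's signals.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))    # 200 trials x 30 candidate features
    y = rng.integers(0, 4, size=200)  # 4 movement directions

    svm = SVC(kernel="linear")

    # Wrapper selection: greedily add features, scoring each candidate subset
    # by the SVM's cross-validated accuracy.
    selector = SequentialFeatureSelector(svm, n_features_to_select=8,
                                         direction="forward", cv=5)
    selector.fit(X, y)
    selected = np.flatnonzero(selector.get_support())
    print("selected feature indices:", selected)

    # Classify movement directions using only the selected features.
    scores = cross_val_score(svm, X[:, selected], y, cv=5)
    print(f"mean CV accuracy: {scores.mean():.2f}")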