99 research outputs found
The composition of liquid atmospheric pressure matrix-assisted laser desorption/ionization matrices and its effect on ionization in mass spectrometry
New liquid atmospheric pressure (AP) matrix-assisted laser desorption/ionization (MALDI) matrices that produce predominantly multiply charged ions have been developed and evaluated with respect to their performance for peptide and protein analysis by mass spectrometry (MS). Both the chromophore and the viscous support liquid in these matrices were optimized for highest MS signal intensity, S/N values and maximum charge state. The best performance in both protein and peptide analysis was achieved employing light diols as matrix support liquids (e.g. ethylene glycol and propylene glycol). Investigating the influence of the chromophore, it was found that 2,5-dihydroxybenzoic acid resulted in a higher analyte ion signal intensity for the analysis of small peptides; however, larger molecules (>17 kDa) were undetectable. For larger molecules, a sample preparation based on α-cyano-4-hydroxycinnamic acid as the chromophore was developed and multiply protonated analytes with charge states of more than 50 were detected. Thus, for the first time it was possible to detect proteins as large as ~80 kDa by MALDI MS with a high number of charge states, i.e. at m/z values below 2000. Systematic investigations of various matrix support liquids have revealed a linear dependency between the laser threshold energy and the surface tension of the liquid MALDI sample.
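The reported linear dependency between laser threshold energy and surface tension is the kind of relationship one would verify with an ordinary least-squares fit. A minimal sketch follows; the surface-tension and threshold-energy values are invented for illustration and are not data from the study.

```python
# Hypothetical illustration: least-squares fit of laser threshold energy
# against the surface tension of candidate matrix support liquids.
# All numbers below are invented for demonstration only.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

surface_tension = [33.0, 40.0, 48.0, 58.0, 64.0]   # mN/m (invented)
threshold_energy = [1.1, 1.5, 1.9, 2.4, 2.7]       # a.u.  (invented)

slope, intercept = linear_fit(surface_tension, threshold_energy)
# A positive slope would indicate that liquids with higher surface
# tension require higher laser energies to reach the ablation threshold.
print(slope > 0)  # -> True for this invented data
```

A real analysis would of course also report the goodness of fit (e.g. R²) across all tested support liquids.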
The role of dispersal constraints in the assembly of salt-marsh communities
Esther Chang investigated the influence of seed-dispersal factors on the species composition of plant communities in salt marshes, in particular on Schiermonnikoog. Among other things, she concludes that the great capacity of storms to disperse seeds appears to promote the occurrence of species at more growth sites, but at the same time limits the number of individuals per species by washing many seeds out of source populations.
Predicting fatigue and psychophysiological test performance from speech for safety-critical environments
Automatic systems for estimating operator fatigue have application in safety-critical environments. A system which could estimate level of fatigue from speech would have application in domains where operators engage in regular verbal communication as part of their duties. Previous studies on the prediction of fatigue from speech have been limited because of their reliance on subjective ratings and because they lack comparison to other methods for assessing fatigue. In this paper, we present an analysis of voice recordings and psychophysiological test scores collected from seven aerospace personnel during a training task in which they remained awake for 60 h. We show that voice features and test scores are affected by both the total time spent awake and the time position within each subject's circadian cycle. However, we show that time spent awake and time-of-day information are poor predictors of the test results, while voice features can give good predictions of the psychophysiological test scores and sleep latency. Mean absolute errors of prediction are possible within about 17.5% for sleep latency and 5–12% for test scores. We discuss the implications for the use of voice as a means to monitor the effects of fatigue on cognitive performance in practical applications.
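The error figures quoted above are mean absolute errors expressed as a percentage of the score range. A small sketch of that metric, with invented scores and predictions purely for illustration:

```python
# Sketch of the error metric reported above: mean absolute error (MAE)
# of predicted psychophysiological test scores, as a percentage of the
# score range. All values are invented for illustration.

def mae_percent(actual, predicted, score_range):
    """MAE between actual and predicted scores, scaled to % of range."""
    errors = [abs(a - p) for a, p in zip(actual, predicted)]
    return 100.0 * (sum(errors) / len(errors)) / score_range

actual = [62.0, 55.0, 70.0, 48.0]     # hypothetical test scores
predicted = [58.0, 60.0, 66.0, 52.0]  # hypothetical model output

print(mae_percent(actual, predicted, score_range=100.0))  # -> 4.25
```

An MAE of 4.25% on a 0–100 scale would fall inside the 5–12% band the study reports for test scores.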
Production and analysis of multiply charged negative ions by liquid atmospheric pressure matrix-assisted laser desorption/ionization mass spectrometry
RATIONALE: Liquid AP-MALDI has been shown to enable the production of ESI-like multiply charged analyte ions with little sample consumption and long-lasting, robust ion yield for sensitive analysis by mass spectrometry. Previous reports have focused on positive ion production. Here, we report an initial optimisation of liquid AP-MALDI for ESI-like negative ion production and its application to the analysis of peptides/proteins, DNA and lipids.
METHODS: The instrumentation employed for this study is identical to that of earlier liquid AP-MALDI MS studies for positive analyte ion production with a simple non-commercial AP ion source that is attached to a Waters Synapt G2-Si mass spectrometer and incorporates a heated ion transfer tube. The preparation of liquid MALDI matrices is similar to positive ion mode analysis but has been adjusted for negative ion mode by changing the chromophore to 3-aminoquinoline and 9-aminoacridine for further improvements.
RESULTS: For DNA, liquid AP-MALDI MS analysis benefited from switching to 9-aminoacridine-based MALDI samples and the negative ion mode, increasing the number of charges by up to a factor of 2 and the analyte ion signal intensities by more than ten-fold compared to the positive ion mode. The limit of detection was recorded at around 10 fmol for ATGCAT. For lipids, negative ion mode analysis provided a fully orthogonal set of detected lipids.
CONCLUSIONS: Negative ion mode is a sensitive alternative to positive ion mode in liquid AP-MALDI MS analysis. In particular, the analysis of lipids and DNA benefited from the complementarity of the detected lipid species and the vastly greater DNA ion signal intensities in negative ion mode.
AUTOMATIC LIP-READING OF HEARING IMPAIRED PEOPLE
The inability to use speech interfaces greatly limits deaf and hearing-impaired people in human-machine interaction. To solve this problem and to increase the accuracy and reliability of the automatic Russian sign language recognition system, it is proposed to use lip-reading in addition to hand gesture recognition. Deaf and hearing-impaired people use sign language as the main way of communication in everyday life. Sign language is a structured form of hand gestures and lip movements involving visual motions and signs, which is used as a communication system. Since sign language includes not only hand gestures but also lip movements that mimic vocalized pronunciation, it is of interest to investigate how accurately such visual speech can be recognized by a lip-reading system, especially considering the fact that the visual speech of hearing-impaired people is often characterized by hyper-articulation, which should potentially facilitate its recognition. For this purpose, the thesaurus of Russian sign language (TheRusLan) collected at SPIIRAS in 2018–19 was used. The database consists of color optical FullHD video recordings of 13 native Russian sign language signers (11 females and 2 males) from the "Pavlovsk boarding school for the hearing impaired". Each of the signers demonstrated 164 phrases 5 times. This work covers the initial stages of this research, including data collection, data labeling, region-of-interest detection and methods for informative feature extraction. The results of this study can later be used to create assistive technologies for deaf or hearing-impaired people.
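One of the initial stages mentioned above is region-of-interest detection for the lips. A minimal sketch of one common heuristic, cropping the lower part of an already detected face bounding box, is shown below; the "lower third" rule and the box coordinates are assumptions for illustration, not the actual TheRusLan pipeline, which may use a trained detector instead.

```python
# Hedged sketch: deriving a lip region of interest (ROI) from a face
# bounding box. The lower-third heuristic is an assumption for
# illustration only, not the method used in the cited work.

def lip_roi(face_box):
    """face_box = (x, y, w, h) in pixels; returns (x, y, w, h) of the
    mouth region, taken here as the lower third of the face box."""
    x, y, w, h = face_box
    roi_h = h // 3
    return (x, y + h - roi_h, w, roi_h)

# A hypothetical 120x150 face box detected at (100, 50):
print(lip_roi((100, 50, 120, 150)))  # -> (100, 150, 120, 50)
```

In a full pipeline the ROI crop would then be resized and passed to the feature-extraction stage.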
Analytical review of audio-visual systems for detecting personal protective equipment on the human face
Since 2019, all countries of the world have faced the rapid spread of the pandemic caused by the COVID-19 coronavirus infection, which the global community continues to fight to this day. Despite the obvious effectiveness of personal respiratory protective equipment against coronavirus infection, many people neglect to use protective face masks in public places. Therefore, in order to monitor and promptly identify violators of public health rules, modern information technologies must be applied that detect protective masks on people's faces from video and audio information. This article presents an analytical review of existing and developing intelligent information technologies for bimodal analysis of the voice and facial characteristics of a person wearing a mask. There are many studies on mask detection from video images, and a significant number of corpora containing images of faces both with and without masks, obtained in various ways, are freely available. Research and development aimed at detecting personal respiratory protective equipment from the acoustic characteristics of human speech is still comparatively scarce, since this direction only began to develop during the pandemic caused by the COVID-19 coronavirus infection. Existing systems help prevent the spread of coronavirus infection by recognizing the presence or absence of masks on the face; such systems also assist in the remote diagnosis of COVID-19 by detecting the first symptoms of the viral infection from acoustic characteristics. However, a number of unsolved problems remain today in the field of automatic diagnosis of COVID-19 symptoms and of the presence/absence of masks on people's faces. First of all, the accuracy of detecting masks and coronavirus infection is low, which does not allow automatic diagnosis without the presence of experts (medical personnel). Many systems are not capable of operating in real time, which makes it impossible to control and monitor the wearing of protective masks in public places. In addition, most existing systems cannot be embedded in a smartphone so that users could diagnose the presence of coronavirus infection anywhere. Another major problem is the collection of data from patients infected with COVID-19, since many people are unwilling to share confidential information.
AUTOMATIC DETECTION AND RECOGNITION OF 3D MANUAL GESTURES FOR HUMAN-MACHINE INTERACTION
In this paper, we propose an approach to detect and recognize 3D one-handed gestures for human-machine interaction. The logical structure of the modules of the system for recording a gestural database is described. The logical structure of the database of 3D gestures is presented. Examples of frames showing gestures in Full High Definition format, in the depth map mode and in the infrared mode are illustrated. Models of a deep convolutional network for detecting faces and hand shapes are described. The results of automatic detection of the area with the face and the shape of the hand are given. The distinctive features of the gesture at a certain point in time are identified. The process of recognizing 3D one-handed gestures is described. Due to its versatility, this method can be used in tasks of biometrics, computer vision, machine learning, automatic face recognition systems and sign languages.
Protein identification using a nanoUHPLC-AP-MALDI MS/MS workflow with CID of multiply charged proteolytic peptides
Liquid AP-MALDI can produce predominantly multiply charged ESI-like ions and stable, durable analyte ion yields, with samples allowing good shot-to-shot reproducibility and exhibiting self-healing properties during laser irradiation. In this study, LC-MALDI MS/MS workflows that utilize multiply charged ions are reported for the first time and compared with standard LC-ESI MS/MS for bottom-up proteomic analysis. The proposed method is compatible with trifluoroacetic acid as an LC ion pairing reagent and allows multiple MS/MS acquisitions of the LC-separated samples without substantial sample consumption. In addition, the method facilitates the storage of fully spotted MALDI target plates for months without significant sample degradation.
Neural network-based method for visual recognition of driver's voice commands using attention mechanism
Visual speech recognition or automated lip-reading systems are actively applied to speech-to-text translation. Video data proves to be useful in multimodal speech recognition systems, particularly when acoustic data is difficult to use or not available at all. The main purpose of this study is to improve driver command recognition by analyzing visual information in order to reduce touch interaction with various vehicle systems (multimedia and navigation systems, phone calls, etc.) while driving. We propose a method for automated lip-reading of the driver's speech while driving, based on a deep neural network of the 3DResNet18 architecture. Using a neural network architecture with a bi-directional LSTM model and an attention mechanism allows higher recognition accuracy to be achieved with a slight decrease in performance. Two different variants of neural network architectures for visual speech recognition are proposed and investigated. When using the first neural network architecture, the accuracy of recognizing the driver's voice commands was 77.68%, which was 5.78% lower than with the second architecture, whose accuracy was 83.46%. The performance of the system, as determined by the real-time factor (RTF), is 0.076 for the first neural network architecture and 0.183 for the second, which is more than two times higher. The proposed method was tested on the data of the multimodal corpus RUSAVIC recorded in a car. The results of the study can be used in audio-visual speech recognition systems, which are recommended in high-noise conditions, for example, when driving a vehicle. In addition, the analysis performed allows us to choose the optimal neural network model for visual speech recognition for subsequent incorporation into an assistive system based on a mobile device.
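The real-time factor (RTF) used above is simply the ratio of processing time to the duration of the processed media; RTF < 1 means the system keeps up with real time. A minimal sketch, with timings invented to echo the reported order of magnitude:

```python
# Real-time factor: processing time divided by media duration.
# RTF < 1 means faster than real time. Timings below are invented
# to illustrate the 0.076 vs 0.183 comparison reported above.

def real_time_factor(processing_seconds, media_seconds):
    """RTF of a recognizer: seconds of compute per second of media."""
    return processing_seconds / media_seconds

rtf_first = real_time_factor(0.76, 10.0)   # -> 0.076
rtf_second = real_time_factor(1.83, 10.0)  # -> 0.183

# Both architectures run faster than real time, but the first one
# leaves more headroom, e.g. for a mobile-device deployment.
print(rtf_first < rtf_second < 1.0)  # -> True
```

This is why the abstract can describe the second architecture as more accurate yet "more than two times" slower: 0.183 / 0.076 ≈ 2.4.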
- β¦