73 research outputs found
Robust Multimodal Failure Detection for Microservice Systems
Proactive failure detection of instances is vitally important to microservice
systems because an instance failure can propagate through the whole system and
degrade its performance. Over the years, many anomaly detection methods based
on single-modal data (i.e., metrics, logs, or traces) have been proposed.
However, they tend to miss a large number of failures and generate numerous
false alarms because they ignore the correlations among multimodal data. In
this work, we propose AnoFusion, an unsupervised failure detection approach
that proactively detects instance failures in microservice systems through
multimodal data. It applies a Graph Transformer Network (GTN) to learn the
correlation of the heterogeneous multimodal data and integrates a Graph
Attention Network (GAT) with a Gated Recurrent Unit (GRU) to address the
challenges introduced by dynamically changing multimodal data. We evaluate
AnoFusion on two datasets, where it achieves F1-scores of 0.857 and 0.922,
respectively, outperforming state-of-the-art failure detection approaches.
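The pipeline described in the abstract (per-modality encoding, correlation learning across modalities, and GAT plus GRU for temporal dynamics) can be pictured with a minimal sketch. The PyTorch code below is an illustrative, hypothetical sketch only, not the authors' implementation: the SimpleGraphAttention layer, the layer sizes, and the reconstruction-error anomaly score are assumptions standing in for the paper's GTN/GAT/GRU design.

```python
# Minimal, hypothetical sketch of a multimodal failure detector in the spirit
# of AnoFusion: per-modality encoders, a simple graph-attention layer over
# modality nodes (standing in for the GTN/GAT correlation modelling), and a
# GRU over time. NOT the authors' code; sizes and the score are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGraphAttention(nn.Module):
    """Single-head graph attention over a small, fully connected node set."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, num_nodes, dim)
        h = self.proj(nodes)
        b, n, d = h.shape
        # Pairwise attention logits between every pair of modality nodes.
        hi = h.unsqueeze(2).expand(b, n, n, d)
        hj = h.unsqueeze(1).expand(b, n, n, d)
        logits = self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1)  # (b, n, n)
        weights = F.softmax(logits, dim=-1)
        return torch.bmm(weights, h)  # attention-weighted neighbour mixing


class MultimodalFailureDetector(nn.Module):
    """Encodes metrics/logs/traces, mixes them with graph attention,
    models temporal dynamics with a GRU, and reconstructs the input."""

    def __init__(self, metric_dim: int, log_dim: int, trace_dim: int, hidden: int = 64):
        super().__init__()
        self.encoders = nn.ModuleList([
            nn.Linear(metric_dim, hidden),
            nn.Linear(log_dim, hidden),
            nn.Linear(trace_dim, hidden),
        ])
        self.gat = SimpleGraphAttention(hidden)
        self.gru = nn.GRU(3 * hidden, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, metric_dim + log_dim + trace_dim)

    def forward(self, metrics, logs, traces):
        # Each input: (batch, time, modality_dim)
        b, t, _ = metrics.shape
        mixed_steps = []
        for step in range(t):
            feats = [metrics[:, step], logs[:, step], traces[:, step]]
            nodes = torch.stack(
                [enc(x) for enc, x in zip(self.encoders, feats)], dim=1
            )  # (batch, 3 modality nodes, hidden)
            mixed_steps.append(self.gat(nodes).reshape(b, -1))
        seq = torch.stack(mixed_steps, dim=1)      # (batch, time, 3*hidden)
        hidden_seq, _ = self.gru(seq)
        recon = self.decoder(hidden_seq)           # reconstruct concatenated inputs
        target = torch.cat([metrics, logs, traces], dim=-1)
        # Unsupervised anomaly score: per-window reconstruction error.
        return ((recon - target) ** 2).mean(dim=(1, 2))


if __name__ == "__main__":
    model = MultimodalFailureDetector(metric_dim=8, log_dim=16, trace_dim=4)
    m = torch.randn(2, 10, 8)    # metrics window
    lg = torch.randn(2, 10, 16)  # log-template counts window
    tr = torch.randn(2, 10, 4)   # trace-feature window
    print(model(m, lg, tr))      # higher score -> more likely failure
```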
A network analysis of facial and vocal emotion recognition deficits in schizophrenia
Introduction: Facial and vocal emotion recognition deficits are common in individuals with schizophrenia.
Methods: In this observational, single-center study, 106 patients with schizophrenia (SCZ) and 118 age- and sex-matched healthy controls underwent cognitive and emotional function assessments. The Temporal Experience of Pleasure Scale (TEPS), Personal and Social Performance Scale, Positive and Negative Syndrome Scale, and Brief Negative Symptom Scale were used to evaluate psychotic symptoms in the SCZ group. Participants were assessed using the MATRICS Consensus Cognitive Battery (MCCB), the Positive and Negative Syndrome Scale, and emotion recognition tests involving 42 facial and 42 vocal emotional tasks.
Results: The SCZ group showed significant impairments in facial and vocal emotion recognition, with lower accuracy across all emotional categories. Mean scores in the SCZ group were significantly lower than those in the control group (facial, 23.55 ± 7.10 vs. 31.86 ± 5.16; vocal, 18.64 ± 9.48 vs. 29.42 ± 5.01; p<0.001). Emotion recognition deficits were not significantly correlated with demographic or clinical characteristics. Network analysis revealed strong intercorrelations among cognitive domains, linking MCCB performance to emotion recognition abilities (r>0.9; p<0.001). Combining tests of cognitive function (MCCB, area under the curve [AUC]=91.90%, p<0.01), emotion recognition abilities (facial, AUC=82.56%; vocal, AUC=82.82%; p<0.01), and the TEPS (AUC=91.13%, p<0.01) proved useful for distinguishing patients with schizophrenia from healthy individuals.
Discussion: These findings underscore the importance of emotion recognition impairments in schizophrenia and their strong association with cognitive deficits. Future interventions should focus on targeted cognitive and affective training strategies, and incorporating multimodal assessments into clinical evaluations may enhance diagnostic accuracy.
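The AUC-based group discrimination reported above can be illustrated with a short sketch. The Python code below is a hypothetical example using entirely simulated scores and a generic logistic-regression classifier; it does not reproduce the study's data, feature set, or results.

```python
# Hypothetical illustration of AUC-based discrimination between patients and
# controls using simulated cognitive, facial, and vocal emotion-recognition
# scores. All numbers are synthetic; nothing here reflects the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_group = 110

# Simulated scores: patients (label 1) score lower on average than controls.
controls = rng.normal(loc=[50, 32, 29], scale=[8, 5, 5], size=(n_per_group, 3))
patients = rng.normal(loc=[40, 24, 19], scale=[9, 7, 9], size=(n_per_group, 3))

X = np.vstack([controls, patients])
y = np.concatenate([np.zeros(n_per_group), np.ones(n_per_group)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print(f"AUC on held-out data: {roc_auc_score(y_test, scores):.3f}")
```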
Texture Transfer Algorithm Based on Brightness Remapping and Gradient Structure Information
Natural Texture Synthesis Algorithm Based on Convolutional Neural Network and Edge Detection
Effect of Lanthanum and Cerium Co-Doping on Electronic Structure and Optical Properties of Anatase Titanium Dioxide
- …
