69 research outputs found

    Network-Wide Traffic Anomaly Detection and Localization Based on Robust Multivariate Probabilistic Calibration Model

    Get PDF
    Network anomaly detection and localization are of great significance to network security. Compared with traditional approaches that monitor a single host, link, or path, network-wide anomaly detection offers distinct advantages in detection precision and coverage. However, when faced with the practical problems of noise interference or data loss, network-wide approaches suffer significant performance degradation or may even become unusable. Moreover, research on anomaly localization remains scarce. To address these problems, this paper presents a robust multivariate probabilistic calibration model for network-wide anomaly detection and localization. It applies latent-variable probability theory with a multivariate t-distribution to build a model of normal traffic. The algorithm not only detects network anomalies by testing whether a sample's Mahalanobis distance exceeds a threshold, but also localizes them through contribution analysis. Both theoretical analysis and experimental results demonstrate its robustness and broad applicability: the algorithm handles both complete and lossy data, offers stronger resistance to noise interference, and is less sensitive to parameter changes, all of which indicate stable performance.
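    The detection-and-localization pipeline the abstract describes (distance test against a threshold, then per-dimension contribution analysis) can be sketched as follows. This is a minimal illustration, not the paper's method: it substitutes a robust Minimum Covariance Determinant estimate for the multivariate t latent-variable model, and the traffic data, feature count, and 99% chi-squared threshold are assumptions.

    ```python
    # Sketch: Mahalanobis-distance anomaly detection plus per-dimension
    # contribution analysis. The robust MinCovDet estimator stands in for
    # the paper's multivariate t latent-variable model (an assumption).
    import numpy as np
    from sklearn.covariance import MinCovDet
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    normal_traffic = rng.normal(size=(500, 8))   # synthetic training samples x features
    test_traffic = rng.normal(size=(50, 8))
    test_traffic[0] += 6.0                       # inject one anomaly

    model = MinCovDet().fit(normal_traffic)      # robust mean and covariance
    d2 = model.mahalanobis(test_traffic)         # squared Mahalanobis distances
    threshold = chi2.ppf(0.99, df=normal_traffic.shape[1])
    anomalies = np.flatnonzero(d2 > threshold)

    # Contribution analysis: decompose each anomalous sample's squared
    # distance into per-dimension terms to localize the responsible features.
    prec = np.linalg.inv(model.covariance_)
    for i in anomalies:
        dev = test_traffic[i] - model.location_
        contrib = dev * (prec @ dev)             # these terms sum to d2[i]
        print(i, np.argsort(contrib)[::-1][:3])  # top-3 contributing dimensions
    ```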

    CLIP-ViP: Adapting Pre-trained Image-Text Model to Video-Language Representation Alignment

    Full text link
    Pre-trained image-text models such as CLIP have demonstrated the power of vision-language representations learned from large-scale web-collected image-text data. Building on these well-learned visual features, some existing works transfer the image representation to the video domain and achieve good results. However, how to use an image-language pre-trained model (e.g., CLIP) for video-language pre-training (post-pretraining) remains underexplored. In this paper, we investigate two questions: 1) what factors hinder post-pretraining CLIP from further improving performance on video-language tasks, and 2) how can the impact of these factors be mitigated? Through a series of comparative experiments and analyses, we find that the data scale and the domain gap between language sources have a great impact. Motivated by this, we propose an Omnisource Cross-modal Learning method equipped with a Video Proxy mechanism on top of CLIP, namely CLIP-ViP. Extensive results show that our approach improves the performance of CLIP on video-text retrieval by a large margin. Our model also achieves SOTA results on a variety of datasets, including MSR-VTT, DiDeMo, LSMDC, and ActivityNet. We will release our code and pre-trained CLIP-ViP models at https://github.com/microsoft/XPretrain/tree/main/CLIP-ViP.
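    For context, the "transfer image representation to video" baseline that CLIP-ViP improves on can be sketched with a public CLIP checkpoint: embed each frame, mean-pool, and score against the text embedding by cosine similarity. This is a hedged sketch of that baseline only; the paper's Video Proxy tokens and omnisource training are not reproduced here, and the checkpoint name is just the standard public one.

    ```python
    # Sketch: mean-pooled CLIP baseline for video-text retrieval
    # (NOT the CLIP-ViP method itself).
    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def video_embedding(frames):                     # frames: list of PIL images
        inputs = processor(images=frames, return_tensors="pt")
        feats = model.get_image_features(**inputs)   # one embedding per frame
        feats = feats / feats.norm(dim=-1, keepdim=True)
        return feats.mean(dim=0)                     # mean-pool over frames

    def text_embedding(caption):
        inputs = processor(text=[caption], return_tensors="pt", padding=True)
        feats = model.get_text_features(**inputs)
        return (feats / feats.norm(dim=-1, keepdim=True)).squeeze(0)

    # Retrieval score: cosine similarity between pooled video and text vectors.
    # score = video_embedding(frames) @ text_embedding("a dog catches a frisbee")
    ```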

    TeViS: Translating Text Synopses to Video Storyboards

    Full text link
    A video storyboard is a roadmap for video creation consisting of shot-by-shot images that visualize the key plots of a text synopsis. Creating video storyboards, however, remains challenging: it not only requires cross-modal association between high-level text and images but also demands long-term reasoning to make transitions smooth across shots. In this paper, we propose a new task, Text synopsis to Video Storyboard (TeViS), which aims to retrieve an ordered sequence of images as a video storyboard that visualizes the text synopsis. We construct the MovieNet-TeViS dataset based on the public MovieNet dataset. It contains 10K text synopses, each paired with keyframes manually selected from the corresponding movies by considering both relevance and cinematic coherence. To benchmark the task, we present strong CLIP-based baselines and a novel model, VQ-Trans. VQ-Trans first encodes the text synopsis and images into a joint embedding space and uses vector quantization (VQ) to improve the visual representation. It then auto-regressively generates a sequence of visual features for retrieval and ordering. Experimental results demonstrate that VQ-Trans significantly outperforms prior methods and the CLIP-based baselines. Nevertheless, there is still a large gap to human performance, suggesting room for promising future work. The code and data are available at: https://ruc-aimind.github.io/projects/TeViS/ (Comment: Accepted to ACM Multimedia 202)
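    The vector-quantization step mentioned above can be illustrated with a standard VQ-VAE-style codebook lookup with a straight-through gradient estimator. This is a generic sketch of the technique, not VQ-Trans itself; the codebook size, feature dimension, and loss weighting are assumptions.

    ```python
    # Sketch: VQ-VAE-style vector quantization (generic; the paper's
    # actual codebook, transformer, and losses are not reproduced).
    import torch
    import torch.nn.functional as F

    def quantize(z, codebook):
        # z: (N, D) continuous features; codebook: (K, D) learned code vectors
        d = torch.cdist(z, codebook)             # pairwise L2 distances, (N, K)
        idx = d.argmin(dim=1)                    # nearest code index per feature
        z_q = codebook[idx]                      # quantized features
        # Straight-through estimator: forward pass uses z_q, gradients flow to z.
        z_q_st = z + (z_q - z).detach()
        commit_loss = F.mse_loss(z, z_q.detach())  # pulls encoder toward codes
        return z_q_st, idx, commit_loss

    codebook = torch.randn(512, 256, requires_grad=True)  # K=512 codes, D=256 dims
    z_q, idx, loss = quantize(torch.randn(10, 256), codebook)
    ```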

    Characterization of fluorescein arsenical hairpin (FlAsH) as a probe for single-molecule fluorescence spectroscopy

    Get PDF
    In recent years, new labelling strategies have been developed that involve the genetic insertion of small amino-acid sequences for the specific attachment of small organic fluorophores. Here, we focus on the tetracysteine FCM motif (FLNCCPGCCMEP), which binds fluorescein arsenical hairpin (FlAsH), and the ybbR motif (TVLDSLEFIASKLA), which binds fluorophores conjugated to Coenzyme A (CoA) via a phosphoryl transfer reaction. We designed a peptide containing both motifs for orthogonal labelling with FlAsH and Alexa647 (AF647). Molecular dynamics simulations showed that both motifs remain solvent-accessible for the labelling reactions. Fluorescence spectra, correlation spectroscopy, and anisotropy decay were used to characterize labelling and to obtain photophysical parameters of free and peptide-bound FlAsH. The data demonstrate that FlAsH is a viable probe for single-molecule studies. Single-molecule imaging confirmed dual labelling of the peptide with FlAsH and AF647. Multiparameter single-molecule Förster resonance energy transfer (smFRET) measurements were performed on freely diffusing peptides in solution. The smFRET histogram showed distinct peaks corresponding to different backbone and dye orientations, in agreement with the molecular dynamics simulations. The tandem of fluorophores and the labelling strategy described here are a promising alternative to bulky fluorescent fusion proteins for smFRET and single-molecule tracking studies of membrane proteins.
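    The smFRET histogram mentioned above is built from per-burst apparent FRET efficiencies. A minimal sketch of that computation follows, assuming simple donor/acceptor photon counts per burst; the gamma correction factor and example counts are illustrative assumptions, and real analyses apply further corrections (background, crosstalk, direct excitation) not shown here.

    ```python
    # Sketch: per-burst apparent FRET efficiency and an smFRET histogram.
    # Counts and gamma are illustrative; corrections are omitted.
    import numpy as np

    def fret_efficiency(i_donor, i_acceptor, gamma=1.0):
        """Apparent efficiency E = I_A / (I_A + gamma * I_D) per burst."""
        i_donor = np.asarray(i_donor, dtype=float)
        i_acceptor = np.asarray(i_acceptor, dtype=float)
        return i_acceptor / (i_acceptor + gamma * i_donor)

    # Hypothetical photon counts from bursts of freely diffusing labelled peptides
    E = fret_efficiency([120, 40, 90], [60, 160, 85])
    hist, edges = np.histogram(E, bins=20, range=(0.0, 1.0))  # smFRET histogram
    ```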

    Design of a tourism product based on the culture of the Tibetan ethnic group

    Full text link
    In recent years, tourism in Tibet has grown rapidly. Large numbers of domestic and foreign tourists visit the area to enjoy its landscape and distinctive culture. In view of the increasing tourism demand in Tibet, this work designs a tourism product that draws on Tibetan culture and its tourism resources. First, an overview is given of Tibet, a province in southwest China. The second stage of the work analyses both the tourism supply and the tourism demand of the area. The third stage presents the itinerary and the activities included in the tourism product. Finally, channels and recommendations for the promotion and marketing of the tourism product are proposed. Li, Y. (2015). Diseño de un producto turístico basado en la cultura de la etnia tibetana. Universitat Politècnica de València. http://hdl.handle.net/10251/56219