
    Not All Scale-Free Networks Are Born Equal: The Role of the Seed Graph in PPI Network Evolution

    The (asymptotic) degree distributions of the best-known “scale-free” network models are all similar and are independent of the seed graph used; hence, it has been tempting to assume that networks generated by these models are generally similar. In this paper, we observe that several key topological features of such networks depend heavily on the specific model and the seed graph used. Furthermore, we show that, starting with the “right” seed graph (typically a dense subgraph of the protein–protein interaction network analyzed), the duplication model captures many topological features of publicly available protein–protein interaction networks very well.
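    As an illustration of the growth process the abstract refers to, here is a minimal sketch of a partial-duplication model, assuming a single edge-retention probability p and a complete graph as the dense seed; the paper's exact duplication rules and parameter values may differ.

```python
# Minimal sketch of a partial-duplication growth model (an assumed variant;
# the paper's exact duplication rules and parameters may differ).
import random
import networkx as nx

def duplication_model(seed: nx.Graph, n_final: int, p: float = 0.3) -> nx.Graph:
    """Grow a network from `seed` by repeatedly duplicating a random node."""
    G = seed.copy()
    next_id = max(G.nodes) + 1
    while G.number_of_nodes() < n_final:
        anchor = random.choice(list(G.nodes))    # node to duplicate
        G.add_node(next_id)
        for nbr in list(G.neighbors(anchor)):    # copy each edge with prob. p
            if random.random() < p:
                G.add_edge(next_id, nbr)
        next_id += 1
    return G

# A dense seed, standing in for a dense subgraph of the analyzed PPI network.
dense_seed = nx.complete_graph(10)
net = duplication_model(dense_seed, n_final=500)
print(net.number_of_nodes(), net.number_of_edges())
```

    Rerunning the same growth process from a sparse seed (e.g. a path graph) makes the seed's influence on features such as clustering directly observable.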

    VITALAS at TRECVID-2009

    This paper describes the participation of VITALAS in the TRECVID-2009 evaluation, where we submitted runs for the High-Level Feature Extraction (HLFE) and Interactive Search tasks. For the HLFE task, we focus on the evaluation of low-level feature sets and fusion methods. The runs employ multiple low-level features based on all available modalities (visual, audio and text), and the results show that the use of such features improves retrieval effectiveness significantly. We also use a concept score fusion approach that achieves good results with reduced low-level feature vector dimensionality. Furthermore, a weighting scheme is introduced for cluster assignment in the “bag-of-words” approach (a generic sketch of such soft assignment follows the run lists below). Our runs achieved good performance compared to a baseline run and the submissions of other TRECVID-2009 participants. For the Interactive Search task, we focus on the evaluation of the integrated VITALAS system in order to gain insights into the use and effectiveness of the system's search functionalities on (the combination of) multiple modalities, and to study the behavior of two user groups: professional archivists and non-professional users. Our analysis indicates that both user groups submit about the same total number of queries and use the search functionalities in a similar way, but professional users save twice as many shots and examine shots deeper in the ranked retrieved list. The agreement between the TRECVID assessors and our users was quite low. In terms of the effectiveness of the different search modalities, similarity searches retrieve on average twice as many relevant shots as keyword searches, fused searches three times as many, and concept searches up to five times as many, indicating the benefits of robust concept detectors in multimodal video retrieval.

    High-Level Feature Extraction runs:
    1. A_VITALAS.CERTH-ITI_1: Early fusion of all available low-level features.
    2. A_VITALAS.CERTH-ITI_2: Concept score fusion for five low-level features and 100 concepts, text features, and bag-of-words with a color SIFT descriptor based on dense sampling.
    3. A_VITALAS.CERTH-ITI_3: Concept score fusion for five low-level features and 100 concepts, combined with text features.
    4. A_VITALAS.CERTH-ITI_4: Weighting scheme for bag-of-words based on dense sampling of the color SIFT descriptor.
    5. A_VITALAS.CERTH-ITI_5: Baseline run, bag-of-words based on dense sampling of the color SIFT descriptor.

    Interactive Search runs:
    1. vitalas_1: Interactive run by professional archivists.
    2. vitalas_2: Interactive run by professional archivists.
    3. vitalas_3: Interactive run by non-professional users.
    4. vitalas_4: Interactive run by non-professional users.
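    The following is a generic sketch of soft cluster assignment for bag-of-words histograms, assuming a Gaussian kernel on descriptor-to-centroid distance; the actual weighting scheme used in the VITALAS runs is not specified in this abstract and may differ.

```python
# Generic soft-assignment weighting for bag-of-words histograms; the
# Gaussian kernel and sigma are assumptions, not the paper's exact scheme.
import numpy as np

def soft_bow(descriptors: np.ndarray, codebook: np.ndarray,
             sigma: float = 1.0) -> np.ndarray:
    """descriptors: (n, d) local features; codebook: (k, d) visual words."""
    # Squared distances from every descriptor to every visual word.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    # Gaussian weights; subtract the row minimum for numerical stability.
    w = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # each descriptor contributes weight 1
    hist = w.sum(axis=0)                # accumulate into a k-bin histogram
    return hist / hist.sum()            # L1-normalise the final histogram

rng = np.random.default_rng(0)
hist = soft_bow(rng.normal(size=(200, 128)), rng.normal(size=(100, 128)))
print(hist.shape)  # (100,)
```

    Hard assignment is the special case where each descriptor's full weight goes to its single nearest visual word; the kernel width sigma controls how far the weight spreads across neighbouring words.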

    CWI at ImageCLEF 2008

    CWI used PF/Tijah, a flexible XML retrieval system, to evaluate image retrieval based on textual evidence in the context of the wikipediaMM task at ImageCLEF 2008. We employed a language modelling framework and found that the text associated with the Wikipedia images is a good source of evidence. We also investigated a length prior and found that biasing retrieval towards images with longer descriptions than those returned by our plain language modelling approach is not beneficial.
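    For context, here is a minimal sketch of query-likelihood scoring with Dirichlet smoothing plus an optional log-length prior; PF/Tijah's actual XML retrieval machinery is not modelled here, and the prior form and the value mu = 2000 are assumptions.

```python
# Minimal query-likelihood scorer with Dirichlet smoothing and an optional
# log-length prior; PF/Tijah's XML machinery is not modelled, and the
# prior form and mu = 2000 are assumptions.
import math
from collections import Counter

def lm_score(query_terms, doc_tokens, coll_tf, coll_len,
             mu=2000.0, length_prior=0.0):
    tf = Counter(doc_tokens)
    dlen = len(doc_tokens)
    score = 0.0
    for term in query_terms:
        p_coll = coll_tf.get(term, 0) / coll_len     # collection LM estimate
        p = (tf[term] + mu * p_coll) / (dlen + mu)   # Dirichlet smoothing
        if p > 0:                                    # skip terms unseen anywhere
            score += math.log(p)
    # length_prior > 0 biases ranking towards longer image descriptions.
    return score + length_prior * math.log(dlen + 1)

coll_tf = {"castle": 40, "scotland": 12}
coll_len = 100_000
doc = "castle in scotland near the loch".split()
print(lm_score(["castle", "scotland"], doc, coll_tf, coll_len, length_prior=0.5))
```

    With length_prior = 0 this reduces to the plain language modelling baseline; the abstract's finding is that turning the prior on did not help.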

    VITALAS at TRECVID-2008

    High-Level Feature Extraction runs:
    1. A_VITALAS.CERTH.ITI_1: Combination of early fusion and concept score fusion with feature selection.
    2. A_VITALAS.CERTH.ITI_2: Concept score fusion with feature selection.
    3. A_VITALAS.CERTH.ITI_3: Clustering within the feature space and concept score fusion with feature selection.
    4. A_VITALAS.CERTH.ITI_4: Concept score fusion for selected low-level features.
    5. a_VITALAS.CERTH.ITI_5: Mandatory type 'a' run, concept score fusion for selected low-level features.

    This is the first participation of VITALAS in TRECVID. In the high-level feature extraction task, our submitted runs are based mainly on visual features; one run also utilizes audio information, and text is not used. The experiments aim at evaluating the effectiveness of different approaches to input processing prior to the final classification (i.e., ranking) stage. These are (i) clustering of feature vectors within the feature space …
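    To make the two fusion strategies named in the runs concrete, here is a schematic sketch contrasting early fusion (feature concatenation into a single classifier) with concept score fusion (per-feature classifiers whose scores feed a second-stage model); the classifier choices are illustrative assumptions, not the ones used by VITALAS.

```python
# Schematic contrast of early fusion (concatenate features, one classifier)
# vs. concept score fusion (per-feature classifiers, scores stacked into a
# second stage); SVC/LogisticRegression are illustrative choices only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def early_fusion_fit(feature_sets, y):
    X = np.hstack(feature_sets)            # one long vector per shot
    return SVC(probability=True).fit(X, y)

def score_fusion_fit(feature_sets, y):
    base = [SVC(probability=True).fit(X, y) for X in feature_sets]
    # In practice the second stage is trained on held-out scores to
    # avoid overfitting; training-set scores are used here for brevity.
    scores = np.column_stack([m.predict_proba(X)[:, 1]
                              for m, X in zip(base, feature_sets)])
    meta = LogisticRegression().fit(scores, y)   # fuse the concept scores
    return base, meta

rng = np.random.default_rng(0)
feats = [rng.normal(size=(60, 16)), rng.normal(size=(60, 8))]  # two modalities
y = rng.integers(0, 2, size=60)
early = early_fusion_fit(feats, y)
base, meta = score_fusion_fit(feats, y)
```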