
    Testing heuristic optimisation methods for vibration-based detection of damage

    Considerable efforts have been made in recent years to develop non-destructive techniques for detecting structural damage based on changes in the dynamic parameters of structures. In this work, tests are performed on a model of a simple beam with cantilevers. Two heuristic optimisation techniques are used in the proposed procedure for detecting the location and level of damage: taboo search and simulated annealing. The results show that both proposed procedures, with appropriate target functions and weight factors, are highly efficient for detecting the location and level of damage.
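    The procedure outlined above amounts to minimising a weighted frequency-error target function over candidate damage states. As a rough illustration only (not the authors' implementation), the Python sketch below shows a generic simulated-annealing search of that kind; the damage-vector parameterisation, the `model_freqs` callable, and all tuning constants are hypothetical placeholders.

```python
# Hypothetical sketch, not the paper's code: simulated annealing over a damage
# vector d, where d[i] is the assumed stiffness-reduction level of element i.
# The target function compares measured natural frequencies with those
# predicted by a parameterised beam model supplied by the caller.
import math
import random

def objective(d, measured_freqs, model_freqs, weights):
    """Weighted squared error between measured and model-predicted frequencies."""
    predicted = model_freqs(d)
    return sum(w * (m - p) ** 2
               for w, m, p in zip(weights, measured_freqs, predicted))

def simulated_annealing(measured_freqs, model_freqs, weights, n_elements,
                        t0=1.0, cooling=0.95, steps=5000):
    d = [0.0] * n_elements                      # start from the undamaged structure
    cur = objective(d, measured_freqs, model_freqs, weights)
    best, best_d = cur, list(d)
    t = t0
    for _ in range(steps):
        cand = list(d)
        i = random.randrange(n_elements)        # perturb one element's damage level
        cand[i] = min(1.0, max(0.0, cand[i] + random.uniform(-0.1, 0.1)))
        val = objective(cand, measured_freqs, model_freqs, weights)
        if val < cur or random.random() < math.exp((cur - val) / t):
            d, cur = cand, val                  # accept improvements and, occasionally, worse moves
            if val < best:
                best, best_d = val, list(d)
        t *= cooling                            # geometric cooling schedule (arbitrary choice)
    return best_d, best                         # estimated damage vector and its error
```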

    Novelty detection in video retrieval: finding new news in TV news stories

    Novelty detection is defined as the detection of documents that provide "new" or previously unseen information. "New information" in a search result list is defined as the incremental information found in a document based on what the user has already learned from reviewing previous documents in a given ranked list. It is assumed that, as a user views a list of documents, their information need changes or evolves, and their state of knowledge increases as they gain new information from the documents they see. The automatic detection of "novelty", or newness, as part of an information retrieval system could greatly improve a searcher's experience by presenting documents in order of how much extra information they add to what is already known, instead of how similar they are to a user's query. This could be particularly useful in applications such as the search of broadcast news and automatic summary generation. There are many different aspects of information management; this thesis presents research into novelty detection within the content-based video domain. It explores the benefits of integrating the many multi-modal resources associated with video content, namely low-level feature detection evidence such as colour and edge, automatic concept detections such as face, commercials, and anchor person, automatic speech recognition transcripts, and manually annotated MPEG-7 concepts, into a novelty detection model. The effectiveness of this novelty detection model is evaluated on a collection of TV news data.
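    As a loose illustration of the incremental-novelty idea described above, the sketch below scores each document in a ranked list by its dissimilarity from everything the user has already viewed. The cosine similarity over generic feature vectors is an assumed stand-in for the thesis's fused low-level, concept, and speech-transcript evidence, not its actual model.

```python
# Illustrative sketch only: score the "novelty" of each document in a ranked
# list as one minus its maximum similarity to any previously seen document.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def novelty_scores(ranked_docs):
    """ranked_docs: list of (doc_id, feature_vector) in retrieval order.
    Returns (doc_id, novelty) pairs; the first document is maximally novel."""
    seen, scored = [], []
    for doc_id, vec in ranked_docs:
        novelty = 1.0 if not seen else 1.0 - max(cosine(vec, s) for s in seen)
        scored.append((doc_id, novelty))
        seen.append(vec)            # the user's knowledge grows with each viewed document
    return scored
```

    Re-ranking by these scores, rather than by query similarity, is one simple way to surface documents in order of how much extra information they add.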

    Social Network Analysis using Cultural Algorithms and its Variants

    Finding relationships between social entities and discovering the underlying structures of networks are fundamental tasks for analyzing social networks. In recent years, various methods have been suggested to study these networks efficiently; however, due to the dynamic and complex nature of these networks, many open problems still exist in the field. The aim of this research is to propose an integrated computational model to study the structure and behavior of complex social networks. The focus of this work is on two classic problems in the field: community detection and link prediction. Moreover, a problem of population adaptation through knowledge migration in real-life social systems has been identified to model and study through the proposed method. To the best of our knowledge, this is the first work in the field to explore this concept through this approach. In this research, a new adaptive knowledge-based evolutionary framework is defined to investigate the structure of social networks by adopting a multi-population cultural algorithm. The core of the model is designed around a community-oriented approach to estimating the existence of a relationship between social entities in the network. In each evolutionary cycle, the normative knowledge is shaped through the extraction of topological knowledge from the structure of the network. This source of knowledge is utilized for various network analysis tasks such as estimating the quality of relations between social entities, link prediction, population adaptation, and knowledge formation. The main contributions of this work are a novel method to define, extract, and represent different sources of knowledge from a snapshot of a given network in order to determine the range of the optimal solution, and the construction of a probability matrix describing the quality of relations between pairs of actors in the system. Introducing a new similarity metric, utilizing prior knowledge in dynamic social network analysis, and studying the co-evolution of societies in a case of individual migration are further contributions of this work. According to the obtained results, utilizing the proposed approach for community detection can reduce the search space size by 80%. It can also improve the accuracy of the search process in highly dense networks by up to 30% compared with other well-known methods. Addressing the link prediction problem through the proposed approach also reaches results comparable with other methods and predicts the next state of the system with notably high accuracy. In addition, the results from the study of population adaptation through knowledge migration indicate that a population with prior knowledge about an environment can adapt to a new environment faster than one without this knowledge, provided the level of change between the two environments is less than 25%. Therefore, utilizing this approach in dynamic social network analysis can reduce the search time and space significantly (by upwards of 90%) if the snapshots of the system are taken while the level of change in the network structure remains within 25%. In summary, the experimental results indicate that this knowledge-based approach is capable of exploring the evolution and structure of the network with a high level of accuracy while improving performance by reducing the search space and processing time.
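    For readers unfamiliar with cultural algorithms, the minimal sketch below shows the general acceptance/influence loop between a population and a belief space of normative knowledge. It is a generic single-population illustration, not the multi-population model of this thesis; the label-set belief space, the mutation rate, and the `fitness` callable (e.g. a modularity-style community quality measure) are assumptions.

```python
# Generic cultural-algorithm skeleton for community detection, where an
# individual assigns one of n_labels community labels to each of n_nodes nodes.
import random

def cultural_algorithm(n_nodes, n_labels, fitness, pop_size=30, gens=100,
                       elite_frac=0.2):
    pop = [[random.randrange(n_labels) for _ in range(n_nodes)]
           for _ in range(pop_size)]
    # Normative knowledge: initially every label is admissible for every node.
    belief = [set(range(n_labels)) for _ in range(n_nodes)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:max(1, int(elite_frac * pop_size))]
        # Acceptance step: elites reshape the belief space (labels seen in good solutions).
        belief = [{ind[i] for ind in elite} for i in range(n_nodes)]
        # Influence step: offspring mutate by sampling from the restricted belief space.
        children = []
        for _ in range(pop_size - len(elite)):
            parent = random.choice(elite)
            child = [lab if random.random() > 0.1
                     else random.choice(sorted(belief[i]))
                     for i, lab in enumerate(parent)]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)     # best community assignment found
```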

    Wavelet methods and statistical applications: network security and bioinformatics

    Wavelet methods possess versatile properties for statistical applications. We explore the advantages of using wavelets in two different research areas. First, we develop an integrated tool for online detection of network anomalies. We consider statistical change point detection algorithms, both for local changes in the variance and for jump detection, and propose modified versions of these algorithms based on moving window techniques. We investigate their performance on simulated data and on network traffic data with several superimposed attacks. All detection methods are based on wavelet packet transformations. We also propose a Bayesian model for the analysis of high-throughput data where the outcome of interest has a natural ordering. The method provides a unified approach for identifying relevant markers and predicting class memberships. This is accomplished by building a stochastic search variable selection method into an ordinal model. We apply the methodology to the analysis of proteomic studies in prostate cancer. We explore wavelet-based techniques to remove noise from the protein mass spectra. The goal is to identify protein markers associated with prostate-specific antigen (PSA) level, an ordinal diagnostic measure currently used to stratify patients into different risk groups.
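    A rough sketch of a moving-window variance-change detector in this spirit is given below. For simplicity it works on level-1 Haar detail coefficients rather than full wavelet packet transformations, and the window length and ratio threshold are arbitrary illustrative choices, not the values used in this work.

```python
# Illustrative sketch: flag changes in local variance by comparing the variance
# of Haar wavelet detail coefficients in two adjacent moving windows.
def haar_details(x):
    """Level-1 Haar detail coefficients of a signal (last sample dropped if odd length)."""
    return [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]

def variance(v):
    m = sum(v) / len(v)
    return sum((u - m) ** 2 for u in v) / len(v)

def detect_variance_changes(signal, window=64, ratio_threshold=3.0):
    """Return indices in the original signal where the detail-coefficient
    variance differs between adjacent windows by more than ratio_threshold."""
    d = haar_details(signal)
    alarms = []
    for i in range(window, len(d) - window):
        left = variance(d[i - window:i])
        right = variance(d[i:i + window])
        if left > 0 and right > 0 and max(right / left, left / right) > ratio_threshold:
            alarms.append(2 * i)          # map detail index back to a signal index
    return alarms
```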

    A detection theory account of change detection

    Previous studies have suggested that visual short-term memory (VSTM) has a storage limit of approximately four items. However, the type of high-threshold (HT) model used to derive this estimate is based on a number of assumptions that have been criticized in other experimental paradigms (e.g., visual search). Here we report findings from nine experiments in which VSTM for color, spatial frequency, and orientation was modeled using a signal detection theory (SDT) approach. In Experiments 1-6, two arrays composed of multiple stimulus elements were presented for 100 ms with a 1500 ms ISI. Observers were asked to report in a yes/no fashion whether there was any difference between the first and second arrays, and to rate their confidence in their response on a 1-4 scale. In Experiments 1-3, only one stimulus element could differ (T = 1) while set size was varied. In Experiments 4-6, set size was fixed while the number of stimuli that might change was varied (T = 1, 2, 3, and 4). Three general models were tested against the receiver operating characteristics generated by the six experiments. In addition to the HT model, two SDT models were tried: one assuming summation of signals prior to a decision, the other using a max rule. In Experiments 7-9, observers were asked to directly report the relevant feature attribute of a stimulus presented 1500 ms previously, from an array of varying set size. Overall, the results suggest that observers encode stimuli independently and in parallel, and that performance is limited by internal noise, which is a function of set size.
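    As a generic illustration of the max-rule signal detection model discussed above (not the paper's exact parameterisation), the sketch below runs a Monte Carlo simulation in which each item in the display produces a noisy internal difference signal, internal noise grows with set size, and the observer reports a change whenever the maximum sample exceeds a criterion. The signal strength, noise scaling, and criterion are assumed values.

```python
# Monte Carlo sketch of a max-rule SDT model for change detection.
import random

def max_rule_trial(set_size, n_changed, signal=1.0, noise_scale=0.5, criterion=1.0):
    """One trial: True if the observer responds 'change' under the max rule."""
    sigma = noise_scale * set_size ** 0.5          # assumed set-size-dependent internal noise
    samples = [random.gauss(signal if i < n_changed else 0.0, sigma)
               for i in range(set_size)]
    return max(samples) > criterion

def simulate(set_size, n_changed, trials=10000, **kw):
    """Estimate the probability of a 'change' response; n_changed = 0 gives the
    false-alarm rate, n_changed >= 1 the hit rate."""
    return sum(max_rule_trial(set_size, n_changed, **kw) for _ in range(trials)) / trials
```

    Comparing simulate(N, T) against simulate(N, 0) across set sizes N and numbers of changed items T traces out hit and false-alarm rates analogous to those fitted by such models.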