
    Semantic analysis of field sports video using a Petri-net of audio-visual concepts

    The most common approach to automatic summarisation and highlight detection in sports video is to train an automatic classifier to detect semantic highlights based on occurrences of low-level features such as action replays, excited commentators or changes in a scoreboard. We propose an alternative approach based on the detection of perception concepts (PCs) and the construction of Petri-Nets which can be used for both semantic description and event detection within sports videos. Low-level algorithms for the detection of perception concepts using visual, aural and motion characteristics are proposed, and a series of Petri-Nets composed of perception concepts is formally defined to describe video content. We call this a Perception Concept Network-Petri Net (PCN-PN) model. Using PCN-PNs, personalized high-level semantic descriptions of video highlights can be facilitated and queries on high-level semantics can be achieved. A particular strength of this framework is that we can easily build semantic detectors based on PCN-PNs to search within sports videos and locate interesting events. Experimental results based on recorded sports video data across three types of sports games (soccer, basketball and rugby), each from multiple broadcasters, illustrate the potential of this framework.
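    As a rough illustration of the underlying idea (not the paper's formal PCN-PN definitions), the sketch below wires hypothetical perception-concept detections into a toy Petri net whose final place is marked when an event pattern completes; all concept names and the net topology are invented for illustration.

```python
# Toy Petri net of "perception concepts": detections deposit tokens in
# places, and transitions fire when all their input places hold tokens.
# The concepts and topology below are illustrative assumptions only.

class PetriNet:
    def __init__(self):
        self.marking = {}        # place name -> token count
        self.transitions = []    # list of (input places, output places)

    def add_place(self, place, tokens=0):
        self.marking[place] = tokens

    def add_transition(self, inputs, outputs):
        self.transitions.append((inputs, outputs))

    def fire_enabled(self):
        """Fire every enabled transition once; return True if any fired."""
        fired = False
        for inputs, outputs in self.transitions:
            if all(self.marking.get(p, 0) > 0 for p in inputs):
                for p in inputs:
                    self.marking[p] -= 1
                for p in outputs:
                    self.marking[p] = self.marking.get(p, 0) + 1
                fired = True
        return fired

# A toy net for a soccer "goal" event.
net = PetriNet()
for place in ["whistle", "crowd_cheer", "replay", "candidate", "goal_event"]:
    net.add_place(place)
net.add_transition(["whistle", "crowd_cheer"], ["candidate"])
net.add_transition(["candidate", "replay"], ["goal_event"])

# In the paper's framework, tokens would come from the low-level visual,
# aural and motion detectors; here we set them by hand.
for concept in ["whistle", "crowd_cheer", "replay"]:
    net.marking[concept] += 1
while net.fire_enabled():
    pass
print("goal detected:", net.marking["goal_event"] > 0)
```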

    Unrepresentative information: The case of newspaper reporting on campaign finance

    This article examines evidence of sampling or statistical bias in newspaper reporting on campaign finance. We compile all stories from the five largest-circulation newspapers in the United States that mention a dollar amount for campaign expenditures, contributions, or receipts from 1996 to 2000, and compare these figures to those recorded by the Federal Election Commission (FEC). The average figures reported in newspapers exceed the FEC figures by as much as eightfold. Press reports also focus excessively on corporate contributions and soft money, rather than on the more common type of donor (individuals) and type of contribution (hard money). We further find that these biases are reflected in public perceptions of money in elections. Survey respondents overstate the amount of money raised and the share from different groups by roughly the amount found in newspapers, and better-educated people (those most likely to read newspapers) show the greatest discrepancy between their beliefs and the facts.
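    The core measurement is a simple ratio of averages; a minimal sketch of that arithmetic, using placeholder numbers rather than data from the study:

```python
# Compare the mean dollar figure mentioned in news stories with the mean
# recorded by the FEC. All amounts below are hypothetical placeholders.

newspaper_amounts = [250_000, 1_200_000, 600_000]   # figures quoted in stories
fec_amounts = [40_000, 150_000, 75_000]             # figures from FEC records

mean_reported = sum(newspaper_amounts) / len(newspaper_amounts)
mean_recorded = sum(fec_amounts) / len(fec_amounts)
print(f"reported figures exceed FEC figures by "
      f"{mean_reported / mean_recorded:.1f}x")
```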

    Learning from Minimum Entropy Queries in a Large Committee Machine

    In supervised learning, the redundancy contained in random examples can be avoided by learning from queries. Using statistical mechanics, we study learning from minimum entropy queries in a large tree-committee machine. The generalization error decreases exponentially with the number of training examples, providing a significant improvement over the algebraic decay for random examples. The connection between entropy and generalization error in multi-layer networks is discussed, and a computationally cheap algorithm for constructing queries is suggested and analysed.
    Comment: 4 pages, REVTeX, multicol, epsf, two postscript figures. To appear in Physical Review E (Rapid Communications).
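    The sketch below shows a generic query-by-committee heuristic in the spirit of cheap query construction: among random candidate inputs, query the one on which the committee disagrees most, which approximates selecting the maximally informative (minimum-entropy) query. It is not the paper's specific algorithm; the committee size, dimensionality and candidate pool are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_members = 50, 7
# Stand-in for the hidden-unit weight vectors of a committee machine.
committee = rng.standard_normal((n_members, n_inputs))

def disagreement(x):
    votes = np.sign(committee @ x)        # each member's +/-1 label for x
    frac_plus = np.mean(votes > 0)
    return min(frac_plus, 1 - frac_plus)  # 0.5 = maximal disagreement

# Pick the query from a pool of random candidates instead of optimizing
# over all inputs; this is what keeps the construction cheap.
candidates = rng.standard_normal((200, n_inputs))
query = max(candidates, key=disagreement)
print("selected query disagreement:", disagreement(query))
```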

    Webaffix: Discovering Morphological Links on the WWW

    This paper presents a new language-independent method for finding morphological links between newly appeared words (i.e. those absent from reference word lists). Using the WWW as a corpus, the Webaffix tool detects occurrences of new derived lexemes bearing a given suffix, proposes a base lexeme following a standard scheme (such as noun-verb), and then performs a compatibility test on the word pairs produced, using the Web again, this time as a source of co-occurrences. The resulting word pairs are used to build generic morphological databases useful for a number of NLP tasks. We present and discuss an example use of Webaffix to find new noun/verb pairs in French.
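    A minimal sketch of the pipeline as described above: find candidate derived forms bearing a target suffix, propose a base lexeme by a simple stripping scheme, then keep pairs whose members co-occur often enough on the Web. The web_hits() placeholder, the toy suffix rule and the threshold are assumptions, not Webaffix's actual implementation.

```python
def web_hits(query: str) -> int:
    """Placeholder for a Web search returning a page/occurrence count."""
    raise NotImplementedError

def propose_base(derived: str, suffix: str) -> str:
    # Toy French noun -> verb scheme, e.g. 'balisage' -> 'baliser'.
    return derived[: -len(suffix)] + "er"

def find_pairs(candidates, suffix="age", reference_lexicon=frozenset(),
               min_cooccurrence=5):
    pairs = []
    for word in candidates:
        if not word.endswith(suffix) or word in reference_lexicon:
            continue                      # keep only newly appeared forms
        base = propose_base(word, suffix)
        # Compatibility test: do the derived form and its proposed base
        # actually co-occur on the Web?
        if web_hits(f'"{word}" "{base}"') >= min_cooccurrence:
            pairs.append((base, word))
    return pairs
```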

    Learning an Explicit Hyperparameter Prediction Policy Conditioned on Tasks

    Meta learning has recently attracted much attention in the machine learning community. In contrast to conventional machine learning, which aims to learn inherent prediction rules for labelling new query data, meta learning aims to learn the learning methodology itself from observed tasks, so as to generalize to new query tasks by leveraging the meta-learned methodology. In this study, we interpret this methodology as an explicit hyperparameter prediction policy shared by all training tasks. Specifically, the policy is represented as a parameterized function called a meta-learner, mapping a training/test task to its suitable hyperparameter setting, and is drawn from a pre-specified function set called the meta learning machine. This setting allows the meta-learned methodology to fit diverse query tasks flexibly, unlike many current meta learning methods that output fixed hyperparameters with less adaptability to variations across query tasks. This understanding also allows meta learning to draw on traditional learning theory for analyzing its generalization bounds under general losses/tasks/models. The theory naturally leads to feasible controlling strategies for improving the quality of the extracted meta-learner, which are verified to improve its generalization capability in typical meta learning applications, including few-shot regression, few-shot classification and domain generalization.
    Comment: 59 pages. arXiv admin note: text overlap with arXiv:1904.03758 by other authors.
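    To make "explicit hyperparameter prediction policy" concrete, here is a minimal PyTorch-style sketch in which a small parameterized meta-learner maps a task representation to a per-task hyperparameter (a learning rate). The task featurization, the network architecture and the choice of learning rate as the predicted hyperparameter are illustrative assumptions, not the paper's construction.

```python
import torch
import torch.nn as nn

class MetaLearner(nn.Module):
    """Maps a task's feature vector to a suitable hyperparameter setting."""
    def __init__(self, task_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(task_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, task_features: torch.Tensor) -> torch.Tensor:
        # Softplus keeps the predicted learning rate positive.
        return nn.functional.softplus(self.net(task_features))

meta = MetaLearner(task_dim=16)
task = torch.randn(16)     # stand-in embedding of one training task
lr = meta(task)            # hyperparameter predicted for this task
print("predicted learning rate:", float(lr))
```

    Because the policy is a shared function of the task rather than a fixed value, a new query task simply gets its own hyperparameters by a forward pass, which is the adaptability the abstract contrasts with fixed-hyperparameter meta learning methods.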

    Content based retrieval of PET neurological images

    Medical image management has posed challenges to many researchers, especially when images have to be indexed and retrieved using visual content that is meaningful to clinicians. In this study, an image retrieval system has been developed for 3D brain PET (positron emission tomography) images. It has been found that PET neurological images can be retrieved based upon their diagnostic status using only data pertaining to their content, predominantly the visual content. During the study, PET scans are spatially normalized using existing techniques, and their visual data is quantified. The mid-sagittal plane of each individual 3D PET scan is found and then utilized in the detection of abnormal asymmetries, such as tumours or physical injuries. All detected asymmetries are referenced to the Talairach and Tournoux anatomical atlas. The Cartesian co-ordinates in Talairach space of each detected lesion are employed, along with the associated anatomical structure(s), as the indices within the content based image retrieval system. The anatomical atlas is then also utilized to isolate distinct anatomical areas that are related to a number of neurodegenerative disorders. After segmentation of the anatomical regions of interest, algorithms are applied to characterize the texture of brain intensity using Gabor filters and to elucidate the mean index ratio of activation levels. These measurements are combined to produce a single feature vector that is incorporated into the content based image retrieval system. Experimental results on images with known diagnoses show that physical lesions such as head injuries and tumours can, to a certain extent, be detected correctly. Images with correctly detected and measured lesions are then retrieved from the database when a query pertains to the measured locale. Images with neurodegenerative disorder patterns have been indexed and retrieved via texture-based features. Retrieval accuracy is increased, for images from patients diagnosed with dementia, by combining the texture feature and the mean index ratio value.
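    A minimal sketch of the texture-indexing step described above: build a small Gabor filter bank, summarize a 2D slice of a brain region by filter response energies, and retrieve images by nearest feature vector. The filter parameters, the energy summary and the Euclidean distance are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, size=15, sigma=3.0):
    """Real-valued Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def texture_features(region, freqs=(0.1, 0.2), n_orient=4):
    """Summarize a 2D region by mean absolute response per filter."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            resp = convolve(region, gabor_kernel(f, k * np.pi / n_orient))
            feats.append(np.mean(np.abs(resp)))
    return np.array(feats)

def retrieve(query_region, database):
    """database: list of (image_id, feature_vector); rank by distance."""
    q = texture_features(query_region)
    return sorted(database, key=lambda item: np.linalg.norm(item[1] - q))
```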

    A knowledge based approach to integration of products, processes and reconfigurable automation resources

    The success of next generation automotive companies will depend upon their ability to adapt to ever-changing market trends and thus become highly responsive. In the automotive sector, assembly line design and reconfiguration is an especially critical and extremely complex job. The current research addresses some aspects of this activity under the umbrella of a larger ongoing research project called the Business Driven Automation (BDA) project. The BDA project aims to carry out complete virtual 3D modeling-based verifications of the assembly line for new or revised products, in contrast to the prevalent practice of manually evaluating the effects of product change on physical resources. [Continues.]