Memory in autism spectrum disorder: a meta-analysis of experimental studies
To address inconsistencies in the literature on memory in Autism Spectrum Disorder (ASD), we report the first meta-analysis of short-term (STM) and episodic long-term (LTM) memory in ASD, evaluating the effects of type of material, type of retrieval and the role of inter-item relations. Analysis of 64 studies comparing individuals with ASD and typical development (TD) showed greater difficulties in ASD in STM (Hedges' g = -0.53 [95% CI -0.90; -0.16], p = .005, I² = 96%) than in LTM (g = -0.30 [95% CI -0.42; -0.17], p < .00001, I² = 24%), and a small difficulty in verbal LTM (g = -0.21, p = .01) contrasting with a medium difficulty in visual LTM (g = -0.41, p = .0002). We also found a general diminution in free recall compared with cued recall and recognition (LTM free recall: g = -0.38, p < .00001; cued recall: g = -0.08, p = .58; recognition: g = -0.15, p = .16; STM free recall: g = -0.59, p = .004; recognition: g = -0.33, p = .07). We discuss these results in terms of their relation to semantic memory. The limited diminution in verbal LTM and the preserved overall recognition and cued recall (supported retrieval) may result from a greater overlap of these tasks with semantic long-term representations, which are broadly preserved in ASD. By contrast, difficulties in STM and free recall may result from less overlap with the semantic system or may involve additional cognitive operations and executive demands. These findings highlight the need to support STM functioning in ASD and the potential benefit of using verbal materials at encoding and broader forms of memory support at retrieval to enhance performance.
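The effect sizes reported above are Hedges' g, which is Cohen's d with a small-sample bias correction. A minimal sketch of the computation; the group means, standard deviations and sample sizes below are purely illustrative, not values from the meta-analysis:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp           # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # bias-correction factor
    return d * j

# Hypothetical summaries: an ASD group scoring lower than a TD group
g = hedges_g(mean1=9.5, sd1=2.0, n1=30, mean2=10.5, sd2=2.0, n2=30)
print(round(g, 2))  # -> -0.49
```

A negative g, as throughout the abstract, indicates lower performance in the first (ASD) group.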
Video semantic content analysis framework based on ontology combined MPEG-7
The rapid increase in the amount of available video data is creating a growing demand for efficient methods of understanding and managing it at the semantic level. The multimedia standard MPEG-7 provides rich functionality for generating audiovisual descriptions, but it is expressed solely in XML Schema, which offers little support for expressing semantic knowledge. In this paper, a video semantic content analysis framework based on an ontology combined with MPEG-7 is presented. A domain ontology is used to define high-level semantic concepts and their relations in the context of the examined domain. MPEG-7 metadata terms for audiovisual descriptions and video content analysis algorithms are expressed in this ontology to enrich video semantic analysis. OWL is used for the ontology description. Rules in Description Logic are defined to describe how low-level features and algorithms for video analysis should be applied according to different perception content. Temporal Description Logic is used to describe the semantic events, and a reasoning algorithm is proposed for event detection. The proposed framework is demonstrated in the sports video domain and shows promising results.
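As a rough illustration of the rule-driven analysis described above, the sketch below forward-chains simple rules that derive higher-level concepts from detected low-level features. It is a toy stand-in for the paper's Description Logic rules, and all feature and concept names are invented:

```python
# Each rule maps a derived concept to the set of facts that must all be
# present for it to fire. Names are illustrative, not from the paper.
RULES = {
    "crowd_cheer": ["audio_energy_peak"],
    "goal_event": ["crowd_cheer", "scoreboard_change"],
}

def infer(observed):
    """Forward-chain the rules until no new concept can be derived."""
    facts = set(observed)
    changed = True
    while changed:
        changed = False
        for concept, conditions in RULES.items():
            if concept not in facts and all(c in facts for c in conditions):
                facts.add(concept)
                changed = True
    return facts

print(sorted(infer({"audio_energy_peak", "scoreboard_change"})))
# -> ['audio_energy_peak', 'crowd_cheer', 'goal_event', 'scoreboard_change']
```

A real implementation would express these rules in OWL/Description Logic and add temporal operators for event ordering, which this sketch omits.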
Using association rule mining to enrich semantic concepts for video retrieval
In order to achieve true content-based information retrieval on video we should analyse and index video with high-level semantic concepts in addition to using user-generated tags and structured metadata like title, date, etc. However, the range of such high-level semantic concepts, detected either manually or automatically, is usually limited compared to the richness of the information content in video and the potential vocabulary of available concepts for indexing. Even though there is work to improve the performance of individual concept classifiers, we should strive to make the best use of whatever partial sets of semantic concept occurrences are available to us. We describe in this paper our method for using association rule mining to automatically enrich the representation of video content through a set of semantic concepts based on concept co-occurrence patterns. We describe our experiments on the TRECVid 2005 video corpus annotated with the 449 concepts of the LSCOM ontology. The evaluation of our results shows the usefulness of our approach.
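A minimal sketch of the enrichment idea: mine pairwise A → B rules from per-shot concept annotations using standard support and confidence thresholds, then add the consequents of matching rules to a shot's detected concepts. The concept names and thresholds below are illustrative, not the LSCOM vocabulary or the paper's settings:

```python
from collections import Counter
from itertools import combinations

def mine_pair_rules(shots, min_support=0.4, min_confidence=0.8):
    """Derive A -> B rules from concept co-occurrence across annotated shots."""
    n = len(shots)
    single = Counter(c for s in shots for c in s)
    pair = Counter(frozenset(p) for s in shots for p in combinations(sorted(s), 2))
    rules = []
    for p, cnt in pair.items():
        if cnt / n < min_support:       # pair too rare overall
            continue
        a, b = tuple(p)
        for x, y in ((a, b), (b, a)):   # try the rule in both directions
            if cnt / single[x] >= min_confidence:
                rules.append((x, y, cnt / single[x]))
    return rules

def enrich(concepts, rules):
    """Add consequents of rules whose antecedent was detected in the shot."""
    enriched = set(concepts)
    for a, b, _ in rules:
        if a in enriched:
            enriched.add(b)
    return enriched

# Hypothetical per-shot annotations (concept names are invented)
shots = [{"sky", "outdoor"}, {"sky", "outdoor"}, {"outdoor", "road"},
         {"sky", "outdoor"}, {"indoor"}]
rules = mine_pair_rules(shots)
print(sorted(enrich({"sky"}, rules)))  # -> ['outdoor', 'sky']
```

Here "sky" always co-occurs with "outdoor" in the training shots, so a shot where only "sky" was detected is enriched with "outdoor"; a full system would mine longer itemsets, not just pairs.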
Semantic analysis of field sports video using a petri-net of audio-visual concepts
The most common approach to automatic summarisation and highlight detection in sports video is to train an automatic classifier to detect semantic highlights based on occurrences of low-level features such as action replays, excited commentators or changes in a scoreboard. We propose an alternative approach based on the detection of perception concepts (PCs) and the construction of Petri-Nets which can be used for both semantic description and event detection within sports videos. Low-level algorithms for the detection of perception concepts using visual, aural and motion characteristics are proposed, and a series of Petri-Nets composed of perception concepts is formally defined to describe video content. We call this a Perception Concept Network-Petri Net (PCN-PN) model. Using PCN-PNs, personalized high-level semantic descriptions of video highlights can be facilitated and queries on high-level semantics can be achieved. A particular strength of this framework is that we can easily build semantic detectors based on PCN-PNs to search within sports videos and locate interesting events. Experimental results based on recorded sports video data across three types of sports games (soccer, basketball and rugby), each from multiple broadcasters, are used to illustrate the potential of this framework.
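The core mechanism can be sketched as a tiny place/transition net: a transition fires once all of its input places (here, detected perception concepts) hold tokens, placing a token on an event place. This is a toy under stated assumptions, not the paper's PCN-PN formalism, and the concept names are invented:

```python
class PetriNet:
    """Minimal 1-bounded Petri net: marking is the set of marked places;
    a transition fires when every input place is marked, consuming those
    tokens and marking its output places."""

    def __init__(self, transitions):
        # name -> (set of input places, set of output places)
        self.transitions = transitions
        self.marking = set()

    def observe(self, place):
        """Mark a place (a detected perception concept), then fire
        enabled transitions until the net is stable."""
        self.marking.add(place)
        fired = True
        while fired:
            fired = False
            for ins, outs in self.transitions.values():
                if ins <= self.marking:
                    self.marking -= ins
                    self.marking |= outs
                    fired = True

# Hypothetical event: a "goal" needs both an excited-audio concept and
# a replay concept to have been observed.
net = PetriNet({"goal": ({"excited_audio", "replay"}, {"goal_event"})})
net.observe("excited_audio")
net.observe("replay")
print("goal_event" in net.marking)  # -> True
```

The appeal of the Petri-net formulation is that event detectors are declarative net structures rather than trained classifiers, so new event queries can be composed from existing perception concepts.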