264 research outputs found

    Feature extraction for speech and music discrimination

    Driven by the demands of information retrieval, video editing and human-computer interfaces, in this paper we propose a novel spectral feature for music and speech discrimination. This scheme attempts to simulate a biological model using the averaged cepstrum, where human perception tends to pick up areas of large cepstral change. Cepstrum data that lies away from the mean value is exponentially reduced in magnitude. We conduct music/speech discrimination experiments by comparing the classification performance of the proposed feature with that of previously proposed features. Dynamic time warping based classification verifies that the proposed feature gives the best quality of music/speech classification on the test database.
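    The attenuation idea above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the framing parameters and the exponential decay factor `alpha` are assumptions for the sketch.

```python
import numpy as np

def averaged_cepstrum_feature(signal, frame_len=512, hop=256, alpha=0.5):
    """Sketch of a cepstral feature where coefficients far from the
    frame-averaged cepstrum are exponentially reduced in magnitude.
    alpha is a hypothetical decay parameter, not from the paper."""
    # Frame the signal and compute the real cepstrum of each frame.
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    ceps = []
    for f in frames:
        spec = np.abs(np.fft.rfft(f * np.hanning(frame_len))) + 1e-12
        ceps.append(np.fft.irfft(np.log(spec)))   # real cepstrum
    ceps = np.array(ceps)
    mean_c = ceps.mean(axis=0)                    # averaged cepstrum
    # Exponentially attenuate coefficients that deviate from the mean.
    weight = np.exp(-alpha * np.abs(ceps - mean_c))
    return (ceps * weight).mean(axis=0)
```

    The resulting vector could then feed a dynamic time warping comparison, as the abstract describes.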

    3D inference and modelling for video retrieval

    A new scheme is proposed for extracting planar surfaces from 2D image sequences. We first perform feature correspondence over two neighbouring frames, followed by estimation of disparity and depth maps, given a calibrated camera. We then apply iterative Random Sample Consensus (RANSAC) plane fitting to the generated 3D points to find a dominant plane in a maximum likelihood estimation style. Object points on or off this dominant plane are determined by measuring their Euclidean distance to the plane. Experimental work shows that the proposed scheme leads to better plane fitting results than the classical RANSAC method.
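    For reference, the classical RANSAC baseline the paper improves upon can be sketched as below. The iteration count and inlier tolerance are illustrative assumptions; the paper's iterative maximum-likelihood refinement is not reproduced here.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Minimal classical RANSAC dominant-plane fit for an Nx3 point
    cloud; returns a boolean inlier mask for the best plane found."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        # Fit a candidate plane through 3 randomly sampled points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                     # degenerate (collinear) sample
        n /= norm
        # Euclidean point-to-plane distance decides on/off-plane points.
        dist = np.abs((points - p0) @ n)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```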

    Automatic human face detection for content-based image annotation

    In this paper, an automatic human face detection approach using colour analysis is applied to content-based image annotation. In the face detection, the probable face region is detected by an adaptive boosting algorithm, which is then combined with a colour filtering classifier to enhance detection accuracy. The initial experimental benchmark shows that the proposed scheme can be efficiently applied to image annotation with high fidelity.
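    The colour-filtering stage might look like the sketch below, which gates boosted face candidates with a YCbCr skin-colour test. The chroma thresholds are common heuristics from the literature, not the paper's trained classifier, and the function name is hypothetical.

```python
import numpy as np

def skin_mask(rgb):
    """Per-pixel skin-colour mask in YCbCr space; could be used to
    reject non-skin regions proposed by an AdaBoost face detector.
    Thresholds are widely used heuristics, assumed for this sketch."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Standard RGB -> Cb/Cr conversion (ITU-R BT.601 coefficients).
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)
```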

    Combining perceptual features with diffusion distance for face recognition


    Multiple description video coding for stereoscopic 3D

    In this paper, we propose a multiple description coding (MDC) scheme for stereoscopic 3D video. In the literature, MDC has previously been applied to 2D video but much less to 3D video. The proposed algorithm enhances the error resilience of 3D video over error-prone networks using a combination of even and odd frame based MDC, while retaining good temporal prediction efficiency. Improvements are made to the original even and odd frame MDC scheme by adding a controllable amount of side information to improve frame interpolation at the decoder; the side information is also sent according to the motion of the video sequence for further improvement. The performance of the proposed algorithms is evaluated in error-free and error-prone environments, especially for wireless channels. Simulation results show improved performance of the proposed MDC at high error rates compared to single description coding (SDC) and the original even and odd frame MDC.
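    The even/odd splitting idea can be illustrated with plain frame values. This is a toy sketch under the assumption that a lost odd description is concealed by averaging neighbouring even frames; the paper's side information would refine that interpolation step.

```python
def split_descriptions(frames):
    """Even/odd temporal split into two independently decodable
    descriptions, the core idea of even/odd frame MDC."""
    return frames[0::2], frames[1::2]

def conceal_odd(even):
    """If the odd description is lost, estimate each odd frame as the
    average of its even-frame neighbours (simple interpolation; the
    last frame is repeated when no later neighbour exists)."""
    out = []
    for i, f in enumerate(even):
        out.append(f)
        nxt = even[i + 1] if i + 1 < len(even) else f
        out.append((f + nxt) / 2)       # interpolated stand-in frame
    return out
```

    In a real codec the elements would be pictures (or per-pixel arrays) rather than scalars, but the splitting and concealment logic is the same.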

    Speech enhancement in noisy environments for video retrieval

    In this paper, we propose a novel spectral subtraction approach for speech enhancement via maximum likelihood estimation (MLE). This scheme attempts to simulate the probability distribution of useful speech signals and hence maximally reduce the noise. To evaluate the quality of speech enhancement, we extract cepstral features from the enhanced signals and then apply them to a dynamic time warping framework for a similarity check between the clean and filtered signals. The performance of the proposed enhancement method is compared to that of other classical techniques. The entire framework does not assume any model for the background noise and does not require any noise training data.
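    For context, classical magnitude spectral subtraction, one of the baseline techniques such a scheme is compared against, can be sketched as below. Unlike the paper's MLE approach, this baseline assumes the first few frames are noise-only; the frame sizes are illustrative.

```python
import numpy as np

def spectral_subtraction(noisy, frame_len=256, hop=128, noise_frames=5):
    """Classical magnitude spectral subtraction baseline (not the
    paper's MLE scheme). Assumes the first `noise_frames` frames
    contain only noise, from which a noise magnitude is estimated."""
    win = np.hanning(frame_len)
    starts = range(0, len(noisy) - frame_len + 1, hop)
    specs = [np.fft.rfft(noisy[s:s + frame_len] * win) for s in starts]
    mags = np.abs(specs)
    noise_mag = mags[:noise_frames].mean(axis=0)    # noise estimate
    # Subtract the noise magnitude, keep the noisy phase, floor at zero.
    clean_mags = np.maximum(mags - noise_mag, 0.0)
    out = np.zeros(len(noisy))
    for i, s in enumerate(starts):
        frame = np.fft.irfft(clean_mags[i] * np.exp(1j * np.angle(specs[i])))
        out[s:s + frame_len] += frame               # overlap-add
    return out
```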

    A Critical Review of Deep Learning-Based Multi-Sensor Fusion Techniques

    In this review, we provide detailed coverage of multi-sensor fusion techniques that take RGB stereo images and a sparse LiDAR-projected depth map as input and output a dense depth map prediction. We cover state-of-the-art fusion techniques which, in recent years, have been deep learning-based, end-to-end trainable methods. We then conduct a comparative evaluation of the state-of-the-art techniques and provide a detailed analysis of their strengths and limitations, as well as the applications they are best suited for.

    User requirements for multimedia indexing and retrieval of unedited audio-visual footage - RUSHES

    Multimedia analysis and reuse of raw, unedited audio-visual content, known as rushes, is gaining acceptance among a large number of research labs and companies. A set of European-funded research projects are considering multimedia indexing, annotation, search and retrieval, but only the FP6 project RUSHES focuses on automatic semantic annotation, indexing and retrieval of raw, unedited audio-visual content. Professional content creators and providers as well as home users deal with this type of content, and therefore novel technologies for semantic search and retrieval are required. As a first result of this project, the user requirements and possible user scenarios are presented in this paper. These results lay the foundation for the research and development of a multimedia search engine dedicated to the specific needs of the users and the content.

    3D multiple description coding for error resilience over wireless networks

    Mobile communications has gained growing interest from both customers and service providers over the last two decades. Visual information is used in many application domains such as remote health care, video-on-demand, broadcasting and video surveillance. In order to enhance the visual effect of digital video content, depth perception needs to be provided along with the actual visual content. 3D video has earned significant interest from the research community in recent years, due to the tremendous impact it has on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and perceptual experience.

    When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is not acceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by predictive coding may degrade the video quality severely. There are several ways to mitigate the effects of such transmission errors; one technique widely used in international video coding standards is error resilience. The motivation behind this research work is that existing schemes for 2D colour video compression, such as MPEG, JPEG and H.263, cannot be applied to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth demanding, as they require the transmission of multiple high-bandwidth 3D video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts and network congestion. Given a maximum bit-rate budget for representing the 3D scene, the bit rate should be allocated optimally between texture and depth information so that rendering distortion and losses are minimised.

    To mitigate the effect of these errors on perceptual 3D video quality, error resilience video coding needs to be investigated further to offer better quality of experience to end users. This research work aims at enhancing the error resilience capability of compressed 3D video transmitted over mobile channels, using Multiple Description Coding (MDC), in order to improve the user's quality of experience. Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when viewing 3D video scenes. The approach used in this study is subjective testing: rating people's perception of 3D video under error-free and error-prone conditions through a carefully designed bespoke questionnaire.

    EThOS - Electronic Theses Online Service; Petroleum Technology Development Fund (PTDF); United Kingdom

    Lactobacillus rhamnosus GG-supplemented formula expands butyrate-producing bacterial strains in food allergic infants.

    Dietary intervention with extensively hydrolyzed casein formula supplemented with Lactobacillus rhamnosus GG (EHCF+LGG) accelerates tolerance acquisition in infants with cow's milk allergy (CMA). We examined whether this effect is attributable, at least in part, to an influence on the gut microbiota. Fecal samples from healthy controls (n=20) and from CMA infants (n=19) before and after treatment with EHCF with (n=12) and without (n=7) supplementation with LGG were compared by 16S rRNA-based operational taxonomic unit clustering and oligotyping. Differential feature selection and generalized linear model fitting revealed that CMA infants have a diverse gut microbial community structure dominated by Lachnospiraceae (20.5±9.7%) and Ruminococcaceae (16.2±9.1%). Blautia, Roseburia and Coprococcus were significantly enriched following treatment with EHCF and LGG, but only one genus, Oscillospira, was significantly different between infants that became tolerant and those that remained allergic. However, most tolerant infants showed a significant increase in fecal butyrate levels, and the taxa that were significantly enriched in these samples, Blautia and Roseburia, exhibited specific strain-level demarcations between tolerant and allergic infants. Our data suggest that EHCF+LGG promotes tolerance in infants with CMA, in part, by influencing the strain-level bacterial community structure of the infant gut.