Do syllables play a role in German speech perception? Behavioral and electrophysiological data from primed lexical decision.
We investigated the role of the syllable during speech processing in German, in an auditory-auditory fragment priming study with lexical decision and simultaneous EEG registration. Spoken fragment primes either shared segments with the spoken targets (related) or not (unrelated), and this segmental overlap either corresponded to the first syllable of the target (e.g., /teis/ - /teisti/) or not (e.g., /teis/ - /teistləs/). Similar prime conditions applied for word and pseudoword targets. Lexical decision latencies revealed facilitation due to related fragments that corresponded to the first syllable of the target (/teis/ - /teisti/). Despite segmental overlap, there were no positive effects for related fragments that mismatched the first syllable, and no facilitation was observed for pseudowords. The EEG analyses showed a consistent effect of relatedness, independent of syllabic match, from 200 to 500 ms, including the P350 and N400 windows. This held for both words and pseudowords, which differed, however, in the N400 window. The only specific effect of syllabic match for related prime-target pairs was observed in the time window from 200 to 300 ms. We discuss the nature and potential origin of these effects, and their relevance for speech processing and lexical access.
Print-Scan Resilient Text Image Watermarking Based on Stroke Direction Modulation for Chinese Document Authentication
Print-scan resilient watermarking has emerged as an attractive approach to document security. This paper proposes a stroke direction modulation technique for watermarking Chinese text images. The resulting watermark is robust to print-photocopy-scan, yet provides relatively high embedding capacity without losing transparency. During the embedding phase, the angles of rotatable strokes are quantized to embed the bits. This requires several stages of preprocessing, including stroke generation, junction searching, rotatable stroke decision, and character partition. Moreover, shuffling is applied to equalize the uneven embedding capacity. For data detection, denoising and deskewing mechanisms are used to compensate for the distortions induced by hardcopy. Experimental results show that our technique attains high detection accuracy against distortions resulting from print-scan operations, good-quality photocopies, and benign attacks, in accord with the goal of soft authentication.
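The abstract does not give the quantization rule, but angle quantization for bit embedding is commonly realised as quantization index modulation (QIM). The sketch below, with an invented step size and angle values, illustrates that general idea rather than the paper's exact scheme:

```python
# Illustrative quantization-index-modulation (QIM) on stroke angles:
# each rotatable stroke's angle is snapped to the nearest point on a
# bit-dependent lattice; detection re-quantizes and picks the closer lattice.

STEP = 4.0  # quantization step in degrees (hypothetical parameter)

def embed_bit(angle, bit):
    """Quantize `angle` onto the lattice for `bit` (offset by STEP/2 for 1)."""
    offset = STEP / 2 if bit else 0.0
    return round((angle - offset) / STEP) * STEP + offset

def detect_bit(angle):
    """Decide which lattice the received angle lies closer to."""
    d0 = abs(angle - embed_bit(angle, 0))
    d1 = abs(angle - embed_bit(angle, 1))
    return 0 if d0 <= d1 else 1

angles = [31.7, 12.2, 87.9]        # original stroke angles (made up)
bits = [1, 0, 1]                   # watermark payload
marked = [embed_bit(a, b) for a, b in zip(angles, bits)]
# Add small "print-scan" perturbations; detection still recovers the bits
# as long as the noise stays below STEP/4.
noisy = [a + n for a, n in zip(marked, [0.8, -0.9, 0.5])]
recovered = [detect_bit(a) for a in noisy]
print(recovered)  # [1, 0, 1]
```

The robustness margin is set by the step size: a larger STEP survives more hardcopy distortion but rotates strokes further, trading transparency for resilience.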
3D Face Synthesis with KINECT
This work describes the process of face synthesis by image morphing from less expensive 3D sensors, such as KINECT, that are prone to sensor noise. Its main aim is to create a useful face database for future face recognition studies.
Restoration and segmentation of machine printed documents.
OCR (Optical Character Recognition) has been confronted with the problems of recognizing degraded document images, such as text overlapping with non-text symbols, touching characters, etc. The recognition rate for these degraded document images becomes unacceptable, or recognition fails completely, if pre-processing algorithms are not applied before segmentation and recognition algorithms. The principal objective of this thesis is therefore to develop effective algorithms for tackling these problems in the field of document analysis. We focus our efforts on the following aspects:
1. A morphological approach has been developed to extract text strings from regular, periodic overlapping text/background images, since most OCR systems can only read traditional characters: black characters on a uniform white background, or vice versa. The proposed text character extraction algorithms accommodate document images that contain various kinds of periodically distributed background symbols. The underlying strategy is to maximize background component removal while minimizing the shape distortion of text characters by using appropriate morphological operations.
2. Real-world images, which are frequently degraded by human-induced interference strokes, are inadequate for processing by document analysis systems. To process document images containing handwritten interference marks, which do not possess the periodicity property, a new algorithm combining a thinning technique with orientation attributes of connected components has been developed to segment handwritten interference strokes effectively. Morphological operations based on orientation maps and skeleton images successfully prevent the flooding effect of conventional morphological operations when removing interference strokes.
3. Segmenting a word into its character components is one of the most critical steps in document recognition systems; any failure in this step can lead to a critical loss of information from documents. We propose new algorithms for resolving the ambiguities in segmenting touching characters. A modified segmentation discrimination function is presented for segmenting touching characters based on pixel projection and profile projection, and a dynamic recursive segmentation algorithm has been developed to search effectively for correct cutting points in touching character components. Based on 12 pages of NEWSLINE, the University of Windsor's publication, a 99.6% character recognition accuracy has been achieved.
Thesis (Ph.D.), University of Windsor (Canada), 1996. Advisers: M. Ahmadi; M. Shridhar. Source: Dissertation Abstracts International, Volume 59-08, Section B, p. 4336.
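The pixel-projection idea behind the touching-character segmentation can be sketched in a few lines. The toy binary image, the helper names, and the ink threshold below are invented for illustration and are not taken from the thesis:

```python
# Toy sketch of pixel-projection segmentation for touching characters:
# count ink pixels per column and take low-ink interior columns as
# candidate cut points between characters.

img = [  # 1 = ink; two character-like blobs joined by a thin bridge
    [1, 1, 1, 0, 0, 0, 1, 1, 1],
    [1, 0, 1, 1, 1, 1, 1, 0, 1],
    [1, 1, 1, 0, 0, 0, 1, 1, 1],
]

def column_projection(image):
    """Vertical projection profile: ink-pixel count for each column."""
    return [sum(col) for col in zip(*image)]

def candidate_cuts(profile, max_ink=1):
    """Interior columns whose ink count does not exceed `max_ink`."""
    return [x for x in range(1, len(profile) - 1) if profile[x] <= max_ink]

profile = column_projection(img)
print(profile)                   # [3, 2, 3, 1, 1, 1, 3, 2, 3]
print(candidate_cuts(profile))   # [3, 4, 5]
```

A real system would then score each candidate with a discrimination function (the thesis combines pixel and profile projections) and recurse on the resulting pieces; this sketch only shows the candidate-generation step.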
Trying to break new ground in aerial archaeology
Aerial reconnaissance continues to be a vital tool for landscape-oriented archaeological research. Although a variety of remote sensing platforms operate within the earth’s atmosphere, the majority of aerial archaeological information is still derived from oblique photographs collected during observer-directed reconnaissance flights, a prospection approach which has dominated archaeological aerial survey for the past century. The resulting highly biased imagery is generally catalogued in sub-optimal (spatial) databases, if at all, after which a small selection of images is orthorectified and interpreted. For decades, this has been the standard approach. Although many innovations, including digital cameras, inertial units, photogrammetry and computer vision algorithms, geographic(al) information systems and computing power have emerged, their potential has not yet been fully exploited in order to re-invent and highly optimise this crucial branch of landscape archaeology. The authors argue that a fundamental change is needed to transform the way aerial archaeologists approach data acquisition and image processing. By addressing the very core concepts of geographically biased aerial archaeological photographs and proposing new imaging technologies, data handling methods and processing procedures, this paper gives a personal opinion on how the methodological components of aerial archaeology, and specifically aerial archaeological photography, should evolve during the next decade if developing a more reliable record of our past is to be our central aim. In this paper, a possible practical solution is illustrated by outlining a turnkey aerial prospection system for total coverage survey together with a semi-automated back-end pipeline that takes care of photograph correction and image enhancement as well as the management and interpretative mapping of the resulting data products. 
In this way, the proposed system addresses one of the many bias issues in archaeological research: the bias we impart to the visual record as a result of selective coverage. While the total coverage approach outlined here may not altogether eliminate survey bias, it can vastly increase the amount of useful information captured during a single reconnaissance flight while mitigating the discriminating effects of observer-based, on-the-fly target selection. Furthermore, the information contained in this paper should make it clear that doing so is feasible with current technology. This can radically alter the basis for aerial prospection and move landscape archaeology forward, beyond the inherently biased patterns currently created by airborne archaeological prospection.
Modified watershed approach for segmentation of complex optical coherence tomographic images
The watershed segmentation method has been used in many applications, but owing to its tendency to over-segment, it often underperforms in tasks where noise is a dominant source. In this study, Optical Coherence Tomography images were acquired and segmented to analyse the different regions of fluid-filled sacs in a lemon. A modified watershed algorithm is proposed which gives promising results for the segmentation of internal lemon structures.
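The abstract does not specify the modification, but a standard way modified watershed methods curb over-segmentation is to flood only from explicit markers rather than from every local minimum. The sketch below illustrates that marker-based priority flooding; the gradient grid, marker positions, and 4-neighbour connectivity are illustrative assumptions, not the authors' algorithm:

```python
import heapq

# Marker-based watershed by priority flooding: lowest-gradient pixels are
# claimed first, and each unlabeled pixel inherits the label of the region
# that reaches it first, so only as many regions appear as there are markers.

def watershed(gradient, markers):
    """gradient: 2D list of edge strengths; markers: {(row, col): label}.
    Returns a label map of the same shape."""
    rows, cols = len(gradient), len(gradient[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (gradient[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]  # inherit the flooding label
                heapq.heappush(heap, (gradient[nr][nc], nr, nc))
    return labels

# Two low-gradient basins separated by a high-gradient ridge (middle column).
grad = [
    [1, 1, 9, 1, 1],
    [1, 0, 9, 0, 1],
    [1, 1, 9, 1, 1],
]
result = watershed(grad, {(1, 1): 1, (1, 3): 2})
print(result)  # left half labeled 1, right half labeled 2
```

Because flooding starts only at the two markers, the noisy-looking ridge cannot spawn spurious regions of its own, which is exactly the failure mode of the unmodified algorithm.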
Structure analysis and lesion detection from retinal fundus images
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Ocular pathology is one of the main health problems worldwide. The number of people with retinopathy symptoms has increased considerably in recent years, and early, adequate treatment has been demonstrated to be effective in avoiding loss of vision. The analysis of fundus images is a non-intrusive option for periodic retinal screening.
Many models designed for the analysis of retinal images are based on supervised methods, which require hand-labelled images and processing time as part of the training stage. Moreover, most methods have been designed around specific characteristics of the retinal images (e.g., field of view, resolution), which restricts their performance to a reduced group of retinal images with similar features.
For these reasons, an unsupervised model for the analysis of retinal images is required: a model that can work without human supervision or interaction, and that is able to perform on retinal images with different characteristics. In this research, we have worked on the development of such a model. The system first locates the eye structures (e.g., optic disc and blood vessels). These structures are then masked out from the retinal image in order to create a clear field in which to perform lesion detection.
We have selected the Graph Cut technique as the basis for the retinal structure segmentation methods. This choice allows prior knowledge to be incorporated to constrain the search for the optimal segmentation. Different link weight assignments were formulated to address the specific needs of the retinal structures (e.g., shape).
This research project has brought together the fields of image processing and ophthalmology to create a novel system that contributes significantly to the state of the art in medical image analysis. This new knowledge provides an alternative way to address the analysis of medical images and opens a new panorama for researchers exploring this area. Funded by the Mexican National Council of Science and Technology.
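As a rough illustration of the Graph Cut machinery such segmentation methods build on (not the thesis's actual formulation or weights), here is a tiny s-t min-cut segmentation via Edmonds-Karp max-flow. The two-pixel graph and all capacities are invented; real systems derive terminal weights from image statistics and neighbour weights from smoothness priors:

```python
from collections import deque

# Graph-cut segmentation in miniature: pixels are nodes, terminal edges
# encode foreground/background affinity, neighbour edges penalise label
# changes; the minimum s-t cut yields the optimal binary labeling.

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow; returns residual capacities after saturation."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:  # BFS for a shortest augmenting path
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return residual
        path, v = [], t                   # walk parents back to the source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        flow = min(residual[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:                 # push flow, update back edges
            residual[u][v] -= flow
            residual[v].setdefault(u, 0)
            residual[v][u] += flow

def source_side(residual, s):
    """Nodes still reachable from s: the 'foreground' side of the cut."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v, cap in residual[u].items():
            if cap > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

# Pixel 'a' strongly matches foreground, 'b' background; weak coupling.
graph = {
    's': {'a': 9, 'b': 1},
    'a': {'b': 2, 't': 1},
    'b': {'a': 2, 't': 9},
    't': {},
}
res = max_flow(graph, 's', 't')
fg = source_side(res, 's') - {'s'}
print(fg)  # {'a'}
```

Priors enter through the capacities: e.g., a circular-shape prior for the optic disc would raise neighbour weights along the expected boundary, which is the kind of link weight assignment the thesis tailors per structure.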
Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review
Sea-surface object detection is critical for the navigation safety of autonomous ships. Electro-optical (EO) sensors, such as video cameras, complement on-board radar in detecting small sea-surface obstacles. Traditionally, researchers have used horizon detection, background subtraction, and foreground segmentation techniques to detect sea-surface objects. Recently, deep learning-based object detection technologies have gradually been applied to sea-surface object detection. This article presents a comprehensive overview of sea-surface object-detection approaches in which the advantages and drawbacks of each technique are compared, covering four essential aspects: EO sensors and image types, traditional object-detection methods, deep learning methods, and maritime dataset collection. In particular, sea-surface object detection based on deep learning methods is thoroughly analyzed and compared, with highly influential public datasets introduced as benchmarks to verify the effectiveness of these approaches.
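As a concrete instance of the background-subtraction baseline the review discusses (not any specific method from the surveyed papers), a running-average subtractor might look like the following; the frames, learning rate, and threshold are synthetic:

```python
# Running-average background subtraction: the background model is an
# exponentially weighted mean of past frames; pixels that deviate strongly
# from it are flagged as foreground (a candidate sea-surface object).

ALPHA = 0.5   # background learning rate (hypothetical)
THRESH = 30   # foreground threshold on absolute grey-level difference

def update(background, frame):
    """Blend the new frame into the running background model."""
    return [[(1 - ALPHA) * b + ALPHA * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def foreground_mask(background, frame):
    """True where the frame deviates from the background by > THRESH."""
    return [[abs(f - b) > THRESH for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

sea = [[10, 10, 10], [10, 10, 10]]      # calm water, uniform grey level
boat = [[10, 200, 10], [10, 200, 10]]   # bright object in the middle column
bg = sea
for _ in range(3):                      # let the model settle on open water
    bg = update(bg, sea)
mask = foreground_mask(bg, boat)
print(mask)  # [[False, True, False], [False, True, False]]
```

The weakness the review points at is visible even here: waves, glint, and wakes also deviate from the mean, which is why such baselines need horizon masking and morphological clean-up, and why learned detectors have largely displaced them.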