
    Influence of Misarticulation on Preschoolers' Word Recognition

    Previous research has shown that children are sensitive to speech variability in dialect and accent, and can extract information about the speaker. Misarticulated speech is a form of variability that children encounter in social situations with peers. Children are sensitive to the changes found in accented speech, but their perception of misarticulated speech has not been studied. If children do not understand misarticulated speech from their peers, they may experience a decrease in incidental word learning from peers and a reduced quality of social interactions. The purpose of the present study is to investigate whether children are sensitive to misarticulations in speech, and whether their ability to identify words containing misarticulated speech is affected by the speech sound substitutions being common or uncommon in children's developmental phonology. Twenty preschoolers heard minimal triplets of words that were canonical productions (e.g., leaf), productions with common substitutes (e.g., weaf), and productions with uncommon substitutes (e.g., yeaf). A forced-choice paradigm required children to click on either a real picture or a novel, anomalous picture after hearing each token. Children's mouse movements, selections, and reaction times were recorded and analyzed to determine whether there is a difference in response between canonical productions and those containing substitutions. Children selected more real-object pictures when they heard a canonical production than a misarticulated production. Reaction time and area under the curve were negatively impacted in the substitution conditions. Among the misarticulated productions, children selected more real objects when they heard a production containing a common substitute than when they heard an uncommon substitute, but reaction time and area under the curve were not significantly different. These findings suggest that children's word recognition is facilitated by their experience with words, which supports an exemplar model of the lexicon. Children are sensitive to substitution types that they have experience with, but this recognition comes at a cost to processing, which may affect their overall understanding of rapid speech.
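    The area-under-the-curve measure referenced above is a standard mouse-tracking index: the area between the observed cursor trajectory and the straight line from the start position to the selected picture. A minimal sketch of how such a measure could be computed (the function name and the trapezoidal approximation are illustrative, not the authors' implementation):

```python
import numpy as np

def trajectory_auc(xs, ys):
    """Signed area between a cursor trajectory and the direct start-to-end line.

    xs, ys: arrays of cursor coordinates sampled over time. Larger absolute
    values indicate stronger attraction toward a competitor response.
    (Illustrative sketch, not the study's implementation.)
    """
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    start = np.array([xs[0], ys[0]])
    line = np.array([xs[-1], ys[-1]]) - start
    norm = np.linalg.norm(line)
    # Signed perpendicular deviation of each sample from the direct path.
    dev = ((xs - start[0]) * line[1] - (ys - start[1]) * line[0]) / norm
    # Progress of each sample along the direct path (integration variable).
    along = ((xs - start[0]) * line[0] + (ys - start[1]) * line[1]) / norm
    return np.trapz(dev, along)

# Example: a trajectory that bows away from the direct path.
t = np.linspace(0, 1, 50)
print(trajectory_auc(t, t + 0.3 * np.sin(np.pi * t)))
```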

    Advances in Pattern Recognition Algorithms, Architectures, and Devices

    Over the last decade, tremendous advances have been made in the general area of pattern recognition techniques, devices, and algorithms. We have had the distinct pleasure of witnessing this remarkable growth as evidenced through their dissemination in the previous Optical Engineering special sections we have jointly edited: January 1998, March 1998, May 2000, and January 2002. Twenty-six papers were finally accepted for this latest special section, encompassing the recent trends and advancements made in many different areas of pattern recognition techniques utilizing algorithms, architectures, implementations, and devices. These techniques include matched spatial filter based recognition, hit-miss transforms, invariant pattern recognition, joint transform correlator (JTC) based recognition, morphological processing based recognition, neural network based recognition, wavelet based recognition, fingerprint and face recognition, data fusion based recognition, and target tracking, as well as other techniques. These papers summarize the work of 70 researchers from eight countries.

    Reading Comprehension in Two Accommodated Reading Tasks with College Students with Reading Disabilities

    Most post-secondary schools have shifted to exclusively providing a reading comprehension accommodation through assistive technology because its benefits outweigh the burden of a tutor/reader. However, very little research has been conducted to examine the effects of assistive technology accommodations on reading comprehension and, within the research conducted, there appears to be significant discrepancy in what accommodations are provided for specific diagnoses and in how much these accommodations benefit the student. Hence, students are regularly provided accommodations that are not beneficial to them. Thus, a need exists to provide some structure for appropriately accommodating students with reading disabilities in a post-secondary setting. This study examined reading comprehension in three conditions using a quasi-experimental (ABC/BCA/CAB) alternating treatment design. The three conditions were the subject reading to self (Condition A, baseline), using a person-reader (Condition B), and using text-to-speech technology (Condition C). Fourteen college students with independently diagnosed reading disabilities participated in the study, which investigated the following research questions: How do different accommodations (reader, text-to-speech) influence the performance of college students with reading disabilities on reading comprehension tasks? What is the relationship between the IQ and achievement measures and specific accommodations on reading comprehension? How does student preference or experience impact accommodation efficacy? A within-subjects ANOVA yielded no statistically significant difference between comprehension tasks (F(2,26) = 1.808, MSE = 3.016, p = .184). A Pearson correlation coefficient indicated a statistically significant relationship (r(12) = .76, p = .002) between the reader and text-to-speech conditions, demonstrating a shared trend in performance in those conditions. Pearson correlation coefficients were also calculated for the relationships between participants' IQ indices, passage comprehension subtests, and each of the three conditions (read to self, using a reader, using text-to-speech). A statistically significant correlation was found (r(13) = .665, p = .013) between PRI and reading to self. A statistically significant correlation was found (r(13) = .726, p = .005) between VCI and performance in the text-to-speech condition. Results regarding the impact of preference and experience indicated that students were not particularly adept at determining how best to accommodate their reading disability and that their experience did not influence reading comprehension. The author argues for individually specific accommodations, educating students about which accommodation(s) work best for them, and the inclusion of an assistive technology single-subject design in all psychological evaluations.
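    The sketch below shows this style of analysis, a within-subjects ANOVA plus a Pearson correlation between two conditions; the column names and the randomly generated scores are assumptions for illustration, not the study's data:

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.anova import AnovaRM

# Illustrative long-format data: one comprehension score per subject per
# condition (A = read to self, B = person-reader, C = text-to-speech).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject":   np.repeat(np.arange(14), 3),
    "condition": ["A", "B", "C"] * 14,
    "score":     rng.normal(10, 2, size=42),
})

# Within-subjects ANOVA across the three accommodation conditions.
print(AnovaRM(df, depvar="score", subject="subject",
              within=["condition"]).fit())

# Pearson correlation between two conditions (reader vs. text-to-speech).
b = df.loc[df.condition == "B", "score"].to_numpy()
c = df.loc[df.condition == "C", "score"].to_numpy()
r, p = pearsonr(b, c)
print(r, p)
```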

    Can a remote sensing approach with hyperspectral data provide early detection and mapping of spatial patterns of black bear bark stripping in coast redwoods?

    The prevalence of black bear (Ursus americanus) bark stripping in commercial redwood (Sequoia sempervirens) timber stands has been increasing in recent years. This stripping is a threat to commercial timber production because of its deleterious effects on redwood tree fitness. This study sought to develop a remote sensing method to detect these damaged trees early and map their spatial patterns. With a timely monitoring method, forest timber companies can adapt their timber harvesting routines to the consequences of the problem. We explored the utility of high spatial resolution UAV-collected hyperspectral imagery as a means for early detection of individual trees stripped by black bears. A hyperspectral sensor was used to capture ultra-high spatial and spectral information pertaining to redwood trees with no damage, those that had been recently attacked by bears, and those with old bear damage. This spectral information was assessed using the Jeffries-Matusita (JM) distance to determine regions along the electromagnetic spectrum that are useful for discerning these three health classes. While we were able to distinguish healthy trees from trees with old damage, we were unable to distinguish healthy trees from recently damaged trees due to the inherent characteristics of redwood tree growth and the subtle spectral changes within individual tree crowns for the time period assessed. The results, however, showed that with further assessment, a time window may be identified that reveals damage before trees completely lose value.
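    For two classes with Gaussian statistics, the JM distance is JM = 2(1 − e^(−B)), where B is the Bhattacharyya distance between the class distributions; JM ranges from 0 to 2, with values near 2 indicating spectrally separable classes. A minimal sketch (Gaussian class statistics are assumed; this is not the study's exact processing pipeline):

```python
import numpy as np

def jeffries_matusita(m1, c1, m2, c2):
    """JM distance between two Gaussian classes (0 to 2).

    m1, m2: mean spectra of the two classes; c1, c2: class covariance
    matrices estimated over the spectral bands being compared.
    """
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    c = (c1 + c2) / 2.0
    diff = m1 - m2
    # Bhattacharyya distance for Gaussian class-conditional densities.
    b = (diff @ np.linalg.solve(c, diff)) / 8.0
    b += 0.5 * np.log(np.linalg.det(c) /
                      np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return 2.0 * (1.0 - np.exp(-b))

# Example with two well-separated 3-band classes (JM close to 2).
print(jeffries_matusita([0.2, 0.4, 0.6], np.eye(3) * 0.01,
                        [0.5, 0.1, 0.3], np.eye(3) * 0.01))
```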

    Models and Methods for Automated Background Density Estimation in Hyperspectral Anomaly Detection

    Detecting targets with unknown spectral signatures in hyperspectral imagery has proven to be a topic of great interest in several applications. Because no knowledge about the targets of interest is assumed, this task is performed by searching the image for anomalous pixels, i.e., those pixels deviating from a statistical model of the background. According to the hyperspectral literature, there are two main approaches to Anomaly Detection (AD), leading to two ways of modeling the background: global and local. Global AD algorithms are designed to locate small rare objects that are anomalous with respect to the global background, identified with a large portion of the image. In local AD strategies, on the other hand, pixels whose spectral features differ significantly from a local neighborhood surrounding the observed pixel are detected as anomalies. In this thesis work, a new scheme is proposed for detecting both global and local anomalies. Specifically, a simplified Likelihood Ratio Test (LRT) decision strategy is derived that involves thresholding the background log-likelihood and thus only needs the specification of the background Probability Density Function (PDF). Within this framework, the use of parametric, semi-parametric (in particular finite mixtures), and non-parametric models is investigated for the background PDF estimation. Although such approaches are well known and have been widely employed in multivariate data analysis, they have seldom been applied to estimate the hyperspectral background PDF, mostly due to the difficulty of reliably learning the model parameters without operator intervention, which is highly desirable in practical AD tasks. In fact, this work represents the first attempt to jointly examine such methods in order to assess and discuss the most critical issues related to their employment for PDF estimation of the hyperspectral background, with specific reference to the detection of anomalous objects in a scene. Specifically, semi- and non-parametric estimators have been successfully employed to estimate the image background PDF, with the aim of detecting global anomalies in a scene, by means of ad hoc learning procedures. In particular, strategies developed within a Bayesian framework have been considered for automatically estimating the parameters of mixture models, alongside one of the most well-known non-parametric techniques, the fixed kernel density estimator (FKDE). In the latter, the performance and the modeling ability depend on scale parameters called bandwidths. It has been shown that the use of bandwidths that are fixed across the entire feature space, as done in the FKDE, is not effective when the sample data exhibit different local peculiarities across the data domain, which generally occurs in practical applications. Therefore, some possibilities are investigated to improve the image background PDF estimation of the FKDE by allowing the bandwidths to vary over the estimation domain, thus adapting the amount of smoothing to the local density of the data so as to more reliably and accurately follow the background data structure of hyperspectral images of a scene. The use of such variable bandwidth kernel density estimators (VKDE) is also proposed for estimating the background PDF within the considered AD scheme for detecting local anomalies. This choice is made to cope with the problem of non-Gaussian backgrounds and to improve classical local AD algorithms involving parametric and non-parametric background models. The locally data-adaptive non-parametric model has been chosen since it combines the flexibility typical of non-parametric PDF estimators, which model data without specific distributional assumptions, with the benefits of bandwidths that vary across the data domain. The ability of the proposed AD scheme resulting from the application of different background PDF models and learning methods is experimentally evaluated on real hyperspectral images containing objects that are anomalous with respect to the background.
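    The core of the global detection scheme, thresholding the background log-likelihood, can be sketched with an off-the-shelf fixed-bandwidth KDE standing in for the thesis's estimators (scikit-learn's KernelDensity here; the bandwidth and the threshold quantile are illustrative assumptions, and the fixed bandwidth corresponds to the FKDE case):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def detect_anomalies(pixels, bandwidth=0.1, quantile=0.001):
    """Flag pixels whose background log-likelihood is anomalously low.

    pixels: (n_pixels, n_bands) array of spectra. The background PDF is
    estimated from the image itself; pixels falling below the `quantile`
    of the log-likelihood distribution are declared anomalies.
    """
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(pixels)
    log_lik = kde.score_samples(pixels)          # background log-PDF
    threshold = np.quantile(log_lik, quantile)   # decision threshold
    return log_lik < threshold

# Example: 5-band synthetic background plus a few injected anomalies.
rng = np.random.default_rng(1)
bg = rng.normal(0.3, 0.05, size=(5000, 5))
anom = rng.normal(0.9, 0.05, size=(5, 5))
mask = detect_anomalies(np.vstack([bg, anom]))
print(mask[-5:])   # the injected anomalies should be flagged
```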

    Dynamics of Perceptual Organization in Complex Visual Search: The Identification of Self Organized Criticality with Respect to Visual Grouping Principles

    The current project applies modern quantitative theories of visual perception to examine the effect of the Gestalt law of proximity on visual cognition. Gestalt laws are spontaneous dynamic processes (Brunswik & Kamiya, 1953; Wertheimer, 1938) that underlie the principles of perceptual organization. These principles serve as mental short-cuts, heuristic rule-of-thumb strategies that shorten decision-making time and allow continuous, efficient processing and flow of information (Hertwig & Todd, 2002). The proximity heuristic refers to the observation that objects near each other in the visual field tend to be grouped together by the perceptual system (Smith-Gratto & Fisher, 1999). Proximity can be directly quantified as the distance between adjacent objects (inter-object distances) in a visual array. Recent studies on eye movements have revealed the interactive nature of self-organizing dynamic processes in visual cognition (Aks, Zelinsky, & Sprott, 2002; Stephen & Mirman, 2010). Research by Aks and colleagues (2002) recorded eye movements during a complex visual search task in which participants searched for a target among distracters. Their key finding was that visual search patterns are not randomly distributed, and that a simple form of temporal memory exists across the sequence of eye movements. The objective of the present research was to identify how the law of proximity impacts visual search behavior as reflected in eye movement patterns. We found that (1) eye movements are fractal; (2) greater fractality results in decreased reaction times during visual search; and (3) fractality facilitates the improvement of reaction times over blocks of trials. Results were interpreted in view of theories of cognitive resource allocation and perceptual efficiency. The current research could inspire potential innovations in computer vision, user interface design, and visual cognition.
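    Fractality in eye-movement series of this kind is commonly quantified with a scaling exponent, for instance from detrended fluctuation analysis. The sketch below is a generic DFA estimator under assumed window scales, not the study's specific analysis:

```python
import numpy as np

def dfa_exponent(series, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis scaling exponent (alpha).

    alpha ~ 0.5 for white noise; alpha ~ 1 indicates the 1/f-type
    long-range correlations associated with self-organized criticality.
    """
    x = np.cumsum(np.asarray(series, float) - np.mean(series))
    flucts = []
    for s in scales:
        n = len(x) // s
        segments = x[: n * s].reshape(n, s)
        t = np.arange(s)
        # Remove a linear trend from each window, keep the residual RMS.
        rms = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # Scaling exponent = slope of log fluctuation vs. log window size.
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

# Example: white noise should give alpha near 0.5.
rng = np.random.default_rng(2)
print(dfa_exponent(rng.normal(size=2000)))
```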

    Neurometrics applied to banknote and security features design

    The aim of this paper is to present a methodology on the application of neuroanalysis to the design of banknotes and security features. Traditionally, evaluation of the perception of banknotes has been based on people's explicit responses obtained through questionnaires and interviews. Implicit measures refer to methods and techniques capable of capturing people's implicit mental processes. Neuroscience has shown that human consciousness does not intervene in most of the brain processes regulating emotions, attitudes, behaviours and decisions. That is to say, these implicit processes are brain functions that occur automatically and without conscious control. The neuroanalysis methodology can be applied to the design of banknotes and security features, and used as an effective analysis tool to assess people's cognitive processes, such as visual interest, attention to certain areas of the banknote, emotions, motivation, the mental load required to understand the design, and the level of stimulation. The proposed neuroanalysis methodology offers a criterion for deciding which banknote designs and security features have a more suitable configuration for the public. It is based on the monitoring of conscious processes, using traditional explicit measures, and unconscious processes, using neurometric techniques. The neuroanalysis methodology processes quantifiable neurometric variables obtained from the public while processing events, such as eye movement, visual fixation, facial expression, heart rate variation, skin conductance, etc. A neuroanalysis study is performed with a group of people representative of the population for which the banknote or security features are being designed. In the neurometric study, suitably prepared physical samples are shown to the participants to collect their different neurometric responses, which are then processed to draw conclusions.
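    As an illustration of how one such neurometric variable is derived, visual fixations are often extracted from raw gaze samples with a dispersion-threshold (I-DT) algorithm. The sketch below is a generic version with assumed parameter values, not this methodology's specific toolchain:

```python
import numpy as np

def idt_fixations(x, y, max_dispersion=1.0, min_samples=6):
    """Dispersion-threshold (I-DT) fixation detection on gaze samples.

    A fixation is a run of at least `min_samples` consecutive gaze points
    whose bounding-box dispersion (width + height) stays below
    `max_dispersion` (same units as x and y, e.g. degrees of visual
    angle). Returns (start, end) sample indices for each fixation.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)

    def dispersion(a, b):
        return np.ptp(x[a:b]) + np.ptp(y[a:b])

    fixations, i = [], 0
    while i + min_samples <= len(x):
        j = i + min_samples
        if dispersion(i, j) <= max_dispersion:
            # Grow the window while dispersion stays under threshold.
            while j < len(x) and dispersion(i, j + 1) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

# Example: a fixation around (0, 0), a saccade, then one around (5, 5).
rng = np.random.default_rng(4)
gx = np.concatenate([rng.normal(0, 0.05, 50), np.linspace(0, 5, 5),
                     rng.normal(5, 0.05, 50)])
gy = np.concatenate([rng.normal(0, 0.05, 50), np.linspace(0, 5, 5),
                     rng.normal(5, 0.05, 50)])
print(idt_fixations(gx, gy))
```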

    Psychology is – and should be – central to cognitive science

    Cognitive science is typically defined as the multidisciplinary study of mind, with the disciplines involved usually listed as philosophy, psychology, artificial intelligence, neuroscience, linguistics, and anthropology. Furthermore, these six “core disciplines” are generally regarded as having equal status vis-à-vis cognitive science. In contrast to the latter position, I argue that psychology has a special status here: it is central to cognitive science in a way that none of the other five disciplines is. I support this argument via both theoretical and empirical considerations.
    Keywords: Psychology; Cognitive Science; Interdisciplinarity/Multidisciplinarity

    Ranking to Learn and Learning to Rank: On the Role of Ranking in Pattern Recognition Applications

    The last decade has seen a revolution in the theory and application of machine learning and pattern recognition. Through these advancements, variable ranking has emerged as an active and growing research area, and it is now beginning to be applied to many new problems. The rationale behind this is that many pattern recognition problems are by nature ranking problems. The main objective of a ranking algorithm is to sort objects according to some criterion, so that the most relevant items appear early in the produced result list. Ranking methods can be analyzed from two different methodological perspectives: ranking to learn and learning to rank. The former studies methods and techniques for sorting objects to improve the accuracy of a machine learning model. Enhancing a model's performance can be challenging at times. For example, in pattern classification tasks, different data representations can complicate and hide the explanatory factors of variation behind the data. In particular, hand-crafted features contain many cues that are either redundant or irrelevant, which reduce the overall accuracy of the classifier. In such cases feature selection is used, which, by producing ranked lists of features, helps to filter out the unwanted information. Moreover, in real-time systems (e.g., visual trackers) ranking approaches are used as optimization procedures that improve the robustness of a system dealing with the high variability of image streams that change over time. Conversely, learning to rank is necessary in the construction of ranking models for information retrieval, biometric authentication, re-identification, and recommender systems. In this context, the ranking model's purpose is to sort objects according to their degrees of relevance, importance, or preference as defined in the specific application.
    Comment: European PhD Thesis. arXiv admin note: text overlap with arXiv:1601.06615, arXiv:1505.06821, arXiv:1704.02665 by other authors.
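    In the ranking-to-learn sense, feature selection amounts to producing a ranked list of features and keeping its head. A minimal illustration with a univariate relevance score (mutual information here is just one possible criterion, not the thesis's method):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features(X, y, k):
    """Rank features by a univariate relevance score and keep the top k.

    Returns the column indices of the k highest-scoring features, most
    relevant first, so redundant or irrelevant cues can be filtered out
    before training a classifier.
    """
    scores = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(scores)[::-1]       # ranked list, best first
    return order[:k]

# Example: 2 informative features hidden among 8 noise features.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 10))
y = (X[:, 2] + X[:, 7] > 0).astype(int)
print(rank_features(X, y, k=2))            # likely columns 2 and 7
```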