
    Defect Detection System for Smartphone Front Camera Based on Improved Template Matching Algorithm

    Automatic defect detection plays a crucial role in resilient manufacturing in terms of product quality and cost effectiveness. With reference to the smartphone front camera production process, the most recurrent defects can be classified as no hole, inner hole burr, outer circle damage, hole deformation, outer circle fracture and hole position offset. Because of the fast production lines and the micro size of the defects, sampling-based methods suffer from large uncertainty and limited coverage, while machine learning-based methods are characterised by low efficiency. To tackle these issues, this paper proposes a machine vision-based detection method for the smartphone front camera built on a multi-step template matching algorithm that reduces the computational effort. Specifically, to improve the algorithm's efficiency, the images of the smartphone front cameras, acquired using industrial image acquisition devices, are pre-processed with Hough circle and line transformations in order to locate the exact defect area as a region of interest (ROI). Finally, a multi-step template matching algorithm is used to detect and classify a number of common defects. Experimental results show the excellent suitability of the proposed system for detecting front camera surface defects. Benchmarking against other available technologies shows that the proposed system improves detection speed by 46% and detection accuracy by 9%. The successful industrial implementation is discussed with reference to the integration into an automatic defect detection system in a smartphone front camera manufacturing context.
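
    As a rough illustration of the localization step described above, the following Python/OpenCV sketch crops a region of interest around the camera hole with a Hough circle transform and then scores it against a defect-free template. The file names, Hough parameters and matching threshold are illustrative assumptions, not values from the paper.

    import cv2
    import numpy as np

    img = cv2.imread("front_camera.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    blur = cv2.medianBlur(img, 5)                               # suppress sensor noise

    # Locate the circular camera hole; parameters are illustrative.
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=30, minRadius=20, maxRadius=80)
    if circles is not None:
        x, y, r = np.round(circles[0, 0]).astype(int)
        pad = int(1.5 * r)                                      # ROI margin around the hole
        roi = img[max(y - pad, 0):y + pad, max(x - pad, 0):x + pad]

        # Compare the ROI against a defect-free reference template.
        template = cv2.imread("template_ok.png", cv2.IMREAD_GRAYSCALE)
        score = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED).max()
        print("defect suspected" if score < 0.8 else "pass")    # threshold is an assumption

    Restricting the template match to the ROI, rather than the whole frame, is what keeps the per-image cost low on a fast production line.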

    Intra-saccadic displacement sensitivity after a lesion to the posterior parietal cortex

    Visual perception is introspectively stable and continuous across eye movements. It has been hypothesized that displacements in retinal input caused by eye movements can be dissociated from displacements in the external world using extra-retinal information, such as a corollary discharge from the oculomotor system. The extra-retinal information can inform the visual system about an upcoming eye movement and the accompanying displacements in retinal input. The parietal cortex has been hypothesized to be critically involved in integrating retinal and extra-retinal information. Two tasks have been widely used to assess the quality of this integration: double-step saccades and intra-saccadic displacements. Double-step saccades performed by patients with parietal cortex lesions seemed to show hypometric second saccades. However, this idea has recently been refuted by demonstrating that patients with very similar lesions were able to perform the double-step saccades, albeit taking multiple saccades to reach the saccade target. It thus seems that extra-retinal information is still available for saccade execution after a lesion to the parietal lobe. Here, we investigated whether extra-retinal signals are also available for perceptual judgements in nine patients with strokes affecting the posterior parietal cortex. We assessed perceptual continuity with the intra-saccadic displacement task, exploiting the increased sensitivity that arises when a small temporal blank is introduced after saccade offset (blank effect). The blank effect is thought to reflect the availability of extra-retinal signals for perceptual judgements. Although patients exhibited a relative difference to control subjects, they still demonstrated the blank effect. The data suggest that a lesion to the posterior parietal cortex (PPC) alters the processing of extra-retinal signals but does not abolish their influence altogether.
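
    The paradigm itself is compact enough to sketch. The minimal Python fragment below lays out the trial structure of the intra-saccadic displacement task with and without a post-saccadic blank; the durations and displacement step are illustrative assumptions, not the values used in the study.

    import random

    def make_trial(blank: bool, step_deg: float = 0.5):
        # The target steps left or right during the saccade; the observer
        # later reports the perceived displacement direction.
        direction = random.choice([-1, 1])
        return {
            "displacement_deg": direction * step_deg,  # applied intra-saccadically
            "blank_ms": 200 if blank else 0,           # blank inserted at saccade offset
        }

    trials = [make_trial(blank=b) for b in [True, False] * 50]
    # Sensitivity (e.g. d') is then compared between blank and no-blank trials;
    # higher sensitivity with the blank is the blank effect discussed above.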

    Masking in Central Visual Field Under a Variety of Temporal and Spatial Configurations

    For over a century, visual masking, in which one stimulus reduces the visibility of another stimulus, has been used as a powerful tool to explore the visual system. Two major forms have emerged: backward masking and common onset masking. These two forms, which are characterized by the temporal properties of the stimuli, are often used to probe different underlying masking mechanisms, and each form typically employs a unique set of spatial characteristics of the mask. This clustering of stimulus properties makes it challenging to assess the effect of each stimulus property by itself. This dissertation describes an attempt to isolate the effects of these properties. In the first set of experiments, various masking schedules are tested, including backward, common onset, and variations between them, while keeping the spatial properties of the stimuli constant. In the second set of experiments, four-dot common onset masking is explored in detail, and in one of the experiments a single masking schedule is tested while varying the spatial properties of the mask. Across all experiments, target stimuli are presented foveally. A computational model is developed to account for data across both sets of experiments. Three important findings emerge. First, masking can be successfully obtained in the central visual field using a variety of stimulus properties. Second, there is compelling evidence that persisting traces of these stimuli play an important role in masking. Third, there is strong evidence of both spatially local and global masking effects.
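
    To make the two schedules concrete, the short Python sketch below expresses backward and common onset masking as (onset, offset) intervals relative to target onset; all durations are illustrative assumptions rather than values used in the dissertation.

    TARGET = (0, 20)  # brief foveal target: on at 0 ms, off at 20 ms

    def backward_mask(soa_ms: int, dur_ms: int = 20):
        # Backward masking: the mask turns on only after the target,
        # delayed by the stimulus onset asynchrony (SOA).
        return (soa_ms, soa_ms + dur_ms)

    def common_onset_mask(trailing_ms: int):
        # Common onset masking: the mask (e.g. four dots) appears together
        # with the target but persists after target offset.
        return (0, TARGET[1] + trailing_ms)

    print(backward_mask(soa_ms=60))            # (60, 80)
    print(common_onset_mask(trailing_ms=200))  # (0, 220)

    Intermediate schedules, as tested in the first set of experiments, simply fall between these two extremes of mask onset and offset timing.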

    Proprioceptive contribution to oculomotor control in humans

    This work was supported by an award from the Wellcome Trust Institutional Strategic Support Fund at the University of St Andrews, grant code 204821/Z/16/Z (DB). Stretch receptors in the extraocular muscles (EOMs) inform the central nervous system about the rotation of one's own eyes in the orbits. Whereas fine control of the skeletal muscles hinges critically on proprioceptive feedback, the role of proprioception in oculomotor control remains unclear. Human behavioural studies provide evidence for EOM proprioception in oculomotor control; however, behavioural and electrophysiological studies in the macaque do not. Unlike macaques, humans possess numerous muscle spindles in their EOMs. To find out whether the human oculomotor nuclei respond to proprioceptive feedback, we used functional magnetic resonance imaging (fMRI). With their eyes closed, participants placed their right index finger on the eyelid at the outer corner of the right eye. When prompted by a sound, they pushed the eyeball gently and briefly towards the nose. Control conditions separated out motor and tactile task components. The stretch of the right lateral rectus muscle was associated with activation of the left oculomotor nucleus and subthreshold activation of the left abducens nucleus. Because these nuclei control the horizontal movements of the left eye, we hypothesized that proprioceptive stimulation of the right EOM triggered left eye movement. To test this, we followed up with an eye-tracking experiment in complete darkness using the same behavioural task as in the fMRI study. The left eye moved actively in the direction of the passive displacement of the right eye, albeit with a smaller amplitude. Eye tracking corroborated the neuroimaging findings, suggesting a proprioceptive contribution to ocular alignment.

    Test subject performance measurements: frame rate upconversion and latency compensation using image-based rendering

    Traditionally in computer graphics, complex 3D scenes are represented as a collection of primitive geometric surfaces. The geometric representation is then rendered into a 2D raster image suitable for display devices. Image-based rendering is an interesting addition to geometry-based rendering: performance is constrained only by the display resolution, not by scene geometry complexity or shader complexity. When used together with a geometry-based renderer, an image-based renderer can extrapolate additional frames into an animation sequence from the geometrically rendered frames. Existing research into image-based rendering methods is reviewed in the context of interactive computer graphics. An image-based renderer is also implemented to run on a modern GPU shader architecture. Finally, it is used in a first-person shooter game experiment to measure task performance under frame rate upconversion. Image-based rendering is found to be promising for frame rate upconversion as well as for latency compensation. An implementation of an image-based renderer is found to be feasible on modern GPUs. The experiment results show a considerable improvement in test subject hit rates when frame rate upconversion is used together with latency compensation.
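
    The extrapolation idea can be sketched in a few lines. The NumPy fragment below forward-warps the last geometrically rendered frame along per-pixel motion vectors to synthesize an extrapolated frame; on a GPU this would run as a shader, and the naive scatter here ignores the hole-filling and occlusion handling a practical renderer needs. The inputs are assumptions for illustration, not the thesis implementation.

    import numpy as np

    def extrapolate(frame: np.ndarray, motion: np.ndarray, t: float) -> np.ndarray:
        """frame: HxWx3 colors; motion: HxWx2 pixel velocities per frame; t in (0, 1]."""
        h, w = frame.shape[:2]
        out = np.zeros_like(frame)
        ys, xs = np.mgrid[0:h, 0:w]
        # Destination coordinates after moving t frames along the motion vectors.
        xd = np.clip(np.round(xs + t * motion[..., 0]).astype(int), 0, w - 1)
        yd = np.clip(np.round(ys + t * motion[..., 1]).astype(int), 0, h - 1)
        out[yd, xd] = frame[ys, xs]  # naive scatter; holes and overlaps ignored
        return out

    Because the warp cost depends only on the output resolution, such extrapolated frames can be produced at a fixed cost regardless of scene complexity, which is what makes the technique attractive for upconversion and latency compensation.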

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. As data are required to build machine learning networks, sensors are one of the most important enabling technologies. In addition, machine learning networks can contribute to improvements in sensor performance and the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, the camera calibration of intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens.

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work studies the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we suggest and implement different optimizations that allow the computation to run at interactive frame rates. This thesis also analyzes two different aspects of the generation of anti-aliased images. The first targets the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method takes advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly; this process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
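
    As a point of reference for the first part, the following Python sketch implements the traditional ray marching baseline for participating media in its simplest, absorption-only form, stepping through a density field with Beer-Lambert attenuation. The density field, step count and coefficients are illustrative assumptions, not the thesis implementation.

    import numpy as np

    def march(origin, direction, density, sigma_t=1.0, steps=64, t_max=4.0):
        """Accumulate emitted radiance and transmittance along a single ray."""
        dt = t_max / steps
        transmittance = 1.0
        radiance = 0.0
        for i in range(steps):
            p = origin + (i + 0.5) * dt * direction       # midpoint sample on the ray
            rho = density(p)
            transmittance *= np.exp(-sigma_t * rho * dt)  # Beer-Lambert absorption
            radiance += transmittance * rho * dt          # simple emission term
        return radiance

    # Example: a spherical puff of smoke centred at the origin.
    smoke = lambda p: max(0.0, 1.0 - float(np.linalg.norm(p)))
    print(march(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]), smoke))

    The per-ray cost grows linearly with the step count, which is why optimizations over this baseline are needed to reach interactive frame rates.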