
    A study of data coding technology developments in the 1980-1985 time frame, volume 2

    The source parameters of digitized analog data are discussed. Different data compression schemes are outlined and analyses of their implementation are presented. Finally, bandwidth compression techniques are given for video signals.

    Machine Analysis of Facial Expressions

    No abstract

    The contour tree image encoding technique and file format

    The process of contourization is presented which converts a raster image into a discrete set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes, thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimises noticeable artifacts in the simplified image. The contour merging technique offers a complementary lossy compression system to the QDCT (Quantised Discrete Cosine Transform). The artifacts introduced by the two methods are very different; QDCT produces a general blurring and adds extra highlights in the form of overshoots, whereas contour merging sharpens edges, reduces highlights and introduces a degree of false contouring. A format based on the contourization technique which caters for most image types is defined, called the contour tree image format. Image operations directly on this compressed format have been studied, which for certain manipulations can offer significant operational speed increases over using a standard raster image format. A couple of examples of operations specific to the contour tree format are presented, showing some of the features of the new format. Science and Engineering Research Council
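The plateau-extraction step described above can be sketched in a few lines; this is a minimal illustration of grouping equal-valued, connected pixels into plateaux (the first stage of contourization), not the paper's contour coder or tree builder, and all names here are my own:

```python
from collections import deque

def find_plateaux(img):
    """Group pixels of a raster image into plateaux: maximal
    4-connected regions of equal intensity. The hierarchical
    contour-tree construction and coding stages are not shown."""
    h, w = len(img), len(img[0])
    label = [[None] * w for _ in range(h)]
    plateaux = []
    for y in range(h):
        for x in range(w):
            if label[y][x] is not None:
                continue
            # BFS flood fill over equal-valued neighbours
            idx, val, cells = len(plateaux), img[y][x], []
            q = deque([(y, x)])
            label[y][x] = idx
            while q:
                cy, cx = q.popleft()
                cells.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w \
                       and label[ny][nx] is None and img[ny][nx] == val:
                        label[ny][nx] = idx
                        q.append((ny, nx))
            plateaux.append({"value": val, "cells": cells})
    return plateaux

# tiny 3x3 example: one background region, one ridge, one peak
img = [
    [0, 0, 1],
    [0, 2, 1],
    [0, 0, 0],
]
regions = find_plateaux(img)
```

Nesting these regions by spatial inclusion would then give the contour tree described in the abstract.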

    A Dataset of Gaze Behavior in VR Faithful to Natural Statistics

    Eye tracking technology is advancing swiftly and many areas of research have begun taking advantage of this. Existing eye trackers project gaze onto a 2D plane, whether it be the display of a head-mounted virtual reality (VR) helmet or an image of a real-life scene the user is in. This allows us to easily analyze what a viewer is looking at, but limits classification of gaze behaviors from this type of signal. Instead, a system that takes into account head movements within the same space as gaze velocity allows researchers to classify more advanced gaze behaviors such as smooth pursuits and fixations resulting from vestibulo-ocular reflex. For this work data is collected in real world environments where head and gaze movements are recorded over a variety of tasks. The resulting data is then used to construct a distribution of naturally occurring gaze behaviors. This distribution is then used to drive a VR data collection experiment that elicits specific gaze behaviors such as fixations and saccades with specific velocities and directions. A dataset of 12 subjects was collected while they played a shooting game in the virtual world. Data was analyzed to see if the intended eye movements were produced, and also to compare the eye movements that occur in fast versus slow presentation of targets.

    What you see is what you feel: Top-down emotional effects in face detection

    Face detection is an initial step of many social interactions involving a comparison between a visual input and a mental representation of faces, built from previous experience. Furthermore, whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. In four studies and a computational model, we investigated how emotions affect mental representations of faces and how facial representations could be used to transmit and communicate people’s emotional states. To this end, we used an adapted reverse correlation technique suggested by Gill et al. (2019), which was based on an earlier idea of the ‘Superstitious Approach’ (Gosselin & Schyns, 2003). In Experiment 1 we measured how naturally occurring anxiety and depression, caused by external factors, affected people’s mental representations of faces. In two sessions, on separate days, participants (coders) were presented with ‘colourful’ visual noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that were identified by the coders as a face, we reconstructed the pictorial mental representation utilised by each participant in the identification process. Across coders, we found significant correlations between changes in the size of the mental representation of faces and changes in their level of depression. Our findings provide a preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants’ mental representations can reflect their emotional state, we conducted a validation study (Experiment 2) with a group of naïve participants (verifiers) who were asked to classify the reconstructed mental representations of faces by emotion. Thus, we assessed whether the mental representations communicate coders’ emotional states to others.
The analysis showed no significant correlation between coders’ emotional states, depicted in their mental representation of faces, and verifiers’ evaluation scores. In Experiment 3, we investigated how different induced moods, negative and positive, affected mental representations of faces. Coders underwent two different mood induction conditions during two separate sessions. They were presented with the same ‘colourful’ noise stimuli used in Experiment 1 and asked to detect faces. We were able to reconstruct pictorial mental representations of faces based on the identified fragments. The analysis showed a significant negative correlation between changes in coders’ mood along the dimension of arousal and changes in the size of their mental representation of faces. Similar to Experiment 2, we conducted a validation study (Experiment 4) to investigate if coders’ mood could have been communicated to others through their mental representations of faces. As in Experiment 2, we found no correlation between coders’ mood, depicted in their mental representations of faces, and verifiers’ evaluation of the intensity of the transmitted emotional expression. Lastly, we tested a preliminary computational model (Experiment 5) to classify and predict coders’ emotional states based on their reconstructed mental representations of faces. In spite of the small number of training examples and the high dimensionality of the input, the model performed just above chance level. Future studies should look at the possibility of improving the computational model by using a larger training set and testing other classifiers. Overall, the present work confirmed the presence of facial templates used during face detection. It provides an adapted version of a reverse correlation technique that can be used to access mental representations of faces, with a significant reduction in the number of trials.
Lastly, it provides evidence on how emotions can influence the size of mental representations of faces.
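The core of the reverse-correlation reconstruction described above can be sketched as follows; this is a hypothetical toy example (the stimulus size, the response rule, and every name are my assumptions — the actual studies used ‘colourful’ noise images and far more trials):

```python
import random

def classification_image(noise_trials, responses):
    """Reverse-correlation sketch in the 'Superstitious Approach'
    style: average the noise stimuli from trials on which the
    observer reported seeing a face. The average approximates the
    observer's internal face template. `noise_trials` is a list of
    flat pixel lists; `responses` is a parallel list of booleans."""
    hits = [t for t, seen in zip(noise_trials, responses) if seen]
    n = len(hits)
    return [sum(px) / n for px in zip(*hits)]

random.seed(0)
# 100 tiny 4-pixel Gaussian noise "stimuli"
trials = [[random.gauss(0, 1) for _ in range(4)] for _ in range(100)]
# hypothetical observer: reports a face whenever pixel 0 is bright
resp = [t[0] > 0.5 for t in trials]
template = classification_image(trials, resp)
```

Because only bright-pixel-0 trials are averaged, the reconstructed template is bright at pixel 0 — the same logic by which the coders' selections reveal their facial templates.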

    Cognitive Information Processing

    Contains research objectives and summary of research on fourteen research projects and reports on four research projects. Joint Services Electronics Program (Contract DAAB07-75-C-1346); National Science Foundation (Grant EPP74-12653); National Science Foundation (Grant ENG74-24344); National Institutes of Health (Grant 2 PO1 GM19428-04); Swiss National Funds for Scientific Research; M.I.T. Health Sciences Fund (Grant 76-11); National Institutes of Health (Grant F03 GM58698); National Institutes of Health (Biomedical Sciences Support Grant); Associated Press (Grant

    A new and general approach to signal denoising and eye movement classification based on segmented linear regression

    We introduce a conceptually novel method for eye-movement signal analysis. The method is general in that it does not place severe restrictions on sampling frequency, measurement noise or subject behavior. Event identification is based on segmentation that simultaneously denoises the signal and determines event boundaries. The full gaze position time-series is segmented into an approximately optimal piecewise linear function in O(n) time. Gaze feature parameters for classification into fixations, saccades, smooth pursuits and post-saccadic oscillations are derived from human labeling in a data-driven manner. The range of oculomotor events identified and the powerful denoising performance make the method usable for both low-noise controlled laboratory settings and high-noise complex field experiments. This is desirable for harmonizing the gaze behavior (in the wild) and oculomotor event identification (in the laboratory) approaches to eye movement behavior. Denoising and classification performance are assessed using multiple datasets. Full open source implementation is included. Peer reviewed
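The segmentation idea can be illustrated with a much simpler greedy variant; this is a sketch under my own assumptions (residual threshold, greedy boundary placement), not the paper's jointly optimised O(n) algorithm:

```python
def fit_line(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    return b, my - b * mx

def segment(ts, ys, tol=1.0):
    """Greedy piecewise-linear segmentation: grow each segment while
    the max residual of its least-squares line stays under `tol`.
    Returns half-open index ranges. Segment boundaries of this kind
    are what separate e.g. a flat fixation from a fast saccade."""
    segs, start = [], 0
    for end in range(2, len(ys) + 1):
        xs, yy = ts[start:end], ys[start:end]
        b, a = fit_line(xs, yy)
        if max(abs(y - (a + b * x)) for x, y in zip(xs, yy)) > tol:
            segs.append((start, end - 1))  # close segment before this point
            start = end - 1
    segs.append((start, len(ys)))
    return segs

# synthetic gaze trace: a fixation (flat) followed by a saccade (steep ramp)
ts = list(range(10))
ys = [0, 0, 0, 0, 0, 5, 10, 15, 20, 25]
segs = segment(ts, ys)  # boundary falls where the slope changes
```

Classifying each fitted segment by slope (near-zero for fixations, large for saccades) then mirrors the data-driven labeling step the abstract describes.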