
    Bodyspace at the pub: sexual orientations and organizational space

    In this article we argue that sexuality is not only an undercurrent of service environments, but is integral to the way these workspaces are experienced and negotiated. Drawing on Sara Ahmed’s (2006a) ‘orientation’ thesis, we develop a concept of ‘bodyspace’ to suggest that individuals understand, shape and make meaning of work spaces through complex sexually-orientated negotiations. Presenting analysis from a study of UK pubs, we explore bodyspace in the lived experience of workplace sexuality through three elements of orientation: background; bodily dwelling; and lines of directionality. Our findings show how organizational spaces afford or mitigate possibilities for particular bodies, which simultaneously shape expectations and experiences of sexuality at work. Bodyspace therefore provides one way of exposing the connection between sexual ‘orientation’ and the lived experience of service sector work.

    Common Arc Method for Diffraction Pattern Orientation

    The very short pulses of X-ray free-electron lasers have opened the way to obtaining diffraction signals from single particles beyond the radiation dose limit. For 3D structure reconstruction, many patterns are recorded, each with the object in an unknown orientation. We describe a method for orienting continuous diffraction patterns of non-periodic objects that utilizes intensity correlations along the curved intersections of the corresponding Ewald spheres, hence named Common Arc orientation. The present implementation of the algorithm optionally takes Friedel's law into account, handles missing data, and can determine the point group of symmetric objects. Its performance is demonstrated on simulated diffraction datasets, and verification of the results indicates high orientation accuracy even at low signal levels. The Common Arc method fills a gap in the wide palette of orientation methods.
    Comment: 16 pages, 10 figures
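    To make the core idea concrete, here is a minimal, hedged sketch of a simplified "common line" analogue of the approach: in the flat-Ewald-sphere (small-angle) limit the curved common arc reduces to a straight line through the pattern centre, and correlating intensity profiles along candidate lines scores candidate relative orientations. The function names, sampling scheme, and brute-force search below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def radial_profile(pattern, angle, n_samples=64):
    """Sample intensities along a ray through the pattern centre at `angle` (hypothetical helper)."""
    h, w = pattern.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0.0, min(cy, cx) - 1.0, n_samples)
    ys = np.round(cy + radii * np.sin(angle)).astype(int)
    xs = np.round(cx + radii * np.cos(angle)).astype(int)
    return pattern[ys, xs]


def best_common_line(pattern_a, pattern_b, n_angles=180):
    """Brute-force search for the pair of line directions whose intensity profiles correlate best."""
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    best_pair, best_score = None, -np.inf
    for a1 in angles:
        prof1 = radial_profile(pattern_a, a1)
        for a2 in angles:
            prof2 = radial_profile(pattern_b, a2)
            score = np.corrcoef(prof1, prof2)[0, 1]  # Pearson correlation of the two profiles
            if score > best_score:
                best_pair, best_score = (a1, a2), score
    return best_pair, best_score
```

    In the actual method the intersection of two Ewald spheres is a curved arc rather than a straight line, so the sampled curve depends on the candidate 3D rotation, but the scoring principle (correlate intensities along the shared curve) is the same.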

    The What-And-Where Filter: A Spatial Mapping Neural Network for Object Recognition and Image Understanding

    The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system based upon oriented filters, such as the role of cliff filters with and without normalization, the double-peak problem of maximum orientation across size scales, and the different self-similar interpolation properties across orientation as compared with size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models in which serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
    Advanced Research Projects Agency (ONR-N00014-92-J-4015, AFOSR 90-0083); British Petroleum (89-A-1204); National Science Foundation (IRI-90-00530, Graduate Fellowship); Office of Naval Research (N00014-91-J-4100, N00014-95-1-0409, N00014-95-1-0657); Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334)
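    The following is a loose, hedged sketch of the general pattern the abstract describes: a bank of oriented detectors at several sizes, with winner-take-all competition across orientation and size at each position producing a per-pixel (orientation, size) map. It is an illustrative reconstruction under stated assumptions, not the published What-and-Where filter; the kernel shape, the competition rule, and all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import convolve


def oriented_bar(length, angle):
    """A crude oriented line kernel of the given length and angle (illustrative detector)."""
    size = length if length % 2 else length + 1
    kernel = np.zeros((size, size))
    centre = size // 2
    for r in np.linspace(-centre, centre, 4 * size):
        y = int(round(centre + r * np.sin(angle)))
        x = int(round(centre + r * np.cos(angle)))
        kernel[y, x] = 1.0
    return kernel / kernel.sum()


def where_map(image, angles=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), lengths=(5, 9, 13)):
    """Return, per pixel, the orientation index and size index that win the competition."""
    responses = np.stack([
        np.stack([convolve(image, oriented_bar(length, angle)) for length in lengths])
        for angle in angles
    ])  # shape: (n_angles, n_lengths, H, W)
    flat = responses.reshape(len(angles) * len(lengths), *image.shape)
    winner = flat.argmax(axis=0)  # winner-take-all across orientation and size at each position
    return winner // len(lengths), winner % len(lengths)


# Usage (illustrative): orientation_idx, size_idx = where_map(np.random.rand(64, 64))
```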

    Deep Reflectance Maps

    Undoing the image formation process and thereby decomposing appearance into its intrinsic properties is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials, and illumination from images alone, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates through additional supervision in an indirect scheme that first predicts surface orientation and then predicts the reflectance map by learning-based sparse data interpolation. To analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images.
    Comment: project page: http://homes.esat.kuleuven.be/~krematas/DRM
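    As a rough illustration of the direct, end-to-end formulation, the sketch below maps an input image to a single-channel reflectance map indexed by viewer-centred surface-normal direction using a small encoder-decoder. The architecture, layer sizes, and output parameterisation are assumptions for illustration only and do not reproduce the network from the paper.

```python
import torch
import torch.nn as nn


class ReflectanceMapNet(nn.Module):
    """Toy encoder-decoder: RGB image -> reflectance map over normal directions (hypothetical)."""

    def __init__(self, map_size=32):
        super().__init__()
        # Encoder: compress the image into a compact feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
        )
        # Decoder: feature vector -> map_size x map_size reflectance map,
        # interpreted as intensity as a function of surface orientation.
        self.decoder = nn.Sequential(
            nn.Linear(256, map_size * map_size),
            nn.Unflatten(1, (1, map_size, map_size)),
            nn.Sigmoid(),
        )

    def forward(self, image):
        return self.decoder(self.encoder(image))


# Usage (illustrative): predict a 32x32 reflectance map from a 128x128 crop.
net = ReflectanceMapNet()
pred = net(torch.rand(1, 3, 128, 128))  # -> tensor of shape (1, 1, 32, 32)
```

    The indirect scheme described in the abstract would add an intermediate surface-orientation prediction and a learned sparse-data interpolation step; that stage is omitted here for brevity.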