
    A Novel Methodology for Calculating Large Numbers of Symmetrical Matrices on a Graphics Processing Unit: Towards Efficient, Real-Time Hyperspectral Image Processing

    Hyperspectral imagery (HSI) is often processed to identify targets of interest. Many of the quantitative analysis techniques developed for this purpose mathematically manipulate the data to derive information about the target of interest based on local spectral covariance matrices. The calculation of a local spectral covariance matrix for every pixel in a given hyperspectral data scene is so computationally intensive that real-time processing with these algorithms is not feasible with today’s general-purpose processing solutions. Specialized solutions are cost-prohibitive, inflexible, inaccessible, or not feasible for on-board applications. Advances in graphics processing unit (GPU) capabilities and programmability offer an opportunity for general-purpose computing with access to hundreds of processing cores in a system that is affordable and accessible, with a flexibility that other specialized solutions do not offer. The architecture of the NVIDIA GPU used in this research differs significantly from that of other parallel computing solutions, and with such a substantial change in architecture, the paradigm for programming graphics hardware differs significantly from traditional serial and parallel software development paradigms. In this research, a methodology is developed for mapping an HSI target detection algorithm to the NVIDIA GPU hardware and the Compute Unified Device Architecture (CUDA) Application Programming Interface (API). The RX algorithm is chosen as a representative stochastic HSI algorithm that requires the calculation of a spectral covariance matrix. The developed methodology calculates a local covariance matrix for every pixel in the input HSI data scene. A characterization of the limitations imposed by the chosen GPU is given, and a path forward toward optimization of a GPU-based method for real-time HSI data processing is defined.
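    To make the computation concrete, the sketch below is a plain (non-GPU) reference version of local RX anomaly detection in Python/NumPy: for each pixel, the background mean and covariance are estimated from a sliding spatial window, and the pixel is scored by its squared Mahalanobis distance to that local background. The function name, window size, and regularization term are illustrative assumptions, not the thesis's CUDA implementation.

```python
import numpy as np

def rx_local(scene, half_win=5, eps=1e-6):
    """Local RX anomaly scores for a (rows, cols, bands) hyperspectral cube.

    For each pixel, estimate the mean and covariance of a
    (2*half_win + 1)^2 spatial neighborhood, then score the pixel by
    its squared Mahalanobis distance to that local background.
    """
    rows, cols, bands = scene.shape
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            # Clip the window at the scene borders.
            r0, r1 = max(0, r - half_win), min(rows, r + half_win + 1)
            c0, c1 = max(0, c - half_win), min(cols, c + half_win + 1)
            window = scene[r0:r1, c0:c1].reshape(-1, bands)
            mu = window.mean(axis=0)
            # Small diagonal loading keeps the covariance invertible.
            cov = np.cov(window, rowvar=False) + eps * np.eye(bands)
            d = scene[r, c] - mu
            scores[r, c] = d @ np.linalg.solve(cov, d)
    return scores
```

    Because every pixel's covariance estimate is independent of every other pixel's, the per-pixel work can be distributed across GPU threads, which is roughly the parallelization opportunity the abstract describes.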

    The meaning and experience of well-being in dementia for psychiatrists involved in diagnostic disclosure: a qualitative study

    Literature indicates that people's experiences of receiving a diagnosis of dementia can have a lasting impact on well-being. Psychiatrists frequently take the lead in communicating a diagnosis, but little is known about the factors that could contribute to potential disparities between actual and best practice with regard to diagnostic disclosure. A clearer understanding of psychiatrists’ subjective experiences of disclosure is therefore needed to improve adherence to best-practice guidelines and ensure that diagnostic disclosure facilitates living well with dementia. This study utilized a qualitative methodology: semi-structured interviews conducted with 11 psychiatrists were analyzed using Interpretative Phenomenological Analysis (IPA). Three superordinate and nine subordinate themes emerged from the data analysis: (i) “The levels of well-being” (Continuing with life, Keeping a sense of who they are, Acceptance of the self); (ii) “Living well is a process” (Disclosure can set the scene for well-being, Positive but realistic messages, Whose role is it to support well-being?); and (iii) “Ideal care versus real care” (Supporting well-being is not prioritized, There isn't time, The fragmentation of care). Findings indicate that psychiatrists frame well-being in dementia as a multi-faceted biopsychosocial construct, but that certain nihilistic attitudes may affect how well-being is integrated into diagnostic communication. Such attitudes were linked with the perceived threat of dementia and the limitations of post-diagnostic care. Behaviors used to manage the negative affect associated with ethical and clinical tensions triggered by attempts to facilitate well-being at the point of diagnosis, and their impact on adherence to best-practice disclosure, are discussed.

    Mining Balanced Sequential Patterns in RTS Games

    The video game industry has grown enormously over the last twenty years, bringing new challenges to the artificial intelligence and data analysis communities. We tackle here the problem of automatic discovery of strategies in real-time strategy games through pattern mining. Such patterns are the basic units for many tasks, such as automated agent design, but also for building tools for professionally played video games in the electronic sports scene. Our formalization relies on a sequential pattern mining approach and a novel measure, the balance measure, indicating how likely a strategy is to win. We evaluate our methodology on a real-time strategy game that is professionally played in the electronic sport community.
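    The abstract does not give the exact definition of the balance measure, but one natural reading is the win rate among replays containing a pattern, with values near 0.5 indicating a "balanced" strategy. The Python sketch below implements that reading; the data layout, function names, and toy replays are assumptions for illustration.

```python
from typing import List, Sequence, Tuple

def is_subsequence(pattern: Sequence[str], seq: Sequence[str]) -> bool:
    """True if `pattern` occurs in `seq` as an ordered (gappy) subsequence."""
    it = iter(seq)
    return all(action in it for action in pattern)  # `in` advances the iterator

def balance(pattern: Sequence[str],
            replays: List[Tuple[Sequence[str], bool]]) -> float:
    """Fraction of replays containing `pattern` that were won.

    `replays` pairs each player's action sequence with a win flag.
    Near 1.0 the strategy tends to win, near 0.0 it tends to lose,
    and near 0.5 it is 'balanced' under this reading.
    """
    outcomes = [won for seq, won in replays if is_subsequence(pattern, seq)]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Toy usage with hypothetical build-order actions:
replays = [
    (["barracks", "marine", "marine", "expand"], True),
    (["gateway", "zealot", "expand"], False),
    (["barracks", "expand", "marine"], True),
]
print(balance(["barracks", "marine"], replays))  # -> 1.0
```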

    Testing QoE in Different 3D HDTV Technologies

    Three-dimensional (3D) display technology has started flooding the consumer television market. A number of different systems are available, with different marketing strategies and different advertised advantages. The main goal of the experiment described in this paper is to compare these systems in terms of achievable Quality of Experience (QoE) in different situations. The display systems considered are a liquid crystal display using polarized light and passive lightweight glasses to separate the left- and right-eye images, a plasma display with time-multiplexed images and active shutter glasses, and a projection system with time-multiplexed images and active shutter glasses. As no standardized test methodology has been defined for stereoscopic systems, we developed our own no-reference approach to testing different aspects of QoE on different systems using semantic differential scales. We present an analysis of the scores with respect to the different phenomena under study and identify which of the tested aspects can really express a difference in the performance of the considered display technologies.
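    As a hypothetical illustration of how semantic differential scores might be compared across display systems, the Python sketch below computes per-system summary statistics and a one-way ANOVA for a single rated attribute. The ratings, system labels, and choice of test are assumptions made for illustration; the paper's actual analysis is not reproduced here.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical scores: one list of participant ratings per system,
# on a 7-point semantic differential scale for one attribute
# (e.g., perceived depth).
scores = {
    "LCD + passive polarized glasses":    [5, 6, 5, 4, 6, 5],
    "Plasma + active shutter glasses":    [4, 5, 4, 4, 5, 4],
    "Projector + active shutter glasses": [5, 5, 6, 5, 4, 5],
}

for system, ratings in scores.items():
    print(f"{system}: mean={np.mean(ratings):.2f}, "
          f"sd={np.std(ratings, ddof=1):.2f}")

# One-way ANOVA: does the display system affect the rated attribute?
f, p = f_oneway(*scores.values())
print(f"F={f:.2f}, p={p:.3f}")
```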

    Fidelity metrics for virtual environment simulations based on spatial memory awareness states

    This paper describes a methodology based on human judgments of memory awareness states for assessing the simulation fidelity of a virtual environment (VE) in relation to its real-scene counterpart. To demonstrate the distinction between task-performance-based approaches and additional human evaluation of cognitive awareness states, a photorealistic VE was created. The resulting scenes, displayed on a head-mounted display (HMD) with or without head tracking and on a desktop monitor, were then compared to the real-world task situation they represented, investigating spatial memory after exposure. Participants described how they completed their spatial recollections by selecting one of four awareness states after retrieval, in an initial test and in a retention test a week after exposure to the environment. These states reflected the level of visual mental imagery involved during retrieval and the familiarity of the recollection, and also included guesses, even if informed. Experimental results revealed variations in the distribution of participants’ awareness states across conditions while, in certain cases, task performance failed to reveal any. Experimental conditions that incorporated head tracking were not associated with visually induced recollections. Generally, simulation of task performance does not necessarily lead to simulation of the awareness states involved when completing a memory task. The general premise of this research focuses on how tasks are achieved, rather than only on what is achieved. The extent to which judgments of human memory recall, memory awareness states, and presence in the physical environment and the VE are similar provides a fidelity metric for the simulation in question.
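    One way to turn the distribution of awareness states into a testable fidelity comparison is a contingency-table test across conditions, sketched below in Python with made-up counts. The four state labels, the condition names, and the choice of a chi-square test are assumptions for illustration, not the paper's reported analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of awareness-state reports per condition
# (columns: visual imagery, familiarity, informed guess, pure guess).
counts = {
    "real scene":       [22, 10, 5, 3],
    "HMD + tracking":   [14, 14, 8, 4],
    "HMD, no tracking": [10, 15, 9, 6],
    "desktop monitor":  [9, 14, 11, 6],
}

# Test whether the state distribution depends on the viewing condition;
# a large, significant statistic would indicate lower simulation fidelity.
table = np.array(list(counts.values()))
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```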

    Variable Resolution & Dimensional Mapping for 3D Model Optimization

    Three-dimensional computer models, especially geospatial architectural data sets, can be visualized in the same way humans experience the world, providing a realistic, interactive experience. Scene familiarization, architectural analysis, scientific visualization, and many other applications would benefit from finely detailed, high-resolution 3D models. Automated methods for constructing these 3D models have traditionally produced data sets that are either low-fidelity or inaccurate, or that are initially highly detailed but very labor- and time-intensive to construct. Such data sets are often not practical for common real-time usage and are not easily updated. This thesis proposes Variable Resolution & Dimensional Mapping (VRDM), a methodology developed to address some of the limitations of existing approaches to model construction from images. Key components of VRDM are texture palettes, which enable variable- and ultra-high-resolution images to be easily composited, and texture features, which allow image features to be integrated as image or geometry and can modify the geometric model structure to add detail. These components support a primary VRDM objective: facilitating model refinement with additional data until the desired fidelity is achieved, as the practical limits of infinite detail are approached. Texture levels, the third component, enable real-time interaction with a very detailed model along with the flexibility of having alternate pixel data for a given area of the model; this is achieved through extra dimensions. Together these techniques have been used to construct models that can contain gigabytes of imagery data.
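    The abstract describes texture palettes and texture levels only at a high level, so the following Python sketch is a speculative interpretation: a palette that stores multiple resolution tiers per model region and picks a tier from a simple view-distance budget. All class names, fields, and the selection heuristic are assumptions, not the thesis's implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class TextureLevel:
    """One resolution tier of pixel data for a region of the model."""
    resolution: int   # texels per tile side
    pixels: bytes     # raw image data for this tier

@dataclass
class TexturePalette:
    """Composites variable-resolution tiles into one addressable palette.

    Each (row, col) region maps to several TextureLevels, so alternate
    pixel data for the same area can coexist at different detail tiers.
    """
    tiles: Dict[Tuple[int, int], Dict[int, TextureLevel]] = field(
        default_factory=dict)

    def add_tile(self, region: Tuple[int, int], level: TextureLevel) -> None:
        self.tiles.setdefault(region, {})[level.resolution] = level

    def select(self, region: Tuple[int, int],
               view_distance: float) -> Optional[TextureLevel]:
        """Pick the coarsest tier that still meets a crude
        screen-space detail budget for the given view distance."""
        tiers = self.tiles.get(region, {})
        if not tiers:
            return None
        target = max(64, int(4096 / max(view_distance, 1.0)))
        for res in sorted(tiers):          # smallest adequate tier wins
            if res >= target:
                return tiers[res]
        return tiers[max(tiers)]           # fall back to the finest tier
```

    Under this reading, refining the model with additional data amounts to adding finer tiers to existing regions, while real-time interaction only ever touches the tier the budget selects.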
