
    Color Image Enhancement via Combine Homomorphic Ratio and Histogram Equalization Approaches: Using Underwater Images as Illustrative Examples

    The histogram is one of the important characteristics of a grayscale image, and histogram equalization is an effective method of image enhancement. When processing color images in models such as RGB, histogram equalization can be applied to each color component separately, and a new color image is then composed from the processed components. This traditional way of processing color images does not preserve the existing relation, or correlation, between the colors at each pixel. In this work, a new model of color image enhancement is proposed that preserves the ratios of colors at all pixels after processing the image. The model is described for color histogram equalization (HE), and examples of its application to color images are given. Our preliminary results show that the model combined with HE can be effectively used for enhancing color images, including underwater images. Intensive computer simulations show that, for single underwater image enhancement, the presented method increases image contrast and brightness and yields a natural appearance with relatively genuine color.
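The ratio-preserving idea can be sketched in a few lines (a minimal numpy sketch, not the paper's exact algorithm: the intensity definition as the per-pixel RGB mean and all function names are assumptions for illustration):

```python
import numpy as np

def equalize_intensity(i):
    """Classic histogram equalization on a uint8 single-channel image."""
    hist = np.bincount(i.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    return (cdf * 255.0)[i]  # map each pixel through the equalized CDF

def ratio_preserving_he(rgb):
    """Equalize intensity only, then apply the same per-pixel gain to R, G, and B,
    so the R:G:B ratios at every pixel are preserved (up to clipping)."""
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=2)  # assumed intensity definition
    eq = equalize_intensity(np.clip(intensity, 0, 255).astype(np.uint8))
    scale = eq / np.maximum(intensity, 1e-6)  # single gain per pixel, shared by all channels
    return np.clip(rgb * scale[..., None], 0, 255).astype(np.uint8)
```

Because one scalar gain per pixel multiplies all three channels, the color ratios survive the enhancement, which is the key difference from equalizing each channel independently.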

    Using the Natural Scenes’ Edges for Assessing Image Quality Blindly and Efficiently

    Two real blind/no-reference (NR) image quality assessment (IQA) algorithms in the spatial domain are developed. To measure image quality, the introduced approach uses a novel set of features based on the edges of natural scenes. The heightened sensitivity of the human eye to the information carried by the edges and contours of an image supports this choice. The effectiveness of the proposed technique in quantifying image quality has been studied. The gathered features are formed using both Weibull distribution statistics and two sharpness functions to devise two separate NR IQA algorithms. The presented algorithms need neither training on databases of human judgments nor prior knowledge about expected distortions, so they are true NR IQA algorithms. In contrast to most no-reference IQA models, the model used in this study is generic: it is not specialized to any particular distortion type. When the proposed algorithms are tested on the LIVE database, experiments show that they correlate well with subjective opinion scores. They also show that the introduced methods significantly outperform the popular full-reference peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) methods. Besides, they outperform the recently developed NR natural image quality evaluator (NIQE) model.
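The edge-based feature idea can be illustrated roughly as follows (a simplified numpy sketch, not the published features: the quantile threshold, the moment statistics standing in for a full Weibull fit, and the variance-of-Laplacian sharpness function are all assumptions for illustration):

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude; edges carry the detail the eye is most sensitive to."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def edge_features(img, edge_quantile=0.9):
    """Toy edge-based NR features: moment statistics of strong gradient
    magnitudes plus a variance-of-Laplacian sharpness score."""
    img = img.astype(np.float64)
    mag = gradient_magnitude(img)
    # Keep only the strongest gradients as "edge" pixels (threshold is an assumption).
    strong = mag[mag >= np.quantile(mag, edge_quantile)]
    mean_edge, std_edge = strong.mean(), strong.std()  # stand-ins for Weibull-fit statistics
    # Variance of the 4-neighbour Laplacian: a classic sharpness function.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.array([mean_edge, std_edge, lap.var()])
```

Blurring an image lowers both the edge-magnitude statistics and the sharpness score, which is the direction of sensitivity such features need; the described method fits full Weibull distribution statistics rather than raw moments.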

    Crosstalk in stereoscopic displays

    Crosstalk is an important image quality attribute of stereoscopic 3D displays. The research presented in this thesis examines the presence, mechanisms, simulation, and reduction of crosstalk for a selection of stereoscopic display technologies. High levels of crosstalk degrade the perceived quality of stereoscopic displays; hence it is important to minimise crosstalk. This thesis provides new insights that are critical to a detailed understanding of crosstalk and, consequently, to the development of effective crosstalk reduction techniques.

    Rummaging Through the Wreckage: Geographies of Trauma, Memory, and Loss at the National September 11th Memorial & Museum at the World Trade Center

    This dissertation traces the emergence of 9/11 memory as it is shaped in relation to the event's memorialization at nationally dedicated landscapes of memory. Focusing on the National September 11th Memorial & Museum, the National Flight 93 Memorial, and the National Pentagon 9/11 Memorial, my research examines how cultural memory is mediated through the establishment of 'places of memory' within the built environment. Here, I argue, the preservation of place acts as a repository of national memory by safeguarding the history of 9/11 for future generations. Contextualizing these landscapes of memory within the global war on terrorism, my analytical framework engages the transnational significance of 9/11 memory in a global world. Accordingly, this research situates 9/11 remembrance within interdisciplinary and cross-border conversations that theorize national practices of preservation and commemoration in relation to transnational flows of people, information, and ideas. Here, my research articulates the formation of 9/11 memory as a unique 'geography of trauma.' Offering an original contribution to geography, this research theorizes the spatial and temporal movement of traumatic memories across time and space. Aimed at understanding how these historic sites are mediated in relation to other landscapes of violence and cultural trauma, past and present, my research draws on critical geopolitical theorizations of the nation-state, feminist theories of emotion and embodiment, queer deployments of affect, and cultural theories of memory as tools for navigating post-structural ideas of power, knowledge, discourse, and empire.

    The role of stigma in writing charitable appeals

    Indiana University-Purdue University Indianapolis (IUPUI). This study investigated choices made by fundraisers when crafting appeals to unknown potential donors. Specifically, it asked if and how fundraisers' choices vary depending on whether they were raising money for a population that faced societal stigma. Research on fundraising often focuses on donor behavior without considering the type of beneficiary or the discretionary decisions made by fundraisers. This study drew on the literatures on stigma and on fundraising communication, and it employed mixed methods to explore the research question. The first part of the study used an online experimental survey in which 76 practicing fundraisers wrote an acquisition appeal letter for a nonprofit after random assignment to benefit either clients with mental illness (a stigmatized population) or older adults (a non-stigmatized population), then answered attitudinal questions about the beneficiary population. Participants believed individuals with mental illness were more stigmatized than older adults. Analysis of the letters using linguistic software showed that fundraisers used more humanizing language when writing about the non-stigmatized population than about the stigmatized population. Several aspects of the appeal letters, identified through existing theory, were examined but did not vary at statistically significant levels between the groups. Exploratory factor analysis showed several patterns of elements recurring within the letters. One of these patterns, addressing social expectations, varied significantly by client group.
In the second part of the study, semi-structured interviews with fifteen participants showed that writing for the stigmatized client population raised special concerns in communicating with potential donors: many interviewees described identifying client stories and evidence to justify helping stigmatized clients in a way that was not thought necessary for non-stigmatized clients. They also attempted to mitigate threatening stereotypes to maintain readers' comfort levels. Fundraisers regularly evaluated how readers were likely to think of different kinds of clients, and fundraisers' own implicit assumptions also came into play.

    Neural network computing using on-chip accelerators

    The use of neural networks, machine learning, or artificial intelligence, in its broadest and most controversial sense, has been a tumultuous journey involving three distinct hype cycles and a history dating back to the 1960s. Resurgent, enthusiastic interest in machine learning and its applications bolsters the case for machine learning as a fundamental computational kernel. Furthermore, researchers have demonstrated that machine learning can be used as an auxiliary component of applications to enhance or enable new types of computation, such as approximate computing or automatic parallelization. In our view, machine learning becomes not the underlying application but a ubiquitous component of applications. This view necessitates a different approach to deploying machine learning computation, one that spans not only the hardware design of accelerator architectures but also the user and supervisor software needed to enable safe, simultaneous use of machine learning accelerator resources. In this dissertation, we propose a multi-transaction model of neural network computation to meet the needs of future machine learning applications. We demonstrate that this model, encompassing a decoupled backend accelerator for inference and learning together with hardware and software for managing neural network transactions, can be realized with low overhead and integrated with a modern RISC-V microprocessor. Our extensions span user and supervisor software and data structures and, coupled with our hardware, enable multiple transactions from different address spaces to execute simultaneously, yet safely. Together, our system demonstrates the utility of a multi-transaction model for increasing energy efficiency and improving overall accelerator throughput for machine learning applications.