790 research outputs found

    A survey of parallel algorithms for fractal image compression

    This paper presents a short survey of the key research work that has been undertaken in the application of parallel algorithms to fractal image compression. The interest in fractal image compression techniques stems from their ability to achieve high compression ratios whilst maintaining a very high quality in the reconstructed image. The main drawback of this compression method is the very high computational cost associated with the encoding phase. Consequently, there has been significant interest in exploiting parallel computing architectures in order to speed up this phase, whilst still maintaining the advantageous features of the approach. This paper presents a brief introduction to fractal image compression, including the iterated function system theory upon which it is based, and then reviews the different techniques that have been, and can be, applied in order to parallelize the compression algorithm.
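    As a rough illustration of the iterated function system (IFS) theory the abstract mentions, the Python sketch below samples the attractor of a small IFS via the "chaos game". The three affine maps (which generate the Sierpinski triangle) are illustrative only; they are not the encoder from any surveyed paper, where the task is the inverse problem of finding maps whose attractor approximates a given image.

```python
import random

# Three contractive affine maps forming an IFS whose attractor is the
# Sierpinski triangle. Fractal image coding inverts this idea: it searches
# for contractive maps whose attractor approximates a given image.
MAPS = [
    lambda x, y: (0.5 * x, 0.5 * y),
    lambda x, y: (0.5 * x + 0.5, 0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def chaos_game(n_points, seed=0):
    """Approximate the IFS attractor by repeatedly applying randomly
    chosen maps to the current point and recording where it lands."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n_points):
        f = rng.choice(MAPS)
        x, y = f(x, y)
        pts.append((x, y))
    return pts

points = chaos_game(10000)
```

Because every map is contractive, the iteration converges onto the same attractor regardless of the starting point; the costly encoding phase the survey discusses comes from searching for such maps rather than from iterating them.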

    Giving eyes to ICT!, or How does a computer recognize a cow?

    The system developed by Schouten and other researchers at CWI is based on describing images using fractal geometry. Human perception turns out to be so efficient partly because it relies heavily on similarities, so it is natural to look for mathematical methods that do the same. Schouten therefore investigated image coding using 'fractals'. Fractals are self-similar geometric figures, built up by repeated transformation (iteration) of a simple basic pattern, which thereby branches at ever smaller scales. At every level of detail a fractal resembles itself (the Droste effect). With fractals one can fairly easily create deceptively realistic depictions of nature. Fractal image coding assumes that the reverse also holds: an image can be stored effectively in the form of the basic patterns of a small number of fractals, together with the prescription for reconstructing the original image from them. The system developed at CWI in collaboration with researchers from Leuven is based in part on this method. ISBN 906196502

    Improve a technique for searching and indexing images utilizing content investigation

    In this research, algorithms were developed to assess the similarity between two or more images and to reduce the time spent searching for them, based on the analysis of color intensity diversity (histogram analysis) and of the partial proportions of the color components of the images. These algorithms can be used to search for images in databases, systems, and computer networks. The experiment was carried out using a computer program written in C Builder. The results were analyzed and illustrated with multiple examples. They showed that the time spent on the searching process depends on several criteria, the most important of which are the number of images used in the search and the number and method of processing of the images involved, in addition to the time taken by each of the three algorithms whose effectiveness was evaluated, identifying the best algorithm according to results and performance.
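    The paper's C Builder code is not reproduced here, but the histogram-analysis idea it describes can be sketched in a few lines of Python: build a normalized intensity histogram per image and score similarity by histogram intersection. The bin count and the intersection measure are illustrative choices, not necessarily those of the paper.

```python
def histogram(image, bins=16):
    """Normalized intensity histogram of an 8-bit grayscale image,
    given as a flat sequence of pixel values in 0..255."""
    counts = [0] * bins
    for v in image:
        counts[v * bins // 256] += 1
    total = len(image)
    return [c / total for c in counts]

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions,
    0.0 for distributions with no overlapping bins."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Two toy "images": one uniformly dark, one uniformly bright.
dark = [10] * 100
bright = [240] * 100
```

Comparing compact histograms instead of raw pixels is what makes this kind of search cheap: each database image is reduced to a fixed-length vector once, and queries only compare vectors.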

    Making visible the invisible through the analysis of acknowledgements in the humanities

    Purpose: Science is subject to a normative structure that includes how the contributions and interactions between scientists are rewarded. Authorship and citations have been the key elements within the reward system of science, whereas acknowledgements, despite being a well-established element in scholarly communication, have not received the same attention. This paper aims to put forward the bearing of acknowledgements in the humanities to bring to the foreground contributions and interactions that would otherwise remain invisible through traditional indicators of research performance. Design/methodology/approach: The study provides a comprehensive framework for understanding acknowledgements as part of the reward system, with a special focus on their value in the humanities as a reflection of intellectual indebtedness. The distinctive features of research in the humanities are outlined, and the role of acknowledgements as a source of contributorship information is reviewed to support these assumptions. Findings: Peer interactive communication is the prevailing support thanked in the acknowledgements of humanities papers, so the notion of acknowledgements as super-citations can make special sense in this area. Since single-authored papers still predominate as the publishing pattern in this domain, the study of acknowledgements might help to understand social interactions and intellectual influences that lie behind a piece of research and are not visible through authorship. Originality/value: Previous works have proposed and explored the prevailing acknowledgement types by domain. This paper focuses on the humanities to show the role of acknowledgements within the reward system and to highlight publication patterns and inherent research features which make acknowledgements particularly interesting in the area as a reflection of the socio-cognitive structure of research.

    Computer Vision for Timber Harvesting


    Comparison between Impact factor, SCImago journal rank indicator and Eigenfactor score of nuclear medicine journals

    Despite its widespread acceptance in the scientific world, the impact factor (IF) has recently been criticized on many accounts, including lack of quality assessment of the citations, influence of self-citation, and English-language bias. In the current study, we evaluated three indices of journal scientific impact for nuclear medicine journals: the IF, the Eigenfactor Score (ES), and the SCImago Journal Rank indicator (SJR). Overall, 13 nuclear medicine journals are indexed in ISI and SCOPUS and 7 in SCOPUS only. Self-citations, citations to non-English articles, citations to non-citable items, and citations to review articles contribute very prominently to the IFs of some journals, which can to some extent be better detected by ES and SJR. Given the several shortcomings of the IF, considering all three indices when judging the quality of nuclear medicine journals would be a better strategy.

    Speeding up the cyclic edit distance using LAESA with early abandon

    The cyclic edit distance between two strings is the minimum edit distance between one of these strings and every possible cyclic shift of the other. This can be useful, for example, in image analysis, where strings describe the contours of shapes, or in computational biology, for classifying circularly permuted proteins or circular DNA/RNA molecules. The cyclic edit distance can be computed in O(mn log m) time; however, in real recognition tasks this is still a high computational cost because of the size of the databases. A method that reduces the number of comparisons and avoids an exhaustive search is therefore desirable. In this work, we present a new algorithm based on a modification of LAESA (linear approximating and eliminating search algorithm) that applies pruning to the computation of distances. It is an efficient procedure for classification and retrieval of cyclic strings. Experimental results show that our proposal considerably outperforms LAESA. Work partially supported by the Spanish Government (TIN2010-18958) and the Generalitat Valenciana (PROMETEOII/2014/062).
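    To make the two ingredients of the abstract concrete, here is a minimal Python sketch of the cyclic edit distance with a simple early-abandon rule: the row minima of the edit-distance table never decrease, so a rotation can be discarded as soon as every entry in the current row exceeds the best distance found so far. This naive O(m·mn) scan over rotations is for illustration only; it is neither the O(mn log m) algorithm nor the LAESA-based method of the paper.

```python
def edit_distance(a, b, bound=None):
    """Classic dynamic-programming edit distance. If `bound` is given,
    abandon early once every entry in a row exceeds it, since row minima
    are non-decreasing and the final value can only be larger."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        if bound is not None and min(cur) > bound:
            return bound + 1  # cannot improve on the best distance so far
        prev = cur
    return prev[-1]

def cyclic_edit_distance(a, b):
    """Minimum edit distance between `a` and every cyclic shift of `b`,
    pruning each shift against the best distance found so far."""
    best = None
    for k in range(max(1, len(b))):
        rotated = b[k:] + b[:k]
        d = edit_distance(a, rotated, bound=best)
        if best is None or d < best:
            best = d
    return best
```

The same "abandon when a lower bound exceeds the best candidate" principle is what LAESA exploits at the database level, using precomputed distances to pivots instead of partial DP rows.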

    Ranked Spatial-keyword Search over Web-accessible Geotagged Data: State of the Art

    Search engines, such as Google and Yahoo!, provide efficient retrieval and ranking of web pages based on queries consisting of a set of given keywords. Recent studies show that 20% of all Web queries also have location constraints, i.e., they also refer to the location of a geotagged web page. An increasing number of applications support location-based keyword search, including Google Maps, Bing Maps, Yahoo! Local, and Yelp. Such applications depict points of interest on the map and combine their location with the keywords provided by the associated document(s). The posed queries consist of two conditions: a set of keywords and a spatial location. The goal is to find points of interest with these keywords close to the location. We refer to such a query as a spatial-keyword query. Moreover, mobile devices nowadays are enhanced with built-in GPS receivers, which permits applications (such as search engines or yellow-page services) to acquire the location of the user implicitly and provide location-based services. For instance, Google Mobile App provides a simple search service for smartphones where the location of the user is automatically captured and employed to retrieve results relevant to her current location. As an example, a search for "pizza" results in a list of pizza restaurants near the user. Given the popularity of spatial-keyword queries and their wide applicability in practical scenarios, it is critical to (i) establish mechanisms for efficient processing of spatial-keyword queries, and (ii) support more expressive query formulation by means of novel query types. Although studies on both keyword search and spatial queries do exist, the problem of combining the search capabilities of both simultaneously has received little attention.
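    The semantics of a basic spatial-keyword query can be sketched in a few lines of Python: filter points of interest (POIs) by the query keywords, then rank the survivors by distance to the query location. The POI data, names, and the linear scan are all hypothetical; the survey's subject is precisely the index structures that avoid scanning every POI.

```python
import math

# Hypothetical points of interest: (name, (x, y) location, keyword set).
POIS = [
    ("Mario's", (2.0, 1.0), {"pizza", "italian"}),
    ("Sushi Bar", (0.5, 0.5), {"sushi", "japanese"}),
    ("Slice House", (0.2, 0.3), {"pizza", "takeaway"}),
]

def spatial_keyword_query(location, keywords, pois, k=2):
    """Return the names of the top-k POIs that contain all query
    keywords, ordered from nearest to farthest from `location`."""
    matches = [p for p in pois if keywords <= p[2]]   # keyword filter
    matches.sort(key=lambda p: math.dist(location, p[1]))  # spatial rank
    return [name for name, _, _ in matches[:k]]

nearest = spatial_keyword_query((0.0, 0.0), {"pizza"}, POIS)
```

Efficient processing replaces both steps with a combined index (for example, spatial partitioning augmented with per-node keyword information), so that regions containing no matching keywords are never visited.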

    Information Forensics and Security: A quarter-century-long journey

    Information forensics and security (IFS) is an active R&D area whose goal is to ensure that people use devices, data, and intellectual property for authorized purposes and to facilitate the gathering of solid evidence to hold perpetrators accountable. For over a quarter century, since the 1990s, the IFS research area has grown tremendously to address the societal needs of the digital information era. The IEEE Signal Processing Society (SPS) has emerged as an important hub and leader in this area, and this article celebrates some landmark technical contributions. In particular, we highlight the major technological advances by the research community in some selected focus areas in the field during the past 25 years and present future trends.