
    Improving Shape Retrieval by Integrating AIR and Modified Mutual kNN Graph

    In computer vision, image retrieval remains a significant problem, and the recent resurgence of image retrieval relies not only on good feature representations but also on post-processing methods to improve accuracy. Our method addresses the shape retrieval of binary images. This paper proposes a new integration scheme that best utilizes feature representation along with contextual information. For feature representation we use the articulation-invariant representation (AIR); dynamic programming is then utilized for better shape matching, followed by a manifold-learning-based post-processing step, the modified mutual kNN graph, to further improve the similarity scores. We conducted extensive experiments on the widely used MPEG-7 database of shape images using the so-called bull's-eye score, with and without normalization of the modified mutual kNN graph, which clearly indicates the importance of normalization. Our method demonstrates better results than competing methods. We also compared computation time with another graph-transduction method, which shows that our method is computationally very fast. Furthermore, to show the consistency of the post-processing method, we also performed experiments on the challenging ORL and YALE face datasets and improved the baseline results.
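
    The bull's-eye score mentioned above is the standard MPEG-7 retrieval protocol: for each of the 1400 shapes (70 classes of 20), the 40 most similar shapes are retrieved and the correct-class hits are counted, so a perfect run collects 1400 × 20 hits. The sketch below computes that score from a precomputed dissimilarity matrix; it illustrates the evaluation protocol only, not the paper's retrieval method, and the toy data at the end are there just to show the call.

```python
import numpy as np

def bulls_eye_score(dist, labels, top_k=40):
    """Standard MPEG-7 bull's-eye retrieval score.

    dist   : (N, N) pairwise dissimilarity matrix (smaller = more similar)
    labels : (N,) class label per shape (MPEG-7: 70 classes x 20 shapes)
    top_k  : retrieval window (40 in the usual protocol)
    """
    labels = np.asarray(labels)
    n = dist.shape[0]
    per_class = np.bincount(labels).max()      # 20 for MPEG-7
    hits = 0
    for i in range(n):
        order = np.argsort(dist[i])[:top_k]    # the query itself is allowed to count
        hits += np.sum(labels[order] == labels[i])
    return hits / (n * per_class)              # 1.0 means a perfect bull's-eye

# Toy usage with random data (real use: plug in the method's similarity scores).
rng = np.random.default_rng(0)
feats = rng.normal(size=(140, 16))
labels = np.repeat(np.arange(7), 20)
dist = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
print(bulls_eye_score(dist, labels))
```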

    Shape similarity analysis by self-tuning locally constrained mixed-diffusion

    Similarity analysis is a powerful tool for shape matching/retrieval and other computer vision tasks. In the literature, various shape (dis)similarity measures have been introduced. Different measures specialize in different aspects of the data. In this paper, we consider the problem of improving retrieval accuracy by systematically fusing several different measures. To this end, we propose the locally constrained mixed-diffusion method, which partly fuses the given measures into one and propagates on the resulting locally dense data space. Furthermore, we advocate the use of self-adaptive neighborhoods to automatically determine the appropriate size of the neighborhoods in the diffusion process, with which the retrieval performance is comparable to the best manually tuned kNNs. The superiority of our approach is empirically demonstrated on both shape and image datasets. Our approach achieves a score of 100% in the bull's eye test on the MPEG-7 shape dataset, which is the best reported result to date.
    Lei Luo, Chunhua Shen, Chunyuan Zhang and Anton van den Hengel
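
    The abstract does not spell out the mixed-diffusion update itself, so the following is only a minimal sketch of the generic locally constrained diffusion idea this family of methods builds on: affinities are propagated exclusively through kNN-restricted transition matrices, and several fused measures can be fed in as a single averaged affinity matrix. Function names, the averaging step, and the parameter values are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def knn_transition(W, k):
    """Row-stochastic transition matrix restricted to each point's k nearest neighbours."""
    P = np.zeros_like(W)
    for i in range(W.shape[0]):
        nn = np.argsort(-W[i])[:k]          # keep the k strongest affinities
        P[i, nn] = W[i, nn]
    return P / P.sum(axis=1, keepdims=True)

def locally_constrained_diffusion(W, k=10, n_iter=20):
    """Generic locally constrained diffusion on an affinity matrix W (larger = more similar).

    Similarities are propagated only along kNN edges, the core idea behind
    diffusion-based retrieval post-processing.
    """
    P = knn_transition(W, k)
    A = W.copy()
    for _ in range(n_iter):
        A = P @ A @ P.T                     # diffuse through local neighbourhoods only
    return A

# Usage: fuse two affinity matrices (e.g. from two different shape measures) by
# averaging, diffuse, then rank retrievals by the diffused affinities.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
W1 = np.exp(-D / D.mean())
W2 = np.exp(-(D ** 2) / (D ** 2).mean())
A = locally_constrained_diffusion((W1 + W2) / 2, k=8)
```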

    Principles and Applications of Data Science

    Data science is an emerging multidisciplinary field that lies at the intersection of computer science, statistics, and mathematics, with a wide range of applications and close ties to data mining, deep learning, and big data. This Special Issue on “Principles and Applications of Data Science” focuses on the latest developments in the theories, techniques, and applications of data science. The topics include data cleansing, data mining, machine learning, and deep learning, as well as applications in medicine, healthcare, and social media.

    A survey, review, and future trends of skin lesion segmentation and classification

    The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently shown increasing interest in developing such CAD systems, with the intention of providing dermatologists with a user-friendly tool that reduces the challenges associated with manual inspection. This article aims to provide a comprehensive literature survey and review of a total of 594 publications (356 on skin lesion segmentation and 238 on skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding the development of CAD systems. These ways include: relevant and essential definitions and theories; input data (dataset utilization, preprocessing, augmentation, and handling of class imbalance); method configuration (techniques, architectures, module frameworks, and losses); training tactics (hyperparameter settings); and evaluation criteria. We also investigate a variety of performance-enhancing approaches, including ensembling and post-processing, and discuss these dimensions to reveal their current trends based on utilization frequencies. In addition, we highlight the primary difficulties of evaluating skin lesion segmentation and classification systems using minimal datasets, as well as potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.

    Deep Learning Methods for Remote Sensing

    Remote sensing is a field in which important physical characteristics of an area are extracted using emitted radiation, generally captured by satellite cameras, sensors onboard aerial vehicles, etc. The captured data help researchers develop solutions for sensing and detecting various characteristics such as forest fires, flooding, changes in urban areas, crop diseases, soil moisture, etc. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches and led to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in a setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal to be considered.
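
    As a purely illustrative aside (not the paper's protocol), the claim that statistical tools measure how effective the recognition phase is can be pictured as scoring the robot's decoded gaze signals against the signals the human intended to convey; the signal names and the short log below are hypothetical.

```python
import numpy as np

# Hypothetical recognition log: for each exchange, the signal the human intended
# to convey through gaze and the signal the robot's recognition phase decoded.
SIGNALS = ["mutual_gaze", "gaze_aversion", "joint_attention"]

intended   = ["mutual_gaze", "joint_attention", "gaze_aversion", "mutual_gaze"]
recognized = ["mutual_gaze", "joint_attention", "mutual_gaze", "mutual_gaze"]

def recognition_report(intended, recognized, signals):
    """Accuracy and confusion matrix of the recognition phase (a simple effectiveness measure)."""
    idx = {s: i for i, s in enumerate(signals)}
    conf = np.zeros((len(signals), len(signals)), dtype=int)
    for t, p in zip(intended, recognized):
        conf[idx[t], idx[p]] += 1
    accuracy = np.trace(conf) / conf.sum()
    return accuracy, conf

acc, conf = recognition_report(intended, recognized, SIGNALS)
print(f"recognition accuracy: {acc:.2f}")
print(conf)
```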

    A Novel Machine Learning Classifier Based on a Qualia Modeling Agent (QMA)

    This dissertation addresses a problem found in supervised machine learning (ML) classification: the target variable, i.e., the variable a classifier predicts, has to be identified before training begins and cannot change during training and testing. This research develops a computational agent that overcomes this problem. The Qualia Modeling Agent (QMA) is modeled after two cognitive theories: Stanovich's tripartite framework, which proposes that learning results from interactions between conscious and unconscious processes, and the Integrated Information Theory (IIT) of Consciousness, which proposes that the fundamental structural elements of consciousness are qualia. By modeling the informational relationships of qualia, the QMA allows for retaining and reasoning over data sets in a non-ontological, non-hierarchical qualia space (QS). This novel computational approach supports concept drift by allowing the target variable to change ad infinitum without re-training, while achieving classification accuracy comparable to or greater than that of benchmark classifiers. Additionally, the research produced a functioning model of Stanovich's framework and a computationally tractable working solution for a representation of qualia which, when exposed to new examples, is able to match the causal structure and generate new inferences.
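
    The abstract does not describe the internals of the qualia space, so the sketch below only illustrates the behavioural property being claimed, namely that the target variable can be chosen at query time without re-training. A stored-instance memory with a mismatch-count nearest-neighbour lookup stands in for the QMA; the class name and the toy records are hypothetical.

```python
class RetargetableClassifier:
    """Instance memory that lets the target attribute change at query time.

    This is NOT the QMA itself, only an illustration of 'the target variable
    can change without re-training': all attributes are stored symmetrically,
    and any one of them can be predicted from the others.
    """

    def __init__(self, records):
        self.records = records                  # list of dicts: attribute -> value

    def predict(self, query, target):
        """Predict `target` for `query` (a dict of known attributes) by nearest stored record."""
        def distance(rec):
            shared = [a for a in query if a in rec and a != target]
            return sum(rec[a] != query[a] for a in shared)   # simple mismatch count
        best = min((r for r in self.records if target in r), key=distance)
        return best[target]

# Usage: the same stored data answers questions about different targets.
data = [
    {"colour": "red",    "shape": "round", "fruit": "apple"},
    {"colour": "yellow", "shape": "long",  "fruit": "banana"},
    {"colour": "red",    "shape": "long",  "fruit": "chilli"},
]
clf = RetargetableClassifier(data)
print(clf.predict({"colour": "yellow", "shape": "long"}, target="fruit"))   # banana
print(clf.predict({"fruit": "apple"}, target="colour"))                     # red
```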

    A Framework for the Semantics-aware Modelling of Objects

    The evolution of 3D visual content calls for innovative methods for modelling shapes based on their intended usage, function and role in a complex scenario. Even if different attempts have been made in this direction, shape modelling still mainly focuses on geometry. However, 3D models have a structure, given by the arrangement of salient parts, and shape and structure are deeply related to semantics and functionality. Changing geometry without semantic clues may invalidate such functionalities or the meaning of objects or their parts. We approach the problem by considering semantics as the formalised knowledge related to a category of objects; the geometry can vary provided that the semantics is preserved. We represent the semantics and the variable geometry of a class of shapes through the parametric template: an annotated 3D model whose geometry can be deformed provided that some semantic constraints remain satisfied. In this work, we design and develop a framework for the semantics-aware modelling of shapes, offering the user a single application environment where the whole workflow of defining the parametric template and applying semantics-aware deformations can take place. In particular, the system provides tools for the selection and annotation of geometry based on formalised contextual knowledge; shape analysis methods to derive new knowledge implicitly encoded in the geometry, and possibly enrich the given semantics; a set of constraints that the user can apply to salient parts; and a deformation operation that takes the semantic constraints into account and provides an optimal solution. The framework is modular so that new tools can be continuously added. While producing some innovative results in specific areas, the goal of this work is the development of a comprehensive framework combining state-of-the-art techniques and new algorithms, thus enabling the user to conceptualise her/his knowledge and model geometric shapes. The original contributions regard the formalisation of the concept of annotation, with attached properties, and of the relations between significant parts of objects; a new technique for guaranteeing the persistence of annotations after significant changes in a shape's resolution; the exploitation of shape descriptors for the extraction of quantitative information and the assessment of shape variability within a class; and the extension of the popular cage-based deformation techniques to include constraints on the allowed displacement of vertices. In this thesis, we report the design and development of the framework as well as results in two application scenarios, namely product design and archaeological reconstruction.
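
    Of the contributions listed above, the extension of cage-based deformation is the most self-contained to illustrate. The sketch below shows plain cage-based deformation (each mesh vertex is a fixed affine combination of cage vertices via precomputed generalized barycentric coordinates) with a simple displacement bound standing in for a semantic constraint; the constraint formulation actually used in the thesis is not given in the abstract, so this is an assumption-laden illustration only.

```python
import numpy as np

def deform_with_cage(coords, cage_new, rest_vertices=None, max_disp=None):
    """Cage-based deformation: each mesh vertex is a fixed affine combination of cage vertices.

    coords        : (n, c) generalized barycentric coordinates (rows sum to 1),
                    precomputed once for the rest pose, e.g. mean-value coordinates
    cage_new      : (c, d) cage vertices after the user's edit
    rest_vertices : (n, d) rest-pose vertices, only needed for the constraint check
    max_disp      : optional displacement bound standing in for a semantic constraint
    """
    deformed = coords @ cage_new                 # the classic cage-based mapping
    if max_disp is not None and rest_vertices is not None:
        disp = np.linalg.norm(deformed - rest_vertices, axis=1)
        if np.any(disp > max_disp):
            raise ValueError("edit rejected: a constrained vertex would move too far")
    return deformed

# Toy 2D usage: a unit-square cage with one mesh vertex at its centre.
cage_rest = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
coords = np.array([[0.25, 0.25, 0.25, 0.25]])    # centre = equal weights of the corners
rest = coords @ cage_rest
cage_new = cage_rest * [1.5, 1.0]                # stretch the cage horizontally
print(deform_with_cage(coords, cage_new, rest, max_disp=0.5))   # centre moves to (0.75, 0.5)
```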

    Simple identification tools in FishBase

    Simple identification tools for fish species were included in the FishBase information system from its inception. Early tools made use of the relational model and characters like fin ray meristics. Soon pictures and drawings were added as an additional aid, similar to a field guide. Later came the computerization of existing dichotomous keys, again in combination with pictures and other information, and the ability to restrict possible species by country, area, or taxonomic group. Today, www.FishBase.org offers four different ways to identify species. This paper describes these tools with their advantages and disadvantages, and suggests various options for further development. It explores the possibility of a holistic and integrated computer-aided strategy.
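
    The early relational tools described above can be pictured as simple filters over characters such as fin ray counts, optionally restricted by country. The sketch below uses hypothetical records and field names, not the real FishBase schema, purely to illustrate that filtering idea.

```python
# Minimal sketch of character-based filtering in the spirit of FishBase's early tools.
# The records and field names here are hypothetical, not the real FishBase schema.
species = [
    {"name": "Species A", "dorsal_soft_rays": (10, 12), "anal_soft_rays": (8, 9),  "countries": {"PH", "ID"}},
    {"name": "Species B", "dorsal_soft_rays": (13, 15), "anal_soft_rays": (9, 11), "countries": {"PH"}},
    {"name": "Species C", "dorsal_soft_rays": (10, 11), "anal_soft_rays": (7, 8),  "countries": {"BR"}},
]

def identify(dorsal_rays=None, anal_rays=None, country=None):
    """Return candidate species whose meristic ranges and distribution match the observation."""
    def in_range(value, lo_hi):
        return value is None or lo_hi[0] <= value <= lo_hi[1]
    return [
        s["name"] for s in species
        if in_range(dorsal_rays, s["dorsal_soft_rays"])
        and in_range(anal_rays, s["anal_soft_rays"])
        and (country is None or country in s["countries"])
    ]

print(identify(dorsal_rays=11, country="PH"))   # ['Species A']
```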