    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and in applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, by reviewing the literature and relating existing works through both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing. (Comment: 10 pages, 19 figures.)
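
    To make the survey's central idea concrete - aggregating information across a shape collection to analyse an individual shape - the sketch below ranks collection shapes by similarity to a query using a simple global point-cloud descriptor (a D2-style histogram of pairwise distances). The descriptor choice, the random stand-in shapes, and the function names are illustrative assumptions, not the specific methods reviewed in the report.

        import numpy as np

        def d2_descriptor(points, bins=32, samples=2000, rng=None):
            """Histogram of pairwise point distances: a simple global shape descriptor."""
            rng = np.random.default_rng(0) if rng is None else rng
            i = rng.integers(0, len(points), size=samples)
            j = rng.integers(0, len(points), size=samples)
            d = np.linalg.norm(points[i] - points[j], axis=1)
            hist, _ = np.histogram(d / (d.max() + 1e-9), bins=bins, range=(0, 1), density=True)
            return hist

        def most_similar(query_points, collection):
            """Rank shapes in the collection by descriptor distance to the query."""
            q = d2_descriptor(query_points)
            scored = [(name, np.linalg.norm(q - d2_descriptor(pts))) for name, pts in collection.items()]
            return sorted(scored, key=lambda item: item[1])

        # Random stand-in shapes; a real collection would hold sampled meshes or scans.
        collection = {f"shape_{k}": np.random.rand(500, 3) for k in range(5)}
        print(most_similar(np.random.rand(500, 3), collection)[:3])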

    Data mining of commodity proposals based on context recommendations

    Internet technologies are an integral part of the relationships that arise in modern society. The rapid adoption and convenience of electronic marketplaces drive the projected growth in demand for IT products for recommender systems. The article examines various limitations of current recommendation methods and discusses possible extensions that can improve their recommendation capabilities and make them more valuable for a wide range of applications. These extensions include improved modeling of users and items, incorporation of contextual information into the recommendation process, support for multi-criteria ratings, and more flexible yet less intrusive types of recommendations. An important role is played by integration that supports all aspects of e-commerce, from transaction execution to supply-chain support, which simplifies document workflow and increases the benefit to participants. The aim of this work is to perform analytical processing of trading-platform data based on contextual recommendations, to provide objective analysis, and to monitor current business activity on the trading platform. The task of producing various analytical reports is considered; such reports allow participants in the market of IT products for recommender systems to analyze market developments objectively and in a timely manner and to identify existing and projected trends. Intelligent analytical services are built to attract additional or qualitatively new market players and to generate additional profit. Fundamentally new Data Mining technologies are appropriate for this processing, as they yield qualitatively valuable data: Data Mining is a technology for discovering non-obvious, objective, and practically useful patterns in large volumes of data. When integrating the relevant information technology to develop a commodity-proposals environment, it is therefore necessary to consider the personalization requirements of the proposal to ensure that the technology achieves its intended result. This study therefore applies context-aware technology and recommendation algorithms to develop a system that realizes personalized goals in a context-aware manner and improves the effectiveness of commodity proposals. Offering context-aware and personalized information requires intelligent processing techniques. Different initiatives considering many contexts have been proposed, but user preferences need to be learned to offer contextualized and personalized services, products, or information. Therefore, this paper proposes an agent-based architecture for context-aware and personalized event recommendation based on an ontology and the spreading activation algorithm. The ontology defines the domain knowledge model, while the spreading activation algorithm learns user patterns by discovering user interests. Statistical observation also shows a high level of agreement with the system among both end-user and expert participants.
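
    The spreading-activation step mentioned above can be illustrated with a minimal sketch: interest observed for a few seed concepts is propagated along the weighted links of a toy ontology graph, and the resulting activation scores rank concepts for recommendation. The concept names, link weights, decay and threshold values are hypothetical placeholders rather than the paper's actual ontology or parameters.

        # Spreading activation over a small weighted concept graph (toy ontology).
        graph = {
            "laptop":      {"electronics": 0.9, "accessories": 0.4},
            "electronics": {"smartphone": 0.7, "laptop": 0.9},
            "accessories": {"laptop_bag": 0.8},
            "smartphone":  {"electronics": 0.7},
            "laptop_bag":  {"accessories": 0.8},
        }

        def spread_activation(graph, seeds, decay=0.6, threshold=0.05, iterations=3):
            """Propagate interest from seed concepts along weighted ontology links."""
            activation = dict(seeds)  # concept -> interest inferred from observed behaviour
            for _ in range(iterations):
                updates = {}
                for node, value in activation.items():
                    for neighbour, weight in graph.get(node, {}).items():
                        pulse = value * weight * decay
                        if pulse > threshold:
                            updates[neighbour] = updates.get(neighbour, 0.0) + pulse
                for node, value in updates.items():
                    activation[node] = max(activation.get(node, 0.0), value)
            return sorted(activation.items(), key=lambda kv: -kv[1])

        # A user who browsed laptops ends up with related concepts ranked for recommendation.
        print(spread_activation(graph, {"laptop": 1.0}))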

    Graph-based segmentation and scene understanding for context-free point clouds

    The acquisition of 3D point clouds representing the surface structure of real-world scenes has become common practice in many areas, including architecture, cultural heritage and urban planning. Improvements in sample acquisition rates and precision are contributing to an increase in the size and quality of point cloud data. The management of these large volumes of data is quickly becoming a challenge, leading to the design of algorithms intended to analyse and decrease the complexity of this data. Point cloud segmentation algorithms partition point clouds for better management, and scene understanding algorithms identify the components of a scene in the presence of considerable clutter and noise. In many cases, segmentation algorithms operate within the remit of a specific context, wherein their effectiveness is measured. Similarly, scene understanding algorithms depend on specific scene properties and fail to identify objects in a number of situations. This work addresses this lack of generality in current segmentation and scene understanding processes, and proposes methods for point clouds acquired using diverse scanning technologies in a wide spectrum of contexts. The approach to segmentation proposed by this work partitions a point cloud with minimal information, abstracting the data into a set of connected segment primitives to support efficient manipulation. A graph-based query mechanism is used to express further relations between segments and provide the building blocks for scene understanding. The presented method for scene understanding is agnostic of scene-specific context and supports both supervised and unsupervised approaches. In the former, a graph-based object descriptor is derived from a training process and used in object identification; the latter approach applies pattern matching to identify regular structures. A novel external-memory algorithm based on a hybrid spatial subdivision technique is introduced to handle very large point clouds and accelerate the computation of the k-nearest-neighbour function. Segmentation has been successfully applied to extract segments representing geographic landmarks and architectural features from a variety of point clouds, whereas scene understanding has been successfully applied to indoor scenes on which other methods fail. The overall results demonstrate that the context-agnostic methods presented in this work can be successfully employed to manage the complexity of ever-growing repositories.
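
    As a rough illustration of how spatial subdivision speeds up the k-nearest-neighbour queries that such segmentation relies on, the sketch below buckets points into a uniform in-memory grid and searches only the query's cell and its 26 neighbours. It is a simplified stand-in, not the external-memory hybrid subdivision introduced in this work; the cell size and random data are arbitrary, and the result is approximate if fewer than k candidates fall within the searched cells.

        import numpy as np
        from collections import defaultdict

        def build_grid(points, cell):
            """Bucket point indices into a uniform grid keyed by integer cell coordinates."""
            grid = defaultdict(list)
            for idx, p in enumerate(points):
                grid[tuple((p // cell).astype(int))].append(idx)
            return grid

        def k_nearest(points, grid, cell, query, k=8):
            """Gather candidates from the query's cell and its neighbours, then rank by distance."""
            cx, cy, cz = (query // cell).astype(int)
            candidates = []
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        candidates.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
            candidates = np.array(candidates)
            dists = np.linalg.norm(points[candidates] - query, axis=1)
            return candidates[np.argsort(dists)[:k]]

        points = np.random.rand(10000, 3)
        grid = build_grid(points, cell=0.05)
        print(k_nearest(points, grid, 0.05, points[0], k=8))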

    Browse-to-search

    This demonstration presents a novel interactive online shopping application based on visual search technologies. When users want to buy something on a shopping site, they usually need to look up related information on other web sites, forcing them to switch between the page being browsed and other websites that provide search results. The proposed application enables users to naturally search for products of interest while they browse a web page, satisfying even casual purchase intent with little effort. The interactive shopping experience is characterized by: 1) in session - users specify purchase intent within the browsing session, instead of leaving the current page and navigating to other websites; 2) in context - the browsed web page provides implicit context information that helps infer user purchase preferences; 3) in focus - users specify their search interest with gestures on touch devices and do not need to formulate queries in a search box; 4) natural-gesture inputs and visual search provide users with a natural shopping experience. The system is evaluated against a data set consisting of several million commercial product images.
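
    A toy sketch of the underlying visual-search loop: the region a user selects with a gesture is reduced to a feature vector and matched against an index of product-image features. Colour histograms and histogram-intersection scoring are used purely as stand-ins for the production features and ranking, and the SKU names and image sizes are invented for illustration.

        import numpy as np

        def colour_histogram(image, bins=8):
            """Toy visual feature: a normalised joint RGB histogram of an image region."""
            h, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
            h = h.ravel()
            return h / (h.sum() + 1e-9)

        def search(region, product_index, top_k=5):
            """Rank catalogue images by histogram-intersection similarity to the selected region."""
            q = colour_histogram(region)
            scores = {pid: np.minimum(q, feat).sum() for pid, feat in product_index.items()}
            return sorted(scores, key=scores.get, reverse=True)[:top_k]

        # Build an index from stand-in product images, then query with a cropped region.
        rng = np.random.default_rng(0)
        product_index = {f"sku_{i}": colour_histogram(rng.integers(0, 256, (64, 64, 3)))
                         for i in range(100)}
        region = rng.integers(0, 256, (48, 48, 3))
        print(search(region, product_index))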

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. As data are required to build machine learning networks, sensors are one of the most important enabling technologies. In addition, machine learning networks can contribute to improvements in sensor performance and to the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer-vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, camera calibration of intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest growing stem volume estimation, road management, image denoising, and touchscreens.

    Spline-based dense medial descriptors for lossy image compression

    Medial descriptors are of significant interest for image simplification, representation, manipulation, and compression. On the other hand, B-splines are well-known tools for specifying smooth curves in computer graphics and geometric design. In this paper, we integrate the two by modeling medial descriptors with stable and accurate B-splines for image compression. Representing medial descriptors with B-splines not only greatly improves compression but also yields an effective vector representation of raster images. A comprehensive evaluation shows that our Spline-based Dense Medial Descriptors (SDMD) method achieves much higher compression ratios at similar or even better quality than the well-known JPEG technique. We illustrate our approach with applications in generating super-resolution images and salient-feature-preserving image compression.
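
    A minimal sketch of the core idea of representing a medial descriptor with a B-spline: a noisy sampled medial branch (positions plus medial radii) is replaced by a compact smoothing spline, from which the branch can be densely reconstructed. SciPy's generic parametric spline fitting is used as a stand-in, so this is not the SDMD pipeline; the synthetic branch and the smoothing factor are assumptions.

        import numpy as np
        from scipy.interpolate import splev, splprep

        # A noisy sampled medial branch: (x, y) positions plus the medial radius at each sample.
        t = np.linspace(0, np.pi, 80)
        x, y = 100 * np.cos(t), 60 * np.sin(t)
        r = 5 + 2 * np.sin(3 * t)
        noise = np.random.default_rng(1).normal(0, 0.5, size=(3, t.size))

        # Fit one smoothing B-spline jointly to position and radius; the handful of knots
        # and coefficients replaces the dense per-pixel skeleton samples.
        tck, _ = splprep([x + noise[0], y + noise[1], r + noise[2]], s=len(t) * 0.5)
        knots, coeffs, degree = tck
        print(f"{t.size} samples -> {len(coeffs[0])} spline coefficients per channel")

        # Reconstruct the branch densely from the compact spline representation.
        xs, ys, rs = splev(np.linspace(0, 1, 200), tck)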

    Deep Learning for Free-Hand Sketch: A Survey

    Free-hand sketches are highly illustrative and have been widely used by humans to depict objects or stories from ancient times to the present. The recent prevalence of touchscreen devices has made sketch creation much easier than ever and has consequently made sketch-oriented applications increasingly popular. The progress of deep learning has immensely benefited free-hand sketch research and applications. This paper presents a comprehensive survey of the deep learning techniques oriented at free-hand sketch data, and the applications that they enable. The main contents of this survey include: (i) a discussion of the intrinsic traits and unique challenges of free-hand sketch, to highlight the essential differences between sketch data and other data modalities, e.g., natural photos; (ii) a review of the developments of free-hand sketch research in the deep learning era, surveying existing datasets, research topics, and state-of-the-art methods through a detailed taxonomy and experimental evaluation; (iii) promotion of future work via a discussion of bottlenecks, open problems, and potential research directions for the community. (Comment: This paper is accepted by IEEE TPAMI.)
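
    One intrinsic trait the survey highlights is that sketches are sparse, sequential stroke data rather than dense pixel grids. The sketch below shows a generic preprocessing step that converts polyline strokes into (dx, dy, pen-lifted) triplets suitable as input to sequence models; the toy strokes and format details are illustrative and not tied to any specific surveyed method.

        import numpy as np

        def strokes_to_sequence(strokes):
            """Convert polyline strokes into (dx, dy, pen_lifted) rows for sequence models."""
            points, pen = [], []
            for stroke in strokes:
                points.extend(stroke)
                pen.extend([0] * (len(stroke) - 1) + [1])  # 1 marks the end of a stroke
            points = np.asarray(points, dtype=np.float32)
            deltas = np.diff(points, axis=0, prepend=points[:1])
            return np.column_stack([deltas, pen])

        # Two strokes of a toy sketch; each row of the result feeds one RNN/Transformer step.
        strokes = [[(0, 0), (10, 0), (10, 10)], [(2, 2), (8, 2)]]
        print(strokes_to_sequence(strokes))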