2,013 research outputs found

    Modeling Sketching Primitives to Support Freehand Drawing Based on Context Awareness

    Get PDF
    Freehand drawing is an easy and intuitive way to externalize and capture thinking. Sketch-based interfaces, however, lack support for natural sketching with drawing cues, such as overlapping, overlooping, and hatching, which occur frequently with physical pen and paper. In this paper, we analyze some characteristics of drawing cues in sketch-based interfaces and describe the different types of sketching primitives. An improved sketch information model is presented, whose aim is to represent and record design thinking during the freehand drawing process with individuality and diversity. A context-based interaction model is developed that can guide and support the development of new sketch-based interfaces. New applications with different context contents can be easily derived from it and developed further. Our approach supports the tasks that are common across applications, requiring the designer to provide support only for the application-specific tasks. It is capable of and applicable to modeling various sketching interfaces and applications. Finally, we illustrate the general operations of the system through examples in different applications.
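    The abstract does not specify how sketching primitives and context would be represented in code; a minimal, hypothetical sketch of one possible organization (all class and field names are illustrative, not taken from the paper) might look like:

    ```python
    from dataclasses import dataclass, field
    from enum import Enum, auto

    class CueType(Enum):
        """Drawing cues the abstract mentions as common with pen and paper."""
        OVERLAPPING = auto()
        OVERLOOPING = auto()
        HATCHING = auto()

    @dataclass
    class Stroke:
        points: list  # (x, y, timestamp) samples captured from the pen

    @dataclass
    class SketchPrimitive:
        strokes: list      # the raw strokes making up this primitive
        cue: CueType       # the drawing cue the strokes exhibit

    @dataclass
    class SketchContext:
        """Application-specific context from which new interfaces derive."""
        application: str
        primitives: list = field(default_factory=list)

        def add(self, primitive):
            self.primitives.append(primitive)

    # Record one hatching primitive in a hypothetical diagram-editor context.
    ctx = SketchContext(application="diagram-editor")
    ctx.add(SketchPrimitive(
        strokes=[Stroke(points=[(0, 0, 0.0), (1, 1, 0.1)])],
        cue=CueType.HATCHING,
    ))
    print(len(ctx.primitives))  # 1
    ```

    The separation of generic primitives from an application-specific context mirrors the paper's claim that common tasks are handled once while designers supply only application-specific parts.
    
    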

    Heart Failure Monitoring System Based on Wearable and Information Technologies

    Get PDF
    In Europe, Cardiovascular Diseases (CVD) are the leading cause of death, responsible for 45% of all deaths. Heart Failure, the paradigmatic CVD, mainly affects people older than 65. Against the backdrop of an aging society, the European MyHeart Project was created with the mission of empowering citizens to fight CVD by leading a preventive lifestyle and enabling diagnosis at an early stage. This paper presents the development of a Heart Failure Management System, based on daily monitoring of Vital Body Signals with wearable and mobile technologies, for the continuous assessment of this chronic disease. The System makes use of the latest technologies for monitoring heart condition, both wearable garments (e.g. for measuring ECG and respiration) and portable devices (such as a weight scale and a blood pressure cuff), both with Bluetooth capabilities.
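    The abstract describes daily monitoring of vital signals feeding a continuous assessment, without giving the assessment logic. A purely illustrative sketch of such a daily check is shown below; the signal names and thresholds are hypothetical placeholders, not clinical values from the MyHeart project:

    ```python
    # Hypothetical daily-assessment sketch: names and thresholds are
    # illustrative only, not the project's actual clinical criteria.
    def assess_day(readings):
        """readings: dict mapping a signal name to its value for one day."""
        alerts = []
        if readings.get("weight_gain_kg", 0) > 2.0:    # rapid weight gain can signal fluid retention
            alerts.append("weight")
        if readings.get("systolic_bp", 120) > 140:     # elevated systolic blood pressure
            alerts.append("blood_pressure")
        if readings.get("resting_hr", 70) > 100:       # resting tachycardia
            alerts.append("heart_rate")
        return alerts

    # One day's readings aggregated from the Bluetooth scale, cuff and garment.
    print(assess_day({"weight_gain_kg": 2.5, "systolic_bp": 150, "resting_hr": 72}))
    # ['weight', 'blood_pressure']
    ```

    In the system described, each reading would arrive from a different Bluetooth device (garment, scale, cuff) before being pooled into such a daily record.
    
    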

    A Generative Model of People in Clothing

    Full text link
    We present the first image-based generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generation process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible.
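    The two-stage factorization described above (latent code → segmentation → image) can be sketched schematically. The stand-in functions below are toy deterministic placeholders for what are, in the paper, learned neural networks; all names and the 3-label palette are assumptions for illustration:

    ```python
    import random

    random.seed(0)

    # Stage 1 (hypothetical stand-in): map a latent code to a per-pixel
    # semantic segmentation (0 = background, 1 = skin, 2 = clothing).
    def generate_segmentation(z, size=4):
        return [[z[(i * size + j) % len(z)] % 3 for j in range(size)]
                for i in range(size)]

    # Stage 2 (hypothetical stand-in): a conditional model that paints
    # each segment with a colour depending only on its label.
    PALETTE = {0: (255, 255, 255), 1: (224, 172, 105), 2: (30, 60, 150)}

    def render(segmentation):
        return [[PALETTE[label] for label in row] for row in segmentation]

    z = [random.randrange(10) for _ in range(8)]   # latent code
    image = render(generate_segmentation(z))        # 4x4 RGB "image"
    ```

    The point of the factorization is that stage 1 absorbs the variance in pose and shape, so the stage-2 conditional model only has to map segment layouts to appearance.
    
    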

    SENS: Sketch-based Implicit Neural Shape Modeling

    Full text link
    We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches, including those of an abstract nature. Our method allows users to quickly and easily sketch a shape, and then maps the sketch into the latent space of a part-aware neural implicit shape architecture. SENS analyzes the sketch and encodes its parts into ViT patch encodings, then feeds them into a transformer decoder that converts them to shape embeddings suitable for editing 3D neural implicit shapes. SENS not only provides intuitive sketch-based generation and editing, but also excels in capturing the intent of the user's sketch to generate a variety of novel and expressive 3D shapes, even from abstract sketches. We demonstrate the effectiveness of our model compared to the state-of-the-art using objective metric evaluation criteria and a decisive user study, both indicating strong performance on sketches with a medium level of abstraction. Furthermore, we showcase its intuitive sketch-based shape editing capabilities. Comment: 18 pages, 18 figures.
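    The sketch → ViT patches → transformer decoder → part embeddings pipeline can be illustrated purely in terms of data shapes. The functions below are not the SENS networks; they are shape-level stand-ins with assumed patch size and embedding dimensions:

    ```python
    # Illustrative pipeline shapes only; in SENS both steps are learned networks.
    def to_patches(sketch, patch=2):
        """Split a square sketch raster into flattened, non-overlapping
        patches, ViT-style."""
        n = len(sketch)
        patches = []
        for i in range(0, n, patch):
            for j in range(0, n, patch):
                patches.append([sketch[i + di][j + dj]
                                for di in range(patch) for dj in range(patch)])
        return patches

    def decode_to_embeddings(patches, num_parts=3, dim=4):
        """Stand-in for the transformer decoder: emit one embedding
        per shape part (here a trivial deterministic function)."""
        return [[float(sum(p)) for p in patches[:dim]] for _ in range(num_parts)]

    sketch = [[1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]]
    patches = to_patches(sketch)              # 4 patches of 4 values each
    embeddings = decode_to_embeddings(patches)  # one embedding per part
    ```

    In the actual method, the per-part embeddings would then index into the latent space of the part-aware neural implicit shape architecture for generation or editing.
    
    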

    Multi-modal Embedding Fusion-based Recommender

    Full text link
    Recommendation systems have recently gained widespread adoption, primarily in online interaction systems and with a significant focus on e-commerce platforms. We have developed a machine learning-based recommendation platform that can be easily applied to almost any domain of items and/or actions. Contrary to existing recommendation systems, our platform natively supports multiple types of interaction data with multiple modalities of metadata. This is achieved through multi-modal fusion of various data representations. We deployed the platform in multiple e-commerce stores of different kinds, e.g. food and beverages, shoes, fashion items, and telecom operators. Here, we present our system, its flexibility and performance. We also show benchmark results on open datasets that significantly outperform state-of-the-art prior work. Comment: 7 pages, 8 figures.
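    The abstract does not detail the fusion operator; one simple form that the phrase "multi-modal fusion of various data representations" could denote is concatenating per-modality embeddings into a single item vector and scoring it against a user vector. The sketch below is a hypothetical illustration of that idea, not the platform's actual architecture:

    ```python
    # Hypothetical fusion: concatenate per-modality embeddings
    # (e.g. interaction history, text metadata, image metadata).
    def fuse(modalities):
        fused = []
        for vec in modalities:
            fused.extend(vec)
        return fused

    def score(user_vec, item_vec):
        """Relevance as a dot product between user and fused item vectors."""
        return sum(u * v for u, v in zip(user_vec, item_vec))

    item = fuse([[0.2, 0.1], [0.5], [0.3, 0.0, 0.4]])  # 3 modalities -> 6-dim vector
    user = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
    print(round(score(user, item), 2))  # 0.7
    ```

    Concatenation is only the simplest fusion choice; learned projections or attention over modalities are common alternatives when modalities differ in reliability.
    
    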

    Psychological challenges for the analysis of style.

    Get PDF
    This article remains the copyright of Cambridge University Press. The definitive version of this article can be found at: http://dx.doi.org/10.1017/S089006040606015X. Analyses of styles in design have paid little attention to how people see style, and how designers use perceptions of style to guide designing. While formal and computational methods for analysing styles and generating designs provide impressively parsimonious accounts of what some styles are, they do not address many of the factors that influence how humans understand styles. The subtlety of human style judgements raises challenges for computational approaches to style. This paper differentiates between a range of distinct meanings of 'style', and explores how designers and ordinary people learn and apply perceptual similarity classes and style concepts in different situations to interpret and create designed artefacts. A range of psychological evidence indicates that style perception is dependent on knowledge, and involves the interaction of perceptual recognition of style features and explanatory inference processes that create a coherent understanding of an object as an exemplar of a style. This paper concludes by outlining how formal style analyses can be used in combination with psychological research to develop a fuller understanding of style perception and creative design.