
    The Archive of Unrealised Devices

    Google Patents is an eight-year-old searchable online database of United States Patent and Trademark Office (USPTO) and European Patent Office (EPO) patents, with US patents dating back to 1790. This archive of invention, novelty and innovation is a valuable tool for designers and researchers. As a point of departure for recent art-based research, I mine the Google Patents database as a creative practitioner. As an artist-hacker, the found material used in my research arises from patent searches for fantastical machines and devices developed to assist with swimming, dating from the 1870s to the early twentieth century. The retrieved patents, etched drawings and accompanying information evidence an understanding of a new sport at particular moments in time. However, almost all of these patents remained ‘unrealised’, contained only within the drawing and text of the patent itself. These patents form the visual and conceptual basis for The Swimming Machine Archive (2014), a growing body of collages featuring fictional devices for moving through water

    Detecting Internet visual plagiarism in higher education photography with Google™ Search by Image : proposed upload methods and system evaluation

    Thesis (M. Tech. (Design and Studio Art)) - Central University of Technology, Free State, 2014. The Information Age has presented those in the discipline of photography with many advantages. Digital photographers enjoy all the perks of convenience while still producing high-quality images. Lecturers, meanwhile, find themselves the authorities of increasingly archaic knowledge in a perpetual race to keep up with technology. When inspiration becomes imitation and visual plagiarism occurs, lecturers may find themselves at a loss to take action, as content-based image retrieval systems like Google™ Search by Image (SBI) have not yet been systematically tested for the detection of visual plagiarism. No efficacious method is currently available to photography lecturers in higher education for detecting visual plagiarism. The aim of this study is therefore to ascertain the most effective uploading methods for, and the precision of, the Google™ SBI system, which lecturers can use to establish a systematic workflow to combat visual plagiarism in photography programmes. Images were selected from the Google™ Images database by means of random sampling and uploaded to Google™ SBI to determine whether the system could match the images to their Internet source. Each image received a black-and-white conversion, a contrast adjustment and a hue shift to ascertain whether the system could also match altered images. Composite images were compiled to establish whether the system could detect images from their salient features. Results were recorded and precision values calculated to determine the system's success rate and accuracy. The results were favourable: 93.25% of the adjusted images retrieved results, with a precision value of 0.96. The composite images had a success rate of 80% when uploaded intact with no dissections, and a perfect precision value of 1.00.
Google™ SBI can successfully be used by the photography lecturer as a functional visual plagiarism detection system to match images unethically appropriated by students from the Internet
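The success rates and precision values reported above come from standard retrieval metrics, which are simple to compute. A minimal sketch (the helper names and example data below are invented for illustration, not taken from the thesis):

```python
def precision(retrieved_flags):
    """Precision = correct matches / total results returned.
    `retrieved_flags` is a list of booleans, True where a returned
    result matches the uploaded image's true Internet source."""
    if not retrieved_flags:
        return 0.0
    return sum(retrieved_flags) / len(retrieved_flags)

def success_rate(queries):
    """Fraction of uploads that retrieved at least one correct match."""
    return sum(1 for flags in queries if any(flags)) / len(queries)

# Invented example: 4 of 5 altered images matched their source.
queries = [[True, True], [True], [False, False], [True, False], [True]]
print(success_rate(queries))            # 0.8
print(precision([True, True, False, True]))  # 0.75
```

Averaging such per-query precision values over all sampled images yields figures directly comparable to the 0.96 and 1.00 reported in the study.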

    Visual Image Recognition System with Object-Level Image Representation

    Ph.D. (Doctor of Philosophy)

    TOWARDS ATTRIBUTE-AWARE CROSS-DOMAIN IMAGE RETRIEVAL

    Ph.D. (Doctor of Philosophy)

    DEEP LEARNING FOR FASHION AND FORENSICS

    Deep learning is the new electricity: it has dramatically reshaped people's everyday lives. In this thesis, we focus on two emerging applications of deep learning - fashion and forensics. The ubiquity of online fashion shopping demands effective search and recommendation services for customers. To this end, we first propose an automatic spatially-aware concept discovery approach using weakly labeled image-text data from shopping websites. We first fine-tune GoogleNet by jointly modeling clothing images and their corresponding descriptions in a visual-semantic embedding space. Then, for each attribute (word), we generate its spatially-aware representation by combining its semantic word vector representation with its spatial representation derived from the convolutional maps of the fine-tuned network. The resulting spatially-aware representations are further used to cluster attributes into multiple groups to form spatially-aware concepts (e.g., the neckline concept might consist of attributes like v-neck, round-neck, etc.). Finally, we decompose the visual-semantic embedding space into multiple concept-specific subspaces, which facilitates structured browsing and attribute-feedback product retrieval by exploiting multimodal linguistic regularities. We conducted extensive experiments on our newly collected Fashion200K dataset, and results on clustering quality evaluation and the attribute-feedback product retrieval task demonstrate the effectiveness of our automatically discovered spatially-aware concepts. For fashion recommendation, we study two tasks: (i) suggesting an item that matches existing components in a set to form a stylish outfit (a collection of fashion items), and (ii) generating an outfit with multimodal (images/text) specifications from a user. To this end, we propose to jointly learn a visual-semantic embedding and the compatibility relationships among fashion items in an end-to-end fashion.
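The spatially-aware concept discovery described above can be illustrated with a toy sketch: each attribute receives a vector concatenating a semantic part with a spatial part, and attributes with similar vectors fall into one concept. All vectors, names, and the greedy threshold rule below are invented for illustration; the actual pipeline fine-tunes GoogleNet and clusters real embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy spatially-aware representations: a 2-d semantic word vector
# concatenated with a 2-d spatial activation centroid. All values
# are invented for illustration.
attributes = {
    "v-neck":     [0.9, 0.1, 0.2, 0.5],
    "round-neck": [0.8, 0.2, 0.2, 0.5],
    "maxi":       [0.1, 0.9, 0.9, 0.5],
}

def group(attrs, threshold=0.95):
    """Greedy grouping: an attribute joins the first concept whose
    representative (first member) is similar enough, otherwise it
    starts a new concept."""
    concepts = []
    for name, vec in attrs.items():
        for concept in concepts:
            if cosine(vec, attrs[concept[0]]) >= threshold:
                concept.append(name)
                break
        else:
            concepts.append([name])
    return concepts

print(group(attributes))  # [['v-neck', 'round-neck'], ['maxi']]
```

Here the two neckline attributes share both semantics and spatial location, so they form one concept, while "maxi" (semantically and spatially distinct) forms another.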
More specifically, we consider a fashion outfit to be a sequence (usually from top to bottom and then accessories) and each item in the outfit as a time step. Given the fashion items in an outfit, we train a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones to learn their compatibility relationships. Further, we learn a visual-semantic space by regressing image features to their semantic representations, aiming to inject attribute and category information as a regularization for training the LSTM. The trained network can not only perform the aforementioned recommendations effectively but also predict the compatibility of a given outfit. We conduct extensive experiments on our newly collected Polyvore dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods. In addition to searching and recommendation, customers would also like to try on fashion items virtually. We present an image-based Virtual Try-On Network (VITON) without using 3D information in any form, which seamlessly transfers a desired clothing item onto the corresponding region of a person using a coarse-to-fine strategy. Conditioned upon a new clothing-agnostic yet descriptive person representation, our framework first generates a coarse synthesized image with the target clothing item overlaid on that same person in the same pose. We further enhance the initial blurry clothing area with a refinement network. The network is trained to learn how much detail to utilize from the target clothing item, and where to apply it on the person, in order to synthesize a photo-realistic image in which the target item deforms naturally with clear visual patterns. Experiments on our newly collected dataset demonstrate its promise in the image-based virtual try-on task over state-of-the-art generative models. Interestingly, VITON can be modified to swap faces instead of swapping clothing items.
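The sequential outfit-compatibility scoring described above can be sketched with a toy stand-in: instead of a Bi-LSTM over visual embeddings, a bigram model over item categories, scored in both directions. The outfits, categories, and scoring rule below are invented for illustration only.

```python
import math
from collections import defaultdict

# Toy stand-in for the Bi-LSTM: bigram transition counts over item
# categories, learned from a few hypothetical outfits. The real model
# predicts the next item's visual embedding; here we only predict
# categories to illustrate the sequential scoring idea.
training_outfits = [
    ["top", "skirt", "heels", "bag"],
    ["top", "jeans", "sneakers", "bag"],
    ["blouse", "skirt", "heels", "bag"],
]

def transition_counts(outfits):
    counts = defaultdict(lambda: defaultdict(int))
    for outfit in outfits:
        for prev, nxt in zip(outfit, outfit[1:]):
            counts[prev][nxt] += 1
    return counts

fwd = transition_counts(training_outfits)
bwd = transition_counts([o[::-1] for o in training_outfits])

def prob(counts, prev, nxt):
    total = sum(counts[prev].values())
    if total == 0:
        return 1e-9
    return max(counts[prev][nxt] / total, 1e-9)  # floor avoids log(0)

def compatibility(outfit):
    """Average log-probability of each transition, scored in both
    directions as in a bidirectional model."""
    pairs = list(zip(outfit, outfit[1:]))
    f = sum(math.log(prob(fwd, p, n)) for p, n in pairs)
    b = sum(math.log(prob(bwd, n, p)) for p, n in pairs)
    return (f + b) / (2 * len(pairs))

# A coherent outfit scores higher than an implausible one.
print(compatibility(["top", "skirt", "heels", "bag"]) >
      compatibility(["top", "sneakers", "heels", "bag"]))  # True
```

As in the thesis, the same sequence score serves double duty: ranking candidate next items for recommendation and judging whole-outfit compatibility.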
Conditioned on the landmarks of a face, generative adversarial networks can synthesize a target identity onto the original face while keeping the original facial expression. We achieve this by introducing an identity-preserving loss together with a perceptually-aware discriminator. The identity-preserving loss encourages the synthesized face to present the same identity as the target, while the perceptually-aware discriminator ensures the generated face looks realistic. It is worth noting that these face-swap techniques can easily be used to manipulate people's faces, which might have serious social and political consequences. Researchers have developed powerful tools to detect these manipulations. In this dissertation, we utilize convolutional neural networks to boost the detection accuracy of tampered faces or people in images. Firstly, a two-stream network is proposed to determine if a face has been tampered with. We train a GoogLeNet to detect tampering artifacts in a face classification stream, and train a patch-based triplet network to leverage features capturing local noise residuals and camera characteristics as a second stream. In addition, we use two different online face-swapping applications to create a new dataset that consists of 2010 tampered images, each of which contains a tampered face. We evaluate the proposed two-stream network on our newly collected dataset. Experimental results demonstrate the effectiveness of our method. Spliced people are also very common in image manipulation. We describe a tampering detection system containing multiple modules, which model different aspects of tampering traces. The system first detects faces in an image. Then, for each detected face, it enlarges the bounding box to include a portrait image of that person. Three models are fused to detect whether this person (portrait) has been tampered with: (i) PortraitNet: A binary classifier fine-tuned on ImageNet pre-trained GoogLeNet.
(ii) SegNet: A U-Net predicts tampered masks and boundaries, followed by a LeNet that classifies, from the predicted masks and boundaries, whether the image has been tampered with. (iii) EdgeNet: A U-Net predicts the edge mask of each portrait, and the extracted portrait edges are fed into a GoogLeNet for tampering classification. Experiments show that these three models are complementary and can be fused to effectively detect a spliced portrait image
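The three-model fusion can be sketched as late fusion over per-model tampering scores. The scores and the weighted-average rule below are assumptions for illustration; the dissertation states only that the three models are fused.

```python
# Hypothetical per-portrait tampering scores from the three modules
# (higher = more likely tampered); only the module names follow the
# text, the values and the fusion rule are invented for illustration.
scores = {"PortraitNet": 0.82, "SegNet": 0.67, "EdgeNet": 0.74}

def fuse(model_scores, weights=None, threshold=0.5):
    """Late fusion: weighted average of the detectors' scores,
    thresholded into a tampered / not-tampered decision."""
    if weights is None:
        weights = {name: 1.0 for name in model_scores}
    total = sum(weights[name] for name in model_scores)
    fused = sum(model_scores[name] * weights[name]
                for name in model_scores) / total
    return fused, fused >= threshold

fused_score, tampered = fuse(scores)
print(round(fused_score, 2), tampered)  # 0.74 True
```

Weighting lets a stronger module (say, PortraitNet) dominate the decision; with equal weights this reduces to a plain average of the three probabilities.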

    The development of clothing concepts in response to analysis of changing gendered social attitudes

    The relationship between gender and clothing has been widely discussed by theorists, and fashion collections illustrate this thinking. This study aimed to address one area of this relationship by conducting practice-based research to develop garments for women who wear men’s clothing, responding to real insights from the women themselves through qualitative interviews. The study sought to understand why women choose to wear men’s clothing and to use this to question gender assignment in clothing, in order to develop design concepts for clothing for this specific group of women. This interdisciplinary practice-based study combined phenomenological thinking and practice with theory to engage more deeply with why women choose to wear men’s clothing. The Victorian square-cut shirt became pivotal to the process of design and accorded with preferences for large shapes and interesting proportion. The Pit brow study highlighted how historical gender roles can aid in the understanding of gendered clothing now. Surveys asking about the gendered perception of clothing on and off the body found, significantly, that clothing is perceived differently when not on a body. Qualitative interviews were conducted with 10 women answering a call for women who wear men’s clothing. Experimental design concepts were developed through combining research inputs, culminating in the production of a selection of garments. The practice found that space between the body and clothing provides feelings of well-being, through comfort, space and coverage of the body. This study contributes to knowledge of garment design practice by recording and analysing the complex thinking behind garment design for women who wear men’s clothing for fashion. Experimental responsive making can create new and effective design methods through an intra-active relationship with fabric and an openness to the haphazard.
The process of research and design combined with theory has defined preferences for the development of clothing for this group of women. The conceptual model, Women’s clothing preferences: wellbeing in relation to gender and body image, records the final preferences and is a resource for future use in the design of clothing for all people. Gender assignment in clothing from the perspective of the viewer was found to be variable and influenced by personal and situational aspects, which were changeable. For the women participants, gendered clothing for the wearer was found to be selected primarily on the merit of wear properties. Women who wear men’s clothing do not wish to be defined by their gendered body, but by a sense of who they are

    Understanding and Supporting Visual Communication within Costume Design Practice

    Theatres provide artistic value to many people and generate revenue for communities, yet little research has been conducted to understand or support theatrical designers. Over 1,800 non-profit theatres and 3,522 theatre companies and dinner theatres operate in the United States. In 2008, 11 million people attended 1,587 Broadway shows for a total gross of 894 million dollars. These numbers do not take into account college and community theatres, operas, and ballets, all of which also require costumes. This dissertation studied image search, selection, and use within costume design practice to: 1) understand how image use as a collaborative visual communication tool affects the search and selection process and 2) assist an often overlooked community. Previous research in image search and selection has focused on specific resources or institutions. In contrast, this research used case study methodology to understand image search, selection, and use within the broad context of an image-intensive process. The researcher observed costume designers and other theatre members as they located, selected, shared, discussed, and modified images through an iterative design process resulting in a final set of images, the costumes themselves. The researcher also interviewed participants throughout the design process, photographed artifacts, and conducted a final interview with participants at the end of each case study. The resulting data were coded using grounded theory, guided by previous research. Based on the analysis, the researcher suggests a three-stage model that describes image use in costume design and provides a starting point for understanding image use in other collaborative design practices. Participants used a wide range of analog and digital resources, including personal and institutional collections, but often used the same three search and selection strategies regardless of the resource type.
Set building and refinement, image comparison, and tagging were all important features of the image search and selection process but are not well supported in most image search systems. In addition, participants continuously added resources to personal collections for future use on individual productions. This research set out to understand search and selection within the context of collaborative use on a single production, but what became apparent was the central nature of collaboration across productions to the search and selection process itself. Personal networks between costume designers and within the theatre community played a central role in solving challenges costume designers encounter as part of their work. This research bridges a gap in current image research by placing image search and selection within the context of a collaborative design practice. At the same time, it suggests guidelines for developing technology to support a community which has long been overlooked. With additional research, the findings from this research can be extended to apply to the theatrical community as a whole and also to other design professionals