
    Post-Myriad Genetics Copyright of Synthetic Biology and Living Media

    This Article addresses copyright as a viable form of intellectual property protection for living, organic creations of science and art. The United States Supreme Court's decision in Association for Molecular Pathology v. Myriad Genetics, Inc. narrowed patent-eligible protection over living components of humans or other organisms. Synthetic biologists are expected to look with renewed focus on copyright law for the intellectual property protection of biological creations. The contribution of this Article is to reveal that the same issues are raised with regard to the copyrightability of the works of synthetic biology as are raised by the pictorial, graphic, and sculptural arts that use and produce living media as their works. The current contours of copyrightability present four identical questions that are particularly relevant to, and difficult to answer in, the context of science and art that purports to create works of living media:
    * Is living media copyrightable subject matter?
    * What is authorship (or who is an author) of living media?
    * What does it mean to create a fixed and tangible work of living media?
    * What constitutes an original creation of living media under the originality doctrines of merger and scènes à faire?
    This Article provides an analytical framework for rethinking the contours of copyright so as to answer these questions, comparing contemporary scientific methods of creation with artistic methods in order to determine the copyright narratives and metaphors of subject matter, authorship, creation, and originality that best address the concerns underlying these four questions and allow copyright protection over these works.

    2016 - The Twenty-first Annual Symposium of Student Scholars

    The full program book from the Twenty-first Annual Symposium of Student Scholars, held on April 21, 2016. Includes abstracts from the presentations and posters.

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities that support designers in putting together, that is, phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that, in many commercial design tools, require menus and tool palettes, techniques originally designed for the mouse rather than for pen and touch. We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe the forms of interaction that emerge and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and which hand, is touching, to support phrasing together direct-touch interactions on large displays. From the design and development of the environment and from both field and controlled studies, we derive a set of methods, based on human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.
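
    To make the idea of quasi-modal phrasing concrete, the sketch below shows one way pen and touch events could be routed so that a non-dominant-hand touch temporarily holds a mode while the pen acts within it, avoiding menus and tool palettes. This is an illustrative Python sketch under assumed names (BimanualDispatcher, the edge regions, and the mode names are hypothetical), not the dissertation's implementation.

```python
# Illustrative sketch only: a non-dominant-hand touch "holds" a mode
# while the dominant hand acts with the pen. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class BimanualDispatcher:
    current_mode: str = "ink"                     # default pen behavior
    held_modes: list = field(default_factory=list)

    def on_touch_down(self, region: str) -> None:
        # A touch on a mode region (e.g., an edge strip) pushes a temporary mode.
        mode_by_region = {"left_edge": "select", "bottom_edge": "layer"}
        if region in mode_by_region:
            self.held_modes.append(mode_by_region[region])

    def on_touch_up(self) -> None:
        # Releasing the touch restores the previous mode (quasi-modal phrasing).
        if self.held_modes:
            self.held_modes.pop()

    def on_pen_stroke(self, stroke: str) -> str:
        # The pen always acts in whatever mode the non-dominant hand is holding.
        mode = self.held_modes[-1] if self.held_modes else self.current_mode
        return f"apply {mode} operation to stroke {stroke}"

dispatcher = BimanualDispatcher()
dispatcher.on_touch_down("left_edge")    # non-dominant hand holds "select"
print(dispatcher.on_pen_stroke("s1"))    # pen stroke selects instead of inking
dispatcher.on_touch_up()                 # releasing the touch returns to inking
```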

    Multimodal and Embodied Learning with Language as the Anchor

    Since most worldly phenomena can be expressed via language, language is a crucial medium for transferring information and integrating multiple information sources. For example, humans can describe what they see, hear, and feel, and can explain how they move with words. Conversely, humans can imagine scenes, sounds, and feelings, and move their bodies from language descriptions. Therefore, language plays an important role in solving machine learning (ML) and artificial intelligence (AI) problems with multimodal input sources. This thesis studies how different modalities can be integrated with language in multimodal learning settings, as follows. First, we explore integrating external information from textual descriptions of an image into a visual question answering system, which incorporates the key words/phrases of paragraph captions in semi-symbolic form to make the alignment between features easier. We then extend this direction to a video question answering task: we employ dense captions, which generate object-level descriptions of an image, to help localize the key frames in a video clip for answering a question. Next, we build benchmarks to evaluate embodied agents performing tasks according to natural language instructions from humans. We introduce a new instruction-following navigation and object assembly system, called ArraMon, in which agents follow natural language instructions to collect an object and put it in a target location, requiring agents to deeply understand referring expressions and the concept of direction from the egocentric perspective. We also suggest a new task setup for the useful Cooperative Vision-and-Dialog Navigation (CVDN) dataset: we analyze the scoring behavior of models, identify issues with the existing Navigation from Dialog History (NDH) task, and propose a more realistic and challenging task setup, called NDH-Full, which better reflects the purpose of the CVDN dataset. Finally, we explore AI assistant systems that help humans with different tasks. We introduce a new correctional captioning dataset on human body pose, called FixMyPose, to encourage the ML/AI community to build guidance systems that require models to distinguish different levels of pose difference in order to describe a desirable pose change. We also introduce a new conversational image search and editing assistant system, called CAISE, in which an agent helps a user search images and edit them by holding a conversation.
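
    As a concrete illustration of the dense-caption idea, the sketch below scores each frame's object-level captions against the question and keeps the best-matching frames. It is a toy Python example with hypothetical data and function names (keyword_overlap, select_key_frames), not the model described in the thesis, which learns the alignment rather than counting word overlap.

```python
# Toy illustration: pick key frames for video QA by matching question words
# against each frame's dense captions. Data and names are hypothetical.
from collections import Counter

def keyword_overlap(question: str, captions: list[str]) -> int:
    """Count shared (lowercased) words between the question and a frame's dense captions."""
    q_words = Counter(question.lower().split())
    c_words = Counter(" ".join(captions).lower().split())
    return sum(min(q_words[w], c_words[w]) for w in q_words)

def select_key_frames(question: str, frame_captions: dict[int, list[str]], k: int = 3) -> list[int]:
    """Return the indices of the k frames whose dense captions best match the question."""
    ranked = sorted(frame_captions, key=lambda i: keyword_overlap(question, frame_captions[i]), reverse=True)
    return ranked[:k]

frame_captions = {
    0: ["a man holding a red cup", "a wooden table"],
    1: ["a dog running on grass"],
    2: ["a man pouring coffee into a red cup"],
}
print(select_key_frames("what is the man pouring into the cup?", frame_captions, k=2))
```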

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and AR complements VR in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, although it is not free of human-factors constraints and other restrictions. AR also demands less time and effort in application development, because it is not necessary to construct the entire virtual scene and environment. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medical and biological applications and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data

    The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained from observations made with our primary senses alone, often because their originating cause is too small, too far away, or otherwise obstructed; in other words, it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research has to be conducted in order to improve our understanding of even the most basic effects. In this thesis, we present our solutions to three challenging problems in visual computing in which a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We extract the latent information using non-linear and discrete optimization methods based on physically motivated models and on computer graphics methodology such as ray tracing, real-time transient rendering, and image-based rendering.
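
    The common pattern behind such methods is to pose extraction as an inverse problem: simulate the observation with a physically motivated forward model and optimize the latent parameters until simulation and measurement agree. The Python sketch below illustrates that pattern with a deliberately simple toy forward model and nonlinear least squares; it is a generic, assumed example, not the dissertation's actual models or pipeline.

```python
# Generic inverse-problem illustration (not the dissertation's pipeline):
# recover latent parameters by fitting a physically motivated forward model
# to noisy measurements with nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, t):
    """Toy forward model: an exponentially decaying signal, standing in for an image-formation model."""
    amplitude, decay = params
    return amplitude * np.exp(-decay * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 100)
true_params = (2.0, 0.7)
observed = forward_model(true_params, t) + 0.05 * rng.standard_normal(t.size)

def residuals(params):
    # The optimizer minimizes the mismatch between simulation and observation.
    return forward_model(params, t) - observed

fit = least_squares(residuals, x0=[1.0, 1.0])
print("recovered parameters:", fit.x)   # close to the true (2.0, 0.7)
```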

    "IT LOOKS LIKE SOUND!" : DRAWING A HISTORY OF "ANIMATED MUSIC" IN THE EARLY TWENTIETH CENTURY

    In the early 1930s, film sound technicians created completely synthetic sound by drawing or photographing patterns on the soundtrack area of the filmstrip. Several artists in Germany, Russia, England, and Canada used this innovation to write what came to be called "animated music" or "ornamental sound." It was featured in a few commercial and small artistic productions, was enthusiastically received by the public, and was heralded as the future of musical composition, one that could replace performers, scores, and abstract notation with a single system of graphic sound notation and mechanized playback. Its popularity in mainstream filmmaking did not last long, however, because its development remained limited. The artists drawing animated sound depended entirely on their technological medium, and when the sound-on-film system faded from popularity and production, so did their art. By examining, from a musicological perspective and for the first time, specific examples of animated music from the work of Norman McLaren, Oskar Fischinger, Rudolph Pfenninger, and several filmmakers in Russia, this thesis enumerates the techniques used in animated sound. It also explores the process of its creation, adaptation, and decline. In doing so, it reveals an important chapter in the little-known early history of modern synthesized sound, alongside the futuristic musical ideas it both answered and inspired.

    Remote Sensing

    This dual conception of remote sensing brought us to the idea of preparing two different books: in addition to the first book, which presents recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors, and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original, and representative contributions in those areas.