191 research outputs found

    Salvador Dali: Design for the Theater

    Get PDF

    Leveraging Non-Volatile Memory in Modern Storage Management Architectures

    Get PDF
    Non-volatile memory technologies (NVM) introduce a novel class of devices that combine characteristics of both storage and main memory. Like storage, NVM is not only persistent, but also denser and cheaper than DRAM. Like DRAM, NVM is byte-addressable and has much lower access latency than traditional storage. In recent years, NVM has gained a lot of attention both in academia and in the data management industry, with views ranging from skepticism to over-excitement. Some critics claim that NVM is neither cheap enough to replace flash-based SSDs nor fast enough to replace DRAM, while others see it simply as a storage device. Supporters of NVM have observed that its low latency and byte-addressability require radical changes and a complete rewrite of storage management architectures. This thesis takes a moderate stance between these two views. We consider that, while NVM might not replace flash-based SSDs or DRAM in the near future, it has the potential to reduce the gap between them. Furthermore, treating NVM as a regular storage medium does not fully leverage its byte-addressability and low latency. On the other hand, completely redesigning systems to be NVM-centric is impractical. Proposals that attempt to leverage NVM to simplify storage management result in completely new architectures that face the same challenges already well understood and addressed by traditional architectures. Therefore, we take three common storage management architectures as a starting point and propose incremental changes that enable them to better leverage NVM. First, in the context of log-structured merge-trees, we investigate the impact of storing data in NVM, and devise methods to enable small-granularity accesses and NVM-aware caching policies. Second, in the context of B+Trees, we propose to extend the buffer pool and describe a technique based on the concept of optimistic consistency to handle corrupted pages in NVM. Third, we employ NVM to enable larger capacity and reduced costs in an index+log key-value store, and combine it with other techniques to build a system that achieves low tail latency. 
    This thesis aims to describe and evaluate these techniques in order to enable storage management architectures to leverage NVM and achieve increased performance and lower costs, without major architectural changes.

    Contents:
    1 Introduction: 1.1 Non-Volatile Memory; 1.2 Challenges; 1.3 Non-Volatile Memory & Database Systems; 1.4 Contributions and Outline
    2 Background: 2.1 Non-Volatile Memory (2.1.1 Types of NVM; 2.1.2 Access Modes; 2.1.3 Byte-addressability and Persistency; 2.1.4 Performance); 2.2 Related Work; 2.3 Case Study: Persistent Tree Structures (2.3.1 Persistent Trees; 2.3.2 Evaluation)
    3 Log-Structured Merge-Trees: 3.1 LSM and NVM; 3.2 LSM Architecture (3.2.1 LevelDB); 3.3 Persistent Memory Environment; 3.4 2Q Cache Policy for NVM; 3.5 Evaluation (3.5.1 Write Performance; 3.5.2 Read Performance; 3.5.3 Mixed Workloads); 3.6 Additional Case Study: RocksDB (3.6.1 Evaluation)
    4 B+Trees: 4.1 B+Tree and NVM (4.1.1 Category #1: Buffer Extension; 4.1.2 Category #2: DRAM Buffered Access; 4.1.3 Category #3: Persistent Trees); 4.2 Persistent Buffer Pool with Optimistic Consistency (4.2.1 Architecture and Assumptions; 4.2.2 Embracing Corruption); 4.3 Detecting Corruption (4.3.1 Embracing Corruption); 4.4 Repairing Corruptions; 4.5 Performance Evaluation and Expectations (4.5.1 Checksums Overhead; 4.5.2 Runtime and Recovery); 4.6 Discussion
    5 Index+Log Key-Value Stores: 5.1 The Case for Tail Latency; 5.2 Goals and Overview; 5.3 Execution Model (5.3.1 Reactive Systems and Actor Model; 5.3.2 Message-Passing Communication; 5.3.3 Cooperative Multitasking); 5.4 Log-Structured Storage; 5.5 Networking; 5.6 Implementation Details (5.6.1 NVM Allocation on RStore; 5.6.2 Log-Structured Storage and Indexing; 5.6.3 Garbage Collection; 5.6.4 Logging and Recovery); 5.7 Systems Operations; 5.8 Evaluation (5.8.1 Methodology; 5.8.2 Environment; 5.8.3 Other Systems; 5.8.4 Throughput Scalability; 5.8.5 Tail Latency; 5.8.6 Scans; 5.8.7 Memory Consumption); 5.9 Related Work
    6 Conclusion
    Bibliography
    A PiBenc
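    The optimistic-consistency technique for the persistent buffer pool is named only at a high level in this abstract. As a rough illustration of the underlying pattern it points to (detect corrupted, for example torn, pages in byte-addressable NVM via per-page checksums and repair them from another copy), here is a minimal sketch. The page layout, the use of CRC32, and all function names are illustrative assumptions, not the design evaluated in the thesis.

```python
import zlib

PAGE_SIZE = 4096       # illustrative page size
CHECKSUM_BYTES = 4     # CRC32 stored at the start of each page

def write_page(nvm: bytearray, offset: int, payload: bytes) -> None:
    """Write a page to an (emulated) NVM region, prefixing a CRC32 checksum."""
    assert len(payload) == PAGE_SIZE - CHECKSUM_BYTES
    crc = zlib.crc32(payload)
    nvm[offset:offset + CHECKSUM_BYTES] = crc.to_bytes(CHECKSUM_BYTES, "little")
    nvm[offset + CHECKSUM_BYTES:offset + PAGE_SIZE] = payload

def read_page(nvm: bytearray, offset: int) -> bytes:
    """Read a page optimistically and verify the checksum afterwards.

    A mismatch means the page is corrupted (for example a torn write) and must
    be repaired from another source, such as a log or a clean copy on SSD.
    """
    stored = int.from_bytes(nvm[offset:offset + CHECKSUM_BYTES], "little")
    payload = bytes(nvm[offset + CHECKSUM_BYTES:offset + PAGE_SIZE])
    if zlib.crc32(payload) != stored:
        raise IOError(f"corrupted page at offset {offset}")
    return payload

# Usage: a bytearray stands in for a memory-mapped NVM region.
nvm = bytearray(PAGE_SIZE)
write_page(nvm, 0, b"x" * (PAGE_SIZE - CHECKSUM_BYTES))
assert read_page(nvm, 0) == b"x" * (PAGE_SIZE - CHECKSUM_BYTES)
```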

    Art and the unconscious : a semiotic case study of the painting process

    Get PDF
    This dissertation is an attempt to design an interpretation model for the comprehension of unconscious content in artworks, as well as to find painting techniques to free the unconscious mind, allowing it to be expressed through artwork. The interpretation model, still in its infancy, is ripe for further development. The unconscious mind is a fascinating subject—in art production as well as in many scientific fields. This hidden part of the mind, being the source of creativity, constitutes an important foundation for many possible and valuable inquiries in multiple areas of knowledge. In the present study, the unconscious is approached from an art-educational perspective. The nature of the unconscious is addressed through the theories of Carl Gustav Jung and Charles Sanders Peirce, as well as through the information gained from data the author produced herself during the experimental painting process she devised for this study. For psychological distinctions not addressed by Jung, the theories of Sigmund Freud are used to forward this inquiry into the unconscious mind. A research method was created to bring Peirce’s theories into consonance with Jung’s amplification method. Since Peirce’s theories are challenging to read, to avoid misinterpretation, the author used Phyllis Chiasson’s 2001 book Peirce’s Pragmatism: The Design for Thinking as a secondary source. Peirce’s three modes of reality—firstness, secondness, and thirdness—were utilized to interpret artworks. This three-mode reality allows interpreters to reflect on their subjective feelings and then to compare them to collected data. The interpreters’ intuitive self-interpretations often correlate well with the more objective data. In this approach to interpretation, the work of art is seen as a sign, in the Jungian as well as in the Peircean sense, and interpretation seeks to discover a sign’s objects—icon, index, and symbol. Additionally, the objects are studied in combination with Peirce’s designation of the sign’s character elements—sinsign, qualisign, and legisign. Peirce’s theory offers a logical and productive structure for approaching a variety of signs and reaching a multiplicity of interpretations. Jungian theories inculcated a combined psychological and artistic perspective for the interpretation of artworks. Jung’s method of amplification is an effort to bring a symbol to life, and it is used as a technique to discover—through the seeking of parallels—a possible context for any unconscious content that an image might have. In amplification, a word or element—from a fantasy, dream, or, in this study, artwork—is associated, through use of what Jung called the active imagination, with another context where it also occurs. It must be remembered that unconscious images in artworks do not easily open themselves up for interpretation. One way to interpret possibly unconscious images is for the interpreter to become vulnerable by employing his or her own unconscious mind to interpret an artwork; such use of the active imagination can enable a subjective experience of the artwork on the part of the interpreter, who might thereby uncover unconscious content. Moreover, in this study, Jung’s theory of archetypes is employed, in parallel with Peirce’s and Jung’s theories of the sign, to illuminate an artwork’s images by connecting them with collective unconscious archetypes. 
The author relied upon The Book of Symbols: Reflections on Archetypal Images (Ronnberg and Martin 2010) as the main source for interpreting possibly unconscious elements in the artworks. This approach is especially powerful when artists interpret their own artwork—possibly leading to a galvanizing self-discovery as they revisit past encounters, personal highlights, and other pieces of unconscious content that might reveal previously unknown meaning important to their lives. By comparing archetypes to the unconscious content in their own lives, people can discover themselves. Unconscious phenomena were approached on both the theoretical and empirical levels. Different methods and ideas were used to stimulate the author’s unconscious thinking while performing artwork analyses of three paintings: surrealist Salvador Dalí’s (1904–1989) Assumpta Corpuscularia Lapislazulina; abstract expressionist Jackson Pollock’s (1912–1956) The Deep; and one painting by the author herself, for which the painting process was video-recorded (www.astagallery.com/academic.html). With regard to the third painting interpreted, the author is the study subject, and her artistic production is used as an opportunity to explore the unconscious mind. During the act of painting, an attempt is made to free unconscious thinking by fusing Dalí’s and Pollock’s methods as well as by testing multiple other methods. The author’s artistic production was combined with a technique called the verbal protocol method, which generates additional data not necessarily visible in the final artwork. This method unseals the artist’s tacit knowledge, which in normal circumstances remains silent. In the verbal protocol method, the author, while engaged in the act of painting, speaks aloud the stream of consciousness that accompanies and guides the art-making activity; the recorded and transcribed monologue from the artistic production is supplied, in both Finnish and English, in appendices. This thinking-aloud technique allows a person to become more self-aware and to create more solutions while struggling with emergent artistic problems. Such narratives can reveal more about the painting than the completed artwork alone can convey. Along with the artist’s finished painting and the video-recorded material, narratives produced during the painting activity were interpreted. Moreover, the discoveries arising from the author’s interpretation of her own artwork are correlated with some of the latest research on the unconscious. This study allows the reader-viewer an intimate glimpse into the author’s subjective painting experience and demonstrates the participation of the unconscious in an artwork’s creation. The interpretation methodology constitutes an interpretation model suitable for other artists and art educators to follow. Keywords: unconscious, art, archetype, mandal

    Insubordinate Costume

    Get PDF
    Working as a costume designer/maker, I became increasingly interested in the agency and power of costume and the different ways costumes can transform the performing body, override fixed boundaries and subvert the traditional hierarchies of the theatre, where the costume designer/maker is typically required to accommodate the wishes of the director or choreographer. The costumes in this study are the antitheses of subordinate costume, which is often dictated by practicalities, or placed within the confines of text, directorial notions, predefined choreography or the passive function of dressing actors. In this research, I examine historical and contemporary examples of scenographic costume: the type of costume that creates an almost complete stage environment by itself, simultaneously acting as costume, set and performance. With reference to theories of play and creativity, I explore the way costume can be used as a research tool and investigate how playing with my modular Insubordinate Costumes enables different creative interpretations and offers diverse dramaturgical possibilities. The term Insubordinate Costume evolved from my research and is used to reflect the defiant, rebellious and unruly nature of costume when it flouts practicalities and textual confines to embrace the role of protagonist. In order to explore the agency of my Insubordinate Costumes, I developed flat-pack modular pieces that can be constructed in different ways, and I organised workshops with both single performers and small groups in order to analyse a range of different approaches to performance making. Play is essential to the approach to these costumes, both in their playful essence and in the way the body interacts with them. Although the modular pieces are always the same, the resulting sculptural forms created by each performer have always been unique, as have their performances.

    Security protocols suite for machine-to-machine systems

    Get PDF
    Nowadays, the wide diffusion of advanced devices such as smartphones shows a growing trend to rely on new technologies to generate and/or support progress; society is clearly ready to trust next-generation communication systems to address today's economic and social concerns. The reason for this sociological change is that these technologies have been opened to all users, even those without specific knowledge of the field, and the introduction of new user-friendly applications has therefore appeared as a business opportunity and a key factor in increasing general cohesion among citizens. Among the actors of this technological evolution, wireless machine-to-machine (M2M) networks are becoming of great importance. These wireless networks are made up of interconnected low-power devices that are able to provide a great variety of services with little or even no user intervention. Examples of these services include fleet management, fire detection, utility consumption monitoring (water and energy distribution, etc.) and patient monitoring. However, since any emerging technology brings security threats that have to be faced, further studies are necessary to secure wireless M2M technology. In this context, the main threats are those related to attacks on service availability and on the privacy of both subscribers' and service providers' data. Taking into account the often limited hardware resources of M2M devices, ensuring the availability and privacy requirements across the range of M2M applications while minimizing the waste of valuable resources is even more challenging. Based on the above facts, this Ph.D. thesis aims at providing efficient security solutions for wireless M2M networks that effectively reduce the energy consumption of the network without affecting the overall security services of the system. With this goal, we first propose a coherent taxonomy of M2M networks that allows us to identify which security topics deserve special attention and which entities or specific services are particularly threatened. Second, we define an efficient secure data-aggregation scheme that is able to increase the network lifetime by optimizing the energy consumption of the devices. Third, we propose a novel physical authenticator, or frame checker, that minimizes communication costs over wireless channels and successfully withstands exhaustion attacks. Fourth, we study specific aspects of typical key management schemes to provide a novel protocol that ensures the distribution of secret keys for all the cryptographic methods used in this system. Fifth, we describe the collaboration with the WAVE2M community to define a frame format actually able to support the necessary security services, including the ones we have already proposed; WAVE2M was founded to promote the global use of an emerging wireless communication technology for ultra-low-power and long-range services. Sixth and finally, we provide an accurate analysis of privacy solutions that actually fit the requirements of M2M-network services. All the analyses in this thesis are corroborated by simulations that confirm significant improvements in efficiency while supporting the necessary security requirements for M2M networks.
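    The abstract does not detail the proposed physical authenticator / frame checker. As a simplified illustration of the general pre-check idea it alludes to (verify a short, cheap authenticator before spending energy on parsing or heavier processing, so that a flood of bogus frames cannot exhaust a constrained device), here is a minimal sketch. The truncated HMAC, the tag size, and all names are assumptions for illustration, not the thesis's mechanism.

```python
import hmac
import hashlib
from typing import Optional

TAG_BYTES = 4  # short, cheap-to-verify authenticator (illustrative size)

def tag_frame(key: bytes, frame: bytes) -> bytes:
    """Append a truncated HMAC tag to an outgoing frame."""
    tag = hmac.new(key, frame, hashlib.sha256).digest()[:TAG_BYTES]
    return frame + tag

def precheck_frame(key: bytes, data: bytes) -> Optional[bytes]:
    """Verify the short tag before any further processing of the frame.

    Frames with invalid tags are dropped immediately, limiting the resources
    an attacker can force the receiver to spend.
    """
    if len(data) <= TAG_BYTES:
        return None
    frame, tag = data[:-TAG_BYTES], data[-TAG_BYTES:]
    expected = hmac.new(key, frame, hashlib.sha256).digest()[:TAG_BYTES]
    if not hmac.compare_digest(tag, expected):
        return None  # drop silently; do not spend further resources
    return frame

# Usage
key = b"shared-link-key"
wire = tag_frame(key, b"sensor-reading:23.5C")
assert precheck_frame(key, wire) == b"sensor-reading:23.5C"
assert precheck_frame(key, b"forged or corrupted frame") is None
```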

    Digital light

    Get PDF
    Light symbolises the highest good, enables all visual art, and today lies at the heart of billion-dollar industries. The control of light forms the foundation of contemporary vision. Digital Light brings together artists, curators, technologists and media archaeologists to study the historical evolution of digital light-based technologies. Digital Light provides a critical account of the capacities and limitations of contemporary digital light-based technologies and techniques by tracing their genealogies and comparing them with their predecessor media. As digital light remediates multiple historical forms (photography, print, film, video, projection, paint), the collection draws from all of these histories, connecting them to the digital present and placing them in dialogue with one another. Light is at once universal and deeply historical. The invention of mechanical media (including photography and cinematography), allied with changing print technologies (half-tone, lithography), helped structure the emerging electronic media of television and video, which in turn shaped the bitmap processing and raster display of digital visual media. Digital light is, as Stephen Jones points out in his contribution, an oxymoron: light is photons, particulate and discrete, and therefore always digital. But photons are also waveforms, subject to manipulation in myriad ways. From Fourier transforms to chip design, colour management to the translation of vector graphics into arithmetic displays, light is constantly disciplined to human purposes. In the form of fibre optics, light is now the infrastructure of all our media; in urban plazas and handheld devices, screens have become ubiquitous, and also standardised. This collection addresses how this occurred, what it means, and how artists, curators and engineers confront and challenge the constraints of increasingly normalised digital visual media. While various art pieces and other content are considered throughout the collection, the focus is specifically on what such pieces suggest about the intersection of technique and technology. Including accounts by prominent artists and professionals, the collection emphasises the centrality of use and experimentation in the shaping of technological platforms. Indeed, a recurring theme is how techniques of previous media become technologies, inscribed in both digital software and hardware. Contributions include considerations of image-oriented software and file formats; screen technologies; projection and urban screen surfaces; histories of computer graphics, 2D and 3D image editing software, photography and cinematic art; and transformations of light-based art resulting from the distributed architectures of the internet and the logic of the database. Digital Light brings together high-profile figures in diverse but increasingly convergent fields, from Academy Award winner and Pixar co-founder Alvy Ray Smith to feminist philosopher Cathryn Vasseleu.

    Coherent Light from Projection to Fibre Optics

    Get PDF

    Fault-tolerant distributed transactions for partitioned OLTP databases

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 103-112). This thesis presents Dtxn, a fault-tolerant distributed transaction system designed specifically for building online transaction processing (OLTP) databases. Databases have traditionally been designed as general-purpose data processing tools. By being designed only for OLTP workloads, Dtxn can be more efficient. It is designed to support very large databases by partitioning data across a cluster of commodity servers in a data center. Combining multiple servers allows systems built with Dtxn to be cost-effective, highly available, scalable, and fault-tolerant. Dtxn provides three novel features. First, it provides reusable infrastructure for building a distributed OLTP database out of single-machine databases. This allows developers to take a specialized backend storage engine and use it across multiple machines, without needing to re-implement the distributed transaction infrastructure. We used Dtxn to build four different applications: a simple key/value store, a specialized TPC-C implementation, a main-memory OLTP database, and a traditional disk-based OLTP database. Second, Dtxn provides a novel concurrency control mechanism called speculative concurrency control, designed for main-memory OLTP workloads that are primarily composed of transactions with a single round of communication between the application and database. Speculative concurrency control executes one transaction at a time, with no concurrency control overhead. When a transaction may stall due to network communication, it speculatively executes future transactions. Our results show that this provides significantly better throughput than traditional two-phase locking, outperforming it by a factor of two on the TPC-C benchmark. Finally, Dtxn supports live migration, allowing part of the data on one server to be moved to another server while processing transactions. Our experiments show that our approach has nearly no visible impact on throughput or latency when moving data under moderate to high loads. It has significantly less impact than the best commercially available systems when the database is overloaded. The period of time during which throughput is reduced is less than half as long as when failing over to another replica or using virtual machine migration. By Evan Philip Charles Jones. Ph.D.
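    Speculative concurrency control is only summarised in the abstract above. The toy sketch below (invented names, not Dtxn's code) illustrates the core idea as described: transactions run one at a time with no locking, later transactions execute speculatively while a multi-partition transaction waits for its commit decision, and the speculated work is rolled back and re-executed if that transaction aborts.

```python
from copy import deepcopy

class SpeculativePartition:
    """Toy single-partition executor sketching speculative concurrency control."""

    def __init__(self):
        self.data = {}              # committed + pending state
        self.undo_snapshot = None   # state before the pending multi-partition txn
        self.speculative = []       # (txn, result) pairs run on uncommitted state

    def run(self, txn):
        """Run a transaction immediately; no locks, no concurrency control."""
        return txn(self.data)

    def begin_pending(self, txn):
        """Run a multi-partition transaction whose commit decision is still pending."""
        self.undo_snapshot = deepcopy(self.data)
        return txn(self.data)

    def run_speculative(self, txn):
        """Execute a later transaction on top of the uncommitted state."""
        self.speculative.append((txn, txn(self.data)))

    def resolve(self, commit: bool):
        """Coordinator decision arrives: release or undo the speculation."""
        if commit:
            results = [r for _, r in self.speculative]            # safe to report now
        else:
            self.data = self.undo_snapshot                        # cascading rollback
            results = [self.run(t) for t, _ in self.speculative]  # re-execute in order
        self.undo_snapshot, self.speculative = None, []
        return results

# Usage: a pending multi-partition debit, with a speculated local update on top.
p = SpeculativePartition()
p.run(lambda d: d.setdefault("a", 100))
p.begin_pending(lambda d: d.__setitem__("a", d["a"] - 10))    # outcome not yet known
p.run_speculative(lambda d: d.__setitem__("a", d["a"] + 1))
p.resolve(commit=True)
print(p.data)  # {'a': 91}
```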