
    Photoacoustic Microscopy of Ceramics Using Laser Heterodyne Detection

    In recent years a variety of thermoacoustic techniques have been used to image surface and near-surface features in ceramics. These include early gas cell methods [1] as well as the Scanning Electron Acoustic Microscopy (SEAM) technique [2] and, more recently, the Mirage or Optical Beam Deflection (OBD) methods [3,4]. In the gas cell and Mirage methods, the effect of the outgoing thermal wave on the air boundary at the sample surface is sensed, respectively, by a microphone and by a laser beam. In SEAM the specimen vibration or acoustic wave is sensed directly by a contacting transducer. The gas cell and Mirage methods generate pure thermal images but require long times to generate useful images of areas a few millimeters square. SEAM, on the other hand, is a high signal-to-noise technique due to the exceptional sensitivity of piezoelectrics and thus requires shorter imaging times. However, SEAM is sensitive to both the thermal wave signal and the local mechanical response of the specimen to the thermal wave, and convolutes the two responses. The author has recently demonstrated that the entire image can be dominated by the mechanical response alone [5]; SEAM image interpretation is considerably clearer in such cases. This thermomechanical mechanism is now fairly well understood and its analysis will be presented elsewhere. SEAM is thus an excellent method for imaging ceramics. Using Coordinate Modulation (CM) with SEAM, 5 mm × 5 mm areas of ceramics have been imaged for surface and subsurface defects in 2 minutes. CM requires the electron beam to be dithered by a small amount instead of being intensity modulated, the usual approach. This increases image contrast and will be described later.

    Probabilistic approaches for modeling text structure and their application to text-to-text generation

    Since the early days of generation research, it has been acknowledged that modeling the global structure of a document is crucial for producing coherent, readable output. However, traditional knowledge-intensive approaches have been of limited utility in addressing this problem since they cannot be effectively scaled to operate in domain-independent, large-scale applications. Due to this difficulty, existing text-to-text generation systems rarely rely on such structural information when producing an output text. Consequently, texts generated by these methods do not match the quality of those written by humans – they are often fraught with severe coherence violations and disfluencies. In this chapter, I will present probabilistic models of document structure that can be effectively learned from raw document collections. This feature distinguishes these new models from traditional knowledge-intensive approaches used in symbolic concept-to-text generation. Our results demonstrate that these probabilistic models can be directly applied to content organization, and suggest that these models can prove useful in an even broader range of text-to-text applications than we have considered here. National Science Foundation (U.S.) (CAREER grant IIS-0448168); Microsoft Research New Faculty Fellowship.
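    The core idea – learning structural statistics from raw document collections and applying them to content organization – can be illustrated with a toy sketch (hypothetical, not the chapter's actual models): count transitions between sentence-level topic labels in training documents, then pick the ordering of new content that those transition counts score highest.

    ```python
    from collections import Counter
    from itertools import permutations

    def train_transition_counts(docs):
        """docs: one list of sentence-level topic labels per training document."""
        counts = Counter()
        for labels in docs:
            for a, b in zip(labels, labels[1:]):
                counts[(a, b)] += 1
        return counts

    def best_order(labels, counts):
        """Choose the permutation whose adjacent-pair transitions are most frequent."""
        score = lambda seq: sum(counts[(a, b)] for a, b in zip(seq, seq[1:]))
        return max(permutations(labels), key=score)
    ```

    Actual models in this line of work (e.g. HMM-style content models) use smoothed transition probabilities and approximate search; brute-force enumeration of permutations, as above, is exponential in the number of sentences.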

    PiSpy: an affordable, accessible, and flexible imaging platform for the automated observation of organismal biology and behavior

    A great deal of understanding can be gleaned from direct observation of organismal growth, development, and behavior. However, direct observation can be time-consuming and influence the organism through unintentional stimuli. Additionally, video-capture equipment can often be prohibitively expensive, difficult to modify to one’s specific needs, and may come with unnecessary features. Here, we describe PiSpy, a low-cost, automated video acquisition platform that uses a Raspberry Pi computer and camera to record video or images at specified time intervals or when externally triggered. All settings and controls, such as programmable light cycling, are accessible to users with no programming experience through an easy-to-use graphical user interface. Importantly, the entire PiSpy system can be assembled for less than $100 using laser-cut and 3D-printed components. We demonstrate the broad applications and flexibility of PiSpy across a range of model and non-model organisms. Designs, instructions, and code can be accessed through an online repository, where a global community of PiSpy users can also contribute their own unique customizations and help grow the community of open-source research solutions.
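    The capture logic described – images at fixed intervals plus externally triggered shots – can be sketched in a hardware-free form. This is a hypothetical helper for illustration, not PiSpy's actual code, which drives the Raspberry Pi camera and GPIO directly:

    ```python
    def schedule_captures(start_s, end_s, interval_s, trigger_times=()):
        """Return the times (in seconds) at which captures would fire:
        one every interval_s seconds, plus any externally triggered events
        (e.g. from a GPIO sensor) that fall inside the recording window."""
        times = set()
        t = start_s
        while t < end_s:
            times.add(t)
            t += interval_s
        times.update(tt for tt in trigger_times if start_s <= tt < end_s)
        return sorted(times)
    ```

    Separating the schedule from the camera call makes the timing logic easy to test without hardware; a deployment would pass each returned time to a capture routine.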

    Role of forested land for natural flood management in the UK: A review


    Architectural Paint Research and the Archaeology of Buildings

    Architectural Paint Research (APR) is the archaeological study of interior and exterior applied decoration. Over time, applied layers of paint and other decorative finishes build up on the surface of a built structure, encapsulating microscopic deposits of material evidence. This evidence can be used to inform the phase dating of a structure, or illuminate the historic function of a space. It can challenge preconceived ideas of how specific areas were decorated, and track changes in aesthetics over time. It can identify when architects’ ideologies have been balanced by practical considerations. It can provide an insight into the intangible and ephemeral atmosphere that decoration gives to a room. Finally, it can examine the dirt trapped between layers of decoration and thus categorize the physical environmental conditions that surrounded a building at varying points in its history. Although used in the commercial heritage and conservation sectors, Architectural Paint Research is almost completely unknown to building archaeologists. This article aims to introduce APR to a new audience, and argues that it is an invaluable tool in the archaeological interpretation of buildings.

    Linking genes to literature: text mining, information extraction, and retrieval applications for biology

    Efficient access to information contained in online scientific literature collections is essential for life science research, playing a crucial role from the initial stage of experiment planning to the final interpretation and communication of the results. The biological literature also constitutes the main information source for manual literature curation used by expert-curated databases. Following the increasing popularity of web-based applications for analyzing biological data, new text-mining and information extraction strategies are being implemented. These systems exploit existing regularities in natural language to extract biologically relevant information from electronic texts automatically. The aim of the BioCreative challenge is to promote the development of such tools and to provide insight into their performance. This review presents a general introduction to the main characteristics and applications of currently available text-mining systems for life sciences in terms of the following: the type of biological information demands being addressed; the level of information granularity of both user queries and results; and the features and methods commonly exploited by these applications. The current trend in biomedical text mining points toward an increasing diversification in terms of application types and techniques, together with integration of domain-specific resources such as ontologies. Additional descriptions of some of the systems discussed here are available on the internet.
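    At the simplest level, the "regularities in natural language" such systems exploit can be as basic as a lexicon lookup over token patterns. A toy sketch of dictionary-based gene-mention extraction (the gene list here is illustrative, not a real curated resource):

    ```python
    import re

    # Hypothetical mini-lexicon; real systems use curated databases and ontologies.
    GENE_LEXICON = {"BRCA1", "TP53", "EGFR"}

    def extract_gene_mentions(text):
        """Return tokens that match known gene symbols (case-insensitive)."""
        tokens = re.findall(r"[A-Za-z][A-Za-z0-9]*", text)
        return [t for t in tokens if t.upper() in GENE_LEXICON]
    ```

    Systems evaluated in challenges like BioCreative go far beyond this, handling ambiguous symbols, synonyms, and context-dependent disambiguation, but the lookup above captures the basic extraction step.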

    Microplanning with Communicative Intentions: The SPUD System

    The process of microplanning in Natural Language Generation (NLG) encompasses a range of problems in which a generator must bridge underlying domain-specific representations and general linguistic representations. These problems include constructing linguistic referring expressions to identify domain objects, selecting lexical items to express domain concepts, and using complex linguistic constructions to concisely convey related domain facts. In this paper, we argue that such problems are best solved through a uniform, comprehensive, declarative process. In our approach, the generator directly explores a search space for utterances described by a linguistic grammar. At each stage of search, the generator uses a model of interpretation, which characterizes the potential links between the utterance and the domain and context, to assess its progress in conveying domain-specific representations. We further address the challenges for implementation and knowledge representation in this approach. We show how to implement this approach effectively by using the lexicalized tree-adjoining grammar formalism (LTAG) to connect structure to meaning and using modal logic programming to connect meaning to context. We articulate a detailed methodology for designing grammatical and conceptual resources.
