
    Analyzing Handwritten and Transcribed Symbols in Disparate Corpora

    Cuneiform tablets are among the oldest textual artifacts, used for more than three millennia, and are comparable in amount and relevance to texts written in Latin or ancient Greek. These tablets are typically found in the Middle East and were written by imprinting wedge-shaped impressions into wet clay. Motivated by the increased demand for computerized analysis of documents within the Digital Humanities, we develop the foundation for quantitative processing of cuneiform script. A tablet acquired with a 3D scanner and a manually created line tracing are two completely different representations of the same type of text source. Each representation is typically processed with its own tool-set, and textual analysis is therefore limited to a certain type of digital representation. To homogenize these data sources, a unifying minimal wedge feature description is introduced. It is extracted by pattern matching and subsequent conflict resolution, as cuneiform is written densely with highly overlapping wedges. Similarity metrics for cuneiform signs based on distinct assumptions are presented. (i) An implicit model represents cuneiform signs as undirected mathematical graphs and measures the similarity of signs with graph kernels. (ii) An explicit model approaches the problem of recognition by an optimal assignment between the wedge configurations of two signs. Further, methods for spotting cuneiform script are developed, combining the feature descriptors for cuneiform wedges with prior work on segmentation-free word spotting using part-structured models. The ink-ball model is adapted by treating wedge feature descriptors as individual parts. The similarity metrics and the adapted spotting model are both evaluated on a real-world dataset, outperforming the state of the art in cuneiform sign similarity and spotting. To prove the applicability of these methods for computational cuneiform analysis, a novel approach is presented for mining frequent constellations of wedges, resulting in spatial n-grams. Furthermore, a method for automated transliteration of tablets is evaluated by employing structured and sequential learning on a dataset of parallel sentences. Finally, the conclusion outlines how the presented methods enable the development of new tools and computational analyses, which are objective and reproducible, for quantitative processing of cuneiform script.
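
    The explicit model described above casts sign comparison as an optimal assignment between the wedge configurations of two signs. As a hedged illustration of that idea (not the thesis's actual feature set or cost function), the sketch below matches two sets of hypothetical wedge descriptors with the Hungarian algorithm and turns the matching cost into a similarity score; the descriptor layout, the Euclidean cost, and the penalty for unmatched wedges are all assumptions.

```python
# Hypothetical sketch: similarity of two cuneiform signs via an optimal
# assignment between their wedge feature descriptors (Hungarian algorithm).
# The (x, y, angle, depth) descriptor layout and the cost/penalty terms are
# illustrative assumptions, not the thesis's exact formulation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def sign_similarity(wedges_a: np.ndarray, wedges_b: np.ndarray) -> float:
    """Each row of the inputs is one wedge descriptor, e.g. (x, y, angle, depth)."""
    # Pairwise Euclidean cost between every wedge of sign A and every wedge of sign B.
    cost = np.linalg.norm(wedges_a[:, None, :] - wedges_b[None, :, :], axis=-1)
    # Optimal one-to-one assignment between the two wedge configurations.
    rows, cols = linear_sum_assignment(cost)
    matched_cost = cost[rows, cols].sum()
    # Penalize wedges left unmatched when the signs have different wedge counts.
    unmatched = abs(len(wedges_a) - len(wedges_b))
    return 1.0 / (1.0 + matched_cost + unmatched)

a = np.array([[0.0, 0.0, 0.5, 1.0], [1.0, 0.2, 1.6, 0.8]])
b = np.array([[0.1, 0.0, 0.4, 1.1], [1.1, 0.1, 1.5, 0.9]])
print(sign_similarity(a, b))   # higher for more similar wedge configurations
```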

    Right ventricular biomechanics in pulmonary hypertension

    As outcome in pulmonary hypertension is strongly associated with progressive right ventricular dysfunction, the work in this thesis seeks to determine the regional distribution of forces on the right ventricle, its geometry, and its deformations subsequent to load. This thesis contributes to the understanding of how circulating biomarkers of energy metabolism and stress-response pathways are related to adverse cardiac remodelling and functional decompensation. A numerical model of the heart was used to derive a three-dimensional representation of right ventricular morphology, function and wall stress in pulmonary hypertension patients. This approach was tested by modelling the effect of pulmonary endarterectomy in patients with chronic thromboembolic disease. The relationship between the cardiac phenotype and 10 circulating metabolites, known to be associated with all-cause mortality, was assessed using mass univariate regression. Increasing afterload (mean pulmonary artery pressure) was significantly associated with hypertrophy of the right ventricular inlet and dilatation, indicative of global eccentric remodelling, and decreased systolic excursion. Right ventricular ejection fraction was found to be negatively associated with 3-hydroxy-3-methylglutarate, N-formylmethionine, and fumarate. Wall stress was related to all-cause mortality, and its decrease after pulmonary endarterectomy was associated with a fall in brain natriuretic peptide. Six metabolites were associated with elevated end-systolic wall stress: dehydroepiandrosterone sulfate, N2,N2-dimethylguanosine, N1-methylinosine, 3-hydroxy-3-methylglutarate, N-acetylmethionine, and N-formylmethionine. Metabolic profiles related to energy metabolism and stress-response pathways are associated with elevations in right ventricular end-systolic wall stress that have prognostic significance in pulmonary hypertension patients. These results show that statistical parametric mapping can give regional information on the right ventricle and that metabolic phenotyping, as well as predicting outcomes, provides markers informative of the biomechanical status of the right ventricle in pulmonary hypertension.
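
    Mass univariate regression here means fitting one regression per location on the cardiac mesh, relating a circulating metabolite to the local phenotype (e.g. wall stress). The sketch below is a minimal illustration of that scheme under assumed inputs; the variable names, the single-covariate design matrix, and the omission of clinical covariates and multiple-comparison correction are simplifications, not the thesis's actual analysis.

```python
# Illustrative sketch of mass univariate regression: per-vertex wall-stress
# values of a 3D right-ventricular model are regressed, one vertex at a time,
# against a circulating metabolite. All names and sizes are assumptions.
import numpy as np

def mass_univariate_regression(wall_stress: np.ndarray, metabolite: np.ndarray) -> np.ndarray:
    """wall_stress: (n_subjects, n_vertices); metabolite: (n_subjects,)."""
    n_subjects, _ = wall_stress.shape
    # Design matrix: intercept + metabolite level per subject.
    X = np.column_stack([np.ones(n_subjects), metabolite])
    # Solve all per-vertex least-squares fits in a single call.
    betas, _, _, _ = np.linalg.lstsq(X, wall_stress, rcond=None)
    # betas[1] is the per-vertex slope relating the metabolite to wall stress.
    return betas[1]

rng = np.random.default_rng(0)
stress = rng.normal(size=(40, 500))      # 40 subjects, 500 mesh vertices (synthetic)
dhea_s = rng.normal(size=40)             # one metabolite level per subject (synthetic)
print(mass_univariate_regression(stress, dhea_s).shape)  # (500,) slopes, one per vertex
```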

    NASA Tech Briefs, September 2008

    Topics covered include: Nanotip Carpets as Antireflection Surfaces; Nano-Engineered Catalysts for Direct Methanol Fuel Cells; Capillography of Mats of Nanofibers; Directed Growth of Carbon Nanotubes Across Gaps; High-Voltage, Asymmetric-Waveform Generator; Magic-T Junction Using Microstrip/Slotline Transitions; On-Wafer Measurement of a Silicon-Based CMOS VCO at 324 GHz; Group-III Nitride Field Emitters; HEMT Amplifiers and Equipment for their On-Wafer Testing; Thermal Spray Formation of Polymer Coatings; Improved Gas Filling and Sealing of an HC-PCF; Making More-Complex Molecules Using Superthermal Atom/Molecule Collisions; Nematic Cells for Digital Light Deflection; Improved Silica Aerogel Composite Materials; Microgravity, Mesh-Crawling Legged Robots; Advanced Active-Magnetic-Bearing Thrust-Measurement System; Thermally Actuated Hydraulic Pumps; A New, Highly Improved Two-Cycle Engine; Flexible Structural-Health-Monitoring Sheets; Alignment Pins for Assembling and Disassembling Structures; Purifying Nucleic Acids from Samples of Extremely Low Biomass; Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery; UV-Resistant Non-Spore-Forming Bacteria From Spacecraft-Assembly Facilities; Hard-X-Ray/Soft-Gamma-Ray Imaging Sensor Assembly for Astronomy; Simplified Modeling of Oxidation of Hydrocarbons; Near-Field Spectroscopy with Nanoparticles Deposited by AFM; Light Collimator and Monitor for a Spectroradiometer; Hyperspectral Fluorescence and Reflectance Imaging Instrument; Improving the Optical Quality Factor of the WGM Resonator; Ultra-Stable Beacon Source for Laboratory Testing of Optical Tracking; Transmissive Diffractive Optical Element Solar Concentrators; Delaying Trains of Short Light Pulses in WGM Resonators; Toward Better Modeling of Supercritical Turbulent Mixing; JPEG 2000 Encoding with Perceptual Distortion Control; Intelligent Integrated Health Management for a System of Systems; Delay Banking for Managing Air Traffic; and Spline-Based Smoothing of Airfoil Curvatures.

    The Dollar General: Continuous Custom Gesture Recognition Techniques At Everyday Low Prices

    Humans use gestures to emphasize ideas and disseminate information. Their importance is apparent in how we continuously augment social interactions with motion, gesticulating in harmony with nearly every utterance to ensure observers understand what we wish to communicate, and their relevance has not escaped the HCI community's attention. For almost as long as computers have been able to sample human motion at the user interface boundary, software systems have been made to understand gestures as command metaphors. Customization, in particular, has great potential to improve user experience, whereby users map specific gestures to specific software functions. However, custom gesture recognition remains a challenging problem, especially when training data is limited, input is continuous, and designers who wish to use customization in their software are limited by mathematical attainment, machine learning experience, domain knowledge, or a combination thereof. Data collection, filtering, segmentation, pattern matching, synthesis, and rejection analysis are all non-trivial problems a gesture recognition system must solve. To address these issues, we introduce The Dollar General (TDG), a complete pipeline composed of several novel continuous custom gesture recognition techniques. Specifically, TDG comprises an automatic low-pass filter tuner that we use to improve signal quality, a segmenter for identifying gesture candidates in a continuous input stream, a classifier for discriminating gesture candidates from non-gesture motions, and a synthetic data generation module we use to train the classifier. Our system achieves high recognition accuracy with as little as one or two training samples per gesture class, is largely input device agnostic, and does not require advanced mathematical knowledge to understand and implement. In this dissertation, we motivate the importance of gestures and customization, describe each pipeline component in detail, and introduce strategies for data collection and prototype selection.
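
    As a rough, hedged sketch of the kind of pipeline described above, the code below chains a simple exponential low-pass filter, arc-length resampling, and nearest-neighbour template matching that works with a single training sample per gesture class. The filter constant, resampling count, and distance metric are illustrative assumptions and are not TDG's actual components.

```python
# Minimal sketch: smooth a trajectory, resample it, and classify it against
# one stored template per gesture class. Parameters are assumptions.
import numpy as np

def low_pass(points: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Exponential moving-average smoothing of an (n, 2) trajectory."""
    out = points.astype(float).copy()
    for i in range(1, len(out)):
        out[i] = alpha * points[i] + (1 - alpha) * out[i - 1]
    return out

def resample(points: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a trajectory to n points equally spaced along its arc length."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    targets = np.linspace(0, d[-1], n)
    return np.column_stack([np.interp(targets, d, points[:, k]) for k in range(2)])

def classify(candidate: np.ndarray, templates: dict) -> str:
    """Return the label of the closest template (one training sample per class)."""
    cand = resample(low_pass(candidate))
    cand -= cand.mean(axis=0)                      # translation invariance
    scores = {}
    for label, tmpl in templates.items():
        t = resample(low_pass(tmpl))
        t -= t.mean(axis=0)
        scores[label] = np.linalg.norm(cand - t)
    return min(scores, key=scores.get)

# One recorded template per gesture class is enough for this sketch.
templates = {"line": np.column_stack([np.linspace(0, 1, 20), np.zeros(20)]),
             "diag": np.column_stack([np.linspace(0, 1, 20), np.linspace(0, 1, 20)])}
candidate = np.column_stack([np.linspace(0, 1, 50), 0.05 * np.random.rand(50)])
print(classify(candidate, templates))              # expected: "line"
```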

    Proceedings of the 4th international conference on disability, virtual reality and associated technologies (ICDVRAT 2002)

    The proceedings of the conference.

    Prioritizing Content of Interest in Multimedia Data Compression

    Image and video compression techniques make data transmission and storage in digital multimedia systems more efficient and feasible given a system's limited storage and bandwidth. Many generic image and video compression techniques such as JPEG and H.264/AVC have been standardized and are now widely adopted. Despite their great success, we observe that these standard compression techniques are not the best solution for data compression in special types of multimedia systems such as microscopy videos and low-power wireless broadcast systems. In these application-specific systems, where the content of interest in the multimedia data is known and well-defined, we should rethink the design of a data compression pipeline. We hypothesize that by identifying and prioritizing multimedia data's content of interest, new compression methods can be invented that are far more effective than standard techniques. In this dissertation, a set of new data compression methods based on the idea of prioritizing the content of interest is proposed for three different kinds of multimedia systems. I will show that the key to designing efficient compression techniques in these three cases is to prioritize the content of interest in the data. The definition of the content of interest of multimedia data depends on the application. First, I show that for microscopy videos, the content of interest is defined as the spatial regions in the video frame whose pixels contain more than just noise. Keeping the data in those regions at high quality and discarding other information yields a novel microscopy video compression technique. Second, I show that for a Bluetooth low energy beacon based system, practical multimedia data storage and transmission is possible by prioritizing content of interest. I designed custom image compression techniques that preserve edges in a binary image, or foreground regions of a color image of indoor or outdoor objects. Last, I present a new indoor Bluetooth low energy beacon based augmented reality system that integrates a 3D moving object compression method that prioritizes the content of interest.
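
    For the microscopy case, one minimal way to realize "keep only regions whose pixels contain more than just noise" is a block-wise variance test that zeroes out noise-only blocks before a standard codec sees the frame. The sketch below illustrates that idea; the block size, noise threshold, and synthetic frame are assumptions, not the dissertation's actual method.

```python
# Hedged sketch: block-wise "content of interest" masking for a microscopy frame.
# Blocks whose variance exceeds an assumed noise level are kept; noise-only
# blocks are zeroed out before the frame goes to a standard codec.
import numpy as np

def mask_noise_blocks(frame: np.ndarray, block: int = 16, noise_var: float = 4.0) -> np.ndarray:
    """Zero out blocks of a grayscale frame whose variance looks like pure noise."""
    out = np.zeros_like(frame)
    h, w = frame.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = frame[y:y + block, x:x + block]
            if tile.var() > noise_var:                 # content of interest: keep it
                out[y:y + block, x:x + block] = tile
    return out

rng = np.random.default_rng(1)
frame = rng.normal(10.0, 1.0, size=(64, 64))               # dim background noise
frame[20:40, 20:40] += rng.normal(100.0, 20.0, (20, 20))   # a textured structure of interest
kept = mask_noise_blocks(frame)
print((kept != 0).mean())                                  # fraction of pixels retained
```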

    Image based analysis of visibility in smoke laden environments

    This study investigates visibility in a smoke-laden environment. For many years, researchers and engineers in fire safety have criticized the inadequacy of existing theory in describing the effects of colour, viewing angle, environmental lighting, etc. on the visibility of an emergency sign. In the current study, the author has raised the fundamental question of what visibility means and how it should be measured in fire safety engineering, and has tried to address the problem by redefining visibility based on the perceived image of a target sign. New algorithms have been created during this study to utilise modern hardware and software technology in simulating the human-perceived image of an object in both experiments and computer modelling. Unlike the traditional threshold of visual distance, visibility in the current study has been defined as a continuous function ranging from clearly discernible to completely invisible. This allows the comparison of visibility under various conditions, not just at the threshold. The current experiments have revealed that different conditions may result in the same visual threshold but follow very different paths on the way to that threshold. The new definition of visibility has made the quantification of visibility in pre-threshold conditions possible. Such quantification can help to improve the performance of fire evacuation, since most evacuees will experience the pre-threshold condition. With the current measurement of visibility, all the influential factors such as colour and viewing angle can be tested in experiments and simulated in a numerical model. Based on the newly introduced definition of visibility, a set of experiments has been carried out in a purpose-built smoke tunnel. Digital camera images of various illuminated signs were taken under different illumination, colour and smoke conditions. Using an algorithm developed by the author in this study, the digital camera images were converted into simulated human-perceived images. The visibility of a target sign is measured against the quality of its acquired image. Conclusions have been drawn by comparing visibility under different conditions. One of them is that signs illuminated with red and green lights have similar visibility, which is far better than that with blue light. This is the first time this seemingly obvious conclusion has been quantified. In the simulation of visibility in participating media, the author has introduced an algorithm that combines irradiance caching in 3D space with Monte Carlo ray tracing. It can calculate the distribution of scattered radiation with good accuracy, without the high cost typically associated with the zonal method or the limitations of the discrete ordinates method. The algorithm has been combined with a two-pass solution method to produce high-resolution images without introducing an excessive number of rays from the light source. The convergence of the implemented iterative solution procedure has been proven theoretically. The accuracy of the model is demonstrated by comparison with the analytical solution for a point radiant source in 3D space. Further validation of the simulation model has been carried out by comparing the model prediction with the data from the smoke tunnel experiments. The output of the simulation model is presented in the form of an innovative floor map of visibility (FMV). It helps the fire safety designer to identify regions of poor visibility at a glance and will prove to be a very useful tool in performance-based fire safety design.
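
    One way to picture visibility as a continuous function of the perceived image, rather than a single threshold distance, is to attenuate a sign image with a Beer-Lambert style extinction term and score the remaining contrast against the ambient background. The sketch below does this under assumed values; the extinction coefficient, path lengths, and Michelson-style contrast metric are illustrative choices, not the thesis's actual measurement.

```python
# Hedged sketch: continuous visibility of a sign seen through smoke, scored by
# the contrast of the perceived image. All constants are assumptions.
import numpy as np

def perceived_image(sign: np.ndarray, background: float, extinction: float, distance: float) -> np.ndarray:
    """Blend the sign toward the ambient background as smoke optical depth grows."""
    transmittance = np.exp(-extinction * distance)          # Beer-Lambert attenuation
    return transmittance * sign + (1.0 - transmittance) * background

def visibility_score(sign: np.ndarray, background: float) -> float:
    """Michelson-style contrast of the perceived sign against its background."""
    lmax, lmin = sign.max(), min(sign.min(), background)
    return (lmax - lmin) / (lmax + lmin + 1e-9)

sign = np.full((32, 32), 0.9)              # a bright sign panel
sign[8:24, 8:24] = 0.1                     # dark symbol on the panel
for d in (1.0, 5.0, 20.0):                 # assumed path lengths through smoke, in metres
    img = perceived_image(sign, background=0.5, extinction=0.2, distance=d)
    print(d, round(visibility_score(img, 0.5), 3))   # visibility decays smoothly with distance
```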
