
    A new multisensor software architecture for movement detection: Preliminary study with people with cerebral palsy

    A five-layered software architecture translating movements into mouse clicks has been developed and tested on an Arduino platform with two different sensors: an accelerometer and a flex sensor. The architecture comprises low-pass and derivative filters, an unsupervised classifier that adapts continuously to the strength of the user's movements, and a finite state machine which sets up a timer to prevent involuntary movements from triggering false positives. Four people without disabilities and four people with cerebral palsy (CP) took part in the experiments. People without disabilities obtained an average of 100% and 99.3% in precision and true positive rate (TPR) respectively, and there were no statistically significant differences between sensor types and placements. In the same experiment, people with disabilities obtained 97.9% and 100% in precision and TPR respectively. However, these results worsened when subjects used the system to access a communication board, to 89.6% and 94.8% respectively. With their usual method of access, an adapted switch, they obtained a precision and TPR of 86.7% and 97.8% respectively. For 3 out of 4 participants with disabilities our system detected the movement faster than the switch. For subjects with CP, the accelerometer was the easiest to use because it is more sensitive to gross motor motion than the flex sensor, which requires more complex movements. A final survey showed that 3 out of 4 participants with disabilities would prefer to use this new technology instead of their traditional method of access.
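
    A minimal sketch of how such a detection pipeline might be structured, assuming a single-axis sensor signal sampled at a fixed rate; the filter coefficients, the adaptive-threshold rule, and the refractory time below are illustrative placeholders, not the parameters used in the study:

```python
# Illustrative sketch of a movement-to-click pipeline: low-pass filter,
# derivative, adaptive threshold, and a refractory-timer state machine.
# All constants are hypothetical; the paper's actual layers may differ.

class ClickDetector:
    def __init__(self, alpha=0.2, k=3.0, refractory_s=0.5, fs=100.0):
        self.alpha = alpha            # low-pass smoothing factor
        self.k = k                    # threshold = k * running signal strength
        self.refractory = refractory_s
        self.dt = 1.0 / fs
        self.smoothed = 0.0
        self.prev = 0.0
        self.strength = 1e-3          # running estimate of movement strength
        self.lockout = 0.0            # time left in the refractory state

    def step(self, sample):
        """Process one raw sensor sample; return True when a click is emitted."""
        # Low-pass filter (exponential moving average).
        self.smoothed += self.alpha * (sample - self.smoothed)
        # Derivative of the smoothed signal.
        deriv = (self.smoothed - self.prev) / self.dt
        self.prev = self.smoothed
        # Unsupervised adaptation: track the typical magnitude of the derivative.
        self.strength += 0.01 * (abs(deriv) - self.strength)
        # Finite state machine with a refractory timer so involuntary movements
        # cannot trigger repeated false positives.
        if self.lockout > 0.0:
            self.lockout -= self.dt
            return False
        if abs(deriv) > self.k * self.strength:
            self.lockout = self.refractory
            return True
        return False
```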

    Towards content accessibility through lexical simplification for Maltese as a low-resource language

    Natural Language Processing techniques have been developed to assist in simplifying online content while preserving meaning. However, for low-resource languages, like Maltese, there are still numerous challenges and limitations. Lexical Simplification (LS) is a core technique typically adopted to improve content accessibility, and has been widely studied for high-resource languages such as English and French. Motivated by the need to improve access to Maltese content and the limitations in this context, this work set out to develop and evaluate an LS system for Maltese text. An LS pipeline was developed consisting of (1) potential complex word identification, (2) substitute generation, (3) substitute selection, and (4) substitute ranking. An evaluation data set was developed to assess the performance of each step. Results are encouraging and point to several directions for future work. Finally, a single-blind study was carried out with over 200 participants, where the system's perceived quality in text simplification was evaluated. Results suggest that meaning is retained about 50% of the time, and when meaning is retained, about 70% of system-generated sentences are either perceived as simpler or of equal simplicity to the original. Challenges remain, and this study proposes a number of areas that may benefit from further research.
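
    A hedged sketch of how the four stages could be wired together, assuming generic word-frequency and substitute-lexicon resources as inputs; the function names and the frequency-based ranking heuristic are illustrative, not the components used in the Maltese system:

```python
# Illustrative four-stage lexical simplification pipeline.
# `freq` (word -> corpus frequency) and `lexicon` (word -> synonyms) are
# assumed inputs; the actual system may use different models at each stage.

def identify_complex_words(tokens, freq, threshold=1e-5):
    """Stage 1: flag tokens whose corpus frequency falls below a threshold."""
    return [t for t in tokens if freq.get(t.lower(), 0.0) < threshold]

def generate_substitutes(word, lexicon):
    """Stage 2: propose candidate replacements from a synonym lexicon."""
    return lexicon.get(word.lower(), [])

def select_substitutes(word, candidates):
    """Stage 3: keep candidates that fit the context (placeholder filter)."""
    return [c for c in candidates if c.lower() != word.lower()]

def rank_substitutes(candidates, freq):
    """Stage 4: rank candidates by frequency (more frequent = simpler)."""
    return sorted(candidates, key=lambda c: freq.get(c.lower(), 0.0), reverse=True)

def simplify(sentence, freq, lexicon):
    out = []
    for tok in sentence.split():
        if identify_complex_words([tok], freq):
            ranked = rank_substitutes(
                select_substitutes(tok, generate_substitutes(tok, lexicon)), freq)
            out.append(ranked[0] if ranked else tok)
        else:
            out.append(tok)
    return " ".join(out)
```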

    A Framework for Evaluating Traceability Benchmark Metrics

    Many software traceability techniques have been developed in the past decade, but they suffer from inaccuracy. To address this shortcoming, the software traceability research community seeks to employ benchmarking. Benchmarking will help the community agree on whether improvements to traceability techniques have addressed the challenges it faces. A plethora of evaluation methods have been applied, with no consensus on what should be part of a community benchmark. The goals of this paper are: to identify recurring problems in the evaluation of traceability techniques, to identify essential properties that evaluation methods should possess to overcome the identified problems, and to provide guidelines for benchmarking software traceability techniques. We illustrate the properties and guidelines using an empirical evaluation of three software traceability techniques on nine data sets. The proposed benchmarking framework can be broadly applied to domains beyond traceability research.
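
    Evaluations of traceability techniques commonly score recovered trace links against a ground-truth answer set; the sketch below shows such scoring, assuming links are represented as (source, target) pairs. The precision/recall metrics and link names here are generic stand-ins; the abstract does not specify which metrics the proposed framework standardizes:

```python
# Illustrative scoring of candidate trace links against a ground-truth set.
# Precision/recall are standard IR metrics, shown only as an example of the
# kind of measurement a traceability benchmark would need to standardize.

def precision_recall(candidate_links, true_links):
    candidate = set(candidate_links)   # e.g. {("UC-03", "AuthService.java"), ...}
    truth = set(true_links)
    hits = candidate & truth
    precision = len(hits) / len(candidate) if candidate else 0.0
    recall = len(hits) / len(truth) if truth else 0.0
    return precision, recall

# Hypothetical requirement-to-code links.
cand = [("R1", "a.c"), ("R1", "b.c"), ("R2", "c.c")]
gold = [("R1", "a.c"), ("R2", "c.c"), ("R3", "d.c")]
p, r = precision_recall(cand, gold)
print(f"precision={p:.2f} recall={r:.2f}")   # precision=0.67 recall=0.67
```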

    Development of a Computer Vision-Based Three-Dimensional Reconstruction Method for Volume-Change Measurement of Unsaturated Soils during Triaxial Testing

    Problems associated with unsaturated soils are ubiquitous in the U.S., where expansive and collapsible soils are some of the most widely distributed and costly geologic hazards. Solving these widespread geohazards requires a fundamental understanding of the constitutive behavior of unsaturated soils. In the past six decades, the suction-controlled triaxial test has been established as a standard approach to characterizing the constitutive behavior of unsaturated soils. However, this type of test requires costly equipment and time-consuming testing processes. To overcome these limitations, a photogrammetry-based method has been developed recently to measure the global and localized volume changes of unsaturated soils during triaxial testing. However, this method relies on software to detect coded targets, which often requires tedious manual correction of incorrect target detections. To address this limitation, this study developed a photogrammetric computer vision-based approach for automatic target recognition and 3D reconstruction for volume-change measurement of unsaturated soils in triaxial tests. A deep learning method was used to improve the accuracy and efficiency of coded target recognition. A photogrammetric computer vision method and a ray tracing technique were then developed and validated to reconstruct three-dimensional models of the soil specimen.
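
    A minimal sketch of the basic building block of such a reconstruction, linear (DLT) triangulation of one detected target from two calibrated views. The projection matrices and pixel coordinates below are hypothetical, and the paper's full pipeline (deep-learning target detection, ray tracing through the triaxial cell) involves considerably more:

```python
# Illustrative linear triangulation of a coded target seen by two calibrated
# cameras. P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates.
# This omits the refraction correction needed for targets viewed through the cell.

import numpy as np

def triangulate(P1, P2, x1, x2):
    """Return the 3D point minimizing the algebraic reprojection error."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                          # de-homogenize

# Hypothetical example: two cameras 10 cm apart observing one target.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
point = np.array([0.05, 0.02, 1.0, 1.0])         # true 3D point (homogeneous)
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, x1, x2))               # ~ [0.05, 0.02, 1.0]
```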

    Deep-learning based reconstruction of the shower maximum Xmax using the water-Cherenkov detectors of the Pierre Auger Observatory

    The atmospheric depth of the air shower maximum Xmax is an observable commonly used for the determination of the nuclear mass composition of ultra-high energy cosmic rays. Direct measurements of Xmax are performed using observations of the longitudinal shower development with fluorescence telescopes. At the same time, several methods have been proposed for an indirect estimation of Xmax from the characteristics of the shower particles registered with surface detector arrays. In this paper, we present a deep neural network (DNN) for the estimation of Xmax. The reconstruction relies on the signals induced by shower particles in the ground-based water-Cherenkov detectors of the Pierre Auger Observatory. The network architecture features recurrent long short-term memory layers to process the temporal structure of signals and hexagonal convolutions to exploit the symmetry of the surface detector array. We evaluate the performance of the network using air showers simulated with three different hadronic interaction models. Thereafter, we account for long-term detector effects and calibrate the reconstructed Xmax using fluorescence measurements. Finally, we show that the event-by-event resolution in the reconstruction of the shower maximum improves with increasing shower energy and reaches less than 25 g/cm² at energies above 2×10¹⁹ eV.
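
    A rough sketch of the kind of architecture described, assuming per-station time traces arranged on a regular grid. The layer sizes are placeholders, and a standard square convolution stands in for the hexagonal convolution used for the actual Auger array geometry:

```python
# Simplified stand-in for the described network: an LSTM summarizes each
# station's time trace, and convolutions aggregate information across the
# (here square, in the paper hexagonal) grid of stations to regress Xmax.

import torch
import torch.nn as nn

class XmaxNet(nn.Module):
    def __init__(self, trace_len=120, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.conv = nn.Sequential(
            nn.Conv2d(hidden, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64, 1)               # regress a single Xmax value

    def forward(self, traces):
        # traces: (batch, grid, grid, trace_len) signal trace per station
        b, g1, g2, t = traces.shape
        x = traces.reshape(b * g1 * g2, t, 1)
        _, (h, _) = self.lstm(x)                    # last hidden state per station
        x = h[-1].reshape(b, g1, g2, -1).permute(0, 3, 1, 2)
        x = self.conv(x)
        x = x.mean(dim=(2, 3))                      # pool over the station grid
        return self.head(x).squeeze(-1)

model = XmaxNet()
dummy = torch.randn(2, 9, 9, 120)                   # 2 showers, 9x9 stations
print(model(dummy).shape)                            # torch.Size([2])
```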

    Heterodyne range imaging as an alternative to photogrammetry

    Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high-precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.
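
    For context, full-field range cameras of this type typically infer distance from the phase shift of an amplitude-modulated illumination signal. A small sketch of that standard relation follows; the 30 MHz modulation frequency is a hypothetical example, not necessarily the value used by the system in the paper:

```python
# Range from measured phase shift for an amplitude-modulated ranging camera:
# d = c * delta_phi / (4 * pi * f_mod). The modulation frequency below is an
# assumed example value.

import math

C = 299_792_458.0          # speed of light, m/s

def phase_to_range(delta_phi, f_mod=30e6):
    """Convert a measured phase shift in radians to a distance in meters."""
    return C * delta_phi / (4.0 * math.pi * f_mod)

def unambiguous_range(f_mod=30e6):
    """Maximum distance before the phase wraps (delta_phi = 2*pi)."""
    return C / (2.0 * f_mod)

print(phase_to_range(math.pi / 2))   # ~1.25 m at 30 MHz
print(unambiguous_range())           # ~5.0 m at 30 MHz
```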