26 research outputs found

    Gas Discharge Visualization: An Imaging and Modeling Tool for Medical Biometrics

    Get PDF
    The need for automated identification of disease makes medical biometrics a timely issue. Not all available biometric tools provide real-time feedback. We introduce the gas discharge visualization (GDV) technique as a biometric tool with the potential to identify deviations from the normal functional state at early stages and in real time. GDV is a nonintrusive technique that captures the physiological and psychoemotional status of a person, and the functional status of different organs and organ systems, through the electrophotonic emissions of fingertips placed on the surface of an impulse analyzer. This paper first introduces biometrics and its different types, then focuses specifically on medical biometrics and the potential applications of GDV within it. We also present our previous experience with GDV in autism research and discuss the potential use of GDV in combination with computer science to develop biological patterns/biomarkers for different kinds of health abnormalities, including cancer and mental illness.

    Device profiling analysis in Device-Aware Network

    Get PDF
    As more and more devices with a variety of capabilities become Internet-capable, device independence becomes a major issue when we want the information we request to be displayed correctly. This thesis introduces and compares how existing standards create a profile that describes device capabilities to achieve the goal of device independence. After establishing the importance of device independence, the thesis builds on the idea to introduce a Device-Aware Network (DAN). DAN provides the infrastructure support for device-content compatibility matching for data transmission. We identify the major components of the DAN architecture and the issues associated with providing this new network service. A Device-Aware Network will improve the network's efficiency by preventing unusable data from consuming host and network resources. The device profile is the key to achieving this goal.
    http://archive.org/details/deviceprofilingn109451301
    Captain, Taiwan Army
    Approved for public release; distribution is unlimited.
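    The device-content compatibility matching that DAN performs can be sketched as a simple profile comparison. This is an illustrative sketch only: the field names and matching rules below are assumptions for exposition, not the profile schema used by the thesis or by standards such as CC/PP.

```python
# Hypothetical sketch of DAN-style device-content compatibility matching.
# Profile fields ("screen_width", "formats", ...) are illustrative assumptions.

def compatible(device_profile: dict, content_requirements: dict) -> bool:
    """Return True if the device can render the content.

    Numeric capabilities must meet or exceed the requirement;
    set-valued capabilities must contain the required value.
    """
    for key, required in content_requirements.items():
        capability = device_profile.get(key)
        if capability is None:
            return False  # unknown capability: reject rather than waste bandwidth
        if isinstance(required, (int, float)):
            if capability < required:
                return False
        elif required not in capability:
            return False
    return True

phone = {"screen_width": 480, "formats": {"jpeg", "png"}, "memory_mb": 256}
video = {"screen_width": 1280, "formats": "h264"}
image = {"screen_width": 320, "formats": "jpeg"}

print(compatible(phone, video))  # False: screen too narrow, no h264 support
print(compatible(phone, image))  # True
```

    In a DAN, a check of this kind would run inside the network, so the h264 video above is dropped before it ever consumes the phone's resources.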

    An Investigation of Iris Recognition in Unconstrained Environments

    Get PDF
    Iris biometrics is widely regarded as a reliable and accurate method for personal identification, and continuing advancements in the field have led to the technology being widely adopted in recent years and implemented in many different scenarios. Current typical iris biometric deployments, while generally expected to perform well, require a considerable level of co-operation from the system user. Specifically, the physical positioning of the human eye in relation to the iris capture device is a critical factor, which can substantially affect the performance of the overall iris biometric system. The work reported in this study explores some of the important issues relating to the capture and identification of iris images at varying positions with respect to the capture device, and in particular presents an investigation into the analysis of iris images captured when the gaze angle of a subject is not aligned with the axis of the camera lens. A reliable method of acquiring off-angle iris images is implemented, together with a study of a database of such images compiled methodically. A detailed analysis of these so-called “off-angle” characteristics is presented, making possible the implementation of new methods whereby significant enhancement of system performance can be achieved. The research carried out in this study suggests that carefully implementing new training methodologies to improve classification performance can effectively compensate for the problem of off-angle iris images. It also suggests that acquiring off-angle iris samples during the enrolment process for an iris biometric system, combined with the developed training configurations, provides an increase in classification performance.
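    To illustrate why off-angle capture degrades matching: an iris viewed off-axis is foreshortened from a circle into an ellipse. The first-order geometric correction below is a hedged sketch of this effect, not the study's actual method (which relies on training configurations rather than geometric normalisation), and assumes the gaze angle is known.

```python
import numpy as np

# Sketch: a circular iris boundary viewed at gaze angle theta is compressed
# along one axis by cos(theta); stretching that axis by 1/cos(theta)
# restores circularity before the usual polar unwrapping.

def correct_off_angle(points: np.ndarray, theta_deg: float) -> np.ndarray:
    """Undo foreshortening along the x-axis for boundary points of shape (N, 2)."""
    theta = np.radians(theta_deg)
    return points * np.array([1.0 / np.cos(theta), 1.0])

# A unit circle observed 30 degrees off-axis:
angles = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)
observed = circle * np.array([np.cos(np.radians(30.0)), 1.0])  # foreshortened ellipse
recovered = correct_off_angle(observed, 30.0)

radii = np.linalg.norm(recovered, axis=1)
print(radii.min(), radii.max())  # both ~1.0: circularity restored
```

    In practice the gaze angle must itself be estimated from the image, which is one reason the study's training-based compensation is attractive.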

    Addressing Situational and Physical Impairments and Disabilities with a Gaze-Assisted, Multi-Modal, Accessible Interaction Paradigm

    Get PDF
    Every day we encounter a variety of scenarios that lead to situationally induced impairments and disabilities, i.e., our hands are assumed to be engaged in a task, and hence unavailable for interacting with a computing device. For example, a surgeon performing an operation, a worker in a factory with greasy hands or wearing thick gloves, a person driving a car, and so on all represent scenarios of situational impairments and disabilities. In such cases, performing point-and-click interactions, text entry, or authentication on a computer using conventional input methods like the mouse, keyboard, and touch is either inefficient or not possible. Unfortunately, individuals with physical impairments and disabilities, by birth or due to an injury, are forced to deal with these limitations every single day. Generally, these individuals experience difficulty or are completely unable to perform basic operations on a computer. Therefore, to address situational and physical impairments and disabilities it is crucial to develop hands-free, accessible interactions. In this research, we try to address the limitations, inabilities, and challenges arising from situational and physical impairments and disabilities by developing a gaze-assisted, multi-modal, hands-free, accessible interaction paradigm. Specifically, we focus on the three primary interactions: 1) point-and-click, 2) text entry, and 3) authentication. We present multiple ways in which the gaze input can be modeled and combined with other input modalities to enable efficient and accessible interactions. In this regard, we have developed a gaze and foot-based interaction framework to achieve accurate “point-and-click" interactions and to perform dwell-free text entry on computers. In addition, we have developed a gaze gesture-based framework for user authentication and to interact with a wide range of computer applications using a common repository of gaze gestures. 
The interaction methods and devices we have developed are a) evaluated using standard HCI procedures like Fitts' law, text entry metrics, authentication accuracy, and video analysis attacks, b) compared against the speed, accuracy, and usability of other gaze-assisted interaction methods, and c) qualitatively analyzed by conducting user interviews. From the evaluations, we found that our solutions achieve higher efficiency than existing systems and also address their usability issues. To discuss each of these solutions: first, the gaze and foot-based system we developed supports point-and-click interactions while addressing the “Midas Touch” issue. The system performs at least as well (in time and precision) as the mouse, while enabling hands-free interactions. We have also investigated the feasibility, advantages, and challenges of using gaze and foot-based point-and-click interactions on standard (up to 24") and large displays (up to 84") through Fitts' law evaluations. Additionally, we have compared the performance of the gaze input to other standard inputs like the mouse and touch. Second, to support text entry, we developed a gaze and foot-based dwell-free typing system, and investigated foot-based activation methods like foot-press and foot gestures. We have demonstrated that our dwell-free typing methods are efficient and highly preferred over conventional dwell-based gaze typing methods. Using our gaze typing system, users type up to 14.98 words per minute (WPM), as opposed to 11.65 WPM with dwell-based typing. Importantly, our system addresses the critical usability issues associated with gaze typing in general. Third, we addressed the lack of an accessible and shoulder-surfing-resistant authentication method by developing a gaze gesture recognition framework and presenting two authentication strategies that use gaze gestures.
Our authentication methods use static and dynamic transitions of the objects on the screen, and they authenticate users with an accuracy of 99% (static) and 97.5% (dynamic). Furthermore, unlike other systems, our dynamic authentication method is not susceptible to single video iterative attacks, and has a lower success rate with dual video iterative attacks. Lastly, we demonstrated how our gaze gesture recognition framework can be extended to allow users to design gaze gestures of their choice and associate them with appropriate commands like minimize, maximize, scroll, etc., on the computer. We presented a template matching algorithm which achieved an accuracy of 93%, and a geometric feature-based decision tree algorithm which achieved an accuracy of 90.2% in recognizing the gaze gestures. In summary, our research demonstrates how situational and physical impairments and disabilities can be addressed with a gaze-assisted, multi-modal, accessible interaction paradigm.
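    A template-matching gaze gesture recognizer of the kind the abstract mentions can be sketched as resampling a recorded gaze path and comparing it point-wise against stored templates. The gesture names, resampling length, and normalisation steps below are illustrative assumptions, not the framework's actual 93%-accurate algorithm.

```python
import numpy as np

# Hedged sketch of template matching for gaze gestures: resample each path
# to a fixed number of points, centre and scale it, then pick the template
# with the smallest mean point-wise distance.

def normalize(path: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a gaze path of shape (N, 2) to n points, centre, scale to unit size."""
    d = np.cumsum(np.r_[0.0, np.linalg.norm(np.diff(path, axis=0), axis=1)])
    t = np.linspace(0.0, d[-1], n)
    resampled = np.stack([np.interp(t, d, path[:, i]) for i in (0, 1)], axis=1)
    resampled -= resampled.mean(axis=0)
    return resampled / np.abs(resampled).max()

def classify(path: np.ndarray, templates: dict) -> str:
    """Return the name of the closest template under mean point-wise distance."""
    p = normalize(path)
    return min(templates,
               key=lambda name: np.linalg.norm(p - normalize(templates[name]), axis=1).mean())

templates = {
    "swipe_right": np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float),
    "swipe_up": np.array([[0, 0], [0, 1], [0, 2], [0, 3]], dtype=float),
}
noisy = np.array([[0, 0.1], [1, -0.1], [2, 0.05], [3, 0.0]])
print(classify(noisy, templates))  # swipe_right
```

    The normalisation makes recognition tolerant to where on the screen the gesture starts and how large it is drawn, which matters for gaze input since fixation accuracy varies across the display.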

    Validation Study of ReFace (Reality Enhancement Facial Approximation by Computational Estimation)

    Get PDF
    ReFace (Reality Enhancement Facial Approximation by Computational Estimation) is a prototype facial approximation software program developed by the Federal Bureau of Investigation (FBI) in conjunction with GE Global Research. The prototype extrapolates an “approximation” of a face from a skull using a database of computed tomography (CT) scans of living individuals. The test set consisted of CT scans of 53 articulated human skulls from the William M. Bass Donated Skeletal Collection and the William M. Bass Forensic Skeletal Collection, which are curated at the University of Tennessee, Knoxville. Through the Federal Bureau of Investigation's Visiting Scientist Program, an educational opportunity administered by the Oak Ridge Institute for Science and Education (ORISE), the researcher conducted an independent validation of this software in two phases. Phase 1 tested and evaluated the software's performance, resulting in improvements to the software and the development of a standardized protocol for the articulation, packaging, and preparation of human skulls for CT scanning. Phase 2 validated the accuracy of the software in producing facial approximations from human skulls using face pools and resemblance ratings. In Phase 2, computerized facial approximations were visually compared with antemortem photographs by four participant groups (N = 103). Ten test subjects of European ancestry (six females and four males) were selected for the face-pool and resemblance-rating validation tests. Participants were asked to choose the face-pool photograph that most closely resembled the facial approximation produced by ReFace. In the second test, the same participants were asked to rate, on a scale of 1 to 5, how closely ReFace facial approximations of target subjects resembled an antemortem photograph.
In the Face Pool Validation Test, nine out of ten target subjects were correctly identified above random chance, and the frequency distribution was statistically above chance expectations for those nine subjects (p < .01). The mean hit rate across all subjects was 24% (10% above random chance). There were no significant differences in hit rates between male participants (67%) and female participants (33%), or between participant groups. All participants were non-experts. Male target subjects received more correct responses than female target subjects. The overall ratings for the Resemblance Rating Validation Test were 13% none, 24% slight, 22% approximate, 25% close, and 16% strong. The majority of subjects were rated as close resemblance (six subjects), followed by strong resemblance (one subject), approximate resemblance (one subject), and slight resemblance (one subject). The foil comparison received an equal number of ratings for no resemblance (30.5%) and slight resemblance (30.5%).
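    The claim that hit rates were statistically above chance can be illustrated with a one-tailed binomial test. The face-pool size assumed below (seven photographs, implying roughly 14% chance, consistent with the reported "10% above random chance") and the per-subject trial count are assumptions for illustration, not figures taken from the study.

```python
from math import comb

# Hedged illustration: is an observed hit count significantly above the
# chance rate of picking the right photograph from a face pool at random?

def binomial_p_value(hits: int, trials: int, p_chance: float) -> float:
    """One-tailed P(X >= hits) under Binomial(trials, p_chance)."""
    return sum(comb(trials, k) * p_chance**k * (1.0 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

participants = 103            # all four groups pooled (assumption)
hit_rate = 0.24               # reported mean hit rate
hits = round(participants * hit_rate)   # ~25 correct identifications
p = binomial_p_value(hits, participants, 1 / 7)  # 7-photo pool assumed
print(f"p = {p:.4f}")  # well below .01
```

    With a pool of seven, chance alone predicts about 15 correct picks out of 103, so 25 correct picks is unlikely under the null hypothesis, matching the study's p < .01 finding.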

    Institute for Scientific Computing Research Annual Report: Fiscal Year 2004

    Full text link

    Informational Privacy and Self-Disclosure Online: A Critical Mixed-Methods Approach to Social Media

    Get PDF
    This thesis investigates the multifaceted processes that have normalised identifiable self-disclosure in online environments, and how perceptions of informational privacy and patterns of self-disclosure behaviour have evolved over the relatively brief history of online communication. Its mixed-methods approach critically examines a wide and diverse variety of primary and secondary sources to bring together aspects of the social dynamics that have contributed to generalised identifiable self-disclosure. The research also utilises the results of exploratory statistical and qualitative analysis of an extensive online survey completed by UCL students as a snapshot in time. This is combined with arguments developed from an analysis of existing published sources, and looks ahead to possible future developments. The study examines the period when people online proved to be more trusting, and how users of the Internet responded to the growing societal expectation to share personal information online. It addresses how the ethics of privacy evolved over time to allow a persistent association of online self-disclosure with real-life identity that had not been seen before the emergence of social network sites. The earlier resistance to identifiable self-disclosure was largely overcome by a combination of elements and circumstances, some resulting from the demographics of young users, users' attitudes to deception, ideology, and trust-building processes. Social and psychological factors, such as gaining social capital, peer pressure, and the overall rewarding and seductive nature of social media, have led users to waive significant parts of their privacy in order to receive the perceived benefits.
The sociohistorical context allows this research to relate evolving phenomena like the privacy paradox, lateral surveillance, and self-censorship to the revamped ethics of online privacy and self-disclosure.

    ISCR Annual Report: Fiscal Year 2004

    Full text link

    Aerospace Medicine and Biology: A cumulative index to the 1974 issues of a continuing bibliography

    Get PDF
    This publication is a cumulative index to the abstracts contained in supplements 125 through 136 of Aerospace Medicine and Biology: A Continuing Bibliography. It includes three indexes: subject, personal author, and corporate source.