1,184 research outputs found

    A Review of Voice-Based Person Identification: State-of-the-Art

    Automated person identification and authentication systems are useful for national security, the integrity of electoral processes, the prevention of cybercrime and many access-control applications. They are a critical component of information and communication technology, which is central to national development. Biometric identification is fast replacing traditional methods such as names, personal identification numbers, codes and passwords, since nature endows individuals with distinct personal imprints and signatures. Various measures have been put in place for person identification, ranging from the face to the fingerprint and beyond. This paper highlights the key approaches and schemes developed over the last five decades for voice-based person identification systems. Voice-based recognition has gained interest due to its non-intrusive data acquisition and its ability to continually study and adapt to changes in a person's voice. Information on the benefits and challenges of various biometric systems is also presented in this paper. The present and prominent voice-based recognition methods are discussed. It was observed that the application areas of these systems cover intelligent monitoring, surveillance, population management, election forensics, and immigration and border control.
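    The basic enroll-then-match idea behind such systems can be sketched in a few lines. This is a toy illustration only: real voice identification uses MFCC features or neural speaker embeddings, whereas here a plain averaged log-spectrum stands in so the example stays dependency-free, and the synthetic "speakers" and signal parameters are assumptions, not from the review.

    ```python
    # Toy sketch of enrollment and matching in a voice identification system.
    # A plain log-magnitude spectrum stands in for real speaker features.
    import numpy as np

    def features(signal, frame=256):
        """Average log-magnitude spectrum over fixed-length frames."""
        n = len(signal) // frame
        frames = signal[: n * frame].reshape(n, frame)
        spectra = np.abs(np.fft.rfft(frames, axis=1))
        return np.log1p(spectra).mean(axis=0)

    def identify(probe, enrolled):
        """Return the enrolled speaker whose template is closest (cosine)."""
        f = features(probe)
        def cos(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        return max(enrolled, key=lambda name: cos(f, enrolled[name]))

    rng = np.random.default_rng(0)
    t = np.arange(8192) / 8000.0
    # Two synthetic "speakers" distinguished by dominant pitch.
    alice = np.sin(2 * np.pi * 220 * t) + 0.1 * rng.standard_normal(t.size)
    bob = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(t.size)
    enrolled = {"alice": features(alice), "bob": features(bob)}
    probe = np.sin(2 * np.pi * 220 * t) + 0.1 * rng.standard_normal(t.size)
    print(identify(probe, enrolled))  # prints alice (probe matches alice's pitch)
    ```

    The same enroll/match structure underlies far more capable systems; only the feature extractor and similarity measure change.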

    Embarking on the Autonomous Journey: A Strikingly Engineered Car Control System Design

    This thesis develops an autonomous car control system with a Raspberry Pi. Two predictive models are implemented: a convolutional neural network (CNN) trained with machine learning and an input-based decision tree model using sensor data. The Raspberry Pi module controls the car hardware and acquires real-time camera data with OpenCV. A dedicated web server and event stream processor process data in real time using the trained neural network model, facilitating real-time decision-making. Unity and a Meta Quest 2 VR headset create the VR interface, while a generic DIY kit from Amazon and the Raspberry Pi provide the car hardware inputs. This research demonstrates the potential of VR in automotive communication, enhancing autonomous car testing and user experience.
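    A sensor-driven decision layer of the kind described above can be sketched as a small rule cascade. The sensor names and threshold values below are illustrative assumptions, not values taken from the thesis:

    ```python
    # Minimal sketch of a decision-tree-style control rule mapping
    # distance-sensor readings to a driving command. Thresholds are
    # illustrative assumptions.
    def decide(front_cm, left_cm, right_cm, stop_cm=15.0, slow_cm=40.0):
        """Map three distance readings (cm) to a driving command."""
        if front_cm < stop_cm:
            # Blocked ahead: turn toward the more open side, or stop.
            if max(left_cm, right_cm) < stop_cm:
                return "stop"
            return "turn_left" if left_cm > right_cm else "turn_right"
        if front_cm < slow_cm:
            return "forward_slow"
        return "forward"

    print(decide(100, 30, 30))  # forward
    print(decide(10, 50, 20))   # turn_left
    print(decide(10, 5, 5))     # stop
    ```

    In a real system these rules would be learned from labeled sensor data rather than hand-tuned, but the inference step has exactly this shape.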

    Estimating general motion and intensity from event cameras

    Robotic vision algorithms have become widely used in many consumer products, enabling technologies such as autonomous vehicles, drones, and augmented reality (AR) and virtual reality (VR) devices, to name a few. These applications require vision algorithms to work in real-world environments with extreme lighting variations and fast-moving objects. However, robotic vision applications often rely on standard video cameras, which face severe limitations in fast-moving scenes or under bright light sources that diminish image quality with artefacts like motion blur or over-saturation. To address these limitations, the body of work presented here investigates the use of alternative sensor devices which mimic the superior perception properties of human vision. Such silicon retinas were proposed by neuromorphic engineering, and we focus here on one such biologically inspired sensor, the event camera, which offers a new camera paradigm for real-time robotic vision. The camera provides a high measurement rate, low latency, high dynamic range, and low data rate. The signal of the camera is composed of a stream of asynchronous events at microsecond resolution. Each event indicates when an individual pixel registers a logarithmic intensity change of a pre-set threshold size. Using this novel signal has proven to be very challenging in most computer vision problems, since common vision methods require synchronous absolute intensity information. In this thesis, we present for the first time a method to reconstruct an image and estimate motion from an event stream without additional sensing or prior knowledge of the scene. This method is based on coupled estimation of both motion and intensity, which enables our event-based analysis, previously only possible with severe limitations.
We also present the first machine learning algorithm for event-based unsupervised intensity reconstruction which does not depend on an explicit motion estimate and reveals finer image details. This learning approach does not rely on event-to-image examples, but learns from standard camera image examples which are not coupled to the event data. In experiments we show that the learned reconstruction improves upon our handcrafted approach. Finally, we combine our learned approach with motion estimation methods and show that the improved intensity reconstruction also significantly improves the motion estimation results. We hope our work in this thesis bridges the gap between the event signal and images and opens event cameras to practical solutions that overcome the current limitations of frame-based cameras in robotic vision.
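    The event-generation model described in this abstract (a pixel fires when its log intensity changes by more than a preset contrast threshold, then resets its reference level) can be sketched with a frame-differencing simulator. The threshold value and the tiny offset guarding the logarithm are assumptions for the example:

    ```python
    # Sketch of the event-camera signal model: an event fires at a pixel
    # when log intensity changes by at least a preset contrast threshold.
    import numpy as np

    def events(log_memory, frame, threshold=0.2):
        """Compare a new frame against per-pixel log-intensity memory.

        Returns a list of (y, x, polarity) events; memory is updated in place."""
        cur_log = np.log(frame + 1e-6)
        diff = cur_log - log_memory
        out = []
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            out.append((int(y), int(x), polarity))
            # The sensor resets its reference to the new level.
            log_memory[y, x] = cur_log[y, x]
        return out

    frame0 = np.full((4, 4), 0.5)
    memory = np.log(frame0 + 1e-6)
    frame1 = frame0.copy()
    frame1[1, 2] = 1.0      # brightening pixel -> positive event
    frame1[3, 0] = 0.25     # darkening pixel  -> negative event
    evts = events(memory, frame1)
    print(evts)  # [(1, 2, 1), (3, 0, -1)]
    ```

    A real sensor produces these events asynchronously per pixel at microsecond resolution rather than by comparing whole frames, which is precisely why standard frame-based vision methods cannot consume the signal directly.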

    Bio-inspired retinal optic flow perception in robotic navigation

    This thesis concerns the bio-inspired visual perception of motion, with emphasis on locomotion, targeting robotic systems. By continuously registering moving visual features on the human retina, a sensation of a visual flow cue is created. The interpretation of these visual flow cues forms a low-level motion percept better known as retinal optic flow. Retinal optic flow is often mentioned and credited in human locomotor research, but so far only in theory and simulated environments. Reconstructing retinal optic flow fields using existing optic flow estimation methods and experimental data from naive test subjects provides further insight into how it interacts with intermittent control behavior and dynamic gazing. Retinal optic flow is successfully demonstrated in a vehicular steering task and further supports the idea that humans may use such perception to aid their ability to correct their steering during navigation. To achieve the reconstruction and estimation of retinal optic flow, a set of optic flow estimators was systematically and fairly evaluated on the criteria of run-time predictability, reliability, and accuracy. A formalized benchmarking methodology using containerization technology was developed to generate the results. Furthermore, the readiness of road vehicles for the adoption of modern robotic software and related software processes was investigated, with special emphasis on real-time computing and on introducing containerization and the microservice design paradigm. This enables continuous integration, continuous deployment, and continuous experimentation to aid further development and research. With the method of estimating retinal optic flow and its interaction with intermittent control, a more complete vision-based bionic steering control model can be proposed and tested in a live robotic system.
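    The quantity at the heart of this work, optic flow, can be illustrated with a deliberately minimal estimator: a single least-squares fit of the brightness-constancy equation over the whole image (Lucas-Kanade with one window), recovering one global translation. The estimators benchmarked in the thesis are far more capable; this sketch, with its synthetic test image, only shows what is being computed:

    ```python
    # Toy global optic-flow estimate: solve Ix*u + Iy*v = -It in the
    # least-squares sense over the whole image (one-window Lucas-Kanade).
    import numpy as np

    def global_flow(img1, img2):
        """Recover a single (u, v) translation between two frames."""
        Ix = np.gradient(img1, axis=1)   # horizontal intensity gradient
        Iy = np.gradient(img1, axis=0)   # vertical intensity gradient
        It = img2 - img1                 # temporal intensity change
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
        b = -It.ravel()
        (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
        return u, v

    # Smooth synthetic image, then the same content shifted +1 px in x.
    y, x = np.mgrid[0:64, 0:64]
    img1 = np.sin(x / 6.0) + np.cos(y / 9.0)
    img2 = np.sin((x - 1) / 6.0) + np.cos(y / 9.0)
    u, v = global_flow(img1, img2)
    print(u, v)   # u close to 1, v close to 0
    ```

    Retinal optic flow adds gaze dynamics on top of this: the flow field is re-expressed in a retina-fixed frame that rotates with the observer's gaze, which is what makes its reconstruction from experimental driving data non-trivial.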

    Concurrent fNIRS and EEG for brain function investigation: A systematic, methodology-focused review

    Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) stand as state-of-the-art techniques for non-invasive functional neuroimaging. On a unimodal basis, EEG has poor spatial resolution but high temporal resolution; in contrast, fNIRS offers better spatial resolution but is constrained by poor temporal resolution. One important merit shared by EEG and fNIRS is that both modalities are highly portable and can be integrated into a compatible experimental setup, providing compelling grounds for the development of multimodal fNIRS-EEG integration analysis approaches. Despite a growing number of studies using concurrent fNIRS-EEG designs reported in recent years, the methodological reference of past studies remains unclear. To fill this knowledge gap, this review critically summarizes the analysis methods currently used in concurrent fNIRS-EEG studies, providing an up-to-date overview and guideline for future concurrent fNIRS-EEG projects. A literature search was conducted using PubMed and Web of Science through 31 August 2021. After screening and qualification assessment, 92 studies involving concurrent fNIRS-EEG data recordings and analyses were included in the final methodological review. Specifically, three methodological categories of concurrent fNIRS-EEG data analyses were identified and described in detail: EEG-informed fNIRS analyses, fNIRS-informed EEG analyses, and parallel fNIRS-EEG analyses. Finally, we highlight current challenges and potential directions for concurrent fNIRS-EEG data analyses in future research.
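    The core technical obstacle in all three analysis categories is that the two modalities sample at very different rates, so features from each must be brought onto a common time base before they can be related. The sketch below illustrates this with a toy parallel analysis: synthetic EEG power and a synthetic fNIRS signal are each collapsed to one sample per second and then correlated. The sampling rates, the shared slow modulation, and the choice of Pearson correlation are assumptions for the example, not a prescription from the reviewed studies:

    ```python
    # Toy parallel fNIRS-EEG analysis: process each modality separately,
    # resample both feature time courses to 1 Hz, then correlate them.
    import numpy as np

    EEG_FS = 250      # Hz, typical EEG rate (assumption)
    FNIRS_FS = 10     # Hz, typical fNIRS rate (assumption)
    T = 60            # seconds of recording

    rng = np.random.default_rng(1)
    drive = 1.0 + 0.8 * np.sin(2 * np.pi * 0.05 * np.arange(T))  # one value/s

    # EEG: noise whose amplitude follows the slow drive.
    eeg = np.repeat(drive, EEG_FS) * rng.standard_normal(T * EEG_FS)
    # fNIRS "HbO": the same drive sampled at 10 Hz plus measurement noise.
    fnirs = np.repeat(drive, FNIRS_FS) + 0.1 * rng.standard_normal(T * FNIRS_FS)

    def per_second(x, fs):
        """Collapse a signal to one sample per second (common time base)."""
        return x.reshape(-1, fs).mean(axis=1)

    eeg_power = per_second(eeg ** 2, EEG_FS)     # windowed EEG power
    fnirs_mean = per_second(fnirs, FNIRS_FS)     # windowed fNIRS level
    r = np.corrcoef(eeg_power, fnirs_mean)[0, 1]
    print(f"r = {r:.2f}")  # strong positive correlation expected
    ```

    EEG-informed fNIRS analyses and fNIRS-informed EEG analyses differ from this parallel scheme in that features from one modality enter the other's model (for example, as regressors) rather than being compared after the fact.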