
    Human-Centric Machine Vision

    Recently, algorithms for processing visual information have evolved greatly, providing efficient and effective solutions that cope with the variability and complexity of real-world environments. These achievements have driven the development of Machine Vision systems that move beyond typical industrial applications, where environments are controlled and tasks are highly specific, towards innovative solutions that address the everyday needs of people. Human-Centric Machine Vision can help solve problems raised by the needs of our society, e.g. security and safety, health care, medical imaging, and human-machine interfaces. Such applications must handle changing, unpredictable, and complex situations, and must account for the presence of humans.

    Towards Energy Efficient Mobile Eye Tracking for AR Glasses through Optical Sensor Technology

    After the introduction of smartphones and smartwatches, Augmented Reality (AR) glasses are considered the next breakthrough in the field of wearables. While the transition from smartphones to smartwatches was based mainly on established display technologies, the display technology of AR glasses presents a technological challenge. Many display technologies, such as retina projectors, rely on continuous adaptive control of the display based on the user’s pupil position. Furthermore, head-mounted systems require an adaptation and extension of established interaction concepts to provide the user with an immersive experience. Eye tracking is a crucial technology to help AR glasses achieve a breakthrough through optimized display technology and gaze-based interaction concepts. Available eye-tracking technologies, such as Video Oculography (VOG), do not meet the requirements of AR glasses, especially regarding power consumption, robustness, and integrability. To overcome these limitations and push mobile eye tracking for AR glasses forward, novel laser-based eye-tracking sensor technologies are researched in this thesis. The thesis contributes a significant scientific advancement towards energy-efficient mobile eye tracking for AR glasses. In the first part of the thesis, novel scanned-laser eye-tracking sensor technologies for AR glasses with retina projectors as the display technology are researched. The goal is to overcome the disadvantages of VOG systems and to enable robust and efficient eye tracking under ambient light and slippage through optimized sensing methods and algorithms. The second part of the thesis researches the use of static Laser Feedback Interferometry (LFI) sensors as a low-power, always-on sensor modality for detecting user interaction via gaze gestures and for context recognition through Human Activity Recognition (HAR) for AR glasses. The static LFI sensors can measure the distance to the eye and the eye’s surface velocity at an outstanding sampling rate. Furthermore, they offer high integrability regardless of the display technology. In the third part of the thesis, a model-based eye-tracking approach is researched based on the static LFI sensor technology. The approach achieves eye tracking with an extremely high sampling rate by fusing multiple LFI sensors, which enables methods for display resolution enhancement such as foveated rendering for AR glasses and Virtual Reality (VR) systems. The scientific contributions of this work lead to a significant advance in the field of mobile eye tracking for AR glasses, in particular through the introduction of novel sensor technologies that enable robust eye tracking in uncontrolled environments. Furthermore, the scientific contributions of this work have been published in internationally renowned journals and conferences.
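    As a purely illustrative aside, the model-based fusion of several static LFI sensors described above can be pictured as fitting a simple geometric eye model to a handful of laser range measurements. The sketch below is an assumption-laden toy, not the sensing model or implementation of the thesis: the spherical cornea of known radius, the sensor poses, the readings, and all helper names (hit_points, fit_cornea_center, gaze_direction) are made up for illustration.

# Illustrative sketch: estimate gaze from a few static laser range sensors
# by fitting a sphere (toy cornea model) to the measured surface points.
# Geometry, radius, and all values are assumptions, not the thesis's method.
import numpy as np
from scipy.optimize import least_squares

CORNEA_RADIUS_MM = 7.8                              # assumed known model radius
EYE_ROTATION_CENTER = np.array([0.0, 0.0, 0.0])     # assumed frame origin

def hit_points(origins, directions, distances):
    """Convert per-sensor range readings into 3-D points on the cornea."""
    directions = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    return origins + distances[:, None] * directions

def fit_cornea_center(points, r=CORNEA_RADIUS_MM):
    """Least-squares fit of a sphere center with known radius to the points."""
    def residuals(c):
        return np.linalg.norm(points - c, axis=1) - r
    return least_squares(residuals, points.mean(axis=0)).x

def gaze_direction(cornea_center):
    """Approximate gaze as the axis from the rotation center to the cornea."""
    v = cornea_center - EYE_ROTATION_CENTER
    return v / np.linalg.norm(v)

# Three hypothetical sensors integrated in a glasses frame (mm, made up):
origins = np.array([[30.0, 10.0, 20.0], [-30.0, 10.0, 20.0], [0.0, -25.0, 22.0]])
directions = np.array([[-1.0, -0.3, -0.6], [1.0, -0.3, -0.6], [0.0, 0.8, -0.7]])
distances = np.array([28.4, 28.9, 26.1])
center = fit_cornea_center(hit_points(origins, directions, distances))
print("estimated gaze direction:", gaze_direction(center))

    The least-squares formulation accepts any number of sensors, so additional sensors simply add rows to the fit; in this sketch, that is where multi-sensor fusion would enter.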

    Research on Efficient Application Mapping onto Parallel Computing Accelerators (並列計算アクセラレータへの効率的なアプリケーションマッピングに関する研究)

    Doctoral dissertation, Nagasaki University (長崎大学). Degree number: 博(工)甲第3号 (Doctor of Engineering). Date of degree conferral: March 20, 2014 (Heisei 26). Course doctorate.

    A survey of the application of soft computing to investment and financial trading


    Aerospace medicine and biology: A continuing bibliography with indexes, supplement 118

    This special bibliography lists 338 reports, articles, and other documents introduced into the NASA scientific and technical information system in July 1973.

    Pertanika Journal of Science & Technology


    Ensuring the Take-Over Readiness of the Driver Based on the Gaze Behavior in Conditionally Automated Driving Scenarios

    Conditional automation is the next step towards the fully automated vehicle. Under prespecified conditions, an automated driving function can take over the driving task and the responsibility for the vehicle, enabling the driver to perform secondary tasks. However, performing secondary tasks and the resulting reduced attention towards the road may lead to critical take-over situations. In such situations, the automated driving function reaches its limits, forcing the driver to take over responsibility and control of the vehicle again. The driver thus represents the fallback level for the conditionally automated system. At this point the question arises as to how it can be ensured that the driver can take over adequately and in time without restricting the automated driving system or the new freedom of the driver. To answer this question, this work proposes a novel prototype of an advanced driver assistance system that automatically classifies the driver’s take-over readiness in order to keep the driver "in the loop". The results show that such a classification of take-over readiness is feasible, even in the highly dynamic vehicle environment, using a machine learning approach. In a driving simulator study, it was verified that far more than half of the drivers performing a low-quality take-over would have been warned shortly before the actual take-over, whereas nearly 90% of the drivers performing a high-quality take-over would not have been interrupted by the driver assistance system. The classification of the driver’s take-over readiness is performed by means of machine learning algorithms. The underlying features for this classification are mainly based on the driver’s head and eye movement behavior. It is shown how the secondary tasks currently being performed, as well as glances on the road, can be derived from these measured signals. To this end, novel, online-capable approaches for driver-activity recognition and Eyes-on-Road detection are introduced, evaluated, and compared to each other based on data from both a simulator study and a real-driving study. These novel approaches address several shortcomings of current state-of-the-art methods, namely that i) only a coarse separation of driver activities is possible, ii) costly and time-consuming calibrations are necessary, and iii) there is no adaptation to conditionally automated driving scenarios. A sketch of this kind of feature-based classification follows below.
    Conditionally automated driving is the next step in the evolution of driver assistance systems towards fully automated vehicles. Under defined conditions, the driver can hand over the driving task, including responsibility for the vehicle, to an automated driving function and is then free to devote attention to other activities. To nevertheless ensure that the driver can retake control of the vehicle as quickly as possible when required, the question arises of how the missing attention towards traffic can be compensated without restricting the conditionally automated driving function or the driver’s newly gained freedom. To answer this question, this thesis presents a first prototypical driver assistance system that automatically classifies the driver’s take-over readiness and, depending on the result, keeps the driver "in the loop". The results show that automated classification using machine learning methods achieves excellent recognition rates even in the highly dynamic vehicle environment. In one of the driving simulator studies, it was demonstrated that far more than half of the participants with low take-over quality would have been warned shortly before the actual take-over situation, while nearly 90% of the participants with high take-over quality would not have been disturbed in their secondary task. This automated classification is based on features obtained through driver observation with an interior camera. Extracting these features requires methods for driver-activity recognition and for detecting glances on the road, which currently still suffer from certain weaknesses: i) only a coarse distinction of activities is possible, ii) costly and time-consuming calibration steps are necessary, and iii) there is a lack of adaptation to conditionally automated driving scenarios. For these reasons, new methods for driver-activity recognition and for detecting glances on the road were developed, implemented, and evaluated in this work, with the applicability of the methods under realistic in-vehicle conditions as a central aspect. For the evaluation of the individual subsystems and of the overall driver assistance system, extensive experiments were conducted in a driving simulator as well as in real test vehicles equipped with reference and near-production measurement technology.
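    As a hedged illustration of the feature-based classification described above, the sketch below trains a generic random-forest classifier on made-up gaze and head-movement features. The feature names, the synthetic labels, and the choice of model are assumptions for illustration only, not the system evaluated in the thesis.

# Minimal sketch of classifying take-over readiness from head- and eye-
# movement features. Features, labels, and model are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-situation features derived from driver monitoring:
#   eyes_on_road_ratio : fraction of the last 10 s with gaze on the road
#   glance_count       : number of glances to the road in the last 10 s
#   head_yaw_std       : head-yaw variability (rad), proxy for secondary task
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),       # eyes_on_road_ratio
    rng.integers(0, 10, n),         # glance_count
    rng.uniform(0.0, 0.5, n),       # head_yaw_std
])
# Synthetic label: "ready" if the driver looked at the road often enough.
y = (X[:, 0] + 0.05 * X[:, 1] > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

    In a real system, the features would be computed from the driver-monitoring signals over a sliding window and the labels would come from annotated take-over quality, rather than from the synthetic rule used here.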

    Topics in Adaptive Optics

    Advances in adaptive optics technology and applications move forward at a rapid pace. The basic idea of wavefront compensation in real-time has been around since the mid 1970s. The first widely used application of adaptive optics was for compensating atmospheric turbulence effects in astronomical imaging and laser beam propagation. While some topics have been researched and reported for years, even decades, new applications and advances in the supporting technologies occur almost daily. This book brings together 11 original chapters related to adaptive optics, written by an international group of invited authors. Topics include atmospheric turbulence characterization, astronomy with large telescopes, image post-processing, high power laser distortion compensation, adaptive optics and the human eye, wavefront sensors, and deformable mirrors.
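    As a purely illustrative aside, the real-time wavefront compensation mentioned above is commonly realized as a closed loop between a wavefront sensor and a deformable mirror. The sketch below assumes a linear, randomly generated interaction matrix, made-up dimensions, and a fixed integrator gain; it is not taken from any chapter of the book.

# Illustrative closed-loop adaptive-optics integrator: measure residual
# wavefront slopes, reconstruct actuator commands via the pseudo-inverse of
# an (assumed, randomly generated) interaction matrix, and update the mirror
# with a fixed loop gain. All dimensions and values are made up.
import numpy as np

rng = np.random.default_rng(1)
n_slopes, n_actuators = 64, 32
interaction = rng.normal(size=(n_slopes, n_actuators))   # mirror -> sensor response
reconstructor = np.linalg.pinv(interaction)              # command matrix
gain = 0.4                                               # integrator loop gain

turbulence = rng.normal(size=n_slopes)                   # static aberration (toy)
commands = np.zeros(n_actuators)

for step in range(20):
    residual = turbulence - interaction @ commands       # what the sensor measures
    commands += gain * (reconstructor @ residual)        # integrator update
    if step % 5 == 0:
        print(f"step {step:2d}  residual rms = {np.std(residual):.4f}")

    The loop gain trades convergence speed against noise amplification; the value used here is arbitrary.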