ULTRA CLOSE-RANGE DIGITAL PHOTOGRAMMETRY AS A TOOL TO PRESERVE, STUDY, AND SHARE SKELETAL REMAINS
Skeletal collections around the world hold valuable and intriguing knowledge about humanity. Their potential value could be fully exploited by overcoming current limitations in documenting and sharing them. Virtual anthropology provides effective ways to study and value skeletal collections using three-dimensional (3D) data, e.g. enabling powerful comparative and evolutionary studies along with specimen preservation and dissemination. CT and laser scanning are the most widely used techniques for three-dimensional reconstruction. However, they are resource-intensive and therefore difficult to apply to large samples or skeletal collections.
Ultra close-range digital photogrammetry (UCR-DP) enables photorealistic 3D reconstruction from simple photographs of the specimen. However, it is the least used method in skeletal anthropology, and the lack of appropriate protocols often limits the quality of its outcomes.
This Ph.D. thesis explored the application of UCR-DP in skeletal anthropology. The state of the art of this technique was studied, and a new approach based on cloud computing was proposed and validated against current gold standards. This approach relies on the processing capabilities of remote servers and a free-for-academic-use software environment; it proved to produce measurements equivalent to those of osteometry and, in many cases, more precise than those of CT scanning. Cloud-based UCR-DP allowed the processing of multiple 3D models at once, enabling low-cost, quick, and effective 3D production.
The technique was successfully used to digitally preserve an initial sample of 534 crania from the skeletal collections of the Museo Sardo di Antropologia ed Etnografia (MuSAE, Università degli Studi di Cagliari). Best practices in using the technique for skeletal collection dissemination were studied, and several applications were developed, including MuSAE online virtual tours, virtual physical anthropology labs and distance learning, durable online dissemination, and values-led, participatorily designed interactive and immersive exhibitions at the MuSAE. The sample will be used in a future population study of Sardinian skeletal characteristics from the Neolithic to modern times.
In conclusion, cloud-based UCR-DP offers many significant advantages over other 3D scanning techniques: greater versatility in application range and technical implementation, scalability, photorealistic restitution, and reduced requirements in terms of hardware, labour, time, and cost. It is therefore the best choice for effectively documenting and valuing large skeletal samples and collections.
Vision Sensors and Edge Detection
This book reflects a selection of recent developments in the area of vision sensors and edge detection. There are two sections. The first presents vision sensors with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques such as image measurements, image transformations, filtering, and parallel computing.
Indoor Mapping and Reconstruction with Mobile Augmented Reality Sensor Systems
Augmented Reality (AR) makes it possible to display virtual, three-dimensional content directly within the real environment. Instead of showing arbitrary virtual objects at an arbitrary location, AR technology can also be used to present geodata in situ, at the very place the data refer to. AR thus opens up the possibility of enriching the real world with virtual, location-based information. In this thesis, this variant of AR is defined and discussed in depth as "Fused Reality".
The practical added value of this Fused Reality concept can be well demonstrated by its application to digital building models, where building-specific information, for example the routing of pipes and cables inside the walls, can be displayed in its correct position on the real object. To realize such an indoor Fused Reality application, some basic conditions must be fulfilled. A given building can only be augmented with location-based information if a digital model of that building is available. While larger construction projects nowadays are often planned and executed with the help of Building Information Modelling (BIM), so that a digital model is created together with the real building, digital models are usually not available for older existing buildings. Creating a digital model of an existing building manually is possible, but involves great effort. Furthermore, given a suitable building model, an AR device must be able to determine its own position and orientation within the building relative to this model in order to display augmentations in the correct position.
This thesis examines and discusses various aspects of these problems. First, different ways of capturing indoor building geometry with sensor systems are discussed. An investigation is then presented into the extent to which modern AR devices, which typically also feature a multitude of sensors, are themselves suitable for use as indoor mapping systems. The resulting indoor mapping datasets can then be used to reconstruct building models automatically. For this purpose, an automated, voxel-based indoor reconstruction method is presented and evaluated quantitatively on four datasets, captured for this purpose, with corresponding reference data. Furthermore, different ways of localizing mobile AR devices within a building and the corresponding building model are discussed. In this context, the evaluation of a marker-based indoor localization method is also presented. Finally, a new approach for aligning indoor mapping datasets with the axes of the coordinate system is presented.
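The basic voxelization step underlying such voxel-based indoor reconstruction can be sketched as follows. This is a minimal illustration under an assumed voxel size, not the reconstruction method evaluated in the thesis: points sampled on building surfaces are binned into a regular 3D grid, and the occupied voxels approximate walls, floors, and ceilings.

```python
# Minimal voxelization sketch (illustrative, not the thesis method):
# map each 3D point to an integer voxel index and collect the set of
# occupied voxels. The voxel edge length is an assumed parameter.

VOXEL_SIZE = 0.25  # voxel edge length in metres (assumed)

def voxelize(points, voxel_size=VOXEL_SIZE):
    """Return the set of integer voxel indices occupied by the points."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

occupied = voxelize([(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (1.3, 0.0, 0.0)])
# the first two points share voxel (0, 0, 0); the third falls in (5, 0, 0)
```

Downstream steps of a real pipeline would then classify occupied voxels into structural elements, which is where the method-specific work lies.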
Quality Taxonomy for Scalable Algorithms of Free Viewpoint Video Objects
This dissertation intends to make a contribution to the quality assessment of free viewpoint video objects within the context of video communication systems. The work analyzes opportunities and obstacles, focusing on users' subjective quality of experience in this special case. Quality estimation of emerging free viewpoint video object technology in video communication has not yet been assessed comprehensively, and adequate approaches are missing. The challenges are to define the factors that influence quality, to formulate an adequate measure of quality, and to link the quality of experience to the technical realization within an undefined and ever-changing technical realization process. There are two advantages of interlinking the quality of experience with the quality of service: first, it can benefit the technical realization process by allowing adaptability (e.g., based on the systems used by the end users); second, it provides an opportunity to support scalability in a user-centered way (e.g., based on a cost or resource limitation). The thesis outlines the theoretical background and introduces a user-centered quality taxonomy in the form of an interlinking model. A description of the related project Skalalgo3d, which offered a framework for application, is included. The outlined results consist of a systematic definition of factors that influence quality, including a research framework, and evaluation activities involving more than 350 participants, as well as the quality features they define for the evaluated quality of free viewpoint video objects in video communication applications.
Based on these quality features, a model that links these results with the technical creation process, including a formalized quality measure, is presented. Building on this, a flow chart and a slope field are proposed; these are intended to visualize the potential relationships and may serve as a starting point for further investigation and for differentiating the relations in the form of functions.
Ocular motion classification for mobile device presentation attack detection
Title from PDF of title page, viewed February 25, 2021. Dissertation advisor: Reza Derakhshan. Vita. Includes bibliographical references (pages 105-129). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2020.
As a practical pursuit of quantified uniqueness, biometrics explores the parameters that make us who we are and provides the tools we need to secure the integrity of that identity. In our culture of constant connectivity, an increasing reliance on biometrically secured mobile devices is transforming them into a target for bad actors. While no system will ever prevent all forms of intrusion, even state-of-the-art biometric methods remain vulnerable to spoof attacks. As these attacks become more sophisticated, ocular motion-based presentation attack detection (PAD) methods provide a potential deterrent. This dissertation presents the methods and evaluation of a novel optokinetic nystagmus (OKN) based PAD system for mobile device applications, which leverages phase-locked temporal features of a unique reflexive behavioral response. Background is provided on the historical and literary context of eye motion and ocular tracking to contextualize the objectives and accomplishments of this work. An evaluation of the improved methods for sample processing and sequential stability is provided, highlighting the presented improvements to the stability of convolutional facial landmark localization and the automated spatiotemporal feature extraction and classification models. Insights gleaned from this work elucidate some of the major challenges of mobile ocular motion feature extraction, as well as future considerations for the refinement and application of OKN motion signatures as a novel mobile device based PAD method.
Contents: Introduction -- Retrospective, Contextual and Contemporary Analysis -- Experimental Design -- Methods and Results -- Discussion -- Conclusion
Festschrift for the 60th Birthday of Wolfgang Strasser
This Festschrift is dedicated to Prof. Dr.-Ing. Dr.-Ing. E.h. Wolfgang Straßer on the occasion of his 60th birthday. A number of computer graphics researchers, all of whom come from the "Tübingen school", have contributed essays to this volume, in part together with their own students.
The contributions range from object reconstruction from image features through physical simulation to rendering and visualization, and from theoretically oriented essays to present and future practical applications. This thematic variety vividly illustrates the breadth and diversity of computer graphics research as practiced at the Straßer chair in Tübingen.
The fact alone that ten computer graphics professors at universities and universities of applied sciences come from Tübingen shows Professor Straßer's formative influence on the computer graphics landscape in Germany. That several physicists and mathematicians are among them, won over to this field in Tübingen, is above all due to his commitment and charisma.
Besides the high regard for Professor Straßer's scientific achievements, his personality certainly played a decisive part in the authors' spontaneous willingness to contribute to this Festschrift. With exceptionally great personal dedication he supports students, doctoral candidates, and habilitation candidates, arranges research contacts through his rich international network, and thus creates outstanding conditions for independent scientific work.
With their contributions, the authors wish to give Wolfgang Straßer pleasure, and they combine their thanks with the wish to continue to share in his professionally and personally rich and enriching work.
DIG-MAN: Integration of digital tools into product development and manufacturing education
General objectives of PRODEM education.
Teaching product development requires various digital tools. Nowadays, digital tools usually rely on computers, which have become a standard element of manufacturing and teaching environments. In this context, the integration of computer-based technologies in manufacturing environments plays a crucial role, enriching, accelerating, and integrating different production phases such as product development, design, manufacturing, and inspection. Moreover, digital tools play an important role in production management. According to Wdowik and Ratnayake (2019 paper: Open Access Digital Tool's Application Potential in Technological Process Planning: SMMEs Perspective, https://doi.org/10.1007/978-3-030-29996-5_36), digital tools can be divided into several main groups: machine tools and technological equipment (MTE), devices (D), internet(intranet)-based tools (I), and software (S). The groups are presented in Fig. 1.1. The machine tools and technological equipment group contains all existing machines and devices commonly used in the manufacturing and inspection phases. This group is used in the physical shaping of manufactured products, in measurement tasks regarding tools and products, etc. The devices (D) group is proposed to separate out the newest trends in using mobile and computer-based technologies, such as smartphones or tablets, and to indicate the necessity of increased mobility within production sites. A similar need for separation exists for internet(intranet)-based tools, which reflect the growing interest in network-based solutions. Hence, the D and I groups are proposed in order to underline the significance of mobility and networking. These two groups of digital tools should also be supported in the near future by the use of 5G networks. The last group, software (S), concerns computer software produced for manufacturing environments. It is also possible to assign particular solutions (e.g. computer programs) to more than one group (e.g. a program can be assigned to both software and internet-based tools). The main role of the tools allocated to the separate groups is to support employees, managers, and customers of manufacturing firms involved in the abovementioned production phases. Digital tools are being developed to increase production efficiency and the quality of manufactured products, and to accelerate the innovation process as well as improve comfort of work. Nowadays, digital also means mobile.
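Since a tool may belong to more than one of the four groups, the classification is naturally modelled as a mapping from tools to sets of group labels. The sketch below illustrates this; the tool names are illustrative examples, not taken from the cited paper:

```python
# Hypothetical sketch of the four digital-tool groups (MTE, D, I, S)
# described above. Each tool maps to a SET of group labels, so that a
# single solution (e.g. a cloud CAM service) can belong to both the
# internet-based (I) and software (S) groups. Tool names are examples.

DIGITAL_TOOL_GROUPS = {
    "CNC milling machine": {"MTE"},
    "coordinate measuring machine": {"MTE"},
    "tablet with inspection app": {"D"},
    "smartphone": {"D"},
    "cloud CAM service": {"I", "S"},  # assigned to two groups
    "CAD package": {"S"},
}

def tools_in_group(group: str) -> list[str]:
    """Return all tools assigned to the given group label, sorted by name."""
    return sorted(t for t, groups in DIGITAL_TOOL_GROUPS.items() if group in groups)

print(tools_in_group("S"))  # ['CAD package', 'cloud CAM service']
```

Using sets rather than a single label per tool directly captures the multi-group assignment the text allows for.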
Universities (especially technical universities), which are focused on higher education and research, have been continuously developing their teaching programmes since the beginning of the Industry 3.0 era. They need to prepare their alumni for the changing environments of manufacturing enterprises and for new challenges such as the Industry 4.0 era, digitalization, networking, remote work, etc. Most teaching environments nowadays, especially those in the manufacturing engineering area, are equipped with many digital tools and face various challenges regarding the adaptation, maintenance, and final usage of these tools. The application of these tools in teaching needs space, staff, and supporting infrastructure. Universities adapt their equipment and infrastructure to the local or national needs of enterprises, and the teaching content usually focuses on currently used technologies. Furthermore, research activities support the teaching process through newly developed innovations.
Figure 1.2 presents how different digital tools are used in teaching environments. Teaching environments are divided into four groups: lecture rooms, computer laboratories, manufacturing laboratories, and industrial environments. The first three groups are characteristic of university infrastructure, whilst the fourth serves the internships of students or researchers. Nowadays, lecture rooms are mainly used for lectures and presentations, which require direct communication and interaction between teachers and students. However, this teaching method could also be replaced by remote teaching (e.g. via e-learning platforms or internet communicators). Unfortunately, remote teaching leads to limited interaction between people; nonverbal communication is hence limited. Computer laboratories (CLs) usually gather students who solve different problems by means of software. Most CLs enable teachers to display instructions using projectors. Physical gathering in one room enables verbal and nonverbal communication between teachers and students. Manufacturing laboratories are usually used as demonstrators of real industrial environments. They are also ideal places for performing experiments and building proficiency in using the infrastructure. The role of manufacturing labs can be divided as follows:
• places which demonstrate real industrial environments,
• research sites where new ideas can be developed, improved, and tested.
The industrial environment has a crucial role in teaching. It enables an enriched student experience by providing real industrial challenges and problems.
Foveation for 3D visualization and stereo imaging
Even though computer vision and digital photogrammetry share a number of goals, techniques, and methods, the potential for cooperation between these fields is not fully exploited. In an attempt to help bridge the two, this work takes a well-known computer vision and image processing technique called foveation and introduces it to photogrammetry, creating a hybrid application. The results may benefit both fields, as well as the general stereo imaging community and virtual reality applications.
Foveation is a biologically motivated image compression method that is often used for transmitting videos and images over networks. It is possible to view foveation as an area of interest management method as well as a compression technique. While the most common foveation applications are in 2D there are a number of binocular approaches as well.
For this research, the current state of the art in the literature on level of detail, the human visual system, stereoscopic perception, stereoscopic displays, 2D and 3D foveation, and digital photogrammetry was reviewed. After the review, a stereo-foveation model was constructed and an implementation was realized as a proof of concept. The conceptual approach is treated as generic, while the implementation was conducted under certain limitations, which are documented in the relevant context.
A stand-alone program called Foveaglyph was created in the implementation process. Foveaglyph takes a stereo pair as input and uses an image matching algorithm to find the parallax values. It then calculates the 3D coordinates for each pixel from the geometric relationships between the object and the camera configuration, or via a parallax function. Once 3D coordinates are obtained, a 3D image pyramid is created. Then, using a distance-dependent level of detail function, spherical volume rings with varying resolutions throughout the 3D space are created. The user determines the area of interest. The result of the application is a user-controlled, highly compressed, non-uniform 3D anaglyph image. 2D foveation is also provided as an option.
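The two core steps of this pipeline, recovering depth from parallax and assigning a pyramid level by distance from the fixation point, can be sketched as follows. This is an illustrative reconstruction, not the actual Foveaglyph code: the focal length, baseline, ring width, and the simple linear ring function are all assumptions.

```python
import math

# Illustrative sketch of depth-from-parallax for a parallel-axis stereo
# pair and of a distance-dependent level-of-detail function that places
# spherical rings of decreasing resolution around the fixation point.
# All constants below are assumed values for demonstration.

FOCAL_LENGTH_PX = 800.0  # camera focal length in pixels (assumed)
BASELINE_M = 0.1         # stereo baseline in metres (assumed)

def depth_from_parallax(parallax_px: float) -> float:
    """Depth Z = f * B / d, the standard relation for a parallel stereo pair."""
    return FOCAL_LENGTH_PX * BASELINE_M / parallax_px

def pyramid_level(point, fixation, ring_width=0.5, max_level=4):
    """Map Euclidean distance from the fixation point to a pyramid level:
    level 0 (full resolution) inside the innermost spherical ring,
    progressively coarser levels in each successive ring."""
    dist = math.dist(point, fixation)
    return min(int(dist // ring_width), max_level)

z = depth_from_parallax(40.0)  # 800 * 0.1 / 40 = 2.0 m
level = pyramid_level((0.3, 0.0, z), (0.0, 0.0, 2.0))  # inside ring 0
```

In the real program the level selection drives which layer of the 3D image pyramid each pixel is rendered from, yielding the non-uniform anaglyph described above.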
This type of development in a photogrammetric visualization unit is beneficial for system performance. The research is particularly relevant for large displays and head-mounted displays, although the implementation, because it is done for a single user, would probably be best suited to a head-mounted display (HMD) application.
The resulting stereo-foveated image can be loaded moderately faster than the uniform original. Therefore, the program can potentially be adapted to an active vision system and manage the scene as the user glances around, given that an eye tracker determines where exactly the eyes accommodate. This exploration may also be extended to robotics and other robot vision applications. Additionally, it can also be used for attention management and the viewer can be directed to the object(s) of interest the demonstrator would like to present (e.g. in 3D cinema).
Based on the literature, we also believe this approach should help resolve several problems associated with stereoscopic displays, such as the accommodation-convergence problem and diplopia. While the available literature provides some empirical evidence to support the usability and benefits of stereo foveation, further tests are needed. User surveys related to the human factors of using stereo-foveated images, such as their possible contribution to preventing user discomfort and virtual simulator sickness (VSS) in virtual environments, are left as future work.