Impact of Imaging and Distance Perception in VR Immersive Visual Experience
Virtual reality (VR) headsets have evolved to include unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened the door to new applications and a much wider audience. VR headsets can now provide users with greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, the spread of immersive technologies has shown a slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor.
In parallel with the evolution of VR headsets, 360° cameras have evolved as well; they are now capable of instantly acquiring photographs and videos in stereoscopic 3D (S3D) at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through the natural rotation of the head. Acquired views can even be experienced and navigated from the inside as they are captured.
The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations. We call this photo-based VR. It represents a new methodology that combines traditional model-based rendering with high-quality omnidirectional texture-mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels, and operator training.
The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphical visual experience, which is then used as a reference to develop and evaluate new photo-based VR solutions. With the current literature on the photo-based VR experience and associated user performance being very limited, this study builds new knowledge from the proposed assessments.
We conduct five user studies on a few representative applications, examining how visual representations can be affected by system factors (camera and display related) and how they can influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, to support which we develop targeted solutions for photo-based VR. They are intended to provide users with a correct perception of space dimensions and object sizes. We call this true-dimensional visualization.
The presented work contributes to unexplored fields including photo-based VR and true-dimensional visualization, offering immersive system designers a thorough comprehension of the benefits, potential, and type of applications in which these new methods can make the difference.
This thesis manuscript and its findings have been partly presented in scientific publications. In particular, five conference papers in Springer and IEEE symposia proceedings [1], [2], [3], [4], [5], and one journal article in an IEEE periodical [6] have been published.
The influence of CEO leadership on organizational learning in internationalizing high-tech companies in China
This research explores how CEO leadership affects the learning process of internationalizing high-tech companies. There has been a growing recognition of the role of leadership in the international learning process. For example, scholars have discussed the influence of several factors, such as leaders' cognition, decision-making style, and entrepreneurship, on the international learning process. Moreover, CEO leadership has been treated as an important factor that can affect a company's organizational learning. However, very few studies have discussed the role of leadership in the organizational learning process of companies' internationalization. Based on a review of existing research gaps in the role of leadership in the organizational and international learning literature, this research seeks to gain rich insights into how leadership influences organizational learning during high-tech companies' internationalization in the Chinese context. This research focused on two common leadership styles in China: authoritarian leadership and empowering leadership. These two leadership styles can be explained through traditional Chinese philosophy, and, viewed through the lens of power, authoritarian and empowering leadership deserve to be compared.
This research adopts a qualitative approach based on 8 case studies of Chinese high-tech internationalizing companies. Semi-structured interviews with the CEO and at least two senior managers were carried out in each case. This research contributes to the international learning process literature. CEO leadership is proposed as a key factor that can influence each construct associated with the international learning process and cause different international learning processes. This research also contributes to both the leadership and internationalization literature as it uses organizational learning as a bridge linking leadership and internationalization. Different leadership styles could cause different internationalization outcomes from performance and management perspectives due to different international learning processes. Moreover, CEO leadership could change during companies' internationalization process.
Kinetic energy fluctuation-driven locomotor transitions on potential energy landscapes of beam obstacle traversal and self-righting
Despite contending with constraints imposed by the environment, morphology, and physiology, animals move well by physically interacting with the environment to use and transition between modes such as running, climbing, and self-righting. By contrast, robots struggle to do so in the real world. Understanding the principles of how locomotor transitions emerge from constrained physical interaction is necessary for robots to move robustly using similar strategies. Recent studies discovered that discoid cockroaches use and transition between diverse locomotor modes to traverse beams and self-right on the ground. For both systems, animals probabilistically transitioned between modes via multiple pathways, while their self-propulsion created kinetic energy fluctuations. Here, we seek mechanistic explanations for these observations by adopting a physics-based approach that integrates biological and robotic studies.
We discovered that animal and robot locomotor transitions during beam obstacle traversal and ground self-righting are barrier-crossing transitions on potential energy landscapes. Whereas animals and the robot traversed stiff beams by rolling their body between the beams, they pushed across flimsy beams, suggesting a concept of terradynamic favorability whereby modes with easier physical interaction are more likely to occur. Robotic beam traversal revealed that the system state either remains in a favorable mode or transitions to one when the energy fluctuation is comparable to the transition barrier. Robotic self-righting transitions occurred similarly and revealed that changing system parameters lowers the barriers over which comparable fluctuation can induce transitions. The transitions of animals in both systems mostly occurred similarly, but sensory feedback may facilitate their beam traversal. Finally, we developed a method to measure animal movement across large spatiotemporal scales on a terrain treadmill.
Comment: arXiv admin note: substantial text overlap with arXiv:2006.1271
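The barrier-crossing picture above can be caricatured in a few lines of code: a mode transition happens when a stochastic kinetic-energy fluctuation is comparable to or exceeds the potential-energy barrier separating modes. This toy model (dimensionless energies, exponentially distributed fluctuations) is an illustrative assumption, not the paper's landscape model.

```python
import random

# Toy sketch of barrier-crossing transitions on a potential energy
# landscape: the system escapes its current locomotor mode only when a
# randomly drawn kinetic-energy fluctuation reaches the barrier height.
# Fluctuations are assumed exponentially distributed (illustrative).

def transition_probability(barrier, mean_fluctuation, trials=10_000, seed=1):
    rng = random.Random(seed)
    crossings = sum(
        1 for _ in range(trials)
        if rng.expovariate(1.0 / mean_fluctuation) >= barrier
    )
    return crossings / trials

# Lowering the barrier (e.g. by changing system parameters, as in the
# robotic self-righting experiments) makes transitions far more likely.
p_high_barrier = transition_probability(barrier=5.0, mean_fluctuation=1.0)
p_low_barrier = transition_probability(barrier=1.0, mean_fluctuation=1.0)
```

With a barrier five times the mean fluctuation, crossings are rare; with the barrier lowered to the fluctuation scale, they become common, mirroring the qualitative effect described above.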
Introduction to Drone Detection Radar with Emphasis on Automatic Target Recognition (ATR) technology
This paper discusses the challenges of detecting and categorizing small drones with radar automatic target recognition (ATR) technology. The authors suggest integrating ATR capabilities into drone detection radar systems to improve performance and manage emerging threats. The study focuses primarily on drones in Groups 1 and 2. The paper highlights the need to consider kinetic features and signal signatures, such as micro-Doppler, in ATR techniques to efficiently recognize small drones. The authors also present a comprehensive drone detection radar system design that balances detection and tracking requirements, incorporating parameter adjustment based on scattering region theory. They offer an example of a performance improvement achieved using feedback and situational awareness mechanisms with the integrated ATR capabilities. Furthermore, the paper examines challenges related to one-way attack drones and explores the potential of cognitive radar as a solution. The integration of ATR capabilities transforms a 3D radar system into a 4D radar system, resulting in improved drone detection performance. These advancements are useful in military, civilian, and commercial applications, and ongoing research and development efforts are essential to keep radar systems effective and ready to detect, track, and respond to emerging threats.
Comment: 17 pages, 14 figures, submitted to a journal and under review
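The micro-Doppler signature mentioned above can be sketched numerically: rotating rotor blades phase-modulate the radar return, producing sidebands around the body Doppler line that an ATR stage can exploit. All parameters below (PRF, Doppler shifts, modulation index) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative micro-Doppler sketch: body return plus a weaker rotor
# return whose phase is sinusoidally modulated by blade rotation.

fs = 2000.0                       # pulse repetition frequency, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)   # one second of slow-time samples
f_body = 100.0                    # bulk Doppler of the drone body, Hz
f_rot = 40.0                      # rotor rotation rate, Hz
beta = 3.0                        # modulation index from blade-tip velocity

signal = (np.exp(1j * 2 * np.pi * f_body * t)
          + 0.5 * np.exp(1j * (2 * np.pi * f_body * t
                               + beta * np.sin(2 * np.pi * f_rot * t))))

spectrum = np.abs(np.fft.fft(signal))[: len(t) // 2]
freqs = np.fft.fftfreq(len(t), 1.0 / fs)[: len(t) // 2]

# The strongest line stays at the body Doppler; sidebands appear at
# f_body +/- k * f_rot -- the signature that separates a rotor-equipped
# drone from, say, a bird with a similar bulk velocity.
peak = freqs[np.argmax(spectrum)]
```

In this sketch the dominant line remains at 100 Hz with sidebands spaced at multiples of 40 Hz; a real ATR pipeline would typically use a short-time transform to track how these sidebands evolve over time.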
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions of each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
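The PCR5 rule mentioned above can be illustrated for the simplest case of two sources on a dichotomous frame {A, B}. This is a minimal sketch following the standard published PCR5 formula, not the Matlab code distributed with the book; the input mass values are invented for the example.

```python
# Minimal PCR5 combination of two basic belief assignments over a
# dichotomous frame with focal elements A, B, and A∪B (key 'AB').

def pcr5(m1, m2):
    """Combine two basic belief assignments given as dicts over 'A', 'B', 'AB'."""
    # Conjunctive consensus on the non-conflicting intersections.
    mA = m1['A'] * m2['A'] + m1['A'] * m2['AB'] + m1['AB'] * m2['A']
    mB = m1['B'] * m2['B'] + m1['B'] * m2['AB'] + m1['AB'] * m2['B']
    mAB = m1['AB'] * m2['AB']
    # Redistribute each partial conflict proportionally back to the
    # masses that generated it -- the defining feature of PCR5.
    for x, y in ((m1['A'], m2['B']), (m2['A'], m1['B'])):
        if x + y > 0:
            mA += x * x * y / (x + y)
            mB += y * y * x / (x + y)
    return {'A': mA, 'B': mB, 'AB': mAB}

m1 = {'A': 0.6, 'B': 0.3, 'AB': 0.1}
m2 = {'A': 0.2, 'B': 0.5, 'AB': 0.3}
fused = pcr5(m1, m2)
# The redistributed masses still sum to 1; unlike Dempster's rule, no
# renormalization by the total conflict is needed.
```

Because the conflict is redistributed rather than discarded, PCR5 stays well-behaved even when the two sources are highly conflicting, which is where Dempster's rule famously misbehaves.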
Because more applications of DSmT have emerged in the past years since the appearance of the fourth DSmT book in 2015, the second part of this volume is about selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and a network for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions are related to decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
Multifunctional Graphene–Polymer Nanocomposite Sensors Formed by One-Step In Situ Shear Exfoliation of Graphite
Graphene nanocomposites are a promising class of advanced materials for sensing applications; yet, their commercialization is hindered by impurity incorporation during fabrication and high costs. The aim of this work is to prepare graphene–polysulfone (G−PSU) and graphene–polyvinylidene fluoride (G−PVDF) nanocomposites that perform as multifunctional sensors and are formed using a one-step, in situ exfoliation process whereby graphite is exfoliated into graphene nanoflakes (GNFs) directly within the polymer. This low-cost method creates a nanocomposite while avoiding impurity exposure since the raw materials used in the in situ shear exfoliation process are graphite and polymers. The morphology, structure, thermal properties, and flexural properties were determined for G−PSU and G−PVDF nanocomposites, as well as the electromechanical sensor capability during cyclic flexural loading, temperature sensor response while heating and cooling, and electrochemical sensor capability to detect dopamine while sensing data wirelessly. G−PSU and G−PVDF nanocomposites show superior mechanical characteristics (gauge factor around 27 and significantly enhanced modulus), thermal characteristics (stability up to 500 °C and 170 °C for G−PSU and G−PVDF, respectively), electrical characteristics (0.1 S/m and 1 S/m conductivity for G−PSU and G−PVDF, respectively), and distinguished resonant peaks for wireless sensing (~212 MHz and ~429 MHz). These uniquely formed G−PMC nanocomposites are promising candidates as strain sensors for structural health monitoring, as temperature sensors for use in automobiles and aerospace applications, and as electrochemical sensors for health care and disease diagnostics.
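The reported gauge factor of around 27 is defined as the relative resistance change per unit strain, GF = (ΔR/R₀)/ε. A quick sketch of the arithmetic; the resistance and strain values below are illustrative assumptions, not measured data from this work.

```python
# Gauge factor of a piezoresistive strain sensor from a single load
# point: GF = (delta_R / R0) / strain. Values below are illustrative.

def gauge_factor(r0, r_strained, strain):
    """Piezoresistive gauge factor from one unstrained/strained resistance pair."""
    return (r_strained - r0) / r0 / strain

# A 2.7 % resistance rise at 0.1 % flexural strain gives GF = 27,
# roughly an order of magnitude above typical metal-foil strain gauges.
gf = gauge_factor(r0=1000.0, r_strained=1027.0, strain=0.001)
```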
Coarse-grained Localization of In-body Energy-harvesting Nanonodes
Nanoscale devices with Terahertz (THz) wireless communication capabilities are envisioned for sensing- and actuation-based applications within the human bloodstream. These devices detect biomarkers, enable targeted drug delivery, and improve precision diagnostics. Flow-guided nanoscale localization utilizes THz-based communication between nanonodes and anchors. This approach is envisaged to accurately locate the regions where events occur by using the nanodevice's circulation duration in the bloodstream. This enables precise identification of disease biomarkers, viruses, and bacteria, facilitating targeted intervention and early detection of health conditions. To avoid the pitfalls encountered in benchmarking and standardizing traditional indoor localization, this work presents a workflow for standardized performance evaluation of flow-guided nanoscale localization. The workflow is implemented in the form of an open-source simulator that accounts for nanodevice mobility, in-body THz communication with on-body anchors, and energy-related constraints. The simulator is able to generate raw data that can be used to streamline different flow-guided localization solutions and establish standardized performance benchmarks. The evaluation is performed in the form of a design space exploration.
The results indicate that the proposed workflow and simulator can be utilized to capture the performance of flow-guided localization approaches in a way that allows objective comparison with other approaches, thus serving as the foundation for standardized evaluation of future solutions.
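The circulation-time principle behind flow-guided localization can be sketched simply: an on-body anchor measures the elapsed time between two consecutive passes of a nanonode and matches it against region-specific loop durations. This is an illustrative sketch, not code from the open-source simulator; the region names, loop times, and noise model below are invented for the example.

```python
import random

# Coarse-grained flow-guided localization sketch: classify which body
# region a nanonode traversed from its measured loop time. Mean loop
# times per region (in seconds) are assumed values for illustration.

REGION_LOOP_TIMES = {'head': 20.0, 'torso': 35.0, 'legs': 60.0}

def infer_region(measured_time):
    """Pick the region whose nominal circulation time is closest."""
    return min(REGION_LOOP_TIMES,
               key=lambda region: abs(REGION_LOOP_TIMES[region] - measured_time))

rng = random.Random(42)
true_region = 'torso'
# One noisy loop-time measurement (Gaussian jitter is an assumption).
measured = rng.gauss(REGION_LOOP_TIMES[true_region], 3.0)
estimate = infer_region(measured)
```

A nearest-prototype classifier like this degrades gracefully as loop-time noise grows relative to the spacing between regional durations, which is the kind of trade-off a design-space exploration can quantify.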
Contributions to improve the technologies supporting unmanned aircraft operations
International Mention in the doctoral degree. Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. The perception of the environment is measured by sensors that have errors; the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows extending the limits of deterministic models to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as that of new technologies that may emerge.
Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system, which provides the engines' propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller includes a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the controller's ecosystem, so the study of each of them is essential.
On the other hand, among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered and estimated in the absence of observations. State Space Models (SSMs) are developed based on a set of hypotheses for modeling the world. Among the assumptions are that the models of the world must be linear and Markovian, and that the error of these models must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to be able to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state; there are other dependencies that state-space models cannot handle.
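The linear-Gaussian assumptions just described can be made concrete with a minimal one-dimensional Kalman filter. This is an illustrative sketch, not code from the thesis; the process noise `q`, measurement noise `r`, and the constant-state scenario are assumptions made for the example.

```python
import numpy as np

# Minimal 1-D Kalman filter: identity (linear, Markovian) dynamics with
# Gaussian process and measurement noise, tracking a constant state.

def kalman_1d(zs, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Filter noisy scalar measurements zs of a (near-)constant state."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Predict: identity dynamics; process noise q inflates uncertainty.
        p = p + q
        # Update: the Kalman gain weighs the prior against measurement noise r.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

rng = np.random.default_rng(0)
truth = 5.0
zs = truth + rng.normal(0.0, 0.7, size=200)
est = kalman_1d(zs)
# The final estimate ends up much closer to `truth` than a single raw
# measurement typically is, because the gain averages out the noise.
```

When the real system violates these hypotheses (nonlinear dynamics, non-Gaussian noise, non-Markovian dependencies), the filter's optimality guarantees break down, which is precisely the gap the deep-learning estimators studied in this thesis aim to fill.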
This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer-vision-based precision landing system is studied; then estimation and filtering problems are addressed from the deep learning approach. Finally, classification concepts with deep learning over trajectories are studied. The first case of the collection studies the consequences of error propagation in a machine-vision-based precision landing system, and proposes a set of strategies to reduce the impact on the guidance system and, ultimately, the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where the error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data. This work covers the two main fields in deep learning, regression and classification, where the error is considered as a probability function of class membership.
I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated with the project TEC2017-88048-C2-2-R, which provided me with the opportunity to carry out all my PhD activities, including completing an international research internship.
Doctoral Program in Computer Science and Technology, Universidad Carlos III de Madrid. President: Antonio Berlanga de Jesús. Secretary: Daniel Arias Medina. Member: Alejandro Martínez Cav