Towards the simulation of cooperative perception applications by leveraging distributed sensing infrastructures
With the rapid development of Automated Vehicles (AVs), the boundaries of their functionalities are being pushed and new challenges are emerging. In increasingly complex
and dynamic environments, it is fundamental to rely on more powerful onboard sensors and
often AI. However, this approach has limitations. As AVs are increasingly
integrated into several industries, expectations regarding their cooperation abilities are growing,
and vehicle-centric approaches to sensing and reasoning become hard to integrate. The
proposed approach is to extend perception to the environment, i.e. outside of the vehicle,
by making it smarter, via the deployment of wireless sensors and actuators. This will vastly
improve perception capabilities in dynamic and unpredictable scenarios, often at lower
cost, relying mostly on low-cost sensors and embedded devices, whose strength lies in
large-scale deployment rather than centralized sensing abilities. Consequently, to support
the development and deployment of such cooperation actions in a seamless way, we require
co-simulation frameworks that can encompass multiple perspectives of control
and communications for the AVs, the wireless sensors and actuators, and other actors in the
environment. In this work, we rely on ROS2 and micro-ROS as the underlying technologies
for integrating several simulation tools to construct a framework capable of supporting the
development, testing, and validation of such smart, cooperative environments. This endeavor
was undertaken by building upon an existing simulation framework known as AuNa. We
extended its capabilities to facilitate the simulation of cooperative scenarios by incorporating external sensors placed within the environment rather than just relying on vehicle-based
sensors. Moreover, we devised a cooperative perception approach within this framework,
showcasing its substantial potential and effectiveness. This will enable the demonstration of
multiple cooperation scenarios and also ease the deployment phase by relying on the same
software architecture.
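The core cooperative-perception step the abstract describes, combining detections from roadside sensors with vehicle-based ones, can be sketched as transforming each detection into a shared world frame and merging nearby reports. This is an illustrative sketch only: the function names, the pose representation, and the naive distance-based fusion rule are assumptions, not the framework's actual method.

```python
import math

def to_world(sensor_pose, detection):
    """Transform a detection from a sensor's local frame into the shared
    world frame. sensor_pose = (x, y, heading in radians); detection =
    (forward, left) offsets in the sensor's own frame."""
    x, y, th = sensor_pose
    dx, dy = detection
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th))

def fuse(world_points, radius=1.0):
    """Naively merge detections of the same object reported by different
    sensors: points closer than `radius` are averaged into one object."""
    fused = []  # list of (point, number of detections merged into it)
    for p in world_points:
        for i, (q, n) in enumerate(fused):
            if math.dist(p, q) <= radius:
                fused[i] = (((q[0] * n + p[0]) / (n + 1),
                             (q[1] * n + p[1]) / (n + 1)), n + 1)
                break
        else:
            fused.append((p, 1))
    return [p for p, _ in fused]

# A vehicle at the origin and a roadside sensor facing it both detect the
# same object 5 m ahead of themselves; fusion yields a single object.
vehicle_view = to_world((0.0, 0.0, 0.0), (5.0, 0.0))
roadside_view = to_world((10.0, 0.0, math.pi), (5.0, 0.0))
objects = fuse([vehicle_view, roadside_view])
```

In a ROS2 setting, each sensor node would publish its detections and a fusion node would subscribe and apply this kind of merge; the sketch above only captures the geometry.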
Technological framework for ubiquitous interactions using context–aware mobile devices
This report presents research and development of a dedicated system architecture, designed to enable its users to interact with each other as well as to access information on Points of Interest that exist in their immediate environment. This is accomplished through managing personal preferences and contextual information in a distributed manner and in real-time. The advantage of this system architecture is that it uses mobile devices, heterogeneous sensors and a selection of user interface paradigms to produce a sociotechnical framework to enhance the perception of the environment and promote intuitive interactions. The thrust of the work has been on software development and component integration. Iterative prototyping was adopted as a development method in order to effectively implement the users' feedback and establish a platform for collaboration that closely meets the requirements and aids their decision-making process. The requirement acquisition was followed by the system-modelling phase in order to produce a robust software prototype. The implementation includes component-based development and extensive use of design patterns over native programming. Ultimately, the software product has become the means to evaluate differences in the use of mixed reality technologies in a ubiquitous scenario.
The prototype can query a number of context sources, such as sensors or details of the personal profile, to acquire relevant data. The data (and metadata) are stored in open-source structures, so that they are accessible at every layer of the system architecture and at any time. By proactively processing the acquired context, the system can assist the users in their tasks (e.g. navigation) without explicit input – e.g. by simply creating a gesture with the device. However, advanced interaction with the application via the user interface is available for requests that are more complex.
Representations of the real world objects, their spatial relations and other captured features of interest are visualised on scalable interfaces, ranging from 2D to 3D models and from photorealism to stylised clues and symbols. Two principal modes of operation have been implemented; one, using geo-referenced virtual reality models of the environment, updated in real time, and second, using the overlay of descriptive annotations and graphics on the video images of the surroundings, captured by a video camera. The latter is referred to as augmented reality.
The continuous feed of device position and orientation data from the GPS receiver and the digital compass into the application makes the framework fit for use in unknown environments and therefore suitable for ubiquitous operation. This is one of the novelties of the proposed framework, because it enables a whole range of social, peer-to-peer interactions to take place. The scenarios of how the system could be employed to pursue these remote interactions and collaborative efforts on mobile devices are addressed in the context of urban navigation. The conceptual design and implementation of the novel location and orientation based algorithm for mobile AR are presented in detail. The system is, however, multifaceted and capable of supporting peer-to-peer exchange of information in a pervasive fashion, usable in various contexts. The modalities of these interactions are explored and laid out in several scenarios, but particularly in the context of user adoption. Two evaluation tasks took place. The preliminary evaluation examined certain aspects that influence user interaction while being immersed in a virtual environment, whereas the second summative evaluation compared the utility and certain usability aspects of the AR and VR interfaces.
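The location-and-orientation-based algorithm for mobile AR described above can be illustrated with a small sketch: given a GPS position and a compass heading, decide which Points of Interest fall inside the camera's field of view. All names, the planar coordinate model, and the default field of view are illustrative assumptions, not the thesis's actual implementation.

```python
import math

def bearing_deg(origin, target):
    """Compass bearing (0 deg = north, clockwise) from one (x, y) point to
    another, treating y as the north axis."""
    dx, dy = target[0] - origin[0], target[1] - origin[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def pois_in_view(position, heading_deg, pois, fov_deg=60.0):
    """Return the POIs inside the device camera's field of view, given the
    GPS position and the digital-compass heading."""
    hits = []
    for name, location in pois.items():
        # signed angular difference folded into (-180, 180]
        diff = (bearing_deg(position, location) - heading_deg + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2:
            hits.append(name)
    return hits
```

An AR overlay would then draw annotations only for the returned POIs, positioned according to their angular offset `diff` within the camera image.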
Engineering Self-Adaptive Collective Processes for Cyber-Physical Ecosystems
The pervasiveness of computing and networking is creating significant opportunities for building valuable socio-technical systems. However, the scale, density, heterogeneity, interdependence, and QoS constraints of many target systems pose severe operational and engineering challenges. Beyond individual smart devices, cyber-physical collectives can provide services or solve complex problems by leveraging a “system effect” while coordinating and adapting to context or environment change. Understanding and building systems exhibiting collective intelligence and autonomic capabilities represent a prominent research goal, partly covered, e.g., by the field of collective adaptive systems. Therefore, drawing inspiration from and building on the long-time research activity on coordination, multi-agent systems, autonomic/self-* systems, spatial computing, and especially on the recent aggregate computing paradigm, this thesis investigates concepts, methods, and tools for the engineering of possibly large-scale, heterogeneous ensembles of situated components that should be able to operate, adapt and self-organise in a decentralised fashion. The primary contribution of this thesis consists of four main parts. First, we define and implement an aggregate programming language (ScaFi), internal to the mainstream Scala programming language, for describing collective adaptive behaviour, based on field calculi. Second, we conceive of a “dynamic collective computation” abstraction, also called aggregate process, formalised by an extension to the field calculus, and implemented in ScaFi. Third, we characterise and provide a proof-of-concept implementation of a middleware for aggregate computing that enables the development of aggregate systems according to multiple architectural styles. 
Fourth, we apply and evaluate aggregate computing techniques to edge computing scenarios, and characterise a design pattern, called Self-organising Coordination Regions (SCR), that supports adjustable, decentralised decision-making and activity in dynamic environments.
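The self-organising, neighbour-only computation at the heart of aggregate computing can be illustrated with its canonical building block, the gradient field, in which every node estimates its distance to the nearest source purely through repeated local exchanges. This sketch is in Python for illustration rather than ScaFi's Scala API, and the synchronous-rounds model is a simplifying assumption.

```python
def gradient(nodes, edges, sources, rounds):
    """Self-stabilising 'gradient' field: each node repeatedly takes the
    minimum of (neighbour estimate + edge weight), sources stay at zero.
    After enough rounds the field converges to shortest-path distances."""
    INF = float("inf")
    nbrs = {n: [] for n in nodes}
    for a, b, w in edges:
        nbrs[a].append((b, w))
        nbrs[b].append((a, w))
    dist = {n: (0.0 if n in sources else INF) for n in nodes}
    for _ in range(rounds):
        dist = {n: 0.0 if n in sources
                else min((dist[m] + w for m, w in nbrs[n]), default=INF)
                for n in nodes}
    return dist

# A line of four devices with one source: distances settle to 0, 1, 2, 3.
field = gradient(list("abcd"),
                 [("a", "b", 1.0), ("b", "c", 1.0), ("c", "d", 1.0)],
                 {"a"}, rounds=4)
```

Patterns such as Self-organising Coordination Regions build on fields like this one, e.g. by electing leaders and letting each device follow the gradient toward its nearest leader.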
Mobile Ad hoc networks in the Global system of Interconnected Computer Networks
Computers capable of attaching to the Internet from many places are likely to grow in popularity until they dominate the population of the Internet. Consequently, protocol research has shifted into high gear to develop appropriate network protocols for supporting mobility. This introductory article attempts to outline some of the many promising and interesting research directions. The papers in this special issue indicate the diversity of viewpoints within the research community, and it is part of the purpose of this introduction to frame their place within the overall research area
Applications of Internet of Things
This book introduces the Special Issue entitled “Applications of Internet of Things”, of ISPRS International Journal of Geo-Information. Topics covered in this issue include three main parts: (I) intelligent transportation systems (ITSs), (II) location-based services (LBSs), and (III) sensing techniques and applications. Three papers on ITSs are as follows: (1) “Vehicle positioning and speed estimation based on cellular network signals for urban roads,” by Lai and Kuo; (2) “A method for traffic congestion clustering judgment based on grey relational analysis,” by Zhang et al.; and (3) “Smartphone-based pedestrian’s avoidance behavior recognition towards opportunistic road anomaly detection,” by Ishikawa and Fujinami. Three papers on LBSs are as follows: (1) “A high-efficiency method of mobile positioning based on commercial vehicle operation data,” by Chen et al.; (2) “Efficient location privacy-preserving k-anonymity method based on the credible chain,” by Wang et al.; and (3) “Proximity-based asynchronous messaging platform for location-based Internet of things service,” by Gon Jo et al. Two papers on sensing techniques and applications are as follows: (1) “Detection of electronic anklet wearers’ groupings throughout telematics monitoring,” by Machado et al.; and (2) “Camera coverage estimation based on multistage grid subdivision,” by Wang et al
Ubiquitous Computing
The aim of this book is to give a treatment of the actively developed domain of Ubiquitous computing. Originally proposed by Mark D. Weiser, the concept of Ubiquitous computing enables real-time global sensing, context-aware information retrieval, multi-modal interaction with the user and enhanced visualization capabilities. In effect, Ubiquitous computing environments give fundamentally new, almost futuristic abilities to observe and interact with our habitat at any time and from anywhere. In that domain, researchers are confronted with many foundational, technological and engineering issues which were not known before. Detailed cross-disciplinary coverage of these issues is needed today for further progress and widening of the application range. This book collects twelve original works of researchers from eleven countries, which are clustered into four sections: Foundations, Security and Privacy, Integration and Middleware, Practical Applications
Advanced Location-Based Technologies and Services
Since the publication of the first edition in 2004, advances in mobile devices, positioning sensors, WiFi fingerprinting, and wireless communications, among others, have paved the way for developing new and advanced location-based services (LBSs). This second edition provides up-to-date information on LBSs, including WiFi fingerprinting, mobile computing, geospatial clouds, geospatial data mining, location privacy, and location-based social networking. It also includes new chapters on application areas such as LBSs for public health, indoor navigation, and advertising. In addition, the chapter on remote sensing has been revised to address advancements
A method for viewing and interacting with medical volumes in virtual reality
The medical field has long benefited from advancements in diagnostic imaging technology. Medical images created through methods such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are used by medical professionals to non-intrusively peer into the body to make decisions about surgeries. Over time, the viewing medium of medical images has evolved from X-ray film negatives to stereoscopic 3D displays, with each new development enhancing the viewer’s ability to discern detail or decreasing the time needed to produce and render a body scan. Though doctors and surgeons are trained to view medical images in 2D, some are choosing to view body scans in 3D through volume rendering. While traditional 2D displays can be used to display 3D data, a viewing method that incorporates depth would convey more information to the viewer. One device that has shown promise in medical image viewing applications is the Virtual Reality Head Mounted Display (VR HMD).
VR HMDs have recently increased in popularity, with several commodity devices being released within the last few years. The Oculus Rift, HTC Vive, and Windows Mixed Reality HMDs like the Samsung Odyssey offer higher resolution screens, more accurate motion tracking, and lower prices than earlier HMDs. They also include motion-tracked handheld controllers meant for navigation and interaction in video games. Because of their popularity and low cost, medical volume viewing software that is compatible with these headsets would be accessible to a wide audience. However, the introduction of VR to medical volume rendering presents difficulties in implementing consistent user interactions and ensuring performance.
Though all three headsets require unique driver software, they are compatible with OpenVR, a middleware that standardizes communication between the HMD, the HMD's controllers, and VR software. However, the controllers included with the HMDs each have a slightly different control layout. Furthermore, buttons, triggers, touchpads, and joysticks that share the same hand position between devices do not report values to OpenVR in the same way. Implementing volume rendering functions like clipping and tissue density windowing on VR controllers could improve the user's experience over mouse-and-keyboard schemes through the use of tracked hand and finger movements. To create a control scheme that is compatible with multiple HMDs, a way of mapping controls differently depending on the device was developed.
Additionally, volume rendering is a computationally intensive process, and even more so when rendering for an HMD. By using techniques like GPU raytracing with modern GPUs, real-time framerates are achievable on desktop computers with traditional displays. However, the importance of achieving high framerates is even greater when viewing with a VR HMD due to its higher level of immersion. Because the 3D scene occupies most of the user’s field of view, low or choppy framerates contribute to feelings of motion sickness. This was mitigated through a decrease in volume rendering quality in situations where the framerate drops below acceptable levels.
The volume rendering and VR interaction methods described in this thesis were demonstrated in an application developed for immersive viewing of medical volumes. This application places the user and a medical volume in a 3D VR environment, allowing the user to manually place clipping planes, adjust the tissue density window, and move the volume to achieve different viewing angles with handheld motion tracked controllers. The result shows that GPU raytraced medical volumes can be viewed and interacted with in VR using commodity hardware, and that a control scheme can be mapped to allow the same functions on different HMD controllers despite differences in layout
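The framerate-driven quality reduction described above can be sketched as a simple feedback rule: scale the raymarch step count toward the frame-time budget of a 90 Hz HMD and clamp it to a quality range. The function name, the budget, and the clamp bounds are illustrative assumptions, not the thesis's actual parameters.

```python
def adapt_steps(steps, frame_ms, target_ms=11.1, lo=64, hi=512):
    """Scale the volume-raymarch step count toward the frame-time budget
    (a 90 Hz HMD allows ~11.1 ms per frame), clamped to [lo, hi] so quality
    never collapses entirely and never exceeds a useful maximum."""
    return max(lo, min(hi, int(steps * target_ms / frame_ms)))
```

Called once per frame with the measured frame time, this halves the step count when frames take twice the budget and restores quality when headroom returns, which is what keeps motion sickness at bay in the application described.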
Navigation Recommender:Real-Time iGNSS QoS Prediction for Navigation Services
Global Navigation Satellite Systems (GNSSs), especially Global Positioning System (GPS), have become commonplace in mobile devices and are the most preferred geo-positioning sensors for many location-based applications. Besides GPS, other GNSSs under development or deployment are GLONASS, Galileo, and Compass. These four GNSSs are planned to be integrated in the near future. It is anticipated that integrated GNSSs (iGNSSs) will improve the overall satellite-based geo-positioning performance. However, one major shortcoming of any GNSS and iGNSSs is Quality of Service (QoS) degradation due to signal blockage and attenuation by the surrounding environments, particularly in obstructed areas. GNSS QoS uncertainty is the root cause of positioning ambiguity, poor localization performance, application freeze, and incorrect guidance in navigation applications.
In this research, a methodology, called iGNSS QoS prediction, that can provide GNSS QoS on desired and prospective routes is developed. Six iGNSS QoS parameters suitable for navigation are defined: visibility, availability, accuracy, continuity, reliability, and flexibility. The iGNSS QoS prediction methodology, which includes a set of algorithms, encompasses four modules: segment sampling, point-based iGNSS QoS prediction, tracking-based iGNSS QoS prediction, and iGNSS QoS segmentation. Given that iGNSS QoS prediction is data- and compute-intensive and navigation applications require real-time solutions, an efficient satellite selection algorithm is developed, and distributed computing platforms (mainly grids and clouds) are explored for achieving real-time performance. The proposed methodology is unique in several respects: it specifically addresses the iGNSS positioning requirements of navigation systems/services; it provides a new means for route choices and routing in navigation systems/services; it is suitable for different modes of travel such as driving and walking; it takes high-resolution 3D data into account for GNSS positioning; and it is based on efficient algorithms and can utilize high-performance and scalable computing platforms such as grids and clouds to provide real-time solutions.
A number of experiments were conducted to evaluate the developed methodology and the algorithms using real field test data (GPS coordinates). The experimental results show that the methodology can predict iGNSS QoS in various areas, especially in problematic areas
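One of the QoS parameters named above, availability, can be illustrated with a toy computation: count the satellites above an elevation mask and check whether enough remain for a position fix. The local east/north/up coordinate model, the function names, and the 15-degree default mask are illustrative assumptions, not the dissertation's actual algorithms.

```python
import math

def elevation_deg(receiver, sat):
    """Elevation of a satellite above the local horizon; both positions are
    given in a local east/north/up frame in metres."""
    e, n, u = (s - r for s, r in zip(sat, receiver))
    return math.degrees(math.atan2(u, math.hypot(e, n)))

def availability(receiver, sats, mask_deg=15.0, needed=4):
    """Simplified availability indicator: the fraction of satellites above
    the elevation mask, and whether enough are visible for a fix (>= 4)."""
    visible = sum(1 for s in sats if elevation_deg(receiver, s) >= mask_deg)
    return visible / len(sats), visible >= needed

# Four satellites near zenith and one near the horizon: 80% available,
# and a position fix is possible.
high = (0.0, 0.0, 20_000_000.0)
low = (100_000.0, 0.0, 1_000.0)
frac, fix = availability((0.0, 0.0, 0.0), [high, high, high, high, low])
```

The real methodology additionally accounts for signal blockage by surrounding 3D geometry; this sketch treats the horizon as the only obstruction.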
The Design and Use of a Smartphone Data Collection Tool and Accompanying Configuration Language
Understanding human behaviour is key to understanding the spread of epidemics, habit dispersion, and the efficacy of health interventions. Investigation into the patterns of and drivers for human behaviour has often been facilitated by paper tools such as surveys, journals, and diaries. These tools have drawbacks in that they can be forgotten, go unfilled, and depend on often unreliable human memories. Researcher-driven data collection mechanisms, such as interviews and direct observation, alleviate some of these problems while introducing others, such as bias and observer effects. In response to this, technological means such as special-purpose data collection hardware, wireless sensor networks, and apps for smart devices have been built to collect behavioural data. These technologies further reduce the problems experienced by more traditional behavioural research tools, but often experience problems of reliability, generality, extensibility, and ease of configuration.
This document details the construction of a smartphone-based app designed to collect data on human behaviour such that the difficulties of traditional tools are alleviated while still addressing the problems faced by modern supplemental technology. I describe the app's main data collection engine and its construction, architecture, reliability, generality, and extensibility, as well as the programming language developed to configure it and its feature set. To demonstrate the utility of the tool and its configuration language, I describe how they have been used to collect data in the field. Specifically, eleven case studies are presented in which the tool's architecture, flexibility, generality, extensibility, modularity, and ease of configuration have been exploited to facilitate a variety of behavioural monitoring endeavours. I further explain how the engine performs data collection, the major abstractions it employs, how its design and the development techniques used ensure ongoing reliability, and how the engine and its configuration language could be extended in the future to facilitate a greater range of experiments that require behavioural data to be collected. Finally, features and modules of the engine's encompassing system, iEpi, are presented that have not otherwise been documented to give the reader an understanding of where the work fits into the larger data collection and processing endeavour that spawned it
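The pairing of a data-collection engine with a configuration language can be illustrated with a deliberately tiny sketch: a parser for a hypothetical one-line-per-sensor schedule dialect, plus the scheduling check an engine would run each tick. The syntax, function names, and semantics here are invented for illustration and do not reflect the iEpi configuration language itself.

```python
def parse_schedule(cfg_text):
    """Parse a minimal, hypothetical schedule dialect: one
    '<sensor> <interval-in-seconds>' pair per line; '#' starts a comment."""
    schedule = {}
    for raw in cfg_text.splitlines():
        line = raw.split("#", 1)[0].strip()
        if line:
            sensor, interval = line.split()
            schedule[sensor] = float(interval)
    return schedule

def due_sensors(schedule, now, last_run):
    """Return the sensors whose sampling interval has elapsed since their
    last reading; never-sampled sensors are always due."""
    return [s for s, dt in schedule.items()
            if now - last_run.get(s, float("-inf")) >= dt]
```

Keeping the schedule in a small declarative file, as sketched here, is what lets researchers reconfigure a study without modifying the engine, which is the property the case studies above exploit.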