93 research outputs found
Solid and Effective Upper Limb Segmentation in Egocentric Vision
Upper limb segmentation in egocentric vision is a challenging and nearly unexplored task. It extends the well-known hand localization problem and is crucial for realistically representing users' limbs in immersive and interactive environments, such as VR/MR applications designed for web browsers, a general-purpose solution suitable for any device. Existing hand and arm segmentation approaches require a large amount of well-annotated data; consequently, different annotation techniques were designed and several datasets were created. Such datasets are often limited to synthetic and semi-synthetic data that do not include the whole limb and differ significantly from real data, leading to poor performance in many realistic cases. To overcome the limitations of previous methods and the challenges inherent in both egocentric vision and segmentation, we trained several segmentation networks based on the state-of-the-art DeepLabv3+ model on a large-scale comprehensive dataset we collected. It consists of 46 thousand real-life, well-labeled RGB images with a great variety of skin colors, clothes, occlusions, and lighting conditions. In particular, we carefully selected the best data from existing datasets and added our EgoCam dataset, which contributes new images with accurate labels. Finally, we extensively evaluated the trained networks in unconstrained real-world environments to find the best model configuration for this task, achieving promising results in diverse scenarios. The code, the collected egocentric upper limb segmentation dataset, and a video demo of our work will be available on the project page.
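The abstract does not detail the evaluation protocol; as an illustration, segmentation models of this kind are commonly scored with per-image intersection-over-union (IoU) between the predicted and ground-truth binary masks. A minimal numpy sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two binary masks (1 = limb, 0 = background)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty: define IoU as 1
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)

# toy example: two overlapping square masks
a = np.zeros((4, 4)); a[:2, :2] = 1
b = np.zeros((4, 4)); b[:2, :] = 1
print(iou(a, b))  # 4 overlapping pixels / 8 in the union = 0.5
```

Mean IoU over a test set is then a single summary number per model configuration, which is the kind of quantity the per-scenario comparison above would rest on.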
Human segmentation in surveillance video with deep learning
Advanced intelligent surveillance systems automatically analyze surveillance video without human intervention, enabling accurate human activity recognition and, in turn, high-level activity evaluation. To provide such features, an intelligent surveillance system requires a background subtraction scheme for human segmentation that extracts moving humans from a sequence of images given a reference background image. This paper proposes an alternative approach to human segmentation in videos through the use of a deep convolutional neural network. Two specific datasets were created to train our network, using the shapes of 35 different moving actors arranged on background images of the area where the camera is located, allowing the network to take advantage of the entire site chosen for video surveillance. To assess the proposed approach, we compare our results with the Adobe Photoshop Select Subject tool, the conditional generative adversarial network Pix2Pix, and Yolact, a fully convolutional model for real-time instance segmentation. The results show that the main benefit of our method is the ability to automatically recognize and segment people in videos without constraints on camera and people movements in the scene. (Video, code, and datasets are available at http://graphics.unibas.it/www/HumanSegmentation/index.md.html)
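The paper's network itself is not reproduced here; as a hedged illustration of the classical baseline it replaces, background subtraction segments moving people by thresholding the difference between each frame and the reference background image. A minimal numpy sketch (the threshold value and all names are illustrative, not from the paper):

```python
import numpy as np

def subtract_background(frame: np.ndarray, background: np.ndarray,
                        threshold: int = 30) -> np.ndarray:
    """Return a binary mask of pixels that differ from the reference background.

    frame, background: uint8 grayscale images of identical shape.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# toy example: a bright "person" blob on a dark background
bg = np.zeros((5, 5), dtype=np.uint8)
frame = bg.copy()
frame[1:4, 2] = 200                 # vertical run of foreground pixels
mask = subtract_background(frame, bg)
print(mask.sum())  # 3 foreground pixels detected
```

This scheme fails under camera motion and lighting changes, which is precisely the constraint the learned approach above removes.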
Personalizable edge services for Web accessibility
The Web Content Accessibility Guidelines by the W3C (W3C Recommendation, May 1999. http://www.w3.org/TR/WCAG10/) provide several suggestions to Web designers on how to author Web pages so that they are accessible to everyone. In this context, this paper proposes the use of edge services as an efficient and general solution to promote accessibility and break down the digital barriers that prevent users with disabilities from actively participating in any aspect of society. The idea behind edge services mainly concerns the advantages of personalized navigation, in which contents are tailored according to different factors, such as the capabilities of the client's device, communication systems and network conditions, and, finally, the preferences and/or abilities of the growing number of users that access the Web. To meet these requirements, Web designers have to efficiently provide content adaptation and personalization mechanisms in order to guarantee universal access to Internet content. The so-far dominant paradigm of communication on the WWW, due to its simple request/response model, cannot efficiently address such requirements. Therefore, it must be augmented with new components that enhance the scalability, performance, and ubiquity of the Web. Edge servers, acting on the HTTP data flow exchanged between client and server, allow on-the-fly content adaptation as well as other complex functionalities beyond the traditional caching and content replication services. These value-added services are called edge services and include personalization and customization, aggregation from multiple sources, geographical personalization of page navigation (with insertion/emphasis of content related to the user's geographical location), translation services, group navigation and awareness for social navigation, advanced services for bandwidth optimization such as adaptive compression and format transcoding, mobility, and ubiquitous access to Internet content. This paper presents Personalizable Accessible Navigation (PAN), a set of edge services designed to improve Web page accessibility, developed and deployed on top of a programmable intermediary framework. The characteristics and location of the services, i.e., provided by intermediaries, as well as the personalization and the ability to select multiple profiles, make PAN a platform especially suitable for accessing the Web seamlessly, including from mobile terminals.
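PAN's actual implementation is not shown in the abstract; as a purely hypothetical sketch of the on-the-fly adaptation an edge service performs, an intermediary can apply a per-profile transformation to the HTML flowing between server and client, e.g., enlarging fonts for low-vision users or stripping images for low-bandwidth links. All profile names and transformations below are illustrative, not PAN's:

```python
import re

# Hypothetical illustration (not PAN's code): each user profile maps to a
# transformation applied to HTML passing through the edge server.
PROFILES = {
    "low-vision": lambda html: html.replace(
        "<body>", '<body style="font-size:150%">'),
    "low-bandwidth": lambda html: re.sub(r"<img[^>]*>", "", html),
}

def adapt(html: str, profile: str) -> str:
    """Apply the adaptation registered for the user's profile, if any."""
    transform = PROFILES.get(profile)
    return transform(html) if transform else html

page = '<html><body><img src="big.jpg"><p>Hello</p></body></html>'
print(adapt(page, "low-bandwidth"))  # the <img> tag is removed
```

A real intermediary would plug such transformations into the HTTP proxy pipeline rather than operate on strings in isolation, but the profile-keyed dispatch is the essence of personalizable edge services.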
A Preliminary Investigation into a Deep Learning Implementation for Hand Tracking on Mobile Devices
Hand tracking is an essential component of computer graphics and human-computer interaction applications. Using an RGB camera without dedicated hardware and sensors (e.g., depth cameras) allows solutions to be developed for a plethora of devices and platforms. Although various methods have been proposed, hand tracking from a single RGB camera is still a challenging research area due to occlusions, complex backgrounds, and the variety of hand poses and gestures. We present a mobile application for 2D hand tracking from RGB images captured by the smartphone camera. The images are processed by a deep neural network, modified specifically to tackle this task and run on mobile devices, seeking a compromise between accuracy and computational time. The network output is used to overlay a 2D skeleton on the user's hand. We tested our system in several scenarios, showing interactive hand tracking rates and achieving promising results under variable brightness, varied backgrounds, and small occlusions.
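The abstract does not specify the network's output format; a common design for 2D keypoint networks (our assumption, not necessarily this paper's) is to emit one heatmap per joint and decode each joint as the location of its heatmap's maximum response, from which the 2D skeleton is drawn. A minimal numpy sketch:

```python
import numpy as np

def decode_keypoints(heatmaps: np.ndarray) -> list[tuple[int, int]]:
    """Decode (x, y) joint positions from per-joint heatmaps.

    heatmaps: array of shape (num_joints, H, W); each joint is placed at
    the pixel where its heatmap attains its maximum.
    """
    joints = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        joints.append((int(x), int(y)))
    return joints

# toy example: one 4x4 heatmap peaking at column 2, row 1
hm = np.zeros((1, 4, 4)); hm[0, 1, 2] = 1.0
print(decode_keypoints(hm))  # [(2, 1)]
```

Connecting the decoded joints with the fixed hand topology (wrist to finger bases, finger bases to tips) then yields the on-screen skeleton.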
Virtual Reality Laboratories in Engineering Blended Learning Environments: Challenges and Opportunities
A great number of educational institutions worldwide have had their activities partially or fully interrupted following the outbreak of the COVID-19 pandemic. Consequently, universities have had to take the necessary steps to adapt their teaching, including laboratory workshops, to a fully online or mixed mode of delivery while maintaining their academic standards and providing a high-quality student experience. This transition has required, among other efforts, adequate investments in tools, accessibility, content development, and competences, as well as appropriate training for both teaching and administrative staff. In such a complex scenario, Virtual Reality Laboratories (VRLabs), which have already proved themselves efficient tools supporting traditional practical activities, could well represent a valid alternative in the hybrid didactic mode of the contemporary educational landscape, rethinking the educational proposal in light of the indications coming from the scientific literature in the pedagogical field. In this context, the present work carries out a critical review of the existing virtual labs developed in Engineering departments in the last ten years (2010-2020) and includes a pre-pandemic experience of a VRLab tool, StreamFlowVR, within the Hydraulics course of Basilicata University, Italy. This analysis aims to highlight how ready VRLabs are to be exploited not only in emergency but also in ordinary situations, while fostering an interdisciplinary dialogue between the pedagogical and technological viewpoints, in order to progressively achieve a high-quality and evidence-based educational experience.
Freehand-Steering Locomotion Techniques for Immersive Virtual Environments: A Comparative Evaluation
Virtual reality has achieved significant popularity in recent years, and allowing users to move freely within an immersive virtual world has become a critical requirement. The user's interactions are generally designed to increase the perceived realism, but locomotion techniques, and how they affect the user's task performance, still represent an open issue that is much discussed in the literature. In this article, we evaluate the efficiency and effectiveness of, and user preferences relating to, freehand locomotion techniques designed for an immersive virtual environment, performed through hand gestures tracked by a sensor placed in the egocentric position and experienced through a head-mounted display. Three freehand locomotion techniques have been implemented and compared with each other, and with a baseline technique based on a controller, through qualitative and quantitative measures. An extensive user study conducted with 60 subjects shows that the proposed methods perform comparably to the controller, further revealing the users' preference for decoupling locomotion into sub-tasks, even if this means renouncing precision and adapting the interaction to the capabilities of the tracking sensor.
Snazer: the simulations and networks analyzer
Background: Networks are widely recognized as key determinants of structure and function in systems that span the biological, physical, and social sciences. They are static pictures of the interactions among the components of complex systems. Often, much effort is required to identify networks as part of particular patterns, as well as to visualize and interpret them. From a purely dynamical perspective, simulation represents a relevant way out. Many simulator tools capitalize on the "noisy" behavior of some systems and use formal models to represent cellular activities as temporal trajectories. Statistical methods have been applied to a fairly large number of replicated trajectories in order to infer knowledge. A tool that both graphically manipulates reactive models and deals with sets of simulation time-course data by aggregation, interpretation, and statistical analysis is missing and could add value to simulators. Results: We designed and implemented Snazer, the simulations and networks analyzer. Its goal is to aid the processes of visualizing and manipulating reactive models, as well as to share and interpret time-course data produced by stochastic simulators or by any other means. Conclusions: Snazer is a solid prototype that integrates biological network and simulation time-course data analysis techniques.
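Snazer's statistical pipeline is not detailed in the abstract; as an illustration of the kind of aggregation it describes, the pointwise mean and standard deviation across replicated stochastic time courses summarize a set of noisy trajectories into one representative curve with an uncertainty band. A minimal numpy sketch (names are ours):

```python
import numpy as np

def aggregate_trajectories(runs: np.ndarray):
    """Pointwise mean and standard deviation over replicated time courses.

    runs: array of shape (num_replicates, num_timepoints), one row per
    stochastic simulation run of the same model.
    """
    return runs.mean(axis=0), runs.std(axis=0)

# toy example: three noisy replicates of a rising trajectory
runs = np.array([[0.0, 1.0, 2.0],
                 [0.0, 1.2, 1.8],
                 [0.0, 0.8, 2.2]])
mean, std = aggregate_trajectories(runs)
print(mean)  # [0. 1. 2.]
```

Plotting the mean with a +/- std envelope is the usual way such aggregated time-course data is then visualized.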
Hand-draw sketching for image retrieval through fuzzy clustering techniques
Nowadays, the growth of digital media such as images represents an important issue for multimedia mining applications. Since the traditional information retrieval techniques developed for textual documents do not adequately support these media, new approaches for indexing and retrieving images are needed. In this paper, we propose an approach for retrieving images by hand-drawn object sketches. For this purpose, we address the classification of images based on shape recognition. The classification relies on the combined use of geometrical and moment features extracted from a given collection of images, and achieves shape-based classification through fuzzy clustering techniques. Retrieval is then performed using a hand-drawn shape that becomes a query submitted to the system to obtain ranked similar images.
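The paper's exact feature set and clustering configuration are not given here; as an illustrative sketch of the fuzzy clustering step, fuzzy c-means assigns each feature vector a graded membership in every cluster rather than a hard label. A minimal numpy implementation with the standard fuzzifier m = 2 (all names and parameters are ours, not from the paper):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns (centers, membership matrix U).

    X: (n_samples, n_features). U[i, j] is the degree to which sample i
    belongs to cluster j; each row of U sums to 1.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # normalize memberships
    for _ in range(iters):
        w = U ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.maximum(dist, 1e-12)         # guard against zero distance
        inv = dist ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# toy example: two well-separated 1-D shape-feature groups
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fuzzy_c_means(X)
```

In a retrieval setting, the query sketch's feature vector would be compared against the cluster centers, and images with high membership in the closest cluster returned first.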
Assessing Think-Pair-Square in Distributed Modeling of Use Case Diagram
In this paper, we propose a new method for modeling use case diagrams in the context of global software development. It is based on think-pair-square, a widely used cooperative method for active problem solving. The validity of the developed technology (i.e., the method and its supporting environment) has been assessed through two controlled experiments. In particular, the experiments compared the developed technology with a brainstorming session based on face-to-face interaction, with respect to the time needed to model use case diagrams and the quality of the produced models. The data analysis indicates a significant difference in favor of the brainstorming session for time, with no significant impact on the quality of the requirements specification.