4,361 research outputs found
Intelligent Indoor Parking
Nowadays, positioning-based navigation is an integral part of our everyday routine; it is hard to get around a large city today without a GPS-based navigation system. However, indoor positioning and navigation are still in their infancy, although such services would be desirable in many areas. One obvious application domain is vehicle navigation in a parking garage. An indoor vehicle navigation system is convenient for drivers, decreases unnecessary circling in the garage, and reduces air pollution. In this paper, we introduce our iParking indoor positioning and navigation system, which is under development. Our system monitors the occupancy of the parking lots and, with the aid of a Wi-Fi-based background wireless infrastructure, tracks the position of a vehicle entering the parking garage and navigates the driver to an appropriate free parking lot. Lot selection is handled at the entry point of the garage based on simple preferences, e.g., the closest disabled parking space. The navigation interface is the driver's smartphone. We are currently implementing a prototype of our iParking system in the parking garage of a shopping mall for demonstration purposes.
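The abstract does not detail the positioning algorithm. A common approach for Wi-Fi-based indoor tracking of this kind is RSSI fingerprinting with nearest-neighbor matching against a pre-recorded radio map; the sketch below illustrates that general technique, not the paper's actual method, and all access-point names and signal values are invented for illustration.

```python
import math

# Hypothetical offline radio map: garage position (metres) -> mean RSSI (dBm) per access point.
FINGERPRINTS = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70, "ap3": -65},
    (5.0, 0.0): {"ap1": -55, "ap2": -50, "ap3": -72},
    (0.0, 5.0): {"ap1": -60, "ap2": -75, "ap3": -45},
}

def estimate_position(observed: dict) -> tuple:
    """Return the radio-map position whose stored fingerprint is closest
    (Euclidean distance in RSSI space) to the observed Wi-Fi scan."""
    def dist(fp):
        common = set(fp) & set(observed)
        return math.sqrt(sum((fp[ap] - observed[ap]) ** 2 for ap in common))
    return min(FINGERPRINTS, key=lambda pos: dist(FINGERPRINTS[pos]))

# A scan dominated by ap1 matches the fingerprint recorded at (0, 0).
print(estimate_position({"ap1": -42, "ap2": -68, "ap3": -66}))  # -> (0.0, 0.0)
```

In practice such systems use denser radio maps, averaging over several scans, and k-nearest-neighbor interpolation rather than a single best match.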
Crowd-based cognitive perception of the physical world: Towards the internet of senses
This paper introduces a possible architecture and discusses the research directions for the realization of the Cognitive Perceptual Internet (CPI), which is enabled by the convergence of wired and wireless communications, traditional sensor networks, mobile crowd-sensing, and machine learning techniques. The CPI concept stems from the fact that mobile devices, such as smartphones and wearables, are becoming an outstanding means for zero-effort world-sensing and digitalization thanks to their pervasive diffusion and the increasing number of embedded sensors. Data collected by such devices provide unprecedented insights into the physical world that can be inferred through cognitive processes, thus originating a digital sixth sense. In this paper, we describe how the Internet can behave like a sensing brain, thus evolving into the Internet of Senses, with network-based cognitive perception and action capabilities built upon mobile crowd-sensing mechanisms. The new concept of a hyper-map is envisioned as an efficient geo-referenced repository of knowledge about the physical world. Such knowledge is acquired and augmented through heterogeneous sensors, multi-user cooperation, and distributed learning mechanisms. Furthermore, we indicate the possibility of accommodating proactive sensors, in addition to common reactive sensors such as cameras, antennas, thermometers, and inertial measurement units, by exploiting massive antenna arrays at millimeter waves to enhance mobile terminals' perception capabilities as well as the range of new applications. Finally, we distill some insights about the challenges arising in the realization of the CPI, corroborated by preliminary results, and we depict a futuristic scenario where the proposed Internet of Senses becomes a reality.
Technological framework for ubiquitous interactions using context-aware mobile devices
This report presents the research and development of a dedicated system architecture designed to enable its users to interact with each other as well as to access information on Points of Interest that exist in their immediate environment. This is accomplished through managing personal preferences and contextual information in a distributed manner and in real time. The advantage of this system architecture is that it uses mobile devices, heterogeneous sensors, and a selection of user interface paradigms to produce a sociotechnical framework that enhances the perception of the environment and promotes intuitive interactions. The thrust of the work has been on software development and component integration. Iterative prototyping was adopted as a development method in order to effectively incorporate the users' feedback and establish a platform for collaboration that closely meets the requirements and aids their decision-making process. The requirement acquisition was followed by the system-modelling phase in order to produce a robust software prototype. The implementation includes component-based development and extensive use of design patterns over native programming. In conclusion, the software product has become the means to evaluate differences in the use of mixed reality technologies in a ubiquitous scenario.
The prototype can query a number of context sources, such as sensors or details of the personal profile, to acquire relevant data. The data (and metadata) are stored in open-source structures, so that they are accessible at every layer of the system architecture and at any time. By proactively processing the acquired context, the system can assist the users in their tasks (e.g. navigation) without explicit input, e.g. by simply making a gesture with the device. However, advanced interaction with the application via the user interface is available for requests that are more complex.
Representations of the real-world objects, their spatial relations, and other captured features of interest are visualised on scalable interfaces, ranging from 2D to 3D models and from photorealism to stylised clues and symbols. Two principal modes of operation have been implemented: the first uses geo-referenced virtual reality models of the environment, updated in real time, and the second overlays descriptive annotations and graphics on the video images of the surroundings, captured by a video camera. The latter is referred to as augmented reality.
The continuous feed of the device position and orientation data, from the GPS receiver and the digital compass, into the application, makes the framework fit for use in unknown environments and therefore suitable for ubiquitous operation. This is one of the novelties of the proposed framework, because it enables a whole range of social, peer-to-peer interactions to take place. The scenarios of how the system could be employed to pursue these remote interactions and collaborative efforts on mobile devices are addressed in the context of urban navigation. The conceptual design and implementation of the novel location and orientation based algorithm for mobile AR are presented in detail. The system is, however, multifaceted and capable of supporting peer-to-peer exchange of information in a pervasive fashion, usable in various contexts. The modalities of these interactions are explored and laid out in several scenarios, but particularly in the context of user adoption. Two evaluation tasks took place. The preliminary evaluation examined certain aspects that influence user interaction while being immersed in a virtual environment, whereas the second summative evaluation compared the utility and certain usability aspects of the AR and VR interfaces
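The report does not give the details of its location and orientation based AR algorithm, but the core of any such algorithm is deciding where an annotation falls relative to the camera axis: compute the great-circle bearing from the device (GPS) to the point of interest and subtract the compass heading. A minimal sketch of that standard computation, with illustrative coordinates:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_offset_deg(device_heading, poi_bearing):
    """Signed angle of the POI relative to the camera axis, in (-180, 180];
    an annotation is drawn only when this falls within the camera's field of view."""
    return (poi_bearing - device_heading + 180) % 360 - 180

# POI due east of the device while the device faces north:
# the annotation sits 90 degrees to the right of the camera axis.
b = bearing_deg(51.5, -0.1, 51.5, -0.09)
print(round(screen_offset_deg(0.0, b)))  # -> 90
```

Mapping the signed angle to a pixel column then only requires scaling by the camera's horizontal field of view.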
Constructive 3D Visualization Techniques on Mobile Platforms: An Empirical Analysis
3D visualization on mobile devices follows two approaches: local and remote. Technological advances in mobile devices make it possible to handle some complex data locally and visualize it, but managing real-world entities locally on a mobile device remains challenging. Remote visualization, in which the data come from a server, therefore plays a vital role for 3D visualization on mobile platforms. The remote approach comprises various techniques; this paper presents a critical analysis of these techniques, with a particular focus on network aspects.
A Review of Hybrid Indoor Positioning Systems Employing WLAN Fingerprinting and Image Processing
Location-based services (LBS) are a significant enabling technology. One of the main components of indoor LBS is the indoor positioning system (IPS). An IPS utilizes many existing technologies such as radio frequency, images, acoustic signals, and magnetic, thermal, optical, and other sensors that are usually installed in a mobile device. The radio-frequency technologies used in IPS are WLAN, Bluetooth, ZigBee, RFID, frequency modulation, and ultra-wideband. This paper explores studies that have combined WLAN fingerprinting and image processing to build an IPS. The studies on combined WLAN fingerprinting and image processing techniques are divided based on the methods used. The first part explains studies that have used WLAN fingerprinting to support image-based positioning. The second part examines works that have used image processing to support WLAN fingerprinting positioning. The third part covers studies in which image processing and WLAN fingerprinting are used in combination to build an IPS. Finally, a new concept is proposed for the future development of indoor positioning models based on WLAN fingerprinting and supported by image processing, to mitigate the effect of the presence of people around the user and the user-orientation problem.
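The review does not fix a specific fusion rule for combining the two modalities. One simple, widely used option is an inverse-variance weighted average of the WLAN-fingerprinting estimate and the image-based estimate; the sketch below illustrates that generic rule, with error figures that are purely illustrative rather than taken from any surveyed system:

```python
def fuse_estimates(wifi_pos, image_pos, wifi_err_m=3.0, image_err_m=1.0):
    """Inverse-variance weighted average of two 2-D position estimates.
    The estimate with the smaller assumed error gets the larger weight."""
    w_wifi = 1.0 / wifi_err_m ** 2
    w_img = 1.0 / image_err_m ** 2
    total = w_wifi + w_img
    return tuple((w_wifi * a + w_img * b) / total for a, b in zip(wifi_pos, image_pos))

# The image-based estimate (assumed 1 m error vs 3 m for WLAN) dominates the result.
print(fuse_estimates((10.0, 4.0), (12.0, 5.0)))
```

A full system would estimate the per-modality errors online, e.g. from fingerprint-match residuals and image-matching confidence, rather than using fixed constants.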