Multimodal Content Delivery for Geo-services
This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To deliver these services effectively, the research focused on innovative solutions to real-world problems in a number of disciplines, including geo-location, mobile spatial interaction, location-based services, rich media interfaces and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction, underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility, which filter data based on field of view, demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery or output modality employed (auditory, visual or tactile). Further contributions are made in the area of multimodal content delivery, where multiple modalities are used to deliver information using graphical user interfaces, tactile interfaces and, more notably, auditory user interfaces. It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users, in a responsive way, based on usage scenarios that consider the affordance of the device, the geographical position and bearing of the device, and also the location of the device.
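The trilateration idea mentioned in the abstract can be sketched in a few lines. This is a generic least-squares formulation, not the thesis's actual method; the beacon coordinates and ranges below are invented for illustration. Subtracting the first beacon's circle equation from the others linearizes the problem into A p = b.

```python
# Hedged sketch: 2-D trilateration from three terrestrial beacons via
# least squares. All positions and ranges are illustrative assumptions.
import numpy as np

def trilaterate(beacons, distances):
    """Estimate (x, y) from beacon positions and measured ranges."""
    (x0, y0), d0 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        # Circle i minus circle 0: 2x(xi-x0) + 2y(yi-y0) = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
truth = np.array([30.0, 40.0])
ranges = [float(np.hypot(*(truth - np.array(bc)))) for bc in beacons]
print(trilaterate(beacons, ranges))  # close to [30. 40.]
```

With more than three beacons the same least-squares solve averages out range noise, which is why hybrid positioning systems typically over-determine the fix.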
Integrating Haptic Feedback into Mobile Location Based Services
Haptics is a feedback technology that takes advantage of the human sense of touch by applying forces, vibrations, and/or motions to a haptic-enabled device such as a mobile phone. Historically, human-computer interaction has been visual: text and images on the screen. Haptic feedback can be an important additional method, especially in Mobile Location Based Services such as knowledge discovery, pedestrian navigation and notification systems. A knowledge discovery system called the Haptic GeoWand is a low-interaction system that allows users to query geo-tagged data around them by using a point-and-scan technique with their mobile device. Haptic Pedestrian is a navigation system for walkers. Four prototypes have been developed, classified according to the user's guidance requirements, the user type (based on spatial skills), and overall system complexity. Haptic Transit is a notification system that provides spatial information to the users of public transport. In all these systems, haptic feedback is used to convey information about location, orientation, density and distance by use of the vibration alarm with varying frequencies and patterns to help users understand the physical environment. Trials elicited positive responses from the users, who see benefit in being provided with a "heads up" approach to mobile navigation. Results from a memory recall test show that the users of haptic feedback for navigation had better memory recall of the region traversed than the users of landmark images. Haptics integrated into a multi-modal navigation system provides more usable, less distracting and more effective interaction than conventional systems. Enhancements to the current work could include integration of contextual information, detailed large-scale user trials and the exploration of using haptics within confined indoor spaces.
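The idea of encoding distance through vibration frequency and pattern can be illustrated with a small sketch. The distance bands and on/off durations below are illustrative assumptions, not the actual parameters of the systems described above.

```python
# Hedged sketch: mapping distance to a target into a vibration pattern
# (alternating off/on durations in milliseconds), in the spirit of the
# haptic systems described above. Bands and timings are invented.
def vibration_pattern(distance_m):
    """Closer targets pulse faster and more often; far targets pulse slowly."""
    if distance_m < 10:
        return [0, 100, 50, 100, 50, 100]  # rapid triple pulse: very close
    elif distance_m < 50:
        return [0, 150, 200, 150]          # double pulse: nearby
    else:
        return [0, 300]                    # single long pulse: far away

print(vibration_pattern(5))    # [0, 100, 50, 100, 50, 100]
print(vibration_pattern(200))  # [0, 300]
```

A pattern list in this off/on form is the shape accepted by common mobile vibration APIs, so such an encoding can be handed to the device's vibration alarm directly.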
Using haptics as an alternative to visual map interfaces for public transport information systems
Most people rely on public transport for daily commutes or for journeys within a new city. To ensure users actively use public transport services, the availability and usability of information relevant to the traveler at any given time is very important. In this paper we describe an interaction model for users of public transport. The interaction model is divided into two main components: the web interaction model and the mobile interaction model. The web interface provides real-time bus information using a website. The mobile interaction model provides similar information to the user through visual user interfaces, gesture-based querying, and haptic feedback. Improved access to transit services depends heavily on the effectiveness of communicating information to existing and potential passengers. We discuss the importance and benefits of our multi-modal interaction in public transport systems. The importance of the relatively new mode of haptic feedback is also discussed.
Mobile assistive technologies for the visually impaired
There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes)
Designing usable mobile interfaces for spatial data
2010 - 2011. This dissertation deals mainly with the discipline of Human-Computer Interaction (HCI), with particular attention to the role it plays in the domain of modern mobile devices. Mobile devices today offer crucial support to a plethora of daily activities for nearly everyone. Ranging from checking business mail while traveling, to accessing social networks while in a mall, to carrying out business transactions while out of the office, to using all kinds of online public services, mobile devices play the important role of connecting people while physically apart. Modern mobile interfaces are therefore expected to improve the user's interaction experience with the surrounding environment and offer different adaptive views of the real world. The goal of this thesis is to enhance the usability of mobile interfaces for spatial data. Spatial data are data in which the spatial component plays an important role in clarifying the meaning of the data themselves. Nowadays, this kind of data is widespread in mobile applications: spatial data are present in games, map applications, mobile community applications and office automation. In order to enhance the usability of spatial data interfaces, my research investigates two major issues: 1. enhancing the visualization of spatial data on small screens; 2. enhancing text-input methods. I selected the Design Science Research approach to investigate the above research questions. The idea underlying this approach is "you build an artifact to learn from it"; in other words, researchers clarify what is new in their design. The new knowledge gained from the artifact will be presented in the form of interaction design patterns, in order to support developers in dealing with issues of mobile interfaces. The thesis is organized as follows. Initially I present the broader context, the research questions and the approaches I used to investigate them. Then the results are split into two main parts. In the first part I present the visualization technique called Framy. The technique is designed to support users in visualizing geographical data on mobile map applications. I also introduce a multimodal extension of Framy obtained by adding sounds and vibrations. After that I present the process that turned the multimodal interface into a means of allowing visually impaired users to interact with Framy. Some projects involving the design principles of Framy are shown in order to demonstrate the adaptability of the technique to different contexts. The second part concerns text-input methods. In particular I focus on the work done in the area of virtual keyboards for mobile devices. A new kind of virtual keyboard called TaS provides users with an input system that is more efficient and effective than the traditional QWERTY keyboard. Finally, in the last chapter, the knowledge acquired is formalized in the form of interaction design patterns. [edited by author]
Advanced Location-Based Technologies and Services
Since the publication of the first edition in 2004, advances in mobile devices, positioning sensors, WiFi fingerprinting, and wireless communications, among others, have paved the way for developing new and advanced location-based services (LBSs). This second edition provides up-to-date information on LBSs, including WiFi fingerprinting, mobile computing, geospatial clouds, geospatial data mining, location privacy, and location-based social networking. It also includes new chapters on application areas such as LBSs for public health, indoor navigation, and advertising. In addition, the chapter on remote sensing has been revised to address advancements
FlexRDZ: Autonomous Mobility Management for Radio Dynamic Zones
FlexRDZ is an online, autonomous manager for radio dynamic zones (RDZ) that
seeks to enable the safe operation of RDZs through real-time control of
deployed test transmitters. FlexRDZ leverages Hierarchical Task Networks and
digital twin modeling to plan and resolve RDZ violations in near real-time. We
prototype FlexRDZ with GTPyhop and the Terrain Integrated Rough Earth Model
(TIREM). We deploy and evaluate FlexRDZ within a simulated version of the Salt
Lake City POWDER testbed, a potential urban RDZ environment. Our simulations
show that FlexRDZ enables up to a 20 dBm reduction in mobile interference and a
significant reduction in the total power of leaked transmissions while
preserving the overall communication capabilities and uptime of test
transmitters. To our knowledge, FlexRDZ is the first autonomous system for RDZ
management.

Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
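The Hierarchical Task Network planning that FlexRDZ is described as using can be sketched generically. FlexRDZ itself uses GTPyhop; the minimal planner, the "resolve_violation" task, and the power-reduction action below are invented for illustration, not taken from the paper.

```python
# Hedged sketch of HTN planning: compound tasks are decomposed by
# methods into subtasks until only primitive actions remain. The RDZ
# domain here (stepping a transmitter's power down to a leakage budget)
# is an illustrative toy, not FlexRDZ's actual model.
def htn_plan(state, tasks, actions, methods):
    """Return a list of primitive actions achieving `tasks`, or None."""
    if not tasks:
        return []
    head, rest = tasks[0], list(tasks[1:])
    name, args = head[0], head[1:]
    if name in actions:  # primitive: apply it, then plan the rest
        new_state = actions[name](dict(state), *args)
        if new_state is None:
            return None
        tail = htn_plan(new_state, rest, actions, methods)
        return None if tail is None else [head] + tail
    for method in methods.get(name, ()):  # compound: try each method
        subtasks = method(state, *args)
        if subtasks is not None:
            plan = htn_plan(state, list(subtasks) + rest, actions, methods)
            if plan is not None:
                return plan
    return None

def step_down(state):
    """Primitive action: reduce transmit power by 10 dBm if possible."""
    if state["power"] > state["budget"]:
        state["power"] -= 10
        return state
    return None

def m_resolve(state):
    """Method: keep stepping power down until within the RDZ budget."""
    if state["power"] <= state["budget"]:
        return []  # already compliant: empty decomposition
    return [("step_down",), ("resolve_violation",)]

actions = {"step_down": step_down}
methods = {"resolve_violation": [m_resolve]}
plan = htn_plan({"power": 35, "budget": 10}, [("resolve_violation",)], actions, methods)
print(plan)  # [('step_down',), ('step_down',), ('step_down',)]
```

The recursive decompose-then-execute structure is what lets an HTN manager resolve a violation in small, verifiable steps rather than searching the raw action space.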
Interface design for a remote guidance system for the blind: Using dual-screen displays
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Mobility for visually impaired people is one of the main challenges that researchers are still facing around the world. Although some projects have been conducted to improve the mobility of visually impaired people, further research is still needed. One of these projects is the Brunel Remote Guidance System (BRGS). BRGS aims to assist visually impaired users in avoiding obstacles and reaching their destinations safely by providing online instructions via a remote sighted guide.
This study comes as a continuation of the development process of BRGS; the main aim achieved by this research is the optimisation of the interface design for the system's guide terminal. This helps the sighted guide to assist visually impaired users (VIUs) in avoiding obstacles safely and comfortably in micro-navigation, as well as keeping them on the right track to reach their destination in macro-navigation. After content analysis, the performance factors and their assessment methods were identified for each BRGS element, which revealed a lack of research on the guide terminal setup and on assessment methods for sighted guide performance. Furthermore, no model for assessing sighted guide performance with two-screen displays was found in the literature review or in similar projects. A model was designed as a platform to conduct the evaluation of sighted guide performance. Based on this model, a computer-based simulation was established and tested, making the simulation ready for the next task: the evaluation of sighted guide performance. The study determined the effects of the two-screen displays on the recognition performance of 80 participants in the guide terminal. Performance was measured under four different resolution conditions. The study was based on a simulation technique consisting of two key performance elements used to examine sighted guide performance: the macro-navigation element and the micro-navigation element. The results show that the two-screen displays have an effect on the performance of the sighted guide. The optimum setup for the two-screen displays in the guide terminal consisted of a large digital map screen display (4CIF [704 x 576]) and a small video image screen display (CIF [352 x 288]), one of the four resolution conditions. This interface design has been recommended as the final setup for the guide terminal.
Multimodal Sensing for Robust and Energy-Efficient Context Detection with Smart Mobile Devices
Adoption of smart mobile devices (smartphones, wearables, etc.) is rapidly growing. There are already over 2 billion smartphone users worldwide [1] and the percentage of smartphone users is expected to be over 50% in the next five years [2]. These devices feature rich sensing capabilities which allow inferences about the mobile device user's surroundings and behavior. Multiple and diverse sensors common on such mobile devices facilitate observing the environment from different perspectives, which helps to increase the robustness of inferences and enables more complex context detection tasks. Though a larger number of sensing modalities can be beneficial for more accurate and wider mobile context detection, integrating these sensor streams is non-trivial.
This thesis presents how multimodal sensor data can be integrated to facilitate robust and energy-efficient mobile context detection, considering three important and challenging detection tasks: indoor localization, indoor-outdoor detection and human activity recognition. It presents three methods for multimodal sensor integration, each applied to a different type of context detection task considered in this thesis. These are gradually decreasing in design complexity, starting with a solution based on an engineering approach that decomposes context detection into simpler tasks and integrates these with a particle filter for indoor localization. This is followed by manual extraction of features from different sensors and the use of an adaptive machine learning technique called semi-supervised learning for indoor-outdoor detection. Finally, a method using deep neural networks capable of extracting non-intuitive features directly from raw sensor data is used for human activity recognition; this method also provides a higher degree of generalization to other context detection tasks.
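The particle-filter fusion step mentioned above can be sketched in one dimension. This is a generic predict-update-resample cycle, not the thesis's actual implementation; the corridor setting, motion model and noise levels are illustrative assumptions.

```python
# Hedged sketch: a particle filter fusing a step-count motion estimate
# with a noisy position fix (e.g. a WiFi fingerprint) along a 1-D
# corridor. All models and parameters are invented for illustration.
import random

def particle_filter_step(particles, step, measurement, noise=1.0):
    """One predict-update-resample cycle over 1-D position hypotheses."""
    # Predict: move each particle by the odometry step plus motion noise.
    moved = [p + step + random.gauss(0, noise) for p in particles]
    # Update: weight each particle by its closeness to the measurement.
    weights = [1.0 / (1.0 + abs(p - measurement)) for p in moved]
    # Resample: draw a new population proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0, 50) for _ in range(500)]
true_pos = 10.0
for _ in range(15):
    true_pos += 1.0  # the user walks one unit per cycle
    noisy_fix = true_pos + random.gauss(0, 2)
    particles = particle_filter_step(particles, 1.0, noisy_fix)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # converges close to true_pos (~25)
```

Because each sensor only contributes a likelihood weight, additional modalities (magnetometer, barometer, BLE beacons) can be folded into the update step without changing the filter's structure.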
Energy efficiency is an important consideration in general for battery-powered mobile devices, and context detection is no exception. In the various context detection tasks and solutions presented in this thesis, particular attention is paid to this issue by relying largely on sensors that consume low energy and on lightweight computations. Overall, the solutions presented improve on the state of the art in terms of accuracy and robustness while keeping energy consumption low, making them practical for use on mobile devices.
- …