Non-Intrusive Subscriber Authentication for Next Generation Mobile Communication Systems
The last decade has witnessed massive growth in both the technological development and
the consumer adoption of mobile devices such as mobile handsets and PDAs. The recent
introduction of wideband mobile networks has enabled the deployment of new services
with access to traditionally well protected personal data, such as banking details or
medical records. Secure user access to this data has, however, remained a function of the
mobile device's authentication system, which is only protected from masquerade abuse by
the traditional PIN, originally designed to protect against telephony abuse.
This thesis presents novel research in relation to advanced subscriber authentication for
mobile devices. The research began by assessing the threat of masquerade attacks on
such devices by way of a survey of end users. This revealed that the current methods of
mobile authentication remain largely unused, leaving terminals highly vulnerable to
masquerade attack. Further investigation revealed that, in the context of the more
advanced wideband-enabled services, users are receptive to many advanced
authentication techniques and principles, including the discipline of biometrics, which
naturally lends itself to the area of advanced subscriber-based authentication.
To address the requirement for a more personal authentication capable of being applied
in a continuous context, a novel non-intrusive biometric authentication technique was
conceived, drawn from the discrete disciplines of biometrics and Auditory Evoked
Responses. The technique forms a hybrid multi-modal biometric where variations in the
behavioural stimulus of the human voice (due to the propagation effects of acoustic
waves within the human head) are used to verify the identity of a user. The resulting
approach is known as the Head Authentication Technique (HAT).
Evaluation of the HAT authentication process is realised in two stages. Firstly, the
generic authentication procedures of registration and verification are automated within a
prototype implementation. Secondly, a HAT demonstrator is used to evaluate the
authentication process through a series of experimental trials involving a representative
user community. The results from the trials confirm that multiple HAT samples from
the same user exhibit a high degree of correlation, yet samples between users exhibit a
high degree of discrepancy. Statistical analysis of the prototype's performance realised
early system error rates of FNMR = 6% and FMR = 0.025%. The results clearly
demonstrate the authentication capabilities of this novel biometric approach and the
contribution this new work can make to the protection of subscriber data in next
generation mobile networks.
Orange Personal Communication Services Ltd
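The verification principle this abstract describes (accept a claimant when a fresh sample correlates strongly with the enrolled template, and report FNMR and FMR at a chosen decision threshold) can be sketched as follows. This is a minimal illustration, not the thesis prototype: the correlation measure, threshold value, and all function names are assumptions.

```python
# Illustrative sketch (not the HAT prototype): verify a claimed identity by
# correlating a fresh biometric sample against an enrolled template, then
# estimate FNMR/FMR over sets of genuine and impostor match scores.
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length sample vectors."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def verify(sample, template, threshold=0.9):
    """Accept the claim if the sample correlates strongly with the template."""
    return pearson(sample, template) >= threshold

def error_rates(genuine_scores, impostor_scores, threshold=0.9):
    """FNMR: genuine attempts rejected; FMR: impostor attempts accepted."""
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fnmr, fmr
```

Sweeping the threshold trades one error rate against the other: a stricter threshold lowers FMR at the cost of a higher FNMR, which is how operating points such as the reported FNMR = 6% / FMR = 0.025% arise.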
Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts
The use of context in mobile devices is receiving increasing attention in mobile and ubiquitous computing research. In this article we consider how to augment mobile devices with awareness of their environment and situation as context. Most work to date has been based on integration of generic context sensors, in particular for location and visual context. We propose a different approach based on integration of multiple diverse sensors for awareness of situational context that cannot be inferred from location, and targeted at mobile device platforms that typically do not permit processing of visual context. We have investigated multi-sensor context-awareness in a series of projects, and report experience from development of a number of device prototypes. These include development of an awareness module for augmentation of a mobile phone, of the Mediacup exemplifying context-enabled everyday artefacts, and of the Smart-Its platform for aware mobile devices. The prototypes have been explored in various applications to validate the multi-sensor approach to awareness, and to develop new perspectives of how embedded context-awareness can be applied in mobile and ubiquitous computing.
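The multi-sensor idea can be made concrete with a toy fusion layer: each cheap sensor contributes a weak cue, and a simple rule layer fuses them into a situational context label that location alone could not reveal. The sensor names, thresholds, and context labels below are invented for illustration and are not taken from the Mediacup or Smart-Its implementations.

```python
# Hypothetical sketch of rule-based multi-sensor fusion: several weak cues
# (motion, light, noise) are combined into a coarse situational context.
def classify_context(readings):
    """Fuse diverse sensor cues into a coarse situational context label."""
    moving = readings["accel_variance"] > 0.5   # device is being carried
    dark = readings["light_lux"] < 10           # e.g. inside a pocket or bag
    loud = readings["noise_db"] > 70            # busy surroundings
    if moving and dark:
        return "in pocket, user walking"
    if not moving and dark:
        return "stowed away"
    if loud:
        return "on table in noisy room"
    return "on table in quiet room"
```

No single reading here distinguishes "in pocket" from "stowed away"; only the combination of motion and light does, which is the point of fusing multiple diverse sensors.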
Emotion Recognition from Speech with Acoustic, Non-Linear and Wavelet-based Features Extracted in Different Acoustic Conditions
ABSTRACT: In recent years, there has been great progress in automatic speech recognition. The challenge now is not only to recognize the semantic content of speech but also the so-called "paralinguistic" aspects, including the emotions and the personality of the speaker. This research work aims at the development of a methodology for automatic emotion recognition from speech signals in non-controlled noise conditions. For that purpose, different sets of acoustic, non-linear, and wavelet-based features are used to characterize emotions in different databases created for this purpose.
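As a hedged illustration of the per-frame feature extraction such methodologies start from, two of the simplest classical acoustic descriptors can be computed as follows. The paper itself combines richer acoustic, non-linear, and wavelet-based feature sets, none of which are shown here; the frame and hop lengths are assumptions.

```python
# Illustrative sketch only: compute frame energy and zero-crossing rate,
# two elementary acoustic descriptors, over sliding frames of a signal.
def frame_features(signal, frame_len=160, hop=80):
    """Return a list of (energy, zero_crossing_rate) tuples, one per frame."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        ) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats
```

A classifier would then be trained on such per-frame feature vectors (or statistics aggregated over an utterance) to predict an emotion label.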
Harnessing AI for Speech Reconstruction using Multi-view Silent Video Feed
Speechreading or lipreading is the technique of understanding and getting
phonetic features from a speaker's visual features such as movement of lips,
face, teeth and tongue. It has a wide range of multimedia applications such as
in surveillance, Internet telephony, and as an aid to a person with hearing
impairments. However, most of the work in speechreading has been limited to
text generation from silent videos. Recently, research has started venturing
into generating (audio) speech from silent video sequences but there have been
no developments thus far in dealing with divergent views and poses of a
speaker. Thus, although multiple camera feeds may exist for a speaker's speech,
they have not been used to deal with these different poses. To this end, this
paper presents the world's first multi-view speech reading and reconstruction
system. This work extends the boundaries of multimedia research by putting forth
a model which leverages silent video feeds from multiple cameras recording the
same subject to generate intelligible speech for a speaker. Initial results
confirm the usefulness of
exploiting multiple camera views in building an efficient speech reading and
reconstruction system. It further shows the optimal placement of cameras which
would lead to the maximum intelligibility of speech. Next, it lays out various
innovative applications for the proposed system, focusing on its potentially
prodigious impact not just in the security arena but in many other multimedia
analytics problems.
Comment: 2018 ACM Multimedia Conference (MM '18), October 22--26, 2018, Seoul,
Republic of Korea
A Phone Learning Model for Enhancing Productivity of Visually Impaired Civil Servants
Phone-based learning in civil service is the use of voice technologies to deliver learning and capacity-building training services to
government employees. The Internet revolution and advances in Information and Communications Technology (ICT) have given rise
to online and remote staff training for the purpose of enhancing workers' productivity. The need for civil servants in Nigeria to develop
capacity that will enhance knowledge is a key requirement for having a competitive advantage in the workplace. Existing online learning
platforms (such as web-based learning, mobile learning, etc.) did not consider the plight of the visually impaired. These platforms provide
graphical interfaces that require sight to access. Visually impaired civil servants require auditory access to the functionality that exists in
learning management systems on the Internet. Thus a gap exists between able-bodied and visually impaired civil servants in
accessibility to e-learning platforms. The objective of this paper is to provide a personalized telephone learning model and a prototype
application that will enhance the productivity of visually impaired workers in government establishments in Nigeria. The model was
designed using Unified Modeling Language (UML) diagrams. The prototype application was implemented and evaluated. With the
proposed model and application, visually and mobility impaired workers are able to participate in routine staff training and
consequently enhance their productivity just like their able-bodied counterparts. The prototype application also serves as an alternative
training platform for able-bodied workers. A future research direction for this study will include biometric authentication of learners
accessing the application.
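The core interaction pattern of such a telephone learning platform, a keypad-driven voice menu, can be sketched as below. The lesson titles, key mappings, and function name are hypothetical, and a real deployment would play these prompts through a text-to-speech or IVR engine rather than return strings.

```python
# Hypothetical sketch of a DTMF-driven voice menu for phone-based learning.
# Lesson catalogue and key assignments are invented for illustration.
LESSONS = {"1": "Introduction to the Civil Service Rules",
           "2": "Records Management Basics",
           "3": "Workplace Communication Skills"}

def handle_keypress(key):
    """Map a keypad press to the spoken response an IVR server would play."""
    if key == "0":
        return "Main menu. " + " ".join(
            f"Press {k} for {title}." for k, title in sorted(LESSONS.items()))
    if key in LESSONS:
        return f"Starting lesson: {LESSONS[key]}."
    return "Invalid choice. Press 0 to hear the menu again."
```

Because every prompt is audio and every input is a keypress, the interaction requires no sight at all, which is the accessibility property the model is built around.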