249 research outputs found

    The role of technology in improving the Customer Experience in the banking sector: a systematic mapping study

    Get PDF
    Information Technology (IT) has revolutionized the way we manage our money. The adoption of innovative technologies in banking scenarios allows customers to access old and new financial services in a faster, more secure, comfortable, rewarding, and engaging way. The number, performance, and seamless integration of these innovations are a driver for banks to retain their customers and avoid costly changes of heart. The literature is rich in works reporting on the use of technology with a direct or indirect impact on the experience of banking customers. Some mapping studies on the adoption of technologies in the field exist, but they are either specific to particular technologies (e.g., only Artificial Intelligence) or, conversely, too generic (e.g., reviewing the adoption of technologies to support any kind of banking process). A dedicated research effort on the combined domain of technology and Customer Experience (CX) is therefore missing. This paper aims to overcome the following gaps: the lack of a comprehensive map of the research carried out in the field over the past decade, the absence of a discussion of the current research trends in top publications and journals, and the fact that the next research challenges are yet to be identified. To address these limitations, we designed and submitted 7 different queries to 4 popular scientific databases. From an initial set of 6,756 results, we identified 89 primary studies that we thoroughly analyzed. A selection of the top 20% of works allowed us to identify the best-performing technologies, as well as other promising ones that have not yet been tried in the field. The main results show that the combined study of technology and CX in the banking sector is not approached systematically, and thus the development of a new, specific research line is needed.

    Cybersecurity: Past, Present and Future

    Full text link
    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of this new cyberspace is called cybersecurity. Cyberspace has given rise to new technologies and environments such as cloud computing, smart devices, and the IoT, among several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of each of these main specializations of cybersecurity. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human and AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study with great potential to improve the role of AI in cybersecurity.
Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    Proactive biometric-enabled forensic imprinting system

    Get PDF
    Insider threats are a significant security issue. The last decade has witnessed countless instances of data loss and exposure in which leaked data have become publicly available and easily accessible. Losing or disclosing sensitive data or confidential information may cause substantial financial and reputational damage to a company. Therefore, preventing or responding to such incidents has become a challenging task. Whilst more recent research has focused explicitly on the problem of insider misuse, it has tended to concentrate on the information itself, either through its protection or through approaches to detecting leakage. Although digital forensics has become a de facto standard in the investigation of criminal activities, a fundamental problem is the inability to associate a specific person with particular electronic evidence, especially when stolen credentials and the Trojan defence are two commonly cited counter-arguments. Thus, there is an urgent requirement to develop a more innovative and robust technique that can more inextricably link the use of information (e.g., images and documents) to the users who access and use it. Therefore, this research project investigates the role that transparent and multimodal biometrics could play in providing this link, by leveraging individuals' biometric information for the attribution of insider misuse. This thesis examines the existing literature in the domains of data loss prevention, detection, and proactive digital forensics, including traceability techniques. The aim is to advance the current state of the art, a gap having been identified in the literature that this research attempts to investigate and for which it proposes a possible solution.
Although most of the existing methods and tools used by investigators to examine digital crime help significantly in collecting, analysing, and presenting digital evidence, it is essential to this process that investigators establish a link between the notable/stolen digital object and the identity of the individual who used it, as opposed to merely relying on an electronic record or log indicating that the user interacted with the object in question (the evidence). Therefore, the proposed approach in this study seeks to provide a novel technique for capturing an individual's biometric identifiers/signals (e.g., face or keystroke dynamics) and embedding them into the digital objects the user is interacting with. This is achieved in one of two modes: centralised or decentralised. The centralised approach stores the mapped information alongside digital object identifiers in a centralised storage repository; the decentralised approach overcomes the need for centralised storage by embedding all the necessary information within the digital object itself. Moreover, no explicit biometric information is stored, as only the correlation that points to the relevant locations within the imprinted object is preserved. Comprehensive experiments conducted to assess the proposed approach show that it is highly possible to establish this correlation even when the original version of the examined object has undergone significant modification. In many scenarios, such as changing or removing part of an image or document, including words and sentences, it was possible to extract and reconstruct the correlated biometric information from the modified object with a high success rate. A reconstruction of the feature vector from unmodified images was possible using the generated imprints with 100% accuracy; this was achieved simply by reversing the imprinting process.
Under a modification attack, in which the imprinted object is manipulated, at least one imprinted feature vector was successfully retrieved from an average of 97 out of 100 images, even when the modification percentage was as high as 80%. For the decentralised approach, initial experimental results showed that it was possible to retrieve the embedded biometric signals successfully even when the file (i.e., the image) had had 75% of its original content modified. The research has proposed and validated a number of approaches to embedding biometric data within digital objects to enable successful user attribution of information leakage attacks.
Embassy of Saudi Arabia in London
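The abstract does not reproduce the thesis's imprinting algorithm. Purely as an illustrative sketch of the underlying idea (redundant embedding that survives partial modification of the carrier), the following hypothetical Python example embeds a feature vector several times into pixel least-significant bits and recovers it by majority vote; the actual system differs, notably in that it stores only correlations rather than raw biometric data.

```python
def imprint(pixels, feature, copies):
    """Embed `feature` (bytes) bit-by-bit into pixel LSBs, repeated `copies` times."""
    bits = [(byte >> i) & 1 for byte in feature for i in range(8)]
    out = list(pixels)
    for c in range(copies):
        base = c * len(bits)
        for i, bit in enumerate(bits):
            out[base + i] = (out[base + i] & ~1) | bit
    return out

def extract(pixels, length, copies):
    """Recover a `length`-byte feature vector by majority vote over the copies."""
    nbits = length * 8
    votes = [0] * nbits
    for c in range(copies):
        base = c * nbits
        for i in range(nbits):
            votes[i] += pixels[base + i] & 1
    bits = [1 if v * 2 > copies else 0 for v in votes]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )
```

Because each copy occupies a different region of the carrier, a contiguous modification that destroys fewer than half of the copies leaves the majority vote intact, which is the behaviour the thesis's modification-attack experiments measure.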

    Mobile user authentication system (MUAS) for e-commerce applications.

    Get PDF
    The rapid growth of e-commerce has many associated security concerns, and several studies aimed at developing secure online authentication systems have therefore emerged. Most studies begin with the premise that the intermediate network is the primary point of compromise. In this thesis, we assume that the point of compromise lies within the end-host or browser; this security threat is called the man-in-the-browser (MITB) attack. MITB attacks can bypass the security measures of public key infrastructures (PKI), as well as the encryption mechanisms of the secure sockets layer and transport layer security (SSL/TLS) protocols. This thesis focuses on developing a system that can circumvent MITB attacks using a two-phase secure user-authentication system, whose phases comprise challenge generation and response generation. The proposed system represents the first step in conducting an online business transaction. The proposed authentication system design helps protect the confidentiality of the initiating client by requesting minimal, non-confidential information to bypass the MITB attack and by transitioning the authentication mechanism from the infected browser to a mobile-based system via a challenge/response mechanism. The challenge- and response-generation process depends on validating the submitted information and ensuring the legitimacy of the mobile phone. Both phases within the MUAS context also mitigate denial-of-service (DoS) attacks via the registration information, which includes the client's mobile number and the International Mobile Equipment Identity (IMEI) of the client's mobile phone. This novel authentication scheme circumvents the MITB attack by utilising the legitimate client's personal mobile phone as a detached platform on which to generate the challenge response and conduct business transactions.
Even if the MITB attacker takes over the challenge-generation phase, causing it to fail the required security properties, the response-generation phase still produces a secure response from the registered legitimate mobile phone by employing security attributes from both phases. Thus, the detached challenge- and response-generation phases are logically linked.
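The abstract does not spell out the challenge/response construction. As a minimal sketch of the general pattern it describes, assuming an HMAC over a server nonce keyed by the registered identifiers (mobile number and IMEI), the following hypothetical Python example shows how the response can be computed on the detached phone and verified server-side without trusting the browser; all names and the key derivation are illustrative, not the thesis's actual scheme.

```python
import hashlib
import hmac
import secrets

# Hypothetical registration record: the thesis binds the client's mobile
# number and the handset IMEI at enrolment.
REGISTRY = {"client42": {"mobile": "+441234567890", "imei": "490154203237518"}}

def derive_key(mobile, imei):
    # Illustrative key derivation from the registered identifiers.
    return hashlib.sha256((mobile + "|" + imei).encode()).digest()

def issue_challenge():
    # Phase 1: the server issues a fresh nonce, relayed to the detached
    # mobile device (e.g. displayed by the possibly infected browser).
    return secrets.token_hex(16)

def respond(challenge, mobile, imei):
    # Phase 2: the phone computes the response from its own registered
    # identifiers, outside the compromised browser.
    key = derive_key(mobile, imei)
    return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()

def verify(client_id, challenge, response):
    rec = REGISTRY[client_id]
    expected = hmac.new(derive_key(rec["mobile"], rec["imei"]),
                        challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

A browser-resident attacker who alters the challenge cannot forge the response, because the key material never leaves the registered phone, which is the logical link between the two phases that the abstract emphasises.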

    Recent Advances in Signal Processing

    Get PDF
    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and they have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be grouped into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages, in that order. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    An innovative vision system for industrial applications

    Full text link
    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Defence date: 20-11-2015.
Despite the fact that computer vision systems play an important role in our society, their structure does not follow any standard. The implementation of computer vision applications requires high-performance platforms, such as GPUs or FPGAs, and image sensors with characteristics very different from those of consumer electronics. Nowadays, each manufacturer and research lab develops its own vision platform independently, without any inter-compatibility. This thesis introduces a new computer vision platform that can be used in a wide spectrum of applications. The characteristics of the platform were defined after the implementation of three different computer vision applications, based on an SOC, an FPGA, and a GPU, respectively. As a result, a modular platform has been defined with the following interchangeable elements: sensor, on-the-fly image processing pipeline, main processing unit, hardware acceleration unit, and computer vision software stack. This thesis also presents an FPGA-synthesizable algorithm for performing geometric transformations on the fly, with a latency of only 90 horizontal lines.
All the software elements of this platform are developed under Free Software licences; over the course of this thesis, more than 200 patches were contributed to and accepted by different Free Software projects, such as the Linux kernel, the Yocto Project, and U-Boot, among others, promoting the ecosystem required to build a community around this work. The implementation of the platform in a commercial product, the Qtechnology QT5022, and its use in several industrial applications demonstrate that a generic computer vision platform is feasible and allows reusing elements and comparing results objectively.
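The abstract quantifies latency in horizontal lines but does not detail the algorithm. As an illustrative software model (not the thesis's FPGA implementation), the following hypothetical Python sketch shows why a geometric transform whose vertical displacement is bounded by a fixed number of lines can stream with a latency of that many lines, independent of frame height.

```python
def stream_remap(rows, mapping, max_dy):
    """Apply a per-pixel geometric remap to an image given as a list of rows.

    `mapping(y, x)` returns the source coordinate (sy, sx) of output pixel
    (y, x), with |sy - y| <= max_dy. In a streaming (FPGA-style)
    implementation, output row y can therefore be emitted as soon as input
    row y + max_dy has arrived: a fixed latency of max_dy lines and a line
    buffer of 2 * max_dy + 1 rows, regardless of the frame height.
    """
    h = len(rows)
    out = []
    for y in range(h):
        row = []
        for x in range(len(rows[y])):
            sy, sx = mapping(y, x)
            sy = min(max(sy, 0), h - 1)  # clamp at the frame borders
            row.append(rows[sy][sx])
        out.append(row)
    return out
```

For example, a one-line vertical shift (`mapping = lambda y, x: (y + 1, x)`) needs `max_dy = 1` and thus only a three-line buffer; the hardware analogue replaces the Python lists with BRAM line buffers.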

    The Future of Information Sciences : INFuture2007 : Digital Information and Heritage

    Get PDF