6,944 research outputs found

    Multichannel in a complex world

    The proliferation of devices and channels has brought new challenges to just about every organisation in delivering consistently good customer experiences and effectively joining up service provision with marketing activity, data and content. Good multichannel strategy and execution are increasingly essential to marketers and customer experience professionals in every sector. This report seeks to identify the key issues, challenges and opportunities that surround multichannel and to provide best-practice insight and principles on the elements that are key to multichannel success. As part of the research for this report, we spoke to six experienced customer experience and marketing practitioners from large organisations across different sectors. In Multichannel Marketing: Metrics and Methods for On and Offline Success, Akin Arikan (2008) said: ‘Because customers are multichannel beings and demand relevant, consistent experiences across all channels, businesses need to adopt a multichannel mind-set when listening to their customers.’ It was clear from the companies interviewed for this report that maintaining consistency across so many customer touchpoints remains challenging for many organisations. Not only that, but balancing consistency with the capability to fully exploit the unique attributes of each channel remains an aspiration for many. The proliferation of devices and digital channels has added complexity to customer journeys, making the joining up of customer experience and the attribution of value issues of key importance to many. Whilst senior leaders within the organisations spoken to seemed bought into multichannel, this buy-in was not always replicated across the rest of the organisation and did not always translate into a cohesive multichannel strategy. A number of companies were undertaking work on customer journey mapping and customer segmentation, using a variety of passively and actively collected data to identify specific areas of poor customer experience and create action plans for improvement. Others were undertaking projects using sophisticated tracking and tagging technologies to develop an understanding of the value and role of specific channels and to provide better intelligence to the business on attribution that might be used to inform future investment decisions. A consistent barrier to improving customer experience is the difficulty of joining up many different legacy systems and data sources to provide a single customer view and form the basis for delivery of a more consistent and cohesive multichannel approach. Whilst significant challenges around multichannel remain, there are useful technologies allowing businesses to develop better insight into customer motivation and activity. Nonetheless, delivery of a seamless multichannel experience remains a work in progress for many.

    The Architecture of MEG Simulation and Analysis Software

    MEG (Mu to Electron Gamma) is an experiment dedicated to the search for the $\mu^+ \rightarrow e^+\gamma$ decay, which is strongly suppressed in the Standard Model but predicted at an accessible rate in several supersymmetric extensions of it. MEG is a small-size experiment ($\approx 50$-$60$ physicists at any time) with a life span of about 10 years. The limited human resources available, in particular in the core offline group, emphasized the importance of reusing software and exploiting existing expertise. Great care has been devoted to providing a simple system that hides implementation details from the average programmer. This allowed many members of the collaboration with limited programming skills to contribute to the development of the experiment's software. The offline software is based on two frameworks: REM, in FORTRAN 77, used for the event generation and detector simulation package GEM, based on GEANT 3, and ROME, in C++, used in the readout simulation Bartender and in the reconstruction and analysis program Analyzer. Event display in the simulation is based on GEANT 3 graphics libraries and, in the reconstruction, on ROOT graphics libraries. Data are stored in different formats at various stages of the processing. The frameworks include utilities for input/output, database handling and format conversion that are transparent to the user. Comment: Presented at the IEEE NSS, Knoxville, 2010; revised according to the referee's remarks; accepted by European Physical Journal Plus.
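    The chain described above (event generation and detector simulation in GEM, readout simulation in Bartender, reconstruction and analysis in Analyzer) is a staged offline pipeline. The following is a purely illustrative Python sketch of that staged flow; the real code is FORTRAN 77 and C++ within the REM and ROME frameworks, and the function bodies and data layout here are assumptions, not the actual MEG software.

```python
# Illustrative only: the MEG offline flow as a chain of stages, each consuming the
# previous stage's output. Stage names follow the abstract; the bodies, data layout
# and choice of Python are assumptions made for this sketch, not the real frameworks.
def gem(n_events: int) -> list:
    """Event generation and detector simulation (GEANT 3-based GEM in the real chain)."""
    return [{"event": i, "hits": []} for i in range(n_events)]

def bartender(events: list) -> list:
    """Readout simulation: turn simulated hits into detector-like signals."""
    return [{**e, "waveforms": []} for e in events]

def analyzer(events: list) -> list:
    """Reconstruction and analysis of the (simulated or real) readout."""
    return [{**e, "reconstructed": True} for e in events]

if __name__ == "__main__":
    print(len(analyzer(bartender(gem(10)))), "events processed")
```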

    The SSDC contribution to the improvement of knowledge by means of 3D data projections of minor bodies

    The latest developments of planetary exploration missions devoted to minor bodies required new solutions to correctly visualize and analyse data acquired over irregularly shaped bodies. The ASI Space Science Data Center (SSDC-ASI, formerly ASDC, ASI Science Data Center) has worked on this task since early 2013, when it started developing the web tool MATISSE (Multi-purpose Advanced Tool for the Instruments of the Solar System Exploration), mainly focused on Rosetta/ESA space mission data. In order to visualize very high-resolution shape models, MATISSE uses a Python module (vtpMaker), which can also be launched as a stand-alone command-line tool. MATISSE and vtpMaker are part of the SSDC contribution to the new challenges imposed by the "orbital exploration" of minor bodies: 1) MATISSE allows users to search for specific observations inside datasets and then analyse them in parallel, providing high-level outputs; 2) the 3D capabilities of both tools are critical in inferring information otherwise difficult to retrieve for non-spherical targets and, as in the case of the GIADA instrument onboard Rosetta, in visualizing data related to the coma. New tasks and features adding valuable capabilities to the SSDC minor-body tools are planned for the near future thanks to new collaborations.
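    As a concrete illustration of the 3D shape-model handling described above, the sketch below converts a triangulated shape model into VTK's XML PolyData format (.vtp) using the Python vtk package, which is the kind of conversion a module like vtpMaker performs; the function name and file paths are placeholders, not vtpMaker's actual interface.

```python
# Hypothetical sketch: convert a shape model (Wavefront OBJ) to VTK PolyData (.vtp)
# in the spirit of what a tool like vtpMaker does; file names are placeholders.
import vtk

def obj_to_vtp(obj_path: str, vtp_path: str) -> None:
    """Read a triangulated shape model and write it as XML PolyData (.vtp)."""
    reader = vtk.vtkOBJReader()          # parse the OBJ mesh
    reader.SetFileName(obj_path)
    reader.Update()

    writer = vtk.vtkXMLPolyDataWriter()  # serialise to the .vtp XML format
    writer.SetFileName(vtp_path)
    writer.SetInputData(reader.GetOutput())
    writer.Write()

if __name__ == "__main__":
    # Placeholder paths: a comet/asteroid shape model and the output file.
    obj_to_vtp("67P_shape_model.obj", "67P_shape_model.vtp")
```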

    Offline Recognition of Malayalam and Kannada Handwritten Documents Using Deep Learning

    Handwritten text is digitized for a variety of reasons. It is used in a variety of government entities, including banks, post offices, and archaeological departments. Handwriting recognition, on the other hand, is a difficult task, as everyone has a different writing style. There are essentially two methods for handwriting recognition: a holistic approach and an analytic approach. Earlier methods of handwriting recognition are time-consuming. However, as deep neural networks have progressed, the task has become more straightforward than with previous methods. Furthermore, the bulk of existing solutions are limited to a single language. To recognise multilanguage handwritten manuscripts offline, this work employs an analytic approach. It describes how to convert Malayalam and Kannada handwritten manuscripts into editable text. Lines are separated from the input document first. After that, word segmentation is performed. Finally, each word is broken down into individual characters. An artificial neural network is used for feature extraction and classification. The result is then converted to a Word document.
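    As a rough sketch of the final stage of the pipeline described above (after line, word and character segmentation), the snippet below builds a small character classifier in Keras; the convolutional architecture, the 32x32 grayscale input size and the class count are illustrative assumptions, not the network used in this work.

```python
# Minimal sketch (not the paper's exact network): a small CNN character classifier
# that would sit at the end of the line -> word -> character segmentation pipeline.
# Input size (32x32 grayscale) and class count are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 100  # assumed combined Malayalam + Kannada character classes

def build_classifier() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(32, 32, 1)),          # one segmented character image
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),                          # feature extraction ...
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # ... and classification
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```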

    Statistical Parsing by Machine Learning from a Classical Arabic Treebank

    Research into statistical parsing for English has enjoyed over a decade of successful results. However, adapting these models to other languages has met with difficulties. Previous comparative work has shown that Modern Arabic is one of the most difficult languages to parse due to its rich morphology and free word order. Classical Arabic is the ancient form of Arabic, and is understudied in computational linguistics relative to its worldwide reach as the language of the Quran. The thesis is based on seven publications that make significant contributions to knowledge relating to annotating and parsing Classical Arabic. Classical Arabic has been studied in depth by grammarians for over a thousand years using a traditional grammar known as i’rāb (إعراب). Using this grammar to develop a representation for parsing is challenging, as it describes syntax using a hybrid of phrase-structure and dependency relations. This work aims to advance the state of the art for hybrid parsing by introducing a formal representation for annotation and a resource for machine learning. The main contributions are the first treebank for Classical Arabic and the first statistical dependency-based parser in any language for ellipsis, dropped pronouns and hybrid representations. A central argument of this thesis is that using a hybrid representation closely aligned to traditional grammar leads to improved parsing for Arabic. To test this hypothesis, two approaches are compared. As a reference, a pure dependency parser is adapted using graph transformations, resulting in an 87.47% F1-score. This is compared to an integrated parsing model with an F1-score of 89.03%, demonstrating that joint dependency-constituency parsing is better suited to Classical Arabic. The Quran was chosen for annotation as a large body of work exists providing detailed syntactic analysis. Volunteer crowdsourcing is used for annotation in combination with expert supervision. A practical result of the annotation effort is the corpus website, http://corpus.quran.com, an educational resource with over two million users per year.
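    For readers unfamiliar with the metric behind the 87.47% versus 89.03% comparison, the sketch below shows how an attachment F1-score can be computed over labelled dependency arcs; the arc representation and the toy example are assumptions for illustration, not the thesis's evaluation code.

```python
# Illustrative only: precision/recall/F1 over labelled dependency arcs, the kind of
# metric behind the F1-scores quoted above. The arc representation
# (head index, dependent index, relation label) is an assumption.
def arc_f1(gold_arcs: set, predicted_arcs: set) -> float:
    correct = len(gold_arcs & predicted_arcs)   # arcs agreeing on head, dependent and label
    if not gold_arcs or not predicted_arcs or not correct:
        return 0.0
    precision = correct / len(predicted_arcs)
    recall = correct / len(gold_arcs)
    return 2 * precision * recall / (precision + recall)

# Toy example: three gold arcs; the parser recovers two and adds one wrong arc.
gold = {(0, 1, "subj"), (0, 2, "obj"), (2, 3, "mod")}
pred = {(0, 1, "subj"), (0, 2, "obj"), (1, 3, "mod")}
print(f"F1 = {arc_f1(gold, pred):.2%}")  # 66.67%
```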

    Combining Model-Based and Feature-Driven Diagnosis Approaches - A Case Study on Electromechanical Actuators

    Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However, this approach does not work very well when it is not feasible to create analytic relations describing all the observed data, e.g., for vibration data, which is usually sampled at very high rates and requires very detailed finite element models to describe its behavior. In such cases, features (in the time and frequency domains) that contain diagnostic information are extracted from the data. Since this is a computationally intensive process, it is not efficient to extract all the features all the time. In this paper we present an approach that combines the analytic model-based and feature-driven diagnosis approaches. The analytic approach is used to reduce the set of possible faults, and features are then chosen to best distinguish among the remaining faults. We describe an implementation of this approach on the Flyable Electro-mechanical Actuator (FLEA) test bed.
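    The two-step idea (analytic pruning of the fault set, then targeted feature extraction) can be sketched as follows; the fault names, residual signatures and feature ranking are invented for illustration and do not come from the FLEA case study.

```python
# Conceptual sketch of the combined approach (names and signatures are hypothetical):
# analytic residuals first prune the fault set, then only the features ranked most
# discriminative for the surviving candidates are extracted.

# Step 1: model-based pruning -- keep faults consistent with the observed residuals.
FAULT_SIGNATURES = {            # hypothetical fault -> expected residual pattern
    "ballscrew_jam":   {"position_residual": True,  "current_residual": True},
    "spalled_bearing": {"position_residual": True,  "current_residual": True},
    "sensor_bias":     {"position_residual": True,  "current_residual": False},
}

def prune_faults(observed: dict) -> list:
    """Keep only faults whose analytic signature matches the observed residuals."""
    return [f for f, sig in FAULT_SIGNATURES.items()
            if all(observed.get(r, False) == fired for r, fired in sig.items())]

# Step 2: feature-driven refinement -- extract only the feature ranked most
# discriminative for the faults that survive step 1 (hypothetical ranking).
FEATURE_RANKING = {
    frozenset({"ballscrew_jam", "spalled_bearing"}): "vibration_band_energy",
}

candidates = prune_faults({"position_residual": True, "current_residual": True})
feature = FEATURE_RANKING.get(frozenset(candidates))
print(candidates, "->", feature)  # ['ballscrew_jam', 'spalled_bearing'] -> vibration_band_energy
```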

    Parameter Identification for Thermal Reduced-Order Models in Electric Engines

    One part of the validation process of electric engines must check for thermal aging and damage of their components due to the high temperatures to which they are exposed. This way, the thermal requirements of the machine can be defined, and a specific minimum service life can be guaranteed. For this purpose, drives must be validated against the most critical cases identified through simulations of representative driving scenarios. Since the thermal models require a long computation time to determine the temperatures of the components in each time increment, reduced-order models (ROMs) that can estimate them quickly are preferred instead. There are also positions in the engine, such as the coolant temperature, where the temperature in the thermal model is determined only by sensor data when performing calculations online. Since it is also relevant to compute these values when performing simulations offline, ROMs can be applied for this purpose as well. This project focuses on creating and comparing different types of ROMs for temperature estimation in electric engines. Several variants of discrete-time state-space models (SSMs) have been developed in the literature, showing promising results but requiring a high level of expert knowledge. This work introduces a set of SSMs that is entirely data-based and does not require knowledge of the physics and dynamics of the motor. They allow the user to adjust the parameters for different engines and create customized variants. Three models were developed for each engine temperature to be estimated: a preprocessing step divides the driving data into three possible domains, and each model estimates the temperatures in its respective domain. This model discretization based on different scenarios has shown an improvement in estimation accuracy. Finally, black-box approaches based on artificial neural networks (ANNs) were designed, since they showed high potential in the literature. Regression and Long Short-Term Memory (LSTM) models were created and their hyperparameters optimized, but their performance was low compared to the SSMs.
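    To make the data-based state-space idea concrete, the sketch below identifies a first-order discrete-time temperature model from synthetic driving data by least squares; the model order, signal names and numbers are assumptions for illustration and are much simpler than the domain-specific SSMs developed in this project.

```python
# Minimal sketch, not the thesis's exact models: identify a first-order discrete-time
# temperature model  T[k+1] = a*T[k] + b*u[k] + c  by least squares, where u is a
# load proxy (e.g. phase current). All data and values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 500)                     # synthetic load signal
T = np.zeros(501)
T[0] = 40.0
for k in range(500):                               # "true" plant used to fake measurements
    T[k + 1] = 0.98 * T[k] + 1.5 * u[k] + 0.6 + rng.normal(0.0, 0.05)

# Stack regressors and solve for the parameters (a, b, c).
Phi = np.column_stack([T[:-1], u, np.ones(500)])
theta, *_ = np.linalg.lstsq(Phi, T[1:], rcond=None)
a, b, c = theta
print(f"identified a={a:.3f}, b={b:.3f}, c={c:.3f}")   # close to 0.98, 1.5, 0.6
```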