
    From wallet to mobile: exploring how mobile payments create customer value in the service experience

    This study explores how mobile proximity payments (MPP) (e.g., Apple Pay) create customer value in the service experience compared to traditional payment methods (e.g., cash and card). The main objectives were firstly to understand how customer value manifests as an outcome in the MPP service experience, and secondly to understand how the customer activities in the process of using MPP create customer value. To achieve these objectives, a conceptual framework is built upon the Grönroos-Voima Value Model (Grönroos and Voima, 2013) and uses the Theory of Consumption Value (Sheth et al., 1991) to determine the customer value constructs for MPP, complemented with Script theory (Abelson, 1981) to determine the value-creating activities the consumer performs in the process of paying with MPP. The study uses a sequential exploratory mixed-methods design: the first, qualitative stage uses two methods, self-observations (n=200) and semi-structured interviews (n=18); the second, quantitative stage uses an online survey (n=441) and Structural Equation Modelling to further examine the relationships and effects between the value-creating activities and customer value constructs identified in stage one. The academic contributions include the development of a model of mobile payment services value creation in the service experience, the introduction of the concept of in-use barriers, which occur after adoption and constrain consumers' existing use of MPP, and the demonstration of the importance of the mobile-in-hand momentary condition as an antecedent state. Additionally, the customer value perspective of this thesis demonstrates an alternative to the dominant Information Technology approaches to researching mobile payments and broadens the view of technology from purely an object a user interacts with to an object that is immersed in consumers' daily life.

    Consolidation of Urban Freight Transport – Models and Algorithms

    Urban freight transport is an indispensable component of economic and social life in cities. Compared to other types of transport, however, it contributes disproportionately to the negative impacts of traffic. As a result, urban freight transport is closely linked to social, environmental, and economic challenges. Managing urban freight transport and addressing these issues poses challenges not only for local city administrations but also for companies such as logistics service providers (LSPs). Numerous policy measures and company-driven initiatives exist in the area of urban freight transport to overcome these challenges. One central approach is the consolidation of urban freight transport. This dissertation focuses on urban consolidation centers (UCCs), which are a widely studied and applied measure in urban freight transport. The fundamental idea of UCCs is to consolidate freight transport across companies in logistics facilities close to an urban area in order to increase the efficiency of vehicles delivering goods within the urban area. Although the concept has been researched and tested for several decades and has been shown to reduce the negative externalities of freight transport in cities, in practice many UCCs struggle with a lack of business participation and financial difficulties. This dissertation is primarily focused on the costs and savings associated with the use of UCCs from the perspective of LSPs. The cost-effectiveness of UCC use, also referred to as cost attractiveness, can be seen as a crucial condition for LSPs to be interested in using UCC systems. The overall objective of this dissertation is two-fold. First, it aims to develop models that provide decision support for evaluating the cost-effectiveness of using UCCs. Second, it aims to analyze the impacts of urban freight transport regulations and operational characteristics on the cost attractiveness of using UCCs from the perspective of LSPs.
In this context, a distinction is made between UCCs that are jointly operated by a group of LSPs and UCCs that are operated by third parties who offer their urban transport service for a fee. The main body of this dissertation is based on three research papers. The first paper focuses on jointly operated UCCs run by a group of cooperating LSPs. It presents a simulation model to analyze the financial impacts on LSPs participating in such a scheme. In doing so, a particular focus is placed on urban freight transport regulations. A case study is used to analyze the operation of a jointly operated UCC for scenarios involving three freight transport regulations. The second and third papers take a different perspective on UCCs by focusing on third-party operated UCCs. In contrast to the first paper, the second and third papers present an evaluation approach in which the decision to use UCCs is integrated with the vehicle route planning of LSPs. In addition to addressing the basic version of this integrated routing problem, known as the vehicle routing problem with transshipment facilities (VRPTF), the second paper presents problem extensions that incorporate time windows, fleet size and mix decisions, and refined objective functions. To heuristically solve the basic problem and the new problem variants, an adaptive large neighborhood search (ALNS) heuristic with an embedded local search heuristic and a set partitioning problem (SPP) component is presented. Furthermore, various factors influencing the cost attractiveness of UCCs, including time windows and usage fees, are analyzed using a real-world case study. The third paper extends the work of the second paper, incorporating daily and entrance-based city toll schemes and enabling multi-trip routing. A mixed-integer linear programming (MILP) formulation of the resulting problem is proposed, as well as an ALNS solution heuristic.
Moreover, a real-world case study with three European cities is used to analyze the impact of the two city toll systems in different operational contexts.
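    The break-even logic behind UCC cost attractiveness can be sketched with a toy cost comparison. Everything below is a made-up illustration, not the dissertation's simulation model or its case-study data: a linear per-kilometre cost, a flat daily city toll, and a per-parcel UCC usage fee are assumed.

```python
# Hypothetical sketch: when is a third-party UCC cost-attractive for an LSP?
# All numbers and the linear cost model are illustrative assumptions.

def direct_delivery_cost(km_in_city, cost_per_km, daily_toll):
    """Cost of serving urban stops directly with the LSP's own vehicle."""
    return km_in_city * cost_per_km + daily_toll

def ucc_delivery_cost(n_parcels, km_to_ucc, cost_per_km, fee_per_parcel):
    """Cost of handing parcels to a UCC at the city edge (no toll paid)."""
    return km_to_ucc * cost_per_km + n_parcels * fee_per_parcel

direct = direct_delivery_cost(km_in_city=40, cost_per_km=1.2, daily_toll=30)
via_ucc = ucc_delivery_cost(n_parcels=60, km_to_ucc=10, cost_per_km=1.2,
                            fee_per_parcel=0.9)

print(f"direct: {direct:.2f}, via UCC: {via_ucc:.2f}")
# The UCC is cost-attractive whenever via_ucc < direct.
```

    Under these toy numbers the UCC route is cheaper; raising the city toll or lowering the usage fee shifts the break-even point further in the UCC's favour, which is the kind of regulatory effect the papers analyze.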

    Hunting Wildlife in the Tropics and Subtropics

    The hunting of wild animals for their meat has been a crucial activity in the evolution of humans. It continues to be an essential source of food and a generator of income for millions of Indigenous and rural communities worldwide. Conservationists rightly fear that excessive hunting of many animal species will cause their demise, as has already happened throughout the Anthropocene. Many species of large mammals and birds have been decimated or annihilated due to overhunting by humans. If such pressures continue, many other species will meet the same fate. Equally, if those who depend on wildlife resources are to continue using them, sustainable practices must be implemented. These communities need to remain or become custodians of the wildlife resources within their lands, for their own well-being as well as for biodiversity in general. This title is also available via Open Access on Cambridge Core.

    Scalable software and models for large-scale extracellular recordings

    The brain represents information about the world through the electrical activity of populations of neurons. By placing an electrode near a neuron that is firing (spiking), it is possible to detect the resulting extracellular action potential (EAP) that is transmitted down an axon to other neurons. In this way, it is possible to monitor the communication of a group of neurons to uncover how they encode and transmit information. As the number of recorded neurons continues to increase, however, so do the data processing and analysis challenges. It is crucial that scalable software and analysis tools are developed and made available to the neuroscience community to keep up with the large amounts of data that are already being gathered. This thesis is composed of three pieces of work which I develop in order to better process and analyze large-scale extracellular recordings. My work spans all stages of extracellular analysis from the processing of raw electrical recordings to the development of statistical models to reveal underlying structure in neural population activity. In the first work, I focus on developing software to improve the comparison and adoption of different computational approaches for spike sorting. When analyzing neural recordings, most researchers are interested in the spiking activity of individual neurons, which must be extracted from the raw electrical traces through a process called spike sorting. Much development has been directed towards improving the performance and automation of spike sorting. This continuous development, while essential, has contributed to an over-saturation of new, incompatible tools that hinders rigorous benchmarking and complicates reproducible analysis. To address these limitations, I develop SpikeInterface, an open-source, Python framework designed to unify preexisting spike sorting technologies into a single toolkit and to facilitate straightforward benchmarking of different approaches. 
With this framework, I demonstrate that modern, automated spike sorters have low agreement when analyzing the same dataset, i.e., they find different numbers of neurons with different activity profiles. This result holds true for a variety of simulated and real datasets. Also, I demonstrate that utilizing a consensus-based approach to spike sorting, where the outputs of multiple spike sorters are combined, can dramatically reduce the number of falsely detected neurons. In the second work, I focus on developing an unsupervised machine learning approach for determining the source location of individually detected spikes that are recorded by high-density microelectrode arrays. By localizing the source of individual spikes, my method is able to determine the approximate position of the recorded neurons in relation to the microelectrode array. To allow my model to work with large-scale datasets, I utilize deep neural networks, a family of machine learning algorithms that can be trained to approximate complicated functions in a scalable fashion. I evaluate my method on both simulated and real extracellular datasets, demonstrating that it is more accurate than other commonly used methods. Also, I show that location estimates for individual spikes can be utilized to improve the efficiency and accuracy of spike sorting. After training, my method allows for localization of one million spikes in approximately 37 seconds on a TITAN X GPU, enabling real-time analysis of massive extracellular datasets. In my third and final presented work, I focus on developing an unsupervised machine learning model that can uncover patterns of activity from neural populations associated with a behaviour being performed. Specifically, I introduce Targeted Neural Dynamical Modelling (TNDM), a statistical model that jointly models the neural activity and any external behavioural variables. TNDM decomposes neural dynamics (i.e.
temporal activity patterns) into behaviourally relevant and behaviourally irrelevant dynamics; the behaviourally relevant dynamics constitute all activity patterns required to generate the behaviour of interest, while behaviourally irrelevant dynamics may be completely unrelated (e.g., other behavioural or brain states) or even related to behaviour execution (e.g., dynamics that are associated with behaviour generally but are not task-specific). Again, I implement TNDM using a deep neural network to improve its scalability and expressivity. On synthetic data and on real recordings from the premotor (PMd) and primary motor (M1) cortices of a monkey performing a center-out reaching task, I show that TNDM is able to extract low-dimensional neural dynamics that are highly predictive of behaviour without sacrificing its fit to the neural data.

    The problem of hyperbolic discounting


    Essays in cryptocurrencies’ forecasting and trading with technical analysis and advanced machine learning methods

    This thesis mainly emphasizes two prediction fields in the cryptocurrency market: factor analysis and model examination. The first section (Chapters 1-2) summarizes the general introduction, the theoretical background, and the performance metrics used in the empirical studies (Chapters 3-5). Then, in Chapters 3 and 4, technical analysis and fundamental factors combined with statistical models are employed to explore forecasting ability and profitability in the cryptocurrency market. Finally, in Chapter 5, advanced machine learning algorithms combined with leverage trading strategies and narrative sentiments are used to predict the Bitcoin (BTC) market. Chapter 3 examines the profitability and predictive power of technical analysis on cryptocurrency markets. This chapter adopts the universe of technical rules proposed by Sullivan et al. (1999), while to control for data snooping I apply the Lucky Factors (LF) method proposed by Harvey and Liu (2021). Six mainstream cryptocurrencies and one cryptocurrency index from 2013 to 2018 are examined. The results demonstrate that short-term signals generated by technical rules outperform the traditional buy-and-hold strategy. However, the LF methodology shows that the profitability of none of the top-performing rules is consistent with genuine forecasting performance. The purpose of Chapter 4 is to investigate the prediction of cryptocurrency returns by applying a large pool of factors from both technical and fundamental aspects. The results find that most trading rules perform better than the buy-and-hold strategy, especially the moving average rules. However, this profitability may not be genuine but may instead come from data-snooping bias. Accordingly, a larger pool of factors from several aspects, including blockchain information, technical indicators, online sentiment indices, and conventional financial and economic indicators, is implemented from 08/08/2015 to 08/12/2018.
The overall results suggest that the newly proposed technical indicator, the Log-price Moving Average (PMA) ratio, a moving-average-like ratio, has significant forecasting ability in cryptocurrencies after taking data-snooping bias into account. Chapter 5 explores the forecasting ability of machine learning (ML) algorithms in the BTC market by combining narrative sentiments and a leverage trading strategy. First, the forecasting framework starts by selecting a pool of individual models. Secondly, ML algorithms are used to further improve the predictive performance of the individual model pool. Thirdly, both the best single predictor and the ML models are subjected to a forecasting-ability examination constructed from three different metrics; this step also takes data-snooping bias into account. Finally, leverage trading strategies combined with narrative sentiments are applied to all forecasting models to examine their profitability. The results suggest that ML models consistently outperform the best individual model in both forecasting ability and profitability, with the Gradient Boosted Decision Tree (GBDT) family performing best.
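    The flavour of the moving-average rules examined in Chapters 3 and 4 can be sketched as a toy backtest: go long when the latest price sits above its n-day moving average, stay in cash otherwise, then compare against buy-and-hold. The price series, window length, and exact rule below are illustrative assumptions, not the thesis's data or its PMA ratio.

```python
# Toy moving-average trading rule vs. buy-and-hold (all values invented).

def moving_average_signal(prices, window):
    """1 = long, 0 = cash; the signal for day i uses only data up to day i-1."""
    signals = [0] * len(prices)
    for i in range(window, len(prices)):
        ma = sum(prices[i - window:i]) / window  # MA of the window ending at i-1
        signals[i] = 1 if prices[i - 1] > ma else 0
    return signals

def cumulative_return(prices, signals):
    """Compound the daily returns earned while the signal is long."""
    ret = 1.0
    for i in range(1, len(prices)):
        if signals[i]:
            ret *= prices[i] / prices[i - 1]
    return ret - 1

prices = [100, 104, 102, 107, 110, 108, 115, 120, 118, 125]
signals = moving_average_signal(prices, window=3)
buy_hold = prices[-1] / prices[0] - 1

print(f"MA rule: {cumulative_return(prices, signals):.3f}, "
      f"buy & hold: {buy_hold:.3f}")
```

    A backtest like this, run over thousands of candidate rules, is exactly where data-snooping bias arises: some rule will look profitable by chance, which is what the Lucky Factors correction guards against.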

    Abstracts


    Synthetic modifications of metal organic framework adsorbents for environmental remediation

    The widespread use of organic chemicals has led to an unprecedented level of pollution and associated health-risk concerns. Although activated carbon (AC) based adsorption is commonly used in wastewater treatment and air purification processes, new metal-organic framework (MOF) adsorbents with superior surface areas, chemically tuneable structures, and excellent reusability are now available. This thesis evaluates the application of selected MOFs and their modified variants for removing organic pollutants from aqueous and humid air streams, reporting adsorption capacity and kinetics. For the removal of the aqueous-phase pollutant 2-chlorophenol, MIL-101 (Cr), despite its higher surface area and even with improved surface amination, gave inferior adsorption capacity compared to the hydrophobic AC, indicating the importance of a MOF's hydrophobicity. A hydrophobic MIL-101 (Cr) was synthesized using a PDMS vapour coating protocol, creating a new material with the same surface area and pore volume as pristine MIL-101 (Cr). For co-adsorption of 0.5% toluene P/P0 vapour at 40% RH, this composite showed a 60% higher uptake capacity and a 34% higher aggregate adsorption rate compared to pristine MIL-101 (Cr), and 360% faster kinetics relative to AC. A solution-based treatment for MIL-100 (Fe) was developed using calixarene, producing superhydrophobic surfaces, which at 40% RH and 0.5% toluene P/P0 exhibited a 68% higher aggregate uptake rate, despite having lower pore volumes and surface areas. Finally, MIL-96 (Al) was modified using hydrolysed polyacrylamide polymer, which enlarged the 3.2 Όm particles by 225% and transformed their crystal morphology. The polymer also contains amide NH2 moieties, which improved the modified MIL-96 (Al)'s uptake capacity for perfluorooctanoic acid. Overall, this thesis highlights the complexity of co-adsorption when hydrophobic and hydrophilic adsorbates are both present.
It also underscores the importance of adsorption kinetics, in contrast with current MOF research emphases on surface area and adsorption capacity. For some industrial applications, an adsorbent with faster adsorption kinetics may be preferred over a slow-diffusing one, even if the latter's final uptake capacity is superior.
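    The capacity-versus-kinetics trade-off noted above can be illustrated with the standard pseudo-first-order uptake model, q(t) = q_e (1 - exp(-k t)). Both adsorbents and their parameters below are hypothetical values chosen for illustration, not measurements from this thesis.

```python
# Toy illustration of the capacity-vs-kinetics trade-off using the
# pseudo-first-order uptake model q(t) = q_e * (1 - exp(-k * t)).
import math

def uptake(q_e, k, t):
    """Pseudo-first-order uptake (mg/g) at contact time t (min)."""
    return q_e * (1 - math.exp(-k * t))

# Adsorbent A: lower equilibrium capacity but fast kinetics (hypothetical)
# Adsorbent B: higher capacity but slow diffusion (hypothetical)
for t in (5, 30, 240):
    qa = uptake(q_e=80, k=0.20, t=t)
    qb = uptake(q_e=120, k=0.01, t=t)
    print(f"t={t:>3} min: A={qa:6.1f} mg/g, B={qb:6.1f} mg/g")

# At short contact times the fast adsorbent A removes more pollutant even
# though B's equilibrium capacity is 50% higher; only at long times does B win.
```

    In a process with short residence times (e.g., a flow-through column), the fast-kinetics adsorbent wins despite its lower equilibrium capacity, which is the practical point the thesis makes.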