A proposed NFC payment application
This article has been made available through the Brunel Open Access Publishing Fund and is distributed under the Creative Commons Attribution License.

Near Field Communication (NFC) technology is based on a short-range radio communication channel that enables users to exchange data between devices. With NFC technology, mobile services can establish a contactless transaction system that makes payment easier for people. Although NFC mobile services have great potential for growth, they have raised several issues that concern researchers and have prevented the adoption of this technology within societies. Reorganizing and describing what is required for the success of this technology motivated us to extend the current NFC ecosystem models to accelerate the development of this business area. In this paper, we introduce a new NFC payment application, based on our previous "NFC Cloud Wallet" model [1], to demonstrate a reliable structure for the NFC ecosystem. We also describe the step-by-step execution of the proposed protocol in order to carefully analyse the payment application; our main focus is on the Mobile Network Operator (MNO) as the main player within the ecosystem.
Trusted integration of cloud-based NFC transaction players
Near Field Communication (NFC) is a short-range wireless technology that provides contactless transmission of data between devices. With an NFC-enabled device, users can exchange information from one device to another, make payments, and use their NFC-enabled device as their identity. As the main payment ecosystem players, such as service providers and secure element issuers, have crucial roles in a multi-application mobile environment such as NFC, managing such an environment has become very challenging. One of the technologies that can be used to ensure secure NFC transactions is cloud computing, which offers a wide range of advantages compared to the use of a Secure Element (SE) as a single entity in an NFC-enabled phone. This approach gives the cloud provider comprehensive control over managing customers' information, while allowing the SE stored within an NFC phone to handle authentication mechanisms rather than storing and managing sensitive transaction information. This paper discusses the NFC Cloud Wallet model, which we proposed previously [1], and introduces a different insight that defines a new integrated framework based on a trusted relationship between the vendor and the Mobile Network Operator (MNO). We then analyse this relationship to investigate the different possibilities that arise from this approach.
Mobile transactions over NFC and GSM
Dynamic relationships between Near Field Communication (NFC) ecosystem players in a monetary transaction make them partners in a way that they sometimes need to share access permissions to applications running in the service environment. One of the technologies that can be used to ensure secure NFC transactions is cloud computing. This offers a wider range of advantages than the use of only a Secure Element (SE) in an NFC-enabled mobile phone. In this paper, we propose a protocol for mobile payments over NFC using Global System for Mobile Communications (GSM) authentication. In our protocol, the SE in the mobile device is used for customer authentication, whereas the customer's banking credentials are stored in a cloud under the control of the Mobile Network Operator (MNO). The proposed protocol eliminates the requirement for a shared secret between the Point of Sale (PoS) and the MNO before execution of the protocol, a mandatory requirement in the earlier version of this protocol. This elimination makes the protocol more practicable and user friendly. A detailed analysis of the protocol discusses multiple attack scenarios.
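The GSM-based customer authentication at the heart of such a protocol can be illustrated as a challenge-response exchange between the SE and the MNO. The sketch below is a simplification, not the paper's protocol: HMAC-SHA256 stands in for the operator-specific GSM A3 algorithm, and the key and challenge sizes are illustrative assumptions.

```python
import hashlib
import hmac
import os

def challenge_response(ki: bytes, rand: bytes) -> bytes:
    # Stand-in for the GSM A3 algorithm: derive a signed response (SRES)
    # from the subscriber key Ki and the network challenge RAND.
    # Real GSM networks use operator-chosen algorithms (e.g. MILENAGE).
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]

ki = os.urandom(16)    # subscriber key, known only to the SE and the MNO
rand = os.urandom(16)  # fresh challenge issued by the MNO

sres_se = challenge_response(ki, rand)   # computed inside the SE
sres_mno = challenge_response(ki, rand)  # recomputed by the MNO

# The MNO authenticates the customer by comparing the two responses;
# no secret ever needs to be shared with the PoS beforehand.
authenticated = hmac.compare_digest(sres_se, sres_mno)
```

Because only the SE and the MNO hold `ki`, the PoS can relay the exchange without being trusted with any key material, which is the property the protocol exploits.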
ChIP-Array: Combinatory analysis of ChIP-seq/chip and microarray gene expression data to discover direct/indirect targets of a transcription factor
Chromatin immunoprecipitation (ChIP) coupled with high-throughput techniques (ChIP-X), such as next generation sequencing (ChIP-Seq) and microarray (ChIP-chip), has been successfully used to map active transcription factor binding sites (TFBS) of a transcription factor (TF). The targeted genes can be activated or suppressed by the TF, or are unresponsive to the TF. Microarray technology has been used to measure the actual expression changes of thousands of genes under the perturbation of a TF, but is unable to determine if the affected genes are direct or indirect targets of the TF. Furthermore, both ChIP-X and microarray methods produce a large number of false positives. Combining microarray expression profiling and ChIP-X data allows more effective TFBS analysis for studying the function of a TF. However, current web servers only provide tools to analyze either ChIP-X or expression data, but not both. Here, we present ChIP-Array, a web server that integrates ChIP-X and expression data from human, mouse, yeast, fruit fly and Arabidopsis. This server will assist biologists to detect direct and indirect target genes regulated by a TF of interest and to aid in the functional characterization of the TF. ChIP-Array is available at http://jjwanglab.hku.hk/ChIP-Array, with free access to academic users. © 2011 The Author(s).
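The core integration idea, intersecting ChIP-X binding evidence with microarray expression changes, can be sketched with simple set operations. The gene names below are hypothetical, and this is only a first approximation: the real ChIP-Array server additionally uses regulatory network information to trace indirect targets.

```python
# Genes with a TFBS peak near their promoter (from ChIP-Seq/ChIP-chip):
bound = {"GENE_A", "GENE_B", "GENE_C"}
# Differentially expressed genes under TF perturbation (from microarray):
responsive = {"GENE_B", "GENE_C", "GENE_D"}

direct = bound & responsive        # bound by the TF and responsive
indirect = responsive - bound      # responsive but not bound: candidate indirect targets
unresponsive = bound - responsive  # bound but expression unchanged

print(sorted(direct))  # ['GENE_B', 'GENE_C']
```

Requiring both lines of evidence for a "direct target" call is also what suppresses the false positives that each assay produces on its own.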
Two- and three-point functions in two-dimensional Landau-gauge Yang-Mills theory: Continuum results
We investigate the Dyson-Schwinger equations for the gluon and ghost propagators and the ghost-gluon vertex of Landau-gauge gluodynamics in two dimensions. While this simplifies some aspects of the calculations as compared to three and four dimensions, new complications arise due to a mixing of different momentum regimes. As a result, the solutions for the propagators are more sensitive to changes in the three-point functions and the ansätze used for them at the leading order in a vertex expansion. Here, we therefore go beyond this common truncation by including the ghost-gluon vertex self-consistently for the first time, while using a model for the three-gluon vertex which reproduces the known infrared asymptotics and the zeros at intermediate momenta as observed on the lattice. A separate computation of the three-gluon vertex from the results is used to confirm the stability of this behavior a posteriori. We also present further arguments for the absence of the decoupling solution in two dimensions. Finally, we show how in general the infrared exponent kappa of the scaling solutions in two, three and four dimensions can be changed by allowing an angle dependence and thus an essential singularity of the ghost-gluon vertex in the infrared.

Comment: 24 pages; added references, improved choices of parameters for vertex models; identical to version published in JHEP
Application of Deep Learning Long Short-Term Memory in Energy Demand Forecasting
The smart metering infrastructure has changed how electricity is measured in both residential and industrial applications. The large amount of data collected by smart meters each day provides huge potential for analytics to support the operation of a smart grid, an example of which is energy demand forecasting. Short-term energy forecasting can be used by utilities to assess whether any forecasted peak energy demand would have an adverse effect on the power system transmission and distribution infrastructure. It can also help in load scheduling and demand-side management. Many techniques have been proposed to forecast time series, including Support Vector Machines, Artificial Neural Networks and Deep Learning. In this work we use a Long Short-Term Memory architecture to forecast 3-day-ahead energy demand across each month in the year. The results show that 3-day-ahead demand can be accurately forecasted with a Mean Absolute Percentage Error of 3.15%. In addition, the paper proposes a way to quantify time as a feature to be used in the training phase, which is shown to affect the network performance.
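The reported accuracy metric, Mean Absolute Percentage Error (MAPE), is straightforward to compute. A minimal sketch with made-up demand values (not data from the paper):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

# Illustrative 3-step demand series (e.g. in MWh) and its forecast:
actual = [100.0, 200.0, 400.0]
forecast = [97.0, 206.0, 388.0]
print(round(mape(actual, forecast), 2))  # → 3.0
```

Because each error is divided by the actual value, MAPE is scale-free, which makes a figure like 3.15% comparable across months with very different demand levels; it does, however, break down when actual values are near zero.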
Distance metric choice can both reduce and induce collinearity in geographically weighted regression
This paper explores the impact of different distance metrics on collinearity in local regression models such as geographically weighted regression. Using a case study of house price data collected in Hà Nội, Vietnam, and by fully varying both power and rotation parameters to create different Minkowski distances, the analysis shows that local collinearity can be both negatively and positively affected by distance metric choice. The Minkowski distance that maximised collinearity in a geographically weighted regression was approximate to a Manhattan distance, with a power of 0.70 and a rotation of 30°, and that which minimised collinearity was parameterised with a power of 0.05 and a rotation of 70°. The results indicate that distance metric choice can provide a useful extra tuning component to address local collinearity issues in spatially varying coefficient modelling, and that understanding the interaction of distance metric and collinearity can provide insight into the nature and structure of the data relationships. The discussion considers, first, the exploration and selection of different distance metrics to minimise collinearity as an alternative to localised ridge regression, lasso and elastic net approaches. Second, it discusses how distance metric choice could extend the methods that additionally optimise local model fit (lasso and elastic net) by selecting a distance metric that further helps minimise local collinearity. Third, it identifies the need to investigate the relationship between kernel bandwidth, distance metrics and collinearity as an area for further work.
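The parameterised distances varied here can be sketched as a Minkowski metric applied after rotating the coordinate frame. The implementation below is an illustrative assumption of how the power and rotation parameters combine, not the authors' code; it also shows why rotation is only a meaningful tuning parameter away from power = 2, since the Euclidean distance is rotation-invariant.

```python
import math

def rotated_minkowski(p1, p2, power, rotation_deg):
    """Rotate the coordinate frame by rotation_deg, then apply the
    Minkowski metric with the given power (p=1 Manhattan, p=2 Euclidean)."""
    theta = math.radians(rotation_deg)

    def rotate(pt):
        x, y = pt
        return (x * math.cos(theta) - y * math.sin(theta),
                x * math.sin(theta) + y * math.cos(theta))

    (x1, y1), (x2, y2) = rotate(p1), rotate(p2)
    return (abs(x1 - x2) ** power + abs(y1 - y2) ** power) ** (1.0 / power)

# Euclidean distance is unchanged by rotation...
d_eucl_0 = rotated_minkowski((0, 0), (3, 4), 2, 0)    # 5.0
d_eucl_30 = rotated_minkowski((0, 0), (3, 4), 2, 30)  # still 5.0
# ...but Manhattan distance is not, so power and rotation interact.
d_man_0 = rotated_minkowski((0, 0), (3, 4), 1, 0)     # 7.0
d_man_30 = rotated_minkowski((0, 0), (3, 4), 1, 30)   # differs from 7.0
```

Sweeping `power` and `rotation_deg` over a grid, as the paper does, traces how the induced weighting of coordinate directions changes the correlation structure seen by the local regression.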
A Personalized Self-Management Rehabilitation System with an Intelligent Shoe for Stroke Survivors: A Realist Evaluation
Background: In the United Kingdom, stroke is the most significant cause of adult disability. Stroke survivors are frequently left with physical and psychological changes that can profoundly affect their functional ability, independence, and social participation. Research suggests that long-term, intense, task- and context-specific rehabilitation that is goal-oriented and environmentally enriched improves function, independence, and quality of life after a stroke. It is recommended that rehabilitation should continue until maximum recovery has been achieved. However, the increasing demand on services and financial constraints mean that needs cannot be met through traditional face-to-face delivery of rehabilitation. Using a participatory design methodology, we developed an information communication technology–enhanced Personalized Self-Managed rehabilitation System (PSMrS) for stroke survivors with integrated insole sensor technology within an "intelligent shoe." The intervention model was based around a rehabilitation paradigm underpinned by theories of motor relearning and neuroplastic adaptation, motivational feedback, self-efficacy, and knowledge transfer.

Objective: To understand the conditions under which this technology-based rehabilitation solution would most likely have an impact on the motor behavior of the user: what would work for whom, in what context, and how. We were interested in which aspects of the system would work best to facilitate the motor behavior change associated with self-managed rehabilitation and which user characteristics and circumstances of use could promote improved functional outcomes.

Methods: We used a Realist Evaluation (RE) framework to evaluate the final prototype PSMrS, with the assumption that the intervention consists of a series of configurations that include the Context of use, the underlying Mechanisms of change, and the potential Outcomes or impacts (CMOs). We developed the CMOs from literature reviews and engagement with clinicians, users, and caregivers during a series of focus groups and home visits. These CMOs were then tested in five in-depth case studies with stroke survivors and their caregivers.

Results: Two new propositions emerged, the second of which, importantly, related to the self-management aspects of the system. The study revealed that the system should also encourage independent use and the setting of personalized goals or activities.

Conclusions: Information communication technology that purports to support the self-management of stroke rehabilitation should give significant consideration to the need for motivational feedback that provides quantitative, reliable, accurate, context-specific, and culturally sensitive information about the achievement of personalized goal-based activities.
A theory-grounded framework of Open Source Software adoption in SMEs
This is a post-peer-review, pre-copyedit version of an article published in the European Journal of Information Systems. The definitive publisher-authenticated version, Macredie, R.D. and Mijinyawa, K. (2011), "A theory-grounded framework of Open Source Software adoption in SMEs", European Journal of Information Systems, 20(2), 237-250, is available online at: http://www.palgrave-journals.com/ejis/journal/v20/n2/abs/ejis201060a.html.

The increasing popularity and use of Open Source Software (OSS) has led to significant interest from research communities and enterprise practitioners, notably in the small business sector, where this type of software offers particular benefits given the financial and human capital constraints faced. However, there has been little focus on developing valid frameworks that enable critical evaluation and common understanding of the factors influencing OSS adoption. This paper seeks to address this shortcoming by presenting a theory-grounded framework for exploring these factors and explaining their influence on OSS adoption, with the context of study being small- to medium-sized Information Technology (IT) businesses in the U.K. The framework has implications for this type of business – and, we will suggest, more widely – as a frame of reference for understanding, and as a tool for evaluating benefits and challenges in, OSS adoption. It also offers researchers a structured way of investigating adoption issues and a base from which to develop models of OSS adoption. The study reported in this paper used the Decomposed Theory of Planned Behaviour (DTPB) as the basis for the research propositions, with the aim of: (i) developing a framework of empirical factors that influence OSS adoption; and (ii) appraising it through case study evaluation with 10 U.K. small- to medium-sized enterprises in the IT sector.

The demonstration of the capabilities of the framework suggests that it is able to provide a reliable explanation of the complex and subjective factors that influence attitudes, subjective norms and control over the use of OSS. The paper further argues that the DTPB proved useful in this research area and that it can provide a variety of situation-specific insights related to factors that influence the adoption of OSS.