Investigating Factors Affecting Electronic Word-Of-Mouth In The Open Market Context: A Mixed Methods Approach
Electronic Word-of-Mouth (eWOM) has been identified as one of the key factors affecting online sales. There has been, however, a lack of understanding of the factors leading to eWOM in the open market context. As many Internet vendors have adopted the open market business model, it is essential to understand the factors leading to eWOM for the success of an open market business. This study investigates factors affecting eWOM in the open market context based on a sequential combination of qualitative and quantitative research methods. The exploratory findings of the qualitative study form the basis of the quantitative study, a survey. The findings from the mixed methods show the significance of three new factors (information sharing desire, self-presentation desire, and open market reward) and two other factors (open market satisfaction and open market loyalty) affecting eWOM directly and indirectly. This study contributes to research by adding to the broader literature on eWOM. The findings can also inform open market providers on how to promote and manage eWOM for their online business success.
Revisiting the Hybrid attack on sparse and ternary secret LWE
In the practical use of Learning With Errors (LWE) based cryptosystems,
it is quite common to choose the secret to be extremely small:
one popular choice is a ternary coefficient vector,
and some schemes further use a ternary vector with only a small number of nonzero coefficients,
a so-called sparse and ternary vector.
This use of small secrets also benefits attack algorithms against LWE,
and LWE-based cryptosystems, including homomorphic encryption (HE) schemes, currently set parameters
based on the complexity of those improved attacks.
In this work, we revisit Howgrave-Graham's well-known hybrid attack, which was originally designed to solve the NTRU problem, in the sparse and ternary secret LWE case,
and also refine the previous analysis of the hybrid attack in line with the LWE setting.
Moreover, based on our analysis, we estimate the attack complexity of the hybrid attack for several LWE parameter sets.
As a result, we argue that the currently used HE parameters should be raised to maintain the same security level once the hybrid attack is considered;
for example, a parameter set with a fixed Hamming weight of the secret key,
which was estimated to satisfy a given bit-security level by the previously considered attacks,
is newly estimated to provide strictly lower bit-security under the hybrid attack.
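For reference, the sparse-ternary LWE setting the abstract refers to can be written out as follows (the notation n, m, q, h is assumed here, not taken from the paper):

```latex
% Sparse ternary secret LWE (notation assumed, not from the abstract):
% an attacker sees samples (A, b) with
\[
  b = A s + e \pmod{q}, \qquad A \in \mathbb{Z}_q^{m \times n},
\]
% where the secret is ternary and sparse:
\[
  s \in \{-1, 0, 1\}^n, \qquad
  \operatorname{wt}(s) = \#\{\, i : s_i \neq 0 \,\} = h \ll n,
\]
% and e is a short error vector. The hybrid attack splits the coordinates,
% guessing the nonzero pattern on one part (meet-in-the-middle) while running
% lattice reduction on the remainder; sparsity makes the guessing step cheap.
```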
Practical FHE parameters against lattice attacks
We give secure parameter suggestions for using sparse secret vectors in LWE-based encryption schemes. These should replace the existing security parameters, because homomorphic encryption (HE) schemes use quite different variables from the existing parameter sets. In particular, HE schemes using sparse secrets should be supported by experimental analysis; here we summarize the existing attacks to be considered and the security level against each attack. Based on the analysis and experiments, we compute optimal scaling factors for CKKS.
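To make the role of the CKKS scaling factor concrete, here is a minimal sketch of the fixed-point encoding it controls. This is an illustration of the general CKKS encoding idea, not the paper's parameter-optimisation procedure, and the choice Delta = 2^40 is a hypothetical example:

```python
# Minimal sketch (assumption: illustrating the CKKS scaling factor, not the
# paper's actual computation). CKKS encodes a real message m as the integer
# round(Delta * m); a larger Delta reduces rounding error but consumes more
# of the ciphertext modulus, which is why the scaling factor must be tuned
# against the security parameters.

DELTA = 2 ** 40  # example scaling factor (hypothetical choice)

def encode(m: float, delta: int = DELTA) -> int:
    """Scale a real message up and round to the nearest integer."""
    return round(m * delta)

def decode(c: int, delta: int = DELTA) -> float:
    """Scale an encoded integer back down to a real approximation."""
    return c / delta

m = 3.141592653589793
err = abs(decode(encode(m)) - m)
print(err)  # rounding error is at most 1/(2 * DELTA)
```

The trade-off shown here is what the optimal scaling factors in the abstract balance: precision of the encoded plaintext versus the modulus budget available at a given security level.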
Disentangled representation learning for multilingual speaker recognition
The goal of this paper is to learn robust speaker representations for the
bilingual speaking scenario. The majority of the world's population speak at
least two languages; however, most speaker recognition systems fail to
recognise the same speaker when speaking in different languages.
Popular speaker recognition evaluation sets do not consider the bilingual
scenario, making it difficult to analyse the effect of bilingual speakers on
speaker recognition performance. In this paper, we publish a large-scale
evaluation set named VoxCeleb1-B derived from VoxCeleb that considers bilingual
scenarios.
We introduce an effective disentanglement learning strategy that combines
adversarial and metric learning-based methods. This approach addresses the
bilingual situation by disentangling language-related information from speaker
representation while ensuring stable speaker representation learning. Our
language-disentangled learning method only uses language pseudo-labels without
manual information. Comment: Interspeech 202
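One common building block for the adversarial part of such a disentanglement strategy is a gradient-reversal layer between the speaker encoder and a language classifier. The sketch below illustrates that mechanism in isolation; it is an assumption about a standard technique, not the paper's actual implementation, and the function names and the weight `lam` are hypothetical:

```python
import numpy as np

# Minimal sketch (assumption: gradient-reversal-style adversarial training,
# not the paper's code). Speaker features pass unchanged to a language
# classifier in the forward pass; in the backward pass the gradient is
# sign-flipped (and scaled by lam), so the encoder is pushed to REMOVE the
# language information the classifier exploits, while a separate metric-
# learning loss keeps the speaker representation stable.

def grl_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass: identity."""
    return x

def grl_backward(grad_output: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Backward pass: reverse and scale the upstream gradient."""
    return -lam * grad_output

# Toy check: upstream gradient arriving from the language classifier.
g = np.array([0.5, -0.2, 0.1])
print(grl_backward(g, lam=0.5))  # gradient actually seen by the encoder
```

In a full training loop this layer sits between the encoder and the language head, so the language pseudo-labels mentioned in the abstract can supervise the classifier without any manual annotation.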
- …