
    Nonlinearity and efficiency dynamics of foreign exchange markets: evidence from multifractality and volatility of major exchange rates

    This study investigates the efficiency of the exchange markets for four major currencies: the euro (EUR), the pound (GBP), the Canadian dollar (CAD), and the Japanese yen (JPY), from 2005 to 2019, using multifractal detrended fluctuation analysis (MFDFA). It also investigates the causes of these efficiency levels. All four markets exhibit significant multifractal properties, driven mainly by long-range correlations and fat-tailed distributions. We calculate and compare the multifractal degrees in three subsamples, delimited by two economic events: the 2008 financial crisis and the Federal Reserve's 2014 announcement of its withdrawal from quantitative easing. The empirical results suggest that multifractality exists at different levels across the subsamples, showing that these events affected foreign exchange market efficiency in both the statistical and the fractal-market sense. The JPY exchange market exhibits the weakest multifractal properties, indicating that it has the highest market efficiency among the four. These findings have implications for the nonlinear mechanisms and efficiency of foreign exchange markets, and may help investors manage market risk effectively and support a stable global economy.
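The MFDFA procedure this study relies on can be sketched in a few lines: integrate the series into a profile, detrend it segment-wise at several scales, and average the squared residuals with a q-dependent exponent. The following minimal numpy illustration uses synthetic data; the function name, scales, and q values are illustrative choices, not the study's settings.

```python
import numpy as np

def mfdfa(x, scales, q_list, order=1):
    """Minimal multifractal DFA: returns the fluctuation function F_q(s)."""
    profile = np.cumsum(x - np.mean(x))          # integrated profile
    Fq = np.zeros((len(q_list), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        f2 = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, order)   # local polynomial trend
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        f2 = np.asarray(f2)
        for i, q in enumerate(q_list):
            if q == 0:                           # q = 0 handled via log-average
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq[i, j] = np.mean(f2 ** (q / 2)) ** (1.0 / q)
    return Fq

# The generalized Hurst exponent h(q) is the slope of log F_q(s) vs log s;
# for uncorrelated noise h(2) should come out near 0.5.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
q_list = [-2, 0, 2]
Fq = mfdfa(x, scales, q_list)
h2 = np.polyfit(np.log(scales), np.log(Fq[2]), 1)[0]
print(round(h2, 2))
```

A multifractal series would show h(q) varying strongly with q, which is what the study measures across its subsamples.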

    Differentiable Retrieval Augmentation via Generative Language Modeling for E-commerce Query Intent Classification

    Retrieval augmentation, which enhances a downstream model with a knowledge retriever and an external corpus rather than by merely increasing the number of model parameters, has been applied successfully to many natural language processing (NLP) tasks such as text classification and question answering. However, existing methods train the retriever and the downstream model separately or asynchronously, mainly because of the non-differentiability between the two parts, which usually degrades performance compared with end-to-end joint training. In this paper, we propose Differentiable Retrieval Augmentation via Generative lANguage modeling (Dragan) to address this problem through a novel differentiable reformulation. We demonstrate the effectiveness of the proposed method on a challenging NLP task in e-commerce search, namely query intent classification. Both the experimental results and an ablation study show that the proposed method significantly improves on state-of-the-art baselines in both offline evaluation and an online A/B test.
    Comment: 5 pages, 2 figures; accepted by CIKM202
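The non-differentiability problem the abstract describes can be illustrated with a toy "soft retrieval" step: a hard top-1 lookup (an argmax) blocks gradients, whereas a softmax over query-document similarities yields a differentiable weighted sum over the corpus. This numpy sketch is only an illustration of that general idea, not the paper's generative reformulation; all names and dimensions are made up.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def soft_retrieve(query_emb, doc_embs, temperature=0.1):
    """Differentiable retrieval: softmax attention over corpus embeddings
    instead of a hard, non-differentiable top-1 lookup."""
    sims = doc_embs @ query_emb            # similarity of query to each doc
    weights = softmax(sims / temperature)  # differentiable "selection"
    context = weights @ doc_embs           # soft retrieved context vector
    return weights, context

rng = np.random.default_rng(1)
docs = rng.standard_normal((8, 16))               # toy corpus of 8 embeddings
query = docs[3] + 0.01 * rng.standard_normal(16)  # query close to doc 3
w, ctx = soft_retrieve(query, docs)
print(round(float(w.sum()), 6))
```

Because every step is a smooth function of the embeddings, gradients from a downstream classifier can flow back into the retriever, which is the property end-to-end joint training needs.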

    MedGen3D: A Deep Generative Framework for Paired 3D Image and Mask Generation

    Acquiring and annotating sufficient labeled data is crucial in developing accurate and robust learning-based models, but obtaining such data can be challenging in many medical image segmentation tasks. One promising solution is to synthesize realistic data with ground-truth mask annotations. However, no prior studies have explored generating complete 3D volumetric images with masks. In this paper, we present MedGen3D, a deep generative framework that can generate paired 3D medical images and masks. First, we represent the 3D medical data as 2D sequences and propose the Multi-Condition Diffusion Probabilistic Model (MC-DPM) to generate multi-label mask sequences adhering to anatomical geometry. Then, we use an image sequence generator and semantic diffusion refiner conditioned on the generated mask sequences to produce realistic 3D medical images that align with the generated masks. Our proposed framework guarantees accurate alignment between synthetic images and segmentation maps. Experiments on 3D thoracic CT and brain MRI datasets show that our synthetic data is both diverse and faithful to the original data, and demonstrate the benefits for downstream segmentation tasks. We anticipate that MedGen3D's ability to synthesize paired 3D medical images and masks will prove valuable in training deep learning models for medical imaging tasks.
    Comment: Submitted to MICCAI 2023. Project Page: https://krishan999.github.io/MedGen3D

    Diffeomorphic Image Registration with Neural Velocity Field

    Diffeomorphic image registration, offering smooth transformation and topology preservation, is required in many medical image analysis tasks. Traditional methods impose certain modeling constraints on the space of admissible transformations and use optimization to find the optimal transformation between two images. Specifying the right space of admissible transformations is challenging: the registration quality can be poor if the space is too restrictive, while the optimization can be hard to solve if the space is too general. Recent learning-based methods, which use deep neural networks to learn the transformation directly, achieve fast inference but face accuracy challenges, owing to the difficulty of capturing small local deformations and to limited generalization ability. Here we propose a new optimization-based method named DNVF (Diffeomorphic Image Registration with Neural Velocity Field), which uses a deep neural network to model the space of admissible transformations. A multilayer perceptron (MLP) with sinusoidal activation functions represents the continuous velocity field, assigning a velocity vector to every point in space and providing both the flexibility to model complex deformations and the convenience of optimization. Moreover, we propose a cascaded image registration framework (Cas-DNVF) that combines the benefits of optimization-based and learning-based methods: a fully convolutional neural network (FCN) is trained to predict the initial deformation, followed by DNVF for further refinement. Experiments on two large-scale 3D MR brain scan datasets demonstrate that our proposed methods significantly outperform state-of-the-art registration methods.
    Comment: WACV 202
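The core representation here, an MLP with sinusoidal activations that maps a spatial coordinate to a velocity vector, which is then integrated to obtain the deformation, can be sketched compactly. The layer sizes, random (untrained) weights, and simple forward-Euler integration below are illustrative assumptions, not the DNVF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny sinusoidal MLP: 3D coordinate -> 3D velocity (random, untrained weights)
W1 = rng.standard_normal((3, 64)) * 3.0   # frequency-like first-layer scaling
b1 = rng.standard_normal(64)
W2 = rng.standard_normal((64, 3)) * 0.01  # small output scale -> small velocities

def velocity(p):
    """Continuous velocity field v(p) parameterized by the sinusoidal MLP."""
    return np.sin(p @ W1 + b1) @ W2

def deform(points, n_steps=8):
    """Integrate the stationary velocity field with forward Euler;
    small, smooth velocities keep the map close to diffeomorphic."""
    p = points.copy()
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        p = p + dt * velocity(p)
    return p

pts = rng.uniform(-1, 1, size=(100, 3))   # sample points in the image domain
warped = deform(pts)
print(warped.shape)
```

In the actual method the MLP weights are the optimization variables, updated so that the warped moving image matches the fixed image; here the point is only that the field is defined continuously at every coordinate.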

    Hybrid-CSR: Coupling Explicit and Implicit Shape Representation for Cortical Surface Reconstruction

    We present Hybrid-CSR, a geometric deep-learning model that combines explicit and implicit shape representations for cortical surface reconstruction. Specifically, Hybrid-CSR begins with explicit deformations of template meshes to obtain coarsely reconstructed cortical surfaces, from which oriented point clouds are estimated for the subsequent differentiable Poisson surface reconstruction. In doing so, our method unifies explicit (oriented point clouds) and implicit (indicator function) cortical surface reconstruction. Compared with explicit representation-based methods, our hybrid approach is better suited to capturing detailed structures; compared with implicit representation-based methods, it can be topology-aware thanks to end-to-end training with a mesh-based deformation module. To address topology defects, we propose a new topology correction pipeline that relies on optimization-based diffeomorphic surface registration. Experimental results on three brain datasets show that our approach surpasses existing implicit and explicit cortical surface reconstruction methods in numeric metrics of accuracy, regularity, and consistency.

    A mobile anchor assisted localization algorithm based on regular hexagon in wireless sensor networks

    Localization is one of the key technologies in wireless sensor networks (WSNs), since it provides fundamental support for many location-aware protocols and applications. Cost and power-consumption constraints make it infeasible to equip every sensor node with a global positioning system (GPS) unit, especially in large-scale WSNs. A promising method to localize unknown nodes is to use several mobile anchors that are equipped with GPS units, move among the unknown nodes, and periodically broadcast their current locations to help nearby unknown nodes localize themselves. This paper proposes a mobile anchor assisted localization algorithm based on regular hexagon (MAALRH) for two-dimensional WSNs, which can cover the whole monitoring area with a boundary compensation method. Unknown nodes calculate their positions by trilateration. We compare the MAALRH with the HILBERT, CIRCLES, and S-CURVES algorithms in terms of localization ratio, localization accuracy, and path length. Simulations show that the MAALRH achieves high localization ratio and localization accuracy when the communication range is not smaller than the trajectory resolution.
    The work is supported by the Natural Science Foundation of Jiangsu Province of China, no. BK20131137; the Applied Basic Research Program of Nantong Science and Technology Bureau, no. BK2013032; and the Guangdong University of Petrochemical Technology's Internal Project, no. 2012RC0106. Jaime Lloret's work has been partially supported by the "Ministerio de Ciencia e Innovacion" through the "Plan Nacional de I+D+i 2008-2011," "Subprograma de Proyectos de Investigacion Fundamental," Project TEC2011-27516. Joel J. P. C. Rodrigues's work has been supported by "Instituto de Telecomunicacoes," Next Generation Networks and Applications Group (NetGNA), Covilha Delegation, and by national funding from the Fundacao para a Ciencia e a Tecnologia (FCT) through the Pest-OE/EEI/LA0008/2013 Project.
    Han, G.; Zhang, C.; Lloret, J.; Shu, L.; Rodrigues, J. J. P. C. (2014). A mobile anchor assisted localization algorithm based on regular hexagon in wireless sensor networks. The Scientific World Journal. https://doi.org/10.1155/2014/219371
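Trilateration, which the unknown nodes use once they have collected three anchor positions and range estimates, reduces to a small linear system after subtracting the first range equation from the others. A minimal sketch follows; the anchor coordinates and distances are made up for illustration, and least squares is used so that extra or noisy ranges are handled the same way.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 2D position from >= 3 anchor positions and range estimates.
    Subtracting the first range equation from the others linearizes the problem:
        2*(xi - x1)*x + 2*(yi - y1)*y
            = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x1, y1 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + anchors[1:, 0] ** 2 - x1 ** 2
         + anchors[1:, 1] ** 2 - y1 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)  # least squares absorbs noise
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # three anchor broadcast points
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))                # recovers (3, 4)
```

The quality of the result depends on anchor geometry, which is why the paper's hexagonal trajectory aims to give every unknown node three well-separated, non-collinear anchor positions.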

    Multimodal transformer augmented fusion for speech emotion recognition

    Speech emotion recognition is challenging due to the subjectivity and ambiguity of emotion. In recent years, multimodal methods for speech emotion recognition have achieved promising results. However, due to the heterogeneity of data from different modalities, effectively integrating different modal information remains a difficulty and a key point of the research. Moreover, given the limitations of feature-level and decision-level fusion, capturing fine-grained modal interactions has often been neglected in previous studies. We propose a method named multimodal transformer augmented fusion that uses a hybrid fusion strategy, combining feature-level and model-level fusion, to perform fine-grained information interaction within and between modalities. A model-fusion module composed of three Cross-Transformer encoders is proposed to generate a multimodal emotional representation for modal guidance and information fusion. Specifically, the multimodal features obtained by feature-level fusion and the text features are used to enhance the speech features. Our proposed method outperforms existing state-of-the-art approaches on the IEMOCAP and MELD datasets.
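The cross-transformer idea, in which one modality's features attend over another's, boils down to scaled dot-product attention with queries from one stream and keys/values from the other. A single-head numpy sketch is below; the dimensions and stream names are arbitrary illustrations, not the paper's exact module.

```python
import numpy as np

def cross_attention(queries, keys_values):
    """Single-head cross-modal attention: `queries` (e.g. speech frames)
    attend over `keys_values` (e.g. text token features)."""
    d_k = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d_k)       # (Tq, Tk) scores
    scores = scores - scores.max(axis=-1, keepdims=True)  # stable softmax
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=-1, keepdims=True)        # rows sum to 1
    return attn @ keys_values, attn                       # fused features

rng = np.random.default_rng(0)
speech = rng.standard_normal((50, 32))   # 50 speech frames, feature dim 32
text = rng.standard_normal((12, 32))     # 12 text tokens, feature dim 32
fused, attn = cross_attention(speech, text)
print(fused.shape, attn.shape)
```

In a full module the inputs would first pass through learned query/key/value projections; the point here is that each speech frame ends up enhanced by a text-conditioned summary, which is the "modal guidance" role the abstract describes.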