451 research outputs found

    Latent Semantic Diffusion-based Channel Adaptive De-Noising SemCom for Future 6G Systems

    Full text link
    Compared with the current paradigm of Shannon's Classical Information Theory (CIT), semantic communication (SemCom) has recently attracted more attention, since it aims to transmit the meaning of information rather than performing bit-by-bit transmission, thus enhancing data transmission efficiency and supporting future human-centric, data- and resource-intensive intelligent services in 6G systems. Nevertheless, channel noise is common and can be severe in 6G-empowered scenarios, limiting the communication performance of SemCom, especially when the Signal-to-Noise Ratio (SNR) levels during the training and deployment stages differ; yet training multiple networks to cover a broad range of SNRs is computationally inefficient. Hence, we develop a novel De-Noising SemCom (DNSC) framework, in which the designed de-noiser module eliminates noise interference from semantic vectors. Upon the designed DNSC architecture, we further combine adversarial learning, a variational autoencoder, and a diffusion model to propose the Latent Diffusion DNSC (Latent-Diff DNSC) scheme for intelligent online de-noising. During the offline training phase, noise is added to latent semantic vectors in a forward Markov diffusion manner and then eliminated in a reverse diffusion manner through the posterior distribution approximated by the U-shaped Network (U-Net), where the semantic de-noiser is optimized by maximizing the evidence lower bound (ELBO). Such a design can model real noisy channel environments with various SNRs and enables adaptive noise removal from noisy semantic vectors during the online transmission phase. Simulations on open-source image datasets demonstrate the superiority of the proposed Latent-Diff DNSC scheme in PSNR and SSIM across different SNRs over state-of-the-art schemes, including JPEG, Deep JSCC, and ADJSCC. Comment: 6 pages, 7 figures
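The forward Markov diffusion step the abstract describes can be sketched as follows. This is a minimal illustration assuming a standard DDPM-style noise schedule; the variable names (`betas`, `alpha_bar`, `forward_diffuse`) and the latent dimension are hypothetical, not taken from the paper.

```python
import numpy as np

# Linear beta schedule; alpha_bar is the cumulative product of (1 - beta_t).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def forward_diffuse(z0, t, alpha_bar, rng):
    """Closed-form forward step: q(z_t | z_0) = N(sqrt(abar_t) z_0, (1 - abar_t) I).

    Gaussian noise is added to the latent semantic vector z0, mimicking
    channel noise at a level controlled by the diffusion step t."""
    eps = rng.standard_normal(z0.shape)
    zt = np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return zt, eps

rng = np.random.default_rng(0)
z0 = rng.standard_normal(64)  # hypothetical latent semantic vector from the encoder
zt, eps = forward_diffuse(z0, t=50, alpha_bar=alpha_bar, rng=rng)
```

During training, a U-Net would be optimized to predict `eps` from `zt`, so that at deployment the reverse process can strip channel noise from received latents regardless of the SNR seen at training time.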

    Efficient Gaussian Process Classification-based Physical-Layer Authentication with Configurable Fingerprints for 6G-Enabled IoT

    Full text link
    Physical-Layer Authentication (PLA) has recently been regarded as an endogenously secure and energy-efficient technique for recognizing IoT terminals. However, the major challenge of applying state-of-the-art PLA schemes directly to 6G-enabled IoT is inaccurate channel fingerprint estimation in low Signal-to-Noise Ratio (SNR) environments, which greatly affects the reliability and robustness of PLA. To tackle this issue, we propose a configurable-fingerprint-based PLA architecture built on an Intelligent Reflecting Surface (IRS), which helps create an alternative wireless transmission path that provides more accurate fingerprints. Based on Bayes' theorem, we propose a Gaussian Process Classification (GPC)-based PLA scheme, which utilizes the Expectation Propagation (EP) method to obtain the identities of unknown fingerprints. Considering that obtaining sufficient labeled fingerprint samples to train the GPC-based authentication model is challenging for future 6G systems, we further extend the GPC-based PLA to Efficient-GPC (EGPC)-based PLA through active learning, which requires fewer labeled fingerprints and is more feasible. We also propose three fingerprint selection algorithms to choose the fingerprints whose identities are queried from the upper-layer authentication mechanisms. For this reason, the proposed EGPC-based scheme is also a lightweight cross-layer authentication method offering a superior security level. Simulations conducted on synthetic datasets demonstrate that the IRS-assisted scheme reduces the authentication error rate by 98.69% compared to the non-IRS-based scheme. Additionally, the proposed fingerprint selection algorithms reduce the authentication error rate by 65.96% to 86.93% and 45.45% to 70.00% under perfect and imperfect channel estimation conditions, respectively, when compared with baseline algorithms. Comment: 12 pages, 9 figures
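The GPC-plus-active-learning idea can be sketched with scikit-learn. This is an illustrative stand-in, not the paper's method: the synthetic fingerprints, the RBF kernel, and the uncertainty-sampling query rule are all assumptions (the paper proposes three specific selection algorithms, none of which are reproduced here).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
# Synthetic 2-D channel fingerprints: two device classes with different means.
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# GPC with an RBF kernel; scikit-learn approximates the posterior internally.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0), random_state=0).fit(X, y)

# Active learning by uncertainty sampling: from an unlabeled pool, query the
# fingerprints whose predicted class probability is closest to 0.5 (the most
# ambiguous identities), and send only those to upper-layer authentication.
pool = rng.normal(1.5, 1.5, (20, 2))
proba = gpc.predict_proba(pool)[:, 1]
query_idx = np.argsort(np.abs(proba - 0.5))[:5]
```

Labeling only the queried fingerprints and refitting keeps the number of expensive upper-layer queries small, which is the motivation for the EGPC extension.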

    Traffic Sign Interpretation in Real Road Scene

    Full text link
    Most existing traffic sign-related works are dedicated to detecting and recognizing individual traffic signs, which fails to analyze the global semantic logic among signs and may convey inaccurate traffic instructions. To address these issues, we propose a traffic sign interpretation (TSI) task, which aims to interpret globally interrelated traffic signs (e.g., driving-instruction-related texts, symbols, and guide panels) into natural language, providing accurate instruction support for autonomous or assisted driving. We also design a multi-task learning architecture for TSI, which detects and recognizes various traffic signs and interprets them into natural language like a human. Furthermore, the absence of a publicly available TSI dataset prompts us to build a traffic sign interpretation dataset, named TSI-CN. The dataset consists of real road scene images captured from highways and urban roads in China from a driver's perspective. It contains rich location labels of texts, symbols, and guide panels, together with the corresponding natural language description labels. Experiments on TSI-CN demonstrate that the TSI task is achievable and that the TSI architecture can successfully interpret traffic signs from scenes even when the semantic logic among signs is complex. The TSI-CN dataset and the source code of the TSI architecture will be publicly released after the revision process.

    DISC-FinLLM: A Chinese Financial Large Language Model based on Multiple Experts Fine-tuning

    Full text link
    We propose a Multiple Experts Fine-tuning Framework to build a financial large language model (LLM), DISC-FinLLM. Our methodology improves general LLMs by endowing them with multi-turn question answering abilities, domain text processing capabilities, mathematical computation skills, and retrieval-enhanced generation capabilities. We build a financial instruction-tuning dataset named DISC-FIN-SFT, including instruction samples of four categories (consulting, NLP tasks, computing, and retrieval-augmented generation). Evaluations conducted on multiple benchmarks demonstrate that our model outperforms baseline models in various financial scenarios. Further resources can be found at https://github.com/FudanDISC/DISC-FinLLM. Comment: 18 pages, 13 figures, 7 tables

    A deep learning model adjusting for infant gender, age, height, and weight to determine whether an individual infant is suitable for ultrasound examination of developmental dysplasia of the hip (DDH)

    Get PDF
    Objective: To examine the correlation between specific indicators and the quality of hip joint ultrasound images in infants, and to determine whether an individual infant is suitable for ultrasound examination of developmental dysplasia of the hip (DDH). Method: We retrospectively selected infants aged 0–6 months who underwent ultrasound imaging of the left hip joint between September 2021 and March 2022 at Shenzhen Children's Hospital. Using the entropy weighting method, weights were assigned to anatomical structures. Moreover, prospective data were collected from infants aged 5–11 months. The left hip joint was imaged, scored, and weighted as before. The correlation between the weighted image quality scores and individual indicators was studied, with the final weighted image quality score used as the dependent variable and the individual indicators used as independent variables. A Long Short-Term Memory (LSTM) model was used to fit the data and evaluate its effectiveness. Finally, randomly selected images were manually measured and compared to measurements made using artificial intelligence (AI). Results: According to the entropy weight method, the weights of the anatomical structures were as follows: bony rim point 0.29, lower iliac limb point 0.41, and glenoid labrum 0.30. The final weighted score for ultrasound image quality is calculated by multiplying each score by its respective weight. Infant gender, age, height, and weight were significantly correlated with the final weighted score of image quality (P < 0.05). The LSTM fitting model had a coefficient of determination (R²) of 0.95. The intra-class correlation coefficients (ICC) for the α and β angles between manual measurement and AI measurement were 0.98 and 0.93, respectively. Conclusion: The quality of ultrasound images for infants can be influenced by the individual indicators (gender, age, height, and weight). The LSTM model showed good fitting efficiency and can help clinicians decide whether an individual infant is suitable for ultrasound examination of DDH.
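The entropy weighting step used to combine the per-structure quality scores can be sketched as below. This is a generic implementation of the standard entropy weight method under assumed data; the example score matrix is hypothetical and the computed weights will not reproduce the paper's 0.29/0.41/0.30 values.

```python
import numpy as np

def entropy_weights(scores):
    """Entropy weight method: indicators (columns) whose scores vary more
    across samples carry lower entropy and therefore receive larger weights."""
    scores = np.asarray(scores, dtype=float)
    n = scores.shape[0]
    p = scores / scores.sum(axis=0)              # column-wise normalization
    plogp = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)           # entropy per indicator, in [0, 1]
    d = 1.0 - e                                  # degree of divergence
    return d / d.sum()                           # weights sum to 1

# Hypothetical quality scores per image for three anatomical structures
# (bony rim point, lower iliac limb point, glenoid labrum).
scores = [[3, 2, 4], [4, 1, 4], [2, 3, 3], [5, 1, 5]]
w = entropy_weights(scores)
weighted_quality = np.asarray(scores) @ w        # final weighted score per image
```

The final weighted score per image, the dependent variable of the LSTM fit, is then the score vector dotted with these weights.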

    Risk Portfolio Optimization Using the Markowitz MVO Model in Relation to Human Limitations in Predicting the Future from the Perspective of the Al-Qur'an

    Full text link
    Risk portfolio management in modern finance has become increasingly technical, requiring sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, owing to human inability to predict the future precisely, as written in Al-Qur'an surah Luqman verse 34, they have to manage it to yield an optimal portfolio. The objective here is to minimize the variance among all portfolios, or alternatively, to maximize the expected return among all portfolios that have at least a certain expected return. This study focuses on optimizing the risk portfolio via Markowitz MVO (Mean-Variance Optimization). The theoretical frameworks for analysis include the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio yields a convex quadratic program: minimize the objective function xᵀQx subject to the constraints μᵀx ≥ r and Ax = b. The outcome of this research is the optimal risk portfolio for several investments, obtained using MATLAB R2007b software together with graphical analysis.
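The quadratic program described above can be sketched in a few lines. The asset returns, covariance matrix, and return floor below are hypothetical illustration data (the study itself uses MATLAB); only the structure of the Markowitz minimum-variance problem is shown.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical annualized mean returns and covariance for three assets.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
r_min = 0.10  # required minimum expected return

def variance(x):
    # Portfolio variance x' Sigma x, the quadratic objective to minimize.
    return x @ Sigma @ x

cons = [
    {"type": "eq",   "fun": lambda x: x.sum() - 1.0},    # fully invested (Ax = b)
    {"type": "ineq", "fun": lambda x: mu @ x - r_min},   # return floor (mu'x >= r)
]
bounds = [(0.0, 1.0)] * 3                                # no short selling
res = minimize(variance, x0=np.full(3, 1 / 3), bounds=bounds, constraints=cons)
x_opt = res.x  # minimum-variance weights meeting the return target
```

Sweeping `r_min` over a range of values and re-solving traces out the efficient frontier that mean-variance analysis is built around.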

    Search for heavy resonances decaying to two Higgs bosons in final states containing four b quarks

    Get PDF
    A search is presented for narrow heavy resonances X decaying into pairs of Higgs bosons (H) in proton-proton collisions collected by the CMS experiment at the LHC at √s = 8 TeV. The data correspond to an integrated luminosity of 19.7 fb⁻¹. The search considers HH resonances with masses between 1 and 3 TeV, having final states of two b quark pairs. Each Higgs boson is produced with large momentum, and the hadronization products of the pair of b quarks can usually be reconstructed as single large jets. The background from multijet and tt̄ events is significantly reduced by applying requirements related to the flavor of the jet, its mass, and its substructure. The signal would be identified as a peak on top of the dijet invariant mass spectrum of the remaining background events. No evidence is observed for such a signal. Upper limits obtained at 95% confidence level for the product of the production cross section and branching fraction σ(gg → X) B(X → HH → bb̄bb̄) range from 10 to 1.5 fb for the mass of X from 1.15 to 2.0 TeV, significantly extending previous searches. For a warped extra dimension theory with a mass scale Λ_R = 1 TeV, the data exclude radion scalar masses between 1.15 and 1.55 TeV.

    Search for supersymmetry in events with one lepton and multiple jets in proton-proton collisions at √s = 13 TeV

    Get PDF
    Peer reviewed

    Measurement of the top quark mass using charged particles in pp collisions at √s = 8 TeV

    Get PDF
    Peer reviewed

    Measurement of the Splitting Function in pp and Pb-Pb Collisions at √s_NN = 5.02 TeV

    Get PDF
    Data from heavy ion collisions suggest that the evolution of a parton shower is modified by interactions with the color charges in the dense partonic medium created in these collisions, but it is not known where in the shower evolution the modifications occur. The momentum ratio of the two leading partons, resolved as subjets, provides information about the parton shower evolution. This substructure observable, known as the splitting function, reflects the process of a parton splitting into two other partons and has been measured for jets with transverse momentum between 140 and 500 GeV, in pp and PbPb collisions at a center-of-mass energy of 5.02 TeV per nucleon pair. In central PbPb collisions, the splitting function indicates a more unbalanced momentum ratio, compared to peripheral PbPb and pp collisions. The measurements are compared to various predictions from event generators and analytical calculations. Peer reviewed
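The momentum-sharing observable behind this measurement is commonly written as z_g = min(pT1, pT2) / (pT1 + pT2) for the two leading subjets. A minimal sketch, with hypothetical subjet momenta chosen purely for illustration:

```python
def z_g(pt1, pt2):
    """Momentum-sharing fraction of the two leading subjets:
    z_g = min(pT1, pT2) / (pT1 + pT2), ranging from 0 (fully
    unbalanced) up to 0.5 (perfectly balanced splitting)."""
    return min(pt1, pt2) / (pt1 + pt2)

balanced = z_g(200.0, 180.0)    # nearly symmetric splitting, z_g close to 0.5
unbalanced = z_g(300.0, 50.0)   # asymmetric splitting, small z_g
```

A shift of the measured z_g distribution toward smaller values in central PbPb collisions is what the abstract means by a "more unbalanced momentum ratio".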