
    Nowhere to Hide: Cross-modal Identity Leakage between Biometrics and Devices

    Along with the benefits of the Internet of Things (IoT) come potential privacy risks, since billions of connected devices are granted permission to track information about their users and communicate it to other parties over the Internet. Of particular interest to the adversary is the user identity, which plays an important role in launching attacks. While the exposure of a certain type of physical biometric or device identity has been extensively studied, the compound effect of leakage from both sides remains unknown in multi-modal sensing environments. In this work, we explore the feasibility of compound identity leakage across cyber-physical spaces and unveil that co-located smart device IDs (e.g., smartphone MAC addresses) and physical biometrics (e.g., facial/vocal samples) are side channels to each other. We demonstrate that our method is robust to various observation noise in the wild and that an attacker can comprehensively profile victims across multiple dimensions with nearly zero analysis effort. Two real-world experiments on different biometrics and device IDs show that the presented approach can compromise more than 70% of device IDs and simultaneously harvest multiple biometric clusters with ~94% purity.
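    As a rough, hypothetical illustration of the kind of cross-modal linking the abstract describes (not the authors' actual pipeline), the sketch below matches device IDs to biometric clusters purely by temporal co-occurrence; the data format, time window, and matching rule are all assumptions.

```python
# Toy illustration (not the paper's actual method): linking device IDs to
# biometric clusters by co-occurrence counting. Assumes each observation is a
# (timestamp, device_id) or (timestamp, biometric_cluster) tuple; the pairing
# rule and window size are hypothetical.
from collections import defaultdict

def link_ids(device_obs, biometric_obs, window=5.0):
    """Match each device ID to the biometric cluster it co-occurs with most."""
    counts = defaultdict(lambda: defaultdict(int))
    for t_dev, dev_id in device_obs:
        for t_bio, cluster in biometric_obs:
            if abs(t_dev - t_bio) <= window:          # seen at roughly the same time
                counts[dev_id][cluster] += 1
    # Pick the most frequently co-occurring cluster for each device.
    return {dev: max(c.items(), key=lambda kv: kv[1])[0]
            for dev, c in counts.items()}

# Example: MAC "aa:bb" is mostly sighted alongside facial cluster 0.
devices = [(1.0, "aa:bb"), (10.0, "aa:bb"), (20.0, "cc:dd")]
faces = [(1.5, 0), (10.5, 0), (20.5, 1)]
print(link_ids(devices, faces))   # {'aa:bb': 0, 'cc:dd': 1}
```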

    Learning the Degradation Distribution for Blind Image Super-Resolution

    Synthetic high-resolution (HR) & low-resolution (LR) pairs are widely used in existing super-resolution (SR) methods. To avoid the domain gap between synthetic and test images, most previous methods try to adaptively learn the synthesizing (degrading) process via a deterministic model. However, some degradations in real scenarios are stochastic and cannot be determined by the content of the image. These deterministic models may fail to capture the random factors and content-independent parts of degradations, which limits the performance of the subsequent SR models. In this paper, we propose a probabilistic degradation model (PDM), which treats the degradation $\mathbf{D}$ as a random variable and learns its distribution by modeling the mapping from a prior random variable $\mathbf{z}$ to $\mathbf{D}$. Compared with previous deterministic degradation models, PDM can model more diverse degradations and generate HR-LR pairs that better cover the various degradations of test images, thus preventing the SR model from over-fitting to specific ones. Extensive experiments demonstrate that our degradation model helps the SR model achieve better performance on different datasets. The source code is released at \url{git@github.com:greatlog/UnpairedSR.git}. Comment: Accepted to CVPR 2022.
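    A minimal sketch of the general idea, assuming a PyTorch-style setup (this is not the released PDM code; the network sizes, kernel parameterization, and noise model are illustrative guesses): a sampled prior variable z is mapped to a blur kernel and a noise level, which are then applied to an HR image to synthesize a stochastic LR counterpart.

```python
# Illustrative sketch only: degradation treated as a random variable by mapping
# a sampled prior z to a per-image blur kernel and noise level.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyPDM(nn.Module):
    def __init__(self, z_dim=16, ksize=11):
        super().__init__()
        self.ksize = ksize
        self.kernel_net = nn.Sequential(          # z -> blur kernel logits
            nn.Linear(z_dim, 64), nn.ReLU(),
            nn.Linear(64, ksize * ksize))
        self.noise_net = nn.Sequential(           # z -> noise standard deviation
            nn.Linear(z_dim, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Softplus())

    def forward(self, hr, z, scale=4):
        b, c, h, w = hr.shape
        # Per-image blur kernel, normalized to sum to one.
        k = torch.softmax(self.kernel_net(z), dim=-1).view(b, 1, self.ksize, self.ksize)
        k = k.expand(b, c, -1, -1).reshape(b * c, 1, self.ksize, self.ksize)
        x = hr.reshape(1, b * c, h, w)
        blurred = F.conv2d(x, k, padding=self.ksize // 2, groups=b * c)
        blurred = blurred.reshape(b, c, h, w)
        lr = blurred[:, :, ::scale, ::scale]                  # downsample
        sigma = self.noise_net(z).view(b, 1, 1, 1)
        return lr + sigma * torch.randn_like(lr)              # stochastic noise

hr = torch.rand(2, 3, 64, 64)
z = torch.randn(2, 16)
lr = ToyPDM()(hr, z)        # different z samples yield different degradations
print(lr.shape)             # torch.Size([2, 3, 16, 16])
```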

    End-to-end Alternating Optimization for Real-World Blind Super Resolution

    Blind Super-Resolution (SR) usually involves two sub-problems: 1) estimating the degradation of the given low-resolution (LR) image; 2) super-resolving the LR image to its high-resolution (HR) counterpart. Both problems are ill-posed due to the information loss in the degrading process. Most previous methods try to solve the two problems independently, but often fall into a dilemma: a good super-resolved HR result requires an accurate degradation estimate, which, however, is difficult to obtain without the help of the original HR information. To address this issue, instead of considering these two problems independently, we adopt an alternating optimization algorithm that can estimate the degradation and restore the SR image in a single model. Specifically, we design two convolutional neural modules, namely \textit{Restorer} and \textit{Estimator}. \textit{Restorer} restores the SR image based on the estimated degradation, and \textit{Estimator} estimates the degradation with the help of the restored SR image. We alternate these two modules repeatedly and unfold this process to form an end-to-end trainable network. In this way, both \textit{Restorer} and \textit{Estimator} benefit from each other's intermediate results, which makes each sub-problem easier. Moreover, since \textit{Restorer} and \textit{Estimator} are optimized in an end-to-end manner, they become more tolerant of each other's estimation errors and cooperate better to achieve more robust and accurate final results. Extensive experiments on both synthetic datasets and real-world images show that the proposed method largely outperforms state-of-the-art methods and produces more visually favorable results. The code is released at \url{https://github.com/greatlog/RealDAN.git}. Comment: Extension of our previous NeurIPS paper. Accepted to IJCV.
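    A toy sketch of the unfolded alternation, assuming PyTorch (the modules here are deliberately tiny stand-ins, not the released RealDAN architecture): the Estimator infers a degradation code from the LR input and a downsampled version of the current SR estimate, the Restorer refines the SR image conditioned on that code, and the loop is unrolled for a fixed number of steps so both modules can be trained end-to-end.

```python
# Toy stand-ins for Estimator/Restorer to show the unrolled alternating loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Estimator(nn.Module):
    def __init__(self, deg_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(32, deg_dim))

    def forward(self, lr, sr_down):
        # Degradation code inferred from LR and the downsampled SR estimate.
        return self.net(torch.cat([lr, sr_down], dim=1))

class Restorer(nn.Module):
    def __init__(self, deg_dim=8, scale=4):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(nn.Conv2d(3 + deg_dim, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, lr, deg):
        # Condition the restoration on the current degradation code.
        d = deg.view(deg.size(0), -1, 1, 1).expand(-1, -1, lr.size(2), lr.size(3))
        res = self.net(torch.cat([lr, d], dim=1))
        return F.interpolate(lr + res, scale_factor=self.scale, mode="bicubic",
                             align_corners=False)

def alternate(lr, estimator, restorer, steps=4, scale=4):
    sr = F.interpolate(lr, scale_factor=scale, mode="bicubic", align_corners=False)
    for _ in range(steps):                      # unrolled alternating optimization
        sr_down = F.interpolate(sr, size=lr.shape[-2:], mode="bicubic",
                                align_corners=False)
        deg = estimator(lr, sr_down)
        sr = restorer(lr, deg)
    return sr

lr = torch.rand(1, 3, 32, 32)
print(alternate(lr, Estimator(), Restorer()).shape)   # torch.Size([1, 3, 128, 128])
```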

    Differentiable Radio Frequency Ray Tracing for Millimeter-Wave Sensing

    Millimeter wave (mmWave) sensing is an emerging technology with applications in 3D object characterization and environment mapping. However, realizing precise 3D reconstruction from sparse mmWave signals remains challenging. Existing methods rely on data-driven learning, constrained by dataset availability and difficulty in generalization. We propose DiffSBR, a differentiable framework for mmWave-based 3D reconstruction. DiffSBR incorporates a differentiable ray tracing engine to simulate radar point clouds from virtual 3D models. A gradient-based optimizer refines the model parameters to minimize the discrepancy between simulated and real point clouds. Experiments using various radar hardware validate DiffSBR's capability for fine-grained 3D reconstruction, even for novel objects not previously seen by the radar. By integrating physics-based simulation with gradient optimization, DiffSBR transcends the limitations of data-driven approaches and pioneers a new paradigm for mmWave sensing.
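    A conceptual sketch of the analysis-by-synthesis loop described above, with the differentiable ray tracer replaced by a stand-in differentiable simulator (all names, shapes, and parameters are assumptions, not DiffSBR's API): model parameters are refined by gradient descent on a Chamfer distance between simulated and observed point clouds.

```python
# Conceptual sketch only: a stand-in differentiable simulator plus a Chamfer
# distance loss, optimized with Adam, mirroring the refine-to-match loop.
import torch

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds of shape (N, 3) and (M, 3)."""
    d = torch.cdist(a, b)                      # pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def toy_simulator(params):
    """Stand-in for the differentiable radar simulator: transform a template cloud."""
    template = torch.linspace(0, 1, 64).view(-1, 1).expand(-1, 3)
    return template * params["scale"] + params["offset"]

# "Observed" radar point cloud we want to explain (synthetic here).
real_cloud = torch.linspace(0, 1, 64).view(-1, 1).expand(-1, 3) * 2.0 + 0.5

params = {"scale": torch.tensor(1.0, requires_grad=True),
          "offset": torch.tensor(0.0, requires_grad=True)}
opt = torch.optim.Adam(params.values(), lr=0.05)

for step in range(200):                        # gradient-based model refinement
    opt.zero_grad()
    loss = chamfer(toy_simulator(params), real_cloud)
    loss.backward()
    opt.step()

# Parameters move toward scale ~ 2.0 and offset ~ 0.5.
print({k: round(v.item(), 2) for k, v in params.items()})
```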

    Machine learning application in complicated burning plasmas for future magnetic fusion exploration


    Intelligent control for predicting and mitigating major disruptions in magnetic confinement fusion

    Magnetic confinement fusion is believed to be one of the most promising paths to a virtually unlimited supply of environmentally friendly energy, naturally contributing to a green economy and low-carbon development. Nevertheless, the major disruption of high-temperature plasmas, a serious threat to fusion devices, still stands in the way of access to fusion energy. Although a number of individual techniques have proved feasible for the control, mitigation, and prediction of disruptions, complicated experimental environments make it hard to decide on specific control strategies. The traditional control approach, which designs a series of independent controllers in a nested structure, cannot meet the needs of real-time control of complicated plasmas; it requires extensive engineering expertise and complicated evaluation of system states based on multiple plasma parameters. Fortunately, artificial intelligence (AI) offers potential solutions to this troublesome issue. To simplify the control system, this work puts forward a novel idea for designing controllers via AI: intelligent controllers should be developed to replace the traditional nested structure. The successful development of intelligent control is expected to effectively predict and mitigate major disruptions, which would enhance fusion performance and improve the accessibility of sustainable fusion energy.