
    Adaptive Network Coding for Scheduling Real-time Traffic with Hard Deadlines

    We study adaptive network coding (NC) for scheduling real-time traffic over a single-hop wireless network. To meet the hard deadlines of real-time traffic, it is critical to strike a balance between maximizing the throughput and minimizing the risk that the entire block of coded packets may not be decodable by the deadline. Thus motivated, we explore adaptive NC, where the block size is adapted based on the remaining time to the deadline, by casting this sequential block size adaptation problem as a finite-horizon Markov decision process. One interesting finding is that the optimal block size and its corresponding action space monotonically decrease as the deadline approaches, and the optimal block size is bounded by the "greedy" block size. These unique structures make it possible to narrow down the search space of dynamic programming, building on which we develop a monotonicity-based backward induction algorithm (MBIA) that can solve for the optimal block size in polynomial time. Since channel erasure probabilities can be time-varying in a mobile network, we further develop a joint real-time scheduling and channel learning scheme with adaptive NC that can adapt to channel dynamics. We also generalize the analysis to multiple flows with hard deadlines and long-term delivery ratio constraints, devise a low-complexity online scheduling algorithm integrated with the MBIA, and then establish its asymptotic throughput-optimality. In addition to analysis and simulation results, we perform high-fidelity wireless emulation tests with real radio transmissions to demonstrate the feasibility of the MBIA in finding the optimal block size in real time. Comment: 11 pages, 13 figures.
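
    To make the block-size adaptation concrete, below is a minimal backward-induction sketch in Python. The reward model (a block of k coded packets decodes once k packets get through, one transmission per slot with success probability 1 - eps) and the way the monotone structure is used to prune the search are assumptions inferred from the abstract, not the paper's exact formulation.

```python
from math import comb

def mbia(T, eps):
    """Monotonicity-aided backward induction for adaptive block-size selection (sketch)."""
    p = 1.0 - eps

    def kth_success_at(s, k):
        # P(the k-th successful reception lands exactly in slot s): negative binomial pmf
        return comb(s - 1, k - 1) * p**k * (1 - p)**(s - k)

    V = [0.0] * (T + 1)      # V[t]: optimal expected throughput with t slots left
    best_k = [0] * (T + 1)   # best_k[t]: optimal block size with t slots left
    for t in range(1, T + 1):
        # Monotone structure (per the abstract): the optimal block size shrinks as the
        # deadline nears, i.e. grows with t, so the search can start at best_k[t-1].
        for k in range(max(1, best_k[t - 1]), t + 1):
            # reward k if the block decodes at some slot s <= t, then recurse on the rest
            val = sum(kth_success_at(s, k) * (k + V[t - s]) for s in range(k, t + 1))
            if val > V[t]:
                V[t], best_k[t] = val, k
    return V, best_k

V, k_star = mbia(T=20, eps=0.2)
print(k_star[20], round(V[20], 2))   # first block size to use and expected throughput
```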

    Cognitive switching affects arithmetic strategy selection: Evidence from eye-gaze patterns and behavioural measures

    Although many studies of cognitive switching have been conducted, little is known about whether and how cognitive switching affects individuals' use of arithmetic strategies. We used estimation and numerical comparison tasks within the operand recognition paradigm and the choice/no-choice paradigm to explore the effects of cognitive switching on the process of arithmetic strategy selection. Results showed that individuals' performance in the baseline task was superior to that in the switching task. Presentation mode and cognitive switching clearly influenced eye-gaze patterns during strategy selection, with longer fixation durations in the number presentation mode than in the clock presentation mode. Furthermore, the number of fixations was greater in the switching task than in the baseline task. These results indicate that the effects of cognitive switching on arithmetic strategy selection are clearly constrained by the manner in which numbers are presented.

    TetCNN: Convolutional Neural Networks on Tetrahedral Meshes

    Convolutional neural networks (CNNs) have been broadly studied on images, videos, graphs, and triangular meshes. However, they have seldom been studied on tetrahedral meshes. Given the merits of using volumetric meshes in applications like brain image analysis, we introduce a novel interpretable graph CNN framework for the tetrahedral mesh structure. Inspired by ChebyNet, our model exploits the volumetric Laplace-Beltrami Operator (LBO) to define filters, in place of the commonly used graph Laplacian, which lacks the Riemannian metric information of 3D manifolds. For pooling adaptation, we introduce new objective functions for localized minimum cuts in the Graclus algorithm based on the LBO. We employ a piece-wise constant approximation scheme that uses the clustering assignment matrix to estimate the LBO on sampled meshes after each pooling. Finally, adapting the Gradient-weighted Class Activation Mapping algorithm for tetrahedral meshes, we use the obtained heatmaps to visualize discovered regions-of-interest as biomarkers. We demonstrate the effectiveness of our model on cortical tetrahedral meshes from patients with Alzheimer's disease, as there is scientific evidence showing the correlation of cortical thickness to neurodegenerative disease progression. Our results show the superiority of our LBO-based convolution layer and adapted pooling over the conventionally used unitary cortical thickness, graph Laplacian, and point cloud representation. Comment: Accepted as a conference paper at Information Processing in Medical Imaging (IPMI 2023).
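
    The ChebyNet-style filtering the abstract builds on fits in a few lines. The sketch below uses a generic rescaled Laplacian matrix as a stand-in for the paper's discretized volumetric LBO; the shapes, toy data, and plain NumPy implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cheb_conv(L, X, W, K):
    """ChebyNet-style spectral convolution; TetCNN would plug in the volumetric LBO for L.

    L : (N, N) rescaled operator, assumed already mapped to roughly [-1, 1] spectrum
    X : (N, F_in) vertex features
    W : (K, F_in, F_out) Chebyshev filter coefficients
    K : polynomial order
    """
    Tx = [X, L @ X]                            # T_0(L)X = X,  T_1(L)X = L X
    for _ in range(2, K):
        Tx.append(2 * L @ Tx[-1] - Tx[-2])     # Chebyshev recurrence T_k = 2 L T_{k-1} - T_{k-2}
    return sum(Tx[k] @ W[k] for k in range(K))

# toy usage with a random symmetric matrix standing in for the rescaled LBO
N, F_in, F_out, K = 50, 8, 16, 3
L = np.random.rand(N, N); L = (L + L.T) / 2
X = np.random.rand(N, F_in)
W = 0.1 * np.random.randn(K, F_in, F_out)
print(cheb_conv(L, X, W, K).shape)   # (50, 16)
```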

    Keypoint-Augmented Self-Supervised Learning for Medical Image Segmentation with Limited Annotation

    Pretraining CNN models (e.g., UNet) through self-supervision has become a powerful approach to facilitate medical image segmentation under low-annotation regimes. Recent contrastive learning methods encourage similar global representations when the same image undergoes different transformations, or enforce invariance across different image/patch features that are intrinsically correlated. However, CNN-extracted global and local features are limited in capturing long-range spatial dependencies that are essential in biological anatomy. To this end, we present a keypoint-augmented fusion layer that extracts representations preserving both short- and long-range self-attention. In particular, we augment the CNN feature map at multiple scales by incorporating an additional input that learns long-range spatial self-attention among localized keypoint features. Further, we introduce both global and local self-supervised pretraining for the framework. At the global scale, we obtain global representations both from the bottleneck of the UNet and by aggregating multiscale keypoint features. These global features are subsequently regularized through image-level contrastive objectives. At the local scale, we define a distance-based criterion to first establish correspondences among keypoints and encourage similarity between their features. Through extensive experiments on both MRI and CT segmentation tasks, we demonstrate the architectural advantages of our proposed method in comparison to both CNN- and Transformer-based UNets, when all architectures are trained with randomly initialized weights. With our proposed pretraining strategy, our method further outperforms existing SSL methods by producing more robust self-attention and achieving state-of-the-art segmentation results. The code is available at https://github.com/zshyang/kaf.git. Comment: Camera ready for NeurIPS 2023. Code available at https://github.com/zshyang/kaf.git.
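
    As one concrete instance of an image-level contrastive objective of the kind mentioned above, here is a minimal NT-Xent loss in PyTorch. The specific loss, temperature, and embedding sizes used by the paper are not given in the abstract, so everything below is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def ntxent_loss(z1, z2, temperature=0.1):
    """NT-Xent contrastive loss on global embeddings of two augmented views (sketch).

    z1, z2 : (B, D) global embeddings (e.g., pooled bottleneck or aggregated keypoint features).
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2B, D)
    sim = z @ z.t() / temperature                  # cosine similarity logits
    sim.fill_diagonal_(float('-inf'))              # exclude trivial self-pairs
    B = z1.size(0)
    # positives: view i (rows 0..B-1) pairs with row i+B, and vice versa
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])
    return F.cross_entropy(sim, targets)

# toy usage
loss = ntxent_loss(torch.randn(4, 128), torch.randn(4, 128))
print(loss.item())
```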

    A compactness based saliency approach for leakages detection in fluorescein angiogram

    This study has developed a novel saliency detection method based on a compactness feature for detecting three common types of leakage in retinal fluorescein angiograms: large focal, punctate focal, and vessel segment leakage. Leakage from retinal vessels occurs in a wide range of retinal diseases, such as diabetic maculopathy and paediatric malarial retinopathy. The proposed framework consists of three major steps: saliency detection, saliency refinement and leakage detection. First, the Retinex theory is adapted to address the illumination inhomogeneity problem. Then two saliency cues, intensity and compactness, are proposed for the estimation of the saliency map of each individual superpixel at each level. The saliency maps at different levels over the same cue are fused using an averaging operator. Finally, the leaking sites are detected by masking out the vessel and optic disc regions. The effectiveness of this framework has been evaluated by applying it to images showing the different types of leakage from patients with cerebral malaria. The sensitivity in detecting large focal, punctate focal and vessel segment leakage is 98.1%, 88.2% and 82.7%, respectively, when compared to a reference standard of manual annotations by expert human observers. The developed framework provides a new tool for studying retinal conditions involving retinal leakage.
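
    To illustrate the compactness cue at the superpixel level, below is a minimal NumPy sketch: a superpixel whose similar-intensity neighbours are tightly clustered in space gets a high score. The weighting scheme, normalization, and toy inputs are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def compactness_saliency(centroids, intensities, sigma=0.1):
    """Compactness cue per superpixel (illustrative sketch).

    centroids   : (N, 2) superpixel centroid coordinates, normalized to [0, 1]
    intensities : (N,)   mean superpixel intensity, normalized to [0, 1]
    """
    # appearance-similarity weights between every pair of superpixels
    diff = intensities[:, None] - intensities[None, :]
    w = np.exp(-diff**2 / (2 * sigma**2))
    w /= w.sum(axis=1, keepdims=True)

    # weighted spatial mean and variance of the similar-looking superpixels
    mu = w @ centroids                                                     # (N, 2)
    var = (w * np.sum((centroids[None, :, :] - mu[:, None, :])**2, axis=2)).sum(axis=1)

    return 1.0 - var / (var.max() + 1e-12)                                 # compact -> salient

# toy usage: 100 superpixels with random positions and intensities
rng = np.random.default_rng(0)
sal = compactness_saliency(rng.random((100, 2)), rng.random(100))
print(sal.shape, float(sal.min()), float(sal.max()))
```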

    Intensity and Compactness Enabled Saliency Estimation for Leakage Detection in Diabetic and Malarial Retinopathy

    Leakage in retinal angiography is currently a key feature for confirming the activity of lesions in the management of a wide range of retinal diseases, such as diabetic maculopathy and paediatric malarial retinopathy. This paper proposes a new saliency-based method for the detection of leakage in fluorescein angiography. A superpixel approach is first employed to divide the image into meaningful patches (or superpixels) at different levels. Two saliency cues, intensity and compactness, are then proposed for the estimation of the saliency map of each individual superpixel at each level. The saliency maps at different levels over the same cue are fused using an averaging operator, and the two saliency maps over different cues are fused using a pixel-wise multiplication operator. Leaking regions are finally detected by thresholding the saliency map, followed by a graph-cut segmentation. The proposed method has been validated using the only two publicly available datasets: one for malarial retinopathy and the other for diabetic retinopathy. The experimental results show that it outperforms one of the latest competitors for leakage detection, performs as well as a human expert, and outperforms several state-of-the-art methods for saliency detection.
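
    The fusion pipeline described above (average over levels within a cue, multiply across cues, then threshold) can be sketched as follows; the normalization, the default threshold, and the omission of the graph-cut refinement are assumptions made for brevity.

```python
import numpy as np

def fuse_saliency(intensity_maps, compactness_maps, thresh=0.5):
    """Fuse multi-level saliency maps: average per cue, multiply across cues (sketch).

    intensity_maps, compactness_maps : lists of (H, W) arrays, one per superpixel level
    """
    s_int = np.mean(intensity_maps, axis=0)        # average over levels, intensity cue
    s_cmp = np.mean(compactness_maps, axis=0)      # average over levels, compactness cue
    fused = s_int * s_cmp                          # pixel-wise multiplication across cues
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
    return fused, fused > thresh                   # saliency map and binary leakage mask

# toy usage: three superpixel levels per cue
rng = np.random.default_rng(1)
levels_int = [rng.random((64, 64)) for _ in range(3)]
levels_cmp = [rng.random((64, 64)) for _ in range(3)]
sal, mask = fuse_saliency(levels_int, levels_cmp)
print(sal.shape, int(mask.sum()))
```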

    Weaknesses of the Boyd-Mao Deniable Authenticated Key Establishment for Internet Protocols

    In 2003, Boyd and Mao proposed two deniable authenticated key establishment protocols using elliptic curve pairings for Internet protocols, one based on Diffie-Hellman key exchange and the other on a public-key encryption approach. Owing to the use of elliptic curve pairings, they claimed that their schemes could be more efficient than the existing Internet Key Exchange (IKE). However, in this paper we show that both of the Boyd-Mao protocols suffer from the key-compromise impersonation attack.
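
    For readers unfamiliar with key-compromise impersonation (KCI), the toy Python snippet below illustrates the concept on a naive static Diffie-Hellman agreement. It is not the Boyd-Mao pairing-based protocols nor the specific attack in the paper, only a generic demonstration of why leaking one party's long-term secret can let an adversary impersonate other peers to that party.

```python
# Toy illustration of key-compromise impersonation (KCI) on a naive static
# Diffie-Hellman key agreement (illustrative parameters, NOT the Boyd-Mao scheme).
p, g = 2**127 - 1, 3             # toy group parameters, for demonstration only

a = 123456789                    # A's long-term secret, assumed compromised by the adversary
b = 987654321                    # B's long-term secret, never learned by the adversary
A_pub, B_pub = pow(g, a, p), pow(g, b, p)

# A derives the session key it believes it shares with B: K = B_pub^a = g^(a*b)
k_at_A = pow(B_pub, a, p)

# The adversary, knowing only a and B's public key, derives the very same key,
# so it can complete the handshake with A while pretending to be B.
k_adversary = pow(B_pub, a, p)

assert k_at_A == k_adversary     # KCI: compromising A lets the adversary impersonate B to A
```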