43 research outputs found

    iFlask: Isolate flask security system from dangerous execution environment by using ARM TrustZone

    Security is essential in mobile computing, and various access control modules have therefore been introduced. However, the complex mobile runtime environment may directly compromise the integrity of these security modules, or even compel them to make wrong access control decisions. A trusted Flask-based security system therefore needs to be isolated from the dangerous mobile execution environment at runtime. In this paper, we propose an isolated Flask security architecture, called iFlask, to solve this problem for Flask-based mandatory access control (MAC) systems. iFlask puts its security server subsystem into the enclave provided by ARM TrustZone so as to avert the negative impact of a malicious environment. Meanwhile, iFlask's object manager subsystems, which run in the mobile system kernel, use a built-in supplicant proxy to efficiently look up policy decisions made by the back-end security server residing in the enclave and to enforce these rules on the system. Moreover, to protect iFlask components that are not covered by the enclave, we provide an exception trap mechanism that enables TrustZone to enlarge its protection scope to selected memory regions, and we establish a secure communication channel to the enclave. The prototype is implemented on SELinux, the widely used Flask-based MAC system and the base of SEAndroid. The experimental results show that SELinux receives reliable protection: it resists all known vulnerabilities (e.g., CVE-2015-1815) and remains unaffected by the attacks in the test set. The proposed architecture has only a slight impact on performance, with degradation ranging from 0.53% to 6.49% compared to the unprotected system.
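    The Flask pattern the abstract describes — an in-kernel object manager that caches and enforces decisions computed by a separate security server (in iFlask, one hosted inside the TrustZone enclave) — can be sketched as follows. This is a minimal illustrative model, not iFlask's actual code; all class and type names are hypothetical.

```python
# Illustrative sketch of the Flask MAC split: an object manager enforces
# access decisions computed by a security server, caching results in an
# access vector cache (AVC) so most checks avoid a round trip.

class SecurityServer:
    """Stands in for the policy engine (in iFlask, inside the enclave)."""
    def __init__(self, policy):
        # policy: {(source_type, target_type, permission): allowed?}
        self.policy = policy

    def compute_av(self, source, target, perm):
        # Default-deny: anything not explicitly allowed is refused.
        return self.policy.get((source, target, perm), False)

class ObjectManager:
    """Enforcement point; mirrors the kernel-side supplicant-proxy role."""
    def __init__(self, server):
        self.server = server
        self.avc = {}  # access vector cache

    def check_access(self, source, target, perm):
        key = (source, target, perm)
        if key not in self.avc:  # cache miss -> query the security server
            self.avc[key] = self.server.compute_av(*key)
        return self.avc[key]

policy = {("app_t", "camera_dev_t", "read"): True}
om = ObjectManager(SecurityServer(policy))
assert om.check_access("app_t", "camera_dev_t", "read") is True
assert om.check_access("app_t", "camera_dev_t", "write") is False
```

    The cache is what makes isolating the security server affordable: only cache misses pay the cost of crossing into the enclave, which is consistent with the small overheads the abstract reports.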

    Learning RGB-D Salient Object Detection using background enclosure, depth contrast, and top-down features

    Recently, deep Convolutional Neural Networks (CNNs) have demonstrated strong performance on RGB salient object detection. Although depth information can help improve detection results, the exploration of CNNs for RGB-D salient object detection remains limited. Here we propose a novel deep CNN architecture for RGB-D salient object detection that exploits high-level, mid-level, and low-level features. Further, we present novel depth features that capture the ideas of background enclosure and depth contrast and are suitable for a learned approach. We show improved results compared to state-of-the-art RGB-D salient object detection methods, and we show that the low-level and mid-level depth features both contribute to these improvements. In particular, the F-score of our method is 0.848 on the RGBD1000 dataset, 10.7% better than the second-best method.
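    For context on the reported number: salient object detection papers commonly report a weighted F-measure with beta^2 = 0.3, which emphasizes precision over recall. A minimal sketch (the precision/recall values below are made up for illustration; the abstract does not report them):

```python
# Weighted F-measure as commonly used in salient object detection.
# beta2 = 0.3 is the conventional weighting in this literature.

def f_measure(precision, recall, beta2=0.3):
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

print(round(f_measure(0.9, 0.8), 3))  # -> 0.875
```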

    Automatic Generation of Grounded Visual Questions

    In this paper, we propose the first model able to generate visually grounded questions of diverse types for a single image. Visual question generation is an emerging topic that aims to ask questions in natural language based on visual input. To the best of our knowledge, automatic methods to generate meaningful questions of various types for the same visual input are lacking. To address this problem, we propose a model that automatically generates visually grounded questions of varying types. Our model takes as input both an image and the captions generated by a dense captioning model, samples the most probable question types, and then generates the questions in sequence. Experimental results on two real-world datasets show that our model outperforms the strongest baseline in terms of both correctness and diversity by a wide margin.

    GTAV-NightRain: Photometric Realistic Large-scale Dataset for Night-time Rain Streak Removal

    Rain is transparent: it reflects and refracts light in the scene toward the camera. In outdoor vision, rain, and especially rain streaks, degrades visibility and therefore needs to be removed. Existing rain streak removal datasets account for density, scale, direction, and intensity, but transparency is not fully taken into account. This problem is particularly serious in night scenes, where the appearance of rain largely depends on its interaction with scene illumination and changes drastically across positions within the image. This is problematic, because an unrealistic dataset causes serious domain bias. In this paper, we propose GTAV-NightRain, a large-scale synthetic night-time rain streak removal dataset. Unlike existing datasets, using a 3D computer graphics platform (namely GTA V) allows us to simulate the three-dimensional interaction between rain and illumination, which ensures photometric realism. The current release of the dataset contains 12,860 HD rainy images and 1,286 corresponding HD ground truth images in diverse night scenes. A systematic benchmark and analysis are provided along with the dataset to inspire further research.
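    The reported counts (12,860 rainy images against 1,286 ground truth images) imply roughly ten rainy renderings per clean scene, so training pipelines must pair many rainy frames with one shared ground truth. A minimal sketch of such pairing; the directory layout and filename scheme are assumptions, not the dataset's documented structure:

```python
# Hypothetical pairing of rainy frames with shared ground truth, assuming
# a layout like root/gt/<scene>.png and root/rain/<scene>_<k>.png.

from pathlib import Path

def build_pairs(root):
    root = Path(root)
    pairs = []
    for gt in sorted((root / "gt").glob("*.png")):
        scene = gt.stem  # e.g. "0001"
        # Every rainy rendering of this scene maps to the same clean image.
        for rainy in sorted((root / "rain").glob(f"{scene}_*.png")):
            pairs.append((rainy, gt))
    return pairs
```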