
    Actor-network procedures: Modeling multi-factor authentication, device pairing, social interactions

    As computation spreads from computers to networks of computers, and migrates into cyberspace, it ceases to be globally programmable, but it remains programmable indirectly: network computations cannot be controlled, but they can be steered by local constraints on network nodes. The tasks of "programming" global behaviors through local constraints belong to the area of security. The "program particles" that assure that a system of local interactions leads towards some desired global goals are called security protocols. As computation spreads beyond cyberspace, into physical and social spaces, new security tasks and problems arise. As networks are extended by physical sensors and controllers, including humans, and interlaced with social networks, the engineering concepts and techniques of computer security blend with the social processes of security. These new connectors for computational and social software require a new "discipline of programming" of global behaviors through local constraints. Since the new discipline seems to be emerging from a combination of established models of security protocols with older methods of procedural programming, we use the name procedures for these new connectors that generalize protocols. In the present paper we propose actor-networks as a formal model of computation in heterogeneous networks of computers, humans, and their devices, and we introduce Procedure Derivation Logic (PDL) as a framework for reasoning about security in actor-networks. Along the way, we survey the guiding ideas of Protocol Derivation Logic (also PDL) that evolved through our work in security over the last 10 years. Both formalisms are geared towards graphic reasoning and tool support. We illustrate their workings by analysing a popular form of two-factor authentication, and a multi-channel device pairing procedure devised for this occasion.
    Comment: 32 pages, 12 figures, 3 tables; journal submission; extended references, added discussion
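
    To make the two-factor pattern analysed in the paper concrete, the sketch below accepts a login only when a password (something the user knows) and a device-bound one-time code (something the user has) both verify. The TOTP-like code derivation and all names here are assumptions of this illustration, not part of PDL or the paper's formalism.

```python
import hashlib, hmac, time

def one_time_code(shared_secret: bytes, window: int) -> str:
    """Derive a 6-digit code from a device secret and a 30 s time window (TOTP-like)."""
    mac = hmac.new(shared_secret, window.to_bytes(8, "big"), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def verify(password_db, secret_db, user, password, submitted_code) -> bool:
    """Accept only if both factors hold: knowledge (password) and possession (device)."""
    expected = one_time_code(secret_db[user], int(time.time()) // 30)
    password_ok = hmac.compare_digest(password_db[user],
                                      hashlib.sha256(password.encode()).digest())
    return password_ok and hmac.compare_digest(expected, submitted_code)

# Illustrative enrollment and login:
pw_db = {"alice": hashlib.sha256(b"hunter2").digest()}
otp_db = {"alice": b"device-secret"}
code = one_time_code(otp_db["alice"], int(time.time()) // 30)  # as shown on the device
print(verify(pw_db, otp_db, "alice", "hunter2", code))  # True
```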

    Cascaded Scene Flow Prediction using Semantic Segmentation

    Given two consecutive frames from a pair of stereo cameras, 3D scene flow methods simultaneously estimate the 3D geometry and motion of the observed scene. Many existing approaches use superpixels for regularization, but may predict inconsistent shapes and motions inside rigidly moving objects. We instead assume that scenes consist of foreground objects rigidly moving in front of a static background, and use semantic cues to produce pixel-accurate scene flow estimates. Our cascaded classification framework accurately models 3D scenes by iteratively refining semantic segmentation masks, stereo correspondences, 3D rigid motion estimates, and optical flow fields. We evaluate our method on the challenging KITTI autonomous driving benchmark, and show that accounting for the motion of segmented vehicles leads to state-of-the-art performance.
    Comment: International Conference on 3D Vision (3DV), 2017 (oral presentation)
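
    To show the cascade structure concretely, here is a minimal sketch in which stub functions stand in for the paper's learned segmentation, stereo, rigid-motion, and optical-flow models; only the iterative re-fitting of each estimate against the others is illustrated.

```python
import numpy as np

def segment(img, flow):
    # placeholder: mark bright or already-moving pixels as "foreground object"
    return ((img > img.mean()) | (np.abs(flow).sum(-1) > 0.5)).astype(np.uint8)

def stereo(left, right, seg):
    # placeholder: one disparity for foreground, another for background
    return np.where(seg > 0, 12.0, 4.0)

def rigid_motion(seg, disp):
    # placeholder: a single 3D translation for the segmented object
    return np.array([1.0, 0.0, 0.0]) if seg.any() else np.zeros(3)

def optical_flow(seg, motion):
    # placeholder: project the rigid motion into the image as 2D flow
    flow = np.zeros(seg.shape + (2,))
    flow[seg > 0] = motion[:2]
    return flow

def cascade(left, right, iters=3):
    flow = np.zeros(left.shape + (2,))
    for _ in range(iters):  # each pass refines every estimate given the others
        seg = segment(left, flow)
        disp = stereo(left, right, seg)
        motion = rigid_motion(seg, disp)
        flow = optical_flow(seg, motion)
    return seg, disp, motion, flow

seg, disp, motion, flow = cascade(np.random.rand(4, 4), np.random.rand(4, 4))
```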

    The scale of sense: spatial extent and multimodal urban design

    This paper is derived from the work of the UK AHRC/EPSRC 'Designing for the 21st Century' research project Multimodal Representation of Urban Space. This research group seeks to establish a new form of notation for urban design which pays attention to our entire sensory experience of place. This paper addresses one of the most important aspects of this endeavour: scale. Scale is of course a familiar abstraction to all architects and urban designers, allowing for representations tailored to different levels of detail and allowing drawings to be translated into built structures. Scale is also a factor in human experience: the spatial extent of each of our senses is different. Many forms of architectonic representation are founded upon the extension of the visual modality, and designs are accordingly tuned towards this sense. We all know from our own experience, however, that urban environments are a feast for all the senses. The visceral quality of walking down a wide tree-lined boulevard differs greatly from the subterranean crowds of the subway, or the meandering pause invited by the city square. Similarly, our experience of hearing and listening is more than passive observation, by virtue of our own power of voice and the feedback created by our percussive movements across a surface or through a medium. Taste and smell are also excited by the urban environment: the social importance of food preparation and the associations between smell and public health are both issues of sensory experience. The tactile experience of space, felt with the entire body as well as our more sensitive hands, allows for direct manipulation and interaction as well as sensations of mass, heat, proximity and texture. Our project team will present a series of tools for designers which explore the variety of sensory modalities and their associated scales. This suite of notations and analytical frameworks turns our attention to the sensory experience of places, and offers a method and pattern book for more holistic multi-sensory and multi-modal urban design.
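
    As a rough illustration of the "scale of sense" idea, the snippet below encodes notional spatial extents for each modality and looks up which senses a feature at a given distance might engage. The numeric ranges are assumptions made for demonstration, not values from the project's notation.

```python
# Hypothetical spatial extents (in metres) at which each sensory modality
# plausibly contributes to urban experience; figures are illustrative only.
SENSORY_EXTENT_M = {
    "sight":   (1.0, 5000.0),  # close detail out to a distant skyline
    "hearing": (0.1, 500.0),   # voices, traffic, percussive footsteps
    "touch":   (0.0, 2.0),     # surfaces within reach of hands and body
    "smell":   (0.1, 50.0),    # food preparation, vegetation, exhaust
    "taste":   (0.0, 0.1),     # essentially at contact
}

def modalities_at(distance_m: float) -> list[str]:
    """Which senses might a feature at this distance engage?"""
    return [m for m, (lo, hi) in SENSORY_EXTENT_M.items() if lo <= distance_m <= hi]

print(modalities_at(30.0))  # ['sight', 'hearing', 'smell']
```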

    A PUF- and biometric-based lightweight hardware solution to increase security at sensor nodes

    Security is essential in sensor nodes which acquire and transmit sensitive data, yet these nodes are tightly constrained in processing, memory, and power consumption. Cryptographic algorithms based on symmetric keys are very suitable for them; the drawback is that secure storage of the secret keys is required. In this work, a low-cost solution is presented to obfuscate secret keys with Physically Unclonable Functions (PUFs), which exploit the hardware identity of the node. In addition, a lightweight fingerprint recognition solution is proposed, which can be implemented in low-cost sensor nodes. Since biometric data of individuals are sensitive, they are also obfuscated with PUFs. Both solutions allow authenticating the origin of the sensed data with a proposed dual-factor authentication protocol. One factor is the unique physical identity of the trusted sensor node that measures the data. The other factor is the physical presence of the legitimate individual in charge of authorizing their transmission. Experimental results are included to prove how the proposed PUF-based solution can be implemented with the SRAMs of commercial Bluetooth Low Energy (BLE) chips which belong to the communication module of the sensor node. Implementation results show how the proposed fingerprint recognition, based on the novel texture-based feature named QFingerMap16 (QFM), can be implemented fully inside a low-cost sensor node. Robustness, security, and privacy issues of the proposed sensor nodes are discussed and analyzed with experimental results from PUFs and fingerprints taken from public and standard databases.
    Funding: Ministerio de Economía, Industria y Competitividad TEC2014-57971-R, TEC2017-83557-
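
    The key-obfuscation idea can be sketched as a simplified fuzzy-commitment pattern: the key is masked with the chip's SRAM start-up response, and only the helper data is stored. This is an illustrative sketch under stated assumptions (no error correction, simulated SRAM PUF), not the paper's actual construction or the QFingerMap16 scheme.

```python
import hashlib, secrets

def sram_puf_response(seed: int, n: int = 32) -> bytes:
    """Stand-in for reading n bytes of SRAM start-up state on one chip (simulated)."""
    return hashlib.sha256(seed.to_bytes(8, "big")).digest()[:n]

def obfuscate(key: bytes, puf: bytes) -> bytes:
    """Helper data for non-volatile storage; reveals nothing without the PUF response."""
    return bytes(k ^ p for k, p in zip(key, puf))

def recover(helper: bytes, puf: bytes) -> bytes:
    """On the same device, the PUF response unmasks the key."""
    return bytes(h ^ p for h, p in zip(helper, puf))

key = secrets.token_bytes(32)
device_puf = sram_puf_response(seed=42)
helper = obfuscate(key, device_puf)
assert recover(helper, device_puf) == key             # same chip: key recovered
assert recover(helper, sram_puf_response(7)) != key   # different chip: garbage
```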

    HiDAnet: RGB-D Salient Object Detection via Hierarchical Depth Awareness

    RGB-D saliency detection aims to fuse multi-modal cues to accurately localize salient regions. Existing works often adopt attention modules for feature modeling, but few methods explicitly leverage fine-grained details to merge with semantic cues. Thus, despite the auxiliary depth information, it is still challenging for existing models to distinguish objects with similar appearances but at distinct camera distances. In this paper, from a new perspective, we propose a novel Hierarchical Depth Awareness network (HiDAnet) for RGB-D saliency detection. Our motivation comes from the observation that the multi-granularity properties of geometric priors correlate well with neural network hierarchies. To realize multi-modal and multi-level fusion, we first use a granularity-based attention scheme to strengthen the discriminatory power of RGB and depth features separately. Then we introduce a unified cross dual-attention module for multi-modal and multi-level fusion in a coarse-to-fine manner. The encoded multi-modal features are gradually aggregated into a shared decoder. Further, we exploit a multi-scale loss to take full advantage of the hierarchical information. Extensive experiments on challenging benchmark datasets demonstrate that our HiDAnet performs favorably against state-of-the-art methods by large margins.
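
    A minimal sketch of the cross-modal gating idea follows: each modality's features are re-weighted by channel attention derived from the other before a convolutional merge. Layer shapes and the gating scheme are assumptions of this sketch, not the published HiDAnet architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Toy RGB/depth fusion: each stream gates the other, then features are merged."""
    def __init__(self, channels: int):
        super().__init__()
        # Channel-attention gates (squeeze to 1x1, project, squash to [0, 1]).
        self.rgb_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.depth_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, rgb_feat, depth_feat):
        # Each modality is re-weighted by attention derived from the other.
        rgb = rgb_feat * self.depth_gate(depth_feat)
        depth = depth_feat * self.rgb_gate(rgb_feat)
        return self.merge(torch.cat([rgb, depth], dim=1))

fusion = CrossModalFusion(64)
out = fusion(torch.randn(1, 64, 56, 56), torch.randn(1, 64, 56, 56))
print(out.shape)  # torch.Size([1, 64, 56, 56])
```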