Rethinking De-Perimeterisation: Problem Analysis And Solutions
For businesses, the traditional security approach is the hard-shell model: an organisation secures all its assets behind a fixed security border, trusting the inside and distrusting the outside. However, as technologies and business processes change, this model loses its attractiveness. In a networked world, "inside" and "outside" can no longer be clearly distinguished. The Jericho Forum, an industry consortium that is part of the Open Group, coined the term de-perimeterisation for this process and suggested an approach aimed at securing data rather than complete systems and infrastructures. We do not question the reality of de-perimeterisation; however, we believe that the existing analysis of the exact problem, as well as the usefulness of the proposed solutions, has fallen short. First, there is no linear process of blurring boundaries in which security mechanisms are placed at lower and lower levels until they only surround data. On the contrary, we observe a cyclic process of connecting and disconnecting systems. As conditions change, the basic trade-off between accountability and business opportunities is made (and should be made) anew each time. Apart from that, data-level security has several limitations to start with, and there is great potential for solving security problems differently: by rearranging the responsibilities between businesses and individuals. The results of this analysis can be useful for security professionals who need to trade off different security mechanisms for their organisations and their information systems.
Natural Notation for the Domestic Internet of Things
This study explores the use of natural language to give instructions that
might be interpreted by Internet of Things (IoT) devices in a domestic `smart
home' environment. We start from the proposition that reminders can be
considered as a type of end-user programming, in which the executed actions
might be performed either by an automated agent or by the author of the
reminder. We conducted an experiment in which people wrote sticky notes
specifying future actions in their home. In different conditions, these notes
were addressed to themselves, to others, or to a computer agent. We analyse the
linguistic features and strategies that are used to achieve these tasks,
including the use of graphical resources as an informal visual language. The
findings provide a basis for design guidance related to end-user development
for the Internet of Things.Comment: Proceedings of the 5th International symposium on End-User
Development (IS-EUD), Madrid, Spain, May, 201
De-perimeterisation as a cycle: tearing down and rebuilding security perimeters
If an organisation wants to secure its IT assets, where should the security mechanisms be placed? The traditional view is the hard-shell model, where an organisation secures all its assets using a fixed security border: what is inside the security perimeter is more or less trusted, what is outside is not. Due to changes in technologies, business processes and their legal environments, this approach is no longer adequate.
This paper examines this process, which was coined de-perimeterisation by the Jericho Forum.
In this paper we analyse and define the concepts of perimeter and de-perimeterisation, and show that there is a long-term trend in which de-perimeterisation is iteratively accelerated and decelerated. In times of accelerated de-perimeterisation, technical and organisational changes take place by which connectivity between organisations and their environment scales up significantly. In times of deceleration, technical and organisational security measures are taken to decrease the security risks that come with de-perimeterisation, a movement that we call re-perimeterisation. We identify the technical and organisational mechanisms that facilitate de-perimeterisation and re-perimeterisation, and discuss the forces that cause organisations to alternate between these two movements.
CamFlow: Managed Data-sharing for Cloud Services
A model of cloud services is emerging whereby a few trusted providers manage
the underlying hardware and communications whereas many companies build on this
infrastructure to offer higher level, cloud-hosted PaaS services and/or SaaS
applications. From the start, strong isolation between cloud tenants was seen
to be of paramount importance, provided first by virtual machines (VM) and
later by containers, which share the operating system (OS) kernel. Increasingly
it is the case that applications also require facilities to effect isolation
and protection of data managed by those applications. They also require
flexible data sharing with other applications, often across the traditional
cloud-isolation boundaries; for example, when government provides many related
services for its citizens on a common platform. Similar considerations apply to
the end-users of applications. But in particular, the incorporation of cloud
services within `Internet of Things' architectures is driving the requirements
for both protection and cross-application data sharing.
These concerns relate to the management of data. Traditional access control
is application and principal/role specific, applied at policy enforcement
points, after which there is no subsequent control over where data flows; a
crucial issue once data has left its owner's control and passes between
cloud-hosted applications and within cloud services. Information Flow Control (IFC), in
addition, offers system-wide, end-to-end, flow control based on the properties
of the data. We discuss the potential of cloud-deployed IFC for enforcing
owners' dataflow policy with regard to protection and sharing, as well as
safeguarding against malicious or buggy software. In addition, the audit log
associated with IFC provides transparency, giving configurable system-wide
visibility over data flows. [...]
Comment: 14 pages, 8 figures
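The flow rule at the heart of IFC can be illustrated with a minimal sketch: entities carry secrecy and integrity tag sets, and data may flow from one entity to another only if no secrecy tag is lost and no integrity guarantee is forged. This is a generic illustration of the IFC model described above, not the CamFlow API; the `Label` type and tag names are assumptions.

```python
# Minimal sketch of an IFC flow check over secrecy/integrity tag sets.
# Illustrative only: Label, can_flow and the tag names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    secrecy: frozenset = frozenset()    # tags restricting where data may flow
    integrity: frozenset = frozenset()  # tags certifying data provenance

def can_flow(src: Label, dst: Label) -> bool:
    """A flow src -> dst is safe iff the destination holds all of the
    source's secrecy tags, and the source holds all of the destination's
    required integrity tags."""
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

# Example: a record tagged for 'patient-a' may flow to an application
# labelled for that patient, but not to an unlabelled analytics service.
record = Label(secrecy=frozenset({"patient-a"}))
clinic_app = Label(secrecy=frozenset({"patient-a", "clinic"}))
analytics = Label()

print(can_flow(record, clinic_app))  # True
print(can_flow(record, analytics))   # False
```

Because every interaction is checked against such labels system-wide, not just at a policy enforcement point, the check (and its audit record) travels with the data rather than stopping at the application boundary.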
Data Confidentiality in Mobile Ad hoc Networks
Mobile ad hoc networks (MANETs) are self-configuring infrastructure-less
networks comprised of mobile nodes that communicate over wireless links without
any central control on a peer-to-peer basis. These individual nodes act as
routers to forward both their own data and also their neighbours' data by
sending and receiving packets to and from other nodes in the network. The
relatively easy configuration and quick deployment make ad hoc networks
suitable for emergency situations (such as human or natural disasters) and for
military units in enemy territory. Securing data dissemination between these
nodes in such networks, however, is a very challenging task. Exposing such
information to anyone else other than the intended nodes could cause a privacy
and confidentiality breach, particularly in military scenarios. In this paper
we present a novel framework to enhance the privacy and data confidentiality in
mobile ad hoc networks by attaching the originator policies to the messages as
they are sent between nodes. We evaluate our framework using the Network
Simulator (NS-2) to check whether the privacy and confidentiality of the
originator are met. For this we implemented the Policy Enforcement Points
(PEPs), as NS-2 agents that manage and enforce the policies attached to packets
at every node in the MANET.
Comment: 12 pages
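The mechanism described above, originator policies attached to messages and checked by a PEP at every node, can be sketched as follows. This is an illustrative model only, assuming a simple allow-list policy format; the packet fields and class names are hypothetical, not the paper's NS-2 implementation.

```python
# Sketch of originator-attached policies enforced by a per-node PEP.
# Assumed, simplified policy format: a set of node ids allowed to read.
from dataclasses import dataclass

@dataclass
class Packet:
    payload: str
    originator: str
    policy: frozenset  # node ids permitted to read the payload

class PEP:
    """Policy Enforcement Point running at one node: every node forwards
    packets, but the payload is released to the local application only
    if the originator's policy names this node."""
    def __init__(self, node_id: str):
        self.node_id = node_id

    def receive(self, pkt: Packet):
        if self.node_id in pkt.policy:
            return pkt.payload  # authorised: deliver to the application
        return None             # unauthorised: may forward, but not read

pkt = Packet("coordinates", originator="hq",
             policy=frozenset({"unit-1", "unit-2"}))
print(PEP("unit-1").receive(pkt))        # coordinates
print(PEP("eavesdropper").receive(pkt))  # None
```

In the paper's evaluation this enforcement happens inside the simulator at each hop, so intermediate nodes can still route packets whose payload they are not entitled to read.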
Why Not Privacy By Default?
We live in a Track-Me world, one from which opting out is often not possible. Firms collect reams of data about all of us, quietly tracking our mobile devices, our web surfing, and our email for marketing, pricing, product development, and other purposes. Most consumers both oppose tracking and want the benefits tracking can provide. In response, policymakers have proposed that consumers be given significant control over when, how, and by whom they are tracked through a system of defaults (i.e., "Track-Me" or "Do-Not-Track") from which consumers can opt out.
The use of a default scheme is premised on three assumptions. First, that for consumers with weak or conflicted preferences, any default chosen will be "sticky," meaning that more consumers will stay in the default position than would choose it if an affirmative action were required to reach the position. Second, that those consumers with a fairly strong preference for the opt-out position, and only those consumers, will opt out. Third, that where firms oppose the default position, they will be forced to explain it in the course of trying to convince consumers to opt out, resulting in well-informed decisions by consumers.
This article demonstrates that for tracking defaults, these assumptions may not consistently hold. Past experience with the use of defaults in policymaking teaches that Track-Me defaults are likely to be too sticky, Do-Not-Track defaults are likely to be too slippery, and neither are likely to be information-forcing.
These conclusions should inform the "Do-Not-Track" policy discussions actively taking place in the U.S., in the E.U., and at the World Wide Web Consortium. They also cast doubt on the privacy and behavioral economics literatures that advocate the use of "nudges" to improve consumer decisions about privacy.