On the inability of existing security models to cope with data mobility in dynamic organizations
Modeling tools play an important role in identifying threats in traditional IT systems, where the physical infrastructure and roles are assumed to be static. In dynamic organizations, the mobility of data outside the organizational perimeter causes an increased level of threats, such as the loss of confidential data and the loss of reputation. We show that current modeling tools are not powerful enough to help the designer identify the emerging threats due to mobility of data and change of roles, because they include neither the mobility of IT systems nor the organizational dynamics in the security model. Researchers have proposed security models that focus particularly on data mobility and the dynamics of modern organizations, such as frequent role changes of a person. We show that none of the current security models simultaneously considers data mobility and organizational dynamics to a satisfactory extent. As a result, none of the current security models effectively identifies the potential security threats caused by data mobility in a dynamic organization.
Combining behavioural types with security analysis
Today's software systems are highly distributed and interconnected, and they
increasingly rely on communication to achieve their goals; due to their
societal importance, security and trustworthiness are crucial aspects for the
correctness of these systems. Behavioural types, which extend data types by
describing also the structured behaviour of programs, are a widely studied
approach to the enforcement of correctness properties in communicating systems.
This paper offers a unified overview of proposals based on behavioural types
which are aimed at the analysis of security properties.
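To make the idea concrete: a behavioural type describes not just the shape of data on a channel but the order of communications the channel may perform. The sketch below is illustrative only (the protocol, class and labels are invented, not taken from the surveyed proposals): a small runtime monitor enforces the hypothetical protocol "send request, then receive response, then close" and rejects any out-of-order action.

```python
# Minimal sketch of the behavioural-types idea: the allowed *sequence*
# of channel actions is checked, not just the data types involved.
# The protocol and all names are hypothetical examples.

PROTOCOL = ["send:request", "recv:response", "close"]

class MonitoredChannel:
    def __init__(self, protocol):
        self.protocol = list(protocol)
        self.step = 0

    def _check(self, action):
        # Reject any action that deviates from the declared protocol.
        if self.step >= len(self.protocol) or self.protocol[self.step] != action:
            raise TypeError(f"protocol violation: {action!r} at step {self.step}")
        self.step += 1

    def send(self, label, payload):
        self._check(f"send:{label}")
        return payload  # a real channel would transmit here

    def recv(self, label):
        self._check(f"recv:{label}")
        return f"<{label}>"  # stub standing in for received data

    def close(self):
        self._check("close")

ch = MonitoredChannel(PROTOCOL)
ch.send("request", {"q": "ping"})
reply = ch.recv("response")
ch.close()
```

A static behavioural type system would make such violations compile-time errors rather than runtime exceptions; the monitor above only mimics that discipline dynamically.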
Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"
According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient.
The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself.
Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
- The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
- The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
- The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion. The behaviour of the entities may vary over time.
- The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems.
This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. A year after the start of the projects, the goal of the workshop is to take stock of the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, as well as to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative.
We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for the valuable collaboration.
Big Data Privacy Context: Literature Effects On Secure Informational Assets
This article's objective is the identification of research opportunities in
the current big data privacy domain, evaluating literature effects on secure
informational assets. Until now, no study has analyzed this relation. Its
results can foster science, technologies and businesses. To achieve these
objectives, a big data privacy Systematic Literature Review (SLR) is performed
on the main scientific peer reviewed journals in Scopus database. Bibliometrics
and text mining analysis complement the SLR. This study provides support to big
data privacy researchers on: most and least researched themes, research
novelty, most cited works and authors, themes evolution through time and many
others. In addition, TOPSIS and VIKOR ranks were developed to evaluate
literature effects versus informational assets indicators. Secure Internet
Servers (SIS) was chosen as decision criteria. Results show that big data
privacy literature is strongly focused on computational aspects. However,
individuals, societies, organizations and governments face a technological
change that has just started to be investigated, with growing concerns on law
and regulation aspects. TOPSIS and VIKOR Ranks differed in several positions
and the only consistent country between literature and SIS adoption is the
United States. Countries in the lowest ranking positions represent future
research opportunities.
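For readers unfamiliar with the ranking method mentioned above, TOPSIS orders alternatives by their closeness to an ideal point and distance from an anti-ideal point. The sketch below is a minimal generic implementation (the example matrix and criteria are invented for illustration, not data from the study):

```python
# Minimal TOPSIS sketch: rank alternatives by relative closeness to the
# ideal solution. The example data below is hypothetical.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix : (n_alternatives, n_criteria) decision matrix
    weights: criterion weights
    benefit: True for benefit criteria, False for cost criteria"""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Vector-normalise each criterion column, then apply weights.
    v = (m / np.sqrt((m ** 2).sum(axis=0))) * w
    # Ideal/anti-ideal points depend on criterion direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)  # higher closeness = better rank

# Hypothetical example: 3 countries scored on literature output
# (benefit criterion) and secure-server adoption gap (cost criterion).
scores = topsis([[10, 2], [6, 1], [3, 5]],
                weights=[0.5, 0.5],
                benefit=[True, False])
```

VIKOR differs in that it balances group utility against maximum individual regret, which is why, as the abstract notes, the two ranks can disagree in several positions.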
Context-Aware and Secure Workflow Systems
Businesses do evolve. Their evolution necessitates the re-engineering of their existing "business processes", with the objectives of reducing costs, delivering services on time, and enhancing profitability in a competitive market. This is true in general, and particularly in domains such as manufacturing, pharmaceuticals and education. The central objective of workflow technologies is to separate business policies (which are normally encoded in business logic) from the underlying business applications. Such a separation is desirable as it improves the evolution of business processes and, more often than not, facilitates re-engineering at the organisation level without the need for detailed knowledge or analysis of the applications themselves. Workflow systems are currently used by many organisations with a wide range of interests and specialisations in many domains. These include, but are not limited to, office automation, the finance and banking sector, health care, art, telecommunications, manufacturing and education. We take the view that a workflow is a set of "activities", each of which performs a piece of functionality within a given "context" and may be constrained by some security requirements. These activities are coordinated to collectively achieve a required business objective. The specification of such coordination is presented as a set of "execution constraints", which include parallelisation (concurrency/distribution), serialisation, restriction, alternation, compensation and so on. Activities within workflows can be carried out by humans, various software-based application programs, or processing entities, according to organisational rules such as meeting deadlines or improving performance. Workflow execution can involve a large number of different participants, services and devices which may cross the boundaries of various organisations and access a variety of data.
This raises the importance of
- context variations and context-awareness, and
- security (e.g. access control and privacy).
The specification of precise rules, which prevent unauthorised participants from executing sensitive tasks and prevent tasks from accessing unauthorised services or (commercially) sensitive information, is crucially important. For example, medical scenarios will require that:
- only authorised doctors are permitted to perform certain tasks,
- a patient's medical records may not be accessed by anyone without the patient's consent, and
- only specific machines are used to perform given tasks at a given time.
If a workflow execution cannot guarantee these requirements, then the flow will be rejected. Furthermore, security requirements are often temporal- and/or event-related. However, most of the existing models are of a static nature; for example, it is hard, if not impossible, to express security requirements which are:
- time-dependent (e.g. a customer is allowed to be overdrawn by 100 pounds only up to the first week of every month), or
- event-dependent (e.g. a bank account can only be manipulated by its owner unless there is a change in the law or after six months of his/her death).
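One way to see why both rule kinds fit a single framework: each can be expressed as a predicate over the current execution context, so time and events are just more context attributes. The sketch below is not CS-Flow notation; all names, fields and rules are illustrative assumptions.

```python
# Sketch: time- and event-dependent security rules as predicates over
# a context dictionary. All names and rules here are hypothetical.
from datetime import date

def overdraft_rule(ctx):
    # Time-dependent: overdraft up to 100 pounds, first week of the month only.
    if ctx["overdraft"] <= 0:
        return True
    return ctx["overdraft"] <= 100 and ctx["today"].day <= 7

def account_access_rule(ctx):
    # Event-dependent: only the owner may act, unless a law-change or
    # a six-months-after-death event has been recorded.
    return (ctx["actor"] == ctx["owner"]
            or "law_changed" in ctx["events"]
            or "six_months_after_death" in ctx["events"])

def allowed(rules, ctx):
    # A workflow step proceeds only if every rule admits the context.
    return all(rule(ctx) for rule in rules)

ctx = {"overdraft": 80, "today": date(2024, 5, 3),
       "actor": "alice", "owner": "alice", "events": set()}
ok = allowed([overdraft_rule, account_access_rule], ctx)
```

Under this view, a "static" model is simply one whose predicates may not mention the clock or the event history, which is exactly the expressiveness gap the paragraph above describes.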
Currently, there is no commonly accepted model for secure and context-aware workflows or even a common agreement on which features a workflow security model should support. We have developed a novel approach to design, analyse and validate workflows. The approach has the following components:
= A modelling/design language (known as CS-Flow).
The language has the following features:
- supports concurrency;
- context and context-awareness are first-class citizens;
- supports mobility, as activities can move from one context to another;
- has the ability to express timing constraints: delay, deadlines, priority and schedulability;
- allows security policies (e.g. access control and privacy) to be expressed without the need for extra linguistic complexity; and
- enjoys sound formal semantics that allow us to animate designs and compare alternative designs.
= An approach known as communication-closed layers, which allows us to serialise a highly distributed workflow into a semantically equivalent quasi-sequential flow that is easier to understand and analyse. Such restructuring gives us a mechanism to design fault-tolerant workflows, as layers are atomic activities and various existing forward and backward error recovery techniques can be deployed.
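The layering idea above can be sketched under a simplifying assumption: if activity dependencies form a DAG, activities with no dependencies among them can be grouped into one layer, and the layers then execute one after another while each layer's activities stay concurrent. The workflow below is a hypothetical example, not taken from the CAW case study.

```python
# Sketch: group a distributed workflow into sequential layers of
# mutually independent activities. Activity names are hypothetical.
from graphlib import TopologicalSorter

deps = {                           # activity -> set of prerequisites
    "admit_patient": set(),
    "assign_ward": {"admit_patient"},
    "order_tests": {"admit_patient"},
    "review_results": {"assign_ward", "order_tests"},
}

def layers(dependencies):
    ts = TopologicalSorter(dependencies)
    ts.prepare()
    result = []
    while ts.is_active():
        ready = list(ts.get_ready())   # everything runnable right now
        result.append(sorted(ready))   # one quasi-sequential layer
        ts.done(*ready)
    return result

plan = layers(deps)
```

Each layer behaves like an atomic step of the serialised flow, which is what makes it a natural unit for attaching forward or backward error recovery.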
= A reduction semantics for CS-Flow that allows us to build tool support to animate specifications and designs. This has been evaluated on a health-care scenario, namely the Context Aware Ward (CAW) system. Health care provides huge numbers of business workflows, which will benefit from workflow adaptation and support through pervasive computing systems. The evaluation takes two complementary strands:
- providing CS-Flow models and specifications, and
- formal verification of time-critical components of a workflow.
- …