Towards a Security Engineering Process Model for Electronic Business Processes
Business process management (BPM) and accompanying systems aim at enabling
enterprises to become adaptive. In spite of the dependency of enterprises on
secure business processes, BPM languages and techniques provide little
support for security. Several complementary approaches have been proposed for
security in the domain of BPM. Nevertheless, support for a systematic procedure
for the development of secure electronic business processes is still missing.
In this paper, we pinpoint the need for a security engineering process model in
the domain of BPM and identify key requirements for such a process model. Comment: Ninth European Dependable Computing Conference (EDCC 2012).
Verifying for Compliance to Data Constraints in Collaborative Business Processes.
Production processes are nowadays fragmented across different companies and organized in global collaborative networks. This is the result of the first wave of globalization that, among other factors, was enabled by the diffusion of Internet-based Information and Communication Technologies (ICTs) in the early 2000s. The recent wave of new technologies possibly leading to the fourth industrial revolution – the so-called Industry 4.0 – is further multiplying opportunities. Accessing global customers opens great opportunities for organizations, including small and medium enterprises (SMEs), but it requires the ability to adapt to different requirements and conditions, volatile demand patterns and fast-changing technologies. Regardless of the industrial sector, the processes used in an organization must be compliant with rules, standards, laws and regulations. Non-compliance exposes enterprises to litigation and financial fines. Thus, compliance verification is a major concern, not only to keep pace with changing regulations but also to address the rising concerns of security, product and service quality, and data privacy. The software used, in particular for process automation, must be designed accordingly. In relation to process management, we propose a new way to proactively check the compliance of currently running business processes, using Description Logic and Linear Temporal Logic to describe the constraints related to data. Related algorithms are presented to detect potential violations.
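The compliance-checking idea in the abstract above can be illustrated with a toy check of one temporal data constraint over a finished process trace. This is only a sketch of the general technique: the event names and the single "response" pattern are invented for illustration, not the paper's Description Logic/LTL machinery.

```python
# Toy check of an LTL-style "response" data constraint on a finite trace:
# every occurrence of `trigger` must eventually be followed by `response`
# (the finite-trace reading of G(trigger -> F response)).

def satisfies_response(trace, trigger, response):
    """True if no `trigger` event is left unanswered by a later `response`."""
    pending = False
    for event in trace:
        if event == trigger:
            pending = True          # a trigger is now awaiting its response
        elif event == response:
            pending = False         # all earlier triggers are answered
    return not pending

# Hypothetical traces: personal data must be anonymized before the case ends.
compliant = ["start", "collect_personal_data", "process", "anonymize_data", "end"]
violating = ["start", "collect_personal_data", "process", "end"]

print(satisfies_response(compliant, "collect_personal_data", "anonymize_data"))  # True
print(satisfies_response(violating, "collect_personal_data", "anonymize_data"))  # False
```

A proactive monitor would run such checks on the trace-so-far of each running process instance and flag instances that can no longer satisfy the constraint.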
Design of an Integrated Role-Based Access Control Infrastructure for Adaptive Workflow Systems
With increasing numbers of organizations automating their business processes by using workflow systems, the security aspects of workflow systems have become a heavily researched area. Also, most workflow processes nowadays need to be adaptive, i.e., constantly changing, to meet changing business conditions. However, little attention has been paid to integrating security and adaptive workflow. In this paper, we investigate this important research topic, with emphasis on Role-Based Access Control (RBAC) in adaptive workflow. Based on our earlier work on a 3-tier adaptive workflow architecture, we present the design of a similar 3-tier RBAC infrastructure, and we show that it conceptually mirrors our adaptive workflow architecture. We also describe the mappings between them, and we show how these mappings can be used to manage organizational RBAC constraints when the workflows are being adapted continuously. We illustrate our ideas throughout the paper with a simple yet non-trivial example.
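The kind of organizational RBAC constraint such an infrastructure must re-check when a workflow is adapted can be sketched as follows. The roles, permissions, and the separation-of-duty check are hypothetical examples, not the paper's 3-tier design.

```python
# Minimal RBAC sketch: role -> permission lookup, plus one common
# organizational constraint (separation of duty) that must still hold
# after a workflow adaptation adds or reassigns a task.

ROLE_PERMISSIONS = {
    "clerk":   {"enter_order"},
    "manager": {"enter_order", "approve_order"},
}

def can_perform(role, permission):
    """True if the role holds the given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

def separation_of_duty(assignment, task_a, task_b):
    """The two tasks must not be performed by the same user."""
    return assignment[task_a] != assignment[task_b]

# An adaptation adds an approval task; assigning it to a clerk is rejected:
print(can_perform("clerk", "approve_order"))    # False
print(can_perform("manager", "approve_order"))  # True

assignment = {"enter_order": "alice", "approve_order": "bob"}
print(separation_of_duty(assignment, "enter_order", "approve_order"))  # True
```

In an adaptive setting, such checks would run automatically whenever the workflow definition changes, so that an adaptation violating the RBAC constraints is rejected before deployment.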
Scalable And Secure Provenance Querying For Scientific Workflows And Its Application In Autism Study
In the era of big data, scientific workflows have become essential to automate scientific experiments and guarantee repeatability. As both data and workflows increase in scale, a data lineage management system commensurate with the complexity of the workflow also becomes necessary, calling for new scalable storage, query, and analytics infrastructure. Such a system, which manages and preserves the derivation history and transformation of data, is known as a provenance system and is essential for maintaining the quality and trustworthiness of data products and ensuring the reproducibility of scientific discoveries. With a flurry of research and the increased adoption of scientific workflows for processing sensitive data, e.g., in the health and medical domain, securing information flow and instrumenting access privileges in the system have become a fundamental precursor to deploying large-scale scientific workflows. This has become even more important now that teams of scientists around the world can collaborate on experiments using globally distributed sensitive data sources. Hence, it has become imperative to augment scientific workflow systems, as well as the underlying provenance management systems, with data security protocols. Provenance systems devoid of data security protocols are vulnerable. In this dissertation research, we delineate how scientific workflows can improve therapeutic practices in autism spectrum disorders. The data-intensive computation inherent in these workflows and the sensitive nature of the data necessitate support for scalable, parallel and robust provenance queries and a secured view of data. With this in mind, we propose a parallel, robust, reliable and scalable provenance query language and introduce the concept of access privilege inheritance in provenance systems.
We characterize the desirable properties of a role-based access control protocol in scientific workflows and demonstrate how these qualities are integrated into the workflow provenance system as well. Finally, we describe how these concepts fit within the DATAVIEW workflow management system.
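Access-privilege inheritance over a provenance graph can be illustrated with a small sketch. The inheritance direction assumed here (a grant on a source item propagates to everything derived from it) and all names are invented for illustration, not the dissertation's actual semantics.

```python
# Sketch: a privilege granted on a data item is inherited by items derived
# from it, so a provenance query only needs explicit grants at the sources.
from collections import deque

# derived_from[x] lists the items x was directly derived from.
derived_from = {
    "raw_scan": [],
    "cleaned":  ["raw_scan"],
    "features": ["cleaned"],
    "report":   ["features"],
}

grants = {"raw_scan": {"alice"}}  # explicit grants only at the source

def has_access(user, item):
    """User may read `item` if granted on it or on any provenance ancestor."""
    queue, seen = deque([item]), {item}
    while queue:
        node = queue.popleft()
        if user in grants.get(node, set()):
            return True
        for parent in derived_from.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return False

print(has_access("alice", "report"))  # True, inherited from raw_scan
print(has_access("bob", "report"))    # False
```

A real provenance store would evaluate this traversal inside the query engine so that inherited privileges do not have to be materialized for every derived item.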
Enabling Mobile Devices to Host Consumers and Providers of RESTful Web Services
The strong growth in the use of mobile devices such as smartphones and tablets in Enterprise Information Systems has led to growing research in the area of mobile Web services. Web services are applications developed based on network standards such as Service-Oriented Architecture (SOA) and Representational State Transfer (REST). The mobile research community has mostly focused on facilitating mobile devices as client consumers, especially of heterogeneous Web services. However, with the advancement of mobile device capabilities in terms of processing power and storage, this thesis seeks to utilize these devices as hosts of REST Web services.
In order to host services on mobile devices, some key challenges have to be addressed. Since access to data and services is provided by mobile devices communicating over unstable wireless networks, the challenges of network latency and of synchronizing data (i.e., the Web resources) among the mobile participants must be addressed.
To address these challenges, this thesis proposes a cloud-based middleware that enables reliable communication between the mobile hosts in unreliable Wi-Fi networks. The middleware employs techniques such as message routing and the detection of Web resource state changes in order to push data to the mobile participants in real time. Additionally, to ensure high availability of data, the proposed middleware has a cache component which stores replicas of the mobile hosts’ Web resources. As a result, in case a mobile host is disconnected, the Web resources of the host can be accessed on the middleware. The key contributions of this thesis are the identification of mobile devices as hosts of RESTful Web services and the implementation of middleware frameworks that support mobile communication in unreliable networks.
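The replica-cache fallback described in this abstract can be sketched in a few lines. The class and method names are invented, and a real middleware would use HTTP requests and push notifications rather than in-process calls.

```python
# Sketch of the cache component: try the mobile host first; if it is
# unreachable, serve the last known replica of the requested Web resource.

class Middleware:
    def __init__(self):
        self.cache = {}  # resource URI -> last known representation

    def fetch_from_host(self, uri):
        """Stand-in for an HTTP GET against the mobile host."""
        raise ConnectionError("host unreachable")  # simulate a dropped Wi-Fi link

    def on_host_update(self, uri, representation):
        """State change pushed by a connected host: refresh the replica."""
        self.cache[uri] = representation

    def get(self, uri):
        try:
            representation = self.fetch_from_host(uri)
            self.cache[uri] = representation   # keep the replica fresh
            return representation
        except ConnectionError:
            if uri in self.cache:
                return self.cache[uri]         # serve replica while host is offline
            raise

mw = Middleware()
mw.on_host_update("/sensors/42", {"temp": 21.5})
print(mw.get("/sensors/42"))  # {'temp': 21.5}, served from the replica
```

The design choice here is availability over strict freshness: a disconnected host's resources remain readable from the replica, at the cost of possibly serving slightly stale state until the host reconnects.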
Vulnerability Identification Errors in Security Risk Assessments
At present, companies rely on information technology systems to achieve their business objectives, making them vulnerable to cybersecurity threats. Information security risk assessments help organisations to identify their risks and vulnerabilities. An accurate identification of risks and vulnerabilities is a challenge, because the input data is uncertain. So-called ‘vulnerability identification errors’ can occur if false positive vulnerabilities are identified, or if vulnerabilities remain unidentified (false negatives). ‘Accurate identification’ in this context means that all vulnerabilities identified do indeed pose a risk of a security breach for the organisation. An experiment performed with German IT security professionals in 2011 confirmed that vulnerability identification errors do occur in practice. In particular, false positive vulnerabilities were identified by participants.
In information security (IS) risk assessments, security experts analyze the organisation’s assets in order to identify vulnerabilities. Methods such as brainstorming, checklists, scenario analysis, impact analysis, and cause analysis (ISO, 2009b) are used to identify vulnerabilities. These methods use uncertain input data for vulnerability identification, because the probabilities, effects and losses of vulnerabilities cannot be determined exactly (Fenz and Ekelhart, 2011). Furthermore, business security needs are not considered properly; the security checklists and standards used to identify vulnerabilities do not consider company-specific security requirements (Siponen and Willison, 2009). In addition, the intentional behaviour of an attacker when exploiting vulnerabilities for malicious purposes further increases the uncertainty, because predicting human behaviour is not just a matter of knowing existing vulnerabilities and their consequences (Pieters and Consoli, 2009), but also of anticipating future attacks. As a result, current approaches determine risks and vulnerabilities under a high degree of uncertainty, which can lead to errors.
This thesis proposes an approach to resolve vulnerability identification errors using security requirements and business process models. Security requirements represent the business security needs and determine whether any given vulnerability is a security risk for the business. Information assets’ security requirements are evaluated in the context of the business process model, in order to determine whether security functions are implemented and operating correctly. Systems, personnel and physical parts of business processes, as well as IT processes, are considered in the security requirement evaluation, and this approach is validated in three steps. Firstly, the systematic procedure is compared to two best-practice approaches. Secondly, the risk result accuracy is compared to a best-practice risk-assessment approach, as applied to several real-world examples within an insurance company. Thirdly, the capability to determine risk more accurately by using business processes and security requirements is tested in a quasi-experiment, using security professionals.
This thesis demonstrates that risk assessment methods can benefit from the explicit evaluation of security requirements in the business context during risk identification, in order to resolve vulnerability identification errors and to provide a criterion for security.
A unified framework for security visualization and enforcement in business process driven environments
Service-oriented architecture offers a promising approach for supporting interoperability and flexibility in the context of increasingly dynamic and rapidly changing requirements in the business world. However, the encapsulation of business functionalities as self-contained services, one of the main concepts of a SOA, brings new challenges. While business experts concentrate on the domain-specific aspects, other non-functional requirements such as security remain mostly neglected, if understood at all. Costs for security administration may increase, business-driven security requirements may not be addressed, and security configurations may fail to match internal and external regulations and guidelines. Based on these needs, we propose a technology-independent framework that provides graphical concepts for incorporating security demands, facilitating the handling of security requirements from their specification to their realization.
Distribuição de tarefas em sistemas de workflow usando lógica nebulosa (Task distribution in workflow systems using fuzzy logic)
Master’s dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Computer Science (Programa de Pós-Graduação em Ciência da Computação)
Tools and techniques for analysing the impact of information security
PhD Thesis
The discipline of information security is employed by organisations to protect the confidentiality,
integrity and availability of information, often communicated in the form of
information security policies. A policy expresses rules, constraints and procedures to guard
against adversarial threats and reduce risk by instigating desired and secure behaviour of
those people interacting with information legitimately. To keep aligned with a dynamic threat
landscape, evolving business requirements, regulation updates, and new technologies, a policy
must undergo periodic review and change. Chief Information Security Officers (CISOs) are
the main decision makers on information security policies within an organisation. Making
informed policy modifications involves analysing and therefore predicting the impact of those
changes on the success rate of business processes often expressed as workflows. Security
brings an added burden to completing a workflow. Adding a new security constraint may
reduce success rate or even eliminate it if a workflow is always forced to terminate early. This
can increase the chances of employees bypassing or violating a security policy. Removing an
existing security constraint may increase success rate but may also increase the risk to
security. A lack of suitably aimed impact analysis tools and methodologies for CISOs means
impact analysis is currently a somewhat manual and ambiguous procedure. Analysis can
be overwhelming, time consuming, error prone, and yield unclear results, especially when
workflows are complex, have a large workforce, and diverse security requirements. This
thesis considers the provision of tools and more formal techniques specific to CISOs to help
them analyse the impact modifying a security policy has on the success rate of a workflow.
More precisely, these tools and techniques have been designed to efficiently compare the
impact between two versions of a security policy applied to the same workflow, one before,
the other after a policy modification.
This work focuses on two specific types of security impact analysis. The first is quantitative
in nature, providing a measure of success rate for a security constrained workflow
which must be executed by employees who may be absent at runtime. This work considers
quantifying workflow resiliency which indicates a workflow’s expected success rate assuming
the availability of employees to be probabilistic. New aspects of quantitative resiliency are introduced in the form of workflow metrics, and risk management techniques to manage
workflows that must work with a resiliency below acceptable levels. Defining these risk
management techniques has led to exploring the reduction of resiliency computation time and
analysing resiliency in workflows with choice. The second area of focus is more qualitative,
in terms of facilitating analysis of how people are likely to behave in response to security
and how that behaviour can impact the success rate of a workflow at a task level. Large
amounts of information from disparate sources exist on human behavioural factors in a
security setting which can be aligned with security standards and structured within a single
ontology to form a knowledge base. Consultations with two CISOs have been conducted,
whose responses have driven the implementation of two new tools, one graphical, the other
Web-oriented allowing CISOs and human factors experts to record and incorporate their
knowledge directly within an ontology. The ontology can be used by CISOs to assess the
potential impact of changes made to a security policy and help devise behavioural controls
to manage that impact. The two consulted CISOs have also carried out an evaluation of the
Web-oriented tool.
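The quantitative notion of workflow resiliency discussed in this abstract, the expected success rate of a security-constrained workflow when employee availability is probabilistic, can be illustrated with a toy computation. The tasks, authorisation policy, and availability probabilities are invented, and the model ignores refinements such as separation of duty.

```python
# Toy resiliency computation: enumerate all availability scenarios and sum
# the probability of those in which every task has at least one authorised,
# available user.
from itertools import product

authorised = {                      # task -> users the policy permits
    "prepare": {"alice", "bob"},
    "approve": {"carol"},
}
availability = {"alice": 0.9, "bob": 0.8, "carol": 0.7}

def resiliency():
    users = sorted(availability)
    total = 0.0
    for present in product([True, False], repeat=len(users)):
        scenario = dict(zip(users, present))
        # Probability of this exact availability scenario.
        p = 1.0
        for u in users:
            p *= availability[u] if scenario[u] else 1 - availability[u]
        # Workflow completes if every task has an available authorised user.
        if all(any(scenario[u] for u in authorised[t]) for t in authorised):
            total += p
    return total

print(round(resiliency(), 4))  # 0.686
```

Here the workflow succeeds exactly when Carol is present and at least one of Alice or Bob is, i.e. (1 - 0.1 x 0.2) x 0.7 = 0.686; enumerating scenarios is exponential in the number of users, which is one reason the thesis explores reducing resiliency computation time.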