Process assessment for use in very small enterprises: the NOEMI assessment methodology
This paper relates the development and experimentation of an IT process assessment methodology designed specifically for use in very small enterprises (VSEs). The methodology, called NOEMI, was developed as a critical part of a public research project at the Centre de Recherche Public Henri Tudor (Luxembourg). Initially, the main objective of the NOEMI process assessment methodology was to contribute directly to the implementation of a collaborative IT-sourcing model developed in the same research project. The process portfolio aims at full coverage of the usual IT practices in VSEs. It is business-value-driven and organized into five process areas: infrastructure, service support, management, security, and documentation. The processes themselves are based on a combined approach of ISO/IEC 15504 and the IT Infrastructure Library. The capability model defined in the NOEMI methodology explores the gap between level 0 and level 1 of ISO/IEC 15504 in order to match the reality of VSEs more accurately. The capability profile has four levels and is established for the process areas rather than for individual processes, allowing easy comparison between VSEs. We are now performing the seventh experimentation of the NOEMI assessment methodology; according to the feedback of the VSEs, each case has been a success. We are also considering the transfer of our methodology to French and Belgian partners through dissemination projects. This leads us to promote the NOEMI assessment methodology as a public package tool designed specifically for the VSE context, aiming to enhance business value through IT. This paper introduces the methodology and considerations based on case studies.
Keywords: assessment methodology, capability model, process portfolio, very small enterprise, service management, SPICE, improvement program, ITIL.
Modeling and estimation of multi-source clustering in crime and security data
While the presence of clustering in crime and security event data is well
established, the mechanism(s) by which clustering arises is not fully
understood. Both contagion models and history-independent correlation models
are applied, but not simultaneously. In an attempt to disentangle contagion
from other types of correlation, we consider a Hawkes process with background
rate driven by a log Gaussian Cox process. Our inference methodology is an
efficient Metropolis adjusted Langevin algorithm for filtering of the intensity
and estimation of the model parameters. We apply the methodology to property
and violent crime data from Chicago, terrorist attack data from Northern
Ireland and Israel, and civilian casualty data from Iraq. For each data set we
quantify the uncertainty in the levels of contagion vs. history-independent
correlation.
Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/13-AOAS647
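The model the abstract describes adds a self-exciting (contagion) term on top of a stochastic background rate. A minimal sketch of the conditional intensity is below; the exponential excitation kernel, the parameter values, and the constant background rate are illustrative assumptions standing in for the paper's log Gaussian Cox process background, not the authors' actual implementation:

```python
import math

def hawkes_intensity(t, events, mu, alpha=0.5, beta=1.0):
    """Conditional intensity of a Hawkes process with an exponential
    kernel: lambda(t) = mu(t) + sum over past events t_i < t of
    alpha * beta * exp(-beta * (t - t_i)).

    mu(t) is the background rate; in the paper it is driven by a log
    Gaussian Cox process (i.e. exp of a Gaussian-process sample path),
    which the constant callable below merely stands in for.
    """
    excitation = sum(alpha * beta * math.exp(-beta * (t - ti))
                     for ti in events if ti < t)
    return mu(t) + excitation

# With no past events before t, the intensity is just the background.
events = [1.0, 2.5, 3.0]
baseline = hawkes_intensity(0.5, events, mu=lambda t: 0.2)
# Shortly after a burst of events, contagion raises the intensity.
excited = hawkes_intensity(4.0, events, mu=lambda t: 0.2)
```

Disentangling the two sources of clustering then amounts to estimating how much of the observed intensity is carried by the excitation sum versus by fluctuations in mu(t).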
A Comprehensive Firewall Testing Methodology
This paper proposes an all-encompassing test methodology for firewalls. It extends the life-cycle model to revisit the major phases of the life cycle after a firewall is in service, as foundations for the tests. The focus of the tests is to show whether the firewall is, or is not, still fit for purpose. It also focuses on the traceability from business requirements through to policy, rule sets, physical design, implementation, egress and ingress testing, monitoring and auditing. The guidelines are provided by a Test and Evaluation Master Plan (TEMP). The methodology is very much process-driven and in keeping with the Systems Security Engineering Capability Maturity Model (SSE-CMM). This provides multiple advantages, including the capture of configuration errors, measurable and repeatable results, the development of assurance, and use as a roadmap for process improvement. Sample tests are provided in the paper, but act merely as a guideline. It would be expected that the Test and Evaluation Master Plan be tailored for any specific organisation.
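One class of configuration error such a test plan aims to capture is a shadowed rule: a rule that can never fire because an earlier rule already matches everything it matches. The sketch below illustrates the idea on a first-match rule set; the `(action, proto, port)` rule schema and the wildcard-subsumption check are simplifying assumptions for illustration, not the paper's test procedures:

```python
def first_match(rules, packet):
    """Return the action of the first rule matching the packet.
    A rule is (action, proto, dst_port) with "*" as a wildcard;
    unmatched traffic falls through to default-deny, the usual
    firewall posture."""
    for action, proto, port in rules:
        if proto in ("*", packet["proto"]) and port in ("*", packet["port"]):
            return action
    return "deny"

def shadowed(rules):
    """Return indices of rules that can never fire because an earlier
    rule subsumes them (wildcard/equality subsumption only, for
    brevity) -- one configuration error a TEMP-style audit should flag."""
    out = []
    for i, (_, proto, port) in enumerate(rules):
        for _, p0, d0 in rules[:i]:
            if p0 in ("*", proto) and d0 in ("*", port):
                out.append(i)
                break
    return out

rules = [("allow", "tcp", 80),
         ("deny", "tcp", 80),   # shadowed: rule 0 matches first
         ("allow", "*", "*")]
```

Running `shadowed(rules)` on the example flags rule 1, since the earlier allow rule for tcp/80 always matches first.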
User-friendly Formal Methods for Security-aware Applications and Protocols
Formal support in the design and implementation of security-aware applications increases the assurance in the final artifact. Formal methods work by setting up a model that unambiguously defines the attacker's capabilities, the protocol parties' behavior, and the expected security properties. Rigorous reasoning can then be carried out on the model about the interaction of the external attacker with the protocol parties, assessing whether the security properties hold or not.
Unfortunately, formal verification requires a high level of expertise to be used properly and, for complex systems, the model analysis requires an amount of resources (memory and time) that is not available with current technologies.
The aim of this thesis is to propose new interfaces and methodologies that facilitate the usage of formal verification techniques applied to security-aware protocols and distributed applications. In particular, this thesis presents: (i) Spi2JavaGUI, a framework for the model-driven development of security protocols that combines (for the first time in the literature) an intuitive user interface, automated formal verification, and code generation; (ii) a new methodology that enables the model-driven development and the automated formal analysis of distributed applications, and requires fewer resources and less formal verification knowledge to complete the verification process than previous approaches; (iii) the formal verification of handover procedures defined by the Long Term Evolution (LTE) standard for mobile communication networks, including the results and all the translation rules from specification documents to formal models, which facilitates the application of formal verification to other parts of the standard in the future.
Process-Aware Defenses for Cyber-Physical Systems
Increasing connectivity is exposing safety-critical systems to cyberattacks that can cause real physical damage and jeopardize human lives. With billions of IoT devices added to the Internet every year, the cybersecurity landscape is drastically shifting from IT systems and networks to systems that comprise both cyber and physical components, commonly referred to as cyber-physical systems (CPS). The difficulty of applying classical IT security solutions in CPS environments has given rise to new security techniques known as process-aware defense mechanisms, which are designed to monitor and protect industrial processes supervised and controlled by cyber elements from sabotage attempts via cyberattacks. In this thesis, we critically examine the emerging CPS-driven cybersecurity landscape and investigate how process-aware defenses can contribute to the sustainability of highly connected cyber-physical systems by making them less susceptible to crippling cyberattacks. We introduce a novel data-driven, model-free methodology for real-time monitoring of physical processes to detect and report suspicious behaviour before damage occurs. We show that our model-free approach is very lightweight, does not require detailed specifications, and is applicable in various CPS environments including IoT systems and networks. We further design, implement, evaluate, and deploy process-aware techniques, study their efficacy and applicability in real-world settings, and address their deployment challenges.
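To give a flavour of what a data-driven, model-free process monitor can look like, the sketch below flags a sensor reading that deviates sharply from a rolling estimate of recent behaviour. The rolling-statistics detector, window size, and threshold are illustrative assumptions; the thesis's actual methodology is not specified in the abstract:

```python
from collections import deque

class ResidualMonitor:
    """Model-free sketch of a process-aware anomaly monitor: flag a
    reading whose deviation from the rolling mean of recent readings
    exceeds `threshold` rolling standard deviations. No plant model
    or detailed specification is required -- only the data stream."""

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` looks anomalous, then record it."""
        alarm = False
        if len(self.window) >= 10:  # wait for a minimal history
            n = len(self.window)
            mean = sum(self.window) / n
            var = sum((x - mean) ** 2 for x in self.window) / n
            std = var ** 0.5 or 1e-9  # guard against a flat signal
            alarm = abs(value - mean) > self.threshold * std
        self.window.append(value)
        return alarm
```

A monitor like this raises an alarm when, for example, a tank-level sensor that has hovered around 10.0 suddenly reports 50.0, which is the kind of suspicious behaviour a sabotage attempt via a compromised controller might produce.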
Towards the Model-Driven Engineering of Secure yet Safe Embedded Systems
We introduce SysML-Sec, a SysML-based Model-Driven Engineering environment
aimed at fostering the collaboration between system designers and security
experts at all methodological stages of the development of an embedded system.
A central issue in the design of an embedded system is the definition of the
hardware/software partitioning of the architecture of the system, which should
take place as early as possible. SysML-Sec aims to extend the relevance of this
analysis through the integration of security requirements and threats. In
particular, we propose an agile methodology whose aim is to assess early on the
impact of the security requirements and of the security mechanisms designed to
satisfy them over the safety of the system. Security concerns are captured in a
component-centric manner through existing SysML diagrams with only minimal
extensions. After the captured requirements have been refined into security and
cryptographic mechanisms, security properties can be formally verified over
this design. To perform the latter, model transformation techniques are
implemented in the SysML-Sec toolchain in order to derive a ProVerif
specification from the SysML models. An automotive firmware flashing procedure
serves as a guiding example throughout our presentation.
Comment: In Proceedings GraMSec 2014, arXiv:1404.163
Incorporating Agile with MDA Case Study: Online Polling System
Nowadays agile software development is used to a great extent, but mainly by small organizations, whereas MDA is suitable for large organizations yet is not standardized. In this paper the pros and cons of Model Driven Architecture (MDA) and Extreme Programming are discussed. As both have limitations and neither can be used in both large-scale and small-scale organizations, a new architecture is proposed. This model tries to adopt the advantages and important values of both software development approaches while overcoming their limitations. In support of the proposed architecture, its implementation for an Online Polling System is discussed and all the phases of software development are explained.
Comment: 14 pages, 1 figure, 1 table
Service Level Agreement-based GDPR Compliance and Security assurance in (multi)Cloud-based systems
Compliance with the new European General Data Protection Regulation (Regulation (EU) 2016/679) and security assurance are currently two major challenges of Cloud-based systems. GDPR compliance implies the definition, enforcement and control of both privacy and security mechanisms, including evidence collection. This paper presents a novel DevOps framework aimed at supporting Cloud consumers in designing, deploying and operating (multi)Cloud systems that include the necessary privacy and security controls for ensuring transparency to end-users, third parties in service provision (if any) and law enforcement authorities. The framework relies on the risk-driven specification at design time of privacy and security level objectives in the system Service Level Agreement (SLA) and on their continuous monitoring and enforcement at runtime.
The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No 644429 and No 780351, the MUSA project and the ENACT project, respectively. We would also like to acknowledge all the members of the MUSA Consortium and the ENACT Consortium for their valuable help.
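The runtime side of such a framework boils down to comparing measured metrics against the objectives declared in the SLA. A minimal sketch is below; the `{metric: (comparator, target)}` schema, the metric names, and the treatment of missing measurements are illustrative assumptions, not the MUSA framework's actual SLA format:

```python
import operator as op

OPS = {">=": op.ge, "<=": op.le}

def check_slo(slos, measurements):
    """Return the metrics whose Service Level Objectives are violated.

    slos: {metric: (comparator, target)}, e.g. {"uptime_pct": (">=", 99.9)}
    measurements: {metric: measured value} collected at runtime.
    A missing measurement counts as a violation, since there is no
    evidence the objective is met.
    """
    violations = []
    for metric, (cmp, target) in slos.items():
        value = measurements.get(metric)
        if value is None or not OPS[cmp](value, target):
            violations.append(metric)
    return violations

slos = {"uptime_pct": (">=", 99.9), "mean_time_to_detect_s": ("<=", 60)}
measured = {"uptime_pct": 99.95, "mean_time_to_detect_s": 120}
```

On the example data, the uptime objective holds but the detection-time objective is violated, which is exactly the kind of finding continuous monitoring would surface for enforcement.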