A Study on Data Protection in Cloud Environment
Data protection in the cloud environment pertains to safeguarding sensitive or important data stored, processed, or transmitted in cloud-based systems. It entails ensuring data confidentiality, integrity, and availability, as well as adhering to applicable data protection requirements. In a nutshell, cloud data protection seeks to guard data against unauthorized access, deletion, or breaches while keeping it accurate and accessible to authorized users. This is accomplished in the cloud environment using various security measures, encryption approaches, access controls, disaster recovery and backup processes, and constant monitoring and threat detection. The research significance of data protection in the cloud environment can be summarized as follows.
Security and privacy: research in data protection in the cloud helps address the security and privacy concerns associated with storing and processing sensitive data in cloud-based systems. It explores and develops advanced security mechanisms, encryption techniques, and access controls to protect data from unauthorized access, data breaches, and privacy violations.
Trust and confidence: research in data protection contributes to building trust and confidence in cloud computing. By developing robust security solutions and demonstrating their effectiveness, research helps alleviate concerns about data security and privacy, fostering greater adoption of cloud services by organizations and individuals.
Compliance and regulations: cloud computing often involves compliance with data protection regulations and industry standards. Research in this area explores the legal and regulatory aspects of data protection in the cloud and helps organizations understand and comply with relevant requirements.
Data resilience and recovery: research in data protection focuses on ensuring data resilience and developing efficient data recovery mechanisms in the cloud.
It explores backup and disaster recovery strategies, data replication techniques, and data loss prevention methods to minimize downtime, recover data promptly, and maintain business continuity in the event of system failures or disasters. By addressing these research areas, studies on data protection in the cloud environment contribute to enhancing security, privacy, compliance, and resilience in cloud computing. They provide valuable insights, practical solutions, and guidelines for organizations and service providers to protect data effectively and maintain the trust of users in cloud-based services. The weighted product method is commonly used to choose the best data protection scheme in a cloud environment. Five alternatives, CCSS1 through CCSS5, are evaluated against the criteria of data visibility, data integrity, compliance maintenance, data security, and data storage. The results show that CCSS2 obtained the highest rank and CCSS5 the lowest.
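The ranking step above can be sketched with the weighted product method (WPM), in which each alternative's score is the product of its criterion values raised to the criterion weights. The criterion values and equal weights below are hypothetical illustrations, not the values used in the study:

```python
# Weighted Product Method (WPM) sketch for ranking cloud-storage
# alternatives. Scores and weights are invented for illustration.

def wpm_rank(alternatives, weights):
    """Rank alternatives by the weighted product S_i = prod(x_ij ** w_j)."""
    scores = {}
    for name, values in alternatives.items():
        s = 1.0
        for x, w in zip(values, weights):
            s *= x ** w  # benefit criteria: higher is better
        scores[name] = s
    return sorted(scores, key=scores.get, reverse=True)

# Criteria: data visibility, data integrity, compliance, security, storage
weights = [0.2, 0.2, 0.2, 0.2, 0.2]
alternatives = {
    "CCSS1": [70, 80, 75, 65, 60],
    "CCSS2": [90, 85, 88, 92, 80],   # strongest profile in this toy data
    "CCSS3": [75, 70, 72, 78, 74],
    "CCSS4": [68, 72, 70, 66, 71],
    "CCSS5": [55, 60, 58, 52, 50],   # weakest profile in this toy data
}

ranking = wpm_rank(alternatives, weights)
print(ranking)
```

With all benefit criteria, rescaling one criterion multiplies every alternative's score by the same constant, so the WPM ranking is unchanged, which is why no prior normalization is needed in this sketch.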
Misconfiguration in Firewalls and Network Access Controls: Literature Review
Firewalls and network access controls play important roles in security control and protection. Those firewalls may create a false sense or state of protection if they are improperly configured. One of the major configuration problems in firewalls is misconfiguration of the access control rules added to the firewall to control network traffic. In this paper, we evaluated recent research trends and open challenges related to firewalls and access controls in general and misconfiguration problems in particular. With the recent advances in next-generation (NG) firewalls, firewall rules can be auto-generated based on networks and threats. Nonetheless, due to the large number of rules in any medium to large network, rule misconfiguration may occur for several reasons and will impact the performance of the firewall and the overall network and protection efficiency.
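One concrete form of the rule misconfiguration discussed above is a shadowed rule: a rule that can never match because an earlier rule already matches a superset of its traffic with a conflicting action. A minimal detection sketch, using a simplified rule format invented for illustration (source/destination prefixes and an action only, no ports or protocols):

```python
# Detect "shadowed" firewall rules: rules fully covered by an earlier
# rule with a different action. Rule format is a simplification.
from collections import namedtuple
import ipaddress

Rule = namedtuple("Rule", "src dst action")

def shadowed(rules):
    """Return indices of rules shadowed by an earlier, conflicting rule."""
    out = []
    for i, r in enumerate(rules):
        for earlier in rules[:i]:
            if (ipaddress.ip_network(r.src).subnet_of(ipaddress.ip_network(earlier.src))
                    and ipaddress.ip_network(r.dst).subnet_of(ipaddress.ip_network(earlier.dst))
                    and r.action != earlier.action):
                out.append(i)
                break
    return out

rules = [
    Rule("10.0.0.0/8", "0.0.0.0/0", "deny"),
    Rule("10.1.0.0/16", "0.0.0.0/0", "allow"),   # covered by rule 0: can never fire
    Rule("192.168.0.0/16", "0.0.0.0/0", "allow"),
]
print(shadowed(rules))
```

Real rule sets also involve ports, protocols, and partial overlaps (correlation and generalization anomalies), which is why rule analysis in medium to large networks is hard to do by hand.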
Informacijos saugos reikalavimų harmonizavimo, analizės ir įvertinimo automatizavimas (Automation of Information Security Requirements Harmonisation, Analysis and Evaluation)
The growing use of Information Technology (IT) in the daily operations of enterprises requires an ever-increasing level of protection of an organization's assets and information from unauthorised access, data leakage or any other type of information security breach. Because of that, it becomes vital to ensure the necessary level of protection. One of the best ways to achieve this goal is to implement the controls defined in Information security documents. The problems faced by different organizations relate to the fact that organizations are often required to be aligned with multiple Information security documents and their requirements. Currently, the protection of an organization's assets and information is based on an Information security specialist's knowledge, skills and experience. The lack of automated tools for the harmonization, analysis and visualization of multiple Information security documents and their requirements leads to a situation where Information security is implemented by organizations in ineffective ways, causing duplication of controls or increased cost of security implementation. An automated approach to the analysis, mapping and visualization of Information security documents would contribute to solving this issue. The dissertation consists of an introduction, three main chapters and general conclusions. The first chapter introduces existing Information security regulatory documents, current harmonization techniques, information security implementation cost evaluation methods and ways to analyse Information security requirements by applying graph theory optimisation algorithms (vertex cover and graph isomorphism). The second chapter proposes ways to evaluate information security implementation and costs through a controls-based approach. The effectiveness of this method could be improved by implementing automated initial data gathering from business process diagrams.
In the third chapter, adaptive mapping on the basis of a Security ontology is introduced for the harmonization of different security documents; such an approach also makes it possible to apply visualization techniques to the presentation of harmonization results. Graph optimization algorithms (the vertex cover algorithm and the graph isomorphism algorithm) were proposed for Minimum Security Baseline identification and for verification of the achieved results against controls implemented in small and medium-sized enterprises. It was concluded that the proposed methods provide sufficient data for the adjustment and verification of security controls applicable under multiple Information security documents.
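The vertex cover step can be illustrated with the standard greedy 2-approximation. In this toy model, assumed here purely for illustration, vertices are security controls, an edge joins two controls that address a shared requirement, and a cover is a set of controls touching every such requirement; the control identifiers are hypothetical:

```python
# Greedy 2-approximation for vertex cover: repeatedly take both
# endpoints of an uncovered edge. Graph and labels are hypothetical.

def vertex_cover(edges):
    """Return a vertex set touching every edge (at most 2x optimal)."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Each edge: two controls that satisfy a common requirement.
edges = [("A.5", "A.9"), ("A.9", "A.12"), ("A.12", "A.18")]
cover = vertex_cover(edges)
assert all(u in cover or v in cover for u, v in edges)
print(sorted(cover))
```

The appeal of this algorithm in a baseline-selection setting is its simplicity and its provable factor-2 bound, since exact minimum vertex cover is NP-hard.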
A FIREWALL MODEL OF FILE SYSTEM SECURITY
File system security is fundamental to the security of UNIX and Linux systems since in these systems almost everything is in the form of a file. To protect the system files and other sensitive user files from unauthorized accesses, certain security schemes are chosen and used by different organizations in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies which focus on one or more of the security principles: confidentiality, integrity and availability. The security policy is not only about “who” can access an object, but also about “how” a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected.
The current protection schemes in UNIX/Linux systems focus on the access control. Besides the basic access control scheme of the system itself, which includes permission bits, setuid and seteuid mechanism and the root, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly. The integrity of the data is protected indirectly by only allowing trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, and the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection.
We propose a new security model: the file system firewall. It adapts the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can support access control decisions based on any system-generated attributes of the access requests, e.g., time of day. The access control decisions do not rest on one entity, such as the account in traditional discretionary access control or the domain name in DTE. In the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable with predicates on the attributes of the subject, the object and the system. The file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall system is restricted to a specified part of the system, all other resources are unaffected. This enables a relatively smooth adoption. This fact, and the fact that it is a familiar model to system administrators, will facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed this: beginner users found it easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
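The situation-based decision described above can be sketched as an ordered list of (predicate, action) pairs, where each predicate ranges over subject, object, and system attributes and the first match wins. The attribute names and rules below are invented for illustration, not taken from the paper's prototype:

```python
# Sketch of a "situation"-based access decision: each rule pairs a
# predicate over (subject, object, system) attributes with an action.
# Attribute names and the example rules are hypothetical.

def decide(rules, subject, obj, system, default="deny"):
    """Return the action of the first rule whose predicate matches."""
    for predicate, action in rules:
        if predicate(subject, obj, system):
            return action
    return default

rules = [
    # Situation: /etc/shadow accessed outside business hours -> deny.
    (lambda s, o, sys: o["path"] == "/etc/shadow"
         and not (9 <= sys["hour"] < 17), "deny"),
    # Otherwise root is allowed.
    (lambda s, o, sys: s["uid"] == 0, "allow"),
]

print(decide(rules, {"uid": 0}, {"path": "/etc/shadow"}, {"hour": 3}))   # deny
print(decide(rules, {"uid": 0}, {"path": "/etc/shadow"}, {"hour": 10}))  # allow
```

Note that the first rule constrains even uid 0, which is exactly the kind of multi-entity situation (subject plus object plus system clock) that single-entity models like plain DAC cannot express.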
Securing identity information with image watermarks
In this paper, we describe the requirements for embedding watermarks in images used for identity verification and demonstrate a proof of concept in security sciences. The watermarking application is designed for verifying the rightful ownership of a driving license or similar identity object. The tool we built and tested embeds and extracts watermarks that contain verification information of the rightful owner. We used the fingerprint of the rightful owner as the watermark. Such information protection mechanisms add an extra layer of security to the information system and improve verification of identification attributes by providing strong security. The issues of usability and cost are also discussed in the context of the social acceptability of access controls.
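The embed/extract cycle can be illustrated with the simplest possible scheme: hiding a bit string (standing in for the owner's fingerprint template) in the least significant bits of pixel values. This is a toy sketch of the principle only; a real ID-document watermark would use a robust, keyed scheme resistant to compression and tampering:

```python
# Toy LSB watermark: write watermark bits into pixel least significant
# bits, then read them back. Pixel values and bits are illustrative.

def embed(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixels with watermark bits."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

pixels = [120, 33, 250, 77, 18, 201]      # stand-in for image data
watermark = [1, 0, 1, 1]                  # stand-in for fingerprint bits
marked = embed(pixels, watermark)
assert extract(marked, len(watermark)) == watermark
```

Because only the lowest bit of each pixel changes, the visual distortion is at most one intensity level per pixel, which is why LSB embedding is the usual classroom starting point before robust transform-domain methods.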
Integrity, security aspects of data management
One of the problems of data storage on computers is protecting the information from loss, corruption, or unauthorised access. The security and integrity problems of information storage on computers in the Iraqi Air Force have been examined. In particular, the security and integrity requirements for the proposed database for stock control and aircraft maintenance have been analysed.
In this thesis the design of a security and integrity system for a relational database is described. Controlling access to the database has been examined in more detail. A model system has been constructed in which the security controls check the authority of the user to deal with the different kinds of enquiries and either permit the user to access the system or prevent the user from doing so.
New security procedures have been designed to provide:
a. More sophisticated authentication procedures.
b. Access protection on individual items and fields within the database.
c. Monitoring of access to the database to detect, at an early stage, actual and potential security violations.
Models of some features of the design have been developed on a PDP 11/40 computer and these have been used to investigate the effectiveness of the proposed model security system. The features of the models are described and the database design is discussed, as are the additional precautions that would need to be taken to fully meet the requirements of the Iraqi Air Force in this area.
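The item- and field-level access protection described in point (b) can be sketched as an authorization table consulted before each query: the table maps a (user, table, field) triple to the operations permitted on it. All names and grants below are hypothetical, not taken from the thesis:

```python
# Field-level access check: a grants table maps (user, table, field)
# to permitted operations; every request is checked before execution.
# Users, tables, fields, and grants are invented for illustration.

grants = {
    ("clerk", "stock", "quantity"):   {"read"},
    ("officer", "stock", "quantity"): {"read", "write"},
}

def authorized(user, table, field, op):
    """Allow an operation only if explicitly granted (default deny)."""
    return op in grants.get((user, table, field), set())

assert authorized("officer", "stock", "quantity", "write")
assert not authorized("clerk", "stock", "quantity", "write")   # read-only
assert not authorized("clerk", "stock", "price", "read")       # no grant at all
```

The default-deny lookup is the important design choice: any (user, field) pair missing from the table is rejected, so forgetting a grant fails closed rather than open.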
Success factors for data protection in services and support roles: combining traditional interviews with Delphi method
Ruivo, P., Santos, V., & Oliveira, T. (2019). Success factors for data protection in services and support roles: combining traditional interviews with Delphi method. In Censorship, Surveillance, and Privacy: Concepts, Methodologies, Tools, and Applications (Vol. 2, pp. 814-829). IGI Global. DOI: 10.4018/978-1-5225-7113-1.ch042
The transformation of today's information and communications technology (ICT) firms requires services and support organizations to think differently about customer data protection. Data protection represents one of the security and privacy areas considered to be the next "blue ocean" in leveraging the creation of business opportunities. Based on contemporary literature, the authors conducted a two-phase qualitative methodology, expert interviews followed by the Delphi method, to identify and rank 12 factors that service and support professionals should follow in their daily tasks to ensure customer data protection: 1) Data classification, 2) Encryption, 3) Password protection, 4) Approved tools, 5) Access controls, 6) How many access data, 7) Testing data, 8) Geographic rules, 9) Data retention, 10) Data minimization, 11) Escalating issues, and 12) Readiness and training. This paper contributes to the growing body of knowledge in the data protection field. The authors provide directions for future work for practitioners and researchers.
Applying data protection part of ISO 27001 to patient and user data produced by medical devices – Case: disease specific quality registers
Data protection may be considered a subset of information security, consisting of the rules that define who may have access to what data and under what conditions. Rules concerning the handling of personally identifiable information have also become a major topic of discussion with regulation such as the GDPR by the European Union. To improve data protection of personally identifiable information, initiatives such as MyData and IHAN have been developed. In the field of information security, standards such as ISO 27001 exist to improve and unify information security in organizations.
This thesis studies the requirements that the data protection initiatives MyData and IHAN impose on organizations processing personally identifiable information, as well as the requirements imposed by the ISO 27001 standard. The requirements of MyData and IHAN are compared to the ISO 27001 standard, along with a case study that looks at the requirements of both in the context of patient data stored and processed in disease-specific quality registers. A gap analysis of the ISO 27001 security controls is performed to evaluate the current situation against the standard's requirements. Suggestions for measures to meet the different potential requirements of MyData and IHAN are also given, along with a discussion of their relevance to disease-specific quality registers. Considerations of the legal aspects of the protection of patient data related to these are, however, omitted.
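The gap-analysis step can be sketched as a set comparison between the controls declared in scope and those actually implemented. The identifiers below follow the Annex A numbering style but are chosen arbitrarily, and the implementation statuses are invented for illustration:

```python
# ISO 27001-style gap analysis sketch: in-scope controls minus
# implemented controls. Identifiers and statuses are illustrative.

in_scope = {"A.5.1", "A.8.2", "A.9.1", "A.12.3", "A.18.1"}
implemented = {"A.5.1", "A.9.1", "A.12.3"}

gaps = sorted(in_scope - implemented)           # controls still to address
coverage = len(implemented & in_scope) / len(in_scope)

print(gaps)
print(f"coverage: {coverage:.0%}")
```

In practice each control would also carry a maturity level rather than a binary status, but the core of a gap analysis remains this comparison of the declared scope against the implemented state.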