Deep Learning meets Blockchain for Automated and Secure Access Control
Access control is a critical component of computer security, governing access
to system resources. However, designing policies and roles in traditional
access control can be challenging and difficult to maintain in dynamic and
complex systems, which is particularly problematic for organizations with
numerous resources. Furthermore, traditional methods suffer from issues such as
third-party involvement, inefficiency, and privacy gaps, making transparent and
dynamic access control an ongoing research problem. Moreover, detecting
malicious activities and identifying users who are not behaving appropriately
can present notable difficulties. To address these challenges, we propose
DLACB, a Deep Learning Based Access Control Using Blockchain, as a solution to
decentralized access control. DLACB uses blockchain to provide transparency,
traceability, and reliability in various domains such as medicine, finance, and
government, while leveraging deep learning to avoid reliance on predefined
policies and ultimately automate access control. With the integration of
blockchain and deep learning for access control, DLACB can provide a general
framework applicable to various domains, enabling transparent and reliable
logging of all transactions. As all data is recorded on the blockchain, we have
the capability to identify malicious activities. We store a list of malicious
activities in the storage system and employ a verification algorithm to
cross-reference it with the blockchain. We conduct measurements and comparisons
of the smart contract processing time for the deployed access control system in
contrast to traditional access control methods, determining the time overhead
involved. The processing time of DLACB demonstrates remarkable stability when
exposed to increased request volumes.
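The cross-referencing idea the abstract describes can be sketched minimally: access transactions are appended to a hash-linked log, and a stored list of known malicious activities is checked against that log. All names below (`Block`, `verify_against_chain`) and the record format are illustrative assumptions, not taken from the paper.

```python
import hashlib
import json

class Block:
    """One entry of an append-only hash chain of access transactions."""
    def __init__(self, transaction, prev_hash):
        self.transaction = transaction      # e.g. {"user": ..., "resource": ..., "action": ...}
        self.prev_hash = prev_hash
        payload = json.dumps(transaction, sort_keys=True) + prev_hash
        self.hash = hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, transaction):
    prev_hash = chain[-1].hash if chain else "0" * 64
    chain.append(Block(transaction, prev_hash))

def verify_against_chain(chain, malicious_log):
    """Return the malicious activities that actually appear on-chain."""
    on_chain = [b.transaction for b in chain]
    return [t for t in malicious_log if t in on_chain]

chain = []
append_block(chain, {"user": "alice", "resource": "record42", "action": "read"})
append_block(chain, {"user": "mallory", "resource": "record42", "action": "export"})

malicious_log = [{"user": "mallory", "resource": "record42", "action": "export"}]
flagged = verify_against_chain(chain, malicious_log)
print(len(flagged))  # 1: the malicious export is identified from the immutable log
```

Because each block's hash covers the previous block's hash, tampering with a logged transaction breaks the chain, which is what makes the cross-reference trustworthy.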
Blockchain for IoT Access Control: Recent Trends and Future Research Directions
With the rapid development of wireless sensor networks, smart devices, and
traditional information and communication technologies, there is tremendous
growth in the use of Internet of Things (IoT) applications and services in our
everyday life. IoT systems deal with high volumes of data. This data can be
particularly sensitive, as it may include health, financial, location, and
other highly personal information. Fine-grained security management in IoT
demands effective access control. Several proposals discuss access control for
the IoT, however, a limited focus is given to the emerging blockchain-based
solutions for IoT access control. In this paper, we review the recent trends
and critical needs for blockchain-based solutions for IoT access control. We
identify several important aspects of blockchain for IoT access control,
including decentralised control and secure storage and sharing of information
in a trustless manner, along with their benefits and limitations. Finally, we
note some future research directions on how to converge blockchain and IoT
access control efficiently and effectively.
Database Design and Implementation
Database Design and Implementation is a comprehensive guide that provides a thorough introduction to the principles, concepts, and best practices of database design and implementation. It covers the essential topics required to design, develop, and manage a database system, including data modeling, database normalization, SQL programming, and database administration.
The book is designed for students, database administrators, software developers, and anyone interested in learning how to design and implement a database system. It provides a step-by-step approach to database design and implementation, with clear explanations and practical examples. It also includes exercises and quizzes at the end of each chapter to help reinforce the concepts covered. The book begins by introducing the fundamental concepts of database systems and data modeling. It then discusses the process of database design and normalization, which is essential for creating a well-structured and efficient database system. The book also covers SQL programming, which is used for querying, updating, and managing data in a database. Additionally, it includes a comprehensive discussion on database administration, including security, backup and recovery, and performance tuning.
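The normalization and SQL topics the book covers can be illustrated with a minimal sketch using Python's built-in `sqlite3` module: customer data is split into two related tables instead of one wide, redundant table, linked by a foreign key, and recombined with a JOIN. The schema and names below are illustrative examples, not taken from the book.

```python
import sqlite3

# Normalized schema: customers and orders in separate tables, related by key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        item        TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO customers VALUES (1, 'Ada')")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 'keyboard'), (2, 1, 'monitor')])

# A JOIN recombines the normalized tables without duplicating customer data.
rows = conn.execute("""
    SELECT c.name, o.item
    FROM customers c JOIN orders o ON o.customer_id = c.id
    ORDER BY o.id
""").fetchall()
print(rows)  # [('Ada', 'keyboard'), ('Ada', 'monitor')]
```

Storing the customer's name once and referencing it by key is exactly the redundancy-elimination that normalization formalizes.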
Identity Management and Authorization Infrastructure in Secure Mobile Access to Electronic Health Records
We live in an age of the mobile paradigm of anytime/anywhere access, as the mobile device
is the most ubiquitous device that people now hold. Due to their portability, availability, ease
of use, and support for communication, access, and sharing of information across the various
domains and areas of our daily lives, the acceptance and adoption of these devices is still
growing. However, due to their potential and rising numbers, mobile devices are a growing
target for attackers and, like other technologies, mobile applications are still vulnerable.
Health information systems are composed of tools and software to collect, manage, analyze,
and process medical information (such as electronic health records and personal health records).
Such systems can therefore empower the performance and maintenance of health services,
promoting availability, readability, accessibility, and sharing of vital data about a patient's
overall medical history between geographically fragmented health services. Quick access
to information is of great importance in the health sector, as it accelerates work processes,
resulting in better time utilization; it may also increase the quality of care.
However, health information systems store and manage highly sensitive data, which raises
serious concerns regarding patients' privacy and safety, and may explain the still-increasing
number of reports of malicious incidents within the health domain.
Data related to health information systems are highly sensitive and subject to severe legal
and regulatory restrictions that aim to protect the individual rights and privacy of patients.
Alongside this legislation, security requirements must be analyzed and measures implemented.
Among the security requirements for accessing health data, secure authentication,
identity management, and access control are essential to provide adequate means to
protect data from unauthorized access. However, beyond the use of simple authentication
models, traditional access control models are commonly based on predefined access policies
and roles, and are inflexible. This results in uniform access control decisions across people,
device types, environments, situational conditions, enterprises, locations, and time.
Although existing models can meet some needs of healthcare systems, they still lack
components for dynamicity and privacy protection, which leads to unsatisfactory levels of
security and denies the patient full and easy control over his or her privacy. Within this
master's thesis, after in-depth research and a review of the state of the art, a novel dynamic
access control model was published: the Socio-Technical Risk-Adaptable Access Control modEl
(SoTRAACE), which can model the inherent differences and security requirements identified in
this thesis. To do this, SoTRAACE aggregates attributes from various domains to help perform
a risk assessment at the moment of the request. The assessment of the risk factors identified
in this work is based on a Delphi study: a set of security experts from various domains was
selected to classify the impact of each attribute that SoTRAACE aggregates on the risk assessment.
SoTRAACE was integrated into an architecture with well-founded requirements, based
on leading recommendations and standards (OWASP, NIST 800-53, NIST 800-57) as well as on a
thorough review of the state of the art. The architecture was further subjected to an essential
security analysis and threat modelling. As proof of concept, the proposed access control model
was implemented within the user-centric architecture, with two mobile prototypes covering
several types of access by patients and healthcare professionals, as well as the web servers
that handle access requests, authentication, and identity management.
The proof of concept shows that the model works as expected, with transparency, assuring
privacy and data control for the user without impact on user experience and interaction. The
model can clearly be extended to other industry domains, and new risk levels or attributes
can be added because it is modular. The architecture also works as expected, assuring secure
multi-factor authentication and secure data sharing/access based on SoTRAACE decisions.
The communication channel that SoTRAACE uses was also protected with a digital certificate.
Finally, the architecture was tested on different Android versions, with static and dynamic
analysis and with security tools.
Future work includes the integration of health data standards and evaluating the proposed
system by collecting users' opinions after releasing the system to the real world.
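The request-time risk assessment that SoTRAACE performs, combining contextual attributes with expert-derived weights (from a Delphi study, per the abstract), can be sketched as follows. The attribute names, weight values, and decision threshold below are hypothetical, chosen only to illustrate the mechanism.

```python
# Hypothetical Delphi-derived weights: the relative impact of each risk factor.
DELPHI_WEIGHTS = {
    "device_trust": 0.40,
    "location":     0.35,
    "time_of_day":  0.25,
}

def risk_score(attributes):
    """Weighted sum of per-attribute risk values, each in [0, 1]."""
    return sum(DELPHI_WEIGHTS[name] * value for name, value in attributes.items())

def access_decision(attributes, threshold=0.5):
    """Permit when the aggregated contextual risk stays under the threshold."""
    return "permit" if risk_score(attributes) < threshold else "deny"

# A request from a trusted device, inside the facility, during working hours:
low_risk  = {"device_trust": 0.1, "location": 0.0, "time_of_day": 0.2}
# The same user from an unknown device on a public network at night:
high_risk = {"device_trust": 0.9, "location": 0.8, "time_of_day": 0.9}

print(access_decision(low_risk))   # permit
print(access_decision(high_risk))  # deny
```

Because the weights and attributes are plain data, new risk factors can be added without changing the decision logic, mirroring the modularity the thesis claims for the model.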
Internship Experience at the Bank of Portugal: A Comprehensive Dive into Full Stack Development - Leveraging Modern Technology to Innovate Financial Infrastructure and Enhance User Experience
Internship Report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science.
This report details my full-stack development internship experience at the Bank of Portugal, with a
particular emphasis on the creation of a website intended to increase operational effectiveness in the
DAS Department. My main contributions met a clear need, which was the absence of a reliable
platform that could manage and combine data from many sources. I was actively involved in creating
functionality for the Django applications Integrator and BAII using Django, a high-level Python web
framework. Several problems were addressed by the distinctive features I planned and programmed,
including daily data extraction from several SQL databases, entity error detection, data merging, and
user-friendly interfaces for data manipulation. A feature that enables the attribution of litigation to
certain entities was also developed. The outcomes of the developed features have proven to be useful,
giving the Institutional Intervention Area, the Sanctioning Action Area, the Illicit Financial Activity
Investigation Area, and the Money Laundering Preventive Supervision Area for Capital and Financing
of Terrorism tools to carry out their duties more effectively. This internship experience has
contributed to the advancement and use of full-stack development approaches in the banking
industry, notably in data management and web application development.
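The entity error detection and data merging the report describes can be sketched in plain Python: records for the same entity arrive from several source databases, are merged on a shared key, and conflicting field values are flagged for review. The source names, field names, and record layout below are hypothetical illustrations, not taken from the report.

```python
def merge_entities(sources):
    """Merge per-source {entity_id: record} dicts; collect field conflicts."""
    merged, conflicts = {}, []
    for source_name, records in sources.items():
        for entity_id, record in records.items():
            target = merged.setdefault(entity_id, {})
            for field, value in record.items():
                if field in target and target[field] != value:
                    # Same entity, same field, different values: flag for review.
                    conflicts.append((entity_id, field, target[field], value))
                else:
                    target[field] = value
    return merged, conflicts

# Two hypothetical source databases holding records about the same entity E1.
sources = {
    "registry_db":  {"E1": {"name": "Acme Lda", "nif": "500100200"}},
    "sanctions_db": {"E1": {"name": "ACME Lda", "country": "PT"}},
}
merged, conflicts = merge_entities(sources)
print(merged["E1"]["country"])  # PT  (fields from both sources are combined)
print(len(conflicts))           # 1   (the mismatched "name" values are flagged)
```

Keeping the first-seen value and flagging the disagreement, rather than silently overwriting, is one simple policy; a production merger would route flagged conflicts to a human-review queue.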
Context Aware Middleware Architectures: Survey and Challenges
Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of
developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness,
context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected
middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work.
Contributions to the privacy provisioning for federated identity management platforms
Identity information, personal data, and user profiles are key assets for organizations
and companies, making identity management (IdM) infrastructures a prerequisite
for most of them, since IdM systems allow them to perform their business
transactions by sharing information and customizing services for several purposes in more
efficient and effective ways.
Due to the importance of the identity management paradigm, a lot of work has been done
so far, resulting in a set of standards and specifications. According to them, under the
umbrella of the IdM paradigm, a person's digital identity can be shared, linked, and reused
across different domains, offering users simple session management, among other benefits.
In this way, users' information is widely collected and distributed to offer new value-added
services and to enhance availability. Whereas these new services have a positive impact on
users' lives, they also bring privacy problems.
To manage users' personal data while protecting their privacy, IdM systems are the ideal
place to deploy privacy solutions, since they handle users' attribute exchange.
Nevertheless, current IdM models and specifications do not sufficiently address comprehensive
privacy mechanisms or guidelines that would give users better control over the
use, disclosure, and revocation of their online identities. These aspects are essential, especially
in sensitive environments where incorrect and insecure management of users' data
may lead to attacks, privacy breaches, identity misuse, or fraud.
Nowadays there are several approaches to IdM, each with benefits and shortcomings from
the privacy perspective.
In this thesis, the main goal is to contribute to privacy provisioning for federated
identity management platforms. For this purpose, we propose a generic architecture
that extends current federated IdM systems. We have mainly focused our contributions
on health care environments, given their particularly sensitive nature. The two main
pillars of the proposed architecture are the introduction of a selective privacy-enhanced
user profile management model and flexibility in consent revocation through an
event-based hybrid IdM approach, which replaces time constraints and explicit
revocation with the activation and deactivation of authorization rights according to events. The
combination of both models makes it possible to deal with both online and offline scenarios,
as well as to empower the user's role by letting her bring together identity information from
different sources.
Regarding users' consent revocation, we propose an implicit, event-based consent revocation
mechanism that introduces a new concept, the sleepyhead credential, which
is issued only once and can be used at any time. Moreover, we integrate this concept
into IdM systems supporting a delegation protocol, and we contribute a mathematical
model of how event arrivals reach the IdM system and are routed to the corresponding
entities, as well as its integration with the most widely deployed specification,
the Security Assertion Markup Language (SAML).
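The event-based revocation idea can be sketched minimally: a credential is issued once, and instead of expiry times or explicit revocation, its authorization rights are switched on and off as events reach the IdM system. The class names and event names below are hypothetical illustrations, not the thesis's actual protocol.

```python
class SleepyheadCredential:
    """Issued once; dormant ('asleep') until an event activates its rights."""
    def __init__(self, subject):
        self.subject = subject
        self.active = False

class EventDrivenIdM:
    def __init__(self):
        self.credentials = {}

    def issue(self, subject):
        cred = SleepyheadCredential(subject)
        self.credentials[subject] = cred
        return cred

    def on_event(self, subject, event):
        """Activate or deactivate authorization rights according to the event."""
        cred = self.credentials[subject]
        if event == "treatment_started":      # e.g. the patient is admitted
            cred.active = True
        elif event == "treatment_finished":   # implicit consent revocation
            cred.active = False

    def authorize(self, subject):
        return self.credentials[subject].active

idm = EventDrivenIdM()
idm.issue("dr_smith")
idm.on_event("dr_smith", "treatment_started")
print(idm.authorize("dr_smith"))   # True:  rights active while the event holds
idm.on_event("dr_smith", "treatment_finished")
print(idm.authorize("dr_smith"))   # False: implicitly revoked, no re-issuance needed
```

The credential itself never changes or expires; only its activation state does, which is what lets the scheme replace time constraints and explicit revocation lists.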
Regarding user profile management, we define a privacy-aware user profile management
model that provides efficient selective information disclosure. With this contribution, a
service provider is able to access specific personal information without being
able to inspect any other details, while the user keeps control of her data by deciding
who can access it. The structure we consider for user profile storage is based on
extensions of Merkle trees allowing for hash combining, which minimizes the need for
individual verification of elements along a path. An algorithm for sorting the tree, placing
frequently accessed attributes closer to the root to minimize access time, is also provided.
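The Merkle-tree idea behind the profile store can be sketched as follows: attribute values are hashed into leaves, combined pairwise up to a single root, and a single attribute can then be disclosed with a short inclusion proof instead of revealing the whole profile. The attribute names are illustrative, and the thesis's hash-combining extensions and tree-sorting algorithm are not reproduced here; this is only the standard Merkle construction they build on.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash leaves, then combine pairwise level by level up to one root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Collect the sibling hash at each level from the leaf up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling, am-I-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

# Hypothetical profile attributes; one is disclosed and verified in isolation.
profile = [b"name=Alice", b"blood_type=A+", b"insurance=XY-123", b"allergies=none"]
root = merkle_root(profile)
proof = inclusion_proof(profile, 1)
print(verify(profile[1], proof, root))   # True: one attribute disclosed, rest hidden
```

The verifier only ever sees the disclosed leaf, a logarithmic number of sibling hashes, and the root, which is what makes the selective disclosure efficient.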
Formal validation of the above mentioned ideas has been carried out through simulations
and the development of prototypes. In addition, dissemination activities were performed in
projects, journals, and conferences.
Official Doctoral Programme in Telematics Engineering. Committee: President María Celeste Campo Vázquez; Secretary María Francisca Hinarejos Campos; Member Óscar Esparza Martí
A Two-Level Information Modelling Translation Methodology and Framework to Achieve Semantic Interoperability in Constrained GeoObservational Sensor Systems
As geographical observational data capture, storage and sharing technologies such as in situ remote monitoring systems and spatial data infrastructures evolve, the vision of a Digital Earth, first articulated by Al Gore in 1998, is getting ever closer. However, there are still many challenges and open research questions. For example, data quality, provenance and heterogeneity remain an issue due to the complexity of geo-spatial data and information representation.
Observational data are often inadequately semantically enriched by geo-observational information systems or spatial data infrastructures and so they often do not fully capture the true meaning of the associated datasets. Furthermore, data models underpinning these information systems are typically too rigid in their data representation to allow for the ever-changing and evolving nature of geo-spatial domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in an interoperable and computable way.
The health domain experiences similar challenges with representing complex and evolving domain information concepts. Within any complex domain (such as Earth system science or health) two categories or levels of domain concepts exist. Those concepts that remain stable over a long period of time, and those concepts that are prone to change, as the domain knowledge evolves, and new discoveries are made. Health informaticians have developed a sophisticated two-level modelling systems design approach for electronic health documentation over many years, and with the use of archetypes, have shown how data, information, and knowledge interoperability among heterogenous systems can be achieved.
This research investigates whether two-level modelling can be translated from the health domain to the geo-spatial domain and applied to observing scenarios to achieve semantic interoperability within and between spatial data infrastructures, beyond what is possible with current state-of-the-art approaches.
A detailed review of state-of-the-art SDIs, geo-spatial standards and the two-level modelling methodology was performed. A cross-domain translation methodology was developed, and a proof-of-concept geo-spatial two-level modelling framework was defined and implemented. The Open Geospatial Consortium’s (OGC) Observations & Measurements (O&M) standard was re-profiled to aid investigation of the two-level information modelling approach. An evaluation of the method was undertaken using two specific use-case scenarios. Information modelling was performed using the two-level modelling method to show how existing historical ocean observing datasets can be expressed semantically and harmonized using two-level modelling. Also, the flexibility of the approach was investigated by applying the method to an air quality monitoring scenario using a technologically constrained monitoring sensor system.
This work has demonstrated that two-level modelling can be translated to the geospatial domain and then further developed to be used within a constrained technological sensor system; using traditional wireless sensor networks, semantic web technologies and Internet of Things based technologies. Domain specific evaluation results show that two-level modelling presents a viable approach to achieve semantic interoperability between constrained geo-observational sensor systems and spatial data infrastructures for ocean observing and city based air quality observing scenarios. This has been demonstrated through the re-purposing of selected, existing geospatial data models and standards. However, it was found that re-using existing standards requires careful ontological analysis per domain concept and so caution is recommended in assuming the wider applicability of the approach.
While the benefits of adopting a two-level information modelling approach to geospatial information modelling are potentially great, it was found that translation to a new domain is complex. The complexity of the approach was found to be a barrier to adoption, especially in commercial based projects where standards implementation is low on implementation road maps and the perceived benefits of standards adherence are low. Arising from this work, a novel set of base software components, methods and fundamental geo-archetypes have been developed. However, during this work it was not possible to form the required rich community of supporters to fully validate geo-archetypes. Therefore, the findings of this work are not exhaustive, and the archetype models produced are only indicative. The findings of this work can be used as the basis to encourage further investigation and uptake of two-level modelling within the Earth system science and geo-spatial domain. Ultimately, the outcomes of this work are to recommend further development and evaluation of the approach, building on the positive results thus far, and the base software artefacts developed to support the approach.