7 research outputs found
Big Ideas paper: Policy-driven middleware for a legally-compliant Internet of Things.
Internet of Things (IoT) applications, systems and services are subject to law. We argue that for the IoT to develop lawfully, there must be technical mechanisms that allow the enforcement of specified policy, such that systems align with legal realities. The audit of policy enforcement must assist the apportionment of liability, demonstrate compliance with regulation, and indicate whether policy correctly captures legal responsibilities. As both systems and obligations evolve dynamically, this cycle must be continuously maintained. This poses a huge challenge given the global scale of the IoT vision. The IoT entails dynamically creating new services through managed and flexible data exchange.
Data management is complex in this dynamic environment,
given the need to both control and share information, often
across federated domains of administration.
We see middleware playing a key role in managing the IoT. Our vision is for a middleware-enforced, unified policy model that applies end-to-end, throughout the IoT. This is because policy cannot be bound to things, applications, or administrative domains, since functionality is the result of composition, with dynamically formed chains of data flows. We have investigated the use of Information Flow Control (IFC) to manage and audit data flows in cloud computing, a domain where trust can be well-founded, regulations are more mature, and associated responsibilities are clearer. We feel that IFC has great potential in the broader IoT context. However, the sheer scale and the dynamic, federated nature of the IoT pose a number of significant research challenges.
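The essence of IFC can be illustrated with a small label-based check: data is tagged with secrecy labels, and a flow is permitted only if the destination carries at least all of the source's labels. The sketch below is our own minimal illustration of this general idea; the names and label model are assumptions, not the middleware mechanism described above.

```python
# Minimal Information Flow Control (IFC) sketch: a flow from source to
# destination is allowed only if the destination's label set contains
# every label on the source (labels cannot be silently dropped).

def flow_allowed(src_labels: set, dst_labels: set) -> bool:
    """Permit a data flow only if destination is at least as restricted."""
    return src_labels <= dst_labels

# Hypothetical IoT example: a medical sensor's data may flow to an
# audited hospital application, but not to an analytics ad network.
sensor = {"medical", "patient-42"}
hospital_app = {"medical", "patient-42", "audited"}
ad_network = {"analytics"}

assert flow_allowed(sensor, hospital_app)    # labels preserved: allowed
assert not flow_allowed(sensor, ad_network)  # labels dropped: denied
```

Auditing then amounts to logging each `flow_allowed` decision, which is what makes the policy-enforcement cycle described above checkable after the fact.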
Assessment of VLSI resources requirement for a sliced trusted platform module
Recent increases in cybercrime suggest questions such as: How can one trust a secure system? How can one protect private information from being stolen and maintain security? Trust in any system requires a foundation or root of trust. A root of trust is necessary to establish confidence that a machine is clean and that a software execution environment is secure. A root of trust can be implemented using the Trusted Platform Module (TPM), which is promising for enhancing security of general-purpose computing systems.
In cloud computing, one of the proposed approaches is to use homomorphic encryption to create k program slices to be executed on k different cloud nodes. The TPM at the cloud node can then also be distributed or sliced along the lines presented in this thesis.
In this work, we propose to increase TPM efficiency by distributing the TPM into multiple shares using Residue Number Systems (RNS). We then evaluate the silicon area and execution time required for a sliced-TPM implementation and compare it to a single TPM. We characterize the execution time required by each TPM command using measurements obtained with the ModelSim simulator.
Finally, we show that the proposed scheme improves TPM efficiency and that the execution time of TPM commands was noticeably improved. In the case of 4 shares, the execution time of the TPM commands involving RSA operations in each slice decreased by 93%, and the area of each slice decreased by 2.93%, while the total area increased by 74%. In the case of 10 shares, the execution time of the TPM commands involving RSA operations in each slice decreased by 99%, and the area of each slice decreased by 3.3%, while the total area increased by 85%.
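The core RNS idea behind the slicing can be shown in a few lines: an operand is split into residues modulo pairwise-coprime moduli, each slice computes on its small residue independently, and the Chinese Remainder Theorem (CRT) recombines the result. This is an illustrative sketch only; the toy moduli and function names are our assumptions, and a real sliced TPM would use moduli sized for RSA operands.

```python
from math import prod

# Hypothetical small pairwise-coprime moduli (one per slice).
MODULI = [13, 17, 19, 23]

def to_rns(x, moduli=MODULI):
    """Split x into independent residue shares, one per slice."""
    return [x % m for m in moduli]

def from_rns(shares, moduli=MODULI):
    """Recombine residue shares with the Chinese Remainder Theorem."""
    M = prod(moduli)
    total = 0
    for r, m in zip(shares, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # pow(.., -1, m): inverse mod m
    return total % M

# Each slice operates on its share independently and in parallel,
# e.g. a modular addition performed share-by-share:
a, b = 1234, 5678
sum_shares = [(x + y) % m for x, y, m in zip(to_rns(a), to_rns(b), MODULI)]
assert from_rns(sum_shares) == (a + b) % prod(MODULI)
```

Because each slice works only on a residue far smaller than the full operand, the per-slice arithmetic is cheap, which is consistent with the per-slice execution-time reductions reported above.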
Enhancing trustability in MMOGs environments
Massively Multiplayer Online Games (MMOGs; e.g., World of Warcraft), virtual worlds (VWs; e.g., Second Life), and social networks (e.g., Facebook) strongly demand more autonomic security and trust mechanisms, similar to those humans use in real life. As is well known, this is a difficult matter, because trust in humans and organizations depends on the perception and experience of each individual, which is difficult to quantify or measure. In fact, these societal environments lack trust mechanisms similar to those involved in human-to-human interactions. Besides, interactions mediated by computing devices are constantly evolving, requiring trust mechanisms that keep pace with these developments and assess risk situations.
In VW/MMOGs, it is widely recognized that users develop trust relationships from their
in-world interactions with others. However, these trust relationships end up not being represented in the data structures (or databases) of such virtual worlds, though they sometimes appear associated with reputation and recommendation systems. In addition,
as far as we know, the user is not provided with a personal trust tool to sustain his/her
decision making while he/she interacts with other users in the virtual or game world.
In order to solve this problem, as well as those mentioned above, we propose herein a
formal representation of these personal trust relationships, which are based on avatar-avatar interactions. The leading idea is to provide each avatar-impersonated player
with a personal trust tool that follows a distributed trust model, i.e., the trust data is
distributed over the societal network of a given VW/MMOG.
Representing, manipulating, and inferring trust from the user/player point of view certainly
is a grand challenge. When someone meets an unknown individual, the question
is “Can I trust him/her or not?”. It is clear that this requires the user to have access to
a representation of trust about others, but, unless we are using an open source VW/MMOG,
it is difficult, not to say unfeasible, to get access to such data. Even in an open-source system, a number of users may refuse to share information about their friends, acquaintances, or others. Putting together their own data and data gathered from others, the avatar-impersonated player should be able to arrive at a trust assessment of the current trustee. For the trust assessment method used in this thesis, we use
subjective logic operators and graph search algorithms to undertake such trust inference
about the trustee. The proposed trust inference system has been validated using
a number of OpenSimulator (opensimulator.org) scenarios, which showed an increase in accuracy when evaluating the trustability of avatars.
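Trust inference along a path in the social graph typically combines two standard subjective-logic operators: discounting, to propagate an opinion through an intermediary, and cumulative fusion, to merge opinions arriving over independent paths. The sketch below uses the textbook definitions of these operators; the class and function names, and the example values, are ours and do not reproduce the thesis's implementation.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A subjective-logic opinion; the three components sum to 1."""
    belief: float
    disbelief: float
    uncertainty: float

def discount(ab: "Opinion", bx: "Opinion") -> "Opinion":
    """Transitivity: A's trust in B discounts B's opinion about X."""
    return Opinion(
        belief=ab.belief * bx.belief,
        disbelief=ab.belief * bx.disbelief,
        uncertainty=ab.disbelief + ab.uncertainty + ab.belief * bx.uncertainty,
    )

def fuse(o1: "Opinion", o2: "Opinion") -> "Opinion":
    """Cumulative fusion of two independent opinions about one target."""
    k = o1.uncertainty + o2.uncertainty - o1.uncertainty * o2.uncertainty
    return Opinion(
        belief=(o1.belief * o2.uncertainty + o2.belief * o1.uncertainty) / k,
        disbelief=(o1.disbelief * o2.uncertainty
                   + o2.disbelief * o1.uncertainty) / k,
        uncertainty=(o1.uncertainty * o2.uncertainty) / k,
    )

# A trusts B; B holds an opinion about stranger X.
a_b = Opinion(0.8, 0.1, 0.1)
b_x = Opinion(0.6, 0.2, 0.2)
print(discount(a_b, b_x))  # A's derived opinion about X
```

A graph search then enumerates avatar-to-avatar trust paths to the trustee, discounting along each path and fusing the per-path results into one opinion.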
Summing up, our proposal aims to introduce a trust theory for virtual worlds, together with its trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g., graph search methods), on an individual basis, rather than relying on the usual centralized reputation systems. In particular, and unlike other trust discovery methods, our methods run at interactive rates.
Mobile user authentication system (MUAS) for e-commerce applications.
The rapid growth of e-commerce has many associated security concerns. Thus, several studies to develop secure online authentication systems have emerged. Most studies begin with the premise that the intermediate network is the primary point of compromise. In this thesis, we assume that the point of compromise lies within the end-host or browser; this security threat is called the man-in-the-browser (MITB) attack. MITB attacks can bypass the security measures of public key infrastructures (PKI), as well as the encryption mechanisms of the secure sockets layer and transport layer security (SSL/TLS) protocols. This thesis focuses on developing a system that can circumvent MITB attacks using a two-phase secure user authentication system, with phases that include challenge and response generation. The proposed system represents the first step in conducting an online business transaction.
The proposed authentication system design contributes to protecting the confidentiality of the initiating client by requesting minimal and non-confidential information to bypass the MITB attack, and by transitioning the authentication mechanism from the infected browser to a mobile-based system via a challenge/response mechanism. The challenge and response generation process depends on validating the submitted information and ensuring the legitimacy of the mobile phone. Both phases within the MUAS context mitigate denial-of-service (DoS) attacks via registration information, which includes the client's mobile number and the International Mobile Equipment Identity (IMEI) of the client's mobile phone.
This novel authentication scheme circumvents the MITB attack by utilising the legitimate client's personal mobile phone as a detached platform to generate the challenge response and conduct business transactions.
Even if an MITB attacker takes over the challenge generation phase without satisfying the required security properties, the response generation phase still produces a secure response from the registered legitimate mobile phone by employing security attributes from both phases. Thus, the detached challenge and response generation phases are logically linked.
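The general shape of such a detached challenge/response exchange can be sketched as follows: the server derives a per-device key from the registration data, sends a fresh nonce out-of-band to the registered phone, and the phone, rather than the possibly infected browser, computes the response. This is only an illustrative sketch under our own assumptions; the key derivation, the HMAC construction, and all names are ours, not the protocol specified in the thesis.

```python
import hashlib
import hmac
import secrets

def register(mobile_number: str, imei: str) -> bytes:
    """Server derives a per-device key from the registration data
    (mobile number and IMEI), as stored at enrollment time."""
    return hashlib.sha256(f"{mobile_number}:{imei}".encode()).digest()

def make_challenge() -> bytes:
    """Server issues a fresh random nonce, delivered out-of-band
    to the registered phone rather than to the browser."""
    return secrets.token_bytes(16)

def respond(device_key: bytes, challenge: bytes) -> bytes:
    """The phone computes the response on a platform detached
    from the (possibly MITB-infected) browser."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Server checks the response in constant time."""
    return hmac.compare_digest(respond(device_key, challenge), response)

key = register("+15551234567", "490154203237518")  # hypothetical values
challenge = make_challenge()
assert verify(key, challenge, respond(key, challenge))
```

Because the response is bound to registration data that only the legitimate phone and the server share, a browser-resident attacker who intercepts the challenge still cannot produce a valid response.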