
    Decision support for choice of security solution: the Aspect-Oriented Risk Driven Development (AORDD) framework

    In security assessment and management there is no single correct solution to the identified security problems or challenges; there are only choices and trade-offs. The main reason for this is that modern information systems, and security-critical information systems in particular, must perform at the contracted or expected security level, make effective use of available resources and meet end-users' expectations. Balancing these needs while also fulfilling development, project and financial perspectives, such as budget and time-to-market (TTM) constraints, means that decision makers have to evaluate alternative security solutions.

    This work describes parts of an approach that supports decision makers in choosing one or a set of security solutions among alternatives. The approach, called the Aspect-Oriented Risk Driven Development (AORDD) framework, combines Aspect-Oriented Modeling (AOM) and Risk Driven Development (RDD) techniques and consists of seven components: (1) an iterative AORDD process; (2) a security solution aspect repository; (3) an estimation repository to store experience from estimation of the security risk and security solution variables involved in security solution decisions; (4) RDD annotation rules for security risk and security solution variable estimation; (5) the AORDD security solution trade-off analysis and trade-off tool BBN topology; (6) a rule set for transferring RDD information from the annotated UML diagrams into the trade-off tool BBN topology; and (7) a trust-based information aggregation schema to aggregate disparate information in the trade-off tool BBN topology. This work focuses on components 5 and 7, which are the two core components of the AORDD framework.
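
    The abstract names a BBN (Bayesian Belief Network) topology as the core of the trade-off analysis. As a rough illustration of the idea only, the sketch below marginalises a single "solution fitness" node over two discretised parents, risk reduction and cost; the node structure, state names and probability values are assumptions made for this example and are not taken from the actual AORDD BBN topology.

        # Illustrative sketch only: a two-parent BBN-style node that scores a
        # security solution's "fitness" from discretised risk-reduction and
        # cost evidence. States and CPT values are assumed for the example,
        # not the actual AORDD BBN topology.

        # Prior beliefs over the parent nodes (e.g. from an estimation repository).
        p_risk_reduction = {"high": 0.6, "low": 0.4}
        p_cost = {"low": 0.7, "high": 0.3}

        # Conditional probability table: P(fitness = acceptable | risk_reduction, cost).
        cpt_fitness = {
            ("high", "low"): 0.95,
            ("high", "high"): 0.60,
            ("low", "low"): 0.40,
            ("low", "high"): 0.05,
        }

        def p_fitness_acceptable():
            """Marginalise out the parents to get P(fitness = acceptable)."""
            total = 0.0
            for rr, p_rr in p_risk_reduction.items():
                for c, p_c in p_cost.items():
                    total += p_rr * p_c * cpt_fitness[(rr, c)]
            return total

        if __name__ == "__main__":
            print(f"P(fitness = acceptable) = {p_fitness_acceptable():.3f}")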

    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name a few. Unlike prior sensing paradigms, humans are now the primary actors in the sensing process, since they are fundamental in retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions of future work. Comment: To appear in ACM Transactions on Sensor Networks (TOSN).
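
    The paper proposes a framework for defining and enforcing QoI when human contributors may be unreliable. One elementary ingredient of any such scheme is weighting contributions by how reliable each participant is believed to be; the sketch below shows such a reliability-weighted aggregation as an assumed, simplified stand-in, not the framework from the paper.

        # Illustrative sketch only: weight crowdsensed reports by an estimated
        # reporter reliability when aggregating a sensed quantity (e.g. a noise
        # level). The weighting scheme is an assumption for the example, not
        # the QoI framework proposed in the surveyed paper.

        from dataclasses import dataclass

        @dataclass
        class Report:
            value: float        # sensed value reported by the participant
            reliability: float  # estimated reporter reliability in [0, 1]

        def aggregate(reports: list[Report]) -> float:
            """Reliability-weighted mean of the reported values."""
            weight_sum = sum(r.reliability for r in reports)
            if weight_sum == 0:
                raise ValueError("no reliable reports to aggregate")
            return sum(r.value * r.reliability for r in reports) / weight_sum

        if __name__ == "__main__":
            reports = [Report(62.0, 0.9), Report(65.0, 0.8), Report(40.0, 0.1)]
            print(f"aggregated noise level: {aggregate(reports):.1f} dB")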

    My word is my bond: reputation as collateral in nineteenth century English provincial banking

    There are few real-world economic transactions that do not involve an element of trust, yet in textbook economics trust is not prominently discussed. In that world, perfectly informed and computationally endowed agents reach optimal, enforceable decisions in continuously harmonizing exchanges. Trust is therefore linked to deviations from the textbook ideal: incomplete information, costly enforcement, and computational limitations faced by agents. Trust can then be thought of as an algorithm, in other words, a way of resolving uncertainty in a complex world. In this sense trust may be seen as a form of expectation concerning the behavior of other agents whose actions and intentions cannot be (fully) observed. This paper pursues this approach by “running the algorithm backwards” and trying to establish what factors led a 19th century provincial English bank to trust different loan applicants. Using a data-set of some 200 loan decisions, and knowing the size of collateral (if any) requested, we develop a method to estimate the probability that the bank attached to each borrower’s promise to repay (i.e., the trust the bank had towards the borrower), adjusting for stages in the business cycle. We then regress this estimated probability on a variety of observable borrower characteristics. We find that trust is not correlated with a priori expected variables, such as borrower’s assets or frequency of interaction. This suggests that trust was built up in other interactions, possibly through social or religious networks, and that the banking relationship reflected information available to bank directors other than what was purely pertinent to the borrowers’ economic conditions. This has strong implications for the allocation of credit to industry in 19th century England.
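
    The core of the method is to back out, from the collateral requested, the repayment probability the bank implicitly attached to each borrower, and then regress that probability on observable borrower characteristics. The sketch below shows only the second step as a plain least-squares regression on toy data; how the probability is derived and which characteristics are used are assumptions for illustration, not the paper's actual procedure or data.

        # Illustrative sketch only: regress an estimated repayment probability
        # on observable borrower characteristics with ordinary least squares.
        # The toy values and chosen characteristics are assumptions for the
        # example, not the paper's data or estimation procedure.

        import numpy as np

        # One row per loan decision.
        # trust_prob: probability the bank is assumed to attach to repayment.
        trust_prob = np.array([0.9, 0.7, 0.95, 0.6, 0.8])
        # Columns: intercept, borrower assets (in thousands), years of prior interaction.
        X = np.array([
            [1.0, 12.0, 5.0],
            [1.0,  3.0, 1.0],
            [1.0, 20.0, 8.0],
            [1.0,  2.0, 0.0],
            [1.0,  8.0, 3.0],
        ])

        # Least-squares fit: trust_prob ~ X @ beta
        beta, *_ = np.linalg.lstsq(X, trust_prob, rcond=None)
        print("estimated coefficients (intercept, assets, interaction):", beta)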

    Encouraging Privacy-Aware Smartphone App Installation: Finding out what the Technically-Adept Do

    Smartphone apps can harvest very personal details from the phone with ease, which is a particular privacy concern. Unthinking installation of untrustworthy apps constitutes risky behaviour. This could be due to poor awareness or a lack of know-how: knowledge of how to go about protecting privacy. It seems that smartphone owners proceed with installation, ignoring any misgivings they might have, and thereby irretrievably sacrifice their privacy.

    Enhancing service quality and reliability in intelligent traffic system

    Intelligent Traffic Systems (ITS) can manage on-road traffic efficiently based on real-time traffic conditions, reduce delay at intersections, and maintain the safety of road users. However, emergency vehicles still struggle to meet their targeted response times, and an ITS is vulnerable to various types of attacks, including cyberattacks. To address these issues, this dissertation introduces three techniques that enhance the service quality and reliability of an ITS. First, an innovative Emergency Vehicle Priority System (EVPS) is presented to assist an Emergency Vehicle (EV) in reaching the incident location faster. The proposed EVPS determines the proper priority code for an EV based on the type of incident. After priority code generation, EVPS selects the number of traffic signals that need to be turned green, considering the impact on other vehicles gathered in the relevant adjacent cells. Second, to improve reliability, an Intrusion Detection System for traffic signals is proposed for the first time, which leverages traffic and signal characteristics such as flow rate, vehicle speed, and signal phase time. Shannon's entropy is used to calculate the uncertainty associated with the likelihood of particular evidence, and Dempster-Shafer (DS) decision theory is used to fuse the evidential information. Finally, to improve the reliability of a future ITS, we introduce a model that assesses the trust level of four major On-Board Units (OBU) of a self-driving car along with Global Positioning System (GPS) data and safety messages. Both subjective logic (DS theory) and CertainLogic are used to develop the theoretical underpinning for estimating the trust value of a self-driving car by fusing the trust values of the four OBU components, GPS data and safety messages. For evaluation and validation purposes, a popular and widely used traffic simulation package, namely Simulation of Urban Mobility (SUMO), is used to develop the simulation platform using a real map of Melbourne CBD. Relevant historical data taken from the VicRoads website were used to inject the traffic flow and density into the simulation model. We evaluated the performance of the proposed techniques considering different traffic and signal characteristics such as occupancy rate, flow rate, phase time, and vehicle speed under many realistic scenarios. The simulation results show the potential efficacy of the proposed techniques for all selected scenarios.
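
    The intrusion detection component fuses evidential information with Dempster-Shafer theory. The sketch below implements Dempster's rule of combination over a minimal {attack, normal} frame of discernment with made-up mass values; it illustrates only the generic fusion step and is not the dissertation's detector.

        # Illustrative sketch only: Dempster's rule of combination over a tiny
        # frame of discernment {attack, normal}, fusing two pieces of evidence
        # about a traffic signal. Mass values are assumed for the example.

        from itertools import product

        FRAME = frozenset({"attack", "normal"})

        def combine(m1: dict, m2: dict) -> dict:
            """Dempster's rule: combine two mass functions keyed by frozensets."""
            combined = {}
            conflict = 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
            if conflict >= 1.0:
                raise ValueError("total conflict; evidence cannot be combined")
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Evidence from a flow-rate anomaly and from a phase-time anomaly.
        m_flow = {frozenset({"attack"}): 0.6, FRAME: 0.4}
        m_phase = {frozenset({"attack"}): 0.5, frozenset({"normal"}): 0.2, FRAME: 0.3}

        for hypothesis, mass in combine(m_flow, m_phase).items():
            print(sorted(hypothesis), round(mass, 3))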

    An Investigation into Trust & Reputation for Agent-Based Virtual Organisations

    Trust is a prevalent concept in human society. In essence, it concerns our reliance on the actions of our peers, and the actions of other entities within our environment. For example, we may rely on our car starting in the morning to get to work on time, and on the actions of our fellow drivers, so that we may get there safely. For similar reasons, trust is becoming increasingly important in computing, as systems, such as the Grid, require computing resources to work together seamlessly, across organisational and geographical boundaries (Foster et al., 2001). In this context, the reliability of resources in one organisation cannot be assumed from the point of view of another. Moreover, certain resources may fail more often than others, and for this reason, we argue that software systems must be able to assess the reliability of different resources, so that they may choose which resources to rely upon. With this in mind, our goal here is to develop a mechanism by which software entities can automatically assess the trustworthiness of a given entity (the trustee). In achieving this goal, we have developed a probabilistic framework for assessing trust based on observations of a trustee's past behaviour. Such observations may be accounted for either when they are made directly by the assessing party (the truster), or by a third party (a reputation source). In the latter case, our mechanism can cope with the possibility that third-party information is unreliable, either because the sender is lying, or because it has a different world view. In this document, we present our framework and show how it can be applied to cases in which a trustee's actions are represented as binary events; for example, a trustee may cooperate with the truster, or it may defect. We place our work in context by showing how it constitutes part of a system for managing coalitions of agents operating in a grid computing environment. We then give an empirical evaluation of our method, which shows that it outperforms the most similar system in the literature in many important scenarios.
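
    The described framework assesses trust from binary cooperate/defect observations, combining direct experience with possibly unreliable third-party reports. A common way to realise this style of estimate is a Beta distribution over the cooperation probability, with third-party counts discounted by how much the truster trusts the reporter; the sketch below shows that calculation as an assumed simplification, not the exact mechanism evaluated in the document.

        # Illustrative sketch only: a Beta-distribution estimate of
        # trustworthiness from binary cooperate/defect observations, with
        # third-party reports down-weighted by a reporter trust factor.
        # The discounting scheme is an assumption for the example.

        def trust_estimate(direct_coop: int, direct_defect: int,
                           reported_coop: int, reported_defect: int,
                           reporter_weight: float = 0.5) -> float:
            """Expected probability of cooperation under a Beta(1, 1) prior."""
            alpha = 1.0 + direct_coop + reporter_weight * reported_coop
            beta = 1.0 + direct_defect + reporter_weight * reported_defect
            return alpha / (alpha + beta)

        if __name__ == "__main__":
            # 8 direct cooperations, 2 defections; a third party reports 5 and 5.
            print(f"trust: {trust_estimate(8, 2, 5, 5):.3f}")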