Weathering the Nest: Privacy Implications of Home Monitoring for the Aging American Population
The research in this paper will seek to ascertain the extent of personal data entry and collection required to enjoy at least the minimal promised benefits of distributed intelligence and monitoring in the home. Particular attention will be given to the abilities and sensitivities of the population most likely to need these devices, notably the elderly and disabled. The paper will then evaluate whether existing legal limitations on the collection, maintenance, and use of such data are applicable to devices currently in use in the home environment and whether such regulations effectively protect privacy. Finally, given appropriate policy parameters, the paper will offer proposals to effectuate reasonable and practical privacy-protective solutions for developers and consumers.
Pervasive computing reference architecture from a software engineering perspective (PervCompRA-SE)
Pervasive computing (PervComp) is one of the most challenging research topics today. Its complexity exceeds that of the outdated mainframe and client-server computation models. Its systems are highly volatile, mobile, and resource-limited, and they stream large volumes of data from different sensors. Despite these challenges, the domain entails, by default, a lengthy list of desired quality features such as context sensitivity, adaptable behavior, concurrency, service omnipresence, and invisibility. Fortunately, device manufacturers have improved the enabling technologies, such as sensors, network bandwidth, and batteries, paving the road for pervasive systems with high capabilities. The domain has also received an enormous amount of attention from researchers since it was first introduced in the early 1990s. Yet pervasive systems are still classified as visionary systems that are expected to be woven into people's daily lives. At present, PervComp systems have no unified architecture, their context-sensitivity and adaptability are limited in scope, and many essential quality features are insufficiently addressed in PervComp architectures. The reference architecture (RA) developed in this research, which we call PervCompRA-SE, addresses these problems by providing a comprehensive and innovative pair of business and technical architectural reference models. Both models were based on deep analytical activities and were evaluated using different qualitative and quantitative methods. In this thesis we surveyed a wide range of PervComp research projects in various subdomain areas to specify our methodological approach and identify the quality features most commonly found in the domain. The thesis presents a novel approach that utilizes theories from sociology, psychology, and process engineering.
The thesis analyzed the business and architectural problems in two separate chapters covering the business reference architecture (BRA) and the technical reference architecture (TRA); the solutions to these problems were likewise introduced in the BRA and TRA chapters. We devised an associated comprehensive ontology with semantic meanings and measurement scales. Both the BRA and TRA were validated throughout the course of the research and evaluated as a whole using traceability, benchmark, survey, and simulation methods. The thesis introduces a new reference architecture in the PervComp domain, developed using a novel requirements engineering method, along with a novel statistical method for trade-off analysis and conflict resolution between requirements. The adaptation of activity theory, human perception theory, and process re-engineering methods to develop the BRA and the TRA proved very successful, and our reuse of the ontological dictionary to monitor system performance was likewise innovative. Finally, the thesis's evaluation methods offer a role model for researchers on how to combine qualitative and quantitative methods to evaluate a reference architecture. Our results show that the requirements engineering process, together with the trade-off analysis, was essential to delivering PervCompRA-SE. We found that the invisibility feature, one of the envisioned quality features of PervComp, does not hold up in practice, and that the qualitative evaluation methods were just as important as the quantitative ones for recognizing the overall quality of the RA by machines as well as by human beings.
Social Data
As online social media grow, it is increasingly important to distinguish between the different threats to privacy that arise from the conversion of our social interactions into data. One well-recognized threat is from the robust concentrations of electronic information aggregated into colossal databases. Yet much of this same information is also consumed socially and dispersed through a user interface to hundreds, if not thousands, of peer users.
In order to distinguish relationally shared information from the threat of the electronic database, this essay identifies the massive amounts of personal information shared via the user interface of social technologies as “social data.” The main thesis of this essay is that, whereas electronic databases are the focus of the Fair Information Practice Principles (FIPPs), no commonly accepted principles guide the recent explosion of voluntarily adopted practices, industry codes, and laws that address social data.
This essay aims to remedy that by proposing three social data principles — a sort of FIPPs for the front end of social media: the Boundary Regulation Principle, the Identity Integrity Principle, and the Network Integrity Principle. These principles can help courts, policymakers, and organizations create more consistent and effective rules regarding the use of social data.
Discovering Network Control Vulnerabilities and Policies in Evolving Networks
The range and number of new applications and services are growing at an unprecedented rate. Computer networks need to be able to provide connectivity for these services and meet their constantly changing demands. This requires not only support for new network protocols and security requirements, but often architectural redesigns for long-term improvements to efficiency, speed, throughput, cost, and security. Networks are now facing a drastic increase in size and are required to carry a constantly growing amount of heterogeneous traffic. Unfortunately, such dynamism greatly complicates the security not only of the end nodes in the network, but also of the nodes of the network itself. To make matters worse, just as applications are being developed at faster and faster rates, attacks are becoming more pervasive and complex. Networks need to be able to understand the impact of these attacks and protect against them.
Network control devices, such as routers, firewalls, censorship devices, and base stations, are elements of the network that make decisions on how traffic is handled. Although network control devices are expected to act according to specifications, there are various reasons why they may not in practice: protocols could be flawed, ambiguous, or incomplete; developers could introduce unintended bugs; or attackers may find vulnerabilities in the devices and exploit them. Malfunctions can intentionally or unintentionally threaten the confidentiality, integrity, and availability of end nodes and the data that passes through the network. They can also impact the availability and performance of the control devices themselves and the security policies of the network. The fast-paced evolution and scalability of current and future networks create a dynamic environment in which it is difficult to develop automated tools for testing new protocols and components. At the same time, they make the function of such tools vital for discovering implementation flaws and protocol vulnerabilities as networks become larger and more complex, and as new and potentially unrefined architectures are adopted. This thesis will present the design, implementation, and evaluation of a set of tools designed for understanding the implementation of network control nodes and how they react to changes in traffic characteristics as networks evolve. We will first introduce Firecycle, a test bed for analyzing the impact of large-scale attacks and Machine-to-Machine (M2M) traffic on the Long Term Evolution (LTE) network. We will then discuss Autosonda, a tool for automatically discovering rule implementation and finding triggering traffic features in censorship devices.
This thesis provides the following contributions:
1. The design, implementation, and evaluation of two tools that discover models of network control nodes in two scenarios of evolving networks: mobile networks and the censored Internet
2. First existing test bed for analysis of large-scale attacks and impact of traffic scalability on LTE mobile networks
3. First existing test bed for LTE networks that can be scaled to arbitrary size and that deploys traffic models based on real traffic traces taken from a tier-1 operator
4. An analysis of traffic models of various categories of Internet of Things (IoT) devices
5. First study demonstrating the impact of M2M scalability and signaling overload on the packet core of LTE mobile networks
6. A specification for modeling of censorship device decision models
7. A means for automating the discovery of features utilized in censorship device decision models, the comparison of these models, and their rule discovery
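The idea behind automated rule discovery in censorship devices can be sketched as follows: starting from a baseline request that passes the device, mutate one traffic feature at a time and record which mutations flip the block/allow verdict. This is a minimal illustration only; the `mock_filter` below is a stand-in for a real device, and all feature names and values are hypothetical, not Autosonda's actual interface.

```python
# Minimal sketch of probing a censorship device's decision model by
# mutating one traffic feature at a time. mock_filter is a hypothetical
# stand-in for a real network control device.

def mock_filter(request):
    # Hypothetical device rule: block a flagged hostname, or any
    # request carrying a flagged keyword in its path.
    return request["host"] == "blocked-site.test" or request["keyword_in_path"]

# A baseline request that the device allows.
BASELINE = {"host": "example.com", "port": 443, "keyword_in_path": False}

# One candidate mutation per feature.
MUTATIONS = {"host": "blocked-site.test", "port": 80, "keyword_in_path": True}

def discover_triggering_features(device, baseline, mutations):
    """Return the features whose mutation flips the device's verdict."""
    base_verdict = device(baseline)
    triggers = []
    for feature, value in mutations.items():
        probe = dict(baseline, **{feature: value})
        if device(probe) != base_verdict:
            triggers.append(feature)
    return triggers

print(discover_triggering_features(mock_filter, BASELINE, MUTATIONS))
# → ['host', 'keyword_in_path']
```

Single-feature probing like this recovers disjunctive rules directly; conjunctive rules require mutating feature combinations, which is part of what makes real decision-model discovery hard.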
Security Aspects of Social Robots in Public Spaces: A Systematic Mapping Study
Background: As social robots increasingly integrate into public spaces, comprehending their security implications becomes paramount. This study is conducted amidst the growing use of social robots in public spaces (SRPS), emphasising the necessity for tailored security standards for these unique robotic systems. Methods: In this systematic mapping study (SMS), we meticulously review and analyse existing literature from the Web of Science database, following guidelines by Petersen et al. We employ a structured approach to categorise and synthesise literature on SRPS security aspects, including physical safety, data privacy, cybersecurity, and legal/ethical considerations. Results: Our analysis reveals a significant gap in existing safety standards, originally designed for industrial robots, which need to be revised for SRPS. We propose a thematic framework consolidating essential security guidelines for SRPS, substantiated by evidence from a considerable percentage of the primary studies analysed. Conclusions: The study underscores the urgent need for comprehensive, bespoke security standards and frameworks for SRPS. These standards would ensure that SRPS operate securely and ethically, respecting individual rights and public safety, while fostering seamless integration into diverse human-centric environments. This work is poised to enhance public trust and acceptance of these robots, offering significant value to developers, policymakers, and the general public.
Social engineering in the internet of everything
The Internet of Everything is becoming a reality, with fridges, smart TVs, cars, medical monitoring equipment, and industrial control systems all communicating via the Internet. There have already been cases where smart toys have been exploited by hackers to steal personal information. Social engineering attacks on these devices, which have little or no security, are already a reality, with hackers quick to exploit any weakness in cyberspace. This article reviews past cases, gives three scenarios that could lead to the owner of an IoT device giving private information to hackers, and then proposes defence recommendations.
Understanding security risks and users' perception towards adopting wearable Internet of Medical Things
This thesis examines users' perception of trust within the context of the security and privacy of the Wearable Internet of Medical Things (WIoMT). WIoMT is a collective term for all medical devices connected to the internet to facilitate the collection and sharing of health-related data such as blood pressure, heart rate, oxygen level, and more. Common wearable devices include smart watches and fitness bands. WIoMT, a phenomenon arising from the Internet of Things (IoT), has become prevalent in managing the day-to-day activities and health of individuals. This increased growth and adoption poses severe security and privacy concerns. As with IoT, there is a need to analyse WIoMT security risks, since these devices are used by individuals and organisations on a regular basis, risking personal and confidential information. Additionally, observing users' perception is crucial for the better implementation, performance, adoption, and security of wearable medical devices; users' perspectives on trust are critical for adopting WIoMT. This research aimed to understand users' perception of trust in the adoption of WIoMT, while also exploring the associated security risks. Employing a quantitative approach, 189 participants from Western Sydney University completed an online survey. The research model explained more than half of the variance (R² = 0.553) in Intention to Use WIoMT devices, which was determined by the significant predictors (95% confidence interval; p < 0.05) Perceived Usefulness, Perceived Ease of Use, and Perceived Security and Privacy. Among these, Perceived Security and Privacy was found to have the most significant outcomes. Hence, this study reinforced that WIoMT users intend to use a device only if they trust it, where trust is defined in terms of usefulness, ease of use, and security and privacy features.
This finding will be a stepping stone for equipment vendors and manufacturers seeking a good grasp of the health industry, since the proper utilisation of WIoMT devices results in the effective and efficient management of users' health and wellbeing. The expected outcome of this research is also to identify how users' security perceptions matter when adopting WIoMT, which in future can help security professionals examine trust factors when implementing new and advanced WIoMT devices. Moreover, the expected result will help consumers as well as the healthcare industry to create devices that can be easily adopted and used securely by consumers.
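The R² = 0.553 reported above means the regression model accounted for about 55% of the variance in Intention to Use. As an illustration only, the sketch below computes R² for a simple one-predictor least-squares fit; the Likert-style scores are made up, and the real study regressed intention on three predictors, not one.

```python
# Illustrative only: how a coefficient of determination (R^2), like the
# study's 0.553, is computed. The survey scores below are hypothetical.

def r_squared(xs, ys):
    """R^2 of a simple least-squares fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual and total sums of squares.
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical 1-5 Likert scores: perceived security/privacy vs. intention to use.
security = [2, 3, 3, 4, 4, 5, 5, 1, 2, 4]
intention = [2, 3, 4, 4, 5, 5, 4, 1, 3, 4]
print(round(r_squared(security, intention), 3))
```

An R² like the one printed here would be read the same way as the study's: the fraction of variance in the outcome explained by the predictor(s).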
Secure platforms for enforcing contextual access control
Advances in technology and the wide-scale deployment of networking-enabled portable devices such as smartphones have made it possible to provide pervasive access to sensitive data to authorized individuals from any location. While this has certainly made data more accessible, it has also increased the risk of data theft, as the data may be accessed from potentially unsafe locations in the presence of untrusted parties. Smartphones come with various embedded sensors that can provide rich contextual information, such as sensing the presence of other users in a context. Frequent context profiling can also allow a mobile device to learn its surroundings and infer the familiarity and safety of a context. This can be used to further strengthen the access control policies enforced on a mobile device. Incorporating contextual factors into access control decisions requires that one be able to trust the information provided by these context sensors, which in turn requires that the underlying operating system and hardware be well protected against attacks from malicious adversaries. In this work, we explore how contextual factors can be leveraged to infer the safety of a context. We use a context profiling technique to gradually learn a context's profile, infer its familiarity and safety, and then use this information in the enforcement of contextual access policies. While intuitive security configurations may be suitable for non-critical applications, security-critical applications require a more rigorous definition and enforcement of contextual policies. We thus propose a formal model for proximity that allows one to define whether two users are in proximity in a given context, and then extend the traditional RBAC model by incorporating these proximity constraints. Trusted enforcement of contextual access control requires that the underlying platform be secured against various attacks such as code reuse attacks.
To mitigate these attacks, we propose a binary diversification approach that randomizes the target executable with every run. We also propose a defense framework based on control flow analysis that detects, diagnoses, and responds to code reuse attacks in real time.
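The proximity-constrained RBAC idea described above can be sketched minimally: a standard role-to-permission check is extended with a contextual predicate that must also hold for proximity-sensitive permissions. All names below (roles, permissions, the trusted-user set) are hypothetical illustrations, not the thesis's formal model or API.

```python
# Minimal sketch of RBAC extended with a proximity constraint.
# Roles, permissions, and the proximity predicate are illustrative.

ROLE_PERMISSIONS = {
    "doctor": {"read_record", "update_record"},
    "nurse": {"read_record"},
}

# Hypothetical constraint: these permissions additionally require that
# every co-located user sensed in the context is trusted.
PROXIMITY_CONSTRAINED = {"update_record"}
TRUSTED_USERS = {"alice", "bob"}

def in_safe_proximity(context_users):
    """Stand-in for sensed context: True if all co-located users are trusted."""
    return all(u in TRUSTED_USERS for u in context_users)

def check_access(role, permission, context_users):
    # Standard RBAC check first.
    if permission not in ROLE_PERMISSIONS.get(role, set()):
        return False
    # Then the contextual proximity constraint, where applicable.
    if permission in PROXIMITY_CONSTRAINED and not in_safe_proximity(context_users):
        return False
    return True

print(check_access("doctor", "update_record", ["alice"]))         # True
print(check_access("doctor", "update_record", ["alice", "eve"]))  # False
```

The design point is that the proximity predicate composes with, rather than replaces, the role check: an untrusted bystander downgrades what an otherwise-authorized user may do.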