Modeling of Personalized Privacy Disclosure Behavior: A Formal Method Approach
In order to create user-centric and personalized privacy management tools,
the underlying models must account for individual users' privacy expectations,
preferences, and their ability to control their information sharing activities.
Existing studies of user privacy behavior modeling tend to frame the
problem from the requester's perspective, which omits the crucial
involvement of the information owner and leaves users with limited or no
control over policy management. Moreover, very few of them take into
consideration the correctness, explainability, usability, and acceptance
of the methodologies for each user of the system. In this paper, we
present a methodology to formally
model, validate, and verify personalized privacy disclosure behavior based on
the analysis of the user's situational decision-making process. We use a model
checking tool named UPPAAL to represent users' self-reported privacy disclosure
behavior by an extended form of finite state automata (FSA), and perform
reachability analysis for the verification of privacy properties through
computation tree logic (CTL) formulas. We also describe practical use
cases of the methodology, illustrating the potential of formal techniques
for the design and development of user-centric behavioral modeling.
Through extensive experimental results, this paper contributes several
insights to the area of formal methods and user-tailored privacy behavior
modeling.
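The core idea above, representing self-reported disclosure behavior as a finite state automaton and checking reachability of states of interest, can be sketched in plain Python. This is an illustrative sketch only: the state names and transitions below are invented for the example, and a breadth-first search stands in for the CTL reachability queries (e.g., "EF disclose") that a model checker such as UPPAAL would evaluate.

```python
from collections import deque

# Hypothetical disclosure-behavior automaton; states and transitions are
# made up for illustration, not taken from the paper's user models.
transitions = {
    "idle": {"receive_request": "deciding"},
    "deciding": {"trusted_requester": "disclose",
                 "untrusted_requester": "deny"},
    "disclose": {"revoke": "idle"},
    "deny": {"retry": "deciding"},
}

def reachable(start, target):
    """Breadth-first search: can `target` be reached from `start`?
    Analogous to verifying a CTL 'EF target' property."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state == target:
            return True
        for nxt in transitions.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Disclosure is reachable from the initial state, but only via "deciding".
print(reachable("idle", "disclose"))  # True
```

A real UPPAAL model would additionally carry clocks and per-user parameters; the point here is only that reachability over the behavior automaton is what the privacy-property verification boils down to.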
Cookie Disclaimers: Impact of Design and Users’ Attitude
Dark patterns in cookie disclaimers are factors that are used to lead
users to accept more cookies than needed and more than they are
aware of. The contributions of this paper are (1) evaluating the
efficacy of several of these factors while measuring actual behavior;
(2) identifying users’ attitude towards cookie disclaimers including
how they decide which cookies to accept or reject. We show that
different visual representations of the reject/accept options have a
significant impact on users' decisions. We also found that the labeling
of the reject option has a significant impact. In addition, we confirm
previous research regarding biasing text, which has no significant
impact on users' decisions. Our results on users' attitudes towards
cookie disclaimers indicate that for several user groups the design
of the disclaimer only plays a secondary role when it comes to
decision making. We provide recommendations on how to improve
the situation for the different user groups.
Forensic Artifact Finder (ForensicAF): An Approach & Tool for Leveraging Crowd-Sourced Curated Forensic Artifacts
Current methods for artifact analysis and understanding depend on investigator expertise. Experienced and technically savvy examiners spend a lot of time reverse engineering applications while attempting to find the crumbs they leave behind on systems. This takes valuable time away from the investigative process and slows down forensic examination. Furthermore, when specific artifact knowledge is gained, it stays within the respective forensic units. To combat these challenges, we present ForensicAF, an approach for leveraging curated, crowd-sourced artifacts from the Artifact Genome Project (AGP). The approach has the overarching goal of uncovering forensically relevant artifacts from storage media. We explain our approach and implement it as an Autopsy Ingest Module, focusing on both file and registry artifacts. We evaluated ForensicAF using systematic and random sampling experiments. While ForensicAF showed consistent results with registry artifacts across all experiments, it also revealed that deeper folder traversal yields more file artifacts during data source ingestion. When experiments were conducted on case-scenario disk images without a priori knowledge, ForensicAF uncovered artifacts of forensic relevance that helped in solving those scenarios. We contend that ForensicAF is a promising approach for artifact extraction from storage media, and its utility will advance as more artifacts are crowd-sourced by AGP.
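The finding that deeper folder traversal yields more file artifacts can be illustrated with a small sketch: matching a curated artifact catalog (in the spirit of AGP entries, though the entries below are invented) against files discovered while walking a data source to a bounded depth.

```python
import fnmatch
import os

# Invented example catalog: artifact label -> filename pattern.
# Real AGP entries carry far richer metadata (paths, hashes, context).
ARTIFACT_CATALOG = {
    "Firefox places DB": "places.sqlite",
    "Windows prefetch": "*.pf",
}

def find_artifacts(root, max_depth):
    """Walk `root` up to `max_depth` directory levels and report matches.
    Deeper traversal generally surfaces more file artifacts."""
    hits = []
    base_depth = root.rstrip(os.sep).count(os.sep)
    for dirpath, dirnames, filenames in os.walk(root):
        if dirpath.count(os.sep) - base_depth >= max_depth:
            dirnames[:] = []  # stop descending past the depth limit
            continue
        for name in filenames:
            for label, pattern in ARTIFACT_CATALOG.items():
                if fnmatch.fnmatch(name, pattern):
                    hits.append((label, os.path.join(dirpath, name)))
    return hits
```

An ingest module would run such matching during data source ingestion; here a larger `max_depth` simply returns a superset of the shallow scan's hits.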
Forensicast: A Non-intrusive Approach & Tool for Logical Forensic Acquisition & Analysis of the Google Chromecast TV
The era of traditional cable television (TV) is swiftly coming to an end. People today subscribe to a multitude of streaming services. Smart TVs have enabled a new generation of entertainment that is no longer limited to constant on-demand streaming, as they now offer other features such as web browsing, communication, and gaming. These functions have recently been embedded into a small IoT device, known as the Google Chromecast TV, that can connect to any TV with a High Definition Multimedia Interface (HDMI) input. Its wide adoption makes it a treasure trove of potential digital evidence. Our work is the primary source on forensically interrogating Chromecast TV devices. We found that the device is always unlocked, allowing extraction of application data through the backup feature of the Android Debug Bridge (ADB) without device root access. We take advantage of this minimal access and demonstrate how a series of artifacts can be stitched together into a detailed timeline, and we automate the process by constructing Forensicast, a Chromecast TV forensic acquisition and timelining tool. Our work targeted (n=112) of the most popular Android TV applications, including 69% (77/112) third-party applications and 31% (35/112) system applications. 65% (50/77) of third-party applications allowed backup; of those, 90% (45/50) contained time-based identifiers, 40% (20/50) invoked some form of logs/activity monitoring, 50% (25/50) yielded some sort of token/cookie, 8% (4/50) resulted in a device ID, 26% (13/50) produced a user ID, and 24% (12/50) created other information. 26% (9/35) of system applications provided meaningful artifacts: 78% (7/9) provided time-based identifiers, 22% (2/9) involved some form of logs/activity monitoring, 22% (2/9) yielded some form of token/cookie data, 22% (2/9) resulted in a device ID, 44% (4/9) provided a user ID, and 33% (3/9) created other information.
Our findings also illustrate common artifacts found in applications that are related to developer and advertising utilities, mainly WebView, Firebase, and Facebook Analytics. Future work and open research problems are shared.
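The no-root extraction channel the abstract describes is the standard `adb backup` mechanism, which emits an Android backup (.ab) file: four newline-terminated header lines ("ANDROID BACKUP", version, compression flag, encryption scheme) followed by a payload that, when unencrypted, is a zlib-compressed tar stream. The sketch below reads such a file; it is a generic illustration of the format, not the paper's tool, and the package name in the usage comment is hypothetical.

```python
import io
import tarfile
import zlib

def read_ab(data):
    """Open an unencrypted `adb backup` archive (.ab) as a tarfile."""
    header, _, payload = data.partition(b"\n")
    if header != b"ANDROID BACKUP":
        raise ValueError("not an adb backup file")
    version, _, payload = payload.partition(b"\n")
    compressed, _, payload = payload.partition(b"\n")
    encryption, _, payload = payload.partition(b"\n")
    if encryption != b"none":
        raise ValueError("encrypted backup: user's password required")
    if compressed == b"1":
        payload = zlib.decompress(payload)
    return tarfile.open(fileobj=io.BytesIO(payload))

# Typical usage (package name hypothetical):
#   adb backup -f app.ab com.example.tvapp        # acquisition side
#   names = read_ab(open("app.ab", "rb").read()).getnames()
```

Entries under `apps/<package>/` in the resulting tar are where the time-based identifiers, logs, and tokens tallied above would be found.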
Substation-Aware: An Intrusion Detection System for the IEC 61850 Protocol
The number of cyberattacks against the smart grid has increased in recent years. Because power systems are considered critical infrastructure, their operators must improve the cybersecurity countermeasures of their installations. Intrusion Detection Systems (IDS) appear to be a promising solution for detecting attackers' hidden activity before an attack is launched. Most detection tools are generalist, designed to find predefined patterns such as message frequency, well-known malware packets, the source and destination of messages, or the content of each packet itself. These tools also allow plugging in modules for different protocols, offering a better understanding of the analysed data, such as the protocol action (read, write, reset, ...) or the data model/schema. However, the semantics of the transmitted data cannot be inferred. The Substation-Aware (SBT-Aware) tool adds this missing capability for primary and secondary substations, taking into account not only the protocols defined in the IEC 61850 standard but the substation topology as well. In this paper we present SBT-Aware, an IDS that has been developed and tested in the course of the H2020 SDN-microSENSE project. The research presented has been done in the context of the SDN-microSENSE project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 833955. The information contained in this publication reflects only the authors' view; the EC is not responsible for any use that may be made of this information.
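A minimal sketch of what "topology-aware" detection means in practice: beyond decoding the protocol action, the detector consults a model of which substation devices are allowed to perform which actions against which peers. The device names, the flow allowlist, and the rule shape below are all invented for illustration; a real deployment would derive them from the substation's IEC 61850 configuration.

```python
# Invented topology-derived allowlist: (source, destination, action).
ALLOWED_FLOWS = {
    ("hmi01", "ied_bay1", "write"),
    ("hmi01", "ied_bay2", "write"),
    ("ied_bay1", "ied_bay2", "goose"),
}

def inspect(src, dst, action):
    """Return an alert string for flows the topology does not allow,
    or None for legitimate traffic."""
    if (src, dst, action) not in ALLOWED_FLOWS:
        return f"ALERT: {action} from {src} to {dst} violates topology"
    return None

print(inspect("hmi01", "ied_bay1", "write"))     # legitimate -> None
print(inspect("laptop99", "ied_bay1", "write"))  # unknown host -> alert
```

The value of the topology check is that a syntactically valid, protocol-conformant write command still raises an alert when it originates from a device that has no business commanding that IED.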
Best Practices for Notification Studies for Security and Privacy Issues on the Internet
Researchers help operators of vulnerable and non-compliant internet services
by individually notifying them about security and privacy issues uncovered in
their research. To improve the efficiency and effectiveness of such efforts,
dedicated notification studies are imperative. As of today, there is no
comprehensive documentation of pitfalls and best practices for conducting such
notification studies, which limits the validity of results and impedes
reproducibility. Drawing on our experience with such studies and guidance from
related work, we present a set of guidelines and practical recommendations,
including initial data collection, sending of notifications, interacting with
the recipients, and publishing the results. We note that future studies can
especially benefit from extensive planning and automation of crucial processes,
i.e., activities that take place well before the first notifications are sent.
Comment: Accepted to the 3rd International Workshop on Information Security
Methodology and Replication Studies (IWSMR '21), co-located with ARES '2
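One concrete piece of the pre-send automation the guidelines emphasize is aggregating findings per abuse contact, so each operator receives a single consolidated notification rather than one message per affected host. The sketch below is a hedged illustration with fabricated contacts, hosts, and issue labels.

```python
from collections import defaultdict

# Fabricated scan findings for illustration (RFC 5737 example addresses).
findings = [
    {"host": "192.0.2.10", "issue": "open resolver",
     "contact": "abuse@example.net"},
    {"host": "192.0.2.11", "issue": "open resolver",
     "contact": "abuse@example.net"},
    {"host": "198.51.100.5", "issue": "expired TLS certificate",
     "contact": "noc@example.org"},
]

def group_by_contact(findings):
    """Collect (host, issue) pairs per abuse contact for one mail each."""
    grouped = defaultdict(list)
    for f in findings:
        grouped[f["contact"]].append((f["host"], f["issue"]))
    return dict(grouped)

def render(contact, items):
    """Render one aggregated notification body for a contact."""
    lines = [f"Dear operator ({contact}),",
             "During our research we observed the following issues:"]
    lines += [f"  - {host}: {issue}" for host, issue in items]
    return "\n".join(lines)
```

Running this kind of grouping and templating well before the first send, as the abstract recommends, also makes the notification content reproducible for later replication studies.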