13 research outputs found

    Why Do They Do What They Do? A Study of What Motivates Users to (Not) Follow Computer Security Advice

    No full text
    ABSTRACT Usable security researchers have long been interested in what users do to keep their devices and data safe and how that compares to recommendations. Additionally, experts have long debated and studied the psychological underpinnings and motivations for users to do what they do, especially when such behavior is seen as risky, at least by experts. This study investigates user motivations through a survey conducted on Mechanical Turk, which resulted in responses from 290 participants. We use a rational decision model to guide our design, as well as current thought on human motivation in general and in the realm of computer security. Through quantitative and qualitative analysis, we identify key gaps in perception between those who follow common security advice (i.e., update software, use a password manager, use 2FA, change passwords) and those who do not, and help explain participants' motivations behind their decisions. Additionally, we find that social considerations are trumped by individualized rationales.

    Troubleshooting interactive complexity bugs in wireless sensor networks using data mining techniques

    Get PDF
    This article presents a tool for uncovering bugs due to interactive complexity in networked sensing applications. Such bugs are not localized to one faulty component, but rather result from complex and unexpected interactions between multiple, often individually non-faulty, components. Moreover, the manifestations of these bugs are often not repeatable, making them particularly hard to find, as the particular sequence of events that invokes the bug may not be easy to reconstruct. Because of the distributed nature of failure scenarios, our tool looks for sequences of events that may be responsible for faulty behavior, as opposed to localized bugs such as a bad pointer in a module. We identify several challenges in applying discriminative sequence mining for root cause analysis when the system fails to perform as expected, and present our solutions to those challenges. We also present two alternative schemes, namely two-stage mining and progressive discriminative sequence mining, to address the scalability challenge. An extensible framework is developed where a front-end collects runtime data logs of the system being debugged and an offline back-end uses frequent discriminative pattern mining to uncover likely causes of failure. We provide three case studies where we applied our tool successfully to troubleshoot the cause of a problem. We uncovered a kernel-level race condition bug in the LiteOS operating system and a protocol design bug in the directed diffusion protocol. We also present a case study of debugging a multichannel MAC protocol that was found to exhibit corner cases of poor performance (worse than a single-channel MAC). The tool helped uncover event sequences that led to a highly degraded mode of operation; fixing the problem significantly improved the performance of the protocol. Finally, we provide a detailed analysis of tool overhead in terms of memory requirements and impact on the running application.
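The core idea of the back-end described above — discriminative sequence mining over good and bad runs — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the use of contiguous n-grams as candidate sequences, and the support-gap threshold are all simplifying assumptions.

```python
from collections import Counter

def ngrams(log, n):
    """Contiguous event subsequences of length n from one run's log."""
    return [tuple(log[i:i + n]) for i in range(len(log) - n + 1)]

def discriminative_sequences(failing_logs, passing_logs, n=2, min_gap=0.5):
    """Return event n-grams whose support (fraction of runs containing them)
    in failing runs exceeds their support in passing runs by >= min_gap,
    sorted by decreasing support gap."""
    def support(logs):
        counts = Counter()
        for log in logs:
            counts.update(set(ngrams(log, n)))  # count each run at most once
        return {seq: c / len(logs) for seq, c in counts.items()}

    fail_sup = support(failing_logs)
    pass_sup = support(passing_logs)
    return sorted(
        ((seq, s - pass_sup.get(seq, 0.0)) for seq, s in fail_sup.items()
         if s - pass_sup.get(seq, 0.0) >= min_gap),
        key=lambda kv: -kv[1])
```

A sequence such as `("retry", "drop")` appearing in every failing run but no passing run would rank first, pointing the developer at the interaction that precedes the failure.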

    To Follow or Not to Follow: A Study of User Motivations around Cybersecurity Advice

    No full text

    Fueoogle: A Participatory Sensing Fuel-Efficient Maps Application

    Get PDF
    This report presents a participatory sensing service, called Fueoogle, that maps vehicular fuel consumption on city streets, allowing drivers to find the most fuel-efficient routes for their vehicles between arbitrary end-points. The service exploits measurements from vehicular sensors, available via the OBD-II interface that gives access to most gauges and engine instrumentation. The OBD-II sensors are standardized in all vehicles produced in the US since 1996, constituting some of the largest "sensor deployments" to date. Using fuel-related measurements contributed by participating vehicles, we develop a route planner that maps the normalized fuel-efficiency of city streets, enabling vehicles to compute minimum-fuel routes from one point to another. Street congestion, elevation variability, average speed, and average distance between stops (e.g., stop signs) lead to changes in the amount of fuel consumed, making fuel-efficient routes potentially different from shortest or fastest routes, and a function of vehicle type. Our experimental study answers two questions related to the viability of the new service. First, how much fuel can it save? Second, can it survive conditions of sparse deployment? The main challenge under such conditions is to generalize from relatively sparse measurements on a subset of streets to estimates of measurements for an entire city. Through extensive experimental data collection and evaluation, conducted over the duration of a month across several different cars and drivers, we show that significant savings can be achieved by choosing the right route. We also provide extensive results pertaining to the accuracy of the models used for prediction of fuel consumption values.
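The minimum-fuel route computation described above reduces to a standard shortest-path problem once each street segment is annotated with an estimated fuel cost instead of a distance or travel time. A hypothetical sketch, assuming a simple adjacency-list graph whose edge weights are estimated fuel consumption (the graph format and function name are illustrative, not Fueoogle's API):

```python
import heapq

def min_fuel_route(graph, src, dst):
    """Dijkstra's algorithm where edge weights are estimated fuel
    consumption (e.g., liters) rather than distance or time.
    graph: {node: [(neighbor, fuel_cost), ...]}
    Returns (path, total_fuel), or (None, inf) if dst is unreachable."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, fuel in graph.get(u, []):
            nd = d + fuel
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None, float("inf")
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]
```

Because fuel cost depends on congestion, elevation, and stop frequency rather than length alone, the minimum-fuel path can differ from the shortest path, e.g., a longer detour avoiding stop-and-go segments may win.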

    Finding Symbolic Bug Patterns in Sensor Networks

    No full text
    Abstract. This paper presents a failure diagnosis algorithm for summarizing and generalizing patterns that lead to instances of anomalous behavior in sensor networks. Often multiple seemingly different event patterns lead to the same type of failure manifestation. A hidden relationship exists, in those patterns, among event attributes that is somehow responsible for the failure. For example, in some system, a message might always get corrupted if the sender is more than two hops away from the receiver (a distance relationship), irrespective of the senderId and receiverId. To uncover such failure-causing relationships, we present a new symbolic pattern extraction technique that identifies and symbolically expresses relationships correlated with anomalous behavior. Symbolic pattern extraction is a new concept in sensor network debugging that is unique in its ability to generalize over patterns that involve different combinations of nodes or message exchanges by extracting their common relationship. As a proof of concept, we provide synthetic traffic scenarios where we show that applying symbolic pattern extraction can uncover more complex bug patterns that are crucial to understanding the real causes of problems. We also use symbolic pattern extraction to diagnose a real bug and show that it generates far fewer and more accurate patterns compared to previous approaches.
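The generalization step described above can be illustrated with a toy version: instead of reporting concrete node IDs, candidate relations over event attributes (such as "hop distance > 2") are tested against anomalous and normal events, and only relations that separate the two are reported. This is a hedged sketch of the idea, not the paper's algorithm; the predicate set, event schema, and exact separation criterion are assumptions.

```python
def extract_symbolic_patterns(bad_events, good_events, relations):
    """Keep candidate symbolic relations that hold for every anomalous
    event but for no normal event, abstracting away concrete node IDs.
    relations: {human-readable name: predicate over an event dict}"""
    patterns = []
    for name, pred in relations.items():
        if all(pred(e) for e in bad_events) and not any(pred(e) for e in good_events):
            patterns.append(name)
    return patterns
```

A relation like `"hops > 2"` would survive even when the corrupted messages involve entirely different sender/receiver pairs, which is exactly what generalizing over node combinations buys.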

    Production of cellulase enzyme by solid state bioconversion on agricultural waste using Trichoderma species

    No full text
    Cellulase production was carried out by solid state bioconversion using rice straw, an agricultural waste, as the substrate for three Trichoderma spp. in lab-scale experiments. The results were compared to select the best fungus among them for the production of cellulase. Trichoderma harzianum was found to be the best of the three species, producing the highest cellulase activity of 0.77 IU/ml of filter paper activity and 1.88 IU/ml of carboxymethyl cellulose activity. The glucosamine and reducing sugar parameters were observed to evaluate growth and substrate utilization in the experiment, and pH was also recorded.

    Pronet: Network trust assessment based on incomplete provenance

    No full text
    Abstract—This paper presents a tool, ProNet, used to assess network trust based on incomplete provenance. We consider a multihop scenario where a set of source nodes observe an event and disseminate their observations as an information item through a multihop path to the command center. Nodes are assumed to embed their provenance details in the information content. Received provenance may be incomplete at the command center due to attackers dropping provenance or its unavailability. We design ProNet as a tool at the command center that acts on the received information item to determine information trust, node-level trust, and sequence-level trust. ProNet consists of three steps. In the first step it reconstructs the complete provenance details of received information from the available provenance. In the second step it employs a data classification scheme to classify the data into good and bad pools. In the third step it employs pattern mining on the reconstructed provenance of the bad data pool to determine the frequently appearing nodes and node sequences. This frequent appearance quantifies the trust level of nodes and node sequences. An information quality/trust level for newly received information can then be determined based on the occurrences of these node/sequence patterns in the provenance data. We provide a detailed analysis of false positives and false negatives.
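The third step above — scoring nodes by how often they appear on the provenance paths of bad data — can be sketched as a simple frequency ratio. This is an illustrative simplification under assumed inputs (lists of provenance paths, each a list of node IDs); ProNet's actual pattern-mining step and trust formula are more involved.

```python
from collections import Counter

def node_trust(bad_provenance, all_provenance):
    """Score each node by how often it appears on provenance paths of
    bad-pool data relative to all data. 1.0 = never on a bad path,
    0.0 = every appearance was on a bad path."""
    bad_counts = Counter(n for path in bad_provenance for n in set(path))
    tot_counts = Counter(n for path in all_provenance for n in set(path))
    return {n: 1.0 - bad_counts.get(n, 0) / tot_counts[n] for n in tot_counts}
```

A node that appears on every bad path but few good ones scores near 0, flagging it (or the sequence it belongs to) for distrust when scoring newly arriving items.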