    A discourse in conflict : resolving the definitional uncertainty of cyber war : a thesis presented in partial fulfilment of the requirements for the degree of Master of Arts in Defence and Security Studies at Massey University, Albany, New Zealand

    Since emerging in academic literature in the 1990s, definitions of 'cyber war' and 'cyber warfare' have been notably inconsistent, and no research has examined these inconsistencies or whether they can be resolved. Using the methodology of discourse analysis, this thesis addresses that research need. Analysis identifies the study of cyber war and cyber warfare as inherently inter-disciplinary; the most prominent academic disciplines contributing definitions are Strategic Studies, Security Studies, Information and Communications Technology, Law, and Military Studies. Despite the apparent definitional uncertainty, most researchers do not offer formal definitions of cyber war or cyber warfare. Moreover, there is little evidentiary basis in the literature for distinguishing between cyber war and cyber warfare. Proximate analysis of definitions of cyber war and cyber warfare suggests a high level of inconsistency across dozens of definitions. However, through deeper analysis of both the relationships between definitions and their underlying structure, this thesis demonstrates that (a) the relationships between definitions can be represented hierarchically, through a discourse hierarchy of definitions; and (b) all definitions share a common underlying structure, accessible through the application of a structural definition model. Crucially, analysis of definitions via these constructs allows a foundational definition of cyber war and cyber warfare to be identified. Concomitantly, use of the model identifies the areas of greatest inter-definitional inconsistency and their implications, and contributes to the construction of a taxonomy of definitions of cyber war and cyber warfare. Considered holistically, these research outputs allow for significant resolution of the inconsistency between definitions. Moreover, they provide a basis for the emergence of dominant functional definitions that may aid in the development of policy, strategy, and doctrine.

    Computational Controversy

    Climate change, vaccination, abortion, Trump: many topics are surrounded by fierce controversies. The nature of such heated debates and their elements have been studied extensively in the social science literature. More recently, various computational approaches to controversy analysis have appeared, using new data sources such as Wikipedia, which now help us better understand these phenomena. However, compared with what the social sciences have discovered about such debates, existing computational approaches mostly focus on just a few of the many important aspects of controversies. To link the two strands, we provide and evaluate a controversy model that is both rooted in the findings of the social science literature and strongly linked to computational methods. We show how this model can lead to computational controversy analytics that cover all the crucial aspects that make up a controversy.
    Comment: In Proceedings of the 9th International Conference on Social Informatics (SocInfo) 201

    Analyzing Controversial Topics within Facebook

    Social media plays a significant role in the dissemination of information. Now more than ever, consumers turn to social media sites (SMS) to catch up on current events and share their perspectives. While this form of communication is enjoyed by the public, it also has drawbacks. Because many perspectives can be captured via SMS, this often leads to public discourse and, in some cases, controversy. Misinformation and disinformation continue to spread throughout the internet, leaving many consumers misinformed. This further inflames such discourse and allows real issues to be forgotten as online debate spirals away from reality and false information gains traction. Given these issues, this paper demonstrates a rudimentary curve-fitting measurement as a proof of concept for capturing controversy on Facebook using the reactions of its user base toward controversial topics.
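    The abstract does not spell out how the curve fitting is used, so the following is only a rough illustration of the general idea, not the paper's method: fit a decay curve to a post's sorted reaction counts and read a flatter curve (reaction mass spread across several reaction types) as a crude controversy signal. The decay model, the scoring formula, and the reaction counts are all assumptions made for illustration.

```python
# Hypothetical sketch: fit an exponential decay to a post's sorted reaction
# counts and use the decay rate as a rough controversy signal. The decay
# model and the example counts are assumptions, not the paper's method.
import numpy as np
from scipy.optimize import curve_fit

def decay(rank, a, b):
    """Exponential decay of reaction share with reaction rank."""
    return a * np.exp(-b * rank)

def controversy_score(reaction_counts):
    """A smaller decay rate b means reactions are spread across types,
    which we treat here (crudely) as a sign of a divided audience."""
    counts = np.sort(np.asarray(reaction_counts, dtype=float))[::-1]
    counts = counts / counts.sum()                 # normalise to proportions
    ranks = np.arange(len(counts))
    (a, b), _ = curve_fit(decay, ranks, counts, p0=(1.0, 1.0), maxfev=10000)
    return 1.0 / (1.0 + b)                         # flat curve -> score near 1

# A one-sided post vs. a post with Like/Love/Angry roughly balanced
print(controversy_score([900, 30, 20, 10, 5, 5]))      # low score
print(controversy_score([400, 350, 100, 80, 40, 30]))  # higher score
```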

    Identifying leading indicators of product recalls from online reviews using positive unlabeled learning and domain adaptation

    Consumer protection agencies are charged with safeguarding the public from hazardous products, but the thousands of products under their jurisdiction make it challenging to identify and respond to consumer complaints quickly. From the consumer's perspective, online reviews can provide evidence of product defects, but manually sifting through hundreds of reviews is not always feasible. In this paper, we propose a system that mines Amazon.com reviews to identify products that may pose safety or health hazards. Since labeled data for this task are scarce, our approach combines positive unlabeled learning with domain adaptation to train a classifier from consumer complaints submitted to the U.S. Consumer Product Safety Commission. On a validation set of manually annotated Amazon product reviews, our approach yields an absolute F1 score improvement of 8% over the best competing baseline. Furthermore, when applied to Amazon reviews of known recalled products, the classifier identifies reviews reporting safety hazards prior to the recall date for 45% of the products. This suggests that the system could serve as an early warning system, alerting consumers to hazardous products before an official recall is announced.
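    As a rough sketch of the positive-unlabeled idea described above (not the authors' implementation), the following applies an Elkan & Noto-style probability adjustment to a TF-IDF logistic-regression classifier. The example texts are invented, and the domain-adaptation step is reduced to simply pooling unlabeled target-domain reviews, which is a strong simplification.

```python
# Minimal PU-learning sketch (not the paper's implementation).
# Positives stand in for CPSC complaints; unlabeled texts stand in for the
# Amazon review corpus. Domain adaptation is reduced to pooling unlabeled
# target-domain text.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

positives = [           # known hazard reports (source domain)
    "the heater caught fire and filled the room with smoke",
    "battery overheated and burned my hand",
]
unlabeled = [           # reviews with unknown labels (target domain)
    "works great, battery lasts all day",
    "sparks came out of the plug the first time i used it",
    "nice colour and fast shipping",
]

vec = TfidfVectorizer()
X = vec.fit_transform(positives + unlabeled)
s = np.array([1] * len(positives) + [0] * len(unlabeled))  # labeled-positive flag

# Step 1: train a "labeled vs. unlabeled" classifier.
clf = LogisticRegression(max_iter=1000).fit(X, s)

# Step 2: estimate c = P(s=1 | y=1) on the labeled positives (Elkan & Noto, 2008)
# and rescale predicted probabilities so they approximate P(y=1 | x).
c = clf.predict_proba(X[:len(positives)])[:, 1].mean()
hazard_prob = clf.predict_proba(vec.transform(
    ["the charger melted and nearly started a fire"]))[:, 1] / c
print(min(float(hazard_prob[0]), 1.0))  # rescaling can exceed 1 on toy data
```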

    HUBFIRE - A multi-class SVM based JPEG steganalysis using HBCL statistics and FR Index

    Blind steganalysis attempts to detect steganographic data without prior knowledge of either the embedding algorithm or the 'cover' image. This paper proposes new features for blind JPEG steganalysis that combine Huffman Bit Code Length (HBCL) statistics with the file-size-to-resolution ratio (FR Index); the proposed Huffman Bit File Index Resolution (HUBFIRE) algorithm uses these features to build a classifier based on a multi-class Support Vector Machine (SVM). JPEG images spanning a wide range of resolutions are used to create a 'stego-image' database employing three embedding schemes: the advanced Least Significant Bit encoding technique, which embeds in the spatial domain; JPEG Hide-and-Seek, a transform-domain embedding scheme; and Model Based Steganography, which employs an adaptive embedding technique. The use of a multi-class SVM over the proposed HUBFIRE features for statistical steganalysis has not previously been explored by steganalysts. Experiments demonstrate the model's accuracy over a wide range of payloads and embedding schemes.
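    A hedged sketch of the feature pipeline follows; none of it is the authors' code. The HBCL statistics are approximated here by a normalised histogram of Huffman code lengths read from the JPEG's DHT segments, the FR Index is computed as file size over pixel count, and a multi-class SVC with assumed hyperparameters separates the cover class from the three embedding schemes.

```python
# Illustrative sketch only: a crude stand-in for the HBCL statistics plus the
# FR Index, fed to a multi-class SVM. Hyperparameters and the exact feature
# definitions are assumptions.
import os
import numpy as np
from PIL import Image
from sklearn.svm import SVC

def hbcl_histogram(path):
    """Counts of Huffman codes per bit length (1..16), summed over all DHT
    tables found before the scan and normalised. A crude HBCL stand-in."""
    counts = np.zeros(16, dtype=float)
    data = open(path, "rb").read()
    i = 2                                            # skip SOI marker (FFD8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        seglen = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xC4:                           # DHT segment
            j = i + 5                                # first table's 16 length bytes
            while j + 16 <= i + 2 + seglen:
                lengths = np.frombuffer(data[j:j + 16], dtype=np.uint8)
                counts += lengths
                j += 17 + int(lengths.sum())         # skip symbols + next table id
        if marker == 0xDA:                           # start of scan: tables done
            break
        i += 2 + seglen
    return counts / max(counts.sum(), 1.0)

def fr_index(path):
    """File-size-to-resolution ratio: stored bytes per pixel."""
    with Image.open(path) as im:
        width, height = im.size
    return os.path.getsize(path) / float(width * height)

def feature_vector(path):
    return np.append(hbcl_histogram(path), fr_index(path))

def train_hubfire_classifier(paths, labels):
    """labels: 0 = cover, 1 = LSB, 2 = JPEG Hide-and-Seek, 3 = Model Based."""
    X = np.vstack([feature_vector(p) for p in paths])
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
```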

    Selection of classification models from a repository of models for a water quality dataset

    This paper proposes a new technique, the Model Selection Technique (MST), for selecting and ranking models from a repository of models by combining three performance measures (Acc, TPR and TNR). The technique assigns a weight to each performance measure to find the most suitable model in the repository. A number of classification models were generated to classify water quality using the most significant features and classifiers such as J48, JRip and BayesNet. To validate the proposed technique, the water quality dataset of the Kinta River was used. The results demonstrate that the Function classifier is the optimal model, with an accuracy of 97.02%, TPR = 0.96 and TNR = 0.98. In conclusion, MST is able to find the most relevant model from the repository of models by using weights in classifying the water quality dataset.
    Keywords: selection of models; water quality; classification model; models repository
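    The abstract does not give the MST weights, so the sketch below only illustrates the ranking idea: each candidate model receives a weighted combination of accuracy, TPR and TNR, and the repository is sorted by that score. The weights, and all figures except those reported for the Function classifier, are placeholders.

```python
# Minimal sketch of the MST ranking idea. The weights and all scores except
# the Function classifier's reported figures are illustrative placeholders.
def mst_score(acc, tpr, tnr, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of the three performance measures."""
    w_acc, w_tpr, w_tnr = weights
    return w_acc * acc + w_tpr * tpr + w_tnr * tnr

repository = {                      # (accuracy, TPR, TNR) per candidate model
    "J48":      (0.94, 0.93, 0.95),     # placeholder figures
    "JRip":     (0.92, 0.90, 0.94),     # placeholder figures
    "BayesNet": (0.91, 0.92, 0.90),     # placeholder figures
    "Function": (0.9702, 0.96, 0.98),   # figures reported in the abstract
}

ranked = sorted(repository.items(),
                key=lambda kv: mst_score(*kv[1]), reverse=True)
for name, (acc, tpr, tnr) in ranked:
    print(f"{name}: score = {mst_score(acc, tpr, tnr):.4f}")
```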