4,493 research outputs found

    Command & Control: Understanding, Denying and Detecting - A review of malware C2 techniques, detection and defences

    In this survey, we first briefly review the current state of cyber attacks, highlighting significant recent changes in how and why such attacks are performed. We then investigate the mechanics of malware command and control (C2) establishment: we provide a comprehensive review of the techniques used by attackers to set up such a channel and to hide its presence from the attacked parties and the security tools they use. We then switch to the defensive side of the problem, and review approaches that have been proposed for the detection and disruption of C2 channels. We also map such techniques to widely-adopted security controls, emphasizing gaps or limitations (and success stories) in current best practices.
    Comment: Work commissioned by CPNI, available at c2report.org. 38 pages. Listing abstract compressed from version appearing in report.

    SciTech News, 69(2), 2015

    Columns and Reports
        From the Editor .......... 5
        SciTech News Call for Articles .......... 5
    Division News
        Science-Technology Division .......... 6
        Chemistry Division .......... 15
        Engineering Division .......... 21
        Aerospace Section of the Engineering Division .......... 25
        Architecture, Building Engineering, Construction and Design Section of the Engineering Division .......... 26
    Award & Other Announcements
        Stacey Mantooth Receives 2015 Marion E. Sparks Award for Professional Development .......... 17
        Engineering Division Awards Recipients .......... 24
        Engineering Division Mentoring Program .......... 26
    Conference Reports
        Post International Chemical Congress Report Held in Malaysia and Vietnam 2014, by Malarvili Ramalingam, PhD .......... 18
    Reviews
        Sci-Tech Book News Reviews .......... 2

    Learning archetypes as tools of Cybergogy for a 3D educational landscape: a structure for eTeaching in Second Life

    This paper considers issues of validity and credibility in eTeaching that uses a 3D virtual world as a delivery medium for eLearning aimed at transferring authentic real-life skills. It identifies the game-like qualities perceived in such worlds, recognising that these very attributes, when experienced superficially, may contribute to the potential educational demise of the platform. It then examines traditional educational theories in the light of the affordances of a virtual world, seeking to adapt and apply them to the construction of a new conceptual framework: a pedagogy that reflects an understanding of online learning and incorporates Learning Archetypes (models of activities) to make the most of what a virtual world offers, namely the ability to facilitate learning experiences free from physical-world constraints. It builds on these ideas to develop a working model of Cybergogy and Learning Archetypes in 3D, with a view to making it available to those who wish to demonstrate theoretically robust lesson and course planning. The model is then applied to three examples of eTeaching, developed as case studies for the purpose of critically evaluating it, and is found to be operationally effective, accurate and flexible. Conclusions are drawn that identify the merits and challenges of implementing such a model of Cybergogy in eTeaching and eLearning conducted in Second Life®.

    Text mining for social sciences: new approaches

    The rise of the Internet has brought an important change in the way we look at the world, and consequently in the way we measure it. As of June 2018, more than 55% of the world's population had Internet access. It follows that, every day, we can quantify what more than four billion people do, and how and when they do it. This means data. The availability of all these data raises more than one question: How do we manage them? How do we process them? How do we extract information from them? Now, more than ever, we need new rules, new methods and new procedures for handling this huge amount of data, which is characteristically unstructured, raw and messy.

    One of the most interesting challenges in this field concerns deriving information from textual sources, a process known as Text Mining. Born in the mid-90s, Text Mining is a prolific field that has evolved, thanks to advances in technology, from Automatic Text Analysis, a set of methods for the description and analysis of documents. Textual data, even when transformed into a structured format, present several critical issues: they are high-dimensional and noisy. Moreover, online texts, such as social media posts or blog comments, are usually very short, which makes the encoded matrices even sparser. All these findings pose the problem of seeking new, advanced solutions for treating Web data, solutions able to overcome these issues while still returning the information contained in the texts.

    The objective is to propose a fast and scalable method able to cope with the characteristics of online texts, and hence with large, sparse matrices. To that end, we propose a procedure that runs from the collection of the texts to the interpretation of the results. Its innovative parts are the choice of the weighting scheme for the term-document matrix and the co-clustering approach used for classification. To verify the validity of the procedure, we test it in two real applications: one concerning safety and health at work, and another concerning the Brexit vote. We show how the technique works on different types of texts, allowing us to obtain meaningful results.

    For the reasons described above, in this research work we implement and test, on real datasets, a new procedure for content analysis of textual data, using a two-way approach in the Text Clustering field. Text Clustering is an unsupervised classification process that reproduces the internal structure of the data by dividing the texts into groups on the basis of lexical similarities. It is mostly used for content analysis, and it can be applied to the classification of words, of documents, or of both; in the latter case we speak of two-way clustering, which is the specific approach implemented in this research work for the treatment of the texts.

    The research work is organised in two parts: a first part on theory and a second on applications. The first part contains a preliminary chapter reviewing the literature on Automatic Text Analysis in the context of the data revolution, and a second chapter where the new procedure for text co-clustering is proposed. The second part covers the application of the proposed technique to two different sets of texts, one composed of news articles and the other of tweets. The idea is to test the same procedure on different types of texts, in order to verify the validity and the robustness of the method.
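
    To illustrate the two-way step the abstract describes, here is a minimal sketch in Python, assuming a TF-IDF weighting and scikit-learn's SpectralCoclustering as stand-ins for the thesis's own weighting scheme and co-clustering procedure; the toy corpus below is hypothetical and merely echoes the two application topics, not the actual news or tweet datasets.

    # A minimal sketch of two-way text clustering (co-clustering).
    # Assumptions: TF-IDF weighting and SpectralCoclustering are illustrative
    # choices; the thesis proposes its own weighting scheme and algorithm.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import SpectralCoclustering

    docs = [  # hypothetical toy corpus
        "workplace safety rules protect workers",
        "health and safety training at work",
        "brexit vote divides the parliament",
        "parliament debates the brexit deal",
    ]

    # Encode the corpus as a sparse term-document matrix with TF-IDF weights.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)  # shape: (n_docs, n_terms)

    # Partition documents (rows) and terms (columns) simultaneously.
    model = SpectralCoclustering(n_clusters=2, random_state=0)
    model.fit(X)

    terms = vectorizer.get_feature_names_out()
    for k in range(2):
        doc_idx, term_idx = model.get_indices(k)
        print(f"co-cluster {k}: docs {list(doc_idx)}, "
              f"terms {[terms[i] for i in term_idx]}")

    Each co-cluster couples a group of documents with the group of terms that characterises it, which is what makes the two-way approach attractive for content analysis of short, sparse online texts.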