20 research outputs found

    The Proceedings of 15th Australian Information Security Management Conference, 5-6 December, 2017, Edith Cowan University, Perth, Australia

    Conference Foreword
    The annual Security Congress, run by the Security Research Institute at Edith Cowan University, includes the Australian Information Security and Management Conference. Now in its fifteenth year, the conference remains popular for its diverse content and its mixture of technical research and discussion papers. The field of information security and management continues to be varied, as reflected by the wide range of subject matter covered by this year's papers, from vulnerabilities in “Internet of Things” protocols to improvements in biometric identification algorithms and surveillance camera weaknesses. The conference has drawn interest and papers from within Australia and internationally. All submitted papers underwent a double-blind peer review process: twenty-two papers were submitted from Australia and overseas, of which eighteen were accepted for final presentation and publication. We wish to thank the reviewers for kindly volunteering their time and expertise in support of this event. We would also like to thank the conference committee, who have organised yet another successful congress. Events such as this are impossible without the tireless efforts of such people in reviewing and editing the conference papers and assisting with the planning, organisation and execution of the conference. To our sponsors, also a vote of thanks for both the financial and moral support provided to the conference. Finally, thank you to the administrative and technical staff and students of the ECU Security Research Institute for their contributions to the running of the conference.

    China & technical global internet governance: from norm-taker to norm-maker?

    This dissertation examines Chinese engagement with core norms structuring technical global internet governance. It finds that China has been a norm-taker globally but more of a norm-maker both regionally and domestically. Beijing seeks to restrict US government and non-state-actor authority, but cannot do so owing to limited support and power constraints.

    An Economic Analysis of Domain Name Policy

    One of the most important features of the architecture of the Internet is the Domain Name System (DNS), which is administered by the Internet Corporation for Assigned Names and Numbers (ICANN). Logically, the DNS is organized into Top Level Domains (such as .com), Second Level Domains (such as amazon.com), and third-, fourth-, and higher-level domains (such as www.amazon.com). The physical infrastructure of the DNS consists of name servers, including the Root Server System, which provides the information that directs name queries for each Top Level Domain to the appropriate server. ICANN is responsible for the allocation of the root and the creation or reallocation of Top Level Domains. The Root Server System and associated name space are scarce resources in the economic sense: the root servers have a finite capacity, and expansion of the system is costly. The name space is scarce because each string (or set of characters) can be allocated to only one Registry (or operator of a Top Level Domain). In addition, name service is not a public good in the economic sense, because it is possible to exclude strings from the DNS and because the allocation of a string to one firm results in the inability of other firms to use that string. From the economic perspective, therefore, the question arises: what is the most efficient method for allocating the root resource? There are only five basic options: (1) a static root, equivalent to a decision to waste the currently unallocated capacity; (2) public interest hearings (or “beauty contests”); (3) lotteries; (4) a queuing mechanism; or (5) an auction. The fundamental economic question about the Domain Name System is which of these provides the most efficient mechanism for allocating the root resource.
This resource allocation problem is analogous to problems in the telecommunications sector, where the Federal Communications Commission has a long history of attempting to allocate broadcast spectrum and the telephone number space. That experience shows that case-by-case allocation on the basis of ad hoc judgments about the public interest is doomed to failure, and that auctions (as opposed to lotteries or queues) provide the best mechanism for ensuring that such public-trust resources find their highest and best use. Based on the telecommunications experience, the best method for ICANN to allocate new Top Level Domains would be to conduct an auction. Many auction designs are possible. One proposal is to auction a fixed number of new Top Level Domain slots each year. This would both expand the root resource at a reasonable pace and ensure that the slots go to their highest and best use. Public-interest Top Level Domains could be allocated by another mechanism, such as a lottery, and their costs to ICANN could be subsidized by the proceeds of the auction.
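The fixed-slots-per-year auction proposal can be sketched as a simple highest-bid allocation. This is an illustrative toy under stated assumptions: the function name, the bid figures, and the single-round sealed-bid design are mine, not a concrete mechanism from the paper, which leaves the auction design open.

```python
def allocate_tld_slots(bids, slots_per_year):
    """Allocate a fixed number of new TLD slots to the highest bidders.

    bids: dict mapping an applied-for TLD string to its bid amount.
    Returns (winners, proceeds); the proceeds could subsidize
    public-interest TLDs allocated by another mechanism, as proposed.
    """
    # Rank applicants by bid, highest first, and take the top slots.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winning = ranked[:slots_per_year]
    winners = [name for name, _ in winning]
    proceeds = sum(amount for _, amount in winning)
    return winners, proceeds

# Hypothetical applicants for a year with two available slots:
winners, proceeds = allocate_tld_slots(
    {".shop": 5_000_000, ".blog": 3_000_000, ".city": 1_000_000},
    slots_per_year=2,
)
```

Because only a fixed number of slots clear each year, the root expands at a controlled pace while bids reveal which strings applicants value most.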

    Beyond NETmundial: The Roadmap for Institutional Improvements to the Global Internet Governance Ecosystem

    Beyond NETmundial: The Roadmap for Institutional Improvements to the Global Internet Governance Ecosystem explores options for the implementation of a key section of the “NETmundial Multistakeholder Statement” adopted at the Global Meeting on the Future of Internet Governance (NETmundial), held on April 23rd and 24th, 2014, in São Paulo, Brazil. The Roadmap section of the statement concisely sets out a series of proposed enhancements to existing mechanisms for global internet governance, as well as suggestions of possible new initiatives that the global community may wish to consider. The sixteen chapters by leading practitioners and scholars are grouped into six sections: The NETmundial Meeting; Strengthening the Internet Governance Forum; Filling the Gaps; Improving ICANN; Broader Analytical Perspectives; and Moving Forward.

    The Normative Order of the Internet: A Theory of Rule and Regulation Online

    There is order on the internet, but how has this order emerged and what challenges will threaten and shape its future? This study shows how a legitimate order of norms has emerged online, through both national and international legal systems. It establishes the emergence of a normative order of the internet, an order which explains and justifies processes of online rule and regulation. This order integrates norms at three different levels (regional, national, international), of two types (privately and publicly authored), and of different character (from ius cogens to technical standards). The author assesses their internal coherence, their consonance with other order norms and their consistency with the order's finality. The normative order of the internet is based on and produces a liquefied system characterized by self-learning normativity. In light of the importance of the socio-communicative online space, this is a book for anyone interested in understanding the contemporary development of the internet.

    Towards A knowledge-Based Economy - Europe and Central Asia - Internet Development and Governance

    The diversity and socio-economic differentiation of the real world prevents the full-scale cultivation of Information and Communication Technologies (ICT) to the benefit of all. Furthermore, the lack of determination and political will in some countries, and the slowness of responses to new technological opportunities in others, are responsible for the creation of another social divide – a digital one. These problems were fully acknowledged by the World Summit on the Information Society (WSIS). The Summit called for a joint international effort to overcome the digital divide between and within the United Nations Member States under the Digital Solidarity umbrella. This report was prepared as a follow-up to the Summit; it briefly reviews the status and trends of ICT and Internet development in the UNECE region and provides background information on the state of the art in some relevant ICT subsectors in the Member States. The report focuses on the state of critical Internet resources and, consequently, on ICT and Internet penetration across countries and social groups. It also examines existing Internet governance arrangements and makes some recommendations. The report contains three parts and conclusions. The first part, “Towards a Knowledge-based Economy: Progress Assessment”, highlights the situation in the region with regard to the digital divide, both between and within countries, and national strategies and actions aimed at overcoming barriers to accessing the Internet. The second part, “Internet Development: Current State of Critical Internet Resources in the UNECE Region”, concentrates on reviewing the physical Internet backbone, interconnection and connectivity within the Internet in the UNECE Member States.
The third part, “Governing the Evolving Internet in the UNECE Region”, focuses on the issues of Internet governance in the countries of the region, the challenges they face, and the participation of key stakeholders in ICT and Internet policy formulation and implementation. The final part contains conclusions and recommendations.
Keywords: Internet, governance, knowledge-based economy, Europe, Central Asia, transition economies

    Counteracting phishing through HCI

    Computer security is a highly technical topic that is in many cases hard to grasp for the average user. Security and safety are especially important when using the Internet, the biggest network connecting computers globally. In many cases they can be achieved without the user's active participation: securely storing user and customer data on Internet servers is the task of the respective company or service provider. There are, however, also many cases where the user is involved in the security process, especially when he or she is intentionally attacked. Socially engineered phishing attacks are one such security issue, in which users are directly targeted to reveal private data and credentials to an unauthorized attacker. These attacks are the main focus of the research presented in this thesis. I examine how such attacks can be counteracted, both by detecting them in the first place and by conveying the detection results to the user. In prior research and development these two areas have most often been treated separately, and new security measures were developed without taking the final step of interacting with the user into account; this interaction mainly means presenting the detection results and receiving final decisions from the user. As an overarching goal within this thesis I consider these two aspects together, framing overall protection as the sum of detection and "user intervention". Across nine research projects on phishing protection, this thesis answers ten research questions in the areas of creating new phishing detectors (phishing detection) and providing usable user feedback for such systems (user intervention). The ten research questions cover five topics in each area, ranging from the definition of the respective topic, through ways to measure and enhance each area, to reasoning about what makes sense.
The research questions were chosen to cover the range of both areas and the interplay between them. They are mostly answered by developing and evaluating prototypes built within the projects, which cover a range of human-centered detection properties and evaluate how well these are suited for phishing detection. I also examine different possibilities for user intervention (e.g. what should a warning look like? Should it be blocking, non-blocking, or perhaps something else?). As a major contribution I present a model that combines phishing detection and user intervention, and I propose development and evaluation recommendations for similar systems. The research results show that a security detector whose results are relevant to end users can only be successful if the final user feedback is taken into account during the detector's development process.
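The detection-plus-intervention pairing the thesis describes can be illustrated with a minimal sketch: a heuristic URL scorer feeding a non-silent intervention step that leaves the final decision to the user. The feature set, thresholds, and function names here are my assumptions for illustration, not the thesis's actual detectors.

```python
import re
from urllib.parse import urlparse

def phishing_score(url):
    """Toy heuristic scorer: higher score = more suspicious.
    Uses a few well-known URL cues (illustrative, not exhaustive)."""
    host = urlparse(url).hostname or ""
    score = 0
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 2  # raw IP address instead of a domain name
    if host.count(".") > 3:
        score += 1  # deeply nested subdomains can hide the real domain
    if "@" in url:
        score += 2  # userinfo trick: "https://bank.com@evil.net" goes to evil.net
    if len(url) > 75:
        score += 1  # very long URLs tend to obscure the true host
    return score

def intervene(url, threshold=2):
    """User-intervention step: return a verdict to present to the user
    rather than silently blocking, so the user makes the final decision."""
    return "warn" if phishing_score(url) >= threshold else "allow"
```

The point of the split is the model's thesis: the detector's output format is designed around what the intervention step will show, instead of bolting a warning onto a finished detector.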

    Untangling the Web: A Guide To Internet Research

    [Excerpt] Untangling the Web for 2007 is the twelfth edition of a book that started as a small handout. After more than a decade of researching, reading about, using, and trying to understand the Internet, I have come to accept that it is indeed a Sisyphean task. Sometimes I feel that all I can do is push the rock up to the top of that virtual hill, then stand back and watch as it rolls down again. The Internet, in all its glory of information and misinformation, is for all practical purposes limitless, which of course means we can never know it all, see it all, understand it all, or even imagine all it is and will be. The more we know about the Internet, the more acute is our awareness of what we do not know. The Internet emphasizes the depth of our ignorance because our knowledge can only be finite, while our ignorance must necessarily be infinite. My hope is that Untangling the Web will add to our knowledge of the Internet and the world while recognizing that the rock will always roll back down the hill at the end of the day.