
    MACE: Deliverable D7.6 - Report on user interface design and community experiments

    This deliverable presents the progress of the user interface design and community building experiments within the MACE project. In Chapter 2 we present the interface of the MACE portal, a platform to discover and enrich architectural resources and, at the same time, to support the community formed around architectural topics. Besides the advanced search, the portal provides various visual tools for metadata-based search and browsing, tailored to architectural needs (see Chapter 3). Different metadata widgets are used to visualize and access multiple dimensions of each resource, as presented in Chapter 4. These widgets not only establish meaningful cross-connections between resources, but also invite users to add and edit metadata effortlessly. In order to generate a critical mass of metadata and ensure the sustainability of the project's outcomes, supporting the community and fostering end-user contributions are critical. In Chapter 5, we present the components deployed in this direction as well as an analytical framework for incentive mechanisms. Within the dissemination strategy, the MACE project had a unique opportunity to raise public awareness at La Biennale of Architecture in Venice in 2008. In this context we designed an interactive installation, demonstrating, in an exhibition setting, the benefits of resource interconnection via metadata (see Chapter 6). Chapter 7 presents our preliminary conclusions and an overview of planned future activities.

    Data Mining and Decision Support: An Integrative Approach


    Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet

    For years, legal wiretapping was straightforward: the officer doing the intercept connected a tape recorder or the like to a single pair of wires. By the 1990s, however, the changing structure of telecommunications—there was no longer just “Ma Bell” to talk to—and new technologies such as ISDN and cellular telephony made executing a wiretap more complicated for law enforcement. Simple technologies would no longer suffice. In response, Congress passed the Communications Assistance for Law Enforcement Act (CALEA), which mandated a standardized lawful intercept interface on all local phone switches. Since its passage, technology has continued to progress, and in the face of new forms of communication—Skype, voice chat during multiplayer online games, instant messaging, etc.—law enforcement is again experiencing problems. The FBI has called this “Going Dark”: their loss of access to suspects’ communication. According to news reports, law enforcement wants changes to the wiretap laws to require a CALEA-like interface in Internet software. CALEA, though, has its own issues: it is complex software specifically intended to create a security hole—eavesdropping capability—in the already-complex environment of a phone switch. It has unfortunately made wiretapping easier for everyone, not just law enforcement. Congress failed to heed experts’ warnings of the danger posed by this mandated vulnerability, and time has proven the experts right. The so-called “Athens Affair,” where someone used the built-in lawful intercept mechanism to listen to the cell phone calls of high Greek officials, including the Prime Minister, is but one example. In an earlier work, we showed why extending CALEA to the Internet would create very serious problems, including the security problems it has visited on the phone system. In this paper, we explore the viability and implications of an alternative method for addressing law enforcement’s need to access communications: legalized hacking of target devices through existing vulnerabilities in end-user software and platforms. The FBI already uses this approach on a small scale; we expect that its use will increase, especially as centralized wiretapping capabilities become less viable. Relying on vulnerabilities and hacking poses a large set of legal and policy questions, some practical and some normative. Among these are: (1) Will it create disincentives to patching? (2) Will there be a negative effect on innovation? (Lessons from the so-called “Crypto Wars” of the 1990s, and in particular the debate over export controls on cryptography, are instructive here.) (3) Will law enforcement’s participation in vulnerability purchasing skew the market? (4) Do local and even state law enforcement agencies have the technical sophistication to develop and use exploits? If not, how should this be handled? A larger FBI role? (5) Should law enforcement even be participating in a market where many of the sellers and other buyers are themselves criminals? (6) What happens if these tools are captured and repurposed by miscreants? (7) Should we sanction otherwise illegal network activity to aid law enforcement? (8) Is the probability of success from such an approach too low for it to be useful? As we will show, these issues are indeed challenging. We regard the issues raised by using vulnerabilities as, on balance, preferable to adding more complexity and insecurity to online systems.

    Writing: Monologue, Dialogue, and Ecological Narrative

    Writing is mostly a solitary activity. Right now, I sit in front of a computer screen. On my desk are piles of paper; notes for what I want to say; unfinished projects waiting to be attended to; books on shelves nearby to be consulted. I need to be alone when I write. Whether writing on a computer, on a typewriter, or by hand, most writers I know prefer a secluded place without distractions from telephones and other people who demand attention. Writing requires a concentration that is absent in ordinary conversations. This affects how we communicate in writing. Writing flows differently from the way a conversation flows. It flows from word to word and from comments to comments on comments. Writing has a beginning, an end, and headlines that introduce the whole and its parts. These are units of thought, not of interaction. Writing is composed of small building blocks. Composition is deliberate, more or less controlled, moving through a topic without losing track of the overall purpose, the whole. We may go back to ensure that our composition is coherent, eliminating redundancy, improving the wording, and inserting thoughts that showed up later. When we have to write under non-solitary conditions, for example, in a newsroom with pressing deadlines, interrupting telephone calls, other reporters conversing in the neighboring cubicle, and an editor calling us with new assignments, the product is different. It reads more like an assembly of observational reports, a collage of images, lacking an overall structure. Textbooks on writing rarely speak of the conditions of writing but celebrate the qualities of its preferred product: style, grace, grammar, logical structure, and coherence.

    Mining complex trees for hidden fruit : a graph–based computational solution to detect latent criminal networks : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Technology at Massey University, Albany, New Zealand.

    The detection of crime is a complex and difficult endeavour. Public and private organisations – focusing on law enforcement, intelligence, and compliance – commonly apply the rational isolated actor approach premised on observability and materiality. This is manifested largely as conducting entity-level risk management, sourcing ‘leads’ from reactive covert human intelligence sources and/or proactive sources by applying simple rules-based models. Focusing on discrete observable and material actors simply ignores that criminal activity exists within a complex system deriving its fundamental structural fabric from the complex interactions between actors, with those most unobservable likely to be both criminally proficient and influential. The graph-based computational solution developed to detect latent criminal networks is a response to the inadequacy of the rational isolated actor approach, which ignores the connectedness and complexity of criminality. The core computational solution, written in the R language, consists of novel entity resolution, link discovery, and knowledge discovery technology. Entity resolution enables the fusion of multiple datasets with high accuracy (mean F-measure of 0.986 versus competitors’ 0.872), generating a graph-based expressive view of the problem. Link discovery comprises link prediction and link inference, enabling the high-performance detection (accuracy of ~0.8 versus ~0.45 for relevant published models) of unobserved relationships such as identity fraud. Knowledge discovery takes the fused graph and applies the “GraphExtract” algorithm to create a set of subgraphs representing latent functional criminal groups, and a mesoscopic graph representing how this set of criminal groups is interconnected. Latent knowledge is generated from a range of metrics, including the “Super-broker” metric and attitude prediction. The computational solution has been evaluated on a range of datasets that mimic an applied setting, demonstrating a scalable (tested on ~18 million node graphs) and performant (~33 hours runtime on a non-distributed platform) solution that successfully detects relevant latent functional criminal groups in around 90% of cases sampled and enables contextual understanding of the broader criminal system through the mesoscopic graph and associated metadata. The augmented data assets generated provide a multi-perspective systems view of criminal activity that enables advanced, informed decision making across the microscopic, mesoscopic, and macroscopic spectrum.
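    The “GraphExtract” algorithm and “Super-broker” metric are specific to the thesis and are not reproduced here. As a minimal illustrative sketch of the general pipeline the abstract describes (fuse resolved entities into a graph, extract candidate latent groups as cohesive subgraphs, then collapse them into a mesoscopic graph of inter-group ties), the following Python code uses off-the-shelf networkx community detection as a stand-in; the function names and toy data are hypothetical.

        # Illustrative pipeline sketch only, NOT the thesis's GraphExtract algorithm.
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        def build_fused_graph(edges):
            """edges: iterable of (entity_a, entity_b) pairs after entity resolution."""
            g = nx.Graph()
            g.add_edges_from(edges)
            return g

        def extract_groups(g, min_size=3):
            """Candidate latent groups = communities above a size threshold."""
            return [set(c) for c in greedy_modularity_communities(g) if len(c) >= min_size]

        def mesoscopic_graph(g, groups):
            """Collapse each group to one node; edge weight = cross-group tie count."""
            member_of = {n: i for i, grp in enumerate(groups) for n in grp}
            meso = nx.Graph()
            meso.add_nodes_from(range(len(groups)))
            for u, v in g.edges():
                gu, gv = member_of.get(u), member_of.get(v)
                if gu is not None and gv is not None and gu != gv:
                    w = meso.get_edge_data(gu, gv, {"weight": 0})["weight"]
                    meso.add_edge(gu, gv, weight=w + 1)
            return meso

        # Toy example: two tight triangles joined by a single bridging edge.
        g = build_fused_graph([("a", "b"), ("b", "c"), ("a", "c"),
                               ("d", "e"), ("e", "f"), ("d", "f"), ("c", "d")])
        groups = extract_groups(g)
        meso = mesoscopic_graph(g, groups)

    On this toy graph the two triangles emerge as candidate groups, connected by one weighted edge in the mesoscopic graph; the thesis's actual solution adds link prediction, link inference, and metrics such as “Super-broker” on top of this kind of fused-graph view.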

    Topic driven testing

    Modern interactive applications offer so many interaction opportunities that automated exploration and testing becomes practically impossible without some domain-specific guidance towards relevant functionality. In this dissertation, we present a novel fundamental graphical user interface testing method called topic-driven testing. We mine the semantic meaning of interactive elements, guide testing, and identify core functionality of applications. The semantic interpretation is close to human understanding and allows us to learn specifications and transfer knowledge across multiple applications independent of the underlying device, platform, programming language, or technology stack, which is, to the best of our knowledge, a unique feature of our technique. Our tool ATTABOY is able to take an existing Web application test suite, say from Amazon, execute it on ebay, and thus guide testing to relevant core functionality. Tested on different application domains such as eCommerce, news pages, and mail clients, it can transfer on average sixty percent of the tested application behavior to new apps, without any human intervention. On top of that, topic-driven testing can work with even vaguer instructions such as how-to descriptions or use-case descriptions. Given an instruction, say “add item to shopping cart”, it tests the specified behavior in an application, both in a browser and in mobile apps. It thus improves state-of-the-art UI testing frameworks, creates change-resilient UI tests, and lays the foundation for learning, transferring, and enforcing common application behavior. The prototype is up to five times faster than existing random testing frameworks and tests functions that are hard to cover by non-trained approaches.
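    The abstract does not describe ATTABOY at code level. As a hedged sketch of the core idea (scoring interactive UI elements against the semantic topic of an instruction such as “add item to shopping cart”), the following Python snippet uses simple token overlap where the real system mines richer semantic meaning; all names and the toy UI data are hypothetical.

        # Conceptual sketch of topic-driven element matching, not ATTABOY itself.
        # Ranks interactive elements against an instruction by token overlap; a
        # real system would use semantic similarity (e.g., embeddings) instead.
        def tokens(text):
            return {t for t in text.lower().split() if len(t) > 2}

        def rank_elements(instruction, elements):
            """elements: list of (element_id, visible_label) pairs mined from the UI."""
            want = tokens(instruction)
            scored = []
            for element_id, label in elements:
                overlap = len(want & tokens(label))
                if overlap:
                    scored.append((overlap, element_id, label))
            return sorted(scored, reverse=True)

        # Toy example: the top-ranked element is the one a tester would act on next.
        ui = [("btn1", "Add to Cart"), ("btn2", "Checkout now"), ("lnk1", "View cart details")]
        print(rank_elements("add item to shopping cart", ui))

    Replacing the overlap score with a genuine semantic interpretation is what lets this style of guidance transfer across applications, platforms, and languages, as the dissertation claims.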

    SecureQEMU: Emulation-based Software Protection Providing Encrypted Code Execution and Page Granularity Code Signing

    This research presents an original emulation-based software protection scheme providing protection from reverse code engineering (RCE) and software exploitation using encrypted code execution and page-granularity code signing, respectively. Protection mechanisms execute in trusted emulators while remaining out-of-band of the untrusted systems being emulated. This protection scheme is called SecureQEMU and is based on a modified version of Quick Emulator (QEMU) [5]. RCE is a process that uncovers the internal workings of a program; it is used during vulnerability and intellectual property (IP) discovery. To protect against RCE, program code may incorporate anti-disassembly, anti-debugging, and obfuscation techniques. These techniques slow the process of RCE; however, once they are defeated, the protected code is still comprehensible. Encryption provides static code protection, but encrypted code must normally be decrypted before execution. SecureQEMU's scheme overcomes this limitation by keeping code encrypted during execution. Software exploitation is a process that leverages design and implementation errors to cause unintended behavior, which may result in security policy violations. Traditional exploitation protection mechanisms take a blacklist approach to software protection, and specially crafted exploit payloads bypass them. SecureQEMU takes a whitelist approach by executing signed code exclusively. Unsigned malicious code (exploits, backdoors, rootkits, etc.) remains unexecuted, thereby protecting the system. SecureQEMU's cache mechanisms increase performance by 0.9% to 1.8% relative to QEMU. Emulation overhead for SecureQEMU varies from 1400% to 2100% with respect to native performance, so SecureQEMU's caching gain is negligible relative to the emulation overhead. Depending on the risk management strategy, SecureQEMU's protection benefits may outweigh the emulation overhead.
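    SecureQEMU's actual mechanism lives inside a modified QEMU and is not shown in the abstract; the following Python sketch only illustrates the whitelist idea behind page-granularity code signing (a trusted signer tags each code page, and execution is refused for any page whose tag is not on the whitelist). The key, page size, and function names are illustrative assumptions, not SecureQEMU's implementation.

        # Conceptual illustration of whitelist (page-granularity) code signing,
        # not SecureQEMU's in-emulator implementation.
        import hmac, hashlib

        PAGE_SIZE = 4096  # illustrative page granularity

        def sign_pages(code: bytes, key: bytes) -> set:
            """Trusted side: produce the whitelist of per-page HMAC tags."""
            return {
                hmac.new(key, code[off:off + PAGE_SIZE], hashlib.sha256).digest()
                for off in range(0, len(code), PAGE_SIZE)
            }

        def may_execute(page: bytes, key: bytes, whitelist: set) -> bool:
            """Executor-side check: only pages with a whitelisted tag may run."""
            tag = hmac.new(key, page, hashlib.sha256).digest()
            return tag in whitelist

        key = b"demo-key-not-for-real-use"
        program = b"\x90" * 8192            # stand-in for two pages of code
        whitelist = sign_pages(program, key)
        assert may_execute(program[:PAGE_SIZE], key, whitelist)      # signed page runs
        assert not may_execute(b"\xcc" * PAGE_SIZE, key, whitelist)  # injected code blocked

    In SecureQEMU the analogous check happens inside the trusted emulator, out-of-band of the untrusted guest, which is what keeps the protection mechanism out of reach of unsigned code such as exploits, backdoors, and rootkits.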