
    Improving the robustness and privacy of HTTP cookie-based tracking systems within an affiliate marketing context : a thesis presented in fulfilment of the requirements for the degree of Doctor of Philosophy at Massey University, Albany, New Zealand

    E-commerce activities provide a global reach for enterprises large and small. Third parties generate visitor traffic for a fee through affiliate marketing, search engine marketing, keyword bidding and organic search, amongst others. Improving the robustness of the underlying tracking and state management techniques is therefore a vital requirement for the growth and stability of e-commerce. In an inherently stateless ecosystem such as the Internet, HTTP cookies have been the de facto tracking vector for decades. In a previous study, the thesis author exposed circumstances under which cookie-based tracking systems can fail, some due to technical glitches, others due to manipulations made for monetary gain by fraudulent actors. Following a design science research paradigm, this research explores alternative tracking vectors discussed in previous research studies within a cross-domain tracking environment. It evaluates their efficacy within the current context and demonstrates how to use them to improve the robustness of existing tracking techniques. Research outputs include methods, instantiations and a privacy model artefact based on the information-seeking behaviour of different categories of tracking software and their resulting privacy intrusion levels. This privacy model provides clarity and is useful for practitioners and regulators in creating regulatory frameworks that do not hinder technological advancement but rather curtail privacy-intrusive tracking practices on the Internet. The method artefacts are instantiated as functional prototypes, publicly available on the Internet, to demonstrate the efficacy and utility of the methods through live tests. The research contributes to the theoretical knowledge base through generalisation of empirical findings, and to industry through problem-solving design artefacts.
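    A minimal sketch of the click-recording step such cookie-based systems rely on, assuming a hypothetical aff_id query parameter and cookie name; the hardening attributes (Secure, HttpOnly, SameSite) address some of the failure modes the thesis examines, but this is an illustration, not the thesis's prototype:

```python
# Minimal sketch (illustrative, not the thesis's artefact): record an
# affiliate click in a first-party HTTP cookie so that a later conversion
# can be attributed to the referring third party.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ClickTracker(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        affiliate_id = query.get("aff_id", ["unknown"])[0]  # hypothetical parameter
        self.send_response(200)
        # Persist the attribution; Secure/HttpOnly/SameSite harden the cookie
        # against some technical and fraudulent failure modes.
        self.send_header(
            "Set-Cookie",
            f"aff_id={affiliate_id}; Max-Age=2592000; Path=/; Secure; HttpOnly; SameSite=Lax",
        )
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"click recorded")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ClickTracker).serve_forever()
```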

    Building and exploiting context on the web

    [no abstract]

    Securing the Next Generation Web

    With the ever-increasing digitalization of society, the need for secure systems is growing. While some security features, like HTTPS, are popular, securing web applications, and the clients we use to interact with them, remains difficult. To secure web applications we focus on both the client side and the server side. For the client side, mainly web browsers, we analyze how new security features might solve one problem but introduce new ones. We show this by performing a systematic analysis of the new Content Security Policy (CSP) directive navigate-to. In our research, we find that it does introduce new vulnerabilities, for which we recommend countermeasures. We also create AutoNav, a tool capable of automatically suggesting navigation policies for this directive. Finding server-side vulnerabilities in a black-box setting, where there is no access to the source code, is challenging. To improve this, we develop novel black-box methods for automatically finding vulnerabilities. We accomplish this by identifying key challenges in web scanning and combining the best of previous methods. Additionally, we leverage SMT solvers to further improve the coverage and vulnerability detection rate of scanners. In addition to browsers, browser extensions also play an important role in the web ecosystem. These small programs, e.g. ad blockers and password managers, have powerful APIs and access to sensitive user data like browsing history. By systematically analyzing the extension ecosystem we find new static and dynamic methods for detecting both malicious and vulnerable extensions. In addition, we develop a method for detecting malicious extensions solely based on the metadata of downloads over time. We also analyze new attack vectors introduced by Google’s new vehicle OS, Android Automotive, which is based on Android with the addition of vehicle APIs. Our analysis results in new attacks pertaining to safety, privacy, and availability. Furthermore, we create AutoTame, which is designed to analyze third-party vehicle apps for the vulnerabilities we found.
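    For context, navigate-to was a draft CSP Level 3 directive restricting the targets a document may navigate to; browser support never became universal. A hedged sketch of composing such a policy header, with invented origins and helper name:

```python
# Illustrative only: build a Content-Security-Policy header value containing
# the draft navigate-to directive studied in the thesis. Treat this as a
# sketch of the draft syntax, not a guarantee of browser behavior.
def build_csp(allowed_navigation_origins):
    # navigate-to limits where the document may initiate navigations to,
    # which is the policy surface AutoNav suggests values for.
    nav = " ".join(allowed_navigation_origins)
    return f"default-src 'self'; navigate-to 'self' {nav}"

print(build_csp(["https://example.com", "https://partner.example"]))
# default-src 'self'; navigate-to 'self' https://example.com https://partner.example
```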

    Track The Planet: A Web-Scale Analysis Of How Online Behavioral Advertising Violates Social Norms

    Various forms of media have long been supported by advertising as part of a broader social agreement in which the public gains access to monetarily free or subsidized content in exchange for paying attention to advertising. In print- and broadcast-oriented media distribution systems, advertisers relied on broad audience demographics of various publications and programs in order to target their offers to the appropriate groups of people. The shift to distributing media on the World Wide Web has vastly altered the underlying dynamic by which advertisements are targeted. Rather than rely on imprecise demographics, the online behavioral advertising (OBA) industry has developed a system by which individuals’ web browsing histories are covertly surveilled in order that their product preferences may be deduced from their online behavior. Due to a failure of regulation, Internet users have virtually no means to control such surveillance, and it contravenes a host of well-established social norms. This dissertation explores the ways in which the recent emergence of OBA has come into conflict with these societal norms. Rather than a mere process for targeting messages, OBA represents a profound shift in the underlying balance of power within society. This power balance is embedded in an information asymmetry which gives corporations and governments significantly more knowledge of, and power over, citizens than vice-versa. Companies do not provide the public with an accounting of their techniques or the scale at which they operate. In order to shed light on corporate behavior in the OBA sector, two new tools were developed for this dissertation: webXray and policyXray. webXray is the most powerful tool available for attributing the flow of user data on websites to the companies which receive and process it. policyXray is the first, and currently only, tool capable of auditing website privacy policies in order to evaluate disclosure of data transfers to specific parties. Both tools are highly resource efficient, allowing them to analyze millions of data flows and operate at a scale which is normally reserved for the companies collecting data. In short, these tools rectify the existing information asymmetry between the OBA industry and the public by leveraging the tools of mass surveillance for socially-beneficial ends. The research presented herein allows many specific existing social-normative concerns to be explored using empirical data in a way which was not previously possible. The impact of OBA on three main areas is investigated: regulatory norms, medical privacy norms, and norms related to the utility of the press. Through an examination of data flows on one million websites, and policies on 200,000 more, it is found in the area of regulatory norms that well-established Fair Information Practice Principles are severely undermined by the self-regulatory “notice and choice” paradigm. In the area of informational norms related to personal health, an analysis of data flows on 80,000 pages related to 2,000 medical conditions reveals that user health concerns are shared with a number of commercial parties, virtually no policies exist to restrict or regulate the practice, and users are at risk of embarrassment and discrimination. Finally, an analysis of 250,000 pages drawn from 5,000 U.S.-based media outlets demonstrates that core values of an independent and trustworthy press are undermined by commercial surveillance and centralized revenue systems. 
This surveillance may also transfer data to government entities, potentially resulting in chilling effects which compromise the ability of the press to serve as a check on power. The findings of this dissertation make it clear that current approaches to regulating OBA based on “notice and choice” have failed. The underlying “choice” of OBA is to sacrifice core social values in favor of increased profitability for primarily U.S.-based advertising firms. Therefore, new regulatory approaches based on mass surveillance of corporate, rather than user, behaviors must be pursued. Only by resolving the information asymmetry between the public, private corporations, and the state may social norms be respected in the online environment.
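    As a rough, deliberately simplified analogue of the data-flow attribution webXray performs (the real tool instruments a full browser rather than parsing static HTML), the sketch below lists the third-party hosts that scripts, images and iframes on a page are loaded from; the sample page and domains are invented:

```python
# Toy illustration (not webXray): extract third-party hosts contacted by a
# page by scanning src attributes of scripts, images and iframes.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyFinder(HTMLParser):
    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.third_parties = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src", "")
            host = urlparse(src).hostname
            # Naive first-party test; real tooling maps hosts to owning companies.
            if host and not host.endswith(self.first_party):
                self.third_parties.add(host)

html = '<img src="https://tracker.example/px.gif"><script src="/app.js"></script>'
finder = ThirdPartyFinder("news-site.example")
finder.feed(html)
print(finder.third_parties)  # {'tracker.example'}
```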

    Raspberry Pi Technology


    Energy efficient heterogeneous virtualized data centers

    My thesis addresses software-controlled improvement of the energy efficiency of data centers. Data centers have been estimated to consume 1-2% of the globally generated electrical energy, with a strongly rising trend, and a typical server causes higher electricity costs over a 3-year lifespan than its purchase cost. Hence, increasing the energy efficiency of all components found in a data center is of high ecological as well as economic importance. The focus of my thesis is the efficient operation of the servers themselves. The vast majority of servers in data centers are underutilized for significant amounts of time; operating regions of 10-20% utilization are common, while the servers still draw considerable power. Much research effort has gone into Green Data Centers in recent years, e.g., regarding cooling efficiency, yet many issues remain open. One of them is how to operate a virtualized, heterogeneous business infrastructure with the minimum possible power consumption, under the constraint that Quality of Service, and in consequence revenue, are not severely degraded. The majority of existing work deals with homogeneous cluster infrastructures whose operating conditions are hardly comparable to business infrastructures, where reduced energy costs generally must not be cancelled out by lost revenue. In particular, an automatic trade-off between competing cost categories, energy being just one of them, is insufficiently studied. In my thesis, I investigate and evaluate mathematical models and algorithms for increasing the energy efficiency of servers in a data center. The amount of online, power-consuming hardware should at all times stay close to what the current workload actually requires: when workload intensity decreases, the infrastructure is consolidated and unneeded servers are shut down; when it rises, additional servers are woken up and the infrastructure is scaled out. Ideally this happens pro-actively, based on forecasts of workload development. In both cases the workload, encapsulated in VMs, is moved between servers via live migration. The question of which VM should run on which server, such that total power consumption is minimized and side constraints such as SLAs are not violated, is a combinatorial optimization problem in several variables. It has to be solved repeatedly, because the VMs' resource demands change over time, and servers are not homogeneous regarding their performance and power consumption. Due to the computational complexity, exact solutions are practically intractable. A greedy heuristic adapted from the related problem class of vector packing and a meta-heuristic genetic algorithm are investigated and evaluated, with load balancing as the baseline for comparison. An easily configurable cost model is formulated to trade off energy savings against Quality of Service. Additionally, the forecasting methods SARIMA and Holt-Winters are evaluated. Further, models that predict the negative impact of live migration on Quality of Service are developed, and approaches to reduce this impact are evaluated. Finally, the possible security and privacy implications of collecting and storing per-server energy consumption data are examined.
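    A hedged sketch of the kind of greedy vector-packing heuristic the thesis adapts: each VM is a demand vector (here CPU and RAM), each server a capacity vector, and a first-fit-decreasing pass consolidates VMs onto as few servers as possible so that the rest can be shut down. The sort key, data shapes and names are illustrative, not the thesis's exact formulation:

```python
# First-fit-decreasing vector packing sketch for VM consolidation.
def first_fit_decreasing(vms, servers):
    # Place large VMs first: sort by total demand, descending.
    order = sorted(vms.items(), key=lambda kv: sum(kv[1]), reverse=True)
    free = {name: list(cap) for name, cap in servers.items()}
    placement = {}
    for vm, (cpu, ram) in order:
        for server, (free_cpu, free_ram) in free.items():
            if cpu <= free_cpu and ram <= free_ram:
                placement[vm] = server
                free[server][0] -= cpu
                free[server][1] -= ram
                break
        else:
            raise RuntimeError(f"no server can host {vm}")
    return placement

vms = {"vm1": (4, 8), "vm2": (2, 4), "vm3": (1, 2)}  # (CPU cores, RAM GB)
servers = {"s1": (8, 16), "s2": (4, 8)}              # heterogeneous capacities
print(first_fit_decreasing(vms, servers))            # all fit on s1; s2 can sleep
```

    In a live system this placement would be recomputed periodically against forecast demand, with live-migration cost and SLA penalties folded into the objective, which is where the thesis's cost model and genetic algorithm come in.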

    Multi-objective optimisation methods applied to aircraft techno-economic and environmental issues

    Engineering methods that couple multi-objective optimisation (MOO) techniques with high-fidelity computational tools are expected to minimise the environmental impact of aviation while sustaining its growth, with the potential to reveal innovative solutions. In order to mitigate the compromise between computational efficiency and fidelity, these methods can be accelerated by harnessing the computational power of Graphics Processing Units (GPUs). The aim of the research is to develop a family of engineering methods to support research in aviation with respect to environmental and economic aspects. In order to reveal the non-dominated trade-off, also known as the Pareto Front (PF), among conflicting objectives, a MOO algorithm called Multi-Objective Tabu Search 2 (MOTS2) is developed, benchmarked against state-of-the-art methods and accelerated using GPUs. A prototype GPU-based fluid solver is also developed to simulate the mixing capability of a microreactor that could potentially be used in fuel-saving technologies in aviation. Using these methods, optimal aircraft trajectories in terms of flight time, fuel consumption and emissions are generated, and alternative microreactor designs are suggested in order to assess the trade-offs between pressure losses and micro-mixing capability. As a key contribution to knowledge, with reference to competitive optimisers and previous cases, the capabilities of the proposed methodology are illustrated in prototype applications of aircraft trajectory optimisation (ATO) and micro-mixing optimisation with 2 and 3 objectives, under operational and geometrical constraints, respectively. In the short term, ATO ought to be applied to existing aircraft; in the long term, improving the micro-mixing capability of a microreactor is expected to enable the use of hydrogen-based fuel. The methodology is also benchmarked and assessed against state-of-the-art techniques in ATO and micro-mixing optimisation with known and unknown trade-offs, where the former could only optimise 2 objectives and the latter could not exploit the computational efficiency of GPUs. The impact of deploying on GPUs a micro-mixing flow solver, which accelerates the generation of the trade-off relative to a reference study, and MOTS2, which illustrates the scalability potential, is assessed. With regard to standard analytical test functions and verification cases in MOO, MOTS2 can handle the multi-modality of the trade-off of ZDT4, a MOO benchmark function with many local optima that presents a challenge for NSGAMO, a state-of-the-art genetic algorithm for ATO, based on case studies in the public domain. However, MOTS2 demonstrated worse performance on ZDT3, a MOO benchmark function with a discontinuous trade-off, for which NSGAMO successfully captured the target PF. Comparing their overall performance, if the shape of the PF is known, MOTS2 should be preferred for problems with multi-modal trade-offs, whereas NSGAMO should be employed for discontinuous PFs. The trade-off between the objectives in airfoil shape optimisation, ATO and micro-mixing optimisation was continuous, so the weakness of MOTS2 in capturing the discontinuous PF of ZDT3 was not critical in the studied examples … [cont.]
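    To make the notion of a non-dominated trade-off concrete, the sketch below reduces a set of candidate solutions of a two-objective minimisation problem to its Pareto front. This is generic textbook bookkeeping that any multi-objective optimiser such as MOTS2 maintains, not MOTS2 itself, and the sample objective values are invented:

```python
# Pareto-front filter for a minimisation problem (textbook logic, not MOTS2).
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Two objectives, e.g. (fuel burn, flight time) of candidate trajectories.
candidates = [(3.0, 9.0), (2.5, 10.0), (4.0, 8.5), (3.5, 9.5)]
print(pareto_front(candidates))  # (3.5, 9.5) is dominated by (3.0, 9.0)
```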

    Efficient main memory-based XML stream processing

    Applications that process XML documents as files or streams are naturally main memory-based. This makes main memory the bottleneck for scalability. This doctoral thesis addresses the problem and presents a toolkit for effective buffer management in main memory-based XML stream processors. XML document projection is an established technique for reducing the buffer requirements of main memory-based XML processors, where only data relevant to query evaluation is loaded into main memory buffers. We present a novel implementation of this task, where we use string matching algorithms designed for efficient keyword search in flat strings to navigate in tree-structured data. We then introduce an extension of the XQuery language, called FluX, that supports event-based query processing. Purely event-based queries of this language can be executed on streaming XML data in a very direct way. We develop an algorithm to efficiently rewrite XQueries into FluX. This algorithm is capable of exploiting order constraints derived from schemata to reduce the amount of buffering in query evaluation. During streaming query evaluation, we continuously purge buffers of data that is no longer relevant. By combining static query analysis with a dynamic analysis of the buffer contents, we effectively reduce the size of memory buffers. We have confirmed the efficacy of these techniques through extensive experiments and publication at international venues. To compare our contributions to related work in a systematic manner, we contribute an abstract framework for XML stream processing, which allows us to gain a big-picture view of the factors influencing main memory consumption.
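    As a rough analogue of the projection-and-purging idea, the sketch below streams an XML document, evaluates a hypothetical query that only needs title elements, and clears buffers as soon as an element has been fully processed. It uses Python's standard event-based parser; the thesis's implementation instead navigates the serialized input with string-matching algorithms:

```python
# Streaming sketch: keep only query-relevant data, purge the rest eagerly.
import io
import xml.etree.ElementTree as ET

RELEVANT = {"title"}  # tags the hypothetical query actually needs

doc = io.BytesIO(b"<books><book><title>FluX</title><blob>...</blob></book></books>")
for event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag in RELEVANT:
        print(elem.text)  # evaluate the query on the buffered element
    elem.clear()          # purge buffered children: no longer relevant
```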

    B!SON: A Tool for Open Access Journal Recommendation

    Finding a suitable open access journal to publish scientific work is a complex task: researchers have to navigate a constantly growing number of journals, institutional agreements with publishers, funders' conditions and the risk of predatory publishers. To help with these challenges, we introduce a web-based journal recommendation system called B!SON. It is developed based on a systematic requirements analysis, built on open data, gives publisher-independent recommendations and works across domains. It suggests open access journals based on the title, abstract and references provided by the user. The recommendation quality has been evaluated using a large test set of 10,000 articles. Development by two German scientific libraries ensures the longevity of the project.
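    A toy illustration of the abstract-matching part of such a recommender, reduced to bag-of-words cosine similarity between a manuscript abstract and per-journal term profiles; B!SON's actual scoring combines title, abstract and reference signals over open data, and the journal names below are invented:

```python
# Toy similarity-based journal ranking (illustrative, not B!SON's model).
from collections import Counter
from math import sqrt

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

journals = {  # invented journal term profiles
    "Journal of Web Security": "web security browser vulnerabilities",
    "Energy Informatics": "energy efficiency data centers virtualization",
}
abstract = "improving browser security against web vulnerabilities"
ranked = sorted(journals, key=lambda j: cosine(abstract, journals[j]), reverse=True)
print(ranked[0])  # Journal of Web Security
```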