
    Online advertising: analysis of privacy threats and protection approaches

    Online advertising, the pillar of the “free” content on the Web, has revolutionized the marketing business in recent years by creating a myriad of new opportunities for advertisers to reach potential customers. The current advertising model builds upon an intricate infrastructure composed of a variety of intermediary entities and technologies whose main aim is to deliver personalized ads. For this purpose, a wealth of user data is collected, aggregated, processed and traded behind the scenes at an unprecedented rate. Despite the enormous value of online advertising, however, the intrusiveness and ubiquity of these practices prompt serious privacy concerns. This article surveys the online advertising infrastructure and its supporting technologies, and presents a thorough overview of the underlying privacy risks and the solutions that may mitigate them. We first analyze the threats and potential privacy attackers in this scenario of online advertising. In particular, we examine the main components of the advertising infrastructure in terms of tracking capabilities, data collection, aggregation level and privacy risk, and overview the tracking and data-sharing technologies employed by these components. Then, we conduct a comprehensive survey of the most relevant privacy mechanisms, and classify and compare them on the basis of their privacy guarantees and impact on the Web.
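
    As a concrete illustration of the tracking mechanism such surveys analyze, the sketch below shows, in deliberately simplified Python with hypothetical names, how a third-party tracker embedded on many publisher pages can link visits into a single cross-site profile through a shared cookie identifier. It is a toy model, not any particular ad network's implementation.

```python
# Minimal sketch (hypothetical names): how a third-party tracker can link
# page visits across unrelated sites through a shared cookie identifier.
from collections import defaultdict
import uuid

class Tracker:
    """Stands in for an ad-network domain embedded on many publisher sites."""
    def __init__(self):
        self.profiles = defaultdict(list)  # cookie_id -> visited pages

    def serve_pixel(self, cookie_id, page_url):
        # First visit anywhere: issue a new third-party cookie.
        if cookie_id is None:
            cookie_id = str(uuid.uuid4())
        # Every later visit to any embedding site reuses the same id,
        # so the visit history aggregates into one cross-site profile.
        self.profiles[cookie_id].append(page_url)
        return cookie_id

tracker = Tracker()
cid = tracker.serve_pixel(None, "https://news.example/politics")
cid = tracker.serve_pixel(cid, "https://shop.example/shoes")
cid = tracker.serve_pixel(cid, "https://health.example/symptoms")
print(tracker.profiles[cid])  # one linked browsing history across three sites
```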

    Methods and Tools for Management of Distributed Event Processing Applications

    Capturing and processing events from cyber-physical systems gives users the ability to be informed continuously about performance data and emerging problems (situational awareness) or to optimise maintenance processes depending on equipment condition (condition-based maintenance). Because of the volume and frequency of the data, and the requirement for near-real-time evaluation, such scenarios demand suitable technologies. Under the name Event Processing, technologies have become established that can process data streams in real time and detect complex event patterns based on spatial, temporal or causal relationships. At the same time, the systems available in this field today are still marked by the high technical complexity of the underlying declarative languages, which leads to slow development cycles for real-time applications because of the technical expertise required. Yet precisely these applications often exhibit high dynamics, both in the requirements for the situations to be detected and in the syntax and semantics of the underlying sensor data. The primary contribution of this thesis abstracts away technical details so that domain experts can independently create, modify and execute distributed real-time applications in the form of so-called real-time processing pipelines. The contributions of this work can be summarised as follows: 1. A methodology for developing real-time applications that takes into account extensibility and accessibility for domain experts. 2. Models for the semantic description of the characteristics of event producers, event processing units and event consumers. 3. A system for executing processing pipelines composed of geographically distributed event processing units. 4. A software artifact for the graphical modelling of processing pipelines and their automated execution. The contributions are presented, applied and evaluated in several scenarios from the production and logistics domains.
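
    To make the pattern-detection idea concrete, here is a minimal, illustrative Python sketch (not the thesis's system; thresholds and event names are invented) of a condition-based-maintenance rule: an over-temperature reading followed by excessive vibration on the same machine within a time window.

```python
# Minimal sketch (illustrative): detecting a complex event pattern over a
# stream -- an over-temperature reading followed by an over-vibration
# reading on the same machine within 60 seconds.
from dataclasses import dataclass

@dataclass
class Event:
    machine: str
    kind: str      # e.g. "temperature" or "vibration"
    value: float
    timestamp: float

def detect_overheat_then_vibration(stream, window=60.0):
    """Yield (first, second) event pairs that match the temporal pattern."""
    pending = {}  # machine -> most recent over-temperature event
    for ev in stream:
        if ev.kind == "temperature" and ev.value > 90.0:
            pending[ev.machine] = ev
        elif ev.kind == "vibration" and ev.value > 5.0:
            first = pending.get(ev.machine)
            if first and ev.timestamp - first.timestamp <= window:
                yield first, ev
                del pending[ev.machine]

events = [
    Event("pump-1", "temperature", 95.0, 0.0),
    Event("pump-1", "vibration", 6.2, 30.0),   # matches: within the window
    Event("pump-2", "vibration", 7.0, 40.0),   # no preceding over-temperature
]
for first, second in detect_overheat_then_vibration(events):
    print(f"maintenance alert on {first.machine}")
```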

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the authors' and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT changed for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge.
    The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so their capabilities will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together considerable expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that standards for task (or service) specifications will emerge in the medium term. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
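
    As a toy illustration of the ontology-mapping and merging problem the abstract raises, the following Python sketch (with invented vocabulary) merges two small triple-based ontologies through an explicit term mapping and shows the kind of residual conflict a real merger must detect.

```python
# Minimal sketch (hypothetical vocabulary): the kind of conflict that naive
# ontology merging must resolve -- two ontologies using different names for
# the same concept, reconciled through an explicit mapping table.
ontology_a = {("Academic", "subClassOf", "Person"),
              ("Academic", "hasAffiliation", "University")}
ontology_b = {("Researcher", "subClassOf", "Agent"),
              ("Researcher", "worksAt", "Institution")}

# A mapping asserts which terms co-refer; building such tables (ontology
# mapping) is one of the tasks the abstract identifies.
mapping = {"Researcher": "Academic", "worksAt": "hasAffiliation",
           "Institution": "University"}

def merge(a, b, mapping):
    """Rewrite b's terms through the mapping, then union the triple sets."""
    rewritten = {tuple(mapping.get(t, t) for t in triple) for triple in b}
    return a | rewritten

for triple in sorted(merge(ontology_a, ontology_b, mapping)):
    print(triple)
# Note the residual conflict: Academic ends up a subclass of both Person
# and Agent, which a real merger would have to detect and reconcile.
```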

    Mobile Agent Based Cloud Computing

    Cloud Computing is becoming a revolutionary computing paradigm. It offers various types of services and applications delivered through the internet cloud. These services aim to provide a reliable, fault-tolerant, dynamic computing environment and offer computing resources on demand. Skype, Dropbox and Yahoo Mail are some of the cloud services that have had a major impact on our lives. Several measures are taken to maintain the quality of service in the cloud and to make IT infrastructure available at low cost. This paper presents various aspects of Cloud Computing, its implementation features and challenges, and explores the potential scope for research. The major section of this paper surveys studies on the possibilities of integrating Mobile Agents into Cloud Computing, since these technologies appear to be promising and marketable. The paper thus focuses on resolving challenges and bolstering the services of Cloud Computing by utilizing Mobile Agent technology across its various aspects.
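
    To ground the idea, the sketch below is a minimal, illustrative Python rendering of the mobile-agent principle the paper builds on: an agent is code plus state that serializes itself and resumes on another node, so computation moves to the data. Real platforms move the bytes across a network and add transport, security and life-cycle management; nothing here reflects a specific system.

```python
# Minimal sketch (illustrative only): a mobile agent as code plus state that
# serializes itself, "migrates" to another host, and resumes there.
import pickle

class WordCountAgent:
    """Carries its partial result with it as it visits cloud nodes."""
    def __init__(self, keyword):
        self.keyword = keyword
        self.count = 0

    def run(self, local_data):
        # Compute against data that stays on the current node.
        self.count += sum(doc.count(self.keyword) for doc in local_data)

    def migrate(self):
        # Serialize the agent's state; the receiving node deserializes it
        # and calls run() again, so the computation moves to the data.
        return pickle.dumps(self)

# Two "nodes", each holding data that never leaves it.
node_a = ["cloud services scale", "cloud storage"]
node_b = ["agents in the cloud"]

agent = WordCountAgent("cloud")
agent.run(node_a)
agent = pickle.loads(agent.migrate())  # hop to node_b
agent.run(node_b)
print(agent.count)  # 3 occurrences found across both nodes
```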

    Topologies of agent interactions in knowledge-intensive multi-agent systems for networked information services

    Agents in a multi-agent system (MAS) can interact and cooperate in many different ways. The topology of agent interaction determines how the agents control and communicate with each other, what control and communication capabilities each agent and the system as a whole possess, and how efficient that control and communication are. In consequence, the topology affects the agents' ability to share knowledge, integrate knowledge, and make efficient use of knowledge in a MAS. This paper presents an overview of four major MAS topological models and assesses their advantages and disadvantages in terms of agent autonomy, adaptation, scalability, and efficiency of cooperation. Insights into the applicability of each topology to different environments and domain-specific applications are explored. A design example applying the topological models to an information service management application illustrates the practical merits of each topology.
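
    As a rough illustration of two of the topologies under comparison, the Python sketch below (names invented) contrasts a centralized broker, where every message flows through one hub, with direct peer-to-peer messaging, where each agent holds references to its peers.

```python
# Minimal sketch (illustrative): centralized vs. peer-to-peer interaction
# topologies in a multi-agent system.
class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = []
        self.peers = {}          # used only in the peer-to-peer topology

    def receive(self, sender, msg):
        self.inbox.append((sender, msg))

    def send_direct(self, recipient, msg):
        # Peer-to-peer: the agent communicates without any intermediary.
        self.peers[recipient].receive(self.name, msg)

class Broker:
    """Centralized topology: every message flows through one hub, which
    simplifies control but is a bottleneck and single point of failure."""
    def __init__(self):
        self.registry = {}

    def register(self, agent):
        self.registry[agent.name] = agent

    def route(self, sender, recipient, msg):
        self.registry[recipient].receive(sender, msg)

a, b = Agent("a"), Agent("b")

hub = Broker()
hub.register(a); hub.register(b)
hub.route("a", "b", "task request")    # centralized: via the broker

a.peers["b"] = b                       # wire the mesh
b.peers["a"] = a
b.send_direct("a", "task result")      # peer-to-peer: direct, no hub

print(b.inbox)  # [('a', 'task request')]
print(a.inbox)  # [('b', 'task result')]
```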

    The 2016 Power Trading Agent Competition

    This is the specification for the Power Trading Agent Competition for 2016 (Power TAC 2016). Power TAC is a competitive simulation that models a “liberalized” retail electrical energy market, where competing business entities or “brokers” offer energy services to customers through tariff contracts, and must then serve those customers by trading in a wholesale market. Brokers are challenged to maximize their profits by buying and selling energy in the wholesale and retail markets, subject to fixed costs and constraints; the winner of an individual “game” is the broker with the highest bank balance at the end of a simulation run. Costs include fees for publication and withdrawal of tariffs, and distribution fees for transporting energy to their contracted customers. Costs are also incurred whenever there is an imbalance between a broker’s total contracted energy supply and demand within a given time slot. The simulation environment models a wholesale market, a regulated distribution utility, and a population of energy customers, situated in a real location on Earth during a specific period for which weather data is available. The wholesale market is a relatively simple call market, similar to many existing wholesale electric power markets, such as Nord Pool in Scandinavia or FERC markets in North America, but unlike the FERC markets we are modeling a single region, and therefore we approximate locational-marginal pricing through a simple manipulation of the wholesale supply curve. Customer models include households, electric vehicles, and a variety of commercial and industrial entities, many of which have production capacity such as solar panels or wind turbines.
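
    To illustrate what a “relatively simple call market” does, here is a toy Python clearing routine: bids and asks for a time slot are sorted and crossed, and the crossing volume trades at a single price. The midpoint pricing rule is one common convention, not necessarily the one Power TAC uses.

```python
# Minimal sketch (not the official Power TAC implementation): clearing a
# simple call market. Bids and asks are collected for a time slot, then
# crossed at a single clearing price.
def clear_call_market(bids, asks):
    """bids, asks: lists of (price, quantity). Returns (price, traded_qty)."""
    bids = sorted(([p, q] for p, q in bids), key=lambda x: -x[0])  # highest buyer first
    asks = sorted(([p, q] for p, q in asks), key=lambda x: x[0])   # cheapest seller first
    traded, price = 0.0, None
    bi = ai = 0
    while bi < len(bids) and ai < len(asks) and bids[bi][0] >= asks[ai][0]:
        qty = min(bids[bi][1], asks[ai][1])
        traded += qty
        # One common convention: price all trades at the midpoint of the
        # last crossing bid and ask.
        price = (bids[bi][0] + asks[ai][0]) / 2
        bids[bi][1] -= qty
        asks[ai][1] -= qty
        if bids[bi][1] == 0: bi += 1
        if asks[ai][1] == 0: ai += 1
    return price, traded

# Brokers bid to buy energy (price per MWh, MWh); generators offer to sell.
bids = [(45.0, 10), (40.0, 5), (30.0, 8)]
asks = [(25.0, 6), (35.0, 10), (50.0, 4)]
print(clear_call_market(bids, asks))  # (37.5, 15.0): the crossing volume trades
```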

    The 2017 Power Trading Agent Competition

    This is the specification for the Power Trading Agent Competition for 2017 (Power TAC 2017). Power TAC is a competitive simulation that models a “liberalized” retail electrical energy market, where competing business entities or “brokers” offer energy services to customers through tariff contracts, and must then serve those customers by trading in a wholesale market. Brokers are challenged to maximize their profits by buying and selling energy in the wholesale and retail markets, subject to fixed costs and constraints; the winner of an individual “game” is the broker with the highest bank balance at the end of a simulation run. Costs include fees for publication and withdrawal of tariffs, and distribution fees for transporting energy to their contracted customers. Costs are also incurred whenever there is an imbalance between a broker’s total contracted energy supply and demand within a given time slot. The simulation environment models a wholesale market, a regulated distribution utility, and a population of energy customers, situated in a real location on Earth during a specific period for which weather data is available. The wholesale market is a relatively simple call market, similar to many existing wholesale electric power markets, such as Nord Pool in Scandinavia or FERC markets in North America, but unlike the FERC markets we are modeling a single region, and therefore we approximate locational-marginal pricing through a simple manipulation of the wholesale supply curve. Customer models include households, electric vehicles, and a variety of commercial and industrial entities, many of which have production capacity such as solar panels or wind turbines. All have “real-time” metering to support allocation of their hourly supply and demand to their subscribed brokers, and all are approximate utility maximizers.
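
    The balancing cost mentioned in both specifications can be made concrete with a small, purely illustrative calculation (the prices and names below are invented, not taken from the spec): a broker pays for any per-timeslot gap between contracted and actual energy, in either direction.

```python
# Minimal sketch (illustrative numbers): per-timeslot imbalance settlement.
# A broker is penalized for any gap between the energy it contracted to
# deliver and what its customers actually consumed or produced.
def imbalance_cost(contracted_mwh, actual_mwh, up_price=60.0, down_price=15.0):
    """Stylized penalty in both directions: a shortage must be covered at a
    high balancing price; a surplus is sold back at a low one, so the broker
    loses margin either way."""
    gap = actual_mwh - contracted_mwh
    if gap > 0:                      # short: must buy balancing energy
        return gap * up_price
    return -gap * down_price         # long: surplus offloaded cheaply

# A broker that contracted 100 MWh in a slot where customers used 112 MWh:
print(imbalance_cost(100, 112))      # 720.0 paid for the 12 MWh shortfall
print(imbalance_cost(100, 95))       # 75.0 effective cost of the 5 MWh surplus
```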