The Institutional-Evolutionary Antitrust Model
The purpose of this article is to provide an alternative antitrust model to the mainstream model that is used in competition policy. I call it the Institutional-Evolutionary Antitrust Model. In order to construct an antitrust model, one needs both empirical knowledge and considerations of how to adequately deal with norms. The analysis of competition as an evolutionary process that unfolds within legal rules provides the empirical foundation for the model. The development of the normative dimension involves the elaboration of a comparative approach. Building on those foundations, the main features of the Institutional-Evolutionary Model are sketched out, and it is shown that its use leads to systematically different outcomes and conclusions than the dominant antitrust ideals.
Keywords: Antitrust, Competition, Competition Policy, Evolutionary Process, Institutions
Trajectory data mining: A review of methods and applications
The increasing use of location-aware devices has led to an increasing availability of trajectory data. As a result, researchers have devoted their efforts to developing analysis methods, including different data mining methods, for trajectories. However, the research in this direction has so far produced mostly isolated studies, and we still lack an integrated view of the application problems that trajectory mining has solved, the methods used to solve them, and the real-world applications built on the obtained solutions. In this paper, we first discuss generic methods of trajectory mining and the relationships between them. Then, we discuss and classify application problems that were solved using trajectory data and relate them to the generic mining methods that were used and the real-world applications based on them. We classify trajectory-mining application problems into major problem groups based on how they are related. This classification of problems can guide researchers in identifying new application problems. The relationships between the methods, together with the association between the application problems and mining methods, can help researchers in identifying gaps between methods and inspire them to develop new methods. This paper can also guide analysts in choosing a suitable method for a specific problem. The main contribution of this paper is to provide an integrated view relating applications of mining trajectory data and the methods used.
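To make one of the generic methods discussed in such surveys concrete, the following is a minimal Python sketch of simple stay-point detection over a timestamped GPS trajectory; the haversine helper, the 200 m / 30 min thresholds, and the point representation are illustrative assumptions, not a method prescribed by the paper.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float, float]  # (latitude, longitude, unix_time_seconds)

def haversine_m(p: Point, q: Point) -> float:
    """Great-circle distance in metres between two points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def stay_points(traj: List[Point], dist_m: float = 200.0, min_stay_s: float = 1800.0):
    """Return (mean_lat, mean_lon, arrive_t, leave_t) for segments where the moving
    object stays within dist_m of an anchor point for at least min_stay_s seconds."""
    stays, i, n = [], 0, len(traj)
    while i < n:
        j = i + 1
        while j < n and haversine_m(traj[i], traj[j]) <= dist_m:
            j += 1
        if traj[j - 1][2] - traj[i][2] >= min_stay_s:
            segment = traj[i:j]
            stays.append((sum(p[0] for p in segment) / len(segment),
                          sum(p[1] for p in segment) / len(segment),
                          traj[i][2], traj[j - 1][2]))
            i = j                      # skip past the detected stay
        else:
            i += 1
    return stays
```

Detected stay points of this kind are a typical preprocessing step before higher-level tasks such as location recommendation or movement-pattern mining.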
Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web
The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

Ontologies will be a crucial tool for the SW. The AKT consortium brings a great deal of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough. Complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
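To make the ontology-merging and conflict-of-reference concerns above more concrete, here is a minimal sketch (not AKT's actual tooling) that uses the Python rdflib library to take the union of two hypothetical ontology files and flag resources that end up with disagreeing rdfs:label values after the merge; the file names and the label-based notion of a conflict are illustrative assumptions.

```python
from collections import defaultdict
from rdflib import Graph
from rdflib.namespace import RDFS

# Load two pre-existing ontologies (file names are hypothetical).
g1 = Graph().parse("ontology_a.ttl", format="turtle")
g2 = Graph().parse("ontology_b.ttl", format="turtle")

# Merging here is a plain union of triples, producing a third ontology.
merged = g1 + g2

# Flag subjects whose labels disagree after the merge; this is one simple
# symptom of the conflicts of reference that ontology mapping must resolve.
labels = defaultdict(set)
for subject, _, label in merged.triples((None, RDFS.label, None)):
    labels[subject].add(str(label))

for subject, names in labels.items():
    if len(names) > 1:
        print(f"conflicting labels for {subject}: {sorted(names)}")
```

A real merging service would of course go well beyond label comparison (class hierarchies, property constraints, instance data), but the shape of the problem is the same: heterogeneous sources, a mechanical union, and a reasoning step to detect and repair the resulting inconsistencies.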
Distributed collaborative structuring
Making Inter- and Intranet resources available in a structured way is one of the most important and challenging problems today. An underlying structure allows users to search for information, documents or relationships without a clearly defined information need. While search and filtering technology is becoming more and more powerful, the development of such explorative access methods lags behind. This work is concerned with the development of large-scale data mining methods that allow information spaces to be structured based on loosely coupled user annotations and navigation patterns. An essential challenge, which has not yet been fully addressed in this context, is heterogeneity. Different users and user groups often have different preferences and needs regarding how to access an information collection. While current Business Intelligence, Information Retrieval or Content Management solutions allow for a certain degree of personalization, these approaches are still very static. This considerably limits their applicability in heterogeneous environments. This work is based on a novel paradigm, called collaborative structuring. The term is chosen as a generalization of the term collaborative filtering. Instead of only filtering items, collaborative structuring allows users to organize information spaces in a loosely coupled way, based on patterns emerging through data mining. A first contribution of the work is to define the conceptual notion of collaborative structuring as a combinatorial optimization problem and to relate it to existing research in the areas of data and web mining. As a second contribution, highly scalable, distributed optimization strategies are proposed and analyzed. Finally, the proposed approaches are quantitatively evaluated against existing methods using several real-world data sets. Practical experience from two application areas is also reported, namely information access for heterogeneous expert communities and collaborative media organization.
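As a rough illustration of the kind of combinatorial objective such a formalisation involves, the following Python sketch greedily groups items whose loosely coupled user annotations overlap; the Jaccard measure, the threshold, and the toy data are illustrative assumptions and not the thesis's actual optimization strategy.

```python
from typing import Dict, List, Set

def jaccard(a: Set[str], b: Set[str]) -> float:
    """Overlap between two annotation (tag) sets."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def greedy_structure(items: Dict[str, Set[str]], threshold: float = 0.3) -> List[dict]:
    """Greedily assign each item to the most similar existing group, or open a
    new group: one cheap heuristic for structuring an information space."""
    groups: List[dict] = []                      # each group: {"members": set, "tags": set}
    for item, tags in items.items():
        best, best_sim = None, 0.0
        for group in groups:
            sim = jaccard(tags, group["tags"])
            if sim > best_sim:
                best, best_sim = group, sim
        if best is not None and best_sim >= threshold:
            best["members"].add(item)
            best["tags"] |= tags
        else:
            groups.append({"members": {item}, "tags": set(tags)})
    return groups

# Toy usage: three annotated documents collapse into two groups.
docs = {"d1": {"python", "mining"}, "d2": {"python", "web"}, "d3": {"gps", "trajectories"}}
print(greedy_structure(docs))
```

The interesting part of the actual problem is what this sketch omits: choosing a global objective over all groupings, scaling the optimization across distributed nodes, and letting different user groups keep different structurings of the same collection.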
Effects of component-subscription network topology on large-scale data centre performance scaling
Modern large-scale data centres, such as those used for cloud computing service provision, are becoming ever larger as the operators of those data centres seek to maximise the benefits from economies of scale. With these increases in size comes a growth in system complexity, which is usually problematic. There is an increased desire for automated "self-star" configuration, management, and failure-recovery of the data-centre infrastructure, but many traditional techniques scale much worse than linearly as the number of nodes to be managed increases. As the number of nodes in a median-sized data-centre looks set to increase by two or three orders of magnitude in coming decades, it seems reasonable to attempt to explore and understand the scaling properties of the data-centre middleware before such data-centres are constructed. In [1] we presented SPECI, a simulator that predicts aspects of large-scale data-centre middleware performance, concentrating on the influence of status changes such as policy updates or routine node failures. [...]. In [1] we used a first-approximation assumption that such subscriptions are distributed wholly at random across the data centre. In this present paper, we explore the effects of introducing more realistic constraints on the structure of the internal network of subscriptions. We contrast the original results [...] exploring the effects of making the data-centre's subscription network have a regular lattice-like structure, and also semi-random network structures resulting from parameterised network generation functions that create "small-world" and "scale-free" networks. We show that for distributed middleware topologies, the structure and distribution of tasks carried out in the data centre can significantly influence the performance overhead imposed by the middleware.
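For readers wanting to reproduce subscription-network structures of the kinds contrasted here, the following is a minimal sketch using the Python networkx generators for random, regular ring-lattice, small-world and scale-free graphs; the node count and generator parameters are illustrative assumptions, not the settings used in the paper's experiments.

```python
import networkx as nx

n = 1024  # hypothetical number of data-centre nodes

topologies = {
    # first-approximation baseline: subscriptions placed wholly at random
    "random": nx.erdos_renyi_graph(n, p=8 / n),
    # regular lattice-like structure: ring lattice, each node wired to 8 neighbours
    "lattice": nx.watts_strogatz_graph(n, k=8, p=0.0),
    # "small-world": the same lattice with a fraction of edges rewired at random
    "small_world": nx.watts_strogatz_graph(n, k=8, p=0.1),
    # "scale-free": preferential attachment yields a few highly connected hubs
    "scale_free": nx.barabasi_albert_graph(n, m=4),
}

for name, g in topologies.items():
    print(f"{name:12s} nodes={g.number_of_nodes()} edges={g.number_of_edges()} "
          f"avg_degree={2 * g.number_of_edges() / g.number_of_nodes():.1f}")
```

In a simulator such as SPECI, each edge of one of these graphs would stand for a subscription along which status changes must be propagated, so the choice of generator directly shapes the middleware load being measured.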
Discovery of topological constraints on spatial object classes using a refined topological model
In a typical data collection process, a surveyed spatial object is annotated upon creation, and is classified based on its attributes. This annotation can also be guided by textual definitions of objects. However, interpretations of such definitions may differ among people, and thus result in subjective and inconsistent classification of objects. This problem becomes even more pronounced if cultural and linguistic differences are considered. As a solution, this paper investigates the role of topology as the defining characteristic of a class of spatial objects. We propose a data mining approach based on frequent itemset mining to learn patterns in topological relations between objects of a given class and other spatial objects. In order to capture topological relations between more than two (linear) objects, this paper further proposes a refinement of the 9-intersection model for topological relations of line geometries. The discovered topological relations form topological constraints of an object class that can be used for spatial object classification. A case study has been carried out on bridges in the OpenStreetMap dataset for the state of Victoria, Australia. The results show that the proposed approach can successfully learn topological constraints for the class bridge, and that the proposed refined topological model for line geometries outperforms the 9-intersection model in this task.
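As a rough illustration of the relation-extraction step that feeds such frequent-itemset mining, the sketch below uses the Python shapely library to compute DE-9IM relation strings between hypothetical bridge centrelines and nearby linear objects, and then counts which relation items recur; the geometries, class names, and the use of raw DE-9IM strings as items are illustrative assumptions rather than the paper's refined model.

```python
from collections import Counter
from shapely.geometry import LineString

# Hypothetical bridge centrelines and nearby linear objects (a waterway, a road).
bridges = {
    "bridge_1": LineString([(0, 0), (10, 0)]),
    "bridge_2": LineString([(0, 5), (10, 5)]),
}
context = {
    "waterway": LineString([(5, -5), (5, 10)]),   # crosses both bridges
    "road":     LineString([(-5, 0), (0, 0)]),    # touches bridge_1 at an endpoint
}

# One "transaction" per bridge: the set of (class, DE-9IM) relation items it has.
transactions = []
for bridge_id, bridge in bridges.items():
    items = set()
    for cls, geom in context.items():
        items.add((cls, bridge.relate(geom)))     # DE-9IM string for the relation
    transactions.append(items)

# Count the support of each relation item across bridges; items whose support
# exceeds a chosen threshold would become candidate topological constraints.
support = Counter(item for transaction in transactions for item in transaction)
for item, count in support.items():
    print(item, count / len(transactions))
```

In this toy example the "crosses a waterway" item appears for every bridge, which is exactly the kind of high-support relation that a constraint-learning step would keep, while low-support relations would be discarded as noise.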
UBIDEV: a homogeneous service framework for pervasive computing environments
This dissertation studies the heterogeneity problem of pervasive computing systems from the viewpoint of an infrastructure aiming to provide a service-oriented application model. Moving from distributed systems through mobile computing, pervasive computing is presented as a step forward in the ubiquitous availability of services and the proliferation of interacting autonomous entities. To better understand the problems related to the heterogeneous and dynamic nature of pervasive computing environments, we need to analyze the structure of a pervasive computing system along its physical and service dimensions. The physical dimension describes the physical environment together with the technology infrastructure that characterizes the interactions and relations within the environment; the service dimension represents the services (whether software or not) the environment is able to provide [Nor99]. To better separate the constraints and the functionalities of a pervasive computing system, this dissertation classifies it in terms of resources, context, classification, services, coordination and application. UBIDEV, as the key result of this dissertation, introduces a unified model helping the design and implementation of applications for heterogeneous and dynamic environments. The model is composed of the following concepts:
• Resource: all elements of the environment that are manipulated by the application; they are the atomic abstraction unit of the model.
• Context: all information coming from the environment that the application uses to adapt its behavior. The context contains resources and services and defines their role in the application.
• Classification: the environment is classified according to the application ontology in order to ground the generic conceptual model of the application in the specific environment. It defines the basic semantic level of interoperability.
• Service: the functionalities supported by the system; each service manipulates one or more resources. Applications are defined as a coordination and adaptation of services.
• Coordination: all aspects related to service composition and execution, as well as the use of contextual information, are captured by the coordination concept.
• Application Ontology: represents the viewpoint of the application on the specific context; it defines the high-level semantics of resources, services and context.
Applying the design paradigm proposed by UBIDEV allows applications to be described according to a Service-Oriented Architecture [Bie02], and to focus on application functionalities rather than their relations with the physical devices.
Keywords: pervasive computing, homogeneous environment, service-oriented, heterogeneity problem, coordination model, context model, resource management, service management, application interfaces, ontology, semantic services, interaction logic, description logic.
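A minimal sketch, assuming Python rather than UBIDEV's actual implementation language and using invented class names, of how the concepts above could be expressed as programming abstractions: resources are atomic units, the context binds them to roles, services transform the context, and a coordination layer composes services into an application.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Resource:
    """Atomic abstraction unit: an element of the environment the application manipulates."""
    name: str
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class Context:
    """Environmental information (resources plus their roles) used to adapt behaviour."""
    resources: Dict[str, Resource] = field(default_factory=dict)
    roles: Dict[str, str] = field(default_factory=dict)   # resource name -> role in the application

@dataclass
class Service:
    """A functionality of the system; it manipulates one or more resources via the context."""
    name: str
    operate: Callable[[Context], Context]

class Coordination:
    """Composes and executes services against the current context (the application level)."""
    def __init__(self, services: List[Service]) -> None:
        self.services = services

    def run(self, ctx: Context) -> Context:
        for service in self.services:
            ctx = service.operate(ctx)   # adaptation: each service may rewrite the context
        return ctx
```

The classification and application-ontology concepts are deliberately left out of this sketch; in the dissertation's model they sit above these abstractions, grounding the generic resource and service types in a concrete environment.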