
    Coordinated Multicast Beamforming in Multicell Networks

    We study physical-layer multicasting in multicell networks where each base station, equipped with multiple antennas, transmits a common message using a single beamformer to multiple users in the same cell. We investigate two coordinated beamforming designs: quality-of-service (QoS) beamforming and max-min SINR (signal-to-interference-plus-noise ratio) beamforming. The goal of QoS beamforming is to minimize the total power consumption while guaranteeing that the received SINR at each user is above a predetermined threshold. We present a necessary condition for the optimization problem to be feasible. Then, based on decomposition theory, we propose a novel decentralized algorithm to implement the coordinated beamforming with limited information sharing among different base stations. The algorithm is guaranteed to converge, and in most cases it converges to the optimal solution. Max-min SINR (MMS) beamforming aims to maximize the minimum received SINR among all users under per-base-station power constraints. We show that the MMS problem and a weighted peak-power minimization (WPPM) problem are inverse problems. Based on this inversion relationship, we then propose an efficient algorithm to solve the MMS problem in an approximate manner. Simulation results demonstrate significant advantages of the proposed multicast beamforming algorithms over conventional multicasting schemes.
    Comment: 10 pages, 9 figures
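
The abstract does not spell out the optimization problem; a minimal LaTeX sketch of the QoS formulation it describes is given below, with assumed notation (not taken from the paper): w_b is the multicast beamformer of base station b, h_{b,k} the channel from base station b to user k, b(k) the serving cell of user k, gamma_k the SINR threshold, and sigma_k^2 the noise power.

```latex
% Sketch of the QoS multicast beamforming problem described in the abstract
% (minimize total transmit power subject to per-user SINR targets; notation assumed).
\begin{align}
  \min_{\{w_b\}} \quad & \sum_{b=1}^{B} \|w_b\|^2 \\
  \text{s.t.} \quad & \mathrm{SINR}_k =
    \frac{|h_{b(k),k}^{H} w_{b(k)}|^2}
         {\sum_{b' \neq b(k)} |h_{b',k}^{H} w_{b'}|^2 + \sigma_k^2}
    \;\ge\; \gamma_k, \qquad \forall k .
\end{align}
```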

    An application of machine learning to the organization of institutional software repositories

    Software reuse has become a major goal in the development of space systems, as a recent NASA-wide workshop on the subject made clear. The Data Systems Technology Division of Goddard Space Flight Center has been working on tools and techniques for promoting reuse, in particular in the development of satellite ground support software. One of these tools is the Experiment in Libraries via Incremental Schemata and Cobweb (ElvisC). ElvisC applies machine learning to the problem of organizing a reusable software component library for efficient and reliable retrieval. In this paper we describe the background factors that have motivated this work, present the design of the system, and evaluate the results of its application.
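
The abstract names the Cobweb incremental conceptual-clustering algorithm but does not reproduce its evaluation criterion. As a hedged reference point, the Python sketch below computes Cobweb's standard category-utility score, which such a system could use to decide where a new component description fits in a concept hierarchy; the attribute/value representation and the toy data are assumptions, not ElvisC's actual schema.

```python
from collections import Counter
from typing import Dict, List

Item = Dict[str, str]  # a component described by nominal attribute/value pairs

def category_utility(partition: List[List[Item]]) -> float:
    """Cobweb-style category utility of a partition of items into clusters.

    CU = (1/K) * sum_k P(C_k) * [ sum_{a,v} P(a=v|C_k)^2 - sum_{a,v} P(a=v)^2 ]
    Higher values mean the clusters make attribute values more predictable.
    """
    items = [item for cluster in partition for item in cluster]
    n, K = len(items), len(partition)
    if n == 0 or K == 0:
        return 0.0

    def sq_prob_sum(group: List[Item]) -> float:
        # Sum over attribute/value pairs of P(attribute = value)^2 within the group.
        counts = Counter((a, v) for item in group for a, v in item.items())
        return sum((c / len(group)) ** 2 for c in counts.values())

    base = sq_prob_sum(items)  # predictability without any clustering
    gain = sum(
        (len(cluster) / n) * (sq_prob_sum(cluster) - base)
        for cluster in partition if cluster
    )
    return gain / K

# Toy usage: two clusters of reusable-component descriptions (invented attributes).
comps = [
    {"language": "Ada", "domain": "telemetry"},
    {"language": "Ada", "domain": "telemetry"},
    {"language": "C", "domain": "orbit"},
]
print(category_utility([comps[:2], comps[2:]]))
```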

    A reusable iterative optimization software library to solve combinatorial problems with approximate reasoning

    Real-world combinatorial optimization problems such as scheduling are typically too complex to solve with exact methods. In addition, the problems often have to observe vaguely specified constraints of varying importance, the available data may be uncertain, and compromises between antagonistic criteria may be necessary. We present a combination of constraints based on approximate reasoning and heuristics based on iterative optimization that helps to model and solve such problems within a framework of C++ software libraries called StarFLIP++. While the library was initially developed to schedule continuous caster units in steel plants, in this paper we present results from reusing its components in a shift scheduling system for the workforce of an industrial production plant.
    Comment: 33 pages, 9 figures; for a project overview see http://www.dbai.tuwien.ac.at/proj/StarFLIP
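
StarFLIP++ itself is a C++ library, but the core idea the abstract describes, soft constraints with graded satisfaction degrees combined with an iterative improvement heuristic, can be sketched compactly. The Python sketch below is a generic illustration under assumed names, weights and toy constraints, not StarFLIP++ code.

```python
import random
from typing import Callable, List, Tuple

# A soft constraint maps a candidate solution to a satisfaction degree in [0, 1];
# its weight encodes how important it is relative to the others.
Constraint = Tuple[float, Callable[[List[int]], float]]

def aggregate(solution: List[int], constraints: List[Constraint]) -> float:
    """Weighted average of the constraints' satisfaction degrees."""
    total_w = sum(w for w, _ in constraints)
    return sum(w * c(solution) for w, c in constraints) / total_w

def local_search(initial: List[int], constraints: List[Constraint],
                 n_values: int, iters: int = 2000, seed: int = 0) -> List[int]:
    """Iteratively perturb one decision at a time, keeping improving moves."""
    rng = random.Random(seed)
    best, best_score = list(initial), aggregate(initial, constraints)
    for _ in range(iters):
        cand = list(best)
        cand[rng.randrange(len(cand))] = rng.randrange(n_values)
        score = aggregate(cand, constraints)
        if score > best_score:
            best, best_score = cand, score
    return best

# Toy shift-assignment example: 5 workers, 3 shifts (0 = night).
constraints = [
    (2.0, lambda s: 1.0 - abs(s.count(0) - 2) / len(s)),  # "about two workers on night shift"
    (1.0, lambda s: sum(x != 0 for x in s[:2]) / 2.0),     # workers 0 and 1 prefer day shifts
]
print(local_search([0, 0, 0, 0, 0], constraints, n_values=3))
```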

    KEMNAD: A Knowledge Engineering Methodology for Negotiating Agent Development

    Automated negotiation is widely applied in various domains. However, the development of such systems is a complex knowledge- and software-engineering task, so a supporting methodology would be helpful. Unfortunately, none of the existing methodologies offers sufficient, detailed support for such system development. To remove this limitation, this paper develops a new methodology made up of: (1) a generic framework (architectural pattern) for the main task, and (2) a library of modular and reusable design patterns (templates) for subtasks. It thus becomes much easier to build a negotiating agent by assembling these standardised components rather than reinventing the wheel each time. Moreover, since these patterns are identified from a wide variety of existing negotiating agents (especially high-impact ones), they can also improve the quality of the final systems developed. In addition, our methodology reveals what types of domain knowledge need to be provided to the negotiating agents. This in turn provides a basis for developing techniques to acquire that domain knowledge from human users, which is important because negotiating agents act faithfully on behalf of their human users and thus the relevant domain knowledge must come from them. Finally, our methodology is validated on one high-impact system.
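
The paper's framework and subtask templates are not reproduced in the abstract. As a hedged illustration of what an "architectural pattern plus pluggable subtask templates" for a negotiating agent might look like, the Python sketch below separates a generic respond loop from exchangeable evaluation and counter-offer strategies; all class names, method names and numbers are assumptions, not KEMNAD's.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Offer:
    price: float

# Subtask templates: each is an exchangeable strategy plugged into the generic loop.
EvaluateFn = Callable[[Offer], float]   # utility of an incoming offer, in [0, 1]
CounterFn = Callable[[int], Offer]      # counter-offer as a function of the round number

class NegotiatingAgent:
    """Generic architectural pattern: fixed protocol loop, pluggable subtasks."""

    def __init__(self, evaluate: EvaluateFn, counter: CounterFn,
                 accept_threshold: float, max_rounds: int = 10):
        self.evaluate = evaluate
        self.counter = counter
        self.accept_threshold = accept_threshold
        self.max_rounds = max_rounds

    def respond(self, incoming: Offer, round_no: int) -> Optional[Offer]:
        """Accept (return None) if the offer is good enough, otherwise counter."""
        if self.evaluate(incoming) >= self.accept_threshold or round_no >= self.max_rounds:
            return None
        return self.counter(round_no)

# Domain knowledge supplied by the human user: target and reservation prices.
buyer = NegotiatingAgent(
    evaluate=lambda o: max(0.0, (120.0 - o.price) / (120.0 - 80.0)),  # 80 = target, 120 = limit
    counter=lambda r: Offer(price=80.0 + 4.0 * r),                    # concede 4 per round
    accept_threshold=0.5,
)
print(buyer.respond(Offer(price=110.0), round_no=1))
```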

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT changed for the better in the short period between the development of the proposal and the beginning of the project itself, with the emergence of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided, for example for more intelligent retrieval, put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so their capabilities will vary. As well as providing useful KM services in their own right, AKT aims to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together substantial expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that follow the automatic creation of ontologies (e.g. merging pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but it cannot in general be expected that standards for task (or service) specifications will exist in the medium term. The envisaged brokering meta-services will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
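
The abstract highlights ontology mapping and the elimination of conflicts of reference as key tasks. A minimal, hedged illustration of that problem, merging two small RDF graphs that describe the same person under different URIs and linking them with owl:sameAs, is sketched below using the rdflib Python library; the URIs and data are invented for the example and are not AKT resources.

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import OWL, FOAF, RDF

# Two toy knowledge bases describing the same researcher under different URIs.
EX1, EX2 = Namespace("http://siteA.example/"), Namespace("http://siteB.example/")

g1, g2 = Graph(), Graph()
g1.add((EX1.jsmith, RDF.type, FOAF.Person))
g1.add((EX1.jsmith, FOAF.name, Literal("J. Smith")))
g2.add((EX2.person42, RDF.type, FOAF.Person))
g2.add((EX2.person42, FOAF.mbox, URIRef("mailto:j.smith@example.org")))

# Naive merge: take the union of the triples from both graphs.
merged = g1 + g2

# Reference reconciliation: assert that the two URIs denote the same individual.
merged.add((EX1.jsmith, OWL.sameAs, EX2.person42))

# Without further sameAs reasoning, the facts remain split across the two URIs,
# which is exactly the co-reference problem the abstract mentions.
for person in merged.subjects(RDF.type, FOAF.Person):
    print(person, list(merged.predicate_objects(person)))
```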

    Scalable Spectrum Allocation for Large Networks Based on Sparse Optimization

    Joint allocation of spectrum and user association is considered for a large cellular network. The objective is to optimize a network utility function such as average delay given traffic statistics collected over a slow timescale. A key challenge is scalability: given n Access Points (APs), there are O(2^n) ways in which the APs can share the spectrum. The number of variables is reduced from O(2^n) to O(nk), where k is the number of users, by optimizing over local overlapping neighborhoods, defined by interference conditions, and by exploiting the existence of sparse solutions in which the spectrum is divided into k+1 segments. We reformulate the problem by optimizing the assignment of subsets of active APs to those segments. An ℓ0 constraint enforces a one-to-one mapping of subsets to spectrum, and an iterative (reweighted ℓ1) algorithm is used to find an approximate solution. Numerical results for a network with 100 APs serving several hundred users show the proposed method achieves a substantial increase in total throughput relative to benchmark schemes.
    Comment: Submitted to the IEEE International Symposium on Information Theory (ISIT), 201
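
The iterative reweighted ℓ1 scheme mentioned in the abstract is a standard way of approximating an ℓ0 (sparsity) constraint. The paper's exact formulation is not given here, so the Python sketch below shows only the generic reweighted-ℓ1 loop on a toy sparse-recovery problem using cvxpy; the problem data, number of iterations, and epsilon smoothing constant are assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Toy sparse problem: recover a sparse x from underdetermined measurements A x = b.
n, m, sparsity = 50, 20, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, sparsity, replace=False)] = rng.standard_normal(sparsity)
b = A @ x_true

# Iterative reweighted l1: minimize sum_i w_i |x_i|, then update w_i = 1 / (|x_i| + eps).
# The reweighting progressively approximates the l0 "few nonzero entries" objective.
eps = 1e-3
w = np.ones(n)
x = cp.Variable(n)
for _ in range(5):
    problem = cp.Problem(cp.Minimize(cp.sum(cp.multiply(w, cp.abs(x)))), [A @ x == b])
    problem.solve()
    w = 1.0 / (np.abs(x.value) + eps)

print("nonzeros recovered:", int(np.sum(np.abs(x.value) > 1e-4)))
```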

    Cross-Layer Optimization of Fast Video Delivery in Cache-Enabled Relaying Networks

    This paper investigates the cross-layer optimization of fast video delivery and caching to minimize the overall video delivery time in a two-hop relaying network. The half-duplex relay nodes are equipped with both a cache and a buffer, which facilitate joint scheduling of fetching and delivery, exploiting channel diversity to improve overall delivery performance. The fast delivery control is formulated as a two-stage functional non-convex optimization problem. By exploiting the underlying convex and quasi-convex structures, the developed algorithm solves the problem exactly and efficiently. Simulation results show that significant caching and buffering gains can be achieved with the proposed framework, which translate into a reduction of the overall video delivery time. In addition, a trade-off between caching and buffering gains is revealed.
    Comment: 7 pages, 4 figures; accepted for presentation at IEEE Globecom, San Diego, CA, Dec. 201
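
The paper's two-stage optimization is not reproduced in the abstract. As a hedged illustration of why a buffer at a half-duplex relay helps, the short Python simulation below compares a fixed alternating fetch/deliver schedule with a simple opportunistic rule that fetches when the source-relay link is stronger and forwards from the buffer when the relay-user link is stronger; the channel model, rates and file size are all assumptions, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(1)

def delivery_time(policy: str, file_bits: float = 1e6, slot: float = 1e-3,
                  bandwidth: float = 1e6) -> float:
    """Seconds until file_bits have reached the user via a half-duplex relay."""
    buffered = delivered = 0.0
    t = 0
    while delivered < file_bits:
        t += 1
        # Rayleigh-fading SNRs on the two hops (assumed channel model).
        snr_sr, snr_ru = rng.exponential(10.0), rng.exponential(10.0)
        bits_sr = bandwidth * np.log2(1 + snr_sr) * slot  # source -> relay this slot
        bits_ru = bandwidth * np.log2(1 + snr_ru) * slot  # relay -> user this slot
        if policy == "opportunistic":
            fetch = bits_sr >= bits_ru or buffered == 0.0
        else:  # "alternating": fetch on even slots, deliver on odd slots
            fetch = (t % 2 == 0) or buffered == 0.0
        if fetch:
            buffered += bits_sr                  # relay listens to the source
        else:
            sent = min(buffered, bits_ru)        # relay forwards from its buffer
            buffered -= sent
            delivered += sent
    return t * slot

for p in ("alternating", "opportunistic"):
    print(p, f"{delivery_time(p):.3f} s")
```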