DATA COMPRESSION USING EFFICIENT DICTIONARY SELECTION METHOD
With the increase in silicon densities, it is becoming feasible for compression systems to be implemented on-chip. A system with a distributed memory architecture is based on having data compression and decompression engines working independently on different data at the same time, with the data stored in memory distributed to each processor. The objective of the project is to design a lossless data compression system that operates at high speed to achieve a high compression rate. The parallel compressor architecture significantly improves data compression rates and is inherently scalable. The main parts of the system are the data compressors and the control blocks, which provide control signals for the data compressors and govern the routing of data into and out of the system. Each data compressor can process four bytes of data from a block in every clock cycle, so data entering the system needs to be clocked in at a rate of four bytes per clock cycle. This ensures that adequate data is available for all compressors to process, rather than leaving them idle.
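The abstract does not reproduce the dictionary selection method itself, but the underlying principle of dictionary-based lossless compression can be illustrated with a minimal LZW-style sketch in software (the paper's design is a parallel hardware architecture; this toy code only shows how a growing dictionary replaces repeated byte sequences with shorter codes):

```python
def lzw_compress(data: bytes):
    """Toy LZW-style dictionary compressor (illustrative sketch only,
    not the paper's hardware design): the longest already-seen sequence
    is emitted as a dictionary index, and the dictionary grows as the
    data streams in."""
    dictionary = {bytes([i]): i for i in range(256)}  # seed with all single bytes
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate          # keep extending the match
        else:
            output.append(dictionary[current])
            dictionary[candidate] = next_code  # learn the new sequence
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output

# Repetitive input compresses: 8 bytes become 5 codes.
print(lzw_compress(b"abababab"))  # [97, 98, 256, 258, 98]
```

A hardware realisation, as in the paper, would instead match fixed-width words (here, four bytes per clock cycle) against the dictionary in parallel across several compressor blocks.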
Revenue requirements for mobile operators with ultra-high mobile broadband data traffic growth.
Mobile broadband data access over cellular networks has become established as a major new service in just a few years. Mobile broadband penetration rose from almost zero to between 10 and 15 per cent in leading Western European markets between 2007 and the end of 2009. More than 75 per cent of network traffic was broadband data in 2009, and data volumes are growing rapidly. Revenue generation, however, shows the reverse pattern: the average European operator in 2009 drew around 77 per cent of service revenues from voice, 10 per cent from SMS and 13 per cent from other data. Voice and broadband data services are built on two quite different business models. Voice pricing is volume based: revenue depends linearly on the number of voice minutes. Broadband data service, on the other hand, is mainly flat-fee based, even if different levels and tiers are being introduced. Revenue is thus decoupled from traffic, and therefore also from operating costs and investment requirements. This is what we define as a revenue gap. Earnings as well as internal financing will suffer from increasing traffic per user unless the flat fee can be raised or changed to volume-based pricing, other revenue can be obtained, and/or operating costs and investments can be reduced accordingly. Observable trends and common forecasts indicate strong growth of mobile broadband traffic as well as declining revenue from mobile voice over the next five-year period. This outlook suggests a prospective revenue gap with weak top-line growth and expanding operating costs and investment requirements. This is not only a profitability and cash flow issue; it may also severely restrict the industry's revenue and profit growth potential if it is handled mainly by cost-cutting. In sections 2-4 we describe related work, our contribution, the specific research questions, and the methodology and its problems. Section 5 is an overview of mobile operators' revenue, its sources and its development to date.
Section 6 presents trends, developments and published forecasts that may be relevant for the future. Section 7 contains our conclusions.
Keywords: mobile broadband, mobile operator revenues, revenue requirements, voice revenues, non-voice revenues
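The core of the revenue-gap argument is arithmetic: volume-based voice revenue scales with traffic, while flat-fee data revenue does not, even as traffic-driven costs grow. The following sketch uses purely hypothetical numbers (the fee and per-gigabyte cost are assumptions for illustration, not figures from the paper):

```python
# Illustrative only: under a flat fee, revenue per subscriber stays
# constant while traffic-driven network cost grows, so the margin
# shrinks -- the "revenue gap" the paper describes.
flat_fee = 15.0     # EUR per subscriber per month (assumed)
cost_per_gb = 0.50  # network cost per GB carried (assumed)

for gb in (5, 10, 20, 40):  # traffic per user doubling over time
    cost = gb * cost_per_gb
    margin = flat_fee - cost
    print(f"{gb:>3} GB/month -> revenue {flat_fee:.2f}, "
          f"cost {cost:.2f}, margin {margin:.2f}")
# At 40 GB the margin turns negative in this toy model.
```

Volume-based voice pricing avoids this by construction, since revenue is `minutes * price_per_minute` and rises with usage, which is exactly the asymmetry the abstract highlights.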
ICOPER Project - Deliverable 4.3 ISURE: Recommendations for extending effective reuse, embodied in the ICOPER CD&R
The purpose of this document is to capture the ideas and recommendations, within and beyond the ICOPER community, concerning the reuse of learning content, including appropriate methodologies as well as established strategies for remixing and repurposing reusable resources. The overall remit of this work focuses on describing the key issues related to extending the effective reuse embodied in such materials. The objective of this investigation is to support the reuse of learning content whilst considering how it could be created and then adapted with that ‘reuse’ in mind. In these circumstances, a survey of effective-reuse best practices can provide an insight into the main challenges and benefits involved in the process of creating, remixing and repurposing what we are now designating Reusable Learning Content (RLC).
Several key issues are analysed in this report, chiefly recommendations for extending effective reuse, building upon those described in the previous related deliverables, 4.1 Content Development Methodologies and 4.2 Quality Control and Web 2.0 Technologies. The findings of the current survey provide further recommendations and strategies for using and developing this reusable learning content. In the spirit of ‘reuse’, this work also aims to serve as a foundation for the many different stakeholders and users within, and beyond, the ICOPER community who are interested in reusing learning resources.
This report analyses a variety of information. Evidence has been gathered from a qualitative survey that has focused on the technical and pedagogical recommendations suggested by a Special Interest Group (SIG) on the most innovative practices with respect to new media content authors (for content authoring or modification) and course designers (for unit creation). This extended community includes a wider collection of OER specialists. This collected evidence, in the form of video and audio interviews, has also been represented as multimedia assets potentially helpful for learning and useful as learning content in the New Media Space (See section 4 for further details).
Section 2 of this report introduces the concept of reusable learning content and reusability. Section 3 discusses an application created by the ICOPER community to enhance the opportunities for developing reusable content. Section 4 provides an overview of the methodology used for the qualitative survey. Section 5 presents a summary of thematic findings. Section 6 highlights a list of recommendations for effective reuse of educational content, derived from the thematic analysis described in Appendix A. Finally, section 7 summarises the key outcomes of this work.
Personal Knowledge Models with Semantic Technologies
Conceptual Data Structures (CDS) is a unified meta-model for representing knowledge cues in varying degrees of granularity, structuredness, and formality.
CDS consists of: (1) a simple, expressive data model; (2) a relation ontology which unifies the relations found in cognitive models of personal knowledge management tools, e.g. documents, mind-maps, hypertext, or semantic wikis; and (3) an interchange format for structured text. Implemented prototypes have been evaluated.
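The abstract names the three CDS components but not their concrete API, so the following is a hypothetical minimal sketch of the first two: knowledge cues of arbitrary granularity linked by typed relations (the class and relation names here are illustrative stand-ins, not the actual CDS relation ontology):

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A knowledge cue of any granularity: a phrase, a paragraph,
    a whole page, or a wiki node (hypothetical sketch)."""
    content: str
    relations: list = field(default_factory=list)

@dataclass
class Relation:
    """A typed link between items; the kind strings below are
    illustrative, not the paper's relation ontology."""
    kind: str    # e.g. "contains", "links-to", "annotates"
    target: Item

# A page-level cue containing a finer-grained note-level cue:
note = Item("Semantic wikis unify text and structure")
page = Item("Personal knowledge management")
page.relations.append(Relation("contains", note))
print(page.relations[0].kind)  # "contains"
```

The point of such a unified meta-model is that a mind-map edge, a hypertext link, and a document's containment hierarchy can all be expressed as the same item-relation-item triple, which is what lets one interchange format serve several tool paradigms.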
Models, Algorithms, and Architectures for Scalable Packet Classification
The growth and diversification of the Internet imposes increasing demands on the performance and functionality of network infrastructure. Routers, the devices responsible for the switching and directing of traffic in the Internet, are being called upon not only to handle increased volumes of traffic at higher speeds, but also to impose tighter security policies and provide support for a richer set of network services. This dissertation addresses the searching tasks performed by Internet routers in order to forward packets and apply network services to packets belonging to defined traffic flows. As these searching tasks must be performed for each packet traversing the router, the speed and scalability of the solutions to the route lookup and packet classification problems largely determine the realizable performance of the router, and hence of the Internet as a whole. Despite the energetic attention of the academic and corporate research communities, there remains a need for search engines that scale to support faster communication links, larger route tables and filter sets, and increasingly complex filters. The major contributions of this work include the design and analysis of a scalable hardware implementation of a Longest Prefix Matching (LPM) search engine for route lookup, a survey and taxonomy of packet classification techniques, a thorough analysis of packet classification filter sets, the design and analysis of a suite of performance evaluation tools for packet classification algorithms and devices, and a new packet classification algorithm that scales to support high-speed links and large filter sets classifying on additional packet fields.
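Longest Prefix Matching, the route-lookup primitive at the centre of the dissertation, can be sketched with a binary trie in software (the work itself designs a hardware LPM engine; this is only an illustration of the matching rule, with made-up prefixes and next-hop names):

```python
class PrefixTrie:
    """Binary trie for Longest Prefix Matching: among all route
    prefixes matching an address, the longest one wins."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits, next_hop):
        """Store a route, e.g. insert("10", "A") for prefix 10/2."""
        node = self.root
        for bit in prefix_bits:
            node = node.setdefault(bit, {})
        node["hop"] = next_hop

    def lookup(self, addr_bits):
        """Walk the address bits, remembering the last next hop seen;
        that is the longest matching prefix."""
        node, best = self.root, None
        for bit in addr_bits:
            if "hop" in node:
                best = node["hop"]
            if bit not in node:
                return best
            node = node[bit]
        return node.get("hop", best)

trie = PrefixTrie()
trie.insert("10", "A")        # coarse route 10/2 -> next hop A
trie.insert("1011", "B")      # more specific route 1011/4 -> next hop B
print(trie.lookup("101101"))  # "B": the longer prefix 1011 wins over 10
print(trie.lookup("100000"))  # "A": only the 10/2 prefix matches
```

A hardware engine replaces this bit-at-a-time walk with pipelined or parallel lookups so that one result is produced per clock cycle, which is what makes the problem a scalability question rather than a correctness one.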
Towards a standard protocol for community-driven organizations of knowledge
This paper deals with the "Web 2.0", where every user can contribute to the content, "harnessing collective intelligence". After studying what makes the success of services like Google Base, Del.icio.us and the Open Directory Project, we propose a unifying "REST" protocol for this kind of community-driven organization of knowledge. The aim is to make collaboration possible beyond the boundaries of the software and of the resulting communities.
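The abstract does not spell the protocol out, but its REST framing implies a uniform interface over community-contributed entries. The sketch below is a hypothetical in-memory stand-in for such an interface (the paths, method names, and tag-query shape are assumptions for illustration, not the paper's actual protocol):

```python
class EntryStore:
    """Hypothetical in-memory model of a REST-style interface over
    community-tagged entries, in the spirit of services like
    Del.icio.us or the Open Directory Project."""

    def __init__(self):
        self._entries = {}

    def put(self, path, entry):
        """Analogue of PUT /entries/<id>: create or replace an entry."""
        self._entries[path] = entry

    def get(self, path):
        """Analogue of GET /entries/<id>: retrieve one entry, or None."""
        return self._entries.get(path)

    def query(self, tag):
        """Analogue of GET /entries?tag=<tag>: paths carrying a tag."""
        return [p for p, e in self._entries.items()
                if tag in e.get("tags", [])]

store = EntryStore()
store.put("/entries/1", {"title": "Open Directory Project",
                         "tags": ["directory"]})
print(store.query("directory"))  # ["/entries/1"]
```

Because every entry is an addressable resource manipulated through a uniform interface, independent tools and communities can interoperate over plain HTTP, which is the cross-boundary collaboration the paper aims at.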