Latvian pension reform
In 1995, Latvia became the first country in Central and Eastern Europe to implement parametric reform of the Soviet-style PAYGO pension system, and the first in the world to implement the "notional defined contribution" (NDC) system originally designed for Sweden. The Government's intention was to follow the overhaul of the PAYGO system with the creation of a funded second tier by 1998, but the reform has lagged. Public acceptance of the new system has been poor, and pressure to roll back the reforms has grown. After such a splashy beginning, why did the Latvian reform stall? What has been the net effect of the reforms after the rollbacks? How did Latvia balance the difficult issues of system incentives, fairness, and affordability? What are the lessons of the Latvian experience with the NDC system for other reforming countries? These questions are the subject of this paper. It describes the pre-reform situation and the key provisions of the original reform, and discusses the subsequent amendments. The impact of the reform is assessed on the basis of macroeconomic and microeconomic simulations, from which the reforms are evaluated and conclusions for other countries are drawn. Keywords: Pensions & Retirement Systems; Banks & Banking Reform; Environmental Economics & Policies; Economic Theory & Research; Gender and Law
Living In the KnowlEdge Society (LIKES) Initiative and iSchools' Focus on the Information Field
In this poster, we describe the similarities between the Living In the KnowlEdge Society (LIKES) project and iSchools – both focus on the information field. This might lead to future collaborations between the two. One of the LIKES objectives is to spread computational thinking, fundamental CS/IT paradigms, key computing concepts, and ICT paradigms across the Knowledge Society. This is analogous to the iSchools' vision of education for a thorough understanding of information, IT, and their applications. In the previous three LIKES workshops, participants from various disciplines held intense discussions about the grand challenges of incorporating computing/IT into their disciplines. All iSchools have courses that teach computing- and information-related topics. If those courses can be expanded to serve other, non-computing disciplines on their campuses, drawing on the experiences of LIKES, this would further empower professionals in the iField.
Source Book on Digital Libraries
This extensive report outlines the steps necessary to create a national, electronic Science, Engineering and Technology Library. Step one is for NSF to play a lead role in launching a concerted R&D program in the area. Step two involves partnerships, cooperative ventures, and production conversion of back archives. ARPA, NASA, NIST, Library of Congress, NLM, NAI, and many other groups must become involved if we are to serve the broad base of users; the effort will only be successful if supported by top-quality research on information storage and retrieval, hypertext, document processing, human-computer interaction, scaling up of information systems, networking, multimedia systems, visualization, education, and training. NOTE: Because of its large size, this report is not available in hard copy from the department. It can be obtained electronically through anonymous FTP to fox.cs.vt.edu (in directory /pub/DigitalLibrary). To obtain a hard copy, write to Mark Roope at University Printing Services; "Documents on Demand"; Virginia Tech; Blacksburg VA 24061-0243; or call (703) 231-6701.
Open Peer to Peer Technologies
Peer-to-peer applications allow us to separate the concept of authoring information from that of publishing the same information. They allow for decentralized application design, something that is both an opportunity and a challenge.
In various ways, all peer-to-peer applications return content, choice, and control to ordinary users. Tiny end points on the Internet, sometimes without even knowing each other, exchange information and form communities. In these applications there are no longer clients and servers; instead, communication takes place between cooperating peers.
Many applications nowadays are labeled peer-to-peer. One way to decide whether an application is truly peer-to-peer is to ask who owns the hardware that the service runs on. In the case of Napster, the bulk of the hardware the system runs on is owned by Napster users, on millions of desktops, so it qualifies as peer-to-peer. Peer-to-peer is a way of decentralizing not only features, but also costs and administration. By decentralizing data, and therefore redirecting users so they download data directly from other users' computers, Napster reduced the load on its servers to the point where it could cheaply support tens of millions of users. The same principle is used in many commercial peer-to-peer systems. In short, peer-to-peer can distribute not only files but also the burden of supporting network connections. The overall bandwidth remains the same as in centralized systems, but bottlenecks are eliminated at central sites and, equally importantly, at their ISPs.
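The division of labor described above – a small central index that answers lookups while the data itself travels directly between peers – can be sketched as follows. This is a hypothetical, in-memory illustration of the idea, not Napster's actual protocol; the class and method names are invented for the example.

```python
# Sketch of the Napster-style split: the central index only answers
# "who has this file?", while the file bytes move directly between
# peers, keeping the transfer load off the central server.

class CentralIndex:
    """Knows only which peer holds which file -- never the file data."""
    def __init__(self):
        self.locations = {}  # filename -> peer offering it

    def register(self, peer, filename):
        self.locations[filename] = peer

    def lookup(self, filename):
        return self.locations.get(filename)


class Peer:
    """An end point that both offers files and downloads them."""
    def __init__(self, name, index):
        self.name = name
        self.index = index
        self.files = {}  # filename -> contents

    def share(self, filename, contents):
        self.files[filename] = contents
        self.index.register(self, filename)

    def download(self, filename):
        # Ask the index who has the file, then fetch it peer-to-peer.
        source = self.index.lookup(filename)
        if source is None:
            return None
        return source.files[filename]


index = CentralIndex()
alice = Peer("alice", index)
bob = Peer("bob", index)
alice.share("song.mp3", b"...audio bytes...")
print(bob.download("song.mp3"))  # fetched from alice, not from the index
```

Note that the index never stores or forwards file contents; only the tiny lookup traffic touches the central site, which is exactly the cost shift the abstract describes.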
Search techniques are important to making peer-to-peer systems useful, but there is a higher level of system design and system use: topics like trust, accountability, and metadata have to be handled before searching is viable.
XML for ETDs
The main objective of this project was to devise a tool/procedure to aid students at Virginia Tech in developing their electronic theses and dissertations (ETDs) in eXtensible Markup Language (XML), and to properly document all the work that was done at Virginia Tech in this regard. The project began by studying other ETD-XML projects done earlier. Both approaches (DTD and XSD) explored at Virginia Tech were studied, and an attempt was made to improve the XSD approach using VBA (Visual Basic for Applications). The proposed approach was completely implemented and documented in a way that should be easy for students to comprehend. This should help ease students' efforts to prepare theses in XML.
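As a rough illustration of what a thesis marked up in XML looks like to a processing tool, the sketch below parses a minimal, hypothetical ETD document for its metadata. The element names (etd, titleInfo, chapter, and so on) are invented for this example and are not the actual Virginia Tech ETD DTD/XSD vocabulary.

```python
# Minimal sketch: an ETD expressed in XML, parsed for its metadata.
# The element names used here are hypothetical, not the VT schema.
import xml.etree.ElementTree as ET

ETD_XML = """\
<etd>
  <front>
    <titleInfo>XML for ETDs</titleInfo>
    <author>A. Student</author>
    <degree>MS</degree>
  </front>
  <body>
    <chapter number="1">
      <heading>Introduction</heading>
      <para>Why mark up a thesis in XML rather than a word-processor format.</para>
    </chapter>
  </body>
</etd>
"""

root = ET.fromstring(ETD_XML)
title = root.findtext("front/titleInfo")
chapters = root.findall("body/chapter")
print(title)          # XML for ETDs
print(len(chapters))  # 1
```

The payoff of this structure is that the same source document can be validated against a schema and transformed into multiple presentation formats, rather than being locked into one word-processor layout.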
Calibration, validation and the NERC Airborne Remote Sensing Facility
The application of airborne and satellite remote sensing to terrestrial applications has been dominated by empirically based, semi-quantitative approaches, in contrast to those developed in the marine and atmospheric sciences, which have often developed from rigorous physically based models. Furthermore, the traceability of EO data and the methodological basis of many applications have often been taken for granted, with the result that the repeatability of analyses and the reliability of many terrestrial EO products can be questioned. 'NCAVEO' is a recently established network of Earth Observation experts and data users committed to exchanging knowledge and understanding in the area of remote sensing data calibration and validation. It aims to provide a UK-based forum to collate available knowledge and expertise associated with the calibration and validation of EO-based products from both UK and overseas providers, in different discipline areas including land, ocean, and atmosphere. This paper will introduce NCAVEO and highlight some of the contributions it hopes to make to airborne remote sensing in the UK.
Beyond Harvesting: Digital Library Components as OAI Extensions
Reusability has always been a controversial topic in Digital Library (DL) design. While componentization has gained momentum in software engineering in general, there has not yet been broad DL standardization in component interfaces. Recently, the Open Archives Initiative (OAI) has begun to address this by creating a standard protocol for accessing metadata archives. It is proposed that this protocol be extended to act as the glue that binds together the various components of a typical DL. In order to test the feasibility of this approach, a set of protocol extensions was created, implemented, and integrated as components of production and research DLs. The performance of these components was analyzed from the perspectives of execution speed, network traffic, and data consistency. On the whole, this work has revealed the feasibility of such OAI extensions for component interaction, while also identifying aspects of the OAI protocol that constrain such extensions.
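To make the "glue" role concrete, the sketch below parses a minimal OAI-PMH response of the kind a harvester, or a DL component speaking an extended protocol, would receive. The verb and namespace follow the public OAI-PMH 2.0 specification, but the sample response itself is fabricated for the example, and the paper's own protocol extensions are not reproduced here.

```python
# Sketch: parsing an OAI-PMH ListIdentifiers response.  The XML below
# is a fabricated sample in the shape of the OAI-PMH 2.0 spec; a real
# harvester would fetch it over HTTP with a request such as
#   http://archive.example.org/oai?verb=ListIdentifiers&metadataPrefix=oai_dc
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

SAMPLE_RESPONSE = """\
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <responseDate>2002-06-01T19:20:30Z</responseDate>
  <request verb="ListIdentifiers">http://archive.example.org/oai</request>
  <ListIdentifiers>
    <header>
      <identifier>oai:archive.example.org:etd-1</identifier>
      <datestamp>2002-05-01</datestamp>
    </header>
    <header>
      <identifier>oai:archive.example.org:etd-2</identifier>
      <datestamp>2002-05-02</datestamp>
    </header>
  </ListIdentifiers>
</OAI-PMH>
"""

root = ET.fromstring(SAMPLE_RESPONSE)
identifiers = [
    header.findtext(OAI_NS + "identifier")
    for header in root.iter(OAI_NS + "header")
]
print(identifiers)
```

Because every component speaks the same request/response shapes, a search service, a browse service, and a harvester can all be wired to the same archive with no component-specific interface code, which is the reuse argument the abstract makes.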
Crawling on the World Wide Web
As the World Wide Web grows rapidly, web search engines are needed for people to search through the Web. The crawler is an important module of a web search engine, and its quality directly affects the search quality of the engine. Given some seed URLs, the crawler should retrieve the web pages at those URLs, parse the HTML files, add the new URLs it finds to its buffer, and go back to the first phase of this cycle. The crawler can also extract other information from the HTML files as it parses them for new URLs. This paper describes the design, implementation, and some considerations of a new crawler, programmed as a learning exercise and for possible use in experimental studies.
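The fetch–parse–enqueue cycle described above can be sketched as follows. To keep the example self-contained, a small in-memory "web" (a dict of URL to HTML) stands in for real HTTP fetching; the page contents and helper names are invented for the example, while the link extraction and the visited-set/queue discipline mirror the cycle the abstract describes.

```python
# Sketch of the crawl cycle: pop a URL from the buffer, "fetch" its
# page, parse out links, enqueue unseen URLs, and repeat.  A real
# crawler would replace FAKE_WEB with HTTP requests (and add
# politeness delays, robots.txt handling, error recovery, etc.).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, resolved against base_url."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))


FAKE_WEB = {  # hypothetical pages standing in for HTTP fetches
    "http://example.org/": '<a href="a.html">A</a> <a href="b.html">B</a>',
    "http://example.org/a.html": '<a href="/">home</a>',
    "http://example.org/b.html": "no links here",
}


def crawl(seed_urls):
    frontier = deque(seed_urls)       # the crawler's URL buffer
    visited = set()
    while frontier:
        url = frontier.popleft()
        if url in visited or url not in FAKE_WEB:
            continue                  # already seen, or unreachable
        visited.add(url)
        extractor = LinkExtractor(url)
        extractor.feed(FAKE_WEB[url])      # parse the HTML for new URLs
        frontier.extend(extractor.links)   # back to the first phase
    return visited


print(sorted(crawl(["http://example.org/"])))
```

The visited set is what keeps the cycle from looping forever on pages that link back to each other, and the same loop is where a crawler would hang any extra per-page extraction the abstract mentions.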
