TwiddleNet: metadata tagging and data dissemination in mobile device networks
Mobile devices are no longer the simple communication tools they were only a few years ago; instead they offer a range of content capture capabilities, including high-resolution photos, videos, and sound recordings. Their communication modalities and processing power have also evolved significantly. Modern mobile devices are very capable platforms, many surpassing desktop machines of only a few years ago. TwiddleNet is a distributed architecture of personal servers that harnesses the power of these mobile devices, enabling real-time information dissemination and file sharing of multiple data types from commercial off-the-shelf platforms. This thesis focuses on two specific issues of the TwiddleNet design: metadata tagging and data dissemination. Through a combination of automatically generated and user-input metadata tag values, TwiddleNet users can locate files across participating devices. Metaphor-appropriate custom tags can be added as needed to ensure efficient, rich, and successful file searches. Intelligent data dissemination algorithms provide context-sensitive governance of the file transfer scheme. Smart dissemination reconciles device and operational states with the amount of requested data and content to send, enabling providers to meet their most pressing needs, whether that is continuing to generate content or servicing requests.
http://archive.org/details/twiddlenetmetada109453333
US Navy (USN) author. Approved for public release; distribution is unlimited.
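The thesis defines TwiddleNet's actual tag schema; purely as an illustration of the idea of pairing automatically generated values with user-supplied custom tags, the following Python sketch builds a searchable tag record (every field name here is hypothetical, not TwiddleNet's):

    # Hypothetical sketch of a TwiddleNet-style tag record; all field
    # names are illustrative, not the schema defined in the thesis.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class FileTags:
        # Automatically generated at capture time
        filename: str
        media_type: str        # e.g. "photo", "video", "audio"
        captured_at: str
        device_id: str
        size_bytes: int
        # Free-form, user-supplied custom tags
        custom: dict = field(default_factory=dict)

    def make_tags(filename, media_type, device_id, size_bytes, **custom):
        """Build a tag record; keyword arguments become searchable custom tags."""
        return FileTags(filename, media_type,
                        datetime.now(timezone.utc).isoformat(),
                        device_id, size_bytes, custom)

    tags = make_tags("IMG_0042.jpg", "photo", "device-17", 2_457_600,
                     event="field-exercise", location="pier 3")

A search service could then match queries against both the generated fields and the custom tag dictionary.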
Emerging technologies for learning (volume 1)
Collection of 5 articles on emerging technologies and trends
A knowledge services roadmap for online learning
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2005. Includes bibliographical references (p. 76-78).
In today's society, organizations need a robust knowledge infrastructure so that they can create or acquire knowledge, store it, disseminate it, and protect and manage their knowledge assets. However, with advances in publishing media, our ability to generate information has far exceeded our ability to find, review, and understand it, leading to "information overload": the inability to extract needed knowledge from existing information because of its sheer volume, a lack of understanding of the information and its whereabouts, or the absence of efficient ways to locate relevant information. These issues could be addressed by efficient knowledge management systems and knowledge services, so that people can create and understand available information and have services that help them learn effectively and make better decisions. To meet these new information needs, technologies such as weblog services (weblog-enabled knowledge services) offer opportunities for decentralized knowledge creation and dissemination, since such tools put authors in charge of the knowledge creation process without administration-enforced policies. Learning environments are also typically characterized by challenges such as barriers to use, quality control and relevance issues, and questions about the credibility of information. Weblog services tackle these issues effectively, since weblogs are often open source and require no training to author. In addition, favorite blogs act as information filters or "bird dogs" that point at useful information. Feedback incorporated in weblog services makes people react and learn interactively, and it also enhances credibility and trust in information. Weblog services can also share published content through the process of content syndication, and thus offer insight into knowledge assets in the timeliest of ways. This thesis describes several weblog service implementations carried out at MIT. The results of these implementations have emphasized the applications of such weblog (knowledge) services in knowledge sharing and online learning. However, certain issues in weblog services remain to be addressed, such as privacy and intellectual property, as well as the resolution of organizational tussles in the domain of content syndication standards.
by Anand Rajagopal. S.M.
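The content syndication mentioned in this abstract typically means exposing weblog entries as a machine-readable feed. As a minimal illustration (the feed fields and post data are invented, not taken from the thesis), this sketch publishes entries as RSS 2.0:

    # Minimal content-syndication sketch: publishing weblog entries as an
    # RSS 2.0 feed that other services can harvest. The feed fields and
    # post data are invented for illustration, not taken from the thesis.
    import xml.etree.ElementTree as ET

    posts = [
        {"title": "Knowledge services at MIT",
         "link": "http://example.edu/blog/1",
         "description": "Notes on weblog-enabled knowledge services."},
    ]

    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Example knowledge weblog"
    ET.SubElement(channel, "link").text = "http://example.edu/blog"
    ET.SubElement(channel, "description").text = "Decentralized knowledge sharing"

    for post in posts:
        item = ET.SubElement(channel, "item")
        for tag in ("title", "link", "description"):
            ET.SubElement(item, tag).text = post[tag]

    print(ET.tostring(rss, encoding="unicode"))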
CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective.
The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmarking initiatives that measure the performance of multimedia search engines.
From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.
A case study at Cisco Systems, Inc.
This research aims to provide a better understanding of how firms stimulate knowledge sharing through the use of collaboration tools, in particular Emergent Social Software Platforms (ESSPs). It focuses on the distinctive applications of ESSPs and on the initiatives that contribute to maximizing their advantages.
In the first part of the research, I have itemized all types of existing collaboration tools and classified them into categories according to their capabilities, their objectives, and their capacity for promoting knowledge sharing. In the second part, based on an exploratory case study at Cisco Systems, I have identified the main applications of an existing enterprise social software platform named Webex Social.
By combining qualitative and quantitative approaches, and by combining data collected from survey results with analysis of the company's documents, I expect to maximize the outcome of this investigation and reduce the risk of bias.
Although effects cannot be generalized from a single case study, some utilization patterns emerge from the data collected, and potential trends in managing knowledge have been observed. The results of the research have also enabled identifying most of the constraints experienced by users of the firm's social software platform.
Ultimately, this research should provide a preliminary framework for firms planning to create or implement a social software platform and for firms seeking to increase adoption levels and promote overall user participation. It highlights the common traps that developers should avoid when designing a social software platform and the capabilities it should inherently carry to support an effective knowledge management strategy.
Federating Heterogeneous Digital Libraries by Metadata Harvesting
This dissertation studies the challenges and issues faced in federating heterogeneous digital libraries (DLs) by metadata harvesting. The objective of federation is to provide high-level services (e.g., transparent search across all DLs) on the collective metadata from different digital libraries. There are two main approaches to federating DLs: the distributed searching approach and the harvesting approach. Because the distributed searching approach relies on executing queries against digital libraries in real time, it has problems with scalability. The difficulty of creating a distributed searching service for a large federation is the motivation behind the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). OAI-PMH supports both data providers (repositories, archives) and service providers. Service providers develop value-added services based on the information collected from data providers; data providers are simply collections of harvestable metadata. This dissertation examines the application of the metadata harvesting approach to DL federations. It addresses the following problems: (1) whether metadata harvesting provides a realistic and scalable solution for DL federation; (2) the status of, and problems with, current data provider implementations, and how to solve those problems; (3) how to synchronize data providers and service providers; (4) how to build different types of federation services over harvested metadata; and (5) how to create a scalable and reliable infrastructure to support federation services. The work in this dissertation is based on OAI-PMH, and the results have influenced the evolution of OAI-PMH; however, the results are not limited to the scope of OAI-PMH. Our approach is to design and build key services for metadata harvesting and to deploy them on the Web. Implementing a publicly available service allows us to demonstrate that these approaches are practical. The problems posed above are evaluated by performing experiments over these services.
To summarize the results of this thesis, we conclude that metadata harvesting is a realistic and scalable approach to federating heterogeneous DLs. We present two models for building federation services: a centralized model and a replicated model. Our experiments also demonstrate that the repository synchronization problem can be addressed by push, pull, and hybrid push/pull models; each model has its strengths and weaknesses and fits a specific scenario. Finally, we present a scalable and reliable infrastructure to support the applications of metadata harvesting.
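OAI-PMH itself is a simple HTTP-based protocol: a harvester issues GET requests with a verb such as ListRecords and follows resumption tokens to page through a repository's metadata. A minimal sketch of that harvesting loop (the endpoint URL is a placeholder, not one of the services built for this dissertation):

    # Minimal OAI-PMH harvesting loop using the protocol's ListRecords
    # verb and resumption tokens; the endpoint is a placeholder.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

    def harvest(base_url, metadata_prefix="oai_dc"):
        """Yield every <record> element, following resumption tokens."""
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as response:
                tree = ET.parse(response)
            for record in tree.iter(OAI_NS + "record"):
                yield record
            token = tree.find(f".//{OAI_NS}resumptionToken")
            if token is None or not (token.text or "").strip():
                break  # last page: no token, or an empty one
            # Per the spec, later requests carry only the token
            params = {"verb": "ListRecords",
                      "resumptionToken": token.text.strip()}

    # for record in harvest("http://example.org/oai"):
    #     ...process one harvested metadata record...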
Social media: a new frontier for retailers?
For the last two decades, the retailing industry has found itself in a state of constant evolution and transformation. Globalization, mergers and acquisitions, and technological developments have drastically changed the retailing landscape. The explosive growth of the Internet has been one of the main catalysts in this process. The effects of the Internet have been felt mostly in retail sectors dealing mainly with intangibles or information products, but they are not likely to remain limited to those sectors: increasingly, retailers of physical products realize that the empowered, sophisticated, critical, and well-informed consumer of today is essentially different from the consumer they have always known. The web, and particularly what is known as Social Media or Web 2.0, has given consumers much more control, information, and power over the market process, confronting retailers with a number of important dilemmas and challenges. This article explains what the new face of the Internet, widely referred to as Web 2.0 or Social Media, is; identifies its importance as a strategic marketing tool; and proposes a number of alternative strategies for retailers. Implementing such strategies will allow retailers not only to survive, but also to create competitive advantages and thrive in the new environment.
An Analysis of Data Quality Defects in Podcasting Systems
Podcasting has emerged as an asynchronous, delay-tolerant method for the distribution of multimedia files through a network. Although podcasting has become a popular Internet application, users encounter frequent information quality problems in podcasting systems. To better understand the severity of these quality problems, we have applied the Total Data Quality Management methodology to podcasting. Through the application of this methodology we have quantified the data quality problems inherent in podcasting metadata, and performed an analysis that maps specific metadata defects to failures in popular commercial podcasting platforms. Furthermore, we extracted the Really Simple Syndication (RSS) feeds from the iTunes catalog in order to perform the most comprehensive measurement of podcasting metadata to date. From these findings we attempted to improve the quality of podcasting data through the creation of a metadata validation tool, PodCop. PodCop extends existing RSS validation tools and encapsulates validation rules specific to the context of podcasting. We believe PodCop is the first attempt at improving the overall health of the podcasting ecosystem.
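PodCop's actual rule set is described in the paper; to give a flavor of the kind of metadata defect such a validator catches, the sketch below flags podcast items whose <enclosure> element (the RSS field podcast clients rely on to locate the media file) is missing or malformed. The specific checks are illustrative assumptions, not PodCop's rules:

    # Illustrative podcast-feed checks in the spirit of PodCop: flag
    # items with missing or malformed <enclosure> metadata. These rules
    # are examples only, not PodCop's actual rule set.
    import xml.etree.ElementTree as ET

    def check_feed(rss_xml):
        """Return a list of human-readable defect reports for an RSS feed."""
        defects = []
        root = ET.fromstring(rss_xml)
        for i, item in enumerate(root.iter("item"), start=1):
            if item.findtext("title") is None:
                defects.append(f"item {i}: missing <title>")
            enclosure = item.find("enclosure")
            if enclosure is None:
                defects.append(f"item {i}: missing <enclosure>")
                continue
            # RSS 2.0 requires all three enclosure attributes
            for attr in ("url", "length", "type"):
                if not enclosure.get(attr):
                    defects.append(f"item {i}: <enclosure> lacks '{attr}'")
            length = enclosure.get("length", "")
            if length and not length.isdigit():
                defects.append(f"item {i}: enclosure length '{length}' "
                               "is not a number")
        return defects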