
    Data Management for Dynamic Multimedia Analytics and Retrieval

    Multimedia data in its various manifestations poses a unique challenge from a data storage and data management perspective, especially when search, analysis and analytics in large data corpora are considered. The inherently unstructured nature of the data itself and the curse of dimensionality that afflicts the representations we typically work with in its stead are the cause of a broad range of issues that require sophisticated solutions at different levels. This has given rise to a large body of research focused on techniques that allow for effective and efficient multimedia search and exploration. Many of these contributions have led to an array of purpose-built multimedia search systems. However, recent progress in multimedia analytics and interactive multimedia retrieval has demonstrated that several of the assumptions usually made for such multimedia search workloads do not hold once a session has a human user in the loop. Firstly, many of the required query operations cannot be expressed by mere similarity search, and since the concrete requirements cannot always be anticipated, a flexible and adaptable data management and query framework is needed. Secondly, the widespread assumption that data collections are static does not hold for analytics workloads, whose purpose is to produce and store new insights and information. And finally, even an expert user cannot specify exactly how a data management system should arrive at the desired outcomes of the potentially many different queries. Guided by these shortcomings and motivated by the fact that similar questions were once answered for structured data in classical database research, this thesis presents three contributions that seek to mitigate the aforementioned issues. We present a query model that generalises the notion of proximity-based query operations and formalises the connection between those queries and high-dimensional indexing. We complement this with a cost model that makes the often implicit trade-off between query execution speed and result quality transparent to the system and the user. And we describe a model for the transactional and durable maintenance of high-dimensional index structures. All contributions are implemented in the open-source multimedia database system Cottontail DB, on top of which we present an evaluation that demonstrates the effectiveness of the proposed models. We conclude by discussing avenues for future research in the quest to converge the fields of databases on the one hand and (interactive) multimedia retrieval and analytics on the other.
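
    The abstract does not give the thesis's actual query or cost-model interfaces, so the following Python snippet is only a minimal sketch of the kind of proximity-based (k-nearest-neighbour) operation the query model generalises; the function knn_scan and its sample_fraction knob, which stands in for the cost model's speed-versus-quality trade-off, are hypothetical and not part of Cottontail DB's API.

        import numpy as np

        def knn_scan(vectors, query, k, sample_fraction=1.0):
            """Hypothetical proximity-based query operator: return the indices and
            distances of the k vectors closest to `query` under Euclidean distance.
            A sample_fraction below 1.0 scans only a random subset of the collection,
            trading result quality for speed, which is the kind of decision a cost
            model could make explicit to the system and the user."""
            n = len(vectors)
            if sample_fraction < 1.0:
                idx = np.random.choice(n, size=max(k, int(n * sample_fraction)), replace=False)
            else:
                idx = np.arange(n)
            dists = np.linalg.norm(vectors[idx] - query, axis=1)
            order = np.argsort(dists)[:k]
            return idx[order], dists[order]

        # Toy usage: 10,000 random 128-dimensional feature vectors.
        rng = np.random.default_rng(0)
        features = rng.normal(size=(10_000, 128))
        q = rng.normal(size=128)
        exact_ids, _ = knn_scan(features, q, k=5)                        # exact, slower
        approx_ids, _ = knn_scan(features, q, k=5, sample_fraction=0.1)  # approximate, faster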

    A multimedia information exchange of the industrial heritage of the Lower Lee Valley.

    The Lee Valley Industrial Heritage Electronic Archive (LVIHEA) is a model record of industrial buildings composed as a composite of multimedia data files relevant to the interpretation of the region's dynamic industrial environment. The design criteria concerning natural, human and artificial resources are applicable to education and heritage management strategies. The prototype model was evaluated in terms of its efficacy and effectiveness with designated user groups. The developed model will enable qualitative and quantitative analyses concerning the economic, social and industrial history of the region. It can be used as a pedagogic tool for instruction in the principles of structured data design, construction, storage and retrieval, and for techniques of data collection. Furthermore, the data sets can be closely analysed and manipulated for interpretative purposes. Chapter one attempts to define the Lee Valley in terms of its geographic, historical, economic and societal context. The aims and resources of the project are outlined and the study is placed in the bibliographic context of similar studies. Thereafter it addresses the processes leading to the prototype model and describes its structure. A paper model is presented, and the data structures conforming to or compatible with established planning, archiving and management protocols and strategies are described and evaluated. Chapter two is a detailed description and rationale of the archive's data files and teaching and learning package. It outlines procedures of multimedia data collection and digitisation and provides an evaluative analysis. Chapter three looks at the completed prototype and reviews the soft systems methodology approach to problem analysis used throughout the project. Sections examining the LVIHEA in use and the practical issues of disseminating it follow. The chapter concludes by reviewing the significance of the research and indicates possible directions for further research. The survey is artefact rather than document led and begins with the contemporary landscape before "excavating" to reveal first the recent and then the more distant past. However, many choices for inclusion are necessarily reactive rather than proactive in response to the regular "crises" where conservation is just one consideration in a complex development. Progressive strategies are sometimes sacrificed for the immediate opportunity to record information concerning an artefact under imminent threat of destruction. It is acknowledged that the artefact (building) would usually disappear before its associated documentation and that it was therefore imperative to obtain as much basic detail as possible about as many sites as possible. It is hoped that greater depth can be achieved by tracking down the documentation to its repositories when time permits. Amenity groups had already focussed their attention on many of the more "interesting" sites and every opportunity was taken to incorporate their findings into the LVIHEA. This study provides an insight into the cycle of development and decline of an internationally important industrial landscape. It does so in a structured environment incorporating modern digital technology while providing a framework for continuing study.
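
    The abstract does not specify the archive's record layout; purely as an illustrative sketch of the kind of composite, multimedia site record it describes, an entry could be modelled along the following lines in Python (all field names and sample values are invented, not taken from the LVIHEA).

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class MediaFile:
            """One multimedia asset attached to a site record (photograph, plan, audio, video)."""
            path: str
            media_type: str        # e.g. "image", "drawing", "video"
            caption: str = ""

        @dataclass
        class SiteRecord:
            """Hypothetical composite record for one industrial building in the archive."""
            site_id: str
            name: str
            location: str                    # e.g. grid reference or address
            period: str                      # e.g. "c. 1890-1965"
            status: str                      # e.g. "standing", "at risk", "demolished"
            media: List[MediaFile] = field(default_factory=list)
            sources: List[str] = field(default_factory=list)   # repositories holding related documentation

        # Example entry (fictional data).
        mill = SiteRecord("LV-0042", "Example Mill", "Lower Lee Valley", "c. 1900-1970", "at risk")
        mill.media.append(MediaFile("img/lv-0042_front.jpg", "image", "Street elevation, field survey"))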

    SNAP, Crackle, WebWindows!

    We elaborate the SNAP view of computing in the year 2000, where SNAP stands for Scalable (ATM) Network and (PC) Platforms. The World Wide Web will continue its rapid evolution, and in the future, applications will not be written for Windows NT/95 or UNIX, but rather for WebWindows, with interfaces defined by the standards of Web servers and clients. This universal environment will support WebTop productivity tools, such as WebWord, WebLotus123, and WebNotes, built in a modular, dynamic fashion and undermining the business model for large software companies. We define a layered WebWindows software architecture in which applications are built on top of multi-use services. We discuss examples including business enterprise systems (IntraNets), health care, financial services and education. HPCC is implicit throughout this discussion, for there is no larger parallel system than the World Wide metacomputer. We suggest building the MPP programming environment in terms of pervasive, sustainable WebWindows technologies. In particular, WebFlow will naturally support dataflow, integrating data- and compute-intensive applications on distributed heterogeneous systems.
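
    WebFlow itself is only named in this abstract, so the following is merely a small Python sketch of the dataflow style it alludes to, in which independent modules are wired so that data flows from a producer through a transformer to a consumer; the function names are illustrative and not part of WebFlow.

        from typing import Callable, Iterable, Iterator

        def source(items: Iterable) -> Iterator:
            """Produce raw data items (a stand-in for a data-intensive module)."""
            yield from items

        def transform(stream: Iterator, fn: Callable) -> Iterator:
            """Apply a compute-intensive step to each item as it flows through."""
            for item in stream:
                yield fn(item)

        def sink(stream: Iterator) -> list:
            """Collect the results at the end of the dataflow pipeline."""
            return list(stream)

        # Wire three modules into a simple dataflow chain: source -> transform -> sink.
        result = sink(transform(source(range(5)), lambda x: x * x))
        print(result)   # [0, 1, 4, 9, 16]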

    A query processing system for very large spatial databases using a new map algebra

    Résumé (translated from the French): In this thesis we introduce a query processing approach for spatial databases and explain the main concepts we defined and developed: a spatial algebra and a graph-based approach used in the optimizer. The spatial algebra is defined to express queries and transformation rules during the different stages of query optimization. We tried to define the algebra as completely as possible in order to cover a wide variety of applications. The algebraic operators receive and produce only maps, while functions receive maps and produce scalars or objects. The optimizer receives the query as an algebraic expression and produces an efficient QEP (Query Evaluation Plan) in two stages: QEG (Query Evaluation Graph) generation and QEP generation. In the first stage, a graph (QEG) equivalent to the algebraic expression is produced, and transformation rules are used to transform it into a more efficient equivalent. In the second stage, a QEP is produced from the QEG passed on from the previous stage. The QEP is a sequence of primitive operations that produces the final results (the final answer to the query submitted to the database). We implemented the optimizer, a random spatial query generator and a simulated spatial database. The simulated spatial database is a set of functions that simulate primitive spatial operations. Random queries are submitted to the optimizer, and the generated QEPs are passed to the spatial database simulator. The experimental results are used to discuss the performance and characteristics of the optimizer.

    Abstract: In this thesis we introduce a query processing approach for spatial databases and explain the main concepts we defined and developed: a spatial algebra and a graph-based approach used in the optimizer. The spatial algebra was defined to express queries and transformation rules during the different steps of query optimization. To cover a vast variety of potential applications, we tried to define the algebra as complete as possible. The algebra looks at the spatial data as maps of spatial objects. The algebraic operators act on the maps and result in new maps. Aggregate functions can act on maps and objects and produce objects or basic values (characters, numbers, etc.). The optimizer receives the query as an algebraic expression and produces one efficient QEP (Query Evaluation Plan) through two main consecutive blocks: QEG (Query Evaluation Graph) generation and QEP generation. In QEG generation we construct a graph equivalent of the algebraic expression and then apply graph transformation rules to produce one efficient QEG. In QEP generation we receive the efficient QEG, perform predicate ordering and approximation, and then generate the efficient QEP. The QEP is a set of consecutive phases that must be executed in the specified order. Each phase consists of one or more primitive operations, and all primitive operations in the same phase can be executed in parallel. We implemented the optimizer, a random spatial query generator and a simulated spatial database. The query generator produces random queries for the purpose of testing the optimizer. The simulated spatial database is a set of functions that simulate primitive spatial operations; they return the cost of the corresponding primitive operation according to the input parameters. We submitted randomly generated queries to the optimizer, obtained the generated QEPs and passed them to the spatial database simulator. We used the experimental results to discuss the optimizer's characteristics and performance. The optimizer was designed for databases with a very large number of spatial objects; nevertheless, most of the concepts we used can be applied to all spatial information systems.
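
    The thesis's plan representation is only summarised above; as an illustrative sketch of the stated idea that a QEP is an ordered sequence of phases whose primitive operations may run in parallel, it could be modelled as follows in Python (operation names and costs are invented, standing in for the costs returned by the database simulator).

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class PrimitiveOp:
            """One primitive spatial operation with a simulated execution cost."""
            name: str
            cost: float

        @dataclass
        class Phase:
            """Operations within a phase are independent and may run in parallel,
            so the phase cost is the cost of its slowest operation."""
            ops: List[PrimitiveOp]

            def cost(self) -> float:
                return max(op.cost for op in self.ops)

        @dataclass
        class QEP:
            """A query evaluation plan: phases executed strictly in the given order."""
            phases: List[Phase]

            def cost(self) -> float:
                return sum(phase.cost() for phase in self.phases)

        # Hypothetical plan: filter two maps in parallel, then join the results.
        plan = QEP([
            Phase([PrimitiveOp("filter_roads", 3.0), PrimitiveOp("filter_parcels", 4.0)]),
            Phase([PrimitiveOp("spatial_join", 7.5)]),
        ])
        print(plan.cost())   # 4.0 + 7.5 = 11.5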

    Professional English. Fundamentals of Software Engineering

    The textbook contains original texts of professional content, accompanied by a thematic terminological vocabulary and by exercises of varied methodological focus. It is intended for students enrolled in the degree programmes "Software Engineering", "Computer Science" and "Computer Engineering".

    Using hypermedia to improve the dissemination and accessibility of syllabus documents with particular reference to primary mathematics

    The fundamental question that this study set out to investigate was: can the advantages of hypermedia be extended to curriculum materials that are for the sole use of teachers? To consider this question, three areas needed to be investigated: hypermedia (the medium), teachers (the target) and curriculum documents (the content). Hypermedia has a long history dating back to Bush (1986), who in 1945 imagined his Memex system as building information trails between ideas. However, it was not until the mid 1980s that technology caught up with the theory and hypermedia came of age. The evaluation of hypermedia documents is still in its infancy and design standards are still being formulated. Social acceptability and usability will be of major concern in the evaluation process of hypermedia. Therefore this study needed to investigate whether this medium of presentation is socially acceptable to teachers. Advances in Information Technology (IT), both in hardware and software, in the last few years have brought the potential of hypermedia to the personal computer (PC). Information, be it text, sound, graphics or video, or a mixture of these, can now be presented on the same screen and the movement between screens can be seamless. The movement between screens is no longer limited to sequential movement, as it is when the information is presented in hard copy form, but can be randomly accessed. This access allows users to move about the information as they would move about within their own minds, that is, by association. Commercial hypermedia products are already being produced for the education and leisure markets. Teachers' workloads are increasing as they take on more curriculum responsibilities, while at the same time information is expanding at a rapid rate. The challenge today is to encourage teachers to use new information technology to overcome these problems. However, since their introduction into schools fifteen years ago, computers have not delivered the results that had been expected of them. Can access to hypermedia curriculum documents help teachers to lessen their workload and encourage them to use IT? Firstly, it is important to consider whether curriculum materials for teacher use are suitable for hypermedia presentation. The literature indicated that textual materials that are not meant to be read sequentially like a novel are suitable for presentation in hypermedia form. At present, curriculum materials for teachers contain the content in hard copy form but the presentation is lacking in quality. This hard copy material is expensive, hard to correct and slow to update. Hypermedia offers the potential to overcome these limitations and to provide easy access to much more information. This new medium could allow teachers for the first time to truly integrate their teaching programme by enabling them to access multiple curriculum documents. The methodology used in this study was based on two types of descriptive research: survey and correlation methods. The target population for this study was all K-7 teachers using the Western Australia Mathematics syllabus within Western Australia. The instrument was a mailed survey questionnaire that consisted of five parts. The first part collected personal data such as age and gender. The second part was the Computer Attitude Scale (CAS), designed by Loyd and Gressard (1984), and was used to measure attitudes towards learning and using computers.
    The third part consisted of questions that asked teachers for their views and impressions on the social acceptability and utility of the present hard copy. The fourth part consisted of questions on computer experience and use, both in and outside the classroom. The final part consisted of questions on the likely acceptance and usefulness of a hypermedia copy of the syllabus. This study found that the likely medium-based anxiety for this type of application is low for the teachers sampled, with 70 percent indicating that they were likely to accept this type of application. The findings indicated that the acceptance rate increased as the teachers' positive attitude towards computers increased. Teachers who rated themselves competent at using a computer were also more likely to accept this type of application. Time spent using a computer at school showed that teachers who used one frequently, at least several times a week, were more likely to accept this type of application. The study also found that the majority of teachers sampled considered the ability to link the syllabus to other teaching material to be very useful. Many of the problems identified by the teachers sampled concerning the usability of the present hard copy could be overcome using a hypermedia version.
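
    The hypermedia syllabus itself is described only in outline; as a small illustration of access by association rather than by fixed page order, syllabus screens and their links could be sketched as follows in Python (the screen titles and structure are hypothetical, not taken from the study).

        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class Screen:
            """One hypermedia screen: content plus associative links to other screens."""
            title: str
            content: str
            links: List[str] = field(default_factory=list)   # titles of linked screens

        syllabus: Dict[str, Screen] = {
            "Number: Place Value": Screen("Number: Place Value", "Outcomes and sample activities...",
                                          ["Assessment Ideas", "Resource Sheets"]),
            "Assessment Ideas": Screen("Assessment Ideas", "Observation checklists...",
                                       ["Number: Place Value"]),
            "Resource Sheets": Screen("Resource Sheets", "Printable materials..."),
        }

        # Follow a link by association rather than by sequential page order.
        current = syllabus["Number: Place Value"]
        linked = syllabus[current.links[0]]
        print(linked.title)   # Assessment Ideas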

    Computer animation data management: Review of evolution phases and emerging issues

    The computer animation industry has been booming and prospering over the past thirty years. One of the significant changes faced by this industry is the evolution of computer-animation data, and yet extant literature has offered very little insight into the evolution process and the management issues pertinent to computer-animation data. Hence, many questions have surfaced in the extant literature on computer-animation data management. For example, to what extent has the data content expanded in terms of quantity and quality? To what extent has the information technology used to store and process the data changed? To what extent have the user and community groups diversified in terms of their nature and number? Knowledge pertaining to these issues can provide new research directions to academics and also insights to practitioners for more effective and innovative management of computer-animation data. This conceptual paper therefore takes a pioneering step towards addressing these issues by proposing four factors for examining the evolution phases associated with computer-animation data management: technology, content, users, and community. Next, this paper presents a conceptual framework illustrating the interdependent relationships between these four factors together with the associated theoretical and managerial issues. This paper, albeit limited by its conceptual nature, advances the extant literature on computer animation, information systems, and the open-product model.