    Creeping decay: cult soundtracks, residual media, and digital technologies

    This paper explores the recent resurgence in the collecting of cult film soundtracks, in particular soundtracks of films from the late 1960s to the early 1980s, often linked to horror and other modes of exploitation cinema. I consider this phenomenon an important component of cult film fandom, but one which has largely been overlooked in cult cinema research because it is often considered as belonging to popular music research rather than film research. As films can become cultified in many different ways and across different media, I look into how areas of music culture can both be inspired by, as well as influence, aspects of film culture. The paper also addresses the importance of ‘residual’ technologies within cult film/music cultures, noting in particular the preference for vinyl records as well as VHS tapes in certain cult fan communities, and explores the appeal that such ‘old media’ retain within an increasingly digital mediascape.

    Day by Day Care Newsletter: October 1979 - June 1980

    Issues of Day by Day Care Newsletter, October 1979 through Fall 1980.

    Findit - Design and implementation of a mobile application for public transport

    The purpose of this master’s thesis was to examine existing augmented reality technologies and develop an application for Windows Phone. The goal of the application was to find new ways to use augmented reality technologies together with public transport. The end user would use the application to search for and find new journey options. The development process consisted of four steps: investigation, design, implementation and testing. The investigation was done by looking at current public transport and augmented reality applications for smartphones, comparing them and seeing which functionality could be used or improved. Design consisted of scenarios, low-fi prototyping and storyboards, where the author followed Microsoft’s own guidelines for designing and developing a Windows Phone application. The application was implemented for both Windows Phone 7 and 8, and a comparison was made between the two systems. The last step was testing, which was done through exploration and performance testing. The result is an application and a video. It became clear that augmented reality can be used and the technology for it is here, but the application still has a long way to go in terms of testing and further consultation with users.

    Applying Object Detection to Marine Data and Exploring Explainability of a Fully Convolutional Neural Network Using Principal Component Analysis

    With the rise of focus on man-made changes to our planet and the wildlife therein, more and more emphasis is put on sustainable and responsible gathering of resources. In an effort to preserve maritime wildlife, the Norwegian government decided to create an overview of the presence and abundance of various species of marine life in the Norwegian fjords and oceans. The current work evaluates the possibility of utilizing machine learning methods, in particular the You Only Look Once version 3 (YOLOv3) algorithm, to detect fish in challenging conditions characterized by low light, undesirable algae growth and high noise. It was found that the algorithm, trained on images collected during the daytime under natural light, could successfully detect fish in images collected at night under artificial lighting. An overall average precision score of 88% was achieved. Principal component analysis was then used to analyze the features learned in different layers of the network. It is concluded that, for the purpose of object detection in specific application areas, the network can be considerably simplified, since many of the feature detectors turn out to be redundant.
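    The layer-wise analysis described above can be illustrated with a short sketch: activations from one convolutional layer are flattened so that each spatial position becomes an observation over the channels, PCA is fitted, and the cumulative explained variance indicates how many channels carry independent information, with a small number suggesting redundant feature detectors. This is a minimal illustration in Python with NumPy and scikit-learn, not the thesis's actual pipeline; the array shapes, the random stand-in data, and the 95% variance threshold are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for feature maps extracted from one convolutional layer of the detector,
# shape (batch, height, width, channels); real activations would be used in practice.
feature_maps = np.random.rand(16, 52, 52, 256).astype(np.float32)

# Treat every spatial position of every image as one observation over the channels.
n_channels = feature_maps.shape[-1]
observations = feature_maps.reshape(-1, n_channels)  # (batch*height*width, channels)

# Fit PCA over the channel dimension.
pca = PCA()
pca.fit(observations)

# Count how many principal components are needed to explain 95% of the variance.
cumulative = np.cumsum(pca.explained_variance_ratio_)
needed = int(np.searchsorted(cumulative, 0.95)) + 1
print(f"{needed} of {n_channels} components explain 95% of the variance")
# The smaller 'needed' is relative to n_channels, the more redundant the layer's filters appear.
```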

    Integrated Development and Parallelization of Automated Dicentric Chromosome Identification Software to Expedite Biodosimetry Analysis

    Manual cytogenetic biodosimetry lacks the ability to handle mass casualty events. We present an automated dicentric chromosome identification (ADCI) software utilizing parallel computing technology. A parallelization strategy combining data and task parallelism, as well as optimization of I/O operations, has been designed, implemented, and incorporated into ADCI. Experiments on an eight-core desktop show that our algorithm can expedite the process of ADCI by at least a factor of four. Experiments on Symmetric Computing, SHARCNET, and Blue Gene/Q multi-processor computers demonstrate the capability of parallelized ADCI to process thousands of samples for cytogenetic biodosimetry in a few hours. This increase in speed underscores the effectiveness of parallelization in accelerating ADCI. Our software will be an important tool to handle mass casualty ionizing radiation events by expediting accurate detection of dicentric chromosomes.
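    In very simplified form, the data-parallel part of such a strategy amounts to distributing independent samples across worker processes. The sketch below is a generic Python illustration of that idea only, not the ADCI implementation; detect_dicentrics and the sample list are hypothetical placeholders.

```python
from multiprocessing import Pool

def detect_dicentrics(sample_path: str) -> dict:
    """Hypothetical stand-in for analyzing one sample's metaphase images."""
    # A real implementation would segment chromosomes and classify centromeres here.
    return {"sample": sample_path, "dicentrics": 0}

if __name__ == "__main__":
    # Each sample can be analyzed independently, so the work is split across cores.
    samples = [f"sample_{i:04d}" for i in range(1000)]
    with Pool(processes=8) as pool:  # e.g. an eight-core desktop
        results = pool.map(detect_dicentrics, samples)
    print(f"processed {len(results)} samples")
```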

    Analytical querying with typed linear algebra: integration with MonetDB

    Current digital transformations in society rely heavily on safe, easy-to-use, high-performance data storage and analysis for smart decision making. This has triggered the need for efficient analytical querying solutions, and the columnar database model is increasingly regarded as the most efficient model for data organization in large data banks. MonetDB is a pioneer of the column-wise database model and is currently at the forefront of high-performance DBMS engines. A Linear Algebra Querying (LAQ) engine, using a columnar database paradigm and strongly inspired by Typed Linear Algebra (TLA), was developed in a previous MSc dissertation, with a prototype web interface. Performance benchmarking of this engine showed that it outperformed conventional reference DBMSs but failed to beat MonetDB’s performance. This dissertation aims to improve the performance of the LAQ engine by following a different path: instead of a standalone engine, the new approach implements the engine on top of MonetDB extended with RMA (Relational Matrix Algebra) and inspired by the TLA approach. This enables the use of LAQ scripting to replace the mainstream relational algebra query language approach given by SQL. Matrix operations commonly used in LAQ/TLA, such as matrix-matrix multiplication, the Khatri-Rao product, and the Hadamard-Schur product, had to be implemented in RMA to shift from the relational algebra paradigm to TLA. A thorough analysis of MonetDB/RMA showed the need to implement key TLA operators that are not available at the frontend. Such operators were implemented and successfully tested and validated, paving the way for future benchmarking of their performance with TPC-H/OLAP queries and consequent fine-tuning of the engine.
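    The two less common matrix operators named above can be sketched in a few lines of NumPy to make their definitions concrete: the Hadamard-Schur product is element-wise multiplication of equally shaped matrices, and the Khatri-Rao product is a column-wise Kronecker product. This is only an illustration of the operator definitions, not of their RMA/MonetDB implementation; the example matrices are arbitrary.

```python
import numpy as np

def hadamard(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Hadamard-Schur product: element-wise multiplication of equally shaped matrices."""
    assert A.shape == B.shape
    return A * B

def khatri_rao(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Column-wise Khatri-Rao product: Kronecker product applied column by column."""
    assert A.shape[1] == B.shape[1]
    m, k = A.shape
    n, _ = B.shape
    # For each column j the result column is kron(A[:, j], B[:, j]).
    return np.einsum('ik,jk->ijk', A, B).reshape(m * n, k)

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(9, dtype=float).reshape(3, 3)

print(hadamard(A, A))    # shape (2, 3)
print(khatri_rao(A, B))  # shape (6, 3)
```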

    Microdata Deduplication with Spark

    The web is transforming from the traditional web into a web of data, where information is presented in such a way that it is readable by machines as well as humans. As part of this transformation, every day more and more websites embed structured data, e.g. descriptions of products, people, organizations, and places, into their HTML pages. To embed the structured data, different encoding vocabularies, such as RDFa, microdata, and microformats, are used. Microdata is the most recent addition to these structured-data embedding standards, but it has gained more popularity than the other formats in less time. Similarly, progress has been made in the extraction of structured data from web pages, which has resulted in open-source tools such as Apache Any23 and the non-profit Common Crawl project. Any23 allows extraction of microdata from web pages with little effort, whereas Common Crawl extracts data from websites and provides it publicly for download. These microdata extraction tools, however, only take care of the parsing and data transformation steps of data cleansing. Although microdata can be easily extracted with such state-of-the-art tools, before the extracted data can be used in potential applications, duplicates should be removed and the data unified. Since microdata originates from arbitrary web resources, it is of arbitrary quality as well and should be treated accordingly. The main purpose of this thesis is to develop an effective mechanism for deduplication of microdata at web scale. Although deduplication algorithms have reached relative maturity, they still need to be fine-tuned for specific datasets; in particular, the most suitable length of the sorting key in the sort-based deduplication approach needs to be identified. The present work identifies the optimum length of the sorting key in the context of deduplication of extracted product microdata. Due to the large volumes of data to be processed continuously, Apache Spark is used to implement the necessary procedures.
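    A sort- or blocking-key based deduplication of product records can be sketched with PySpark as follows: the product name is normalized, its prefix of a fixed length serves as the key, and only records sharing a key are compared. This is a minimal sketch assuming a local SparkSession and a toy dataset; the key length of 6, the edit-distance threshold, and the column names are illustrative assumptions, not the values determined in the thesis.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("microdata-dedup-sketch").getOrCreate()

# Toy stand-in for product microdata extracted from the web.
products = spark.createDataFrame(
    [(1, "Canon EOS 700D Camera"),
     (2, "canon eos 700d camera kit"),
     (3, "Nikon D5300 Camera")],
    ["id", "name"],
)

KEY_LENGTH = 6  # illustrative key length; choosing this value well is the tuning problem

# Normalize the name and take its prefix as the blocking/sorting key.
keyed = products.withColumn(
    "key",
    F.substring(F.regexp_replace(F.lower(F.col("name")), r"[^a-z0-9]", ""), 1, KEY_LENGTH),
)

# Compare only records that share a key: self-join within each block, keep each pair once.
a, b = keyed.alias("a"), keyed.alias("b")
pairs = (
    a.join(b, F.col("a.key") == F.col("b.key"))
     .where(F.col("a.id") < F.col("b.id"))
)

# Crude similarity test within a block: small edit distance between lower-cased names.
duplicates = pairs.where(
    F.levenshtein(F.lower(F.col("a.name")), F.lower(F.col("b.name"))) <= 5
)

duplicates.select("a.id", "a.name", "b.id", "b.name").show(truncate=False)
spark.stop()
```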

    ‘Ichthyologue’: Freshwater Biology in the Poetry of Ted Hughes

    An ecocritical analysis of Ted Hughes's knowledge of freshwater biology and environmental science, especially the work of his son, Nicholas Hughes. Contains previously unpublished poetry drafts by Ted Hughes.