
    Digital Media as Contemporary Art and its Impact on Museum Practice

    TechLab: Experiments in media art 1999-2019 is a survey of the TechLab, with an overview of every exhibition, project, and artwork that was shown in the space over two decades. Also included is a selection of critical essays from leading academics and curators working across the field of new media, discussing the broader art-historical context of the TechLab as well as its legacy. Contributors include Rhys Edwards, Alison Rajah, Caroline Seck Langill, Robin Oppenheimer, Kate Armstrong, Beryl Graham, and Jordan Strom.

    The Audience That Acts

    This thesis explores the evolution of the relationship between socially-engaged artists and audiences, focusing on a number of strategies through which this connection is being renegotiated, and how these tactics have allowed a new model of artist-audience collaboration to emerge. The model that is proposed is one that positions the artist as a conduit as opposed to an originator. By this I define 'the artist' as one who conducts ideas, as copper wire conducts electrical current, and the 'active audience' as someone who makes use of the energy of those ideas to bring about an act of their own creation. In this way the artist becomes a catalyst for different perspectives and 'artworks' become subversive through the power and resilience of human imagination. The emergence of these forms of collaborative practice, loosely termed 'socially-engaged', is charted through case studies of my own work and the innovations of a number of artists over the past fifty years. These practices have much in common, including an ability to move easily between modes of engagement with audiences, and the institutional, communal or virtual sites that audiences occupy. Artworks developed in these marginal zones between audience and artist often go unacknowledged (as art), and it is for this very reason that they are able to become such potent tactical forms of infiltration of power, shining a light on the invisible to make it visible. It is at these sites of energy transferal that audiences can discover a new articulation of their resistance. In an attempt to reflect in textual form the way that the relationship between artist and audience is being renegotiated, this thesis combines autoethnographic qualitative research narrated in the first person with analysis of theorists, practitioners and audiences in the third person.

    Multimodal Content Delivery for Geo-services

    This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To effectively deliver these services, research focused on innovative solutions to real-world problems in a number of disciplines including geo-location, mobile spatial interaction, location-based services, rich media interfaces and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility, which filters data based on field-of-view, demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery or output modality employed (auditory, visual or tactile). Further contributions in the area of multimodal content delivery are made, where multiple modalities are used to deliver information using graphical user interfaces, tactile interfaces and, more notably, auditory user interfaces.
It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users, in a responsive way, based on usage scenarios that consider the affordance of the device, the geographical position and bearing of the device, and also the location of the device.
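The hybrid beacon positioning mentioned above rests on trilateration: given known beacon coordinates and range estimates, subtracting one range equation from the others yields a linear system for the device position. A minimal 2-D sketch follows; the beacon layout, the NumPy least-squares solver and all coordinates are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical 2-D trilateration sketch (not the thesis's system).
import numpy as np

def trilaterate(beacons, distances):
    """Estimate a 2-D position from >= 3 beacon positions and ranges.

    Subtracting the first range equation (x-x0)^2+(y-y0)^2=d0^2 from the
    others cancels the quadratic terms, leaving a linear system that is
    solved by least squares.
    """
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1)
         - np.sum(beacons[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Illustrative beacons and a device at (2, 3); ranges are exact here,
# whereas real terrestrial-beacon ranges would carry measurement noise.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
device = np.array([2.0, 3.0])
ranges = [float(np.linalg.norm(device - np.asarray(p))) for p in beacons]
print(trilaterate(beacons, ranges))  # ≈ [2. 3.]
```

With noisy ranges the same least-squares formulation simply returns the best linear fit rather than an exact intersection.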

    Creating the Future of Health

    Creating the Future of Health is the fascinating story of the first fifty years of the Cumming School of Medicine at the University of Calgary. Founded on the recommendation of the Royal Commission on Health Services in 1964, the Cumming School has, from the very beginning, focused on innovation and excellence in health education. With a pioneering focus on novel, responsive and systems-based approaches, it was one of the first faculties to pilot multi-year training programs in family medicine and remains one of only two three-year medical schools in North America. Drawing on interviews with key players and extensive research into documents and primary material, Creating the Future of Health traces the history of the school through the leadership of its Deans. This is a story of perseverance through fiscal turbulence, sweeping changes to health care and health care education, and changing ideas of what health services are and what they should do. It is a story of triumph, of innovation, and of the tenacious spirit that thrives to this day at the Cumming School of Medicine.

    Rethinking Consistency Management in Real-time Collaborative Editing Systems

    Networked computer systems offer much to support collaborative editing of shared documents among users. The goal of real-time collaborative editing systems (RTCES) is to increase concurrent access to shared documents by allowing multiple users to contribute to and/or track changes to them; yet in existing systems, concurrent access is either limited by exclusive locking or enabled by concurrency control algorithms such as operational transformation (OT). Unfortunately, such OT-based schemes are costly with respect to communication and computation. Further, existing systems are often specialized in their functionality and require users to adopt new, unfamiliar software to enable collaboration. This research discusses our work in improving consistency management in RTCES. We have developed a set of deadlock-free multi-granular dynamic locking algorithms and data structures that maximize concurrent access to shared documents while minimizing communication cost. These algorithms provide a high level of service for concurrent access to the shared document and integrate merge-based or OT-based consistency maintenance policies locally among a subset of the users within a subsection of the document – thus reducing the communication costs in maintaining consistency. Additionally, we have developed client-server and P2P implementations of our hierarchical document management algorithms. Simulation results indicate that our approach achieves significant communication and computation cost savings. We have also developed a hierarchical reduction algorithm that can minimize the space required of RTCES, and this algorithm may be pipelined through our document tree. Further, we have developed an architecture that allows for a heterogeneous set of client editing software to connect with a heterogeneous set of server document repositories via Web services.
This architecture supports our algorithms and does not require client or server technologies to be modified – thus it is able to accommodate existing, favored editing and repository tools. Finally, we have developed a prototype benchmark system of our architecture that is responsive to users’ actions and minimizes communication costs.
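The multi-granular locking idea can be illustrated with a toy sketch; this is an illustrative assumption in the spirit of classic hierarchical intention locking, not the thesis's deadlock-free algorithms. Locking a subsection places intention marks on its ancestors, so a conflict anywhere on the root-to-node path is detected without scanning the whole document tree.

```python
# Toy multi-granular locking over a document tree (hypothetical sketch,
# not the dissertation's algorithm).
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.locked_by = None   # holder of an exclusive lock, if any
        self.intent = set()     # users holding locks somewhere below

    def path_to_root(self):
        node = self
        while node is not None:
            yield node
            node = node.parent

def try_lock(node, user):
    """Exclusively lock `node` for `user`; return False on any conflict."""
    # Conflict if this node or an ancestor is locked by someone else...
    for anc in node.path_to_root():
        if anc.locked_by not in (None, user):
            return False
    # ...or if another user already holds a lock inside this subtree.
    if node.intent - {user}:
        return False
    node.locked_by = user
    for anc in node.path_to_root():
        if anc is not node:
            anc.intent.add(user)   # mark the path for fast conflict checks
    return True

doc = Node("doc")
intro = Node("intro", doc)
body = Node("body", doc)
print(try_lock(intro, "alice"))  # True
print(try_lock(doc, "bob"))      # False: alice holds a lock inside doc
print(try_lock(body, "bob"))     # True: disjoint subtree, max concurrency
```

Users editing disjoint subsections proceed concurrently, while whole-document operations are blocked only when a finer-grained lock is actually held, which is the point of multiple granularities.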

    Newsletter no. 42

    INHIGEO produces an annual publication that includes information on the commission's activities, national reports, book reviews, interviews and occasional historical articles.

    University of South Alabama College of Medicine Annual Report for 2016-2017

    This Annual Report of the College of Medicine catalogues accomplishments of our faculty, students, residents, fellows and staff in teaching, research, scholarly and community service during the 2016-2017 academic year.

    ImMApp: An immersive database of sound art

    The ImMApp (Immersive Mapping Application) thesis addresses contemporary and historical sound art from a position informed, on one hand, by post-structural critical theory and, on the other, by a practice-based exploration of contemporary digital technologies (MySQL, XML, XSLT, X3D). It proposes a critical ontological schema derived from Michel Foucault's Archaeology of Knowledge (1972) and applies this to pre-existing information resources dealing with sound art. First, an analysis of print-based discourses (Sound by Artists, Lander and Lexier (1990); Noise, Water, Meat, Kahn (2001); and Background Noise: Perspectives on Sound Art, LaBelle (2006)) is carried out according to Foucauldian notions of genealogy, subject positions, the statement, institutional affordances and the productive nature of discursive formation. The discursive field (the archive) presented by these major canonical texts is then contrasted with a formulation derived from Gilles Deleuze and Felix Guattari: that of a 'minor' history of sound art practices. This is then extended by media theory (McLuhan, Kittler, Manovich) into a critique of two digital sound art resources: The Australian Sound Design Project, Bandt and Paine (2005), and soundtoys.net, Stanza (1998). The divergences between the two forms of information technology (print vs. digital) are discussed. The means by which such digitised methodologies may enhance Foucauldian discourse analysis points onwards towards the two practice-based elements of the thesis. Surface, the first iterative part, is a web-browser-based database built on an Apache/MySQL/XML architecture. It is the most extensive mapping of sound art undertaken to date and extends the theoretical framework discussed above into the digital domain. Immersion, the second part, is a re-presentation of this material in an immersive digital environment, following the transformation of the source material via XSLT into X3D.
Immersion is a real-time, large-format video, surround-sound (5.1) installation, and the thesis concludes with a discussion of how this outcome has articulated Foucauldian archaeological method and unframed pre-existing notions of the nature of sound art.

    Enhancing Privacy and Fairness in Search Systems

    Following a period of expedited progress in the capabilities of digital systems, society has begun to realize that systems designed to assist people in various tasks can also harm individuals and society. Mediating access to information and explicitly or implicitly ranking people in increasingly many applications, search systems have a substantial potential to contribute to such unwanted outcomes. Since they collect vast amounts of data about both searchers and search subjects, they have the potential to violate the privacy of both of these groups of users. Moreover, in applications where rankings influence people's economic livelihood outside of the platform, such as sharing-economy or hiring-support websites, search engines have immense economic power over their users in that they control user exposure in ranked results. This thesis develops new models and methods broadly covering different aspects of privacy and fairness in search systems for both searchers and search subjects. Specifically, it makes the following contributions: (1) We propose a model for computing individually fair rankings where search subjects get exposure proportional to their relevance. The exposure is amortized over time using constrained optimization to overcome searcher attention biases while preserving ranking utility. (2) We propose a model for computing sensitive search exposure where each subject gets to know the sensitive queries that lead to her profile in the top-k search results. The problem of finding exposing queries is technically modeled as reverse nearest neighbor search, followed by a weakly-supervised learning-to-rank model ordering the queries by privacy-sensitivity. (3) We propose a model for quantifying privacy risks from textual data in online communities. The method builds on a topic model where each topic is annotated by a crowdsourced sensitivity score, and privacy risks are associated with a user's relevance to sensitive topics.
We propose relevance measures capturing different dimensions of user interest in a topic and show how they correlate with human risk perceptions. (4) We propose a model for privacy-preserving personalized search where search queries of different users are split and merged into synthetic profiles. The model mediates the privacy-utility trade-off by keeping semantically coherent fragments of search histories within individual profiles, while trying to minimize the similarity of any of the synthetic profiles to the original user profiles. The models are evaluated using information retrieval techniques and user studies over a variety of datasets, ranging from query logs, through social media and community question answering postings, to item listings from sharing economy platforms.
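The amortized fair-exposure idea in contribution (1) can be sketched as follows. As an illustrative assumption, a greedy deficit-based re-ranking with log-discount position-bias weights stands in for the thesis's constrained-optimization model: at each request, subjects are ordered by how far their accumulated exposure lags behind the share their relevance entitles them to.

```python
# Hypothetical greedy sketch of exposure amortization (not the thesis's
# constrained-optimization formulation).
import math

def position_weights(n):
    """Log-discount position-bias model: rank r receives 1/log2(r + 2)."""
    return [1.0 / math.log2(r + 2) for r in range(n)]

def amortised_rankings(relevance, rounds):
    """Produce `rounds` rankings whose accumulated exposure tracks each
    subject's relevance share."""
    n = len(relevance)
    target = [rel / sum(relevance) for rel in relevance]  # fair share
    w = position_weights(n)
    per_round = sum(w)              # exposure handed out by one ranking
    received = [0.0] * n
    history = []
    for t in range(1, rounds + 1):
        # Deficit = exposure owed after t rounds minus exposure received.
        deficit = [target[i] * per_round * t - received[i]
                   for i in range(n)]
        order = sorted(range(n), key=lambda i: -deficit[i])
        for rank, i in enumerate(order):
            received[i] += w[rank]
        history.append(order)
    return history, received

# Two nearly equally relevant subjects end up sharing the top slot over
# time instead of one of them monopolizing it (illustrative relevances).
history, received = amortised_rankings([0.5, 0.45, 0.05], rounds=10)
print(history[0], history[1])  # [0, 1, 2] then [1, 0, 2]: the top rotates
```

A single relevance-sorted ranking would give subject 1 almost no top-rank exposure despite near-equal relevance; amortizing over a sequence of rankings repairs exactly this winner-take-all effect of position bias.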