25 research outputs found
Succeeding metadata based annotation scheme and visual tips for the automatic assessment of video aesthetic quality in car commercials
In this paper, we present a computational model capable of predicting the viewer perception of car advertisement videos from a set of low-level video descriptors. Our research rests on the hypothesis that these descriptors reflect the aesthetic value of the videos and, in turn, their viewers' perception. To that effect, and as a novel approach to this problem, we automatically annotate our video corpus, downloaded from YouTube, by applying an unsupervised clustering algorithm to the retrieved metadata linked to the viewers' assessments of the videos. Specifically, a regular k-means algorithm is applied as the partitioning method, with k ranging from 2 to 5 clusters that model different satisfaction levels or classes. The available metadata is categorized into two types according to the profile of the videos' viewers: metadata based on explicit and on implicit opinion, respectively. These two types of metadata are first tested individually and then combined, resulting in three different models or strategies that are thoroughly analyzed. Typical feature selection techniques are applied to the implemented video descriptors as a pre-processing step in the classification of viewer perception, and several different classifiers are considered as part of the experimental setup. Evaluation results show that the proposed video descriptors are clearly indicative of the subjective perception of viewers, regardless of the implemented strategy and the number of classes considered. The strategy based on explicit-opinion metadata clearly outperforms the implicit one in terms of classification accuracy. Finally, the combined approach slightly improves on the explicit one, achieving a top accuracy of 72.18% when distinguishing between 2 classes, suggesting that better classification results could be obtained by using suitable metrics to model perception derived from all available metadata.
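The unsupervised annotation step described above can be sketched as plain Lloyd's k-means over metadata-derived features. The code below is a minimal illustration; the choice of features (e.g. like ratio and log view count) and the initialization are assumptions, not the paper's actual setup:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means: videos clustered on viewer-assessment
    metadata, each cluster read as a satisfaction class (sketch)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct data points.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute centers; keep old center if a cluster goes empty.
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

With k ranging from 2 to 5, each run yields a different granularity of satisfaction classes, as the abstract describes.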
Exploiting visual saliency for assessing the impact of car commercials upon viewers
Content-based video indexing and retrieval (CBVIR) is a lively area of research that focuses on automating the indexing, retrieval and management of videos. This area has a wide spectrum of promising applications, among which assessing the impact of audiovisual productions emerges as a particularly interesting and motivating one. In this paper we present a computational model capable of predicting the impact (i.e. positive or negative) upon viewers of car advertisement videos by using a set of visual saliency descriptors. Visual saliency provides information about the parts of the image perceived as most important, which are instinctively targeted by humans when looking at a picture or watching a video. For this reason we propose to exploit visual saliency information, introducing it as a new feature that objectively reflects high-level semantics, to improve the video impact categorization results. The suggested saliency descriptors are inspired by the mechanisms that underlie the attentional abilities of the human visual system and are organized into seven distinct families according to different measurements over the identified salient areas in the video frames, namely population, size, location, geometry, orientation, movement and photographic composition. The proposed approach starts by computing saliency maps for all the video frames, for which two different visual saliency detection frameworks have been considered and evaluated: the popular graph-based visual saliency (GBVS) algorithm, and a state-of-the-art DNN-based approach. This work has been partially supported by the National Grants RTC-2016-5305-7 and TEC2014-53390-P of the Spanish Ministry of Economy and Competitiveness.
A client-server architecture for distributed and scalable multimedia content analysis: an Android app for assisting phone users in shooting aesthetically valuable pictures
Nowadays, developing modern scientific image and video analysis algorithms faces the issue of distributing them to the open community, with multiple versions for very different platforms. This requires software development skills usually lacking among researchers outside the computer science world. Client/server architectures have acquired a leading role by abstracting the business logic of applications away from thin clients running on small devices, such as smartphones, that end users carry with them.
The present work describes the design, modeling, development and testing of a client/server architecture that can compute image and video characteristics on independent Matlab® instances and offer production-grade SQL persistence to store the results, all within a user-authenticated environment. This project has been specifically focused on a study currently being conducted by researchers from Universidad Carlos III and Universidad Politécnica de Madrid, whose main goal is to estimate the aesthetic value of images and videos by computing objective descriptors of the audiovisual content. However, the architecture has been designed and built to be applicable to any biomedical, audiovisual or other engineering study requiring image or video analysis.
Extracción de descriptores de movimiento en vídeos para la evaluación de la estética
The growth of video streaming has increased noticeably over the last decade, making the task of searching for and recommending videos more and more difficult. Whereas information retrieval for video streaming used to be based only on text and metadata, content-based image and video retrieval is now beginning to be researched. In order to add value and success to users' searches, it is interesting to assess the quality and aesthetic value of the information retrieved.
In this thesis we extract several motion-related descriptors in order to aesthetically assess a database of car commercials. The videos in the database are extracted from YouTube and labeled according to metadata provided by the website. Specifically, three kinds of labeling are used: based on quality (likes/dislikes), on quantity (number of views), and on the combination of both. Quality and quantity provide binary labelings, while the combination clusters the videos into four classes.
As is usual in computer vision, the main objective is to suggest a set of descriptors and to design and provide the procedures for computing their values on the corpus of videos. Descriptors are specific numbers obtained by processing the frames and aggregating the resulting data. With their help it may be possible to determine whether they carry enough information to predict the aesthetic appeal of the videos. In this project we focus on motion descriptors.
As an approach to obtaining data about the video motion, the optical flow is estimated between each pair of frames using a Matlab-friendly C++ implementation. The algorithm is based on the brightness constancy assumption between two frames, leading to a continuous spatio-temporal function. This function is discretized and linearized, and the temporal factor is removed by evaluating the function at only two frames. The zero-gradient values are then found using Iteratively Reweighted Least Squares (IRLS), a method that iteratively recomputes weights until they fulfill the zero-gradient condition. This yields a linear system, which is solved with the Successive Over-Relaxation (SOR) method, a variant of Gauss-Seidel with faster convergence.
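The SOR solve can be sketched as a generic routine; the matrix and relaxation factor used in the test are illustrative stand-ins, not the actual linear system arising from the optical flow discretization:

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-8, max_iter=10_000):
    """Solve A x = b with Successive Over-Relaxation (SOR):
    Gauss-Seidel sweeps blended with the previous iterate by omega."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries below i, old entries above i.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```

For symmetric positive-definite systems, SOR converges for any relaxation factor 0 < omega < 2, with a well-chosen omega outperforming plain Gauss-Seidel (omega = 1).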
The optical flow algorithm requires several parameters to be set. Because of the difficulty of setting them automatically, their values are determined by observing how well each configuration represents the observed motion. Once the optical flow is computed, we filter out regions of homogeneous texture, since similar pixel values in a neighborhood can induce estimation errors. To determine the texture level of different frame regions, we measure the entropy of each one, which provides a measurement of the pixels' randomness. This is done by converting each frame to gray-scale and dividing it into 60 windows. A threshold then determines which regions are considered low-texture; it is chosen so that filtering is not so aggressive that the extracted descriptors become unrepresentative. However, in frames with many very homogeneous regions (e.g. completely black ones), the number of discarded vectors will be high no matter what threshold is set. When a region's entropy is below the threshold, it is considered a low-texture region and, as a consequence, its optical flow vectors are not taken into consideration.
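The per-window entropy measurement can be sketched as follows; the 6x10 window grid and the 256-bin histogram are assumptions for illustration, since the text only states that each gray-scale frame is split into 60 windows:

```python
import numpy as np

def window_entropy(gray, rows=6, cols=10, bins=256):
    """Shannon entropy of each window of a gray-scale frame.
    Low entropy flags homogeneous (low-texture) regions whose flow
    vectors should be discarded."""
    h, w = gray.shape
    ent = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            win = gray[i * h // rows:(i + 1) * h // rows,
                       j * w // cols:(j + 1) * w // cols]
            # Gray-level histogram normalized to a probability mass.
            p, _ = np.histogram(win, bins=bins, range=(0, 256))
            p = p / p.sum()
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent
```

Windows whose entropy falls below the chosen threshold would then be masked out before descriptor extraction.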
After the texture-based filtering, the first step is obtaining the angle and modulus of the estimated movement at every pixel from the flow components. For easy interpretation of direction when computing the different descriptors, the angles are quantized into the 8 cardinal points.
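The quantization into 8 cardinal points might look like this; the direction ordering is a hypothetical choice, since the text does not specify one:

```python
import numpy as np

def to_cardinal(u, v):
    """Quantize flow vectors (u, v) into 8 cardinal directions.
    Returns indices 0..7 for E, NE, N, NW, W, SW, S, SE
    (an assumed ordering, for illustration only)."""
    ang = np.arctan2(v, u)                        # radians in (-pi, pi]
    # Each cardinal covers a 45-degree sector centered on its axis.
    return np.round(ang / (np.pi / 4)).astype(int) % 8
```

The modulus is simply the Euclidean norm of (u, v) and is kept alongside the quantized direction.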
Using both the cardinals and the moduli obtained, it is possible to estimate approximately which camera motion is taking place in every frame or shot. For this, only the values on the margins of each frame are taken into account. Prior to the camera motion detection, weights are applied to the cardinal values on the margins, so that diagonal directions also contribute to the N-S-E-W camera-motion estimate even though they do not belong to the purest pan and tilt motion types. By accumulating these direction-dependent weights, we obtain a percentage relative to the ideal motion (i.e. every pixel moving in the same direction), which expresses the "amount" of movement going in each N-S-E-W direction.
The most common shot type in the database uses a fixed camera; it is detected by thresholding the mean modulus of the margins of each frame, which should also cover frames that are fixed but show some movement on the margins because of the captured scene. If the mean modulus is below the threshold, the frame is considered fixed. Otherwise, we check for the presence of zoom. To do so, the margins are divided into 2 vertical and 2 horizontal regions, and the cardinal with the maximum percentage is obtained for each. For each type of zoom we know the specific directions each margin should show in theory, so we can compare the theoretical value with the maximum direction obtained using the weights. When 3 or 4 of the margins' directions correspond to the theoretical pixel motion of a given zoom type, the current frame is considered to contain a zoom in or a zoom out, depending on the conditions met. Accepting a zoom with only 3 of the 4 conditions is not overly permissive, because having even those 3 conditions fulfilled is very unlikely unless a zoom is actually present.
If there is no zoom, we evaluate whether the frame shows pan or tilt camera motion. In this case, the maximum percentage is obtained over all the margins together instead of dividing them into regions, because the direction should be the same across the whole frame. If the difference between the maximum value and the rest of the cardinal percentages is greater than one threshold, and the maximum value itself is higher than another threshold, the frame is considered pan right/left or tilt up/down depending on which cardinal holds the maximum. This discards maxima in non-predominant directions. Finally, if none of these conditions are fulfilled, the frame is assigned a non-specific motion.
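The per-frame decision logic can be condensed into a sketch; the threshold values are hypothetical placeholders (the text says they are tuned by observation), and the zoom check on the four margin regions is omitted for brevity:

```python
# Hypothetical thresholds, for illustration only.
FIXED_THRESH = 0.5    # mean margin modulus below this -> fixed camera
MIN_DOMINANT = 40.0   # minimum dominant-direction percentage
DOMINANCE_GAP = 20.0  # required gap over the other cardinals

def classify_frame(mean_margin_modulus, cardinal_pct):
    """Simplified per-frame camera-motion decision.
    cardinal_pct maps direction labels ('N', 'S', 'E', 'W', ...) to the
    weighted percentage of margin motion in that direction."""
    if mean_margin_modulus < FIXED_THRESH:
        return "fixed"
    # (The zoom in/out check over the four margin regions would go here.)
    best = max(cardinal_pct, key=cardinal_pct.get)
    rest = [p for d, p in cardinal_pct.items() if d != best]
    if (cardinal_pct[best] > MIN_DOMINANT
            and cardinal_pct[best] - max(rest) > DOMINANCE_GAP):
        return {"E": "pan right", "W": "pan left",
                "N": "tilt up", "S": "tilt down"}.get(best, "non-specific")
    return "non-specific"
```

A frame with strong, consistent eastward margin motion would thus be labeled "pan right", while ambiguous margins fall through to "non-specific".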
Once the per-frame camera motion is determined, we proceed to detect the shots in each video by computing the Sum of Absolute Differences (SAD) of the gray intensity pixels and its first and second derivatives. A shot change is detected when this second derivative exceeds a chosen threshold. After shot detection, the mode of the camera motion types is obtained, together with the percentage of frames in the shot having that value. When this percentage is greater than a threshold, the shot is considered to have a predominant motion type, corresponding to the motion most present in its frames.
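The SAD-based shot boundary detection can be sketched as follows; the threshold is an assumed placeholder:

```python
import numpy as np

def detect_shot_changes(frames, thresh):
    """Flag shot boundaries: SAD between consecutive gray frames,
    then its second derivative; a change is flagged where the second
    derivative exceeds the threshold."""
    sad = np.array([np.abs(a.astype(float) - b.astype(float)).sum()
                    for a, b in zip(frames[:-1], frames[1:])])
    d2 = np.diff(sad, n=2)
    # +2 compensates for the two samples consumed by the double diff.
    return np.where(d2 > thresh)[0] + 2
```

Note that a single hard cut produces a spike in the SAD signal and therefore two large second-derivative values around it, so in practice nearby detections would be merged into one boundary.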
At this point the computed data are not yet at video level, so we must aggregate them into single values that represent each video. To obtain statistical parameters at video level, it is not feasible to build a matrix holding every single angle or modulus value, as that would consume too much memory and computing time; the computation must instead be done sequentially, keeping single per-frame values that are later combined into statistical video descriptors. Since angles have a circular nature, circular statistics are mandatory. For this purpose we store only the sum of each vector component over the whole set of frames, as well as the sum of the moduli. We also record the number of pixels involved in the operation, since it is not constant due to the low-texture filtering. With these values we have everything needed to compute the mean and standard deviation at video level. However, data such as the camera motion type across frames and shots do not admit means and standard deviations, so for them we compute the percentage of each motion type at shot and frame level.
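The circular mean and standard deviation can indeed be computed from the accumulated component sums alone, which is what makes the sequential scheme above work; a minimal sketch:

```python
import numpy as np

def circular_stats(sum_cos, sum_sin, n):
    """Circular mean and standard deviation of angles from the
    accumulated component sums, as used for video-level descriptors."""
    C, S = sum_cos / n, sum_sin / n
    mean = np.arctan2(S, C)
    # Mean resultant length; clamp to 1 to guard against rounding
    # pushing it marginally above 1.
    R = min(np.hypot(C, S), 1.0)
    std = np.sqrt(-2.0 * np.log(R))  # circular std, in radians
    return mean, std
```

Only three running scalars per video (sum of cosines, sum of sines, pixel count) are needed, so no per-pixel angle ever has to be stored.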
Once the data handling is done, we extract 27 different descriptors, which are evaluated using the three labeling methods previously described. Using the machine learning algorithms provided by Weka, several feature sets and classifiers are tested; the best performance, 60% accuracy, is achieved with quantity labeling and angle- and modulus-related features using the SimpleCart tree classifier. Although the descriptors' performance is in general not remarkably good, the Experimenter tool provided by Weka lets us identify which combinations of features and classifiers provide a statistically significant improvement over the ZeroR baseline. We observe that the accuracy with combination labeling is lower than with quality or quantity labeling. This is expected, since combination labeling has four classes whereas the others are binary; it does not mean that combination labeling works worse, because its improvement over ZeroR may be larger. In fact, combination labeling is more informative: it yields a statistically significant performance when choosing angle- and modulus-related features with the SimpleCart and SimpleLogistic classifiers, which means the accuracy obtained is not coincidental. We also obtain significant results when using quantity labeling with the same feature set and SimpleCart.
These results lead to the conclusion that camera motion is not particularly relevant when assessing aesthetics on this database. This contradicts what one might expect, since camera motion is typically used to add drama in an audiovisual context. One explanation could be that fixed and hand-held camera shots are so common in the database that camera motion does not really influence whether users like a video or not. In addition, it is well known that establishing a ground truth for people's preferences is not trivial because of their subjectivity, and this could affect the results. The lack of a database with camera motion labels is also crucial, since it makes it difficult to know whether the non-manually-labeled videos behave correctly when the camera motion detection method is applied.
In this project we verify that theoretical knowledge does not always correspond to what is observed in a practical context, and we also provide a simple approach for extracting descriptors and analyzing them. This could be improved in the future by labeling the database with respect to camera motion and by segmenting the background to improve steady-camera detection. The binary aesthetic labeling could also be improved by using supervised annotation obtained by measuring involuntary biological responses experienced by the evaluator.
In this project, different motion-related features are extracted from the videos in the provided database, which consists of car commercials. For this purpose, an optical flow estimate is produced using the supplied algorithm. By analyzing the optical flow at the margins of the frames, the camera motion present in them is characterized, and the angles and moduli corresponding to the motion of each pixel are computed. These values are then processed sequentially to obtain video-level descriptors. With the resulting data and three different video labelings, based on quality, quantity and their combination, machine learning methods are applied with different sets of descriptors and classifiers to evaluate aesthetics, based on the metadata provided by users through YouTube. It is concluded that the type of camera motion does not notably affect users' aesthetic evaluation, whereas the angles and moduli present in each video do.
Um método para a construção de taxonomias utilizando a DBpedia
Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Engineering and Knowledge Management, Florianópolis, 2017. The process of creating taxonomies demands effort from domain experts and taxonomy engineers, as well as financial investment and time. Owing to the limitations many organizations face in providing these resources in full, many projects that involve the construction of taxonomies do not achieve the expected success. This work aims to assist in the construction of taxonomies by proposing an automated method for building them. To construct this method, a series of methodological procedures was adopted, beginning with a survey of the theoretical literature on taxonomies and their construction. Next, a systematic search was carried out in the field of automated taxonomy construction, seeking approaches and procedures that already exist in this field of study. From this review, a method was developed for generating taxonomies from textual information repositories with the support of knowledge bases, which supply the hierarchical relationships used to verify the taxonomic relations between terms. The method was implemented as software, using as information repository a sample of curricula from the Agrarian Sciences knowledge area registered in the Plataforma Lattes. The Portuguese-language version of DBpedia was adopted as the knowledge base in this experiment. The implementation also employs an entity recognition process to discover the relevant terms that can be registered in the taxonomies. The taxonomy proposals generated by the implementation were compared statistically with the AGROVOC thesaurus, a reference in the area of agriculture. The analysis showed that 60% to 80% of the terms found in the generated taxonomies are also present in AGROVOC, with this variation depending on the filtering parameters supplied as input to the method, the textual information repository used, and the knowledge base employed to validate the hierarchical relationships.
Digital Light
Light symbolises the highest good; it enables all visual art, and today it lies at the heart of billion-dollar industries. The control of light forms the foundation of contemporary vision. Digital Light brings together artists, curators, technologists and media archaeologists to study the historical evolution of digital light-based technologies. Digital Light provides a critical account of the capacities and limitations of contemporary digital light-based technologies and techniques by tracing their genealogies and comparing them with their predecessor media. As digital light remediates multiple historical forms (photography, print, film, video, projection, paint), the collection draws from all of these histories, connecting them to the digital present and placing them in dialogue with one another.
Light is at once universal and deeply historical. The invention of mechanical media (including photography and cinematography) allied with changing print technologies (half-tone, lithography) helped structure the emerging electronic media of television and video, which in turn shaped the bitmap processing and raster display of digital visual media. Digital light is, as Stephen Jones points out in his contribution, an oxymoron: light is photons, particulate and discrete, and therefore always digital. But photons are also waveforms, subject to manipulation in myriad ways. From Fourier transforms to chip design, colour management to the translation of vector graphics into arithmetic displays, light is constantly disciplined to human purposes. In the form of fibre optics, light is now the infrastructure of all our media; in urban plazas and handheld devices, screens have become ubiquitous, and also standardised. This collection addresses how this occurred, what it means, and how artists, curators and engineers confront and challenge the constraints of increasingly normalised digital visual media.
While various art pieces and other content are considered throughout the collection, the focus is specifically on what such pieces suggest about the intersection of technique and technology. Including accounts by prominent artists and professionals, the collection emphasises the centrality of use and experimentation in the shaping of technological platforms. Indeed, a recurring theme is how techniques of previous media become technologies, inscribed in both digital software and hardware. Contributions include considerations of image-oriented software and file formats; screen technologies; projection and urban screen surfaces; histories of computer graphics, 2D and 3D image editing software, photography and cinematic art; and transformations of light-based art resulting from the distributed architectures of the internet and the logic of the database.
Digital Light brings together high-profile figures in diverse but increasingly convergent fields, from Academy Award winner and Pixar co-founder Alvy Ray Smith to feminist philosopher Cathryn Vasseleu.
Transforming Culture in the Digital Age International Conference in Tartu 14-16 April 2010
A short history of cultural participation by Nico Carpentier
Accessible Digital Culture for Disabled People by Marcus Weisen
Understanding Visitors’ Experiences with Multimedia Guides in Cultural Spaces by Kamal Othman, Helen Petrie & Christopher Power
Can you be friends with an art museum? Rethinking the art museum through Facebook by Lea Schick & Katrine Damkjær
On Scientific Mentality in Cultural Memory by Raffaele Mascella & Paolo Lattanzio
Paranoid, not an Android: Dystopic and Utopic Expressions in Playful Interaction with Technology and everyday surroundings by Maaike de Jong
Theorizing Web 2.0: including local to become universal by Selva Ersoz Karakulakoglu
How Web 3.0 combines User-Generated and Machine-Generated Content by Stijn Bannier & Chris Vleugels
Artificial Culture as a Metaphor and Tool by Kurmo Konsa
Playful Public Connectivity by Anne Kaun
Habermasian Online Debate of a Rational Critical Nature: Transforming Political Culture. A case study of the “For Honesty in Politics!” message group, Latvia, 2007 by Ingus Bērziņš
Transformation of Cultural Preferences in Estonia by Maarja Lõhmus & Anu Masso
Taste 2.0. Social Network Site as Cultural Practice by Antonio Di Stefano
Online Communication: A New Battlefield for Forming Elite Culture in China by Nanyi Bi
Internet, blogs and Social Networks for Independent and Personal Learning of Information Theory and Other Subjects in Journalism, Advertising and Media by Graciela Padilla & Eva Aladro
The Artist and Digital Self-presentation: a Reshuffle of Authority? by Joke Beyl
Communicative Image Construction in Online Social Networks. New Identity Opportunities in the Digital Age by Bernadette Kneidinger
Digital Identity: The Private and Public Paradox by Stacey M. Koosel
Mystory in Myspace: Rhetoric of Memory in New Media by Petra Aczél
Life Publishing on the internet – a playful field of life-telling by Sari Östman
From the Gutenberg Galaxy to the Internet Galaxy. Digital Textuality and the Change of Cultural Landscape by Raine Koskimaa
The “Open” Ideology of Digital Culture by Robert Wilkie
Digital Poetry and/in the Poetics of the Automatic by Juri Joensuu
Re: appearing and Disappearing Classics. Case Study on Poetics of Two Digital Rewritings by a Finnish Poet by Marko Niemi, Kristian Blomberg
Cybertextuality meets transtextuality by Markku Eskelinen
Metafictionality and deterritorialization of the literary in hypertexts by Anna Wendorff
The Public Sphere of Poetry and the Art of Publishing by Risto Niemi-Pynttäri
Solitude in Cyberspace by Piret Viires & Virve Sarapik
Reprogramming Systems Aesthetics: A Strategic Historiography by Edward A. Shanken
Stepping towards the immaterial: Digital technology revolutionizing art by Christina Grammatikopoulou
Creativity in Surveillance Environment: Jill Magid and the Integrated Circuit by Amy Christmas
Audience Interaction in the Cinema: An Evolving Experience by Chris Hales
Delay and non-materiality in telecommunication art by Raivo Kelomees
Robot: Ritual Oracle and Fetish by Thomas Riccio
Digital art and children’s formal and informal practices: Exploring curiosities and challenging assumptions by Steven Naylor
Locative Media and Augmented Reality: Bridges and Borders between Real and Virtual Spaces by Marisa Luisa Gómez Martíne