Digital tools in media studies: analysis and research. An overview
Digital tools are increasingly used in media studies, opening up new perspectives for research and analysis while creating new problems at the same time. In this volume, international media scholars and computer scientists present their projects, ranging from powerful film-historical databases to automatic video analysis software, discussing their application of digital tools and reporting on their results. This book is the first publication of its kind and a helpful guide for both media scholars and computer scientists who intend to use digital tools in their research, providing information on applications, standards, and problems.
Generating 3D product design models in real-time using hand motion and gesture
This thesis was submitted for the degree of Master of Philosophy and awarded by Brunel University. Three-dimensional product design models are widely used in conceptual design and in the early stages of prototyping during the design process. A product design specification often demands a substantial number of 3D models to be constructed within a short period of time. Current methods begin with designers sketching product concepts in 2D using pencil and paper, which are then translated into 3D models by a designer with CAD expertise using a 3D modelling software package such as Pro/Engineer, SolidWorks or AutoCAD. Several novel methods have been used to incorporate hand motion as a way of interacting with computers. Three main types of technology are available to capture motion data, each capable of translating human motion into numeric data readable by a computer system: first, glove-based hand gesture systems such as the “Cyberglove”, generally used to capture hand gesture and joint-angle information; second, full-body motion capture systems, both optical and non-optical; and finally, vision-based gesture recognition systems, which capture full degree-of-freedom (DOF) hand motion estimation. There has yet to be a method using any of the above-mentioned input devices to rapidly produce 3D product design models in real time using hand motion and gestures. In this research, a novel method is presented that uses a motion capture system to capture hand gestures and motion in real time and to recreate 3D curves and surfaces, which can be translated into 3D product design models. The main aim of this research is to develop a hand motion and gesture-based rapid 3D product modelling method, allowing designers to interactively sketch out 3D concepts in real time using a virtual workspace.
A database of hand signs was built for both architectural hand signs (a preliminary study) and product design hand signs. A marker-set model with a total of eight markers (five on the left hand and three on the right hand/marker pen) was designed and used to capture hand gestures with an optical motion capture system. A preliminary testing session was successfully completed to determine whether the motion capture system would be suitable for a real-time application, by effectively modelling a train station in an offline state using hand motion and gesture. An OpenGL software application was programmed using C++ and the Microsoft Foundation Classes, which was used to communicate and pass information of captured motion from the EVaRT system to the user.
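The curve-recreation step described above can be illustrated with a small sketch: given a stream of captured 3D marker positions, a spline densifies them into a smooth curve. This is an illustrative reconstruction, not code from the thesis; Catmull-Rom interpolation is one standard choice for passing a smooth curve through captured samples.

```python
# Illustrative sketch (not from the thesis): densify captured 3D marker
# positions into a smooth curve via Catmull-Rom spline interpolation.

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment (from p1 to p2) at t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def smooth_curve(samples, steps=8):
    """Turn a polyline of captured marker positions into a denser 3D curve."""
    if len(samples) < 4:
        return list(samples)
    curve = []
    # Each interior point pair gets one spline segment evaluated `steps` times.
    for i in range(1, len(samples) - 2):
        for s in range(steps):
            curve.append(catmull_rom(samples[i - 1], samples[i],
                                     samples[i + 1], samples[i + 2], s / steps))
    curve.append(samples[-2])  # close the last segment at its endpoint
    return curve
```

The spline passes exactly through the captured samples, so a pen-marker trajectory is reproduced faithfully while gaps between capture frames are filled in.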
Hand gesture recognition using deep learning neural networks
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Human Computer Interaction (HCI) is a broad field involving different types of interaction, including gestures. Gesture recognition concerns non-verbal motions used as a means of communication in HCI. A system may be utilised to identify human gestures and convey information for device control. This represents a significant field within HCI involving device interfaces and users. The aim of gesture recognition is to record gestures that are formed in a certain way and then detected by a device such as a camera. Hand gestures can be used as a form of communication for many different applications. They may be used by people with different disabilities, including those with hearing impairments, speech impairments and stroke patients, to communicate and fulfil their basic needs.
Various studies have previously been conducted relating to hand gestures, some proposing different techniques for implementing hand gesture experiments. For image processing there are multiple tools to extract image features, and Artificial Intelligence offers varied classifiers for different types of data. 2D and 3D hand gestures require an effective algorithm to extract images and classify various mini gestures and movements. This research addresses this issue using different algorithms. To detect 2D or 3D hand gestures, this research applied image processing tools such as Wavelet Transforms (WT) and Empirical Mode Decomposition (EMD) to extract image features. An Artificial Neural Network (ANN) classifier was used to train and classify the data, alongside Convolutional Neural Networks (CNNs). These methods were examined in terms of multiple parameters such as execution time, accuracy, sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood, negative likelihood, receiver operating characteristic (ROC), area under the ROC curve, and root mean square. This research makes four original contributions to the field of hand gestures. The first contribution is an implementation of two experiments using 2D hand gesture video, where ten different gestures are detected at short and long distances using an iPhone 6 Plus with 4K resolution; the experiments use WT and EMD for feature extraction and ANN and CNN for classification. The second contribution comprises 3D hand gesture video experiments in which twelve gestures are recorded using a holoscopic imaging system camera. The third contribution pertains to experimental work carried out to detect seven common hand gestures. Finally, disparity experiments were performed using the left and right 3D hand gesture videos to discover disparities. The comparison shows CNN reaching 100% accuracy, outperforming the other techniques.
CNN is clearly the most appropriate method to be used in a hand gesture system.
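The core operations a CNN applies to a gesture frame can be sketched in a few lines. This is a generic illustration of convolution and max-pooling with a hand-written edge kernel, not the architecture from the thesis, which would use a deep-learning framework with learned kernels and many stacked layers.

```python
# Generic illustration of the two building blocks a CNN stacks:
# 2D convolution (here: cross-correlation) and non-overlapping max-pooling.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a grayscale image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling, shrinking each spatial dimension."""
    return [[max(fmap[i + u][j + v] for u in range(size) for v in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# A vertical-edge kernel applied to a tiny 6x6 frame whose right half is bright:
frame = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
edge = conv2d(frame, [[-1, 0, 1]] * 3)   # 4x4 feature map, strong at the edge
pooled = max_pool(edge)                  # 2x2 after 2x2 pooling
```

The feature map responds only where the intensity edge sits, which is the kind of localized cue that lets later layers discriminate gesture shapes.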
The syntax and semantics of resultative constructions in Deutsche Gebärdensprache (DGS) and American Sign Language (ASL)
Complex cause-result events such as wiping a table off can be encoded linguistically with a single verb (clean), a resultative (wipe the table clean), or a multiclausal construction (wipe the table until it’s clean). Languages differ markedly in the kinds of events that can be described in a single clause; hence the present work explores whether Deutsche Gebärdensprache (DGS) and American Sign Language (ASL) can encode both manner of causation and result state within a single clause. Since an investigation of clause-level constructions presupposes a thorough understanding of clause boundaries, this dissertation starts by reviewing and adding to the existing clausehood diagnostics in spoken and signed languages. Using these diagnostics in combination with video elicitation tasks and grammaticality judgments, I show that DGS has two monoclausal resultative constructions that differ in the order of the causing and result predicates. The constructions both allow Control and ECM resultatives and may take a stative or change-of-state secondary predicate. Their semantics differ in that resultatives with [Result Cause] word order exhibit event-to-scale homomorphy while those with [Cause Result] word order do not. ASL has a single monoclausal resultative construction that encodes at least Control resultatives but, in contrast to English, does not exhibit homomorphic mappings.
ASL shares a different aspect of resultative semantics with English: directness of causation. The present work presents the first empirical investigation of directness of causation and its effect on the acceptability of resultatives in English and ASL. It finds that both English and ASL resultatives are significantly less acceptable as descriptors of causative scenarios in which there is a temporal delay between causing and result events. This study further shows a significant decrease in acceptability of English and ASL resultatives when an intermediate causer intervenes between ultimate causer and result. Through controlled experiments on resultatives in both languages, I show that temporal delays and intervening causers decrease directness independently and to significantly different degrees. Lastly, this study identifies subtle differences in the semantics of ASL resultatives and their English counterparts. While the degree of indirectness of an intervening causer is attenuated by the ultimate causer’s intentionality in English, no such effect is found for ASL.
In summary, the present work demonstrates that sign languages like DGS and ASL have syntactic resources for packaging event-structural information densely. These resources exhibit different constraints on usage than their German and English counterparts and are well integrated into the grammars of DGS and ASL.
Recent Developments in Smart Healthcare
Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medical research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics to decision support for healthcare professionals through big data analytics, to supporting behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, of higher quality, and lower in cost. In this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that roughly span several interesting topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.
Scalable video compression with optimized visual performance and random accessibility
This thesis is concerned with maximizing the coding efficiency, random accessibility and visual performance of scalable compressed video. The unifying theme behind this work is the use of finely embedded localized coding structures, which govern the extent to which these goals may be jointly achieved.
The first part focuses on scalable volumetric image compression. We investigate 3D transform and coding techniques which exploit inter-slice statistical redundancies without compromising slice accessibility. Our study shows that the motion-compensated temporal discrete wavelet transform (MC-TDWT) practically achieves an upper bound to the compression efficiency of slice transforms. From a video coding perspective, we find that most of the coding gain is attributed to offsetting the learning penalty in adaptive arithmetic coding through 3D code-block extension, rather than inter-frame context modelling.
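The temporal wavelet transforms discussed here are built from lifting steps. A minimal sketch of one-level Haar lifting (the simplest temporal subband transform) shows the predict/update structure; MC-TDWT additionally motion-compensates these steps, which is omitted in this illustration.

```python
# Minimal sketch of one-level Haar lifting on a temporal signal, e.g. one
# pixel's values across frames. MC-TDWT would motion-compensate the predict
# and update steps; that is omitted here for clarity.

def haar_lift(frames):
    """Split a signal into low- and high-pass temporal subbands via lifting."""
    even, odd = frames[0::2], frames[1::2]
    high = [o - e for o, e in zip(odd, even)]       # predict: detail signal
    low = [e + h / 2 for e, h in zip(even, high)]   # update: running average
    return low, high

def haar_unlift(low, high):
    """Exact inverse: each lifting step is trivially invertible."""
    even = [l - h / 2 for l, h in zip(low, high)]
    odd = [h + e for h, e in zip(high, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Because every lifting step is invertible regardless of what the predict operator computes, a motion-compensated predictor can be dropped in without losing perfect reconstruction, which is what makes the MC-TDWT construction possible.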
The second aspect of this thesis examines random accessibility. Accessibility refers to the ease with which a region of interest is accessed (subband samples needed for reconstruction are retrieved) from a compressed video bitstream, subject to spatiotemporal code-block constraints. We investigate the fundamental implications of motion compensation for random access efficiency and the compression performance of scalable interactive video. We demonstrate that inclusion of motion compensation operators within the lifting steps of a temporal subband transform incurs a random access penalty which depends on the characteristics of the motion field.
The final aspect of this thesis aims to minimize the perceptual impact of visible distortion in scalable reconstructed video. We present a visual optimization strategy based on distortion scaling which raises the distortion-length slope of perceptually significant samples. This alters the codestream embedding order during post-compression rate-distortion optimization, thus allowing visually sensitive sites to be encoded with higher fidelity at a given bit-rate.
For visual sensitivity analysis, we propose a contrast perception model that incorporates an adaptive masking slope. This versatile feature provides a context which models perceptual significance. It enables scene structures that otherwise suffer significant degradation to be preserved at lower bit-rates. The novelty in our approach derives from a set of "perceptual mappings" which account for quantization noise shaping effects induced by motion-compensated temporal synthesis. The proposed technique reduces wavelet compression artefacts and improves the perceptual quality of video.
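The distortion-scaling idea can be sketched abstractly: each coding pass has a distortion-length slope, and multiplying the distortion of perceptually significant code-blocks by a weight raises their slope so they are embedded earlier. The pass data and weights below are illustrative only, and a real post-compression rate-distortion optimizer restricts selection to convex-hull slopes over cumulative passes, which this sketch omits.

```python
# Illustrative sketch of embedding-order selection by distortion-length slope.
# Pass sizes, distortion reductions, and weights are made up for illustration.

def embedding_order(passes, weights):
    """Order coding passes by weighted distortion reduction per byte.

    passes  : {block: [(bytes, distortion_reduction), ...]}
    weights : {block: perceptual distortion-scaling factor}
    """
    slopes = []
    for block, plist in passes.items():
        for idx, (nbytes, d_red) in enumerate(plist):
            # Distortion scaling multiplies d_red before the slope is formed.
            slopes.append((weights[block] * d_red / nbytes, block, idx))
    # Higher slope = more distortion removed per byte = embedded earlier.
    return [(block, idx) for _, block, idx in
            sorted(slopes, key=lambda s: -s[0])]

passes = {"face": [(100, 50.0)], "background": [(100, 60.0)]}
flat = embedding_order(passes, {"face": 1.0, "background": 1.0})
weighted = embedding_order(passes, {"face": 2.0, "background": 1.0})
```

With uniform weights the background pass wins on raw slope; doubling the distortion scale of the perceptually significant "face" block moves its pass to the front of the codestream, exactly the reordering effect described above.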
Development of whole-heart myocardial perfusion magnetic resonance imaging
Myocardial perfusion imaging is of huge importance for the detection of coronary artery disease (CAD), one of the leading causes of morbidity and mortality worldwide, as it can provide non-invasive detection at the early stages of the disease. Magnetic resonance imaging (MRI) can assess myocardial perfusion by capturing the first-pass perfusion (FPP) of a gadolinium-based contrast agent (GBCA), which is now a well-established technique and compares well with other modalities. However, current MRI methods are restricted by their limited coverage of the left ventricle. Interest has therefore grown in 3D volumetric "whole-heart" FPP by MRI, although many challenges currently limit this. For this thesis, myocardial perfusion assessment in general, and 3D whole-heart FPP in particular, were reviewed in depth, alongside MRI techniques important for achieving 3D FPP. From this, a 3D 'stack-of-stars' (SOS) FPP sequence was developed with the aim of addressing some current limitations. These included the breath-hold requirement during GBCA first-pass, long 3D shot durations corrupted by cardiac motion, and a propensity for artefacts in FPP. Parallel imaging and compressed sensing were investigated for accelerating whole-heart FPP, with modifications presented to potentially improve robustness to free-breathing. Novel sequences were developed that were capable of individually improving some current sequence limits, including spatial resolution and signal-to-noise ratio, although with some sacrifices. A final 3D SOS FPP technique was developed and tested at stress during free-breathing examinations of CAD patients and healthy volunteers. This enabled the first known detection of an inducible perfusion defect with a free-breathing, compressed sensing, 3D FPP sequence; however, further investigation into the diagnostic performance is required. Simulations were performed to analyse potential artefacts in 3D FPP, as well as to examine ways towards further optimisation of 3D SOS FPP. The final chapter discusses some limitations of the work and proposes opportunities for further investigation.
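A 'stack-of-stars' acquisition samples k-space along radial spokes in each slice-encoding plane. One common spoke ordering is the golden-angle increment (assumed here purely for illustration; the abstract does not state which increment the thesis used), which keeps any contiguous subset of spokes near-uniformly distributed, a property that suits free-breathing, compressed-sensing reconstructions:

```python
# Illustrative sketch of golden-angle radial spoke ordering for a
# stack-of-stars acquisition. The golden-angle increment is an assumption
# for illustration, not a detail taken from the thesis.

import math

# 180 degrees divided by the golden ratio, approximately 111.246 degrees.
GOLDEN_ANGLE = 180.0 / ((1.0 + math.sqrt(5.0)) / 2.0)

def spoke_angles(n):
    """In-plane angles (degrees, modulo 180) of the first n radial spokes."""
    return [(i * GOLDEN_ANGLE) % 180.0 for i in range(n)]
```

Because consecutive spokes are separated by an irrational fraction of the half-circle, any window of recent spokes covers k-space almost evenly, so undersampled frames reconstructed with compressed sensing see incoherent rather than structured aliasing.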