Audio-Visual Automatic Speech Recognition Towards Education for Disabilities
Education is a fundamental right that enriches everyone’s life. However, physically challenged people are often excluded from general and advanced education systems. An Audio-Visual Automatic Speech Recognition (AV-ASR) based system can improve the education of physically challenged people by providing hands-free computing: they can communicate with the learning system through AV-ASR. However, tracking the lips accurately for the visual modality is challenging. This paper therefore combines an appearance-based visual feature with a co-occurrence statistical measure for visual speech recognition: Local Binary Pattern-Three Orthogonal Planes (LBP-TOP) and the Grey-Level Co-occurrence Matrix (GLCM) are proposed for extracting visual speech information. The experimental results show that the proposed system achieves 76.60% accuracy for visual speech and 96.00% accuracy for audio speech recognition.
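As an illustration of the co-occurrence statistic the abstract refers to, the sketch below builds a tiny GLCM by hand in NumPy and derives two common texture features from it. This is a toy version for intuition only, not the authors' pipeline (which pairs GLCM with LBP-TOP features extracted from lip regions); the image, offset and quantisation level are assumptions.

```python
import numpy as np

def glcm(img, levels=4, offset=(0, 1)):
    """Grey-Level Co-occurrence Matrix for one pixel offset (normalised)."""
    P = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[img[r, c], img[r + dr, c + dc]] += 1
    return P / P.sum()

def glcm_features(P):
    """Two standard GLCM texture descriptors."""
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)              # local intensity variation
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))  # closeness to the diagonal
    return contrast, homogeneity

# tiny 4-level example image
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img)
contrast, homogeneity = glcm_features(P)
```

In a visual-speech pipeline, features like these would be computed per frame over the detected mouth region and concatenated with the LBP-TOP descriptor before classification.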
Examples of works for practising staccato technique on the clarinet
The stages of strengthening the clarinet's staccato technique were practised through repertoire studies. Rhythm and nuance exercises that accelerate staccato transitions were included. The most important aim of the study is not staccato practice alone, but also an emphasis on the precision of simultaneous finger-tongue coordination. To make the staccato work more productive, étude studies were incorporated into the repertoire work. Meticulous attention to these exercises, together with the inspiring effect of staccato practice, has brought a new dimension to musical identity. Every stage of eight original repertoire studies is described, each stage being designed to reinforce the next performance and technique. The study reports in which areas the staccato technique is used and what results were obtained, and plans how the notes take shape through finger and tongue coordination and within what kind of practice discipline this occurs. Reed, notation, diaphragm, fingers, tongue, nuance and discipline were found to form an inseparable whole in staccato technique.
A literature review was carried out, surveying studies related to staccato. The review showed that repertoire studies employing staccato in clarinet technique are scarce, while the survey of method books showed that études predominate. Accordingly, exercises for accelerating and strengthening the clarinet's staccato technique are presented. It was observed that interspersing repertoire work among staccato études relaxes the mind and increases motivation. The choice of a suitable reed for staccato practice was also emphasised: a good reed was found to increase tongue speed, and a good choice of reed depends on the reed speaking easily. If the reed does not support the force of the tonguing, the need to choose a more suitable reed is stressed. Interpreting a piece from beginning to end during staccato practice can be difficult; in this respect, the study showed that observing the given musical nuances eases tonguing performance. Passing on the acquired knowledge and experience to future generations, in a way that fosters development, is encouraged. How forthcoming pieces can be worked out and how the staccato technique can be mastered is explained; the aim is to master the staccato technique in a shorter time. It is as important to commit the exercises to memory as it is to teach the fingers their places. A work that emerges as the result of such determination and patience will carry success to even higher levels.
Deep Learning for Scene Flow Estimation on Point Clouds: A Survey and Prospective Trends
Aiming at obtaining the structural information and 3D motion of dynamic scenes, scene flow estimation has long been a research interest in computer vision and computer graphics. It is also a fundamental task for applications such as autonomous driving. Compared to earlier methods that rely on image representations, much recent research builds on the power of deep learning and focuses on point cloud representations for 3D flow estimation. This paper comprehensively reviews the pioneering literature on scene flow estimation from point clouds. It examines learning paradigms in detail and presents insightful comparisons between state-of-the-art deep learning methods for scene flow estimation. Furthermore, it investigates various higher-level scene understanding tasks, including object tracking and motion segmentation, and concludes with an overview of foreseeable research trends for scene flow estimation.
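The notion of scene flow (a 3D motion vector per point between two frames) can be made concrete with a naive nearest-neighbour baseline. The sketch below is purely illustrative: the learned methods the survey covers replace this heuristic with deep correspondence and refinement, and the grid-point frames here are assumptions.

```python
import numpy as np

def nn_scene_flow(p1, p2):
    """Naive scene-flow baseline: each point in frame 1 flows to its
    nearest neighbour in frame 2."""
    # pairwise squared distances, shape (N1, N2)
    d = ((p1[:, None, :] - p2[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)
    return p2[nearest] - p1          # per-point 3D motion vectors

# two synthetic frames: a 3x3x3 point grid rigidly translated along x
p1 = np.mgrid[0:3, 0:3, 0:3].reshape(3, -1).T.astype(float)
flow_true = np.array([0.1, 0.0, 0.0])
p2 = p1 + flow_true
flow = nn_scene_flow(p1, p2)         # recovers the translation per point
```

Because the translation is small relative to the grid spacing, each point's nearest neighbour is its true correspondent; real scenes break this assumption, which is exactly why learned matching is needed.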
In vitro investigation of the effect of disulfiram on hypoxia induced NFκB, epithelial to mesenchymal transition and cancer stem cells in glioblastoma cell lines
A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.
Glioblastoma multiforme (GBM) is one of the most aggressive and lethal cancers, with a poor prognosis. Advances in the treatment of GBM are limited by several resistance mechanisms, by restricted drug delivery into the central nervous system (CNS) compartment across the blood-brain barrier (BBB), and by actions of the normal brain that counteract tumour-targeting medications. Hypoxia is common in malignant brain tumours such as GBM and plays a significant role in tumour pathobiology. It is widely accepted that hypoxia is a major driver of GBM malignancy. Although it has been confirmed that hypoxia induces GBM stem-like cells (GSCs), which are highly invasive and resistant to all chemotherapeutic agents, the detailed molecular pathways linking hypoxia, GSC traits and chemoresistance remain obscure. Evidence shows that hypoxia induces cancer stem cell phenotypes via epithelial-to-mesenchymal transition (EMT), promoting therapeutic resistance in most cancers, including GBM.
This study demonstrated that spheroid-cultured GBM cells contain a large population of hypoxic cells with CSC and EMT characteristics. GSCs are chemoresistant and displayed increased levels of HIFs and NFκB activity. Similarly, hypoxia-cultured GBM cells manifested GSC traits, chemoresistance and invasiveness. These results suggest that hypoxia is responsible for GBM stemness, chemoresistance and invasiveness. GBM cells transfected with the nuclear factor kappa B-p65 (NFκB-p65) subunit exhibited CSC and EMT markers, indicating the essential role of NFκB in maintaining GSC phenotypes. The study also highlighted the significance of NFκB in driving chemoresistance and invasiveness, and the potential role of NFκB as the central regulator of hypoxia-induced stemness in GBM cells. The GSC population is capable of self-renewal, cancer initiation and the development of secondary heterogeneous cancers. The very poor prognosis of GBM can largely be attributed to the existence of GSCs, which promote tumour propagation, maintenance, radio- and chemoresistance, and local infiltration.
In this study, we used disulfiram (DS), a drug used for more than 65 years in alcoholism clinics, in combination with copper (Cu) to target the NFκB pathway, reverse chemoresistance and block invasion in GSCs. The results showed that DS/Cu is highly cytotoxic to GBM cells and completely eradicated the resistant CSC population at low dose levels in vitro. DS/Cu inhibited the migration and invasion of hypoxia-induced CSC- and EMT-like GBM cells at low nanomolar concentrations.
DS is an FDA-approved drug with low toxicity to normal tissues that can pass through the BBB. Further research may lead to the rapid translation of DS into cancer clinics, providing new therapeutic options to improve treatment outcomes in GBM patients.
The determinants of value addition: a critical analysis of the global software engineering industry in Sri Lanka
The literature shows that the perceived value delivery of the global software engineering industry is low for a variety of reasons. This research therefore examines global software product companies in Sri Lanka, exploring the software engineering methods and practices used to increase value addition. The overall aim of the study is to identify the key determinants of value addition in the global software engineering industry and to critically evaluate their impact on software product companies, helping them maximise value addition and ultimately assure the sustainability of the industry.
An exploratory research approach was used initially, since findings would emerge as the study unfolded. A mixed-methods design was employed because the literature alone was inadequate to investigate the problem effectively and formulate the research framework. Twenty-three face-to-face online interviews were conducted with subject matter experts covering all disciplines in the targeted organisations; these were combined with the literature findings as well as the outcomes of market research conducted by both government and non-government institutes. Interview data were analysed using NVivo 12. The findings of the existing literature were verified through the exploratory study, and the outcomes were used to formulate the questionnaire for the public survey. After cleansing the responses received, 371 were retained for data analysis in SPSS 21 at an alpha level of 0.05. An internal consistency test was performed before the descriptive analysis. After assuring the reliability of the dataset, correlation, multiple regression and analysis of variance (ANOVA) tests were carried out to meet the research objectives.
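The internal consistency test described above is commonly Cronbach's alpha. The study itself used SPSS 21, so the NumPy version below is only an illustrative sketch on synthetic survey data (the latent-trait construction and item count are assumptions).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# synthetic survey: five items all driven by the same latent trait plus noise,
# so internal consistency should be high
rng = np.random.default_rng(0)
trait = rng.normal(size=200)
data = np.column_stack([trait + rng.normal(scale=0.5, size=200)
                        for _ in range(5)])
alpha = cronbach_alpha(data)
```

A common rule of thumb treats alpha above 0.7 as acceptable reliability, which is the kind of threshold a dataset would need to clear before the descriptive and inferential analyses described above.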
Five determinants of value addition were identified, along with the key themes for each area: staffing, delivery process, use of tools, governance, and technology infrastructure. Cross-functional, self-organised teams built around value streams, a properly interconnected software delivery process with the right governance in the delivery pipelines, the right selection of tools and the right infrastructure all increase value delivery.
Conversely, the constraints on value addition are poor interconnection of internal processes, rigid functional hierarchies, inaccurate selection and use of tools, inflexible team arrangements and inadequate focus on technology infrastructure. The findings add to the existing body of knowledge on increasing value addition through effective processes, practices and tools, and on the impact of applying the same inaccurately in the global software engineering industry.
A model for automated support for the recognition, extraction, customisation and reconstruction of static charts
Data charts are widely used in our daily lives, appearing in regular media such as newspapers, magazines, web pages, books, and many others. A well-constructed data chart leads to an intuitive understanding of its underlying data; in the same way, when a chart embodies poor design choices, a redesign of the representation may be needed. In most cases, however, these charts are published as static images, which means the original data are not usually available. Automatic methods could therefore be applied to extract the underlying data from chart images and enable such changes. The task of recognizing charts and extracting data from them is complex, largely due to the variety of chart types and their visual characteristics.
Computer vision techniques for image classification and object detection are widely used for chart recognition, but only on images without any disturbance. Features of real-world images that make this task difficult, such as photographic distortion, noise and misalignment, are absent from most works in the literature. Two computer vision techniques that can assist this task, and have been little explored in this context, are perspective detection and correction. These methods transform a distorted, noisy chart into a clean one whose type is ready for data extraction or other uses. Reconstructing the data is straightforward: as long as the data are available, the visualization can be rebuilt. Reconstructing it in the same context, however, is complex.
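Perspective correction of a photographed chart usually reduces to estimating a homography from four corner correspondences. The sketch below shows a plain Direct Linear Transform in NumPy with hypothetical corner coordinates; a production pipeline would use a library routine together with automatically detected chart corners.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: 3x3 homography mapping src -> dst
    (needs at least four point correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector holds the 9 entries
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply the homography to one 2D point via homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# hypothetical photographed chart corners -> axis-aligned target rectangle
corners = [(12, 8), (230, 25), (222, 160), (5, 148)]
target = [(0, 0), (200, 0), (200, 150), (0, 150)]
H = fit_homography(corners, target)
```

Warping every pixel of the photo through `H` yields the rectified chart image on which type classification and data extraction can then run.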
Using a Visualization Grammar for this scenario is a key component, as these
grammars usually have extensions for interaction, chart layers, and multiple
views without requiring extra development effort.
This work presents a model for automated support for the custom recognition and reconstruction of charts in images. The model automatically performs the process steps, such as reverse engineering a static chart back into its data table for later reconstruction, while allowing the user to intervene in case of uncertainty. This work also features a model-based architecture along with prototypes for various use cases. Validation is performed step by step, with methods inspired by the literature. Three use cases provide proof of concept and validation of the model.
The first use case applies chart recognition methods to real-world documents; the second focuses on the vocalization of charts, using a visualization grammar to reconstruct a chart in audio format; and the third presents an Augmented Reality application that recognizes and reconstructs charts in the same context (a piece of paper), overlaying the new chart and interaction widgets. The results showed that, with slight changes, chart recognition and reconstruction methods are now ready for
real-world charts, when taking time, accuracy and precision into consideration.
Programa Doutoral em Engenharia Informática
Interference mitigation in LiFi networks
Due to the increasing demand for wireless data, the radio frequency (RF) spectrum has
become a very limited resource. Alternative approaches are under investigation to support
the future growth in data traffic and next-generation high-speed wireless communication
systems. Techniques such as massive multiple-input multiple-output (MIMO), millimeter
wave (mmWave) communications and light-fidelity (LiFi) are being explored. Among
these technologies, LiFi is a novel bi-directional, high-speed and fully networked wireless
communication technology. However, inter-cell interference (ICI) can significantly restrict the
system performance of LiFi attocell networks. This thesis focuses on interference mitigation
in LiFi attocell networks.
The angle diversity receiver (ADR) is one solution to address the issue of ICI as well as
frequency reuse in LiFi attocell networks. With the property of high concentration gain and
narrow field of view (FOV), the ADR is very beneficial for interference mitigation. However,
the optimum structure of the ADR has not been investigated. This motivates us to propose the
optimum structures for the ADRs in order to fully exploit the performance gain. The impact
of random device orientation and diffuse link signal propagation are taken into consideration.
The performance comparison between the select best combining (SBC) and maximum ratio
combining (MRC) is carried out under different noise levels. In addition, the double source
(DS) system, where each LiFi access point (AP) consists of two sources transmitting the same
information signals but with opposite polarity, is proven to outperform the single source (SS)
system under certain conditions.
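The SBC/MRC comparison can be sketched with a toy SINR model. Everything below is an illustrative assumption rather than the thesis's simulation setup: a four-element receiver, a single interfering AP whose signal is correlated across photodiodes, and simple combining formulas.

```python
import numpy as np

# Hypothetical per-photodiode channel gains on a four-element ADR:
# one desired AP and one interfering AP.
h_sig = np.array([0.8, 0.3, 0.1, 0.05])   # desired-AP gains
h_int = np.array([0.1, 0.4, 0.6, 0.2])    # interfering-AP gains
noise = 1e-2                              # per-branch noise power

def sinr_sbc(h_sig, h_int, noise):
    """Select-best combining: keep only the photodiode with the best SINR."""
    per_branch = h_sig**2 / (h_int**2 + noise)
    return per_branch.max()

def sinr_mrc(h_sig, h_int, noise):
    """Maximum-ratio combining: weight each branch by its signal gain over
    its interference-plus-noise power, then form the combined SINR."""
    w = h_sig / (h_int**2 + noise)
    sig = (w * h_sig).sum() ** 2
    intf = (w * h_int).sum() ** 2          # interference adds coherently
    return sig / (intf + noise * (w**2).sum())
```

In this particular configuration SBC beats MRC because the interference combines coherently across branches; which combiner wins depends on the noise level, mirroring the SBC-versus-MRC comparison carried out in the thesis.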
Then, to overcome issues around ICI, random device orientation and link blockage, hybrid
LiFi/WiFi networks (HLWNs) are considered. In this thesis, dynamic load balancing (LB)
considering handover in HLWNs is studied. The orientation-based random waypoint (ORWP)
mobility model is considered to provide a more realistic framework to evaluate the performance
of HLWNs. Based on the low-pass filtering effect of the LiFi channel, we firstly propose
an orthogonal frequency division multiple access (OFDMA)-based resource allocation (RA)
method in LiFi systems. Also, an enhanced evolutionary game theory (EGT)-based LB scheme
with handover in HLWNs is proposed.
Finally, due to the characteristic of high directivity and narrow beams, a vertical-cavity
surface-emitting laser (VCSEL) array transmission system has been proposed to mitigate
ICI. In order to support mobile users, two beam activation methods are proposed. The
beam activation based on the corner-cube retroreflector (CCR) can achieve low power
consumption and almost-zero delay, allowing real-time beam activation for high-speed users.
The mechanism based on the omnidirectional transmitter (ODTx) is suitable for low-speed users and very robust to random orientation.
Socio-endocrinology revisited: New tools to tackle old questions
Animals’ social environments impact their health and survival, but the proximate links between sociality and fitness are still not fully understood. In this thesis, I develop and apply new approaches to address an outstanding question within this sociality-fitness link: does grooming (a widely studied, positive social interaction) directly affect glucocorticoid concentrations (GCs; a group of steroid hormones indicating physiological stress) in a wild primate? To date, negative, long-term correlations between grooming and GCs have been found, but the logistical difficulties of studying proximate mechanisms in the wild leave knowledge gaps regarding the short-term, causal mechanisms that underpin this relationship. New technologies, such as collar-mounted tri-axial accelerometers, can provide the continuous behavioural data required to match grooming to non-invasive GC measures (Chapter 1). Using Chacma baboons (Papio ursinus) living on the Cape Peninsula, South Africa as a model system, I identify giving and receiving grooming using tri-axial accelerometers and supervised machine learning methods, with high overall accuracy (~80%) (Chapter 2). I then test what socio-ecological variables predict variation in faecal and urinary GCs (fGCs and uGCs) (Chapter 3). Shorter and rainy days are associated with higher fGCs and uGCs, respectively, suggesting that environmental conditions may impose stressors in the form of temporal bottlenecks. Indeed, I find that short days and days with more rain-hours are associated with reduced giving grooming (Chapter 4), and that this reduction is characterised by fewer and shorter grooming bouts. Finally, I test whether grooming predicts GCs, and find that while there is a long-term negative correlation between grooming and GCs, grooming in the short-term, in particular giving grooming, is associated with higher fGCs and uGCs (Chapter 5). 
I end with a discussion of how the new tools I applied have enabled me to advance our understanding of sociality and stress in primate social systems (Chapter 6).
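Turning a raw tri-axial accelerometer trace into input for a supervised classifier typically starts with fixed-length windows and simple per-axis statistics. The sketch below illustrates only that preprocessing step; the sampling rate, window length and feature set are assumptions, and the thesis's grooming classifiers would consume richer features.

```python
import numpy as np

def window_features(acc, fs=50, win_s=2.0):
    """Cut a tri-axial accelerometer trace (N, 3) into fixed windows and
    compute simple per-axis features (mean, std) as classifier input."""
    n = int(fs * win_s)                   # samples per window
    n_win = len(acc) // n
    wins = acc[: n_win * n].reshape(n_win, n, 3)
    return np.concatenate([wins.mean(axis=1), wins.std(axis=1)], axis=1)

rng = np.random.default_rng(1)
acc = rng.normal(size=(500, 3))           # 10 s of synthetic 50 Hz data
X = window_features(acc)                  # 5 windows x 6 features
```

Each feature row would then be paired with an observed behaviour label (e.g. giving vs receiving grooming) to train a supervised model such as a random forest.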
The Role of Transient Vibration of the Skull on Concussion
Concussion is a traumatic brain injury, usually caused by a direct or indirect blow to the head, that affects brain function. The maximum mechanical impedance of the brain tissue occurs at 450±50 Hz and may be affected by the skull's resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex: the skull deforms and vibrates, like a bell, for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, little research has investigated the effect of transient vibration of the skull. The overall goal of the proposed research is therefore to gain a better understanding of the role of transient vibration of the skull in concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Because of bone's weak magnetic resonance signal, MRI scans struggle to differentiate bone tissue from other structures. One of the most important components of a successful segmentation is high-quality ground truth labels. We therefore introduce a deep learning framework for skull segmentation in which the ground truth labels are created from CT imaging using the standard tessellation language (STL). Furthermore, since the brain region will matter for future work, we explore a new initialization concept for the convolutional neural network (CNN) based on orthogonal moments to improve brain segmentation in MRI. Second, a novel automatic 2D and 3D method to align the facial skeleton is introduced. An important aspect of further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models.
To perform this task, the skull must be precisely aligned in all anatomical planes. We therefore introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. In the 2D version, the concept of using cephalometric landmarks and manual image-grid alignment to construct the training dataset was introduced. This concept was then extended to a 3D version in which the coronal and transverse planes are aligned using a CNN approach. As alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created for the sagittal plane using the Frankfort plane as a framework. Finally, the resonant frequencies of multiple skulls are assessed to determine how skull resonant-frequency vibrations propagate into the brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Theories will then be raised regarding the relation between skull geometry, such as shape and thickness, and vibration-related brain tissue injury, which may result in concussive injury.
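Modal analysis of a meshed structure amounts to solving the generalised eigenproblem K v = ω² M v for the natural frequencies. The toy 3-degree-of-freedom mass-spring chain below (arbitrary stiffness and masses, not skull properties) shows the computation in miniature; a finite-element skull model does the same with much larger K and M matrices.

```python
import numpy as np

# Toy modal analysis: a 3-DOF lumped mass-spring chain standing in for a
# finite-element model; natural frequencies solve K v = w^2 M v.
M = np.diag([1.0, 1.0, 1.0])              # lumped masses (kg), assumed
k = 2.0e5                                 # spring stiffness (N/m), assumed
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])

# reduce the generalised eigenproblem to a standard one via M^-1 K
eigvals = np.linalg.eigvals(np.linalg.inv(M) @ K)
freqs_hz = np.sort(np.sqrt(eigvals.real)) / (2.0 * np.pi)   # modal frequencies
```

Comparing such computed modes against the 450±50 Hz band of maximum brain-tissue impedance is the kind of question the modal analysis of real skull geometries would address.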