81 research outputs found
Smart Sensor Technologies for IoT
The recent development in wireless networks and devices has led to novel services that utilize wireless communication on a new level. Much effort and many resources have been dedicated to establishing new communication networks that will support machine-to-machine communication and the Internet of Things (IoT). In these systems, various smart and sensory devices are deployed and connected, enabling large amounts of data to be streamed. Smart services represent new trends in mobile services, i.e., a completely new spectrum of context-aware, personalized, and intelligent services and applications. A variety of existing services utilize information about the position of the user or mobile device. The position of a mobile device is often obtained using the Global Navigation Satellite System (GNSS) chips that are integrated into all modern mobile devices (smartphones). However, GNSS is not always a reliable source of position estimates due to multipath propagation and signal blockage. Moreover, integrating GNSS chips into all devices might have a negative impact on the battery life of future IoT applications. Therefore, alternative solutions to position estimation should be investigated and implemented in IoT applications. This Special Issue, “Smart Sensor Technologies for IoT”, aims to report on some of the recent research efforts on this increasingly important topic. The twelve accepted papers in this issue cover various aspects of Smart Sensor Technologies for IoT.
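One family of GNSS alternatives alluded to above is radio-based ranging from infrastructure that is already present. The following is a minimal sketch, not taken from any paper in the issue: the reference power, path-loss exponent, and anchor layout are all illustrative assumptions.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Estimate distance (m) from RSSI via the log-distance path-loss model.
    tx_power_dbm is the assumed RSSI at 1 m; path_loss_exp models the
    environment (2.0 corresponds to free space)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Least-squares 2-D position from >= 3 anchor (x, y) points and ranges.
    Linearises each circle equation against the first anchor."""
    (x0, y0), d0 = anchors[0], distances[0]
    a_rows, b_vals = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        # Subtracting circle 0 from circle i removes the quadratic terms.
        a_rows.append((2 * (xi - x0), 2 * (yi - y0)))
        b_vals.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations directly (no external libraries).
    s11 = sum(a[0] * a[0] for a in a_rows)
    s12 = sum(a[0] * a[1] for a in a_rows)
    s22 = sum(a[1] * a[1] for a in a_rows)
    t1 = sum(a[0] * b for a, b in zip(a_rows, b_vals))
    t2 = sum(a[1] * b for a, b in zip(a_rows, b_vals))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Example: three anchors, true position (2, 3), noise-free ranges.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = (2.0, 3.0)
dists = [math.hypot(true[0] - x, true[1] - y) for x, y in anchors]
x, y = trilaterate(anchors, dists)
```

In practice RSSI ranging is noisy, so deployments typically fuse many readings or switch to time-of-flight ranging; the least-squares step above is what absorbs redundant anchors.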
Towards Computational Efficiency of Next Generation Multimedia Systems
To address the throughput demands of complex applications (such as multimedia), a next-generation system designer needs to co-design and co-optimize the hardware and software layers. Hardware/software knobs must be tuned in synergy to increase throughput efficiency. This thesis provides such algorithmic and architectural solutions while considering new technology challenges (power caps and memory aging). The goal is to maximize throughput efficiency under timing and hardware constraints.
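The idea of tuning hardware/software knobs in synergy under a power cap can be illustrated with a toy design-space sweep. The throughput and power models below are illustrative assumptions, not the thesis's actual models.

```python
# Hypothetical design-space exploration: pick the (frequency, cores) knob
# setting that maximises throughput without exceeding a power cap.

def throughput(freq_ghz, cores):
    # Assumed near-linear frequency scaling with diminishing parallel returns.
    return freq_ghz * cores ** 0.8  # frames per second (toy units)

def power(freq_ghz, cores):
    # Assumed cubic dependence on frequency, linear in active cores.
    return 0.5 * cores * freq_ghz ** 3  # watts (toy units)

def best_knobs(freqs, core_counts, power_cap_w):
    """Exhaustively evaluate every knob combination and keep the
    highest-throughput point that respects the power cap."""
    feasible = [(throughput(f, c), f, c)
                for f in freqs for c in core_counts
                if power(f, c) <= power_cap_w]
    return max(feasible)

tp, f, c = best_knobs([0.8, 1.2, 1.6, 2.0], [1, 2, 4], power_cap_w=8.0)
```

Note that under this cap the sweep prefers more cores at a lower clock over fewer cores at a higher clock, because power grows cubically with frequency; this is the kind of cross-layer trade-off the thesis optimises.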
Runtime methods for energy-efficient, image processing using significance driven learning.
Ph.D. Thesis.
Image and video processing applications are opening up a whole range of opportunities for processing at the "edge" for IoT applications, as the demand for high-accuracy processing of high-resolution images increases. However, this comes with an increase in the quantity of data to be processed and stored, causing a significant increase in the computational challenges. There is growing interest in developing hardware systems that provide energy-efficient solutions to this challenge. The challenges in image processing are unique because an increase in resolution not only increases the data to be processed but also greatly increases the amount of detailed information that can be scavenged from the data. This thesis addresses the concept of extracting the significant image information to enable the data to be processed intelligently within a heterogeneous system.
We propose a unique way of defining image significance, based on what causes us to react when something "catches our eye", whether it is static or dynamic, and whether it is in our central field of focus or our peripheral vision. This significance technique proves to be a relatively economical process in terms of energy and computational effort.
We investigate opportunities for further computational and energy efficiency that become available through elective use of heterogeneous system elements. We utilise significance to adaptively select regions of interest for selective levels of processing, dependent on their relative significance. We further demonstrate that, by exploiting the computational slack time released by this process, we can throttle the processor speed to achieve greater energy savings. This yields a reduction in computational effort and an improvement in energy efficiency, a process that we term adaptive approximate computing.
We demonstrate that our approach reduces energy by 50% to 75%, dependent on the user's quality demand, for a real-time performance requirement of 10 fps on a WQXGA image, when compared with an existing approach that is agnostic of significance. We further hypothesise that, through the use of heterogeneous elements, savings of up to 90% could be achievable in both performance and energy when compared with running OpenCV on the CPU alone.
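The significance-driven selection described above can be sketched in miniature: score image tiles by a cheap proxy for "what catches the eye" and spend full effort only on the most significant tiles. The gradient-energy proxy, tile size, and keep fraction below are illustrative assumptions, not the thesis's actual significance measure or parameters.

```python
def tile_significance(tile):
    """Sum of absolute horizontal/vertical pixel differences: a cheap
    edge-energy proxy for visual significance."""
    h, w = len(tile), len(tile[0])
    s = 0
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                s += abs(tile[r][c] - tile[r][c + 1])
            if r + 1 < h:
                s += abs(tile[r][c] - tile[r + 1][c])
    return s

def select_rois(tiles, keep_fraction=0.25):
    """Return the indices of the most significant tiles; only these would
    receive full-quality processing, the rest a cheap approximation."""
    ranked = sorted(range(len(tiles)),
                    key=lambda i: tile_significance(tiles[i]), reverse=True)
    keep = max(1, int(len(tiles) * keep_fraction))
    return set(ranked[:keep])

flat = [[10] * 8 for _ in range(8)]                            # uniform: low significance
edgy = [[(c % 2) * 255 for c in range(8)] for r in range(8)]   # high contrast
rois = select_rois([flat, flat, flat, edgy], keep_fraction=0.25)
```

The slack time freed on the low-significance tiles is what an adaptive scheme can then convert into energy savings, e.g. by lowering the processor's clock (DVFS), as the abstract describes.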
Motion estimation algorithm and its hardware architecture for HEVC
Doctorate in Electrical Engineering.
Video coding is used in applications such as video surveillance, video conferencing, video streaming, video broadcasting, and video storage. In a typical video coding standard, many algorithms are combined to compress a video; of these, motion estimation is the most complex task. Hence, it is necessary to implement this task in real time using appropriate VLSI architectures. This thesis proposes a new fast motion estimation algorithm and its real-time implementation. The results show that
the proposed algorithm and its motion estimation hardware architecture outperform the state of the art. The proposed architecture operates at a maximum frequency of 241.6 MHz and is able to process 1080p@60Hz video with all variable block sizes specified in the HEVC standard, as well as a motion vector search range of up to ±64 pixels.
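The block-matching task at the heart of motion estimation can be sketched as an exhaustive search minimising the sum of absolute differences (SAD). This baseline full search is only the reference point; the thesis's fast algorithm and hardware pipeline are more elaborate, and the frame sizes and search range below are toy assumptions.

```python
def sad(cur, ref, bx, by, mvx, mvy, n):
    """Sum of absolute differences between the n x n current block at
    (bx, by) and the reference block displaced by (mvx, mvy)."""
    total = 0
    for r in range(n):
        for c in range(n):
            total += abs(cur[by + r][bx + c] - ref[by + mvy + r][bx + mvx + c])
    return total

def full_search(cur, ref, bx, by, n, search_range):
    """Test every candidate motion vector in a +/-search_range window and
    return the one with the lowest SAD."""
    h, w = len(ref), len(ref[0])
    best = (float("inf"), (0, 0))
    for mvy in range(-search_range, search_range + 1):
        for mvx in range(-search_range, search_range + 1):
            if (0 <= by + mvy and by + mvy + n <= h and
                    0 <= bx + mvx and bx + mvx + n <= w):
                best = min(best, (sad(cur, ref, bx, by, mvx, mvy, n),
                                  (mvx, mvy)))
    return best[1]

# Toy frames: `cur` is `ref` shifted left by 2 and up by 1, so the block
# at (8, 8) should match the reference displaced by (+2, +1).
ref = [[(r * 16 + c) % 251 for c in range(32)] for r in range(32)]
cur = [[ref[r + 1][c + 2] if r + 1 < 32 and c + 2 < 32 else 0
        for c in range(32)] for r in range(32)]
mv = full_search(cur, ref, 8, 8, 4, 4)
```

The cost of this exhaustive search (every vector in a ±64 window, for every block size HEVC allows) is exactly why fast search algorithms and dedicated VLSI architectures such as the one proposed are needed for real-time 1080p@60Hz operation.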
Laser Scanning as a Methodology for the 3-D Digitization of Archaeological Ship Timbers: A Case Study Using the World Trade Center Shipwreck
Accurate documentation of cultural heritage material is essential to its study and interpretation by archaeologists. In order to continually refine the documentation process, technological advances are incorporated into traditional methodologies. This study demonstrates the utility of high-definition laser scanning for the documentation of disarticulated timbers from the ship remains found during the excavation of the former site of the World Trade Center in New York City. Laser scanned models of the timbers were used to virtually re-assemble the ship, produce traditional scaled drawings for standard documentation, loft a series of ship lines for reconstruction modeling, and to produce a scaled 3-D printed model of the ship
Low complexity in-loop perceptual video coding
The tradition of broadcast video is today complemented by user-generated content, as portable devices support video coding. Similarly, computing is becoming ubiquitous, with the Internet of Things (IoT) incorporating heterogeneous networks to communicate with personal and/or infrastructure devices. In both cases the emphasis is on bandwidth and processor efficiency, meaning increasing the signalling options in video encoding. Consequently, the assessment of pixel differences applies a uniform cost in order to be processor-efficient, whereas the Human Visual System (HVS) has non-uniform sensitivity based upon lighting, edges, and textures. Existing perceptual assessments are natively incompatible and processor-demanding, making perceptual video coding (PVC) unsuitable for these environments. This research enables existing perceptual assessment at the native level using low-complexity techniques, before producing new pixel-based image quality assessments (IQAs). To manage these IQAs, a framework was developed and implemented in the High Efficiency Video Coding (HEVC) encoder. This resulted in bit redistribution, where more bits and smaller partitioning were allocated to perceptually significant regions. Using an HEVC-optimised processor, the timing increase was < +4% for video streaming and < +6% for video recording applications, one third of that of an existing low-complexity PVC solution. Future work should be directed towards perceptual quantisation, which offers the potential for perceptual coding gain.
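Bit redistribution of the kind described can be sketched as a per-block quantisation adjustment: raise the quantisation parameter (QP) on busy textures, where the HVS masks distortion, and lower it on smooth regions. The variance proxy, thresholds, and offset range below are illustrative assumptions, not the thesis's actual IQA or framework.

```python
def block_variance(block):
    """Pixel variance of a block: a cheap proxy for texture masking."""
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

def perceptual_qp(base_qp, block, low_var=100.0, high_var=2000.0, max_offset=4):
    """Map block variance to a QP offset in [-max_offset, +max_offset]:
    smooth blocks get extra bits, heavily textured blocks give bits back."""
    v = block_variance(block)
    if v <= low_var:
        return base_qp - max_offset   # smooth region: distortion is visible
    if v >= high_var:
        return base_qp + max_offset   # busy texture: distortion is masked
    frac = (v - low_var) / (high_var - low_var)
    return base_qp + round((2 * frac - 1) * max_offset)

smooth = [[128] * 8 for _ in range(8)]
busy = [[(r * 37 + c * 91) % 256 for c in range(8)] for r in range(8)]
qp_smooth = perceptual_qp(32, smooth)
qp_busy = perceptual_qp(32, busy)
```

Because the decision uses only per-block statistics already available to an encoder, a scheme of this shape stays low-complexity, which is the constraint the research above works within.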
Evaluation of video quality of experience across various technologies
Master's in Electronic and Telecommunications Engineering.
Nowadays the Internet is associated with many services, and there is a marked increase in the number of users joining them. In this context, service providers are required to guarantee a minimum quality for network services.
The Quality of Experience of services is crucial in the development of network services. It is also noteworthy that the increase of traffic in multimedia services, including video streaming, increases the probability of congesting the networks. From the perspective of the service provider, monitoring is a solution to avoid network saturation.
This dissertation therefore proposes to develop a platform that allows multimedia traffic monitoring in the Meo Go service provided by the operator Portugal Telecom Comunicações.
The architecture of adaptive streaming over HTTP was studied and tested to obtain quality of experience metrics. This adaptive streaming technique uses Smooth Streaming, an architecture made by Microsoft, which is used in the Meo Go service.
The metrics obtained from the video player are then monitored. This analysis is done both objectively and subjectively. In the objective implementation of the method, which aims to predict the consumer's Quality of Experience, the selected metrics were derived from the state and performance of the network and the terminal device. The obtained metrics aim to simulate human judgement in scoring video quality. Subjectively, a survey based on a questionnaire was conducted in order to compare the methods; for this phase an online platform was created to collect a greater number of ratings for subsequent data processing.
In the obtained results, firstly at the level of the Smooth Streaming player, the adaptive streaming implementation technique is shown. Next, test scenarios were created to demonstrate the functioning of the method in many cases, with greater relevance given to those with higher dynamic complexity. The subjective and objective methods yield values that confirm the architecture of the implemented module. Over time, the performance of scoring the quality of video streaming services approaches that of human mental assessment.
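An objective QoE predictor of the kind the dissertation monitors can be sketched as a weighted combination of session metrics mapped to a 1-5 mean-opinion score. The metric names, weights, and scale mapping below are illustrative assumptions, not the dissertation's actual model.

```python
def predict_mos(avg_bitrate_kbps, max_bitrate_kbps, stall_count,
                stall_seconds, quality_switches):
    """Combine streaming-session metrics into a 1..5 mean-opinion score:
    start from a perfect score and subtract penalties for bitrate
    shortfall, rebuffering, and quality-level instability."""
    score = 5.0
    score -= 2.0 * (1 - avg_bitrate_kbps / max_bitrate_kbps)  # bitrate shortfall
    score -= 0.7 * stall_count + 0.1 * stall_seconds          # rebuffering
    score -= 0.2 * quality_switches                           # instability
    return max(1.0, min(5.0, score))

# A flawless session versus one with stalls and quality oscillation.
perfect = predict_mos(4000, 4000, stall_count=0, stall_seconds=0,
                      quality_switches=0)
degraded = predict_mos(2000, 4000, stall_count=2, stall_seconds=6,
                       quality_switches=3)
```

In a monitoring platform like the one described, such a predictor would be calibrated against the questionnaire-based subjective scores, which is exactly the objective-versus-subjective comparison the dissertation performs.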
- …