Techniques for effective and efficient fire detection from social media images
Social media could provide valuable information to support decision making in
crisis management, such as in accidents, explosions and fires. However, much of
the data from social media are images, which are uploaded at a rate that makes
it impossible for human beings to analyze them. Despite the many works on image
analysis, there are no fire detection studies on social media. To fill this
gap, we propose the use and evaluation of a broad set of content-based image
retrieval and classification techniques for fire detection. Our main
contributions are: (i) the development of the Fast-Fire Detection method
(FFDnR), which combines feature extractor and evaluation functions to support
instance-based learning, (ii) the construction of an annotated set of images
with ground-truth depicting fire occurrences -- the FlickrFire dataset, and
(iii) the evaluation of 36 efficient image descriptors for fire detection.
Using real data from Flickr, our results showed that FFDnR was able to achieve
a precision for fire detection comparable to that of human annotators.
Therefore, our work shall provide a solid basis for further developments on
monitoring images from social media.
Comment: 12 pages, Proceedings of the International Conference on Enterprise
Information Systems. Specifically: Marcos Bedo, Gustavo Blanco, Willian
Oliveira, Mirela Cazzolato, Alceu Costa, Jose Rodrigues, Agma Traina, Caetano
Traina, 2015, Techniques for effective and efficient fire detection from
social media images, ICEIS, 34-4
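The first contribution pairs feature extractors with evaluation (distance) functions to support instance-based learning. As a rough illustration of that pipeline (all names, and the coarse color-histogram descriptor, are invented for illustration and are not the paper's actual code), the sketch below classifies an image by majority vote among its nearest labeled examples:

```python
# Illustrative sketch of instance-based fire detection: a color-histogram
# feature extractor plus a nearest-neighbor evaluation function.
# Function names and the descriptor are assumptions, not the paper's code.
from math import sqrt

def extract_histogram(pixels, bins=4):
    """Quantize RGB pixels into a coarse, normalized color histogram."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def euclidean(a, b):
    """Evaluation (distance) function between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(query, labeled_instances, k=3):
    """Instance-based learning: label by majority vote of the k nearest examples."""
    neighbors = sorted(labeled_instances, key=lambda inst: euclidean(query, inst[0]))[:k]
    votes = sum(1 for _, label in neighbors if label == "fire")
    return "fire" if votes > k // 2 else "not-fire"
```

Fire regions are dominated by red/orange pixels, so even this crude descriptor separates the toy examples; the real method evaluates 36 much stronger descriptors.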
Fire detection from social media images by means of instance-based learning
Social media can provide valuable information to support decision making in crisis management, such as in accidents, explosions, and fires. However, much of the data from social media are images, which are uploaded at a rate that makes it impossible for human beings to analyze them. To cope with that problem, we design and implement a database-driven architecture for fast and accurate fire detection named FFireDt. The design of FFireDt uses instance-based learning through indexed similarity queries expressed as an extension of the relational Structured Query Language (SQL). Our contributions are: (i) the design of the Fast-Fire Detection architecture (FFireDt), which achieves efficiency and efficacy rates that rival the state-of-the-art techniques; (ii) the sound evaluation of 36 image descriptors for the task of image classification in social media; (iii) the evaluation of content-based indexing with respect to the construction of instance-based classification systems; and (iv) the curation of a ground-truth annotated dataset of fire images from social media. Using real data from Flickr, the experiments showed that the FFireDt system was able to achieve a precision for fire detection comparable to that of human annotators. Our results are promising for the engineering of systems to monitor images uploaded to social media services.
Funding: FAPESP, CNPq, CAPES, STIC-AmSud; RESCUER project, funded by the European Commission (Grant: 614154) and by the CNPq/MCTI (Grant: 490084/2013-3).
International Conference on Enterprise Information Systems - ICEIS (17. 2015 Barcelona
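The abstract describes instance-based learning driven by indexed similarity queries expressed as an extension of SQL. Standard SQL has no similarity operator, so the hedged sketch below approximates the idea in SQLite with a user-defined distance function and ORDER BY ... LIMIT; a real deployment would use a metric index, and the schema, labels, and feature vectors here are invented:

```python
# Hedged sketch: approximating a k-NN similarity query over an image table.
# SQLite lacks a similarity extension, so DIST is a user-defined function
# and the "index" is just ORDER BY ... LIMIT (no metric access method).
import json
import sqlite3

def l2(a_json, b_json):
    """Euclidean distance between two JSON-encoded feature vectors."""
    a, b = json.loads(a_json), json.loads(b_json)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

conn = sqlite3.connect(":memory:")
conn.create_function("DIST", 2, l2)
conn.execute("CREATE TABLE images (id INTEGER, label TEXT, feat TEXT)")
conn.executemany(
    "INSERT INTO images VALUES (?, ?, ?)",
    [(1, "fire", json.dumps([0.9, 0.1])),
     (2, "not-fire", json.dumps([0.1, 0.9])),
     (3, "fire", json.dumps([0.8, 0.2]))])

def knn_labels(query_feat, k):
    """Conceptual 'k nearest neighbors' query expressed in plain SQL."""
    cur = conn.execute(
        "SELECT label FROM images ORDER BY DIST(feat, ?) LIMIT ?",
        (json.dumps(query_feat), k))
    return [row[0] for row in cur]
```

The returned labels can then feed the same majority-vote classification step used in instance-based learning.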
Content-Based Indexing and Retrieval Using MPEG-7 and X-Query in Video Data Management Systems
Current advances in multimedia technology enable easy capture and encoding of digital video. As a result, video data is growing rapidly and becoming very important in our lives, because video can transfer a large amount of knowledge by combining text, graphics, and even images. Despite the vast growth of video, the effectiveness of its usage is very limited due to the lack of a complete technology for the organization and retrieval of video data. To date, there is no "perfect" solution for a complete video data-management technology that can fully capture the content of video and index the video parts according to their contents, so that users can intuitively retrieve specific video segments. We have found that successful content-based video data-management systems depend on three key components: key-segment extraction, content description, and video retrieval. While it is almost impossible for current computer technology to perceive the content of a video well enough to identify its key-segments correctly, a system can understand the content of a specific video type more accurately by identifying the typical events that happen just before or after the key-segments (specific-domain approach). Thus, we have proposed the concept of a customisable video segmentation module, which integrates the segmentation techniques suitable for the current type of video. The identified key-segments are then described using standard video content descriptions to enable content-based retrievals. For retrieval, we have implemented XQuery, currently the most recent XML query language and the most powerful compared to older languages such as XQL and XML-QL.
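To make the retrieval side concrete, the hedged sketch below selects key-segments from a tiny MPEG-7-style description by annotation keyword. The thesis uses XQuery for this; here Python's ElementTree plays the same role, and the element names are simplified stand-ins rather than the real MPEG-7 schema:

```python
# Hedged sketch: keyword retrieval over an MPEG-7-style XML description.
# Element names (VideoSegment, MediaTime, Annotation) are simplified
# stand-ins for actual MPEG-7 descriptors; times are in seconds.
import xml.etree.ElementTree as ET

MPEG7_DOC = """
<Mpeg7>
  <VideoSegment id="s1"><MediaTime start="0" end="12"/><Annotation>goal</Annotation></VideoSegment>
  <VideoSegment id="s2"><MediaTime start="12" end="40"/><Annotation>midfield play</Annotation></VideoSegment>
  <VideoSegment id="s3"><MediaTime start="40" end="55"/><Annotation>goal celebration</Annotation></VideoSegment>
</Mpeg7>
"""

def segments_matching(keyword):
    """Return (id, start, end) for segments whose annotation mentions keyword."""
    root = ET.fromstring(MPEG7_DOC)
    out = []
    for seg in root.iter("VideoSegment"):
        if keyword in seg.findtext("Annotation", ""):
            t = seg.find("MediaTime")
            out.append((seg.get("id"), int(t.get("start")), int(t.get("end"))))
    return out
```

An XQuery over the same document would express this as a FLWOR expression filtering `VideoSegment` elements by their `Annotation` text.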
Acquisition, processing, archiving and diffusion of endoscopy exams (Aquisição, tratamento, arquivo e difusão de exames de endoscopia)
Master's dissertation in Biomedical Engineering (Mestrado Integrado em Engenharia Biomédica).
Among the different kinds of endoscopic procedures, esophagogastroduodenoscopy plays a major role
because it is the ideal method to examine the upper digestive tract, as well as to detect numerous
gastroenterological diseases. The result of such procedures is usually a written report that comprises a set of
frames captured during the examination, sometimes complemented with a video. Nowadays only the
images are stored along with the endoscopic report. Not storing the video may lead to discomfort
concerning the patient’s well-being, as well as an increase of costs and time spent, because it is often
necessary to review and validate the diagnostic hypothesis, and compare video segments in future exams.
Even in the cases in which the information is stored, the lack of reuse and sharing of information and
videos among institutions contributes, once again, to unnecessary repetition of exams.
Besides solving the problems mentioned above, the existence of an endoscopic video archive would be an
asset because it would enable research and investigation activities. Furthermore, it would make exams
available to serve as a reference for the study of similar cases.
In this work, a comprehensive solution for the acquisition, processing, archiving and diffusion of endoscopic
procedures is proposed. The aim is to provide a system capable of managing all the administrative and
clinical information (including audiovisual content) from the acquisition process to the process of searching
previous exams for comparison with new cases. In order to ensure the lexical compatibility of the information
shared in the system, a standardized endoscopic vocabulary, the Minimal Standard Terminology (MST)
was used. In this context, a device for the acquisition of the endoscopic video was designed (MIVbox),
regardless of the endoscopic camera that is used. All the information is stored in a structured and
standardized way, allowing its reuse and sharing. To facilitate this sharing process, the video undergoes
some processing steps in order to obtain a summarized video and the respective content characteristics.
The proposed solution provides an annotation system that enables content querying, thus becoming a
versatile tool for research in this area. The system also includes a streaming module that transmits the
endoscopic exam in real time over a communication channel with one-way video and two-way audio,
allowing professionals absent from the exam room to give their opinion remotely.
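To illustrate how annotation with standardized MST terms enables content querying over the archive, the hedged sketch below ranks archived exams by the overlap between their MST terms and a query annotation, so a new case can be compared with similar archived ones. The records, terms, and file names are invented for illustration:

```python
# Hedged sketch: content querying over an archive of exams annotated with
# Minimal Standard Terminology (MST) terms. Records and terms are invented.
exams = [
    {"exam_id": "E001", "mst_terms": {"esophagus", "ulcer"}, "video": "e001_summary.mp4"},
    {"exam_id": "E002", "mst_terms": {"stomach", "polyp"}, "video": "e002_summary.mp4"},
    {"exam_id": "E003", "mst_terms": {"esophagus", "varices"}, "video": "e003_summary.mp4"},
]

def find_similar(query_terms):
    """Rank archived exams by MST-term overlap with the query annotation."""
    scored = [(len(set(query_terms) & e["mst_terms"]), e["exam_id"]) for e in exams]
    return [eid for score, eid in sorted(scored, reverse=True) if score > 0]
```

Because MST fixes the vocabulary, the same term set produced during acquisition can later drive retrieval, without free-text matching across institutions.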
Content-based video indexing for sports applications using integrated multi-modal approach
This thesis presents research based on an integrated multi-modal approach to sports video indexing and retrieval. By combining specific features extractable from multiple (audio-visual) modalities, both generic structure and specific events can be detected and classified. During browsing and retrieval, users benefit from the integration of high-level semantics and descriptive mid-level features such as whistles and close-up views of players. The main objective is to contribute to the three major components of sports video indexing systems. The first component is a set of powerful techniques to extract audio-visual features and semantic content automatically; the main purposes are to reduce manual annotation and to summarize lengthy content into a compact, meaningful and more enjoyable presentation. The second component is an expressive and flexible indexing technique that supports gradual index construction; the indexing scheme determines the methods by which users can access a video database. The third and last component is a query language that can generate dynamic video summaries for smart browsing and support user-oriented retrievals.
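As a toy illustration of the integrated multi-modal idea (not the thesis's actual detectors), the sketch below fuses a per-segment audio cue (whistle confidence) and visual cue (close-up confidence) into a weighted score and flags segments that pass a threshold; the weights, threshold, and score fields are assumptions:

```python
# Hedged sketch: late fusion of audio and visual cues for event detection.
# Weights, threshold, and the per-segment score fields are illustrative.
def detect_events(segments, audio_w=0.5, visual_w=0.5, threshold=0.6):
    """Flag a segment as an event when the weighted modality scores pass."""
    events = []
    for seg in segments:
        score = audio_w * seg["whistle_score"] + visual_w * seg["closeup_score"]
        if score >= threshold:
            events.append(seg["id"])
    return events
```

Late fusion like this keeps each modality's detector independent, so a weak cue in one modality (a faint whistle) can be compensated by a strong cue in the other (a clear close-up view).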