Process mining: A recent framework for extracting a model from event logs
Business Process Management (BPM) is a well-known discipline, with roots in earlier theories on optimizing management and improving business results. BPM can be traced back to the beginning of this century, although it has gained particular attention in more recent years. Traditional BPM approaches usually start from the top and analyse the organization according to known rules derived from its structure or from the type of business. Process Mining (PM) is a completely different approach, since it aims to extract knowledge from event logs, which are widely available in many of today's organizations. PM uses specialized data-mining algorithms to uncover patterns and trends in these logs, and it is an alternative for situations where a formal process specification is not easily obtainable or not cost-effective. This paper presents a literature review of the major works published on this theme.
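Many of the discovery algorithms covered in such reviews start from the directly-follows relation over an event log. As a minimal, purely illustrative Python sketch (the toy log and activity names are invented, not taken from any surveyed work):

```python
from collections import defaultdict

def directly_follows(event_log):
    """Count how often activity a is directly followed by b in any trace."""
    counts = defaultdict(int)
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return dict(counts)

# Toy event log: each trace is the ordered activity list of one case.
log = [
    ["register", "check", "approve"],
    ["register", "check", "reject"],
    ["register", "check", "approve"],
]
```

Discovery algorithms such as the alpha miner build their process models on top of exactly this kind of relation.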
Inventário automático de sinais de trânsito: um sistema de mapeamento móvel
The inventory and georeferencing of street furniture is a fundamental process for the entities responsible for managing these infrastructures, providing indispensable information to support decision-making. However, collecting and processing the information required for this inventory is time-consuming due to the extent of the urban network. Mobile Mapping Systems (MMS) speed up this survey, but the information must still be processed to locate and identify the objects of interest. This paper presents an approach based on Computer Vision techniques that automates the location and identification of street furniture in an image sequence acquired with an MMS. At this feasibility-study and prototyping stage, the approach is restricted to vertical traffic signs. The main contribution of this paper is a new method for locating and identifying traffic signs in outdoor environments, based on color segmentation, shape recognition using contour signatures, and sign identification through monochromatic correlation. The results show an overall success rate of around 75% and a very significant productivity increase in the information-processing phase.
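The contour-signature step can be illustrated with a radial signature: the average distance from the contour's centroid, binned by angle. This is a pure-Python sketch of the general technique, not the authors' actual implementation:

```python
import math

def radial_signature(contour, n_bins=8):
    """Average centroid distance per angular bin, normalized so the
    largest bin equals 1 and the signature is scale-invariant."""
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for x, y in contour:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        b = int(ang / (2 * math.pi) * n_bins) % n_bins
        sums[b] += math.hypot(x - cx, y - cy)
        counts[b] += 1
    sig = [sums[i] / counts[i] if counts[i] else 0.0 for i in range(n_bins)]
    peak = max(sig) or 1.0
    return [v / peak for v in sig]
```

A circular sign yields a nearly flat signature, while a triangular sign shows three pronounced peaks; comparing signatures against shape templates is one common way to perform the recognition step.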
A comprehensive taxonomy for three-dimensional displays
Even though three-dimensional (3D) displays have been introduced in relatively recent times in the context of display technology, they have undergone a rapid evolution, to the point that a plethora of equipment able to reproduce dynamic three-dimensional scenes in real time is now becoming commonplace in the consumer market.
This paper’s main contributions are (1) a clear definition of a 3D display, based on the visual depth cues supported, and (2) a hierarchical taxonomy of classes and subclasses of 3D displays, based on a set of properties that allows an unambiguous and systematic classification scheme for three-dimensional displays.
Five main types of 3D displays are thus defined (two of them new), aiming to provide a taxonomy that is largely backwards-compatible but also clarifies prior inconsistencies in the literature. This well-defined outline should also enable exploration of the 3D display space and the devising of new 3D display systems.
Guest editorial: high dynamic range imaging
High Dynamic Range (HDR) imagery is a step change in imaging technology that is not limited to the 8 bits per pixel per color channel to which traditional, low-dynamic-range digital images have been constrained. These restrictions mean that current and relatively novel imaging technologies, including stereoscopic, HD and ultra-HD imaging, do not provide an accurate representation of the lighting in a real-world environment. HDR technology has enabled the capture, storage, handling and display of content that supports real-world luminance, and has facilitated the use of novel rendering methods, such as image-based lighting, in special effects, video games and advertising; it is also compatible with the other imaging methods and will certainly be a requirement of future high-fidelity imaging format specifications. However, HDR still has challenges to overcome before it can become a fully fledged, commercially successful technology. This special issue goes some way toward rectifying these limitations and also shines a light on potential future uses and directions of HDR.
Instant global illumination on the GPU using OptiX
OptiX, a programmable ray tracing engine, has recently been made available by NVidia, relieving rendering researchers from the idiosyncrasies of efficient ray tracing programming and allowing them to concentrate on higher-level algorithms, such as interactive global illumination.
This paper evaluates the performance of the Instant Global Illumination algorithm on OptiX, as well as the impact of three different optimization techniques: imperfect visibility, downsampling and interleaved sampling. Results show that interactive frame rates are indeed achievable, although the combination of all optimization techniques leads to artifacts that compromise image quality. Suggestions are presented on possible ways to overcome these limitations.
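Of the optimizations mentioned, interleaved sampling has a simple core idea: each pixel in an n x n tile shades with only its own subset of the virtual point lights (VPLs), so neighbouring pixels together cover the full set. An illustrative Python sketch (the VPL list is a stand-in, not the paper's code):

```python
def vpl_subset(x, y, vpls, n=3):
    """Interleaved sampling: the pixel at (x, y) uses only every
    (n*n)-th virtual point light, offset by its position in an
    n x n pixel tile."""
    k = (y % n) * n + (x % n)
    return vpls[k::n * n]

vpls = list(range(18))  # stand-in for 18 virtual point lights
```

Neighbouring pixels thus see disjoint VPL subsets that jointly cover the whole set, trading structured noise (typically removed by a filter over the tile) for an n-squared reduction in shadow rays per pixel.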
Workload distribution for ray tracing in multi-core systems
One of the features that made interactive ray tracing possible over the last few years was the careful exploitation of the computational power and parallelism available on modern multicore processors. Multithreaded interactive ray tracing engines have to share the workload (rays to be processed) among rendering threads. This may be achieved by storing tasks in a shared FIFO queue accessed by all threads. Accessing this shared data structure requires a data access control mechanism that ensures the structure is not corrupted, and this mechanism must incur minimal overhead so that performance is not penalized. This paper proposes a lock-free data access control mechanism for such a queue, which avoids all locks by carefully reordering instructions. This technique is compared with a classical lock-based approach and with a conservative local technique, where each thread maintains its own queue of tasks and shares nothing with other threads. Although the local approach outperforms the other two due to very good load-balancing conditions, we demonstrate that the lock-free approach outperforms the lock-based one for large processor counts. Efficient and reliable sharing of data structures within a shared-memory system is becoming a very relevant problem with the advent of many-core processors, and lock-free approaches are a promising means of achieving this goal.
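The idea of avoiding locks through careful ordering of operations is classically illustrated by Lamport's single-producer/single-consumer ring buffer, in which each index has exactly one writer. This Python sketch shows the principle only; the paper's shared work queue is more general, and CPython's GIL stands in for the memory barriers a C implementation would need:

```python
class SPSCQueue:
    """Lock-free single-producer/single-consumer ring buffer.
    Correctness relies on only the producer writing `tail` and only
    the consumer writing `head`, and on publishing each index update
    only after the corresponding slot access."""
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)  # one slot kept empty
        self.head = 0  # next slot to read  (consumer-owned)
        self.tail = 0  # next slot to write (producer-owned)

    def push(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False            # queue full
        self.buf[self.tail] = item  # write the slot first...
        self.tail = nxt             # ...then publish it
        return True

    def pop(self):
        if self.head == self.tail:
            return None             # queue empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```

The instruction ordering is the whole trick: because the slot write happens before the index publish, the consumer can never observe a slot that has not been fully written, with no lock anywhere.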
Safety Risk Assessment Methodologies – The Hi Fly operator case
This dissertation arises from the constant growth of the aviation industry and the resulting increased exposure to hazards and risks. This increase creates a growing need for air operators, airports, and maintenance organisations to create, implement and improve risk management systems to ensure the safety of operations, passengers, crews, and staff.
This dissertation focuses on choosing the risk analysis method most suitable for Hi Fly. The method must comply with all legislation in force and the standards imposed by Hi Fly. To this end, a literature review of risk analysis methods suitable for aviation was carried out. This analysis begins by describing the risk management system proposed by the International Civil Aviation Organisation. Then, each risk analysis method's process, structure, and objectives are described.
A multi-criteria decision analysis is used, specifically the MACBETH methodology (Measuring Attractiveness by a Categorical Based Evaluation Technique), to choose the most suitable method for Hi Fly's operations. The first phase of the application of this technique consists of the creation of key performance areas and performance indicators. From these, the second phase consists of creating a survey for the Safety Links in each department and for the members of the Safety department, with questions adapted so that the answers fit a rating scale. Respondents are asked to rank each key performance area by its relevance and to answer each question on a scale of one to five. As the Safety department is more familiar with the existing risk analysis methods, a second survey is conducted there to compare the methods on each indicator. The last phase involves assigning weights to the key performance areas according to their average relevance. The evaluation is performed with the M-MACBETH software tool, designed to apply the MACBETH methodology, and yields the most suitable method for Hi Fly's operations.
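The final aggregation step of such a model is a weighted additive score per method. MACBETH itself derives its value scales from pairwise qualitative judgements; this sketch only illustrates the weighted aggregation, with invented performance areas and numbers, not Hi Fly's survey data:

```python
def weighted_score(weights, scores):
    """Weighted additive aggregation: weights per key performance area,
    scores of one method on each area (both dicts share their keys)."""
    return sum(weights[a] * scores[a] for a in weights) / sum(weights.values())

# Invented illustration only: area names, weights and scores are hypothetical.
weights = {"severity_prediction": 5, "mitigation_barriers": 4, "update_frequency": 1}
ercs = {"severity_prediction": 4, "mitigation_barriers": 4, "update_frequency": 3}
other = {"severity_prediction": 3, "mitigation_barriers": 3, "update_frequency": 5}
```

With these made-up numbers, the method scoring higher on the heavily weighted areas wins even if it scores lower on the lightly weighted one, which is exactly how the survey-derived weights steer the final choice.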
The survey results from the various departments show that the most relevant areas in a risk analysis are the ability to predict the impact and severity of an event and the creation of mitigating barriers and controls. The survey of the Safety department confirmed this result. As for the indicators, the results differ, but both agree that the least relevant indicator is the frequency and quality of the updates that the methods receive. These data define the weights entered in the software and, together with the judgements made between the indicators for each method by the two specialists in the Safety department, satisfy the criteria necessary for using the decision-support software. Using the data entered, the tool indicated that the risk analysis method best suited to Hi
Fly's operations is the European Risk Classification System (ERCS).
Foreword to the special section on the Spring Conference on Computer Graphics 2015 (SCCG'2015)
[Excerpt] It is our pleasure to present this special section of Computers & Graphics (C&G), featuring the selected best papers presented at the 31st Spring Conference on Computer Graphics 2015 (www.sccg.sk), which was held April 22–24, 2015 in Smolenice, Slovakia. The venue is probably the oldest regular annual meeting of computer graphics in Central Europe, covering all relevant innovative ideas in computer graphics, image processing and their applications. The philosophy of SCCG is to bring together top experts and young researchers in CG in order to support a good and sustained communication channel for East–West European exchange of prospective ideas. [...]
A real-time distributed software infrastructure for cooperating mobile autonomous robots
Cooperating mobile autonomous robots have been generating growing interest in fields such as rescue, demining and security. These applications require a real-time middleware and wireless communication protocol that can efficiently and in a timely fashion support the fusion of distributed perception and the development of coordinated behaviors. This paper proposes an affordable middleware, based on low-cost and open-source COTS technologies, which relies on a real-time database partially replicated in all team members, containing both local and remote state variables in a distributed shared-memory style. This provides seamless access to the complete team state, with fast non-blocking local operations. The remote data is updated autonomously in the background by a WiFi-based wireless communication protocol, at an adequate refresh rate. The software infrastructure is complemented with a task manager that provides scheduling and synchronization services to the application processes on top of the Linux operating system. This infrastructure has been successfully used for four years in a RoboCup middle-size soccer team, and it has proved dependable in the presence of uncontrolled spurious traffic in the communication channel, using an adaptive technique to synchronize the robots in the team and to reconfigure the communications dynamically and automatically according to the number of currently active team members.
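The replicated-database idea can be sketched as follows: local writes are plain, non-blocking updates, while remote variables carry the timestamp of their last received update so that stale data can be detected. This is a hypothetical Python sketch of the general pattern, not the actual middleware:

```python
import time

class RTDB:
    """Sketch of a partially replicated real-time database: each robot
    writes its own variables locally; remote copies, refreshed in the
    background by the communication layer, are timestamped so that
    stale data can be rejected."""
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.local = {}    # this robot's own state variables
        self.remote = {}   # (robot_id, var) -> (value, timestamp)

    def put(self, var, value):
        self.local[var] = value  # fast, non-blocking local write

    def apply_update(self, robot_id, var, value, ts=None):
        # Called by the background communication thread on receipt.
        self.remote[(robot_id, var)] = (value, ts if ts is not None else time.time())

    def get(self, robot_id, var, max_age=1.0):
        """Return a variable from the team state; remote values older
        than max_age seconds are treated as missing."""
        if robot_id == self.robot_id:
            return self.local.get(var)
        value, ts = self.remote.get((robot_id, var), (None, 0.0))
        return value if time.time() - ts <= max_age else None
```

Reads never block on the network: the application always sees the latest replica, and the freshness check makes the loss of a teammate's updates visible to the coordination layer.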