252 research outputs found
Graphics mini manual
The computer graphics capabilities available at the Center are introduced and their use is explained. More specifically, the manual identifies and describes the various graphics software and hardware components, details the interfaces between these components, and provides information concerning the use of these components at LaRC.
Evaluating human-centered approaches for geovisualization
Working with two small groups of domain experts, I evaluate human-centered approaches to application development that are applicable to geovisualization, following an ISO 13407 taxonomy that covers context of use, requirements elicitation, and design. These approaches include field studies and contextual analysis of subjects' context; establishing requirements using a template, via a lecture communicating geovisualization to subjects, and by communicating subjects' context to geovisualization experts with a scenario; autoethnography to understand the geovisualization design process; wireframe, paper and digital interactive prototyping with alternative protocols; and a decision-making process for prioritising application improvements. I find that the acquisition and use of real user data is key, and that a template approach and teaching subjects about visualization tools and interactions both fail to elicit useful requirements for a visualization application. Consulting geovisualization experts with a scenario of user context and samples of user data does yield suggestions for tools and interactions of use to a visualization designer. The complex and composite natures of both the visualization and human-centered domains, which require incorporating learning from both alongside user context, make design challenging. Wireframe, paper and digital interactive prototypes mediate successfully between the user and visualization domains, eliciting exploratory behaviour and suggestions to improve the prototypes. Paper prototypes are particularly successful at eliciting suggestions, and especially novel visualization improvements. Decision-making techniques prove useful for prioritising the possible improvements, although domain subjects select data-related features over more novel alternatives and rank these more inconsistently.
The research concludes that understanding subjects' context of use and data is important and occurs throughout the process of engagement with domain experts, and that standard requirements-elicitation techniques are unsuccessful for geovisualization. Engaging subjects at an early stage with simple prototypes incorporating real subject data, and moving to successively more complex prototypes, holds the best promise for creating successful geovisualization applications.
Forensic analysis of office open XML spreadsheets
Thesis submitted in partial fulfillment of the requirements for the Degree of Master of Science in Information Systems Security (MSc.ISS) at Strathmore University.
Digital forensics is the science of acquiring, preserving, analysing and presenting digital evidence from computers, digital devices and networks in a manner that is admissible in a court of law to support an investigation. Documents, spreadsheets and presentations from Microsoft Office, LibreOffice, OpenOffice, NeoOffice and Google are widely used to store and circulate data and information, especially within organisations. They are often rich in deeply embedded information that can be retrieved by examining metadata or deleted material still present in the files. OOXML is a standard developed by Microsoft, registered by ECMA (as ECMA-376), and approved by the ISO and IEC (as ISO/IEC 29500:2008) as an open standard for office documents, spreadsheets and presentations. Documents, spreadsheets and presentations created using this standard consist of zipped file containers, parts and relationships which, upon extraction and analysis, reveal forensically interesting information. Existing forensic tools have limitations in extracting and analysing OOXML spreadsheet metadata: most can extract only limited and basic metadata. The objective of this research is to carry out forensic analysis of metadata in OOXML spreadsheets by studying the limitations of existing forensic tools and by designing and developing a Proof of Concept (PoC) forensic tool that supports automated forensic analysis of OOXML spreadsheets with improved visualization, efficiency and advanced reporting functionality.
This research adopts a methodology that reviews the OOXML spreadsheet metadata extraction and analysis capabilities of existing forensic tools using sample spreadsheet datasets, then carries out system analysis, design and PoC implementation of a forensic tool. In addition, the research carries out manual, functional and security tests; quality assurance; and validation of the developed PoC implementation. The developed tool is able to extract and analyse relevant metadata from OOXML spreadsheets and present the results in a forensic report.
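Because OOXML packages are ordinary ZIP containers, the core-properties part (`docProps/core.xml`) can be read with nothing beyond a standard library. The following Python sketch is illustrative only, not the thesis's PoC tool; the field names follow the ECMA-376 core-properties schema, and the demo package is synthetic rather than a real spreadsheet.

```python
# Minimal sketch: pull core metadata out of an OOXML package's docProps/core.xml.
import zipfile
import xml.etree.ElementTree as ET

# Namespaces used by the ECMA-376 core-properties part (Dublin Core based).
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def extract_core_metadata(path):
    """Return the core-properties fields present in an OOXML package."""
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    fields = {}
    for tag in ("dc:creator", "dc:title", "cp:lastModifiedBy",
                "dcterms:created", "dcterms:modified"):
        el = root.find(tag, NS)
        if el is not None and el.text:
            fields[tag] = el.text
    return fields

# Demo on a synthetic package (a real .xlsx would be used in practice):
core = (b'<?xml version="1.0"?>'
        b'<cp:coreProperties '
        b'xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties" '
        b'xmlns:dc="http://purl.org/dc/elements/1.1/" '
        b'xmlns:dcterms="http://purl.org/dc/terms/">'
        b'<dc:creator>alice</dc:creator>'
        b'<cp:lastModifiedBy>bob</cp:lastModifiedBy>'
        b'</cp:coreProperties>')
with zipfile.ZipFile("sample.xlsx", "w") as z:
    z.writestr("docProps/core.xml", core)
print(extract_core_metadata("sample.xlsx"))
# -> {'dc:creator': 'alice', 'cp:lastModifiedBy': 'bob'}
```

Fields absent from the package (here, `dc:title` and the timestamps) are simply skipped, which is the behaviour a forensic report would want to record explicitly.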
From 3D Models to 3D Prints: an Overview of the Processing Pipeline
Due to the wide diffusion of 3D printing technologies, geometric algorithms
for Additive Manufacturing are being invented at an impressive speed. Each
single step, in particular along the Process Planning pipeline, can now count
on dozens of methods that prepare the 3D model for fabrication, while analysing
and optimizing geometry and machine instructions for various objectives. This
report provides a classification of this huge state of the art, and elicits the
relation between each single algorithm and a list of desirable objectives
during Process Planning. The objectives themselves are listed and discussed,
along with possible needs for tradeoffs. Additive Manufacturing technologies
are broadly categorized to explicitly relate classes of devices and supported
features. Finally, this report offers an analysis of the state of the art while
discussing open and challenging problems from both an academic and an
industrial perspective.
Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044
Methods for high-precision subsurface imaging using spatially dense seismic data
Current state-of-the-art depth migration techniques are regularly applied in marine seismic exploration, where they deliver accurate and reliable pictures of Earth's interior. The question is how these algorithms perform in different environments, unrelated to oil and gas exploration. For example, how can those techniques be utilised in the elusive environment of hard rocks? The main challenge there is to image highly complex, subvertical, piece-wise geology, often of low reflectivity, in a noisy environment.
Collaborative analysis of large time series data sets
The recent expansion of metrification on a daily basis has led to the production
of massive quantities of data, and in many cases, these collected metrics
are only useful for knowledge building when seen as a full sequence of
data ordered by time, which constitutes a time series. To find and interpret
meaningful behavioral patterns in time series, a multitude of analysis software
tools have been developed. Many of the existing solutions use annotations
to enable the curation of a knowledge base that is shared between a group
of researchers over a network. However, these tools also lack appropriate
mechanisms to handle a high number of concurrent requests and to properly
store massive data sets and ontologies, as well as suitable representations
for annotated data that are visually interpretable by humans and explorable by
automated systems. The goal of the work presented in this dissertation is to
iterate on existing time series analysis software and build a platform for the
collaborative analysis of massive time series data sets, leveraging state-of-the-art technologies for querying, storing and displaying time series and annotations.
A theoretical and domain-agnostic model was proposed to enable
the implementation of a distributed, extensible, secure and high-performance
architecture that handles multiple annotation proposals simultaneously and
avoids any data loss from overlapping contributions or unsanctioned changes.
Analysts can share annotation projects with peers, restricting a set of collaborators
to a smaller scope of analysis and to a limited catalog of annotation
semantics. Annotations can express meaning not only over a segment of time,
but also over a subset of the series that coexist in the same segment. A novel
visual encoding for annotations is proposed, where annotations are rendered
as arcs traced only over the affected series’ curves in order to reduce visual
clutter. Moreover, the implementation of a full-stack prototype with a reactive
web interface was described, directly following the proposed architectural and
visualization model while applied to the HVAC domain. The performance of
the prototype under different architectural approaches was benchmarked, and
the interface was tested for usability. Overall, the work described in this dissertation
contributes a more versatile, intuitive and scalable time series
annotation platform that streamlines the knowledge-discovery workflow.
Mestrado em Engenharia Informática
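The abstract's central data-model idea is that an annotation covers not just a time segment but also a chosen subset of the series coexisting in that segment. A minimal Python sketch of that model (all names here are ours, not the dissertation's API) might look like this:

```python
# Illustrative model: an annotation spans a time segment AND a subset of series.
from dataclasses import dataclass

@dataclass(frozen=True)
class Annotation:
    start: int          # segment start (e.g. epoch seconds)
    end: int            # segment end, exclusive
    series: frozenset   # ids of the series the annotation applies to
    label: str

class AnnotationStore:
    def __init__(self):
        self._items = []

    def add(self, ann):
        self._items.append(ann)

    def query(self, series_id, t0, t1):
        """Annotations touching series_id anywhere within [t0, t1)."""
        return [a for a in self._items
                if series_id in a.series and a.start < t1 and a.end > t0]

store = AnnotationStore()
store.add(Annotation(0, 10, frozenset({"hvac-1"}), "defrost cycle"))
store.add(Annotation(5, 20, frozenset({"hvac-1", "hvac-2"}), "setpoint change"))
print([a.label for a in store.query("hvac-2", 0, 12)])
# -> ['setpoint change']  (the first annotation never touches hvac-2)
```

A production platform would back this with a proper store and concurrency control for overlapping proposals, as the dissertation describes; the point of the sketch is only the two-dimensional (time × series-subset) scope of each annotation.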
Creative Cooperation in Distributed Working Situations: Towards a Design-Process-Based Cooperation System
Due to the present developments of the Internet and its technical components, the skills required of web experts have become increasingly complex and specific. Internet experts in the creative field are distributed around the whole world. As a result, many companies struggle to find the needed experts on site and depend on creative cooperation and virtual teams supported by technical tools. The virtual workplace is an important issue, particularly in modern times, and the market offers more and more cooperation systems for exactly this purpose: creative cooperation in distributed working situations. This thesis examines approaches to creative cooperation and cooperation technologies through an analysis of existing cooperation systems with a creative context. It spans a wide range of tools. On the one hand, there are approaches that offer only straightforward solutions for single design tasks. On the other hand, there are providers that have recognised the great need for creative cooperation systems and are working at full speed to extend their systems. The examined areas of this work lead to a design-process-oriented approach with flexible frames and enough space for the creative development of every single user. Cooperation in a creative context stays in the foreground and forms the basis for future approaches in the web design sector.
Formal Insertion Reactions Of Stannylenes And Germylenes Into Phosphorus-Halogen Bonds: A Structural And Mechanistic Investigation
Insertion reactions of Group 14 carbenoids, divalent species of the form (R2N)2M (M = Ge or Sn), into the P-halogen bond of halophosphines have been known for some time. However, very few examples have been reported, and no evidence has been presented regarding the mechanism by which these reactions take place. By comparison, insertion of the same or analogous carbenoid species into C-halogen bonds has been thoroughly explored for scope and application, and the mechanism has been investigated multiple times.
In this dissertation, numerous new examples of insertion products of Group 14 carbenoids into P-halogen bonds are presented. This array of products has been characterized by 31P{1H} and 1H NMR spectroscopy and single-crystal X-ray diffraction analysis. In addition, purity of the obtained compounds has been confirmed by elemental analyses.
Alongside this diverse group of products, kinetic experiments were employed to examine the possible mechanistic pathways. All reasonable pathways for these reactions are discussed, analysed and compared. Additionally, as most tin-containing insertion products are unstable, the likely mechanisms for their decomposition are discussed in detail.
Towards Data-Driven Large Scale Scientific Visualization and Exploration
Technological advances have enabled us to acquire extremely large
datasets but it remains a challenge to store, process, and extract
information from them. This dissertation builds upon recent advances
in machine learning, visualization, and user interactions to
facilitate exploration of large-scale scientific datasets. First, we
use data-driven approaches to computationally identify regions of
interest in the datasets. Second, we use visual presentation for
effective user comprehension. Third, we provide interactions for
human users to integrate domain knowledge and semantic information
into this exploration process.
Our research shows how to extract, visualize, and explore informative
regions on very large 2D landscape images, 3D volumetric datasets,
high-dimensional volumetric mouse brain datasets with thousands of
spatially-mapped gene expression profiles, and geospatial trajectories
that evolve over time. The contributions of this dissertation include:
(1) We introduce a sliding-window saliency model that discovers
regions of user interest in very large images; (2) We develop visual
segmentation of intensity-gradient histograms to identify meaningful
components from volumetric datasets; (3) We extract boundary surfaces
from a wealth of volumetric gene expression mouse brain profiles to
personalize the reference brain atlas; (4) We show how to efficiently
cluster geospatial trajectories by mapping each sequence of locations
to a high-dimensional point with the kernel distance framework.
We aim to discover patterns, relationships, and anomalies that would
lead to new scientific, engineering, and medical advances. This work
represents one of the first steps toward better visual understanding
of large-scale scientific data by combining machine learning and human
intelligence.
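Contribution (1), the sliding-window saliency model, can be illustrated with a deliberately crude stand-in: slide a fixed window over a 2D intensity grid and score each window by how far its mean deviates from the global mean. This is an assumption-laden toy, not the dissertation's actual model, which would use a proper saliency measure on very large images.

```python
# Toy sliding-window "saliency": score = |window mean - global mean|.
def window_saliency(grid, w):
    rows, cols = len(grid), len(grid[0])
    flat = [v for row in grid for v in row]
    g_mean = sum(flat) / len(flat)
    scores = {}
    for r in range(rows - w + 1):
        for c in range(cols - w + 1):
            vals = [grid[r + i][c + j] for i in range(w) for j in range(w)]
            scores[(r, c)] = abs(sum(vals) / len(vals) - g_mean)
    return scores

def most_salient(grid, w):
    """Top-left corner of the highest-scoring window."""
    scores = window_saliency(grid, w)
    return max(scores, key=scores.get)

# A flat background with one bright patch: the window over the patch wins.
grid = [[0] * 6 for _ in range(6)]
for r in range(2, 4):
    for c in range(3, 5):
        grid[r][c] = 9
print(most_salient(grid, 2))  # -> (2, 3)
```

Real saliency models weigh local contrast, edges and scale, and stream windows over images too large for memory; the sketch only shows the windowed region-of-interest idea in miniature.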