Epicurus: a platform for the visualisation of forensic documents based on a linguistic approach
This paper presents a tool for visualizing a cognitive model of human discourse processing known as Text World Theory (TWT), used to facilitate forensic discourse analysis. XML files are designed according to a linguistic annotation scheme that encompasses the range of descriptive categories defined in TWT. Epicurus parses those XML files and visualizes them as HTML. The tool is designed for ease of language-data annotation and to facilitate evidential analysis by (i) visualizing the complex narratives (text-worlds) projected from any given forensic text and (ii) reconstructing and visualizing reported events in timeline fashion.
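As a hedged illustration of the pipeline the abstract describes (parsing annotation XML and emitting an HTML timeline), here is a minimal Python sketch. The element names (`narrative`, `world`, `event`) and the `time` attribute are invented for this example; the abstract does not give the actual Epicurus annotation schema.

```python
# Hypothetical sketch of an XML-to-HTML timeline step; the schema below
# is invented, not the real Epicurus/TWT annotation scheme.
import xml.etree.ElementTree as ET

SAMPLE = """<narrative>
  <world id="w1" type="text-world">
    <event time="2001-03-05">He returned the next morning.</event>
    <event time="2001-03-04">The witness left the house.</event>
  </world>
</narrative>"""

def to_html_timeline(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    rows = []
    for world in root.findall("world"):
        rows.append(f"<h3>World {world.get('id')}</h3><ul>")
        # Sort events chronologically to reconstruct the reported timeline.
        for ev in sorted(world.findall("event"), key=lambda e: e.get("time")):
            rows.append(f"<li><b>{ev.get('time')}</b>: {ev.text}</li>")
        rows.append("</ul>")
    return "\n".join(rows)

print(to_html_timeline(SAMPLE))
```

Sorting by the annotated time attribute is what turns the narrative order of the source text into the reconstructed event order.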
A method for visualizing poetic texts using L-systems (L-sisŭt'em ŭl iyonghan si t'eksŭt'ŭ sikakhwa pangbŏp)
The invention relates to a method for visualizing poetic text, comprising the steps of: (a) sequentially scanning the poem text input to a visualization device; (b) matching the structure of the poem to the structure of a plant through an L-system model, mapping the morphemes and vowels of the text to the chroma and brightness of the plant, and compiling the result; (c) rendering and painting according to the commands of the compiled text, generating a drawing of the plant; and (d) displaying the generated drawing in the display window of the visualization device. In this way, the present invention provides a new visualization method that adds aesthetic appeal to the visualization of poetic text beyond its information-conveying function, so that the visualization itself becomes a creative work with entertainment value that can even give intuitive inspiration. (English abstract; the original record pairs the Korean text with a KIPRIS machine translation.)
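The steps above can be sketched in code. Below is a minimal, hypothetical illustration of the L-system expansion at the heart of step (b); the rewriting rule and the turtle-command alphabet are standard L-system conventions, not taken from the patent.

```python
# Minimal L-system sketch: expand an axiom string by rewriting rules,
# producing a command string a turtle renderer could draw as a plant.
# The rule and symbols are illustrative; the patent would derive them
# from the poem's structure, morphemes, and vowels.
def expand(axiom: str, rules: dict, iterations: int) -> str:
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# 'F' = draw branch, '+'/'-' = turn, '[' / ']' = push/pop turtle state.
rules = {"F": "F[+F]F[-F]"}
commands = expand("F", rules, 2)
print(commands)
```

A renderer would then walk the command string symbol by symbol (step (c)), with colour and brightness chosen from the mapped text features, and paint the result to the display (step (d)).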
An overview of decision table literature 1982-1995.
This report gives an overview of the literature on decision tables over the past 15 years. As far as possible, each reference is accompanied by an author-supplied abstract, a number of keywords, and a classification. In some cases, our own comments are added; their purpose is to show where, how, and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily the country of publication), and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstracts, classifications, and comments.
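To make the subject concrete: a decision table maps each combination of condition outcomes to an action. A toy example (the conditions and actions here are invented for illustration, not drawn from the surveyed literature):

```python
# Toy decision table: every combination of condition outcomes is listed
# explicitly and mapped to exactly one action, making the logic complete
# and easy to audit -- the property the decision-table literature studies.
def classify_order(amount: float, regular_customer: bool) -> str:
    table = {
        # (amount > 1000?, regular customer?) -> action
        (True,  True):  "discount",
        (True,  False): "manager approval",
        (False, True):  "approve",
        (False, False): "approve",
    }
    return table[(amount > 1000, regular_customer)]

print(classify_order(1500.0, True))   # -> discount
print(classify_order(500.0, False))   # -> approve
```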
Transforming structured descriptions to visual representations. An automated visualization of historical bookbinding structures.
In cultural heritage, the documentation of artefacts can be both iconographic and textual, i.e. both pictures and drawings on the one hand, and text and words on the other are used for documentation purposes.
This research project aims to produce a methodology to transform automatically verbal descriptions of material objects, with a focus on bookbinding structures, into standardized and scholarly-sound visual representations.
In the last few decades, the recording and management of documentation data about material objects, including bookbindings, has switched from paper-based archives to databases, but sketches and diagrams are a form of documentation still carried out mostly by hand. Diagrams hold some unique information, but often also information that is redundant, having already been secured through verbal means within the databases. This project proposes a methodology to harness the verbal information stored within a database and automatically generate visual representations.
A number of projects within the cultural heritage sector have applied semantic modelling to generate graphic outputs from verbal inputs. None of these has considered bookbindings, and none relies on information already recorded within databases. Instead, they develop an extra layer of modelling and typically gather more data specifically for the purpose of generating a pictorial output. In these projects, qualitative data (verbal input) is often mixed with quantitative data (measurements, scans, or other direct acquisition methods) to solve the problems of indeterminateness found in verbal descriptions. Moreover, none of these projects has attempted to develop a general methodology to ascertain the minimum amount of information required for successful verbal-to-visual transformations for material objects in other fields. This research has addressed these issues.
The novel contributions of this research include: (i) a series of methodological recommendations for successful automated verbal-to-visual intersemiotic translations for material objects, and bookbinding structures in particular, which are possible when whole/part relationships, spatial configurations, the object's logical form, and its prototypical shapes are communicated; (ii) the production of intersemiotic transformations for the domain of bookbinding structures; (iii) design recommendations for the generation of standardized automated prototypical drawings of bookbinding structures; (iv) the application, never considered before, of uncertainty visualization to the field of the archaeology of the book. This research also proposes the use of automatically generated diagrams as data-verification tools to help identify meaningless or wrong data, thus increasing data accuracy within databases.
An Introduction to Programming for Bioscientists: A Python-based Primer
Computing has revolutionized the biological sciences over the past several
decades, such that virtually all contemporary research in the biosciences
utilizes computer programs. The computational advances have come on many
fronts, spurred by fundamental developments in hardware, software, and
algorithms. These advances have influenced, and even engendered, a phenomenal
array of bioscience fields, including molecular evolution and bioinformatics;
genome-, proteome-, transcriptome- and metabolome-wide experimental studies;
structural genomics; and atomistic simulations of cellular-scale molecular
assemblies as large as ribosomes and intact viruses. In short, much of
post-genomic biology is increasingly becoming a form of computational biology.
The ability to design and write computer programs is among the most
indispensable skills that a modern researcher can cultivate. Python has become
a popular programming language in the biosciences, largely because (i) its
straightforward semantics and clean syntax make it a readily accessible first
language; (ii) it is expressive and well-suited to object-oriented programming,
as well as other modern paradigms; and (iii) the many available libraries and
third-party toolkits extend the functionality of the core language into
virtually every biological domain (sequence and structure analyses,
phylogenomics, workflow management systems, etc.). This primer offers a basic
introduction to coding, via Python, and it includes concrete examples and
exercises to illustrate the language's usage and capabilities; the main text
culminates with a final project in structural bioinformatics. A suite of
Supplemental Chapters is also provided. Starting with basic concepts, such as
that of a 'variable', the Chapters methodically advance the reader to the point
of writing a graphical user interface to compute the Hamming distance between
two DNA sequences.
Comment: 65 pages total, including 45 pages of text, 3 figures, 4 tables,
numerous exercises, and 19 pages of Supporting Information; currently in
press at PLOS Computational Biology.
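The final project mentioned above computes the Hamming distance between two DNA sequences; stripped of the GUI, the core computation is a few lines. This sketch is illustrative, not the primer's own code:

```python
# Hamming distance: the number of positions at which two equal-length
# sequences differ. Illustrative sketch of the primer's final project
# (the GUI layer is omitted).
def hamming(seq1: str, seq2: str) -> int:
    if len(seq1) != len(seq2):
        raise ValueError("sequences must have equal length")
    return sum(a != b for a, b in zip(seq1, seq2))

print(hamming("GATTACA", "GACTATA"))  # -> 2
```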
Example Based Caricature Synthesis
The likeness of a caricature to the original face image is an essential and often overlooked part of caricature
production. In this paper we present an example based caricature synthesis technique, consisting of shape
exaggeration, relationship exaggeration, and optimization for likeness. Rather than relying on a large training set
of caricature face pairs, our shape exaggeration step is based on only one or a small number of examples of facial
features. The relationship exaggeration step introduces two definitions which facilitate global facial feature
synthesis. The first is the T-Shape rule, which describes the relative relationship between the facial elements in an
intuitive manner. The second is the so called proportions, which characterizes the facial features in a proportion
form. Finally we introduce a similarity metric as the likeness metric based on the Modified Hausdorff Distance
(MHD) which allows us to optimize the configuration of facial elements, maximizing likeness while satisfying a
number of constraints. The effectiveness of our algorithm is demonstrated with experimental results
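For reference, the Modified Hausdorff Distance (Dubuisson and Jain, 1994) averages the nearest-neighbour distances in each direction between two point sets and takes the maximum. A minimal sketch follows; the two point sets here are invented placeholders for the facial-feature landmark sets the paper compares:

```python
# Modified Hausdorff Distance between two 2-D point sets: mean
# nearest-neighbour distance in each direction, then the maximum.
import math

def mhd(A, B):
    def directed(X, Y):
        return sum(min(math.dist(x, y) for y in Y) for x in X) / len(X)
    return max(directed(A, B), directed(B, A))

A = [(0, 0), (1, 0)]
B = [(0, 1), (1, 1)]
print(mhd(A, B))  # -> 1.0
```

Averaging (rather than taking the worst-case point, as the classical Hausdorff distance does) makes the metric far less sensitive to a single outlying landmark, which is why it is a common choice for shape matching.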