Air Traffic Management Abbreviation Compendium
As in all fields of work, an unmanageable number of abbreviations are used today in aviation for terms, definitions, commands, standards and technical descriptions. This applies in general to the areas of aeronautical communication, navigation and surveillance, cockpit and air traffic control working positions, passenger and cargo transport, and all other areas of flight planning, organization and guidance. In addition, many abbreviations are used more than once or have different meanings in different languages.
In order to provide an overview of the most common abbreviations used in air traffic management, organizations such as EUROCONTROL, FAA, DWD and DLR have published lists of abbreviations in the past, which are also included in this document. In addition, abbreviations from some larger international aviation-related projects have been included to give users a directory that is as complete as possible. The second edition of the Air Traffic Management Abbreviation Compendium therefore now includes around 16,500 abbreviations and acronyms from the field of aviation.
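The ambiguity problem described above, where one abbreviation carries several meanings depending on context, can be illustrated with a minimal lookup sketch. The entries below are invented examples for illustration only, not data from the compendium.

```python
from collections import defaultdict

# A compendium entry maps one abbreviation to every recorded expansion,
# each tagged with the organisation or context it comes from.
compendium = defaultdict(list)

def add_entry(abbr, expansion, context):
    """Register one (expansion, context) pair for an abbreviation."""
    compendium[abbr.upper()].append((expansion, context))

def lookup(abbr):
    """Return all recorded meanings; ambiguous entries return several."""
    return compendium.get(abbr.upper(), [])

# Illustrative entries only -- not actual compendium data.
add_entry("ATM", "Air Traffic Management", "aviation")
add_entry("ATM", "Asynchronous Transfer Mode", "telecommunications")
add_entry("CNS", "Communication, Navigation and Surveillance", "aviation")

print(lookup("ATM"))  # both recorded meanings are returned
```

A multimap of this kind is the natural shape for such a directory: disambiguation is left to the reader, who filters by context.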
Multimedia databases in managing the intangible cultural heritage
The motivation for writing this doctoral dissertation is a multimedia collection
that is the result of many years of field research conducted by researchers from the Institute for Balkan Studies of the Serbian Academy of Sciences and Arts. The
collection consists of materials in the form of recorded interviews, various recorded
customs, associated textual descriptions (protocols) and numerous other documents.
The subject of research of this dissertation is the study of possibilities and the
development of new methods that could be used as a starting point in solving the
problem of managing the intangible cultural heritage of the Balkans. The subtasks
that emerge in this endeavor are the development of adequate design and implementation
of a multimedia database of intangible cultural heritage that would meet the
needs of different types of users, automatic semantic annotation of protocols using
natural language processing methods, as a basis for semi-automatic annotation of
the multimedia collection, and successful search by metadata which comply with
the CIDOC CRM standard, study of additional search possibilities of this collection
in order to gain new knowledge, as well as development of selected methods.
The main problem with the available methods is that the infrastructure for natural language processing, organization and management in the field of cultural heritage in the Balkans, and especially for the Serbian language, is still insufficiently developed, so it cannot be used effectively to solve the stated problem. There is thus a strong need to develop methods that lead to an appropriate solution.
For the semi-automatic annotation of multimedia materials, automatic semantic
annotation of the protocols associated with the materials was used. It was carried
out by methods of information extraction, recognition of named entities and topic
extraction, using rule-based techniques with the help of additional resources such
as electronic dictionaries, thesauri and vocabularies from a specific domain.
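The rule-based, dictionary-assisted annotation described above can be sketched as a minimal gazetteer annotator. This is not the dissertation's actual system; the lexicon entries and labels below are invented for illustration.

```python
import re

# A hand-built lexicon stands in for the electronic dictionaries and
# domain vocabularies mentioned above. All entries are invented examples.
LEXICON = {
    "PERSON": ["Marija", "Jovan"],
    "PLACE": ["Balkans", "Belgrade"],
    "CUSTOM": ["slava", "wedding"],
}

def annotate(text):
    """Return (start, end, label, surface) spans found by dictionary lookup."""
    spans = []
    for label, terms in LEXICON.items():
        for term in terms:
            pattern = r"\b" + re.escape(term) + r"\b"
            for m in re.finditer(pattern, text, re.IGNORECASE):
                spans.append((m.start(), m.end(), label, m.group(0)))
    return sorted(spans)

protocol = "Marija described a slava celebration recorded near Belgrade."
for span in annotate(protocol):
    print(span)
```

Real systems of this kind layer morphological dictionaries and context rules on top of plain lookup, but the core idea, matching curated term lists against the protocol text, is the same.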
To classify textual protocols in relation to the topic, research was conducted on
methods that can be used to solve the problem of classifying texts in the Serbian
language, and a method was offered that is adapted to the specific domain being processed (intangible cultural heritage), to the specific problems being solved (classification of protocols in relation to the topic) and to the Serbian language, as one of the morphologically rich languages.
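One common way to handle a morphologically rich language in text classification, sketched here as an illustration rather than as the dissertation's method, is to work with character n-grams: inflected forms of one stem share most of their n-grams, so no lemmatizer is needed. The toy training texts below are invented.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Character n-gram counts; inflections of one stem share most grams."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    dot = sum(a[g] * b[g] for g in a if g in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(text, labelled):
    """Nearest-centroid over char 3-grams; labelled maps label -> example texts."""
    query = char_ngrams(text)
    best_label, best_score = None, -1.0
    for label, examples in labelled.items():
        centroid = Counter()
        for ex in examples:
            centroid.update(char_ngrams(ex))
        score = cosine(query, centroid)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy Serbian-like training texts (invented), with inflected variants.
train = {
    "wedding": ["svadba u selu", "svadbe i svadbeni obicaji"],
    "harvest": ["zetva psenice", "obicaji oko zetve"],
}
print(classify("opis svadbenog obreda", train))  # "wedding"
```

The query shares the trigrams of the stem "svadb-" with the wedding examples even though the surface forms differ, which is exactly the property that makes character n-grams attractive for morphologically rich languages.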
Threshold concepts and teaching programming
This thesis argues that the urge to build and the adoption of a technocratic disposition have influenced the pursuit and development of a deeper understanding of the discipline of computing and its pedagogy. It proposes introducing the threshold concept construct to the discipline to improve both the understanding and the pedagogy. The research examines the threshold concept construct using the theory of concepts. The examination establishes the conceptual coherence of the features attributed to threshold concepts and formalises the basis for threshold concept scholarship. It also provides a refutation of critiques of threshold concepts. The examination reveals the inextricable links between threshold concepts and pedagogic content knowledge. Both rely on the expertise of reflective pedagogues and are situated at the site of student learning difficulties and their encounters with troublesome knowledge. Both have deep understanding of discipline content knowledge at their centre. The two ideas are mutually supportive. A framework for identifying threshold concepts has been developed. The framework uses an elicitation instrument grounded in pedagogic content knowledge and an autoethnographic approach. The framework is used to identify state as a threshold concept in computing. The significant results of the research are two-fold. First, the identification of state as a threshold concept provides an insight into the disparate difficulties that have been persistently reported in the computer science education literature as stumbling blocks for novice programmers, and it advances the move towards discipline understanding and teaching for understanding. Second, the embryonic research area of threshold concept scholarship has been provided with a theoretical framework that can act as an organising principle to explicate existing research and provide a coherent focus for further research.
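Why state is troublesome for novices can be illustrated with a classic aliasing example; this sketch is purely illustrative and is not drawn from the thesis.

```python
# Two names, one object: a classic novice stumbling block with state.
a = [1, 2, 3]
b = a          # b is another name for the same list, not a copy
b.append(4)    # mutating through b also changes what a refers to
print(a)       # [1, 2, 3, 4] -- surprising if assignment is read as copying

c = a[:]       # an explicit copy creates independent state
c.append(5)
print(a)       # [1, 2, 3, 4] -- unchanged
print(c)       # [1, 2, 3, 4, 5]
```

Until a learner has crossed the threshold of understanding that variables name mutable state rather than hold values, the first `print` looks like a bug in the language rather than a consequence of the model.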
A Two-Level Information Modelling Translation Methodology and Framework to Achieve Semantic Interoperability in Constrained Geo-Observational Sensor Systems
As geographical observational data capture, storage and sharing technologies such as in situ remote monitoring systems and spatial data infrastructures evolve, the vision of a Digital Earth, first articulated by Al Gore in 1998, is getting ever closer. However, there are still many challenges and open research questions. For example, data quality, provenance and heterogeneity remain an issue due to the complexity of geo-spatial data and information representation.
Observational data are often inadequately semantically enriched by geo-observational information systems or spatial data infrastructures and so they often do not fully capture the true meaning of the associated datasets. Furthermore, data models underpinning these information systems are typically too rigid in their data representation to allow for the ever-changing and evolving nature of geo-spatial domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in an interoperable and computable way.
The health domain experiences similar challenges in representing complex and evolving domain information concepts. Within any complex domain (such as Earth system science or health), two categories or levels of domain concepts exist: those that remain stable over a long period of time, and those that are prone to change as the domain knowledge evolves and new discoveries are made. Health informaticians have developed a sophisticated two-level modelling systems design approach for electronic health documentation over many years and, with the use of archetypes, have shown how data, information and knowledge interoperability among heterogeneous systems can be achieved.
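The two-level idea described above can be sketched minimally: a small, stable reference model (level one) plus archetypes expressed as data-driven constraints (level two) that can be revised as the domain evolves without touching the reference model. All class names, fields and thresholds below are invented for illustration.

```python
# Level 1: a small, stable reference model that rarely changes.
class Observation:
    def __init__(self, feature, property_name, value, unit):
        self.feature = feature          # what was observed (e.g. a buoy)
        self.property_name = property_name
        self.value = value
        self.unit = unit

# Level 2: archetypes are data, not code. They constrain the reference
# model for one evolving domain concept and can be versioned and swapped
# without changing Level 1. These values are invented examples.
SEA_TEMP_ARCHETYPE = {
    "property_name": "sea_surface_temperature",
    "unit": "degC",
    "range": (-2.0, 40.0),
}

def validate(obs, archetype):
    """Check a reference-model instance against an archetype's constraints."""
    lo, hi = archetype["range"]
    return (obs.property_name == archetype["property_name"]
            and obs.unit == archetype["unit"]
            and lo <= obs.value <= hi)

obs = Observation("buoy-42", "sea_surface_temperature", 18.5, "degC")
print(validate(obs, SEA_TEMP_ARCHETYPE))  # True
```

The separation is the point: heterogeneous systems that share the stable reference model can exchange data, while domain experts maintain the archetypes independently.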
This research investigates whether two-level modelling can be translated from the health domain to the geo-spatial domain and applied to observing scenarios to achieve semantic interoperability within and between spatial data infrastructures, beyond what is possible with current state-of-the-art approaches.
A detailed review of state-of-the-art SDIs, geo-spatial standards and the two-level modelling methodology was performed. A cross-domain translation methodology was developed, and a proof-of-concept geo-spatial two-level modelling framework was defined and implemented. The Open Geospatial Consortium's (OGC) Observations & Measurements (O&M) standard was re-profiled to aid investigation of the two-level information modelling approach. An evaluation of the method was undertaken using two specific use-case scenarios. Information modelling was performed using the two-level modelling method to show how existing historical ocean observing datasets can be expressed semantically and harmonized. The flexibility of the approach was also investigated by applying the method to an air quality monitoring scenario using a technologically constrained monitoring sensor system.
This work has demonstrated that two-level modelling can be translated to the geospatial domain and then further developed for use within a constrained technological sensor system, using traditional wireless sensor networks, semantic web technologies and Internet of Things based technologies. Domain-specific evaluation results show that two-level modelling presents a viable approach to achieving semantic interoperability between constrained geo-observational sensor systems and spatial data infrastructures for ocean observing and city-based air quality observing scenarios. This has been demonstrated through the re-purposing of selected existing geospatial data models and standards. However, it was found that re-using existing standards requires careful ontological analysis per domain concept, so caution is recommended in assuming the wider applicability of the approach.
While the benefits of adopting a two-level information modelling approach to geospatial information modelling are potentially great, it was found that translation to a new domain is complex. The complexity of the approach was found to be a barrier to adoption, especially in commercial projects where standards implementation is low on implementation road maps and the perceived benefits of standards adherence are low. Arising from this work, a novel set of base software components, methods and fundamental geo-archetypes has been developed. However, during this work it was not possible to form the required rich community of supporters to fully validate the geo-archetypes. Therefore, the findings of this work are not exhaustive, and the archetype models produced are only indicative. The findings can be used as a basis to encourage further investigation and uptake of two-level modelling within the Earth system science and geo-spatial domains. Ultimately, the outcomes of this work recommend further development and evaluation of the approach, building on the positive results thus far and on the base software artefacts developed to support it.
BIG DATA and High-Level Analysis: Conference Proceedings
The proceedings present the results of research and development in BIG DATA and Advanced Analytics for optimising IT and business solutions, as well as case studies in medicine, education and ecology.
Diseño e Implementación de un Sistema de Supervisión, Control y Adquisición de Datos (SCADA) para un prototipo a escala de parque eólico
The rapid technological progress of recent decades has generated unprecedented growth in new forms of energy generation, and more specifically in energy from renewable sources.
The pressing need to accelerate the technological development of renewable energies until they are fully available on the market has led to a great deal of effort and resources being devoted to research aimed at optimising these new approaches.
In this scenario, correct monitoring of power generation plants is fundamental for analysing possible errors and avoiding unnecessary use of resources. This final degree project (Trabajo de Fin de Grado) aims to design and implement a supervisory control and data acquisition (SCADA) system for a scale prototype of a wind farm.
For this purpose, complete code has been generated using the LabVIEW programming environment, including a login system to guarantee secure access to the information. From this system, the user can perform various tasks: real-time control of a wind farm, simulation, querying parameters in the database, operating the protective equipment, and consulting the error log, among others.
Yustas Talamantes, J. (2020). Diseño e Implementación de un Sistema de Supervisión, Control y Adquisición de Datos (SCADA) para un prototipo a escala de parque eólico. http://hdl.handle.net/10251/157398 [TFG]
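The SCADA workflow described above (authenticate, poll each turbine, store samples, log alarms) can be sketched in Python as a rough illustration; the original project is implemented in LabVIEW, and every name, credential and threshold below is invented.

```python
# Invented credentials store and sensor stub; the real project uses LabVIEW.
USERS = {"operator": "s3cret"}

def login(user, password):
    """Gate access to the system, as the project's login screen does."""
    return USERS.get(user) == password

def read_turbine(turbine_id, cycle):
    """Stub sensor read: returns (wind m/s, power kW) for one turbine."""
    return 8.0 + turbine_id, 150.0 * turbine_id

def acquire(turbine_ids, cycles, alarm_wind=25.0):
    """One SCADA-style loop: poll each turbine, store samples, log alarms."""
    samples, alarms = [], []
    for cycle in range(cycles):
        for tid in turbine_ids:
            wind, power = read_turbine(tid, cycle)
            samples.append((cycle, tid, wind, power))
            if wind > alarm_wind:
                alarms.append((cycle, tid, "high wind: cut-out expected"))
    return samples, alarms

assert login("operator", "s3cret")
samples, alarms = acquire([1, 2], cycles=3)
print(len(samples), len(alarms))  # 6 0
```

In a real deployment the stub read would be replaced by hardware I/O and the sample list by a database write, but the control flow (authenticate, poll, store, alarm) is the same.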
Re-Crafting Games: The inner life of Minecraft modding.
Prior scholarship on game modding has tended to focus on the relationship between commercial developers and modders, while the preponderance of existing work on the open-world sandbox game Minecraft has tended to focus on children’s play or the program’s utility as an educational platform. Based on participant observation, interviews with modders, discourse analysis, and the techniques of software studies, this research uncovers the inner life of Minecraft modding practices and how they have become central to the way the game is articulated as a cultural artifact. While the creative activities of audiences have previously been described in terms of de Certeau’s concept of “tactics,” this paper argues that modders are also engaged in the development of new strategies. Modders thus become “settlers,” forging a new identity for the game property as they expand the possibilities for play. Emerging modder strategies link to the ways that the underlying game software structures computation, and are closely tied to notions of modularity, interoperability, and programming “best practices.” Modders also mobilize tactics and strategies in the discursive contestation and co-regulation of gameplay meanings and programming practices, which become more central to an understanding of game modding than the developer-modder relationship. This discourse, which structures the circulation of gaming capital within the community, embodies both monologic and dialogic modes, with websites, forum posts, chatroom conversations, and even software artifacts themselves taking on persuasive inflections.
An automated method mapping parametric features between computer aided design software
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Enterprise efficiency is limited by data exchange. A product designer might specify the geometry of a product with a Computer Aided Design (CAD) program; an engineer might re-use that geometry data to calculate physical properties of the product using a Finite Element Analysis program. These different domains place different requirements on the product representation. The representation of product data required for a task depends on the vendor software associated with that task, and sharing data between different vendor programs is limited by the incompatibility of the vendor formats used. In the case of Computer Aided Design, where the virtual form of an object is modelled, no standard data format captures complete model data. Common data standards transfer model surface geometry without capturing the topological elements from which these geometries are constructed. Prescriptive data representations exist that allow these features to be specified in a neutral format, but there is little incentive for vendors to adopt these schemes. Recent efforts instead focus on identifying similar feature elements between different vendor CAD programs; however, this approach relies on onerous manual identification requiring frequent revision.
This research develops methods to automate the task of mapping relationships between different data-format representations. Two independent matching techniques identify similar CAD feature functions between heterogeneous programs: text-similarity matching and object-geometry matching are combined to match the data formats associated with CAD programs. An efficient search for matching function parameters is performed using a genetic algorithm that incorporates semantic data matching and geometry data matching. A greedy semantic matching algorithm is developed and compared against the Doc2vec short-text matching technique over the API dataset tested, and an SVD geometric surface registration technique is developed that requires fewer calculations than an equivalent Iterative Closest Point method.
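The abstract does not spell out the SVD registration technique, but the standard SVD-based rigid alignment for corresponding point sets is the Kabsch algorithm, sketched below as a plausible illustration of why one SVD needs fewer calculations than the repeated sweeps of an Iterative Closest Point scheme. The example data are invented.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t with R @ p + t ~= q.

    P, Q are (N, 3) arrays of corresponding points. A single SVD of the
    cross-covariance gives the optimal rigid alignment in closed form,
    whereas ICP must iterate nearest-neighbour matching and re-alignment
    when correspondences are unknown.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 30-degree rotation about z plus a translation.
theta = np.pi / 6
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
Q = P @ R0.T + np.array([2.0, -1.0, 0.5])
R, t = kabsch(P, Q)
print(np.allclose(R, R0), np.allclose(t, [2.0, -1.0, 0.5]))  # True True
```

With known correspondences the alignment is exact in one shot; ICP's cost comes from recomputing correspondences each iteration, which is where the claimed saving would arise.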