Intellectual Capital Architectures and Bilateral Learning: A Framework For Human Resource Management
Both researchers and managers are increasingly interested in how firms can pursue bilateral learning; that is, simultaneously exploring new knowledge domains while exploiting current ones (cf. March, 1991). To address this issue, this paper introduces a framework of intellectual capital architectures that combine unique configurations of human, social, and organizational capital. These architectures support bilateral learning by helping to create supplementary alignment between human and social capital, as well as complementary alignment between people-embodied knowledge (human and social capital) and organization-embodied knowledge (organizational capital). In order to establish the context for bilateral learning, the framework also identifies unique sets of HR practices that may influence the combinations of human, social, and organizational capital.
Proceedings of the Workshop on Linear Logic and Logic Programming
Declarative programming languages often fail to effectively address many aspects of control and resource management. Linear logic provides a framework for increasing the strength of declarative programming languages to embrace these aspects. Linear logic has been used to provide new analyses of Prolog's operational semantics, including left-to-right/depth-first search and negation-as-failure. It has also been used to design new logic programming languages for handling concurrency and for viewing program clauses as (possibly) limited resources. Such logic programming languages have proved useful in areas such as databases, object-oriented programming, theorem proving, and natural language parsing.
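The idea of clauses as limited resources can be illustrated with a toy sketch. This is not the semantics of any actual linear logic programming language (such as Lolli or Lygon), just a minimal Python illustration of the key difference from classical logic: a fact, once used in a proof, is consumed and cannot be reused.

```python
from collections import Counter

def prove(goals, resources):
    """Try to prove all atomic goals, treating each fact as a
    consumable (linear) resource: a fact used once is gone."""
    if not goals:
        return True
    goal, rest = goals[0], goals[1:]
    if resources[goal] > 0:
        remaining = resources.copy()
        remaining[goal] -= 1  # consume the fact exactly once
        return prove(rest, remaining)
    return False

# Two coins buy at most two items; in classical logic a premise
# could be reused indefinitely.
bank = Counter({"coin": 2})
print(prove(["coin", "coin"], bank))          # True
print(prove(["coin", "coin", "coin"], bank))  # False
```

Real linear logic programming languages generalise this resource discipline to full clauses and to the connectives of linear logic, but the consumption behaviour is the essential point.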
This workshop is intended to bring together researchers involved in all aspects of relating linear logic and logic programming. The proceedings include two high-level overviews of linear logic and six contributed papers.
Workshop organizers: Jean-Yves Girard (CNRS and University of Paris VII), Dale Miller (chair, University of Pennsylvania, Philadelphia), and Remo Pareschi (ECRC, Munich)
JupyterLab_Voyager: A Data Visualization Enhancement in JupyterLab
With the emergence of big data, scientific data analysis and visualization (DAV) tools are critical components of the data science software ecosystem; the usability of these tools is becoming extremely important for facilitating next-generation scientific discoveries. JupyterLab is considered one of the best polyglot, web-based, open-source data science tools. As the next-generation extensible interface for the classic IPython/Jupyter notebooks, it supports interactive data science and scientific computing across multiple programming languages with strong performance. Despite these advantages, previous heuristic evaluation studies have shown that JupyterLab has significant flaws on the data visualization side. The current DAV system in JupyterLab relies heavily on users’ understanding of and familiarity with particular visualization libraries, and does not support the golden visual-information-seeking mantra of “overview first, zoom and filter, then details-on-demand”. These limitations often lead to a workflow bottleneck at the start of a project.
In this thesis, we present ‘JupyterLab_Voyager’, an extension for JupyterLab that provides a graphical user interface (GUI) for data visualization operations and couples faceted browsing with visualization recommendation to support exploration of multivariate, tabular data, as a solution to improve the usability of the DAV system. The new plugin works with various types of datasets in the JupyterLab ecosystem; using it, analysts can perform a high-level graphical analysis of the fields within a dataset without writing code and without leaving the JupyterLab environment. It helps analysts learn about the dataset and engage both in open-ended exploration and in targeted search for specific answers. User testing and evaluation demonstrated that this implementation has good usability and significantly improves the DAV system in JupyterLab.
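The "overview first" half of the recommendation idea can be sketched very simply: given only the type of each field, a tool can propose a sensible default overview chart per field before the user writes any plotting code. This is a toy heuristic, not the actual CompassQL-style engine that Voyager-family tools use; the field names and type strings are illustrative assumptions.

```python
def recommend_charts(fields):
    """Suggest an 'overview first' chart for each field based only
    on its data type, in the spirit of visualization recommenders."""
    suggestions = {}
    for name, dtype in fields.items():
        if dtype in ("int", "float"):
            suggestions[name] = "histogram"    # distribution overview
        elif dtype == "datetime":
            suggestions[name] = "line chart"   # trend over time
        else:
            suggestions[name] = "bar chart"    # category counts
    return suggestions

# Hypothetical dataset schema:
print(recommend_charts({"price": "float", "sold_at": "datetime", "city": "str"}))
```

A real recommender additionally ranks multi-field encodings and prunes ineffective ones; the point here is only that type information alone already supports a coding-free first look at the data.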
GAMESPECT: A Composition Framework and Meta-Level Domain Specific Aspect Language for Unreal Engine 4
Game engine programming involves a great number of software components, many of which perform similar tasks; for example, memory allocation must take place in the renderer as well as in the creation routines, while other tasks such as error logging must take place everywhere. One area critical to the success of any game is game balance and tuning. These balancing initiatives cut across all areas of code, from the player and AI to the mission manager. In computer science, we have come to call these types of concerns “cross-cutting”. Aspect-oriented programming was developed, in part, to solve the problems of cross-cutting: employing “advice” which can be incorporated across different pieces of functionality.
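The notion of "advice" can be shown with a minimal sketch, assuming Python decorators as the weaving mechanism (GAMESPECT itself targets Unreal Engine 4, not Python; the `allocate` function and log messages below are invented for illustration). The cross-cutting behaviour (logging) is woven around functions from outside their own bodies:

```python
import functools

def advice(before=None, after=None):
    """Minimal aspect-style 'advice': wrap a join point (a function)
    with behaviour woven in from outside its own code."""
    def weave(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if before:
                before(fn.__name__, args)
            result = fn(*args, **kwargs)
            if after:
                after(fn.__name__, result)
            return result
        return wrapper
    return weave

log = []

@advice(before=lambda name, args: log.append(f"enter {name}"),
        after=lambda name, result: log.append(f"exit {name}"))
def allocate(size):
    # The function body knows nothing about logging.
    return bytearray(size)

allocate(16)
print(log)  # ['enter allocate', 'exit allocate']
```

The same `advice` could be applied to the renderer, the mission manager, and the AI without duplicating the logging code in each, which is exactly the problem cross-cutting concerns pose and AOP addresses.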
Yet, despite the availability of this solution, very little work has been done to bring cross-cutting to game engine programming. Additionally, the discipline involves a heavy amount of code rewriting and reuse, while simultaneously relying on many common design patterns that are copied from one project to another. In the case of game balance, the code may be wildly different across two different games despite the fact that similar tasks are being done. These two problems are exacerbated by the fact that almost every game engine has its own custom DSL (domain-specific language) unique to that situation. If a DSL could expose the areas of cross-cutting concern while capturing design patterns that can be reused across games, significant productivity savings could be achieved while simultaneously creating a common thread for discussion of shared problems within the domain.
This dissertation sought to do exactly that: create a metalanguage called GAMESPECT which supports multiple styles of DSLs while bringing aspect-oriented programming into the DSLs to make them DSALs (domain-specific aspect languages). The example cross-cutting concern was game balance and tuning, since it is so pervasive and important to gaming. We created GAMESPECT as a language and a composition framework which can assist engine developers and game designers in balancing their games, forming one central place for game balancing concerns even when these concerns cross different languages and locations in the source code. Generality was measured by showcasing the composition specifications in multiple contexts and languages.
In addition to generality and performance metrics, effectiveness was measured. Specifically, comparisons were made between a balancing initiative performed with GAMESPECT and one performed with a traditional methodology. In doing so, this work shows a clear advantage to using a metalanguage such as GAMESPECT for this task. In general, a 9–40% reduction in lines of code per task was achieved with negligible effects on performance. The use of a metalanguage in Unreal Engine 4 is a starting point for further discussions concerning other game engines. In addition, this work has implications beyond video game programming: it highlights benefits which might be achieved in other disciplines where design pattern implementation and cross-cutting concern usage are high; the real-time simulation field and the field of Windows GUI programming are two examples of future domains.
House generation using procedural modeling with rules
This work is dedicated to the development of software for creating 3D models of houses with a modern design. The main goal is to build a tool that uses procedural modeling techniques to generate a wide variety of models. The result is an application capable of generating realistic model renderings from an input consisting of a set of rules. The most important part of this project is the development of the grammar: each operation starts out simple and basic in 3D form, and combinations of operations such as extrusions, subdivisions, or prefabrications are then applied to increase the complexity of the whole model until it resembles a modern house. This part was validated with tests of the correct behaviour of the algorithm, as it is the core of the whole application.
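The rule-driven approach can be sketched as follows. This is a toy illustration of a shape-grammar-style pipeline, not the thesis's actual system: the rule names, box representation `(x, y, z, width, depth, height)`, and split/extrude operations are assumptions chosen for brevity.

```python
def split_y(box, h):
    """Split a box horizontally into a lower part of height h
    and an upper part with the remainder (e.g. two floors)."""
    x, y, z, w, d, height = box
    return [(x, y, z, w, d, h), (x, y + h, z, w, d, height - h)]

def extrude(box, dz):
    """Push a box outward along z, e.g. to form a wing or porch."""
    x, y, z, w, d, h = box
    return [(x, y, z + dz, w, d, h)]

# Each named rule rewrites one box into a list of child boxes.
rules = {"floors": lambda b: split_y(b, 3.0),
         "wing":   lambda b: extrude(b, 2.0)}

def apply_rules(box, rule_names):
    """Apply a sequence of rules, rewriting every current box."""
    boxes = [box]
    for name in rule_names:
        boxes = [child for b in boxes for child in rules[name](b)]
    return boxes

print(apply_rules((0, 0, 0, 10, 8, 6), ["floors"]))
```

Chaining such rewrites is what lets a small set of rules produce a wide variety of house masses: the input rule set, not the code, determines the final shape.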
Proceedings of the Workshop on the Reuse of Web based Information
The proceedings are currently available online at http://www-rocq.inria.fr/~vercoust/REUSE/WWW7-reuse.html, where individual papers can be downloaded; however, this URL must not be regarded as permanent. These are the Proceedings of the Workshop on the Reuse of Web Information, held in conjunction with the Seventh International World Wide Web Conference, Brisbane, 14 April 1998.
Enhancing automatic level generation for platform videogames
This dissertation addresses the challenge of improving automatic level generation processes for platform videogames. As Procedural Content Generation (PCG) techniques evolved from the creation of simple elements to the construction of complete levels and scenarios, the principles behind the generation algorithms became more ambitious and complex, capturing features that were previously only possible with human design. PCG goes beyond the search for valid geometries that can be used as levels, in which multiple challenges are represented adequately; it is also a search for user-centred content and for the creative spark of humanly created design.
In order to improve the creative capabilities of such generation algorithms, we directed part of our research towards the creation of new techniques using more ambitious design patterns. For this purpose, we implemented two overall structure generation algorithms and created an additional adaptation algorithm. The latter can transform simple branched paths into more compelling game challenges by adding items and other elements in specific places, such as gates and the levers that activate them. This approach avoids excessive level linearity and represents certain design patterns with additional content richness.
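A gate-and-lever adaptation of this kind can be sketched on a room graph. This is a minimal illustration under assumed representations (rooms as adjacency lists, the gate as a locked edge), not the dissertation's algorithm: the key is placed in a dead-end room reachable before the gate, so a linear path gains a mandatory detour.

```python
from collections import deque

def reachable(graph, start, blocked_edge):
    """Rooms reachable from start without crossing the locked edge."""
    seen, queue = {start}, deque([start])
    while queue:
        room = queue.popleft()
        for nxt in graph[room]:
            if {room, nxt} == set(blocked_edge) or nxt in seen:
                continue
            seen.add(nxt)
            queue.append(nxt)
    return seen

def place_key(graph, start, gate):
    """Put the key/lever in a dead-end room on the near side of the
    gate, turning a straight path into a branch-and-return detour."""
    candidates = reachable(graph, start, gate) - set(gate) - {start}
    dead_ends = sorted(r for r in candidates if len(graph[r]) == 1)
    return (dead_ends or sorted(candidates))[0]

# Linear path A-B-D-E with a side room C; the D-E door is locked.
level = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"],
         "D": ["B", "E"], "E": ["D"]}
print(place_key(level, "A", ("D", "E")))  # C
```

Because the key sits in the side branch, the player must leave the main path and come back, which is precisely the linearity-breaking effect described above.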
Moreover, content adaptation was transposed from the general design domain to user-centred principles. In this particular case, we analysed success and failure patterns in action videogames and proposed a set of metrics to estimate difficulty, taking into account that each user perceives that concept differently. This type of information serves the generation algorithms, directing them more towards the creation of personalised experiences.
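One simple per-player estimator in this spirit can be sketched as follows. It is not the dissertation's metric: the exponential recency weighting is an assumption, chosen so the estimate adapts as the player's skill changes.

```python
def difficulty(attempts, decay=0.8):
    """Estimate one player's perceived difficulty of a challenge from
    their attempt history (1 = success, 0 = failure). Recent attempts
    weigh more, so the score tracks the player's current skill."""
    weight, score, total = 1.0, 0.0, 0.0
    for outcome in reversed(attempts):   # most recent attempt first
        score += weight * (1 - outcome)  # failures raise difficulty
        total += weight
        weight *= decay
    return score / total if total else 0.5  # no data: assume neutral

# Early failures followed by recent successes read as easier than
# early successes followed by recent failures.
print(difficulty([0, 0, 1, 1]) < difficulty([1, 1, 0, 0]))  # True
```

A generator can then feed such a score back into its parameters (enemy density, gap width, and so on) to target a desired challenge level per player.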
Furthermore, the conducted research also aimed at the integration of different techniques on common ground. For this purpose, we developed a general framework to represent the content of platform videogames, compatible with several titles within the genre. Our algorithms run over this framework, which makes them generic and game-independent. We defined a modular architecture for the generation process, using this framework to normalise the content that is shared by multiple modules. A level editor tool was also created, which allows human level design and the testing of automatic generation algorithms. An adapted version of the editor was implemented for the semi-automatic creation of levels, in which the designer simply defines the type of content desired, in the form of quests and missions, and the system creates a corresponding level structure. This materialises our idea of bridging high-level human design patterns with lower-level automated generation algorithms.
Finally, we integrated the different contributions into a game prototype. This implementation allowed us to test the proposed approaches together, reinforcing the validity of the proposed architecture and framework. It also allowed a more complete retrieval of gameplay data in order to strengthen and validate the proposed metrics regarding difficulty perception.
In Defense of the Lone Wolf: Collaboration in Language Documentation
Collaboration has become a hot topic in the field of language documentation, with many authors insisting that lone wolf research is unethical research. We take issue with the viewpoints that documentary linguists must collaborate with the community, that the linguist’s goals should be subordinate to the goals of community members, and that solo research is necessarily unethical research. Collaborating with community members in language documentation projects is not the only method of treating the community fairly and reciprocating their generosity. There will not always be community members interested in language documentation, nor will there always be community members capable of participation. Even in cases where community members are interested, capable, and willing, both the researcher and the community should be allowed to decide when, where, how, and whether to collaborate. Moreover, we suggest that the insistence on collaboration can cause guilt when collaboration is difficult, or can lead researchers into unproductive or even dangerous situations. On the other hand, we welcome collaboration if both parties retain autonomy in decision-making and both truly want to work collaboratively. There is nothing unethical about setting one’s own research agenda and conducting linguistic fieldwork alone. Lone wolf linguistics isn’t necessarily unethical linguistics.
National Foreign Language Resource Center