90 research outputs found

    The Influence Of Allocentric Cues on Transsaccadic Integration of Multiple Objects

    This thesis explores the role of stable allocentric information in the integration of visual information across eye movements. In this series of studies, I tested transsaccadic integration of multiple objects, each with varying orientations, in the presence or absence of reliable landmarks. Participants compared the orientations of two target stimuli presented before and after an eye movement and indicated whether the change was clockwise or counterclockwise. The landmarks were either intrinsic (directly related to the spatial location of the stimuli) or extrinsic (spatially independent). Results showed that intrinsic landmarks were not able to temper the decrease in performance observed across eye movements, but the extrinsic landmark had a significant effect. A control experiment was conducted to clarify the extrinsic landmark's role as a spatial cue. These results show that extrinsic visual landmarks can aid the visual system's ability to integrate visual information across eye movements.

    Efficient multielement ray tracing with site-specific comparisons using measured MIMO channel data


    Towards a taxonomy of distributed-object models

    Different ideas about object-orientation and distributed computing have resulted in a large number of distributed-object models. Use of the same terminology with different meanings makes these models hard to compare. What is currently missing is a framework for describing object models which can be used to compare and classify them. An attempt at defining such a framework is presented in this paper.
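The abstract above describes a classification framework for distributed-object models. The paper does not specify its dimensions, so the following is only a minimal sketch of what such a framework could look like in code, with hypothetical classification axes (remote references, invocation style, migration) and example models chosen for illustration:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ObjectModel:
    """One entry in a hypothetical taxonomy of distributed-object models."""
    name: str
    remote_references: bool   # can object references cross node boundaries?
    invocation: str           # "synchronous" or "asynchronous"
    migration: bool           # can objects move between nodes?

def differing_dimensions(a: ObjectModel, b: ObjectModel) -> list[str]:
    """List the taxonomy dimensions on which two models disagree."""
    da, db = asdict(a), asdict(b)
    return [k for k in da if k != "name" and da[k] != db[k]]

# Two illustrative (not authoritative) model descriptions:
corba_like = ObjectModel("CORBA-like", True, "synchronous", False)
actor_like = ObjectModel("actor-like", True, "asynchronous", True)
print(differing_dimensions(corba_like, actor_like))  # -> ['invocation', 'migration']
```

Fixing a shared vocabulary of dimensions like this is precisely what lets models with overlapping terminology be compared side by side.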

    Process Driven Software Engineering Environments

    Software development organizations have begun using Software Engineering Environments (SEEs) with the goal of enhancing the productivity of software developers and improving the quality of software products. The encompassing nature of a SEE means that it is typically very tightly coupled with the way an organization does business. To be most effective, the components of a SEE must be well integrated and the SEE itself must be integrated with the organization. The challenge of tool integration increases considerably when the components of the environment come from different vendors and support varying degrees of “openness”. The challenge of integration with the organization increases in a like manner when the environment must support a variety of different organizations over a long period of time. In addition to these pressures, any SEE must perform well and must “scale” well as the size of the organization changes. This paper proposes basing the Software Engineering Environment on the software development process used in an organization in order to meet the challenges of integration, performance, and scaling. The goals and services of distributed operating systems and Software Engineering Environments are outlined in order to more clearly define their roles. The motivation for using a well-defined software development process is established, along with the benefits of basing the Software Engineering Environment on the software development process. Components of a SEE that could effectively support the process and provide integration, performance, and scaling benefits are introduced, along with an outline of an Ada program used to model the proposed components. The conclusion provides strong support for process-driven SEEs, encourages the expansion of the concept into other “environments,” and cautions against literal interpretations of “process integration” that may slow the acceptance of this powerful approach.
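The core idea of a process-driven SEE is that the defined development process, not the individual tools, decides what the environment may do next. The paper models this with an Ada program; the toy sketch below (in Python, with invented step and tool names) shows only the general shape of the idea, a process definition driving which tools the environment is allowed to launch:

```python
# Hypothetical process definition: each step names the tool it drives
# and the steps the process allows next. Names are illustrative only.
PROCESS = {
    "edit":    {"tool": "editor",      "next": ["compile"]},
    "compile": {"tool": "compiler",    "next": ["test", "edit"]},
    "test":    {"tool": "test_runner", "next": ["release", "edit"]},
    "release": {"tool": "packager",    "next": []},
}

def allowed_tools(current_step: str) -> list[str]:
    """Tools the environment may launch next, as dictated by the process."""
    return [PROCESS[s]["tool"] for s in PROCESS[current_step]["next"]]

print(allowed_tools("compile"))  # -> ['test_runner', 'editor']
```

Because tool invocation is mediated by the process definition, swapping a vendor's tool only changes one table entry, which is one way such a design eases the integration problem the abstract describes.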

    New opportunities of computer assessment of knowledge based on fractal modeling

    [EN] This article addresses the problem of modeling control systems for assessing students' professional competences and mathematical knowledge. The pedagogical expediency of managing students' cognitive activity with new information-technology instruments for monitoring and assessing knowledge is argued. The application of fractal methods to improve computer monitoring of students' mathematical knowledge, as part of adaptive training systems, is considered and realized. The design of the technology comprises the following stages: development of a cross-disciplinary, fractally organized base of key mathematical concepts; creation of an extensible bank of educational, cognitive, and research tasks coordinated with the fractal structure of the conceptual framework; and development of a program module focused on individual estimation of the quality of students' educational-cognitive activity along two characteristics: depth of knowledge based on the Hurst exponent, and the size of the synergetic effect of educational-cognitive activity. The software realization of the technology for computerized control of training quality in mathematical disciplines, as part of the adaptive training system, is implemented in the programming language C#. Experience with implementing and operating the controlling systems realized in adaptive training systems based on fractal modeling showed the reliability of their work and increased the quality and effectiveness of educational-process management in general. Using fractal techniques in computer assessment of students' mathematical knowledge makes it possible to increase the accuracy and speed of evaluating students' knowledge. Work was carried out with the support of RNF, project № 16-18-10304.
    Dvoryatkina, S.; Smirnov, E.; Lopukhin, A. (2017). New opportunities of computer assessment of knowledge based on fractal modeling. In Proceedings of the 3rd International Conference on Higher Education Advances. Editorial Universitat Politècnica de València. 854-864. https://doi.org/10.4995/HEAD17.2017.544585486
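The abstract bases its "depth of knowledge" characteristic on the Hurst exponent but does not give the estimator; a standard way to estimate it is rescaled-range (R/S) analysis. The sketch below is not the authors' C# implementation, only a minimal pure-Python illustration of that classical technique: compute the R/S statistic over windows of doubling size and fit the slope of log(R/S) against log(window size):

```python
import math
import random
import statistics

def rescaled_range(x):
    """R/S statistic of one window: range of the cumulative
    mean-adjusted series divided by the window's standard deviation."""
    mean = statistics.fmean(x)
    cum, y = 0.0, []
    for v in x:
        cum += v - mean
        y.append(cum)
    s = statistics.pstdev(x)
    return (max(y) - min(y)) / s if s > 0 else 0.0

def hurst(x, min_window=8):
    """Estimate the Hurst exponent as the least-squares slope of
    log(mean R/S) against log(window size), doubling the window."""
    n, w = len(x), min_window
    logs, logrs = [], []
    while w <= n // 2:
        chunks = [x[i:i + w] for i in range(0, n - w + 1, w)]
        mean_rs = statistics.fmean(rescaled_range(c) for c in chunks)
        logs.append(math.log(w))
        logrs.append(math.log(mean_rs))
        w *= 2
    mx, my = statistics.fmean(logs), statistics.fmean(logrs)
    num = sum((a - mx) * (b - my) for a, b in zip(logs, logrs))
    den = sum((a - mx) ** 2 for a in logs)
    return num / den

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(4096)]
print(hurst(noise))  # white noise: estimate typically near 0.5
```

Values above 0.5 indicate persistent (trend-reinforcing) behavior and values below 0.5 anti-persistent behavior, which is the kind of distinction a fractal model of a student's activity trace would exploit.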

    Urban energy simulation based on 3d city models: a service-oriented approach

    Recent advancements in technology have led to the development of sophisticated software tools, revitalizing growth in different domains. Taking advantage of this trend, the urban energy domain has developed several compute-intensive physical and data-driven models. These models are used in various distinct simulation software packages to simulate the whole life-cycle of energy flow in cities, from supply, distribution, conversion, and storage to consumption. Since some simulation software targets a specific energy system, it is necessary to integrate these tools to predict present and future urban energy needs. However, a key drawback is that these tools are not compatible with each other, as they use custom or proprietary formats. Furthermore, they are designed as desktop applications and cannot be easily integrated with third-party tools (open source or commercial), thereby missing out on model functionality required for sustainable urban energy management. In this paper, we propose a solution based on Service-Oriented Architecture (SOA). Our approach relies on open interfaces to offer flexible integration of modelling and computational functionality as loosely coupled distributed services.
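The SOA approach above hinges on every simulation tool sitting behind the same open interface with a neutral payload format. The paper's actual interfaces are not given here, so the following is only a toy sketch of the pattern, two invented energy services (with made-up formulas and parameter names) behind one abstract interface exchanging plain JSON:

```python
import json
from abc import ABC, abstractmethod

class SimulationService(ABC):
    """Open interface every energy-simulation service implements;
    payloads are plain JSON so tools stay loosely coupled."""
    @abstractmethod
    def simulate(self, request_json: str) -> str: ...

class SolarYieldService(SimulationService):
    def simulate(self, request_json: str) -> str:
        req = json.loads(request_json)
        # toy model: yield proportional to roof area and irradiance
        yield_kwh = req["roof_area_m2"] * req["irradiance_kwh_m2"] * 0.15
        return json.dumps({"yield_kwh": yield_kwh})

class HeatDemandService(SimulationService):
    def simulate(self, request_json: str) -> str:
        req = json.loads(request_json)
        # toy model: demand proportional to floor area
        demand_kwh = req["floor_area_m2"] * req["kwh_per_m2"]
        return json.dumps({"demand_kwh": demand_kwh})

# The orchestrator only knows the interface, not the tool behind it.
services = {"solar": SolarYieldService(), "heat": HeatDemandService()}
out = services["solar"].simulate(json.dumps(
    {"roof_area_m2": 100, "irradiance_kwh_m2": 1000}))
print(out)  # {"yield_kwh": 15000.0}
```

Because the orchestrator depends only on the interface and the JSON schema, a desktop tool wrapped this way can be replaced or recombined without touching its callers, which is the integration benefit the abstract claims for SOA.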

    Flexible and transparent fault tolerance for distributed object-oriented applications

    This report describes an approach enabling automatic structural reconfiguration of distributed applications, based on configuration management, in order to compensate for node and network failures. The major goal of the approach is to maintain the relevant application functionality automatically after failures. This goal is achieved by a dedicated system model and by a decentralized reconfiguration algorithm based on it. The system model provides support for redundant application-object storage and for application-level consistency based on distributed checkpoints. The reconfiguration algorithm detects failures, computes a compensating configuration, and realizes this new configuration. The report emphasizes flexibility, in the sense of adaptable levels of fault tolerance, as well as transparency, in the sense of fully automatic reaction to failures.
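The combination described above, redundant object storage plus reconfiguration after failure, can be illustrated with a deliberately tiny model. This is not the report's algorithm (which is decentralized and consistency-aware); it is only a sketch of the idea that checkpointed state replicated on several nodes lets a surviving replica take over when the primary fails:

```python
class ReplicatedObject:
    """Toy model: keep an object's checkpointed state on several
    nodes and reconfigure when the primary node fails."""

    def __init__(self, state, nodes):
        self.replicas = {n: dict(state) for n in nodes}  # checkpoint per node
        self.primary = nodes[0]
        self.alive = set(nodes)

    def checkpoint(self, state):
        """Propagate a new consistent checkpoint to all live replicas."""
        for n in self.alive:
            self.replicas[n] = dict(state)

    def fail(self, node):
        """Simulated failure detection plus compensating reconfiguration."""
        self.alive.discard(node)
        if node == self.primary:
            # promote any surviving replica (here: lowest node name)
            self.primary = min(self.alive)

    def read(self):
        return self.replicas[self.primary]

obj = ReplicatedObject({"counter": 3}, ["node-a", "node-b", "node-c"])
obj.fail("node-a")              # primary crashes
print(obj.primary, obj.read())  # a surviving replica serves the checkpoint
```

The "adaptable level of fault tolerance" in the abstract corresponds here simply to the number of nodes the object is replicated on; the real system additionally has to keep checkpoints mutually consistent across objects.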

    Doctor of Philosophy

    Scene labeling is the problem of assigning an object label to each pixel of a given image. It is the primary step towards image understanding and unifies object recognition and image segmentation in a single framework. A perfect scene labeling framework detects and densely labels every region and every object that exists in an image. This task is of substantial importance in a wide range of applications in computer vision. Contextual information plays an important role in scene labeling frameworks. A contextual model utilizes the relationships among the objects in a scene to facilitate object detection and image segmentation. Using contextual information in an effective way is one of the main questions that should be answered in any scene labeling framework. In this dissertation, we develop two scene labeling frameworks that rely heavily on contextual information to improve performance over state-of-the-art methods. The first model, called the multiclass multiscale contextual model (MCMS), uses contextual information from multiple objects and at different scales for learning discriminative models in a supervised setting. The MCMS model incorporates cross-object and inter-object information into one probabilistic framework, and thus is able to capture geometrical relationships and dependencies among multiple objects in addition to local information from each single object present in an image. The second model, called the contextual hierarchical model (CHM), learns contextual information in a hierarchy for scene labeling. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. The CHM then incorporates the resulting multiresolution contextual information into a classifier to segment the input image at original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. We demonstrate the performance of CHM on different challenging tasks such as outdoor scene labeling and edge detection in natural images, and membrane detection in electron microscopy images. We also introduce two novel classification methods. WNS-AdaBoost speeds up the training of AdaBoost by providing a compact representation of a training set. Disjunctive normal random forest (DNRF) is an ensemble method that is able to learn complex decision boundaries and achieves low generalization error by optimizing a single objective function for each weak classifier in the ensemble. Finally, a segmentation framework is introduced that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy images.
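The CHM data flow described above, a classifier per level fed by downsampled images and the outputs of coarser levels, can be sketched schematically. The code below is not the dissertation's trained model; the "classifier" is a stand-in threshold rule, and only the coarse-to-fine pipeline (downsample, classify, upsample, feed forward as context) reflects the described structure:

```python
def downsample(img, factor=2):
    """Average-pool a 2-D image (list of lists) by the given factor."""
    h, w = len(img), len(img[0])
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w // factor)]
            for y in range(h // factor)]

def upsample(img, factor=2):
    """Nearest-neighbour upsampling back to the finer resolution."""
    return [[v for v in row for _ in range(factor)]
            for row in img for _ in range(factor)]

def toy_classifier(img, context=None):
    """Stand-in for a trained per-level classifier: thresholds each
    pixel plus the coarser level's (upsampled) output, if any."""
    ctx = context or [[0.0] * len(img[0]) for _ in img]
    return [[1.0 if p + c > 0.5 else 0.0 for p, c in zip(r1, r2)]
            for r1, r2 in zip(img, ctx)]

def chm_like(img, levels=2):
    """Classify coarse-to-fine, feeding each level's output upward."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    out = None
    for level in reversed(pyramid):   # coarsest level first
        if out is not None:
            out = upsample(out)       # bring context to this resolution
        out = toy_classifier(level, out)
    return out

img = [[0.6] * 4 for _ in range(4)]
print(chm_like(img)[0])  # finest-level labels for the first row
```

In the real CHM each `toy_classifier` is a learned classifier and the multiresolution outputs are combined by a final classifier at the original resolution, but the coarse-to-fine context propagation is the same.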