
    TEACHING ALGORITHMS PROFILE-ORIENTED: A PROPOSED METHODOLOGY TO ELEMENTARY SCHOOL

    One of the first challenges in the programming field, apart from those related to traditional teaching methods, is the introduction of technological tools that can improve children's learning experience by presenting them with problems appropriate to their age. This choice is of great importance, since it allows students to develop their capacity for abstraction in a natural way; it also improves their creativity and problem-solving abilities. To address these challenges, we propose a methodology based on the VARK questionnaire, which defines each student's profile and, consequently, the most appropriate technology-based educational materials. Once student profiles are defined, the Scratch programming language is used to develop problem-solving skills through three concepts: sequencing, repetition and conditions.
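
    As a rough illustration of those three concepts, the sketch below expresses a sequence, a loop and a conditional in Python. Scratch itself represents these as visual blocks, so this is only a textual stand-in, not material from the paper.

    ```python
    # Illustrative only: the three concepts the methodology targets,
    # written in Python as a compact stand-in for Scratch blocks.

    steps = ["walk forward", "turn left", "walk forward"]

    for step in steps:          # repetition: run the body once per step
        print(step)             # sequencing: actions execute in order

    score = 7
    if score >= 5:              # condition: branch on the current state
        print("Level complete!")
    else:
        print("Try again.")
    ```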

    Java for Cost Effective Embedded Real-Time Software


    Development of a Software Package for Chemical Engineering Thermodynamic Research and Calculations


    Reverse code engineering of .NET applications


    Relaxing Synchronization in Distributed Simulated Annealing

    Simulated annealing is an attractive, but expensive, heuristic for approximating the solution to combinatorial optimization problems. Since simulated annealing is a general-purpose method, it can be applied, with careful control of the cooling schedule, to a broad range of NP-complete problems such as the traveling salesman problem, problems from graph theory, and cell placement. Attempts to parallelize simulated annealing, particularly on distributed-memory multicomputers, are hampered by the algorithm's requirement of a globally consistent system state. In a multicomputer, maintaining the global state S involves explicit message traffic and is a critical performance bottleneck. One way to mitigate this bottleneck is to amortize the overhead of these state updates over as many parallel state changes as possible. This technique, however, introduces errors in the actual cost C(S) of a particular state S into the annealing process. This dissertation places analytically derived bounds on the cost error in order to assure convergence to the correct result. The resulting parallel simulated annealing algorithm dynamically changes the frequency of global updates as a function of the annealing control parameter, i.e. temperature. Implementation results on an Intel iPSC/2 are reported.
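
    A minimal sequential sketch of the idea, assuming a user-supplied `neighbor` move generator and a cheap `delta_estimate` of the cost change (both hypothetical names standing in for a worker's possibly stale view of the global state): the exact global cost is recomputed only periodically, and the resynchronization interval shrinks as the temperature drops. The dissertation's actual error bounds and distributed implementation are not reproduced here.

    ```python
    import math
    import random

    def relaxed_annealing(state, neighbor, cost, delta_estimate,
                          t0=10.0, t_min=1e-3, alpha=0.95, moves_per_temp=200):
        """Annealing with amortized global-cost resynchronization (sketch)."""
        c = cost(state)                                # exact global cost C(S)
        t = t0
        while t > t_min:
            # Hypothetical schedule: resynchronize rarely at high temperature,
            # after every move as the system cools and errors matter more.
            sync_interval = max(1, int(moves_per_temp * t / t0))
            for move in range(moves_per_temp):
                candidate = neighbor(state)
                d = delta_estimate(state, candidate)   # cheap, possibly stale
                if d < 0 or random.random() < math.exp(-d / t):
                    state, c = candidate, c + d        # tracked cost may drift
                if (move + 1) % sync_interval == 0:
                    c = cost(state)                    # flush accumulated error
            t *= alpha                                 # geometric cooling
        return state, cost(state)
    ```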

    Software composition with templates

    Software composition systems are systems that concentrate on the composition of components. These systems represent a growing subfield of software engineering. Traditional software composition approaches define components as black-boxes. Black-boxes are characterised by their visible behaviour, but not their visible structure: they describe what can be done, rather than how it can be done. Basically, black-boxes are structurally monolithic units that can be composed via provided interfaces. The growing complexity of software systems, and dynamically changing requirements on these systems, demand better parameterisation of components. State-of-the-art approaches have tried to increase the parameterisation of systems with so-called grey-box components (grey-boxes). These components introduce structural configurability, and grey-boxes can improve the composability, reusability, extensibility and adaptability of software systems. However, there is still a big gap between grey-box approaches and business. We see two main reasons for this. Firstly, the structurally non-monolithic nature of grey-boxes results in a significantly increased number of components and relationships that may form a software system. This makes grey-box approaches more complex and their development more expensive, and there is a lack of tools to decrease this complexity. Secondly, grey-box composition approaches are oriented towards experts with a technical background in programming languages and software architectures. Up to now, state-of-the-art approaches have not addressed the question of their efficient applicability by domain experts without such a background. We consider that the structural visibility of grey-boxes offers a chance to better externalise business logic, so that even a non-expert in programming languages could design a software system for his or her own domain. In this thesis, we propose a holistic approach, called the Neurath Composition Framework, to compose software systems according to well-defined requirements which have been externalised, giving ownership of the design to the end-user. We show how externalisation of business logic can be achieved using grey-box composition systems augmented with domain-specific visual interfaces. We define our own grey-box composition system based on the Parametric Code Templates component model and the Molecular Operations composition technique. With this composition system, awareness of a design, comprehensive development, and the reuse of program code templates can be achieved. Finally, we present a sample implementation that shows the applicability of the composition framework to real-life business tasks.
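
    As a loose illustration of parametric code templates (not the thesis's actual component model or its Molecular Operations technique), the Python sketch below composes two template fragments, where the instantiation of one template fills an open parameter of another. All names and the SQL example are invented for illustration.

    ```python
    from string import Template

    # Grey-box flavour: the components' structure is visible and has open
    # parameters, and composition fills those parameters with other fragments.
    query_template = Template("SELECT $columns FROM $table WHERE $filter")
    filter_template = Template("$field >= $threshold")

    # Compose: one template's instantiation parameterises another.
    fragment = filter_template.substitute(field="age", threshold="18")
    program = query_template.substitute(
        columns="name, age", table="customers", filter=fragment
    )
    print(program)  # SELECT name, age FROM customers WHERE age >= 18
    ```

    Because the open parameters are explicit, a domain-specific visual front-end could expose them to a non-programmer, which is the kind of externalisation of business logic the abstract describes.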

    Managing complex taxonomic data in an object-oriented database.

    This thesis addresses the problem of multiple overlapping classifications in object-oriented databases through the example of plant taxonomy. These multiple overlapping classifications are independent simple classifications that share information (nodes and leaves) and therefore overlap. Plant taxonomy was chosen as the motivating application domain because taxonomic classifications are especially complex and have changed over long periods of time, and therefore overlap in a significant manner. This work extracts basic requirements for the support of multiple overlapping classifications in general, and in the context of plant taxonomy in particular. These requirements form the basis on which a prototype is defined and built. The prototype, an extended object-oriented database, extends an object-oriented model based on ODMG with a relationship management mechanism. These relationships are the main feature used to build classifications. This emphasis on relationships allows classifications to be described orthogonally to the classified data (for reuse, for integration of the mechanism with existing databases, and for classification of non-cooperating data), and allows easier and more powerful management of semantic data (both within and outside a classification). Additional mechanisms such as integrity constraints are investigated and implemented. Finally, the implementation of the prototype is presented and evaluated, from the point of view of both usability and expressiveness (using plant taxonomy as an application) and of its performance as a database system. This evaluation shows that the prototype meets the needs of taxonomists.
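
    A toy sketch of the central design idea, classifications stored orthogonally to the classified objects, so that several overlapping hierarchies can share the same nodes without modifying them. The class names and example taxa below are illustrative, not taken from the prototype.

    ```python
    class Taxon:
        """A classified object; it carries no classification pointers itself."""
        def __init__(self, name):
            self.name = name

    class Classification:
        """Parent-child relationships are stored outside the Taxon objects."""
        def __init__(self, name):
            self.name = name
            self.children = {}            # parent Taxon -> list of child Taxa

        def relate(self, parent, child):
            self.children.setdefault(parent, []).append(child)

    # Two classifications overlap: they share nodes but disagree on placement,
    # which object-embedded pointers to a single parent could not express.
    rosaceae, rosa, prunus = Taxon("Rosaceae"), Taxon("Rosa"), Taxon("Prunus")
    historical = Classification("historical view")
    modern = Classification("modern view")
    historical.relate(rosaceae, rosa)
    modern.relate(rosaceae, rosa)
    modern.relate(rosaceae, prunus)       # placement present in only one view
    ```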

    Modèle et simulateur à grande échelle d'une rétine biologique, avec contrôle de gain (Large-scale model and simulator of a biological retina, with gain control)

    The retina is a complex neural structure. The characteristics of retinal processing are reviewed extensively in Part I of this work: it is a very ordered structure, which performs band-pass spatio-temporal enhancement of the incoming light along different parallel output pathways with distinct spatio-temporal properties. The spike trains emitted by the retina have a complex statistical structure, such that precise spike timings may play a role in the code conveyed by the retina. Several mechanisms of gain control provide a constant adaptation of the retina to luminosity and contrast. The retina model that we define and implement in Part II can account for a good part of this complexity. It can model spatio-temporal band-pass behavior with adjustable filtering scales, and includes plausible mechanisms of contrast gain control and spike generation. The gain control mechanism proposed in the model provides a good fit to experimental data, and it can induce interesting effects of local renormalization in the output retinal image. Furthermore, a mathematical analysis confirms that the gain control behaves well under simple sinusoidal stimulation. Finally, the simulator Virtual Retina implements the model at large scale, so that it can emulate up to around 100,000 cells at a processing speed of about 1/100 real time. It is ready for use in various applications, while including a number of advanced retinal functionalities which are too often overlooked.
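
    A minimal NumPy sketch of the two ingredients named above: a spatial band-pass stage (difference of Gaussians) followed by divisive contrast gain control. The constants, sigmas and exact normalization are assumptions for illustration, not the Virtual Retina equations.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def retina_stage(image, sigma_center=1.0, sigma_surround=3.0, eps=0.1):
        """Band-pass filtering plus divisive contrast gain control (sketch)."""
        center = gaussian_filter(image, sigma_center)
        surround = gaussian_filter(image, sigma_surround)
        bandpass = center - surround          # spatial band-pass enhancement
        # Divisive normalization: the gain shrinks where local contrast is
        # high, giving a local renormalization of the output image.
        local_contrast = gaussian_filter(np.abs(bandpass), sigma_surround)
        return bandpass / (eps + local_contrast)

    frame = np.random.rand(128, 128)          # stand-in for an input image
    response = retina_stage(frame)
    ```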

    Program crash analysis : evaluation and application of current methods

    After decades of development in computer science, memory corruption bugs still pose a threat to the reliability of software. Automatic crash reporting and fuzz testing are effective ways of gathering information about program bugs. However, these methods can produce thousands of crash dumps, motivating the need for grouping and prioritizing crashes. In addition, the time needed to analyze the root cause of a crash and to implement a reliable fix in source code should be reduced. This thesis demonstrates how fuzzing can produce a large set of different crashes in a real program. An empirical study explores methods for analyzing these crashes: automatic bucketing and classification are performed, call-stack-based grouping algorithms are compared, and modifications are suggested. Taint analysis is demonstrated as a complementary method to automatic classification based on crash dumps, and dynamic analysis using execution traces is demonstrated as a method for root cause analysis. The empirical study suggests some general results regarding program crash analysis. Crashes should be grouped based on related crash locations and identified similarities in call stacks. A distance algorithm can be used for call-stack-based grouping and to identify relations between groups. It is suggested that a weighted priority model be used for prioritizing crashes based on a strategic policy; possible metrics include frequency, reliability, severity estimate, and relations to already fixed bugs. In order to properly fix a memory corruption bug, the underlying cause should be understood at machine level. Execution traces with logged operands, differential debugging, Crash Graphs and input analysis may help developers analyze different aspects of memory corruption bugs.
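
    A small sketch of call-stack-based bucketing with a distance algorithm: here a normalized Levenshtein distance over function names, with an illustrative threshold. The thesis's actual grouping algorithms, weights and modifications are not reproduced.

    ```python
    def stack_distance(a, b):
        """Edit distance over call-stack frames, normalized to [0, 1]."""
        m, n = len(a), len(b)
        d = [[max(i, j) if 0 in (i, j) else 0 for j in range(n + 1)]
             for i in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d[i][j] = min(d[i - 1][j] + 1,                  # drop a frame
                              d[i][j - 1] + 1,                  # insert a frame
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
        return d[m][n] / max(m, n, 1)

    def bucket(crashes, threshold=0.4):
        """Greedy grouping: join the first bucket whose exemplar is close."""
        groups = []
        for stack in crashes:
            for g in groups:
                if stack_distance(stack, g[0]) <= threshold:
                    g.append(stack)
                    break
            else:
                groups.append([stack])
        return groups

    crashes = [["main", "parse", "memcpy"], ["main", "parse", "strcpy"],
               ["main", "render", "draw"]]
    print(len(bucket(crashes)))   # 2 buckets: the parse crashes group together
    ```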