
    Learning practices and teaching methods as factors in gender inequity in undergraduate computer science programs.

    The primary purpose of this study is to identify the difficulties students face in adapting to an undergraduate computer science program. The research was conducted in the Department of Computer Science at a medium-sized urban university in Ontario. Subjects were 16 students (ten males and six females) from the first to the third year of study, and two professors. The research used a mixed-methods design (QUAL+quan). Qualitative methods predominated and were used to explore the differences and difficulties both genders experience in the computer science program and the ways they deal with them. Quantitative methods were used to compare and analyze some of the details. Most female students had initial experience in using computers, but few had previous experience in programming. During the program they focused more on academic achievement and were not oriented toward developing practical projects or preparing for the realities of work in the IT industry. With respect to teaching, female students were more sensitive to teaching quality than male students. Over the course of the program, female students' anxiety, lack of confidence, and underachievement increased. The research revealed that the majority of males had initial experience in computer programming. During the program they acquired more confidence and greater experience in programming and had more mature views of an IT career than their female colleagues. Male students were more oriented toward gaining real-world experience; because they worked in a variety of informal settings, they were able to extend and diversify that experience. Male students were also more independent of teacher performance, being more willing to take ownership of the learning process, especially when teaching was not effective. Male students readily formed social networks that were able to help them. Female students had better social and communication skills; however, because they were few in number and lacked initiative and support, they failed to form social networks able to support them. With regard to feminist approaches, the author argues that liberal feminism is most likely to succeed in preparing women for a traditionally male-dominated workplace. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2006 .S76. Source: Masters Abstracts International, Volume: 45-01, page: 0045. Thesis (M.Ed.)--University of Windsor (Canada), 2006.

    Hardware Design, Prototyping and Studies of the Explicit Multi-Threading (XMT) Paradigm

    With the end of exponential performance improvements in sequential computers, parallel computers, dubbed "chip multiprocessors", "multicores", or "manycores", have been introduced. Unfortunately, programming current parallel computers tends to be far more difficult than programming sequential computers. The Parallel Random Access Machine (PRAM) is known to be an easy-to-program parallel computer model and has been widely used by theorists to develop parallel algorithms, because it abstracts away architectural details and allows algorithm designers to focus on critical issues. The eXplicit Multi-Threading (XMT) PRAM-On-Chip project seeks to build an easy-to-program on-chip parallel processor by supporting a PRAM-like programming (and performance) model. This dissertation focuses on the design and study of the micro-architecture of the XMT processor, as well as on performance optimization. The main contributions are: (1) a scalable micro-architecture for XMT, derived from a high-level description of the architecture; (2) a synthesizable Verilog HDL (hardware description language) description of XMT, which led to the first commitment of the XMT processor to silicon, a 75 MHz XMT FPGA computer; with the same design, we expect to see the first XMT ASIC processor using IBM 90 nm technology; (3) several architectural upgrades to XMT, proposed and implemented: (i) value broadcasting, (ii) hardware/software co-managed prefetch buffers, and (iii) hardware/software co-managed read-only buffers; (4) a quantitative study of XMT performance on non-trivial application kernels using the 75 MHz XMT FPGA computer; in addition, the performance of an 800 MHz XMT processor is projected; (5) a study of the choice of not having local private caches in the XMT architecture, comparing the current architecture with an alternative that includes conventional coherent private caches.
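
    The PRAM model that XMT targets lets the programmer think in synchronous rounds of independent operations. As a loose illustration (a Python sketch, not XMT's actual XMTC toolchain), here is a classic PRAM-style O(log n) sum reduction, with each "parallel" round simulated by a pass that reads the previous round's values before writing any new ones:

```python
# Simulation of a PRAM-style sum reduction: in each of ceil(log2(n))
# synchronous rounds, every active "processor" adds its partner's
# value. The parallelism is only simulated here; all reads of the
# previous round complete before any writes, as in the PRAM's
# lockstep shared-memory model.

def pram_sum(values):
    a = list(values)
    stride = 1
    while stride < len(a):
        nxt = list(a)                      # snapshot: reads before writes
        for i in range(0, len(a) - stride, 2 * stride):
            nxt[i] = a[i] + a[i + stride]  # one conceptual processor
        a = nxt
        stride *= 2
    return a[0]

print(pram_sum(range(8)))  # prints: 28
```

A real PRAM algorithm would execute the inner loop's iterations concurrently; the point of XMT-style architectures is to make exactly this kind of code efficient in hardware.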

    Workshop on Easy Java Simulations and the ComPADRE Digital Library

    The premise of Easy Java Simulations (EJS) modeling is that when students are not actively involved in modeling, they lose out on much of what can be learned from computer simulations. Although the modeling method can be used without computers, using computers allows students to study problems that are difficult and time-consuming, to visualize their results, and to communicate their results to others. EJS is a free open-source Java application that simplifies the modeling process by breaking it into three activities: 1) documentation, 2) modeling, and 3) interface design. The EJS program and example models will be available on CD. EJS models, documentation, and sample curricular material can also be downloaded from the Open Source Physics collection in the comPADRE NSF Digital Library http://www.compadre.org/osp and from the Easy Java Simulations website http://www.um.es/fem/Ejs. Easy Java Simulations (EJS) is a modeling and authoring tool that helps science teachers and students create interactive simulations of scientific phenomena. These simulations can then be used in computer laboratories to better explain difficult concepts, to motivate students to study science, or to let students work with the simulations or (for more advanced students) even create their own. Both activities have proven to be very powerful didactic resources. EJS has been specifically designed to be used by people with no advanced programming skills, so it tries very hard to make all the technical tasks easy. Authors still need to define the model of the phenomenon under study and design the visualization and interface for the simulation's data, which means they need to program scientific algorithms in the Java language. But the extensive help provided by EJS makes this far easier than what is traditionally called "learning to program".
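
    To make concrete what "defining the model" means in a tool like EJS, the author essentially writes an evolution step for the system's differential equations; the tool supplies the interface and visualization. The following is a language-neutral sketch in Python (EJS itself generates Java), with illustrative constants rather than EJS defaults:

```python
import math

# Sketch of the "evolution" step a simulation author writes: the
# pendulum model d(theta)/dt = omega, d(omega)/dt = -(g/L)*sin(theta),
# advanced with a simple Euler-Cromer step. g, L, dt are illustrative.

g, L, dt = 9.81, 1.0, 0.001

def step(theta, omega):
    omega += -(g / L) * math.sin(theta) * dt  # update velocity first
    theta += omega * dt                       # then position (Euler-Cromer)
    return theta, omega

theta, omega = 0.2, 0.0        # released from rest at 0.2 rad
for _ in range(1000):          # simulate one second
    theta, omega = step(theta, omega)
```

After one second (about half the small-angle period of roughly 2.0 s), the pendulum is near the opposite extreme of its swing; in EJS, this numerical core would be written once and the tool would handle plotting and interactivity.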

    Loop pipelining with resource and timing constraints

    Developing efficient programs for many current parallel computers is not easy, due to the architectural complexity of those machines. The wide variety of machine organizations often makes it more difficult to port an existing program than to reprogram it completely. Therefore, powerful translators are necessary to generate effective code and free the programmer from concerns about the specific characteristics of the target machine. This work focuses on techniques to be used by an important class of translators, whose objective is to transform sequential programs into equivalent, more parallel programs. The transformations are performed at the instruction level in order to exploit low-level parallelism and increase memory locality. Most current applications are programmed in languages which do not allow parallelism between high-level statements to be expressed (such as Pascal, C, or Fortran). Furthermore, many applications written ten or more years ago are still in use today, and it is not feasible to rewrite them, for economic as well as technical reasons. Translators enable programmers to write an application in a familiar sequential programming language without concerning themselves with the architecture of the target machine. Current compilers for parallel architectures not only translate a program written in a high-level language into the appropriate machine language, but also transform the final code so that the program executes in a more parallel way. These transformations improve the program's execution performance by exploiting the compiler's knowledge of the machine architecture; the semantics of the program remain intact after any transformation. Experiments show that limiting parallelization to basic blocks not included in loops limits the maximum speedup, because loops often contain a large portion of the parallelism available in a program. For this reason, much effort has been devoted in recent years to parallelizing loop execution. Several parallel computer architectures and compilation techniques have been proposed to exploit such parallelism at different granularities. Multiprocessors exploit coarse-grained parallelism by distributing entire loop iterations to different processors. Systems oriented to the high-level synthesis (HLS) of VLSI circuits, superscalar processors, and very long instruction word (VLIW) processors exploit fine-grained parallelism at the instruction level. This work addresses fine-grained loop parallelization oriented to the HLS of VLSI circuits. Two algorithms are proposed: one for resource constraints and one for timing constraints. An algorithm to reduce the number of registers required to execute a loop in a given architecture is also proposed. Postprint (published version)
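
    The core idea of loop (software) pipelining is that stages of different iterations overlap: while iteration i stores its result, iteration i+1 computes and iteration i+2 loads. The sketch below models this with a hypothetical three-stage loop body; the overlap is represented only by the interleaved schedule, with a prologue filling the pipeline and an epilogue draining it:

```python
# Schematic of software pipelining. The loop body is split into three
# stages (load, compute, store); in the pipelined version, one result
# is stored per "cycle" of the steady-state loop, while the next two
# iterations are already in flight. Requires len(src) >= 2.

def load(src, i):      return src[i]
def compute(x):        return 2 * x + 1
def store(dst, i, y):  dst[i] = y

def sequential(src):
    dst = [0] * len(src)
    for i in range(len(src)):
        store(dst, i, compute(load(src, i)))
    return dst

def pipelined(src):
    n = len(src)
    dst = [0] * n
    x = load(src, 0)                     # prologue: fill the pipeline
    y = compute(x); x = load(src, 1)
    for i in range(n - 2):               # steady state
        store(dst, i, y); y = compute(x); x = load(src, i + 2)
    store(dst, n - 2, y)                 # epilogue: drain the pipeline
    store(dst, n - 1, compute(x))
    return dst

data = list(range(10))
assert pipelined(data) == sequential(data)
```

Note that the pipelined version keeps two extra values (x and y) live across iterations; this is exactly the register pressure that the register-reduction algorithm mentioned in the abstract is meant to control.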

    Development and application of ab initio electron dynamics on traditional and quantum compute architectures

    Electron dynamics processes are of utmost importance in chemistry. For example, light-induced processes are used in the field of photocatalysis to generate a wide variety of products by charge transfer, bond breaking, or electron solvation. In materials science, too, more and more such processes are known and utilized, for example to design more efficient solar cells. Even the formation of bonds in molecules is an electron dynamics process. Thanks to experimental progress, it is now possible to trigger specific processes and chemical reactions with specially shaped laser pulses. Computer-aided simulations are an indispensable tool for studying all these processes. Depending on the size of the molecules considered and the desired accuracy, however, the underlying quantum mechanics leads to numerical problems whose computation far exceeds the capabilities of even modern supercomputers. In this thesis, three projects are presented that demonstrate modern use cases of electron dynamics and show how recent developments in computer technology and software design can be used to develop more efficient and user-friendly programs. In the first project, inter-Coulombic decay (ICD), an ultrafast energy transfer process between two isolated chemical structures, is investigated. After the excitation of one structure, the energy is transferred to the other, which is ionized as a result. The process has already been demonstrated experimentally in atoms and molecules and is studied here for quantum dots, focusing on systems with more quantum dots and higher-dimensional continua than in previous studies. These elaborate studies are made possible by implementing computationally intensive parts of the Heidelberg MCTDH program on graphics processing units (GPUs).
    The studies performed show how the ICD process behaves with multiple partners, as well as which competing decay processes occur, and thus provide relevant information for the development of quantum-dot-based technologies such as quantum dot qubits for use in quantum computers. Electron dynamics processes are not only relevant to the development of new quantum computers; conversely, quantum computers could also make it possible to perform electron dynamics simulations with significantly more interacting electrons and smaller errors than would ever be possible with traditional computers. In a second project, therefore, a quantum algorithm was developed that could enable such simulations and their analysis in the future. The quantum algorithm was implemented in the dynamics program Jellyfish, which was also developed in the context of this dissertation. The program is built around a graphical user interface based on dataflow programming, which naturally leads to a modular structure. The resulting modules can be combined flexibly, which allows Jellyfish to be used for a wide variety of applications. In addition to dynamics algorithms, novel analysis methods were developed and demonstrated on laser-driven electronic excitations in molecules such as hydrogen, lithium cyanide, and guanine. The generation of electronic wave packets as well as transitions between electronic states were thus studied in an explicitly time-dependent manner, and the formation of the exciton in such processes was described qualitatively by means of densities and quantitatively by so-called exciton descriptors such as exciton size or hole and particle positions. In summary, this dissertation presents both new insights into electron dynamics processes and new possibilities for more efficient simulation of these processes using GPU implementations and quantum algorithms. The dynamics program Jellyfish offers the potential to be used in many further studies in this area and to be extended, for example to allow simulations with a continuum, as in the ICD calculations.
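
    The essence of explicitly time-dependent electron dynamics of the kind described above is propagating amplitudes under a time-dependent Hamiltonian. As a minimal sketch (a generic two-level model with illustrative parameters, not the thesis's MCTDH or Jellyfish methods), here is a resonantly driven two-level system propagated with small explicit time steps:

```python
import math

# Minimal sketch of laser-driven electron dynamics: a two-level system
# with transition energy w0, driven by a resonant field E(t) through a
# dipole coupling, propagated by explicit Euler steps of the
# time-dependent Schroedinger equation (atomic units, hbar = 1).
# All parameter values are illustrative assumptions.

w0, mu, E0, dt = 1.0, 0.2, 0.05, 0.001

c0, c1 = 1.0 + 0j, 0.0 + 0j              # ground/excited amplitudes
for n in range(20000):                    # 20 a.u. of time
    t = n * dt
    v = -mu * E0 * math.cos(w0 * t)       # dipole coupling to the field
    # i dc/dt = H c, with H = [[0, v], [v, w0]]
    dc0 = -1j * (v * c1) * dt
    dc1 = -1j * (v * c0 + w0 * c1) * dt
    c0, c1 = c0 + dc0, c1 + dc1

pop_excited = abs(c1) ** 2                # grows under resonant driving
```

Production codes replace the crude Euler step with norm-conserving propagators, and the excited-state population here plays the role that densities and exciton descriptors play in the analyses described above.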

    Advanced Computer Program Models: A Talking Textbook Based on Three Languages

    The purpose of this dissertation was to develop a learning instrument to be used by programmers preparing for the Data Processing Management Association (DPMA) test as a self-study book, or by college business programming and computer science students who have completed a course in data processing and a course in programming a higher-level language. The mathematical ability required was minimized by developing the algorithms in parallel with the programs. The learner should gain experience in the following areas: 1. the type of activities required to pass the DPMA test (the programming part); 2. data structures; 3. Fortran (at the level of the DPMA test); 4. RPG (at the level of the DPMA test); 5. flow chart reading and writing. The Fortran and RPG (Report Program Generator) languages were used because proficiency in them is required for the DPMA test; in addition, a subset of IBM Basic Assembler language was used, because the author believed that a person who is more than superficially interested in computers should demonstrate proficiency with a machine language. An important part of this method of presentation is the set of cassette recordings, which allow the learner to progress outside the classroom. The recordings, plus reduced-size hard copy of the actual programs, give the learner material which he can take to any location and study without the presence of the instructor.

    Refactoring in Automatically Generated Programs

    Refactoring aims at improving the design of existing code by introducing structural modifications without changing its behaviour. It is used to adjust a system's design in order to facilitate its maintenance and extensibility. Since deciding which refactoring to apply, and where, is not straightforward, search-based approaches to automating software refactoring have recently been proposed. So far, these approaches have been applied only to human-written code. Despite many years of computer programming experience, certain problems remain very difficult for programmers to solve. To address this, researchers have developed methods by which computers automatically create program code from a description of the problem to be solved. One of the most popular forms of automated program creation is Genetic Programming (GP). The aim of this work is to make GP more effective by introducing an automated refactoring step, based on the refactoring work in the software engineering community. We believe that the refactoring step will enhance the ability of GP to produce code that solves more complex problems, as well as result in evolved code that is both simpler and more idiomatically structured than that produced by traditional GP methods.
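
    GP typically represents evolved programs as expression trees, and a behaviour-preserving refactoring step can be sketched as a simplification pass over such trees. The rules below (x+0 -> x, x*1 -> x, x*0 -> 0) are generic algebraic identities chosen for illustration, not the specific refactorings of this work:

```python
# Illustrative behaviour-preserving "refactoring" pass on GP expression
# trees, represented as nested tuples (op, left, right). Leaves are
# variable names or constants. Each rule rewrites a subtree to a
# simpler, semantically equivalent one, mirroring how refactoring
# changes structure without changing behaviour.

def simplify(tree):
    if not isinstance(tree, tuple):
        return tree                              # leaf: variable or constant
    op, a, b = tree
    a, b = simplify(a), simplify(b)              # simplify children first
    if op == '+' and b == 0: return a
    if op == '+' and a == 0: return b
    if op == '*' and (a == 0 or b == 0): return 0
    if op == '*' and b == 1: return a
    if op == '*' and a == 1: return b
    return (op, a, b)

bloated = ('+', ('*', 'x', 1), ('*', ('+', 'y', 0), 0))
print(simplify(bloated))  # prints: x
```

In a GP context, such a pass would be applied between generations to remove the "bloat" that evolution tends to accumulate, leaving smaller trees for subsequent crossover and mutation to work on.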

    Immigrant Youth and Digital Disparity in California

    This study addresses three key research questions regarding immigrant youth and the digital divide: (1) What are the patterns of home technology use among native-born and immigrant families and youth? (2) What are the causes and consequences of the digital divide for immigrant families and youth? (3) How does technology at CTCs in California benefit immigrant families and youth?