266 research outputs found

    Interactive 3D simulations

    Get PDF
    “Simulation” is a word familiar to almost everybody. Its meaning has expanded into all sectors (medicine, education, biology, engineering, psychology...), so the introduction of this paper focuses only on computer simulations. To understand what a computer simulation is, the most suitable and simple definitions are: “the technique of representing the real world by a computer program”; “a simulation should imitate the internal processes and not merely the results of the thing being simulated”; “a technique to perform tests using a model written in software”. Why do we use computer simulations? The full answer could be long, but the main reasons are easy to identify, although they depend on the area. Business simulations: modern businesses have to stay competitive by keeping development and training costs and times to a minimum while maintaining high quality in both. Modeling and simulation of systems can support product development and personnel training without the costs usually associated with them; in other words, computer simulations save time and money while remaining comparable in reliability to real-world tests. Educational simulations: they provide students with an intermediate space that joins reality with models or theories. In addition, simulations allow interactive manipulation of the models, which facilitates the students' acquisition of knowledge. The subject of this work is to program and study an example of an educational simulation. More specifically, the simulation we work on is interactive: it is intended to give students a tool they can play with. Students can better understand the mathematical model used to explain the phenomena under study because they can check and observe, interactively, the reality the model represents. Our applet draws a 2D projection of a 3D interactive graph representing two plane waves, one electric and the other magnetic, with the z axis as their direction of propagation. It is an interactive applet because users can change the values of the wave equations through sliders, buttons and combo boxes.
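    Purely as an illustration of the model the abstract describes (not the applet itself), the sketch below samples a pair of in-phase electric and magnetic plane waves propagating along z. The parameters E0, lam and f are hypothetical stand-ins for the applet's slider values, not its actual controls.

```python
# Minimal sketch (not the thesis applet): the coupled E and B plane waves the
# abstract describes, E along x and B along y, both propagating along z.
# Amplitude E0, wavelength lam and frequency f stand in for slider values.
import numpy as np

def plane_waves(z, t, E0=1.0, lam=1.0, f=1.0):
    c = lam * f                      # phase speed implied by lam and f
    k, omega = 2 * np.pi / lam, 2 * np.pi * f
    phase = k * z - omega * t
    E = E0 * np.sin(phase)           # electric field, x component
    B = (E0 / c) * np.sin(phase)     # magnetic field, y component, in phase with E
    return E, B

z = np.linspace(0.0, 4.0, 400)
E, B = plane_waves(z, t=0.25)
print(E[:3], B[:3])
```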

    Platform Embedded Security Technology Revealed

    Get PDF
    Computer science

    Building Programmable Wireless Networks: An Architectural Survey

    Full text link
    In recent times, there have been many efforts to improve the ossified Internet architecture in a bid to sustain unstinted growth and innovation. A major reason for the perceived architectural ossification is the lack of ability to program the network as a system. This situation has resulted partly from historical decisions in the original Internet design, which emphasized decentralized network operations through co-located data and control planes on each network device. The situation for wireless networks is no different, resulting in a lot of complexity and a plethora of largely incompatible wireless technologies. The emergence of "programmable wireless networks", which allow greater flexibility, ease of management and configurability, is a step in the right direction to overcome the aforementioned shortcomings of wireless networks. In this paper, we provide a broad overview of the architectures proposed in the literature for building programmable wireless networks, focusing primarily on three popular techniques: software defined networks, cognitive radio networks, and virtualized networks. This survey is a self-contained tutorial on these techniques and their applications. We also discuss the opportunities and challenges in building next-generation programmable wireless networks and identify open research issues and future research directions.
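    As a rough, hypothetical illustration of the control/data-plane split this survey revolves around (not OpenFlow or any real controller API; the class and field names are assumptions), the sketch below shows a "controller" installing match-action rules that a "switch" then applies to packets.

```python
# Toy illustration of programmability via a control/data-plane split:
# the control plane installs match-action rules; the data plane applies them.
class Switch:
    def __init__(self):
        self.flow_table = []                     # (match_fn, action) pairs, in priority order

    def install_rule(self, match_fn, action):    # called by the control plane
        self.flow_table.append((match_fn, action))

    def forward(self, packet):                   # data plane: first matching rule wins
        for match_fn, action in self.flow_table:
            if match_fn(packet):
                return action
        return "drop"                            # default when no rule matches

sw = Switch()
sw.install_rule(lambda p: p["dst"] == "10.0.0.2", "out:port2")
print(sw.forward({"dst": "10.0.0.2"}), sw.forward({"dst": "10.0.0.9"}))
```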

    An Adaptive Modular Redundancy Technique to Self-regulate Availability, Area, and Energy Consumption in Mission-critical Applications

    Get PDF
    As reconfigurable devices' capacities and the complexity of the applications that use them increase, the need for self-reliance of deployed systems becomes increasingly prominent. A Sustainable Modular Adaptive Redundancy Technique (SMART), composed of a dual-layered organic system, is proposed, analyzed, implemented, and experimentally evaluated. SMART relies upon a variety of self-regulating properties to control availability, energy consumption, and area used in dynamically changing environments that require a high degree of adaptation. The hardware layer is implemented on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) to provide self-repair using a novel approach called a Reconfigurable Adaptive Redundancy System (RARS). The software layer supervises the organic activities within the FPGA and extends the self-healing capabilities through application-independent, intrinsic, evolutionary repair techniques to leverage the benefits of dynamic Partial Reconfiguration (PR). A SMART prototype is evaluated using a Sobel edge detection application. This prototype is shown to provide sustainability under stressful transient and permanent fault injection procedures while still reducing energy consumption and area requirements. An Organic Genetic Algorithm (OGA) technique is shown to be capable of consistently repairing hard faults while maintaining correct edge detector outputs, by exploiting spatial redundancy in the reconfigurable hardware. A Monte Carlo driven Continuous-Time Markov Chain (CTMC) simulation is conducted to compare SMART's availability to industry-standard Triple Modular Redundancy (TMR) techniques. Based on nine use cases, parameterized with realistic fault and repair rates acquired from publicly available sources, the results indicate that availability is significantly enhanced by the adoption of fast repair techniques targeting aging-related hard faults. Under harsh environments, SMART is shown to improve system availability from 36.02% with lengthy repair techniques to 98.84% with fast ones. This value increases to five nines (99.9998%) under relatively more favorable conditions. Lastly, SMART is compared to twenty-eight standard TMR benchmarks generated by the widely accepted BL-TMR tools. Results show that in seven out of nine use cases, SMART is the recommended technique, with power savings ranging from 22% to 29% and area savings ranging from 17% to 24%, while still maintaining the same level of availability.
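    The availability comparison above hinges on assumed fault and repair rates. As a generic illustration of that kind of calculation (not SMART's CTMC model or the BL-TMR tool flow; the rates lam and mu are assumptions), the sketch below estimates steady-state availability for a single repairable unit with exponential failure and repair times.

```python
# Generic availability illustration: Monte Carlo estimate for one repairable
# unit with exponential failure rate lam and repair rate mu.
import random

def estimate_availability(lam, mu, horizon=1e6, seed=0):
    rng = random.Random(seed)
    t, up_time, up = 0.0, 0.0, True
    while t < horizon:
        dwell = rng.expovariate(lam if up else mu)   # time to next failure or repair
        dwell = min(dwell, horizon - t)
        if up:
            up_time += dwell
        t += dwell
        up = not up
    return up_time / horizon

# Closed form is mu / (lam + mu); the simulation should agree closely.
print(estimate_availability(lam=0.001, mu=0.1))      # ~0.99
```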

    1997 Research Reports: NASA/ASEE Summer Faculty Fellowship Program

    Get PDF
    This document is a collection of technical reports on research conducted by the participants in the 1997 NASA/ASEE Summer Faculty Fellowship Program at the Kennedy Space Center (KSC). This was the 13th year that a NASA/ASEE program has been conducted at KSC. The 1997 program was administered by the University of Central Florida in cooperation with KSC. The program was operated under the auspices of the American Society for Engineering Education (ASEE) with sponsorship and funding from the Education Division, NASA Headquarters, Washington, D.C., and KSC. The KSC Program was one of nine such Aeronautics and Space Research Programs funded by NASA in 1997. The NASA/ASEE Program is intended to be a two-year program to allow in-depth research by the university faculty member. The editors of this document were responsible for selecting appropriately qualified faculty to address some of the many problems of current interest to NASA/KSC

    Designing object-oriented interfaces for medical data repositories

    Get PDF
    Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (leaves 113-116). By Patrick J. McCormick. S.B. and M.Eng.

    Internet-based solutions to support distributed manufacturing

    Get PDF
    With globalisation and constant changes in the marketplace, enterprises are adapting themselves to face new challenges. Strategic corporate alliances to share knowledge, expertise and resources therefore represent an advantage in an increasingly competitive world. This has led to the integration of companies, customers, suppliers and partners using networked environments. This thesis presents three novel solutions in the tooling area, developed for Seco Tools Ltd, UK. These approaches implement a proposed distributed computing architecture using Internet technologies to assist geographically dispersed tooling engineers in process planning tasks. The systems are summarised as follows. TTS is a Web-based system to support engineers and technical staff in the task of providing technical advice to clients. Seco sales engineers access the system from remote machining sites and submit/retrieve/update the required tooling data located in databases at the company headquarters. The communication platform used for this system provides an effective mechanism to share information nationwide. The system implements efficient methods, such as data relaxation techniques, confidence scores and importance levels of attributes, to help the user find the closest solutions when specific requirements are not fully matched in the database. Cluster-F has been developed to assist engineers and clients in the assessment of cutting parameters for the tooling process. In this approach the Internet acts as a vehicle to transport the data between users and the database. Cluster-F is a knowledge discovery (KD) approach that makes use of clustering and fuzzy set techniques. The novel proposal in this system is the use of fuzzy set concepts to obtain the proximity matrix that guides the classification of the data; hierarchical clustering methods are then applied to link the closest objects. A general KD methodology applying rough set concepts is also proposed in this research, covering data redundancy, identification of relevant attributes, detection of data inconsistency, and generation of knowledge rules. R-sets, the third proposed solution, has been developed using this KD methodology. This system evaluates the variables of the tooling database to analyse known and unknown relationships in the data generated after the execution of technical trials. The aim is to discover cause-effect patterns from selected attributes contained in the database. A fourth system, DBManager, was also developed to administer the system users' accounts, sales engineers' accounts and the tool trial data monitoring process. It supports the implementation of the proposed distributed architecture and the maintenance of users' accounts for access restrictions to the systems running under this architecture.
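    As a rough sketch of the clustering idea behind Cluster-F as described above (not Seco's actual system; the data values and the membership function are assumptions), the snippet below builds a fuzzy-style proximity matrix from pairwise distances and feeds it to standard hierarchical clustering.

```python
# Illustrative pipeline: fuzzy-style proximity matrix -> hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform

cutting_params = np.array([[200.0, 0.10], [210.0, 0.12],   # e.g. speed, feed (assumed values)
                           [420.0, 0.30], [430.0, 0.28]])

d = pdist(cutting_params / cutting_params.max(axis=0))     # normalised pairwise distances
proximity = 1.0 / (1.0 + squareform(d))                    # membership-like values in (0, 1]

# Hierarchical clustering links the closest objects; use 1 - proximity as distance.
Z = linkage(squareform(1.0 - proximity, checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))              # e.g. [1 1 2 2]
```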

    An Automated procedure for simulating complex arrival processes: A Web-based approach

    Get PDF
    In industry, simulation is one of the most widely used probabilistic tools for modeling highly complex systems. Major sources of complexity include the inputs that drive the logic of the model. Effective simulation input modeling requires the use of accurate and efficient input modeling procedures. This research focuses on nonstationary arrival processes. The fundamental stochastic model on which this study is conducted is the nonhomogeneous Poisson process (NHPP), which has successfully been used to characterize arrival processes where the arrival rate changes over time. Although a number of methods exist for modeling the rate and mean value functions that define the behavior of NHPPs, one of the most flexible is a multiresolution procedure that is used to model the mean value function for processes possessing long-term trends over time or asymmetric, multiple cyclic behavior. In this research, a statistical-estimation procedure for automating the multiresolution procedure is developed that involves the following steps at each resolution level corresponding to a basic cycle: (a) transforming the cumulative relative frequency of arrivals within the cycle to obtain a linear statistical model having normal residuals with homogeneous variance; (b) fitting specially formulated polynomials to the transformed arrival data; (c) performing a likelihood ratio test to determine the degree of the fitted polynomial; and (d) fitting a polynomial of the degree determined in (c) to the original (untransformed) arrival data. Next, an experimental performance evaluation is conducted to test the effectiveness of the estimation method. A web-based application for modeling NHPPs using the automated multiresolution procedure and generating realizations of the NHPP is developed. Finally, a web-based simulation infrastructure that integrates modeling, input analysis, verification, validation and output analysis is discussed.
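    As a hedged illustration of how a fitted rate function can be turned into NHPP realizations (the sinusoidal rate and its parameters below are assumptions, not the paper's multiresolution estimator), the sketch uses the standard Lewis-Shedler thinning method.

```python
# NHPP generation by thinning: propose arrivals from a homogeneous process at
# rate lam_max, accept each with probability rate(t) / lam_max.
import math, random

def nhpp_thinning(rate, lam_max, horizon, seed=0):
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam_max)          # candidate inter-arrival time
        if t > horizon:
            return arrivals
        if rng.random() <= rate(t) / lam_max:  # thinning step
            arrivals.append(t)

rate = lambda t: 5.0 + 4.0 * math.sin(2 * math.pi * t / 24.0)   # assumed daily cycle
print(len(nhpp_thinning(rate, lam_max=9.0, horizon=24.0)))       # roughly 120 arrivals
```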