744 research outputs found

    Methodology for object-oriented real-time systems analysis and design: Software engineering

    Successful application of software engineering methodologies requires an integrated analysis and design life cycle in which the various phases flow smoothly, 'seamlessly', from analysis through design to implementation. Furthermore, different analysis methodologies often lead to different structurings of the system, so the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation but the original specification, and perhaps the high-level design, is not object-oriented. Two approaches to real-time systems analysis that can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, then abstracting objects whose operations or methods correspond to processes in the data-flow diagrams, and designing in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time-behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced and becomes the basic building block of a series of seamlessly connected models which progress from the object-oriented real-time systems analysis logical models through the physical architectural models and the high-level design stages. The methodology is appropriate to the overall specification, including hardware and software modules. In software modules, the systems-analysis objects are transformed into software objects.
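Approach (2) above models each entity as a state machine. A minimal sketch of what such a 'real-time systems-analysis object' might look like in code (the class, state, and event names are hypothetical illustrations, not taken from the methodology itself):

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisObject:
    """A concurrent entity whose time-behavior is defined by a set of
    states and state-transition rules, as in approach (2) above."""
    name: str
    state: str
    # Transition rules: (current_state, event) -> next_state.
    transitions: dict = field(default_factory=dict)

    def handle(self, event: str) -> str:
        """Apply a transition rule; unknown events leave the state unchanged."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# A sensor modeled as an object with its own state-transition behavior.
sensor = AnalysisObject("sensor", "idle", {
    ("idle", "start"): "sampling",
    ("sampling", "data_ready"): "reporting",
    ("reporting", "ack"): "idle",
})
sensor.handle("start")       # -> "sampling"
sensor.handle("data_ready")  # -> "reporting"
```

Because the analysis object already carries its states and transition rules, the later transformation into a software object is largely a matter of attaching implementations to the transitions.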

    The fully kinetic Biermann battery and associated generation of pressure anisotropy

    The dynamical evolution of a fully kinetic, collisionless system with imposed background density and temperature gradients is investigated analytically. The temperature gradient leads to the generation of temperature anisotropy, with the temperature along the gradient becoming larger than that in the perpendicular direction. This causes the system to become unstable to pressure-anisotropy-driven instabilities, dominantly the electron Weibel instability. When both density and temperature gradients are present and non-parallel to each other, we obtain a Biermann-like, linear-in-time magnetic field growth. Accompanying particle-in-cell numerical simulations are shown to confirm our analytical results. Comment: 5 pages, 2 figures, plus supplementary materials (4 pages, 2 figures).
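The linear-in-time growth referred to above is the standard Biermann battery behavior. As a sketch (Gaussian units; assuming static, non-parallel electron density and temperature gradients, with notation chosen here for illustration):

```latex
% Curl of the electron-pressure term in the generalized Ohm's law,
% with p_e = n_e k_B T_e:
\frac{\partial \mathbf{B}}{\partial t}
  = \frac{c\,k_B}{e\,n_e}\,\nabla T_e \times \nabla n_e ,
% so for time-independent gradients the field grows linearly in time:
\mathbf{B}(t) = \frac{c\,k_B}{e\,n_e}\left(\nabla T_e \times \nabla n_e\right) t .
```

The cross product makes explicit why the gradients must be non-parallel: if they are aligned, the source term vanishes and no field is generated.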

    Unmarked Human Burial Site Policy in Louisiana: Pre-Columbian Context and Community Perspectives

    Since the passing of the Native American Graves Protection and Repatriation Act of 1990 (NAGPRA), state governments have implemented similar policies that allow for Native American tribes without federal recognition to petition for the repatriation of human remains and objects significant to their culture (Seidemann, 2010). Per La. R.S. 8:671-681, which is the Louisiana Unmarked Human Burial Sites Preservation Act of 1992 (UBA), the Division of Archaeology in the Louisiana State Office of Cultural Development is responsible for overseeing the protection and preservation of unmarked burials. These burials are often of pre-Columbian or historic cultural and temporal context, which warrants consultation and collaboration with associated descendant communities regarding their disposition. Research shows that collaboration with Native American communities in archaeological investigations ensures ethical research practice and fosters a more holistic repatriation process and treatment of human remains regardless of ethnicity, culture, or date of interment (Colwell-Chanthaphonh & Ferguson, 2004; Colwell-Chanthaphonh et al., 2010). This thesis documents perspectives on the UBA of Native American communities in Louisiana to evaluate its effectiveness in preserving unmarked burials in pre-Columbian context, and to provide an opportunity for critical feedback on the UBA regulatory process. Qualitative data analysis of semi-structured interviews with tribal representatives was employed to document these perspectives. Supplemental data, including records of permits issued in accordance with the UBA, were analyzed to assess permit usage of the UBA since 2010. Results showed that Native American communities who participated in interviews were overall content with consultation and collaboration regarding unmarked burial sites. 
However, their outlook for long-term protection and preservation was dim due to the lack of control tribes have over sites situated on private property that are covered under the UBA. Concerning the permits, only 2 of the 18 permits issued since 2010 involve burials in a pre-Columbian context, demonstrating that the UBA has been applied more frequently in historical contexts. This research found that tribes need more legal control over unmarked burials in order to properly preserve sacred sites.

    Magnetic islands in the heliosheath: Properties and implications

    In the heliosheath there are sectors of magnetic fields separated by current sheets thinner than the ion inertial length and thus subject to the tearing instability. This instability allows the development of magnetic islands that grow due to magnetic reconnection. Using PIC (particle-in-cell) simulations, we show that these islands are relevant because they quickly grow to fill up the space between the sectors and in the meanwhile generate temperature anisotropies, accelerate particles, and drive instabilities based on these anisotropies. The plasma β (the ratio of the plasma pressure to the magnetic pressure) of a system can have a large effect on its dynamics, since high β enhances the effects of pressure anisotropies. In our PIC simulations, we investigate a system of stacked current sheets that break up into magnetic islands due to magnetic reconnection, which is analogous to the compressed heliospheric current sheet in the heliosheath. We find that for high β, and for realistic ion-to-electron mass ratios, only highly elongated islands reach finite size. The anisotropy within these islands prevents full contraction, leading to a final state of highly elongated islands in which further reconnection is suppressed. There is evidence that these elongated islands are present in the heliosheath. We perform a scaling study of the growth of magnetic islands versus the system size. We thus determine that the islands, although reaching a final elongated state, can continue growing via the merging process until they reach the sector width. The islands achieve this size in much less time than it takes for the islands to convect through the heliosheath. We also find that the electron heating in our simulations has a strong β dependence. Particles are dominantly heated through Fermi reflection in contracting islands during island growth and merging.
However, electron anisotropies support the development of a Weibel instability which impedes the Fermi acceleration of the electrons. In the heliosheath, we predict that energization of particles in general is limited by interaction with anisotropy instabilities such as the firehose instability, and by the Weibel instability for electrons in particular.
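For reference, the plasma β defined above can be computed directly from the definition. A minimal sketch in SI units (the parameter values below are illustrative placeholders, not heliosheath measurements):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability [T m / A]
KB = 1.380649e-23          # Boltzmann constant [J / K]

def plasma_beta(n, T, B):
    """Ratio of thermal pressure n*kB*T to magnetic pressure B^2 / (2*mu0)."""
    return (n * KB * T) / (B**2 / (2 * MU0))

# Illustrative values: number density [m^-3], temperature [K], field [T].
beta = plasma_beta(n=1e5, T=1e5, B=1e-10)  # beta >> 1: a high-beta plasma
```

In the high-β regime computed here, thermal pressure dominates, which is the condition under which the abstract reports that pressure-anisotropy effects control island contraction.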

    The Pimsletter on Business Strategy. Unions and Profits, 1978

    Statistical analysis of return on investment and productivity in a union environment.

    Design of object-oriented distributed simulation classes

    Distributed simulation of aircraft engines as part of a computer-aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, the parallel executing processors must communicate, which in turn implies a need for synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out, with the result that the communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented.
Its application to realistic configurations has not yet been carried out.
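The actor-plus-connector pattern described above can be sketched as message-passing objects. This is an illustrative reconstruction under assumed names (`Actor`, `Sink`, `connect` are hypothetical, not the NPSS classes):

```python
import queue
import threading

class Actor(threading.Thread):
    """Minimal actor: owns a mailbox, processes messages, and forwards
    results through dynamically created 'connectors'."""
    def __init__(self, compute):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()
        self.compute = compute
        self.connectors = []  # downstream links, created at execution time

    def connect(self, target):
        # Swapping `target` for a remote proxy would distribute the
        # simulation across machines without changing this code.
        self.connectors.append(target)

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:  # sentinel: shut down
                break
            result = self.compute(msg)
            for target in self.connectors:
                target.mailbox.put(result)

class Sink:
    """Terminal endpoint that just collects results."""
    def __init__(self):
        self.mailbox = queue.Queue()

sink = Sink()
doubler = Actor(lambda x: 2 * x)   # stand-in for an engine-component module
doubler.connect(sink)
doubler.start()
doubler.mailbox.put(21)
doubler.mailbox.put(None)
doubler.join()
result = sink.mailbox.get(timeout=1)  # 42
```

The point of the connector indirection, as the abstract notes, is that the wiring between components is established at execution time, so the same component code runs whether its neighbors are local or on other machines.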

    The Application of New Software Technology to the Architecture of the National Cycle Program

    As part of the Numerical Propulsion System Simulation (NPSS) effort of NASA Lewis in conjunction with the United States aeropropulsion industry, a new system simulation framework, the National Cycle Program (NCP), capable of combining existing empirical engine models with new detailed component-based computational models, is being developed. The software architecture of the NCP involves a generalized object-oriented framework and a base set of engine component models, along with supporting tool kits, which will support engine simulation in a distributed environment. As the models are extended to two and three dimensions, the computing load increases rapidly, and it is intended that this load be distributed across multiple workstations executing concurrently in order to get acceptably fast results. The research carried out was directed toward performance analysis of the distributed object system; more specifically, toward evaluating the performance of the actor-based distributed object design created earlier. To this end, the research was directed toward the design and implementation of suitable performance-analysis techniques and software to demonstrate those techniques. There were three specific results, reported in two NASA Technical Memoranda submitted separately: (1) Design, implementation, and testing of a performance-analysis program for a set of active objects (actor-based objects) which allowed the individual actors to be assigned to arbitrary processes on an arbitrary set of machines. (2) Development of a nearest-neighbor approximation, motivated by the fundamental limitation of the global-balance-equation approach that the number of equations increases exponentially with the number of actors. Unlike many approximate approaches to this problem, the nearest-neighbor approach allows checking of the solution and an estimate of the error. The technique was demonstrated in a prototype analysis program as part of this research, and the results of the program were checked against the global-balance solution discussed above. Late in the grant period, a much better approximation was developed, as discussed in result (3) below; as a consequence, a proposal was submitted to continue the research by developing the new approximation, including development of a complete program from the prototype. (3) The source of approximation in the nearest-neighbor algorithm is the requirement to estimate some joint probabilities from marginal distributions; a completely ad hoc estimate was used in the prototype.
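The exponential blow-up cited in result (2) is easy to make concrete: the global-balance approach solves the stationary equations of the joint Markov chain over all actors' states, one balance equation per joint state. A toy sketch (function names are hypothetical, not the grant's programs):

```python
def stationary_two_state(a, b):
    """Global-balance solution pi*Q = 0, sum(pi) = 1 for a two-state chain
    with transition rates a (state 0 -> 1) and b (state 1 -> 0).
    Balance requires pi0 * a = pi1 * b, giving pi = (b, a) / (a + b)."""
    total = a + b
    return (b / total, a / total)

def global_balance_equation_count(num_actors, states_per_actor):
    """One balance equation per joint state of all actors, so the count
    grows exponentially with the number of actors."""
    return states_per_actor ** num_actors

pi = stationary_two_state(1.0, 3.0)             # -> (0.75, 0.25)
n_eqs = global_balance_equation_count(10, 3)    # -> 59049 equations
```

A single actor is trivial to solve exactly; ten three-state actors already require tens of thousands of coupled equations, which is why an approximation such as the nearest-neighbor approach becomes necessary.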