
    Hidden in full sight: kinship, science and the law in the aftermath of the Srebrenica genocide

    Terms such as "relationship testing," "familial searching" and "kinship analysis" figure prominently in professional practices of disaster victim identification (DVI). However, despite the dependence of those identification technologies on DNA samples from people who might be related to the dead, and despite the prominence of the notion of "relatedness" as a device for identifying the dead, the concepts of "relatedness" and "kinship" remain elusive both in practice and in analyses of the social and ethical aspects of DVI by DNA; they are hidden in full sight. In this article, we wish to bring kinship more to the fore. We achieve this through a case study of a setting where bio-legal framings dominate: the trial at the International Criminal Tribunal for the former Yugoslavia (ICTY) of Radovan Karadžić for the Srebrenica genocide of 1995. DNA samples from the families of those massacred in Srebrenica were vital for the identification of individual victims but are now also utilized as "evidence" by both the prosecution and the defence. By viewing practices of science ("evidence" and "identification") and legal practices ("justice," "prosecution" and "defence") through the lens of kinship studies, we present some alternative and complementary framings for the social accomplishment of "relatedness".

    Stochastic multi-period multi-product multi-objective Aggregate Production Planning model in multi-echelon supply chain

    In this paper, a multi-period multi-product multi-objective aggregate production planning (APP) model is proposed for an uncertain multi-echelon supply chain considering financial risk, customer satisfaction, and human resource training. Three conflicting objective functions and several sets of real constraints are considered concurrently in the proposed APP model. Some parameters of the proposed model are assumed to be uncertain and are handled through a two-stage stochastic programming (TSSP) approach. The proposed TSSP is solved using three multi-objective solution procedures, i.e., the goal attainment technique, the modified Δ-constraint method, and the STEM method. The whole procedure is applied to an automotive resin and oil supply chain as a real case study, wherein the efficacy and applicability of the proposed approaches are illustrated in comparison with the existing experimental production planning method.
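
    As a generic illustration of one of the named procedures (not the paper's actual model or data), the Δ-constraint idea can be sketched on a toy bi-objective linear program: minimize one objective while bounding the other by a sweep of Δ values, tracing out Pareto-efficient points. All numbers below are made up for illustration.

        # Minimal sketch of the epsilon-constraint method on a toy bi-objective LP;
        # the APP model in the paper is far larger and stochastic.
        import numpy as np
        from scipy.optimize import linprog

        # Toy problem: x = production quantities of two products.
        c1 = np.array([4.0, 3.0])        # f1: production cost, to be minimized
        c2 = np.array([-1.0, -1.0])      # f2: negative total delivery (lower = more delivered)
        A_ub = np.array([[2.0, 1.0]])    # shared capacity: 2*x1 + x2 <= 10
        b_ub = np.array([10.0])

        # Sweep the bound (epsilon) on f2 while minimizing f1.
        for eps in np.linspace(-6.0, -2.0, 5):
            res = linprog(c=c1,
                          A_ub=np.vstack([A_ub, c2]),   # add constraint f2 <= eps
                          b_ub=np.append(b_ub, eps),
                          bounds=[(0, None), (0, None)])
            if res.success:
                print(f"eps={eps:5.1f}  f1={res.fun:6.2f}  f2={c2 @ res.x:6.2f}  x={res.x}")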

    On Japanese Minimalism

    Shibumi, a Japanese term referring to a subtle elegance, but at times suggestive of austerity or even bitterness, captures a certain sense of restraint that is reflected in much traditional Japanese design. Although concepts derived from Japanese Zen Buddhism, such as ma, wabi-sabi, and iki, may be more commonly known to English-speaking audiences, this article proposes that shibumi is the more appropriate concept to apply when considering the minimalist nature inherent in much Japanese design. Moreover, this article suggests that shibumi and modernist design tastes may be compatible, despite past suggestions to the contrary. To support this viewpoint, I point to ongoing trends in Japanese design that continue to embrace several of the ideals of twentieth-century modernist design.

    Water System Complexity and the Misuse of Modeling and Optimization


    CAPRI: A Geometric Foundation for Computational Analysis and Design

    CAPRI is a software building tool-kit that refers to two ideas: (1) a simplified, object-oriented, hierarchical view of a solid part integrating both geometry and topology definitions, and (2) programming access to this part or assembly and any attached data. A complete definition of the geometry and application programming interface can be found in the document CAPRI: Computational Analysis PRogramming Interface, appended to this report. In summary, the interface is subdivided into the following functional components:
    1. Utility routines -- These routines include the initialization of CAPRI, loading CAD parts, and querying the operational status, as well as closing the system down.
    2. Geometry data-base queries -- This group of functions allows all top-level applications to figure out and get detailed information on any geometric component in the Volume definition.
    3. Point queries -- These calls allow grid generators, or solvers doing node adaptation, to snap points directly onto geometric entities.
    4. Calculated or geometrically derived queries -- These entry points calculate data from the geometry to aid in grid generation.
    5. Boundary data routines -- This part of CAPRI allows general data to be attached to Boundaries so that the boundary conditions can be specified and stored within CAPRI's data-base.
    6. Tag based routines -- This part of the API allows the specification of properties associated with either the Volume (material properties) or Boundary (surface properties) entities.
    7. Geometry based interpolation routines -- This part of the API facilitates multi-disciplinary coupling and allows zooming through Boundary Attachments.
    8. Geometric creation and manipulation -- These calls facilitate constructing simple solid entities and performing the Boolean solid operations. Geometry constructed in this manner has the advantage that the data is kept consistent with the CAD package; therefore a new design can be incorporated directly and is manufacturable.
    9. Master Model access -- This addition to the API allows for the querying of the parameters and dimensions of the model. The feature tree is also exposed, so it is easy to see where the parameters are applied. Calls exist to allow for the modification of the parameters and the suppression/unsuppression of nodes in the tree. Part regeneration is performed by a single API call, and a new part becomes available within CAPRI (if the regeneration was successful). This is described in a separate document.
    Components 1-7 are considered the CAPRI base-level reader.
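
    As a loose illustration of idea (1) only, the hierarchical Volume/Boundary view with attached data and tags might be modeled as below. The class and field names here are hypothetical placeholders for exposition, not CAPRI's actual data model or entry points (those are defined in the appended interface document).

        # Hypothetical sketch of the hierarchical part view the abstract describes
        # (Volume -> Boundary -> attached data); names are illustrative only.
        from dataclasses import dataclass, field

        @dataclass
        class Boundary:
            name: str
            face_ids: list                                # geometric faces composing the boundary
            attached: dict = field(default_factory=dict)  # boundary-condition data (component 5)
            tags: dict = field(default_factory=dict)      # surface properties (component 6)

        @dataclass
        class Volume:
            name: str
            boundaries: list
            tags: dict = field(default_factory=dict)      # material properties (component 6)

        # A grid generator or solver would walk this structure and, via the real
        # API, snap points onto the underlying geometric entities (component 3).
        wing = Volume("wing", [Boundary("upper-skin", [1, 2]), Boundary("lower-skin", [3, 4])])
        wing.tags["material"] = "Al 2024"
        wing.boundaries[0].attached["bc"] = "no-slip wall"
        print(wing)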

    Visualization of unsteady computational fluid dynamics

    The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamics (CFD) results is a super-computer class machine. Massively Parallel Processors (MPPs) such as the 160-node IBM SP2 at NAS, and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array), provide the required computational bandwidth for CFD calculations of transient problems. Work is in progress on a set of software tools designed specifically to address visualizing 3D unsteady CFD results in these super-computer-like environments. The visualization is executed concurrently with the CFD solver. The parallel version of Visual3, pV3, required splitting up the unsteady visualization task to allow execution across a network of workstations and compute servers. In this computing model, the network is almost always the bottleneck, so much of the effort involved techniques to reduce the size of the data transferred between machines.
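
    The abstract does not detail which reduction techniques were used; as one generic illustration of why extracting a visualization primitive on the compute server shrinks the transfer, compare shipping a full 3D snapshot against a single cut plane (the mesh size and variable count below are assumptions):

        # Generic illustration, not pV3's actual protocol: extracting a 2D cut on
        # the compute server before sending cuts the transfer by roughly the
        # number of planes in the discarded dimension.
        import numpy as np

        nx = ny = nz = 129          # structured-mesh dimensions (assumed)
        nvar = 5                    # primitive variables for a simple 3D Euler solver

        full = np.zeros((nx, ny, nz, nvar), dtype=np.float32)   # one solution snapshot
        cut = full[:, :, nz // 2, :]                            # one z = const cut plane

        print(f"full snapshot: {full.nbytes / 1e6:6.1f} MB")    # ~42.9 MB
        print(f"single cut:    {cut.nbytes / 1e6:6.2f} MB")     # ~0.33 MB
        print(f"reduction:     {full.nbytes // cut.nbytes}x")   # 129x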

    A Geometry Based Infra-structure for Computational Analysis and Design

    The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data was transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even for a complex configuration) using unstructured techniques. (3) One-way communication. All information travels forward from one phase to the next. Until this process can be automated, more complex problems such as multi-disciplinary analysis, or using the above procedure for design, become prohibitive.

    Automated Fluid Feature Extraction from Transient Simulations

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts and streamlines, were interactive and easily abstracted, so they could be represented to the investigator. These tools worked and properly conveyed the collected information, at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not impose a heavy compute burden (the visualization should not significantly slow down the solution procedure in co-processing environments like pV3), and methods must be developed to abstract each feature and display it in a manner that makes physical sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: shocks; vortex cores; regions of recirculation; boundary layers; wakes.
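
    The abstract does not spell out the extraction algorithms; as a minimal, generic example, candidate vortex-core regions can be flagged by thresholding vorticity magnitude computed from the velocity field. This simple criterion is one common heuristic, not necessarily the report's method, and the flow field below is synthetic.

        # Minimal, generic feature-extraction sketch: flag high-vorticity regions
        # of a 2D velocity field as candidate vortex cores.
        import numpy as np

        # Synthetic Gaussian vortex centered in a [0,1]^2 domain.
        n = 64
        x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
        f = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.02)
        u = -(y - 0.5) * f          # velocity components of a decaying swirl
        v = (x - 0.5) * f

        h = 1.0 / (n - 1)
        vorticity = np.gradient(v, h, axis=0) - np.gradient(u, h, axis=1)  # dv/dx - du/dy

        # Flag cells whose vorticity magnitude is within 20% of the peak.
        mask = np.abs(vorticity) > 0.8 * np.abs(vorticity).max()
        print(f"{mask.sum()} of {mask.size} cells flagged as candidate vortex core")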

    Visualization of Unsteady Computational Fluid Dynamics

    The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamics (CFD) results is a super-computer class machine. Massively Parallel Processors (MPPs) such as the 160-node IBM SP2 at NAS, and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster), provide the required computational bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snap-shot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating point words. Navier-Stokes solutions with turbulence models may contain 7 state-variables. (2) Disk speed vs. computational speed. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements of the last decade or two, it is easy to see that depending on disk performance (vs. CPU improvement) may not be the best method for enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read the volume of data; we are running the solver, and the solver outputs the solution. The traditional network interfaces must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work upon a single snap-shot of the data, but visualization tools for transient problems must deal with time.
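
    To make points (1) and (2) concrete, a back-of-the-envelope calculation; the 5-words-per-node figure comes from the abstract, while the mesh size, frame count, and disk bandwidth are assumptions for illustration, not figures from the report.

        # Rough snapshot-size and read-time arithmetic; mesh size, frame count,
        # and disk bandwidth are assumed values, not the report's.
        nodes = 1_000_000        # mesh points (assumed)
        nvar = 5                 # primitive variables, simple 3D Euler solver
        word = 4                 # bytes per single-precision floating point word
        frames = 1_000           # saved time steps (assumed)

        snapshot = nodes * nvar * word
        print(f"one snapshot: {snapshot / 1e6:.0f} MB")           # 20 MB
        print(f"full run:     {snapshot * frames / 1e9:.0f} GB")  # 20 GB

        disk_mb_s = 50.0         # sustained disk read bandwidth (assumed)
        print(f"read one frame: {snapshot / 1e6 / disk_mb_s:.2f} s")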
