
    A note on Stokes' problem in dense granular media using the μ(I)-rheology

    The classical Stokes' problem describing the fluid motion due to a steadily moving infinite wall is revisited in the context of dense granular flows of mono-dispersed beads using the recently proposed μ(I)-rheology. In Newtonian fluids, molecular diffusion brings about a self-similar velocity profile, and the boundary layer in which the fluid motion takes place increases indefinitely with time $t$ as $\sqrt{\nu t}$, where $\nu$ is the kinematic viscosity. For a dense granular visco-plastic liquid, it is shown that the local shear stress, when properly rescaled, exhibits self-similar behaviour at short-time scales and then rapidly evolves towards a steady-state solution. The resulting shear layer increases in thickness as $\sqrt{\nu_g t}$, analogous to a Newtonian fluid, where $\nu_g$ is an equivalent granular kinematic viscosity depending not only on the intrinsic properties of the granular medium, such as grain diameter $d$, density $\rho$ and friction coefficients, but also on the applied pressure $p_w$ at the moving wall and the (constant) solid fraction $\phi$. In addition, the μ(I)-rheology indicates that this growth continues until reaching the steady-state boundary layer thickness $\delta_s = \beta_w (p_w/\phi \rho g)$, independent of the grain size, at a finite time proportional to $\beta_w^2 (p_w/\rho g d)^{3/2} \sqrt{d/g}$, where $g$ is the acceleration due to gravity and $\beta_w = (\tau_w - \tau_s)/\tau_s$ is the relative surplus of the steady-state wall shear stress $\tau_w$ over the critical wall shear stress $\tau_s$ (yield stress) needed to bring the granular medium into motion... (see article for the complete abstract). Comment: in press (Journal of Fluid Mechanics).
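The abstract's closed-form expressions for the steady-state layer thickness and the time to reach it can be evaluated directly. A minimal sketch follows; the parameter values are illustrative (not taken from the article), and the order-one prefactor in the time scale is taken to be 1:

```python
import math

def steady_layer_thickness(beta_w, p_w, phi, rho, g=9.81):
    """Steady-state shear-layer thickness delta_s = beta_w * p_w / (phi * rho * g)."""
    return beta_w * p_w / (phi * rho * g)

def time_to_steady_state(beta_w, p_w, rho, d, g=9.81):
    """Time scale ~ beta_w^2 * (p_w / (rho * g * d))**1.5 * sqrt(d / g),
    up to an O(1) proportionality constant (taken as 1 here)."""
    return beta_w**2 * (p_w / (rho * g * d))**1.5 * math.sqrt(d / g)

# Illustrative values: 1 mm beads of density 2500 kg/m^3, solid fraction 0.6,
# 1 kPa wall pressure, and a 50% surplus of wall shear stress over yield.
delta_s = steady_layer_thickness(beta_w=0.5, p_w=1000.0, phi=0.6, rho=2500.0)
t_s = time_to_steady_state(beta_w=0.5, p_w=1000.0, rho=2500.0, d=1e-3)
```

For these values the layer saturates at a few centimetres within a fraction of a second; note that delta_s indeed contains no grain-size dependence, while the time scale does.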

    Dynamic Lot Sizing and Scheduling in a Multi-Item Production System

    In this research, algorithms are developed to address the problem of dynamic lot sizing and scheduling in a single-level (or single-operation) production system. This research deviates from previous work in the area in that it does not make the simplifying assumptions about the real-world production system that are normally made to reduce the complexity of the problem. Specifically, this research explicitly considers finite capacity, multiple items, known deterministic dynamic demand, sequence-dependent setup times and setup costs, setup carryover and variable backlogging. The objective is to simultaneously determine the lot size and the sequence of production runs in each period to minimize the sum of setup, inventory, and backlogging costs. The research is motivated by observations of a real-world production system with a highly automated operation and sequence-dependent setup times. For problems of this kind, optimal solution algorithms do not yet exist and, therefore, heuristic solution algorithms are of interest. Two distinct approaches are proposed to address the problem. The first is a greedy approach that eliminates setups while the potential savings are greater than the increase in inventory or backlogging costs incurred. The second approach solves the much easier single-item problem optimally for each item and then adapts the solution to account for capacity constraints. An intelligent modification of the second approach is also tried, in which an overload penalty is applied between successive runs of the single-product optimization algorithm. A common component of each approach is a dynamic programming algorithm implemented to determine the optimal sequence of production within each period and across the scheduling horizon. The addition of sequence-dependent considerations introduces a traveling-salesman-type problem into the lot sizing and sequencing decisions.
The algorithms have been tested over several combinations of demand and inventory-related cost factors. Specifically, the following factors at two levels each have been used: problem size, demand type, utilization, setup cost, backlogging cost, and backlogging limit. The test results indicate that, while the performance of the proposed algorithms appears to be affected by all the factors listed above, overall the regeneration algorithm with overload penalty outperforms all of the other algorithms at all factor-level combinations. In summary, the contribution of this research has been the development of three new algorithms for dynamic lot sizing and scheduling of multiple items in a single-level production system. Through extensive statistical analysis, it has been shown that these algorithms, in particular the regeneration algorithm with overload penalty, outperform conventional scheduling techniques such as no lot sizing and the economic manufacturing quantity.
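The single-item subproblem at the heart of the second approach is classically solved to optimality by the Wagner-Whitin dynamic program. The thesis's exact formulation is not reproduced in the abstract, so the following is a textbook sketch of that uncapacitated, no-backlogging single-item DP; the function name and toy data are illustrative:

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    """Single-item uncapacitated lot sizing via the classic Wagner-Whitin DP.
    demand[t] is the demand in period t, setup_cost is the fixed cost per
    production run, and hold_cost is the cost of carrying one unit for one
    period. Returns (minimum total cost, sorted list of production periods)."""
    T = len(demand)
    best = [0.0] + [float("inf")] * T   # best[t] = min cost to cover periods 0..t-1
    pred = [0] * (T + 1)                # pred[t] = period of the last production run
    for t in range(1, T + 1):
        for j in range(t):              # last run in period j covers periods j..t-1
            carry = sum(hold_cost * (k - j) * demand[k] for k in range(j, t))
            cost = best[j] + setup_cost + carry
            if cost < best[t]:
                best[t], pred[t] = cost, j
    # Walk back through pred to recover the periods with production runs.
    lots, t = [], T
    while t > 0:
        lots.append(pred[t])
        t = pred[t]
    return best[T], sorted(lots)

cost, lots = wagner_whitin([50, 20, 10, 60], setup_cost=100, hold_cost=1)
# Two runs (periods 0 and 3) beat both lot-for-lot and a single big run here.
```

In the heuristic described above, each item's schedule from such a DP would then be adjusted (e.g. via the overload penalty) to respect the shared capacity.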

    Avionics Systems

    ‘Avionics’ systems have, over the decades, grown from simple communication radios and navigation equipment into complex integrated equipment, primarily influenced by the dominance of digital technology. Continuous growth in integrated-circuit technology, functional integration of complete systems on a chip, very-high-speed communication channels and fault-tolerant communication protocols have brought remarkable advancements to avionics systems. Furthermore, mechanical and pneumatic functional blocks are being replaced by digital systems progressively and decisively. New-generation aircraft are being built around powerful avionics assets to provide a stress-free cockpit for the pilot. Defence Science Journal, 2013, 63(2), pp. 129-130, DOI: http://dx.doi.org/10.14429/dsj.63.426

    Role of high resolution ultrasonography of peripheral nerves in leprosy patients

    INTRODUCTION: Leprosy mainly affects the skin and nerves. Involvement of nerves causes serious disabilities and deformities. Clinical assessment of nerves is very subjective; high resolution ultrasonography serves as an important objective method of evaluating peripheral nerves. AIMS AND OBJECTIVES: The aims of this study were to characterize the clinical spectrum of leprosy patients, to assess the peripheral nerves clinically and then by high resolution ultrasound, and to correlate the ultrasound findings with the clinical findings. MATERIALS AND METHODS: In this observational study, 30 newly diagnosed leprosy patients and 30 age- and sex-matched controls were included. Informed written consent was obtained, and a detailed clinical history, thorough clinical examination and routine investigations were completed for each patient. Ultrasound and colour Doppler examinations were performed for both patients and controls. All the data obtained were analysed statistically. RESULTS: Of the 30 patients studied, the male-to-female ratio was 3:2, with a mean age of 34.9 years. Fourteen patients had borderline tuberculoid leprosy, 6 borderline lepromatous, 5 lepromatous and 5 pure neuritic leprosy. The ulnar nerve (62%) was most frequently involved. The nerves were significantly thicker in the leprosy patients, with a higher mean cross-sectional area compared to controls (ulnar: p < 0.005). Colour Doppler showed increased vascularity in 13 nerves of patients with reactions. A positive correlation was observed between clinical nerve thickness and ultrasound findings such as cross-sectional area and echotexture (p < 0.05). CONCLUSION: High resolution ultrasound and colour Doppler examination of the peripheral nerves could be a useful technique in the diagnosis and follow-up of leprosy patients. This study emphasizes the importance of nerve ultrasound as an additional tool in the management of leprosy patients.

    Effect of Neighborhood Approximation on Downstream Analytics

    Nearest neighbor search algorithms have been successful in finding practically useful solutions to computationally difficult problems. In the nearest neighbor search problem, the brute-force approach is often more efficient than other algorithms for high-dimensional spaces. A special case exists for objects represented as sparse vectors, where algorithms take advantage of the fact that an object has a zero value for most features. In general, since exact nearest neighbor search methods suffer from the “curse of dimensionality,” many practitioners use approximate nearest neighbor search algorithms when faced with high dimensionality or large datasets. To a reasonable degree, it is known that relying on approximate nearest neighbors introduces some error into the solutions of the underlying data mining problems the neighbors are used to solve. However, no one has attempted to quantify this error or provide practitioners with guidance in choosing appropriate search methods for their task. In this thesis, we conduct several experiments on recommender systems with the goal of finding the degree to which approximate nearest neighbor algorithms are subject to these types of error propagation problems. Additionally, we provide persuasive evidence on the trade-off between search performance and analytics effectiveness. Our experimental evaluation demonstrates that a state-of-the-art approximate nearest neighbor search method (L2KNNGApprox) is not an effective solution in most cases. When tuned to achieve high search recall (80% or higher), it provides fairly competitive recommendation performance compared to an efficient exact search method but offers no advantage in terms of efficiency (0.1x–1.5x speedup). Low search recall (<60%) leads to poor recommendation performance. Finally, medium recall values (60%–80%) lead to reasonable recommendation performance but are hard to achieve and offer only a modest gain in efficiency (1.5x–2.3x).
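The exact brute-force baseline for sparse vectors, and the search-recall measure the evaluation above relies on, can be sketched as follows. This does not reproduce L2KNNGApprox; the similarity metric (cosine), helper names and toy corpus are illustrative assumptions:

```python
import math

def sparse_cosine(a, b):
    """Cosine similarity of two sparse vectors stored as {feature: value} dicts.
    Only features present in both vectors contribute to the dot product,
    which is what makes the brute-force scan cheap for sparse data."""
    if len(b) < len(a):
        a, b = b, a                     # iterate over the shorter vector
    dot = sum(v * b[k] for k, v in a.items() if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def exact_knn(query, corpus, k):
    """Brute-force exact k-nearest-neighbor search under cosine similarity."""
    ranked = sorted(corpus, key=lambda i: sparse_cosine(query, corpus[i]),
                    reverse=True)
    return ranked[:k]

def search_recall(approx_ids, exact_ids):
    """Fraction of the exact neighbors that an approximate method recovered."""
    return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

corpus = {
    "d1": {"cat": 1.0, "sat": 1.0},
    "d2": {"cat": 1.0, "dog": 1.0},
    "d3": {"dog": 1.0, "ran": 1.0},
}
exact = exact_knn({"cat": 1.0}, corpus, k=2)   # d1 and d2 both contain "cat"
recall = search_recall(["d1", "d3"], exact)    # one of the two exact neighbors found
```

Measuring recall of an approximate neighbor list against this exact baseline is exactly the quantity varied (80%+, 60-80%, <60%) in the experiments described above.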