10,028 research outputs found
An Intelligent Classification System For Aggregate Based On Image Processing And Neural Network
An aggregate's shape and surface texture strongly influence the strength and structure of the resulting concrete. Traditionally, mechanical sieving and manual gauging are used to determine both the size and shape of the aggregates.
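The abstract does not specify which image features feed the neural network, so as a hedged illustration only: size and shape descriptors such as equivalent circular diameter and aspect ratio are commonly extracted from a segmented particle mask before classification. The function name and the toy mask below are hypothetical.

```python
import numpy as np

def shape_descriptors(mask):
    """Compute simple size/shape features from a binary particle mask.

    mask: 2-D boolean array, True inside the particle.
    Returns (equivalent_diameter, aspect_ratio) in pixel units.
    """
    area = mask.sum()
    # Equivalent circular diameter: diameter of a circle with the same area.
    eq_diam = 2.0 * np.sqrt(area / np.pi)
    # Aspect ratio from the particle's bounding box.
    rows = np.any(mask, axis=1).nonzero()[0]
    cols = np.any(mask, axis=0).nonzero()[0]
    h = np.ptp(rows) + 1
    w = np.ptp(cols) + 1
    aspect_ratio = max(h, w) / min(h, w)
    return eq_diam, aspect_ratio

# Toy example: a 10x20 rectangular "particle".
mask = np.zeros((30, 40), dtype=bool)
mask[5:15, 10:30] = True
d, ar = shape_descriptors(mask)   # d ~ 15.96 px, ar = 2.0
```

In a pipeline like the one described, such per-particle features would replace sieve size classes as the classifier's input.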
Dagstuhl Reports : Volume 1, Issue 2, February 2011
- Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
- Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
- Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
- Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
- Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young
Progress in hydroxyapatite-starch based sustainable biomaterials for biomedical bone substitution applications
Hydroxyapatite is a calcium phosphate widely proposed as a bone substitution material because of its resemblance to the mineral constituents of natural bone. Since hydroxyapatite's properties are mainly adequate for non-load-bearing applications, different solutions are being tested for improving these properties and bringing them closer to the target values of natural bone. On the other hand, starch (a natural and biodegradable polymer) and its blends with other polymers have been proposed as constituents in hydroxyapatite mixtures due to the adhesive, gelling, and swelling abilities of starch particles, useful in preparing well dispersed suspensions and consolidated ceramic bodies. This article presents the perspectives of incorporating starch and starch blends in hydroxyapatite materials. Based on the role of starch within the materials, the review covers its use as (i) a polymeric matrix in hydroxyapatite composites used as adhesives, bone cements, bone waxes, drug delivery devices or scaffolds and (ii) a sacrificial binder for the fabrication of porous hydroxyapatite scaffolds. The suitability of these materials for bone reconstruction has become a reachable aim considering the recent advancements in ceramic fabrication and the current possibilities of controlling the processing parameters.
Harnessing the known and unknown impact of nanotechnology on enhancing food security and reducing postharvest losses : constraints and future prospects
Due to the deterioration of natural resources, low agricultural production, significant postharvest losses, a lack of value addition, and a rapid increase in population, enhancing food security and safety in underdeveloped countries is becoming increasingly difficult. Efforts to incorporate the latest technology are now emanating from scientists globally in order to boost supply and subsequently reduce the gap between demand and supply in food production. Nanotechnology is a unique technology that might increase agricultural output by developing nanofertilizers, employing active pesticides and herbicides, regulating soil features, managing wastewater, and detecting pathogens. It is also suitable for processing food, as it boosts food production with high market value, improves its nutrient content and sensory properties, increases its safety, and improves its protection from pathogens. Nanotechnology can also benefit farmers by helping them decrease postharvest losses through the extension of the shelf life of food crops using nanoparticles. This review presents current data on the impact of nanotechnology on enhancing food security and reducing postharvest losses, alongside the constraints confronting its application. More research is needed to resolve this technology's health and safety issues.
Performance and power comparisons between Fermi and Cypress GPUs
In recent years, modern graphics processing units have been widely adopted in high performance computing to solve large-scale computation problems. The leading GPU manufacturers Nvidia and AMD have introduced series of products to the market. While sharing many similar design concepts, GPUs from these two manufacturers differ in several aspects of their processor cores and memory subsystems. In this work, we conduct a comprehensive study to characterize and compare the architectural features of Nvidia's Fermi and AMD's Cypress GPUs. We first investigate the performance and power consumption of an AMD Cypress GPU. By employing a rigorous statistical model to analyze the execution behaviors of representative general-purpose GPU (GPGPU) applications, we conduct insightful investigations of the target GPU architecture. Our results demonstrate that GPU execution throughput and power dissipation depend on different architectural variables. Furthermore, we design a set of micro-benchmarks to study the power consumption features of different function units on the GPU. Based on those results, we derive instructive principles that can guide the design of power-efficient high performance computing systems. We then shift our focus to the Nvidia Fermi GPU and compare it with the AMD product. Our results indicate that these two products have diverse advantages that are reflected in their performance for different sets of applications. In addition, we also compare the energy efficiencies of the two platforms, since power/energy consumption is a major concern in high performance computing systems.
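The abstract does not disclose the statistical model used; a common approach in this line of work is to regress measured power on per-kernel hardware-counter features. The sketch below is a hedged illustration with entirely synthetic numbers: the feature names (ALU and memory-bandwidth utilization) and measurements are assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical per-kernel measurements: each row is one GPGPU kernel run.
# Columns: [ALU utilization, memory-bandwidth utilization] (fractions 0-1).
X = np.array([
    [0.9, 0.2],
    [0.3, 0.8],
    [0.6, 0.5],
    [0.8, 0.7],
    [0.2, 0.3],
])
# Measured board power in watts for each run (synthetic numbers).
p = np.array([140.0, 150.0, 135.0, 170.0, 100.0])

# Add an intercept column and fit power ~ b0 + b1*alu + b2*mem
# by ordinary least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, p, rcond=None)
pred = A @ coef   # fitted power for each kernel run
```

The fitted coefficients indicate which architectural activity drives power dissipation, which is the kind of dependence the study reports.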
Scheduling, Characterization and Prediction of HPC Workloads for Distributed Computing Environments
As High Performance Computing (HPC) has grown considerably and is expected to grow even more, effective resource management for distributed computing systems is more strongly motivated than ever. As computational workloads grow in quantity, it is becoming more crucial to apply efficient resource management and workload scheduling to use resources efficiently while keeping computational performance reasonably good. The problem of efficiently scheduling workloads on resources while meeting performance standards is hard. Additionally, non-clairvoyance of job dimensions makes resource management even harder in real-world scenarios. Our research methodology investigates the scheduling problem as it arises in HPC and examines the challenges of deploying such scheduling in real-world scenarios using state-of-the-art machine learning and data science techniques. To this end, this Ph.D. dissertation makes the following core contributions: a) We perform a theoretical analysis of space-sharing, non-preemptive scheduling: we studied this scheduling problem and proposed scheduling algorithms with polynomial computation time. We also proved constant upper bounds on the performance of these algorithms. b) We studied the sensitivity of scheduling algorithms to the accuracy of runtime estimates and devised a meta-learning approach to estimate prediction accuracy for newly submitted jobs to the HPC system. c) We studied the runtime prediction problem for HPC applications. For this purpose, we studied the distribution of available public workloads and proposed two different solutions that can predict multi-modal distributions: switching state-space models and mixture density networks. d) We studied the effectiveness of recent recurrent neural network models for CPU usage trace prediction, for individual VM traces as well as aggregate CPU usage traces.
In this dissertation, we explore solutions to improve the performance of scheduling workloads on distributed systems. We begin by looking at the problem from a theoretical perspective. Modeling the problem mathematically, we first propose a scheduling algorithm that finds a constant approximation of the optimal solution in polynomial time. We prove that the performance of the algorithm (average completion time) is a constant approximation of the performance of the optimal schedule. We next look at the problem in real-world scenarios. Considering High-Performance Computing (HPC) environments as the closest real-world equivalent of our mathematical model, we explore the problem of predicting application runtime. We propose an algorithm to handle the uncertainties present in the real world and demonstrate its effectiveness in terms of response time and resource utilization. After addressing the uncertainty problem, we focus on improving the accuracy of existing prediction approaches for HPC application runtime. We propose two solutions, one based on Kalman filters and one based on mixture density networks. We showcase the effectiveness of our prediction approaches by comparing them with previous approaches in terms of prediction accuracy and impact on scheduling performance. Finally, we focus on predicting resource usage for individual applications during their execution, exploring the application of recurrent neural networks to predicting resource usage of applications deployed on individual virtual machines. To validate our proposed models and solutions, we performed extensive trace-driven simulation and measured the effectiveness of our approaches.
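To make the average-completion-time objective above concrete: on a single machine, running jobs in order of increasing runtime (shortest-processing-time first) minimizes average completion time, which is why accurate runtime predictions matter to the scheduler. A minimal sketch, with illustrative job runtimes (not from the dissertation's workloads):

```python
def avg_completion_time(runtimes):
    """Average completion time when jobs run back-to-back in the given order."""
    t, total = 0, 0
    for r in runtimes:
        t += r          # this job finishes at time t
        total += t
    return total / len(runtimes)

jobs = [3, 1, 2]                          # illustrative runtimes
fifo = avg_completion_time(jobs)          # completions 3, 4, 6 -> 13/3
spt = avg_completion_time(sorted(jobs))   # completions 1, 3, 6 -> 10/3
```

If predicted runtimes are wrong, the sorted order degrades toward an arbitrary one, which is the sensitivity-to-prediction-accuracy question the dissertation studies.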
Examining the Relationship Between Lignocellulosic Biomass Structural Constituents and Its Flow Behavior
Lignocellulosic biomass material sourced from plants and herbaceous sources is a promising source of inexpensive, abundant, and potentially carbon-neutral energy. One of the leading limitations of using lignocellulosic biomass as a feedstock for bioenergy products is the flow issues encountered during biomass conveyance in biorefineries. In the biorefining process, the biomass feedstock flows through a variety of conveyance systems. The inherent variability of the feedstock materials, as evidenced by their complex microstructural composition and non-uniform morphology, coupled with the varying flow conditions in the conveyance systems, gives rise to flow issues such as bridging, ratholing, and clogging. These issues slow down the conveyance process, affect machine life, and can lead to partial or even complete shutdown of the biorefinery. Hence, we need to improve our fundamental understanding of biomass feedstock flow physics and mechanics to address the flow issues and improve biorefinery economics.
This dissertation research examines the fundamental relationship between structural constituents of diverse lignocellulosic biomass materials, i.e., cellulose, hemicellulose, and lignin, their morphology, and the impact of the structural composition and morphology on their flow behavior.
First, we prepared and characterized biomass feedstocks of different chemical compositions and morphologies. Then, we conducted our fundamental investigation experimentally, through physical flow characterization tests, and computationally, through high-fidelity discrete element modeling. Finally, we statistically analyzed the relative influence of the properties of lignocellulosic biomass assemblies on flow behavior to determine the most critical properties and the optimum values of flow parameters. Our research provides an experimental and computational framework to generalize findings to a wider portfolio of biomass materials. It will help the bioenergy community to design more efficient biorefining machinery and equipment, reduce the risk of failure, and improve the overall commercial viability of the bioenergy industry.
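The abstract does not list which flow parameters were characterized; two standard bulk flowability indices used in physical flow characterization are the Hausner ratio and the Carr compressibility index, both computed from bulk and tapped density. The sketch below uses illustrative density values, not measurements from this work.

```python
def hausner_ratio(bulk_density, tapped_density):
    """Tapped-to-bulk density ratio; higher values indicate worse flow."""
    return tapped_density / bulk_density

def carr_index(bulk_density, tapped_density):
    """Percent compressibility; higher values indicate worse flow."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

# Illustrative densities (g/cm^3) for a milled biomass sample.
rho_bulk, rho_tapped = 0.20, 0.28
h = hausner_ratio(rho_bulk, rho_tapped)   # 1.4
c = carr_index(rho_bulk, rho_tapped)      # ~28.6 %
```

Hausner ratios above roughly 1.25 are commonly read as poor flowability, the regime where bridging and ratholing become likely.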
Powder bed monitoring via digital image analysis in additive manufacturing
Due to the nature of the Selective Laser Melting (SLM) process, built parts are prone to defect formation. Powder quality has a significant impact on the final attributes of SLM-manufactured items. From a processing standpoint, it is critical to ensure proper powder distribution and compaction in each layer of the powder bed, which is affected by the particle size distribution, packing density, flowability, and sphericity of the powder particles. Layer-by-layer study of the process can provide a better understanding of the effect of the powder bed on final part quality. Image-based processing techniques can be used to examine the quality of parts fabricated by Selective Laser Melting through layerwise monitoring and to evaluate results obtained by other techniques. In this paper, an unsupervised methodology based on digital image processing using the machine's built-in camera is proposed. Given the limitations of the optical system in terms of resolution, positioning, lighting, and field of view, considerable effort was devoted to calibration and data processing. Its capability to identify possible defects on SLM parts was evaluated by verification against computed tomography results.
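The paper does not spell out its image-processing steps; one simple unsupervised building block for layerwise monitoring is flagging pixels whose intensity deviates strongly from the layer's typical level, using robust statistics. The sketch below is an assumption-laden illustration on a synthetic image, not the authors' method.

```python
import numpy as np

def flag_defects(layer, k=3.0):
    """Flag pixels deviating from the layer's typical intensity.

    layer: 2-D float array (grayscale powder-bed image).
    k: threshold in robust standard deviations.
    Returns a boolean mask of candidate defect pixels.
    """
    med = np.median(layer)
    # Robust spread via the median absolute deviation (MAD).
    mad = np.median(np.abs(layer - med)) + 1e-12
    sigma = 1.4826 * mad  # MAD -> std equivalence for Gaussian data
    return np.abs(layer - med) > k * sigma

# Synthetic layer: uniform powder plus a dark streak (e.g., a short feed).
rng = np.random.default_rng(0)
layer = rng.normal(0.6, 0.01, size=(64, 64))
layer[30:34, :] = 0.2   # injected defect rows
mask = flag_defects(layer)
```

In a real pipeline, per-layer masks like this would be aggregated across the build and cross-checked against computed tomography, as the paper does for validation.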