32 research outputs found

    3D computational simulation and experimental characterization of polymeric stochastic network materials: case studies in reinforced eucalyptus office paper and nanofibrous materials

    The properties of stochastic fibrous materials like paper and nanowebs depend strongly on the fibers from which the network structure is made. This work contributes to a better understanding of the effect of fiber properties on the network structural properties, using an original 3D fibrous material model with experimental validation, and its application to different fibrous materials used in reinforced Eucalyptus office paper and nanofibrous networks. To establish the relationships between the fiber and the final structural material properties, an experimental laboratory plan was executed for a reinforced fibrous structure, and a physics-based 3D model was developed and implemented. The experimental plan was dedicated to an important Portuguese material: reinforced Eucalyptus-based office paper. Office paper is the principal product of the Portuguese paper industry. This paper is mainly produced from Eucalyptus globulus bleached kraft pulp with a small incorporation of a softwood pulp to increase paper strength. It is important to assess the contribution of reinforcement pulp fibers with different biometry and coarseness to the final paper properties. The two extremes of reinforcement pulps are represented by a Picea abies kraft softwood pulp, usually considered the best reinforcement fiber, and the Portuguese pine Pinus pinaster kraft pulp. Fiber flexibility was determined experimentally using the Steadman and Luner method with a computerized acquisition device. When comparing two reinforcement fibers, information about fiber flexibility and biometry is decisive for predicting paper properties. The values presented correspond to the two extremes of fibers available as reinforcement fibers, regarding wall thickness, beating ability and flexibility. Pinus pinaster has the thickest fiber wall, and consequently it is less flexible than the thinner-walled fibers: Pinus sylvestris and Picea abies.
Experimental results for the evolution of paper properties, such as apparent density, air permeability, tensile and tear strength, together with fiber flexibility for the two reinforcement fibers, constitute valuable information that is also applicable to other reinforcement fibers with fiber wall dimensions in this range. After quantifying the influence of fiber flexibility, we identified it as a key physical property to be included in our structural model. Therefore, we chose to develop a 3D network model that includes fiber bending in the z direction as an important parameter. Fiber flexibility was first included by Niskanen, in a model known as the KCL-Pakka model. We propose an extension of this model, with improvements to the fiber model, as well as an original computational implementation. A simulator was developed from scratch and the results were validated experimentally using handmade laboratory structures made from Eucalyptus fibers (hardwood fibers), and also Pinus pinaster, Pinus sylvestris and Picea abies fibers, which are representative reinforcement fibers. Finally, the model was modified and extended to obtain an original simulator for nanofibrous materials, which is also an important innovation. In the network model developed in this work, the structure is formed by the sequential deposition of fibers, which are modeled individually. The model includes key papermaking fiber properties, such as morphology, flexibility and collapsibility, and process operations such as fiber deposition, network forming and densification. For the first time, the model considers the fiber microstructure level, including lumen and fiber wall thickness, with a resolution down to 0.05 μm for the paper material case and 0.05 nm for the nanofibrous materials. The computational simulation model was used to perform simulation studies.
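The sequential-deposition scheme described above can be sketched as a minimal grid model in the spirit of the KCL-Pakka approach; the grid size, fiber dimensions and the `flexibility` parameter below are illustrative assumptions, not values from this work:

```python
import numpy as np

def deposit_fiber(height, cells, fiber_thickness, flexibility):
    """Drape one fiber over the current surface along a list of grid cells.

    The fiber first rests on the highest point it crosses, then bends down
    toward the surface, with the drop between neighbouring cells capped by
    `flexibility` (a larger value means a more flexible fiber).
    """
    surface = np.array([height[c] for c in cells], dtype=float)
    z = np.empty_like(surface)
    peak = surface.argmax()          # start from the tallest contact point
    z[peak] = surface[peak]
    # relax outwards from the peak, limited by flexibility per lateral step
    for i in range(peak + 1, len(cells)):
        z[i] = max(surface[i], z[i - 1] - flexibility)
    for i in range(peak - 1, -1, -1):
        z[i] = max(surface[i], z[i + 1] - flexibility)
    for c, zi in zip(cells, z):
        height[c] = zi + fiber_thickness
    return height

rng = np.random.default_rng(0)
height = np.zeros((50, 50))          # web height map, arbitrary units
for _ in range(200):                 # deposit 200 straight fibers
    row = rng.integers(0, 50)
    start = rng.integers(0, 30)
    cells = [(row, start + j) for j in range(20)]   # fiber length: 20 cells
    deposit_fiber(height, cells, fiber_thickness=1.0, flexibility=0.5)

print("mean web thickness:", height.mean())
```

A stiffer fiber (smaller `flexibility`) bridges over surface valleys instead of conforming to them, which is the mechanism linking fiber flexibility to apparent density in this kind of model.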
In the case of paper materials, it was used to investigate the relative influence of fiber properties such as fiber flexibility, dimensions and collapsibility. The developed multiscale model gave realistic predictions and enabled us to link fiber microstructure and paper properties. In the case of nanofibrous materials, the 3D network model was modified and implemented for Polyamide-6 electrospun and cellulose nanowebs. The influence of computational fiber flexibility and dimensions was investigated. For the Polyamide-6 electrospun network, experimental results were compared visually with simulation results and similar evolutions were observed. For cellulose nanowebs, the simulation study used literature data to obtain the input information for the nanocellulose fibers. The design of computer experiments was done using a space-filling design, namely the Latin hypercube sampling design, and the simulation results were organized and interpreted using regression trees. Both the experimental characterization and the computational modeling contributed to the study of the relationships between the polymeric fibers and the network structure formed
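The design of the computer experiments mentioned above (a Latin hypercube space-filling design, with regression trees to organize and interpret the results) can be sketched as follows; the input ranges and the toy response standing in for the network simulator are hypothetical:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.tree import DecisionTreeRegressor

# Latin hypercube design over two illustrative fiber inputs:
# flexibility (dimensionless) and wall thickness (um). Ranges are assumptions.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=100)                       # 100 runs in [0, 1)^2
X = qmc.scale(unit, l_bounds=[0.1, 2.0], u_bounds=[1.0, 8.0])

# Stand-in for the network simulator: apparent density rises with fiber
# flexibility and falls with wall thickness (plus noise). NOT the real model.
rng = np.random.default_rng(0)
density = 400 + 300 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 5, 100)

# Regression tree to organize and interpret the simulation results
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, density)
print("R^2 on design points:", tree.score(X, density))
```

The Latin hypercube guarantees that each input dimension is sampled evenly across its range with few runs, and the fitted tree's splits give a readable ranking of which fiber inputs drive the simulated property.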

    Ferramenta computacional para análise de imagens de ensaios mecânicos de dureza

    This article presents a computational tool capable of determining Brinell and Vickers hardness from images of indentations. The tool integrates image processing and analysis algorithms, such as region growing and active contours (snakes). To validate the proposed tool, its results were compared with those obtained by the conventional process for indentations made in ABNT 1020 steel. From this comparison, it can be stated that the developed tool is faster, more intuitive to apply, and less dependent on operator subjectivity

    Processamento e análise de imagem em Engenharia Biomédica

    In this presentation, computational methodologies for the analysis of objects represented in static and dynamic images will be considered. Image processing and analysis methodologies will be described for segmenting (that is, identifying) objects present in images, tracking them over time along image sequences, matching and aligning objects in images, as well as reconstructing the 3D shape of objects from 2D images. Computational solutions will be presented based on: deformable models, such as templates, active models and level-set methods; statistical methods, such as point distribution, shape and active appearance models; stochastic methods, such as Kalman and Unscented Kalman filtering; geometric modeling, in particular considering curvature and distance information; and modal analysis, optimization and dynamic programming techniques. Several examples of the application of these techniques will be presented, including: segmentation of objects in images, tracking of objects in image sequences, matching and alignment of objects in images, and 3D reconstruction of objects from 2D images. Special emphasis will be given to objects represented in medical images

    Análise de Objectos em Imagens: Técnicas e Aplicações

    In this presentation, computational approaches for the analysis of objects represented in static and dynamic images will be considered. Image processing and analysis techniques will be described for segmenting objects present in images, tracking them over time along image sequences, matching and aligning objects in images, as well as reconstructing the 3D shape of objects from 2D images. For object segmentation, techniques based on deformable models will be considered, such as prototypes, active models, point distribution models, shape and active appearance models, and level-set methods. For motion tracking, techniques based on stochastic filtering will be considered, such as the Kalman and Unscented Kalman filters, optimization and management models. To determine the correspondence between objects, techniques based on physical and geometric modeling complemented with optimization procedures will be considered. Regarding the alignment of objects, in both space and time, techniques based on physical and geometric modeling, optimization and dynamic programming will be presented. Finally, for the reconstruction of the 3D shape of objects from 2D images, techniques based on temporal carving, and on the segmentation of 2D contours followed by interpolation of the segmented contours and construction of the respective 3D mesh, will be presented; to obtain 3D information about scenes, passive and active techniques will be presented. In terms of applications, uses of the presented techniques in Engineering, Materials Science, Biomechanics, Bioengineering and Medicine will be presented and discussed

    State-Of-The-Art In Image Clustering Based On Affinity Propagation

    Affinity propagation (AP) is an efficient unsupervised clustering technique, which exhibits fast execution speed and finds clusters with a low error rate. The AP algorithm takes as input a similarity matrix consisting of real-valued similarities between data points. The method iteratively exchanges real-valued messages between pairs of data points until a good set of exemplars emerges. The construction of the similarity matrix based on the Euclidean distance is an important step in the AP process. However, the conventional Euclidean distance, which is the summation of pixel-wise intensity differences, performs below average when applied to image clustering, as it suffers from being sensitive to outliers and even to small deformations in images. Studies should be conducted on approaches from existing investigations, particularly in the field of image clustering with various datasets. Accordingly, a suitable image similarity metric should be investigated to suit the datasets in the image clustering field. In conclusion, changing the similarity matrix will lead to better clustering results

    Paper Structure Formation Simulation

    On the surface, paper appears simple, but closer inspection yields a rich collection of chaotic dynamics and random variables. Predictive simulation of paper product properties is desirable for screening candidate experiments and optimizing recipes, but existing models are inadequate for practical use. We present a novel structure simulation and generation system designed to narrow the gap between mathematical model and practical prediction. Realistic inputs to the system are preserved as randomly distributed variables. Rapid fiber placement (~1 second/fiber) is achieved with probabilistic approximation of chaotic fluid dynamics and minimization of potential energy to determine flexible fiber conformations. The resulting digital packed structures, storable in common formats, return basic properties and provide a flexible platform for subsequent analysis and prediction. Simulated results are validated through comparison with experimental handsheet measurements. Good agreement with thickness measurements is obtained, and possible uses of simulated structures for more enhanced property prediction are discussed
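The "basic properties" returned by a digital packed structure can be illustrated with a small voxel sketch; the thickness and porosity definitions below are simplified assumptions, not necessarily those used by the system described above:

```python
import numpy as np

def sheet_properties(voxels):
    """Basic properties of a digitally packed fiber structure.

    `voxels` is a 3D boolean array (x, y, z) where True marks fiber material.
    Apparent thickness of each (x, y) column is the height of its topmost
    occupied voxel; porosity is the void fraction inside that envelope.
    """
    occupied = voxels.any(axis=2)
    # height (voxel count) up to the topmost occupied layer in each column
    top = np.where(occupied,
                   voxels.shape[2] - np.argmax(voxels[:, :, ::-1], axis=2),
                   0)
    thickness = top.mean()
    solid = voxels.sum()            # material voxels
    envelope = top.sum()            # voxels inside the sheet envelope
    porosity = 1.0 - solid / envelope if envelope else 0.0
    return thickness, porosity

# Toy packed structure: a 10x10 sheet, 4 voxel layers, ~20% voids
rng = np.random.default_rng(0)
voxels = np.zeros((10, 10, 8), dtype=bool)
voxels[:, :, :4] = rng.random((10, 10, 4)) > 0.2
t, p = sheet_properties(voxels)
print(f"mean thickness: {t:.2f} voxels, porosity: {p:.2f}")
```

Any storable packed structure in this voxel form can be fed to the same analysis, which is the "flexible platform for subsequent analysis" idea in miniature.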

    Local Image Patterns for Counterfeit Coin Detection and Automatic Coin Grading

    Coins are an essential part of our lives, and we still use them for everyday transactions. Counterfeiting of coins has always been an issue, but it has become worse with time due to innovation in counterfeiting technology, making detection more difficult. In this thesis, we propose a counterfeit coin detection method that is robust and applicable to all types of coins, whether they carry letters, images, or both. We use two different feature extraction methods. The first is SIFT (Scale Invariant Feature Transform) features, and the second is RFR (Rotation and Flipping invariant Regional Binary Patterns) features, to make our system complete in all aspects and very generic at the same time. The feature extraction methods used here are scale, rotation, illumination and flipping invariant. We concatenate both feature sets and use them to train our classifiers. The two feature sets highly complement each other: SIFT provides the most discriminative features, which are scale and rotation invariant but do not account for spatial structure when clustered, and this is where the second set of features comes into play, as it considers the spatial structure of each coin image. We train SVM classifiers with the two sets of features from each image. The method has an accuracy of 99.61% with both high and low-resolution images. We also took pictures of the coins at 90˚ and 45˚ angles using a mobile phone camera to check the robustness of the proposed method, and we achieved promising results even with these low-resolution pictures. We also address the problem of coin grading, another issue in the field of numismatic studies. The algorithm proposed above is customized for the coin grading problem: it calculates the coin wear and assigns a grade. This grade can be used to remove low-quality coins from the system, which are otherwise sold to coin collectors online for a considerable price. Coin grading is currently done manually by coin experts and is a time-consuming and expensive process. We use digital images and apply computer vision and machine learning algorithms to calculate the wear on the coin and then assign it a grade based on its quality level. Our method calculates the amount of wear on coins, assigns them a label, and achieves an accuracy of 98.5%
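The feature-concatenation and SVM-training step described above can be sketched as follows; the synthetic vectors stand in for the real SIFT and RFR descriptors, and the class separation is artificial:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def fake_features(n, shift):
    """Synthetic stand-ins for the two descriptor sets (not real SIFT/RFR):
    a 64-dim 'SIFT-like' histogram and a 32-dim 'RFR-like' pattern vector,
    concatenated so each coin image is described by both sets jointly."""
    sift_like = rng.normal(shift, 1.0, size=(n, 64))
    rfr_like = rng.normal(-shift, 1.0, size=(n, 32))
    return np.hstack([sift_like, rfr_like])

X = np.vstack([fake_features(100, 0.5),    # "genuine" coin images
               fake_features(100, -0.5)])  # "counterfeit" coin images
y = np.array([0] * 100 + [1] * 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=1.0).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```

The point of the concatenation is that the classifier sees one joint vector per image, so the decision boundary can exploit both descriptor families at once rather than voting between two separate models.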

    Feature-sensitive and Adaptive Image Triangulation: A Super-pixel-based Scheme for Image Segmentation and Mesh Generation

    With the increasing utilization of various imaging techniques (such as CT, MRI and PET) in medical fields, there is often a great need to computationally extract the boundaries of objects of interest, a process commonly known as image segmentation. While numerous approaches have been proposed in the literature for automatic/semi-automatic image segmentation, most of these approaches are based on image pixels. The number of pixels in an image can be huge, especially for 3D imaging volumes, which renders the pixel-based image segmentation process inevitably slow. On the other hand, 3D mesh generation from imaging data has become important not only for visualization and quantification but, more critically, for finite element based numerical simulation. Traditionally, image-based mesh generation follows a procedure such as: (1) image boundary segmentation, (2) surface mesh generation from segmented boundaries, and (3) volumetric (e.g., tetrahedral) mesh generation from surface meshes. These three major steps have commonly been treated as separate algorithms/steps, and hence image information, once segmented, is not considered any more in mesh generation. In this thesis, we investigate a super-pixel based scheme that integrates both image segmentation and mesh generation into a single method, making mesh generation a truly image-incorporated approach. Our method, called image content-aware mesh generation, consists of several main steps. First, we generate a set of feature-sensitive, adaptively distributed points from 2D grayscale images or 3D volumes. A novel image edge enhancement method via randomized shortest paths is introduced as an optional choice to generate the features’ boundary map in the mesh node generation step. Second, a Delaunay triangulation generator (2D) or tetrahedral mesh generator (3D) is then utilized to generate a 2D triangulation or 3D tetrahedral mesh.
The generated triangulation (or tetrahedralization) provides an adaptive partitioning of a given image (or volume). Each cluster of pixels within a triangle (or voxels within a tetrahedron) is called a super-pixel, which forms one of the nodes of a graph and adjacent super-pixels give an edge of the graph. A graph-cut method is then applied to the graph to define the boundary between two subsets of the graph, resulting in good boundary segmentations with high quality meshes. Thanks to the significantly reduced number of elements (super-pixels) as compared to that of pixels in an image, the super-pixel based segmentation method has tremendously improved the segmentation speed, making it feasible for real-time feature detection. In addition, the incorporation of image segmentation into mesh generation makes the generated mesh well adapted to image features, a desired property known as feature-preserving mesh generation
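The super-pixel construction described above (mesh nodes, Delaunay triangulation, pixels grouped by containing triangle) can be sketched as follows; the random node placement is a stand-in for the feature-sensitive, adaptive point generation, and the graph-cut step is omitted:

```python
import numpy as np
from scipy.spatial import Delaunay

# Synthetic 2D grayscale image: a bright disc on a dark background
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)

# Mesh nodes: the image corners plus random interior points (a stand-in
# for feature-sensitive, adaptively distributed point placement)
rng = np.random.default_rng(0)
pts = np.vstack([[(0, 0), (0, w - 1), (h - 1, 0), (h - 1, w - 1)],
                 rng.uniform(0, [h - 1, w - 1], size=(200, 2))])
tri = Delaunay(pts)

# Super-pixels: every pixel belongs to the triangle that contains it
pix = np.column_stack([yy.ravel(), xx.ravel()])
labels = tri.find_simplex(pix)          # triangle index per pixel

# Per-super-pixel mean intensity, the quantity a graph cut would then use
means = np.array([img.ravel()[labels == t].mean()
                  if (labels == t).any() else 0.0
                  for t in range(tri.nsimplex)])
print("triangles:", tri.nsimplex)
```

Note the speed-up argument in miniature: the 4096 pixels collapse to a few hundred super-pixels, so any subsequent graph computation runs on a graph roughly an order of magnitude smaller.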

    Feature extraction from MRI ADC images for brain tumor classification using machine learning techniques

    Diffusion-weighted (DW) imaging is a well-recognized magnetic resonance imaging (MRI) technique that is being routinely used in brain examinations in modern clinical radiology practices. This study focuses on extracting demographic and texture features from MRI Apparent Diffusion Coefficient (ADC) images of human brain tumors, identifying the distribution patterns of each feature and applying Machine Learning (ML) techniques to differentiate malignant from benign brain tumors
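The texture-feature extraction step can be sketched with simple first-order statistics; these features, the synthetic "ADC patches" and the classifier choice are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def texture_features(patch):
    """First-order texture statistics of an image patch: mean, standard
    deviation, skewness, and entropy of the intensity histogram. A simple
    stand-in for the texture features extracted from ADC images."""
    x = patch.ravel().astype(float)
    mean, std = x.mean(), x.std()
    skew = ((x - mean) ** 3).mean() / (std ** 3 + 1e-12)
    hist, _ = np.histogram(x, bins=32)
    p = hist[hist > 0] / hist.sum()
    entropy = -(p * np.log2(p)).sum()
    return [mean, std, skew, entropy]

# Synthetic 16x16 patches: two classes with different intensity texture,
# loosely mimicking distinct ADC distributions of two tumor types
smooth = [texture_features(rng.normal(1200, 30, (16, 16))) for _ in range(60)]
coarse = [texture_features(rng.normal(900, 120, (16, 16))) for _ in range(60)]
X = np.array(smooth + coarse)
y = np.array([0] * 60 + [1] * 60)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Inspecting the per-feature distributions before training, as the study does, tells you which of these statistics actually separate the classes and which are dead weight for the classifier.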