
    Sculplexity: Sculptures of Complexity using 3D printing

    We show how to convert models of complex systems, such as 2D cellular automata, into a 3D printed object. Our method takes into account the limitations inherent to 3D printing processes and materials, and it automates the greater part of the task, bypassing the use of CAD software and the need for manual design. As a proof of concept, a physical object representing a modified forest fire model was successfully printed. Automated conversion methods similar to the ones developed here can be used to create objects for research, for demonstration and teaching, for outreach, or simply for aesthetic pleasure. Because our outputs can be touched, they may be particularly useful for those with visual disabilities. Comment: Free access to article in Europhysics Letters
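    The conversion idea above can be sketched in a few lines: run a 2D automaton, stack each generation as one horizontal layer of a voxel model, and check a printability constraint. This is a minimal illustration only, not the authors' pipeline; the spreading rule and the crude overhang check (every filled voxel must rest on a filled voxel or a diagonal neighbour in the layer below) are assumptions standing in for a real forest-fire model and real printer limits.

    ```python
    def grow(grid):
        """Toy deterministic spreading automaton: a cell is filled if it or any
        4-neighbour was filled in the previous generation (assumed rule, not
        the paper's modified forest-fire model)."""
        n, m = len(grid), len(grid[0])
        return [[1 if any(0 <= i + di < n and 0 <= j + dj < m and grid[i + di][j + dj]
                          for di, dj in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))) else 0
                 for j in range(m)] for i in range(n)]

    def supported(layers):
        """Crude stand-in for 3D-printing overhang limits: every filled voxel
        above the first layer needs a filled voxel in the 3x3 patch below it."""
        for z in range(1, len(layers)):
            below = layers[z - 1]
            for i, row in enumerate(layers[z]):
                for j, v in enumerate(row):
                    if v and not any(0 <= i + di < len(below) and
                                     0 <= j + dj < len(below[0]) and
                                     below[i + di][j + dj]
                                     for di in (-1, 0, 1) for dj in (-1, 0, 1)):
                        return False
        return True

    seed = [[0] * 5 for _ in range(5)]
    seed[2][2] = 1                       # single filled cell in the middle
    layers = [seed]
    for _ in range(3):                   # each generation becomes one printed layer
        layers.append(grow(layers[-1]))
    print(sum(v for layer in layers for row in layer for v in row), supported(layers))
    # → 40 True
    ```

    A real pipeline would then emit the voxel stack as an STL or G-code file; here the point is only that time becomes the vertical print axis and printability is checked layer by layer.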

    Interactive Co-Design of Form and Function for Legged Robots using the Adjoint Method

    Our goal is to make robotics more accessible to casual users by reducing the domain knowledge required to design and build robots. Towards this goal, we present an interactive computational design system that enables users to design legged robots with desired morphologies and behaviors by specifying higher-level descriptions. The core of our method is a design optimization technique that reasons about the structure and motion of a robot in a coupled manner in order to achieve user-specified robot behavior and performance. We are inspired by recent works that also aim to jointly optimize a robot's form and function. However, through efficient computation of the necessary design changes, our approach keeps the user in the loop for interactive applications. We evaluate our system in simulation by automatically improving robot designs for multiple scenarios. Starting with initial user designs that are physically infeasible or inadequate for the user-desired task, we show optimized designs that meet user specifications, all while ensuring an interactive design flow. Comment: 8 pages; added link to the accompanying video
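    The "coupled" optimization of form and function can be illustrated with a deliberately tiny example: one design parameter and one motion parameter, optimized simultaneously against a single behaviour objective. The cost function and target below are invented for illustration, and finite differences stand in for the adjoint-computed gradient the paper uses.

    ```python
    def cost(design, motion, target=2.0):
        """Hypothetical coupled objective: the behaviour (design * motion)
        should hit a user-specified target, with small penalties keeping
        both parameters moderate. Not the paper's model."""
        return (design * motion - target) ** 2 + 0.01 * design ** 2 + 0.01 * motion ** 2

    def joint_descent(design, motion, lr=0.05, steps=500):
        """Gradient descent on design and motion *simultaneously*, mirroring
        the coupled-optimization idea (central differences approximate the
        gradient an adjoint method would compute efficiently)."""
        h = 1e-6
        for _ in range(steps):
            gd = (cost(design + h, motion) - cost(design - h, motion)) / (2 * h)
            gm = (cost(design, motion + h) - cost(design, motion - h)) / (2 * h)
            design -= lr * gd
            motion -= lr * gm
        return design, motion

    d, m = joint_descent(1.0, 1.0)
    print(round(d * m, 2))   # → 1.99, just under the target because of the penalties
    ```

    The practical difference in the paper is speed: the adjoint method delivers these gradients cheaply enough that the design updates feel interactive, which finite differences over many parameters would not.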

    Machine Learning in Predicting Printable Biomaterial Formulations for Direct Ink Writing

    Three-dimensional (3D) printing is emerging as a transformative technology for biomedical engineering. The 3D-printed product can be made patient-specific through customizability and direct control of the architecture. The trial-and-error approach currently used to develop the composition of printable inks is time- and resource-consuming because of the large number of variables involved and the expert knowledge required. Artificial intelligence has the potential to reshape the ink development process by forming a predictive model for printability from experimental data. In this paper, we constructed machine learning (ML) algorithms, including decision tree, random forest (RF), and deep learning (DL), to predict the printability of biomaterials. A total of 210 formulations, comprising 16 different bioactive and smart materials and 4 solvents, were 3D printed, and their printability was assessed. All ML methods were able to learn and predict the printability of a variety of inks based on their biomaterial formulations. In particular, the RF algorithm achieved the highest accuracy (88.1%), precision (90.6%), and F1 score (87.0%), indicating the best overall performance of the three algorithms, while DL had the highest recall (87.3%). Furthermore, the ML algorithms predicted the printability window of biomaterials to guide ink development. The printability map generated with DL has finer granularity than those of the other algorithms. ML has proven to be an effective and novel strategy for developing biomaterial formulations with the desired 3D printability for biomedical engineering applications.
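    The reported metrics are internally consistent and pin each other down: assuming F1 is the standard harmonic mean of precision and recall, the random forest's F1 (87.0%) and precision (90.6%) imply its recall, which can be compared against DL's reported 87.3%. A quick check:

    ```python
    def implied_recall(f1, precision):
        """Invert F1 = 2PR / (P + R) for recall R, given F1 and precision P
        (standard definition assumed)."""
        return f1 * precision / (2 * precision - f1)

    # Reported random-forest scores: precision 90.6%, F1 87.0%
    r = implied_recall(0.870, 0.906)
    print(round(r, 3))   # → 0.837, a few points below DL's reported recall of 0.873
    ```

    This is why RF can lead on accuracy, precision, and F1 while DL still wins on recall: the two models trade false positives against false negatives differently.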

    Machine learning using Multi-Modal Data Predicts the Production of Selective Laser Sintered 3D Printed Drug Products

    Three-dimensional (3D) printing is drastically redefining medicine production, offering digital precision and personalized design opportunities. One emerging 3D printing technology is selective laser sintering (SLS), which is garnering attention for its high precision and compatibility with a wide range of pharmaceutical materials, including low-solubility compounds. However, the full potential of SLS for medicines is yet to be realized, requiring expertise and considerable time-consuming, resource-intensive trial-and-error research. Machine learning (ML), a subset of artificial intelligence, is an in silico tool that is achieving remarkable breakthroughs in several sectors thanks to its ability to make highly accurate predictions. The present study therefore harnessed ML to predict the printability of SLS formulations. Using a dataset of 170 formulations from 78 materials, ML models were developed from inputs that included the formulation composition and characterization data retrieved from Fourier-transform infrared spectroscopy (FT-IR), X-ray powder diffraction (XRPD), and differential scanning calorimetry (DSC). Multiple ML models were explored, including supervised and unsupervised approaches. The results revealed that ML can achieve high accuracies: using the formulation composition alone led to a maximum F1 score of 81.9%, while using the FT-IR, XRPD, and DSC data as inputs resulted in F1 scores of 84.2%, 81.3%, and 80.1%, respectively. A subsequent ML pipeline was built to combine the predictions from FT-IR, XRPD, and DSC into one consensus model, whose F1 score further increased to 88.9%. It was therefore determined, for the first time, that ML predictions of 3D printability benefit from multi-modal data combining numeric, spectral, thermogram, and diffraction data. The study lays the groundwork for leveraging existing characterization data to develop high-performing computational models that accelerate development.
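    A consensus pipeline of the kind described, combining per-modality predictions into one decision, can be sketched as a weighted average of predicted printability probabilities. The abstract does not specify the combination rule, so the equal-weight averaging and 0.5 threshold below are assumptions; real per-modality models stand behind the three input probabilities.

    ```python
    def consensus(prob_ftir, prob_xrpd, prob_dsc, weights=(1/3, 1/3, 1/3)):
        """Combine per-modality printability probabilities into one consensus
        score by weighted averaging (one simple scheme; hypothetical, not
        necessarily the paper's)."""
        score = (weights[0] * prob_ftir +
                 weights[1] * prob_xrpd +
                 weights[2] * prob_dsc)
        return score, score >= 0.5

    # One modality is unsure, two lean 'printable': consensus still says printable.
    score, printable = consensus(0.45, 0.70, 0.65)
    print(round(score, 2), printable)   # → 0.6 True
    ```

    The appeal of such a pipeline is exactly what the reported scores show: no single modality's model reaches the consensus F1 of 88.9%, because each modality's errors are partly independent and averaging cancels some of them.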

    The YAC, May/June 2017

    A newsletter for Iowa library staff who work with youth and children, brought to you by Iowa Library Services.

    Incorporating interactive 3-dimensional graphics in astronomy research papers

    Most research data collections created or used by astronomers are intrinsically multi-dimensional. In contrast, the visual representations of data presented within research papers are exclusively 2-dimensional. We present a resolution of this dichotomy: a novel technique for embedding 3-dimensional (3-d) visualisations of astronomy data sets in electronic-format research papers. Our technique uses the latest Adobe Portable Document Format extensions together with a new version of the S2PLOT programming library. The 3-d models can be easily rotated and explored by the reader and, in some cases, modified. We demonstrate example applications of this technique, including 3-d figures exhibiting subtle structure in redshift catalogues, colour-magnitude diagrams, and halo merger trees; 3-d isosurface and volume renderings of cosmological simulations; and 3-d models of instructional diagrams and instrument designs. Comment: 18 pages, 7 figures, submitted to New Astronomy. For the paper with 3-dimensional embedded figures, see http://astronomy.swin.edu.au/s2plot/3dpd