
    Intellectual Property Rights of Electronic Information in the Age of Digital Convergence

    The laws of intellectual property aim to protect owners of literary, dramatic, musical, and artistic works, as well as designs, innovations, and inventions, from unauthorized use or exploitation by someone else. Though every country has enacted laws to protect the intellectual property of its citizens, many infringements take place, and a majority of them end up in courts of law. Developments in information and communication technologies have made the situation grimmer. This paper briefly explains copyright and the protection of electronic information, its security in a network environment, and copyright provisions for databases, multimedia works, and computer software. The relevant provisions of the European Union, American, and Indian legislative developments, as well as international efforts, are touched upon. The various facets of the Information Technology Act and the recently tabled Communications Convergence Bill are also discussed. Despite all these legislative efforts, a level playing field is still needed for rights owners, publishers, library professionals, and users.

    Recurrence network analysis of design-quality interactions in additive manufacturing

    Powder bed fusion (PBF) additive manufacturing (AM) provides a great level of flexibility in the design-driven build of metal products. However, the more complex the design, the more difficult it becomes to control the quality of AM builds. This quality challenge persistently hampers the widespread application of AM technology. Advanced imaging (e.g., X-ray computed tomography scans and high-resolution optical images) has been increasingly explored to enhance the visibility of information and improve AM quality control. Realizing the full potential of imaging data depends on the advent of information processing methodologies for the analysis of design-quality interactions. This paper presents a design of AM experiment to investigate how design parameters (e.g., build orientation, thin-wall width, thin-wall height, and contour space) interact with quality characteristics in thin-wall builds. Note that the build orientation refers to the position of thin-walls in relation to the recoating direction on the plate, and the contour space indicates the width between rectangle hatches. First, we develop a novel generalized recurrence network (GRN) to represent the AM spatial image data. Then, GRN quantifiers, namely degree, betweenness, PageRank, closeness, and eigenvector centralities, are extracted to characterize the quality of layerwise builds. Further, we establish a regression model to predict how design complexity impacts GRN behaviors in each layer of thin-wall builds. Experimental results show that network features are sensitive to build orientation, width, height, and contour space at the significance level α = 0.05. Thin-walls wider than 0.1 mm printed at orientation 0° are found to yield better quality than those printed at 60° and 90°. Also, thin-walls built at orientation 60° are more sensitive to changes in contour space than those built at the other two orientations. As a result, orientation 60° should be avoided when printing thin-wall structures. The proposed design-quality analysis shows great potential to optimize engineering design and enhance the quality of PBF-AM builds.
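
    As a rough illustration of the quantifier-extraction step, the sketch below builds a recurrence-style network by thresholding pairwise distances between simulated layerwise image features and computes the five centralities named above with networkx. The feature matrix, threshold rule, and summary statistics are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch (not the paper's code): connect image patches whose
# feature vectors recur (fall within a distance threshold), then extract
# the GRN centrality quantifiers named in the abstract.
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
features = rng.standard_normal((60, 8))   # 60 patches x 8 image features (simulated)

# Recurrence rule: link patches whose feature distance is below eps.
D = squareform(pdist(features))
eps = np.quantile(D[np.triu_indices_from(D, k=1)], 0.15)
A = (D < eps) & ~np.eye(len(D), dtype=bool)
G = nx.from_numpy_array(A.astype(int))

quantifiers = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "pagerank": nx.pagerank(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality_numpy(G),
}
# Summarize each quantifier over nodes, e.g., as a layerwise mean feature
# that could feed the regression model described above.
layer_features = {name: float(np.mean(list(vals.values())))
                  for name, vals in quantifiers.items()}
print(layer_features)
```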

    Dirichlet Process Gaussian Mixture Models for Real-Time Monitoring and Their Application to Chemical Mechanical Planarization

    The goal of this work is to use sensor data for online detection and identification of process anomalies (faults). In pursuit of this goal, we propose Dirichlet process Gaussian mixture (DPGM) models. The proposed DPGM models have two novel outcomes: 1) a DP-based statistical process control (SPC) chart for anomaly detection and 2) an unsupervised recurrent hierarchical DP clustering model for the identification of specific process anomalies. The presented DPGM models are validated using numerical simulation studies as well as wireless vibration signals acquired from an experimental semiconductor chemical mechanical planarization (CMP) test bed. Through these numerically simulated and experimental sensor data, we test the hypotheses that DPGM models have significantly lower detection delays than SPC charts in terms of the average run length (ARL1) and higher defect identification accuracies (F-score) than popular clustering techniques, such as mean shift. For instance, the DP-based SPC chart detects a pad wear anomaly in CMP within 50 ms, as opposed to over 140 ms with conventional control charts. Likewise, DPGM models are able to classify different anomalies in CMP.
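
    To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn's Dirichlet process variant of the Bayesian Gaussian mixture: the model is fit to in-control data, and new samples are flagged when their log-likelihood falls below an SPC-style control limit. The simulated features, truncation level, and control-limit percentile are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a Dirichlet process Gaussian mixture fit to
# in-control sensor data, with new samples flagged as anomalous when
# their likelihood falls below an SPC-style control limit.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
normal = rng.normal(0.0, 1.0, size=(500, 3))   # in-control vibration features
faulty = rng.normal(4.0, 1.0, size=(20, 3))    # simulated pad-wear anomaly

dpgm = BayesianGaussianMixture(
    n_components=10,                           # truncation level of the DP
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(normal)

# Control limit: a low percentile of the in-control log-likelihood scores.
scores = dpgm.score_samples(normal)
limit = np.percentile(scores, 0.5)

new_scores = dpgm.score_samples(np.vstack([normal[:5], faulty]))
print(new_scores < limit)                      # True marks a detected anomaly
```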

    The role of low-level image features in the affective categorization of rapidly presented scenes

    It remains unclear how the visual system is able to extract affective content from complex scenes even with extremely brief (< 100 ms) exposures. One possibility, suggested by findings in machine vision, is that low-level features such as unlocalized, two-dimensional (2-D) Fourier spectra can be diagnostic of scene content. To determine whether Fourier image amplitude carries any information about the affective quality of scenes, we first validated the existence of image category differences through a support vector machine (SVM) model that was able to discriminate our intact aversive and neutral images with ~70% accuracy using amplitude-only features as inputs. This model allowed us to confirm that scenes belonging to different affective categories could be mathematically distinguished on the basis of amplitude spectra alone. The next question was whether these same features are also exploited by the human visual system. Subsequently, we tested observers' rapid classification of affective and neutral naturalistic scenes, presented briefly (~33.3 ms) and backward masked with synthetic textures. We tested categorization accuracy across three distinct experimental conditions, using: (i) original images, (ii) images having their amplitude spectra swapped within a single affective image category (e.g., an aversive image whose amplitude spectrum has been swapped with that of another aversive image), or (iii) images having their amplitude spectra swapped between affective categories (e.g., an aversive image containing the amplitude spectrum of a neutral image). Despite its discriminative potential, the human visual system does not seem to use Fourier amplitude differences as the chief strategy for affectively categorizing scenes at a glance. The contribution of image amplitude to affective categorization is largely dependent on interactions with the phase spectrum, although it is impossible to completely rule out a residual role for unlocalized 2-D amplitude measures.
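
    The validation step lends itself to a short sketch: train an SVM on amplitude-only Fourier features and estimate accuracy by cross-validation. The images and labels below are random stand-ins (so accuracy will hover near chance), and the 64x64 size, linear kernel, and preprocessing are assumptions rather than the study's exact setup.

```python
# Rough sketch under stated assumptions (simulated images, not the study's
# stimuli): classify scenes from amplitude-only Fourier features with a
# support vector machine, mirroring the validation step described above.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
images = rng.standard_normal((100, 64, 64))   # stand-ins for scene images
labels = rng.integers(0, 2, 100)              # 0 = neutral, 1 = aversive

# Amplitude-only features: magnitude of the 2-D Fourier transform,
# discarding the phase spectrum entirely.
amplitude = np.abs(np.fft.fft2(images))       # FFT over the last two axes
X = amplitude.reshape(len(images), -1)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, X, labels, cv=5).mean())
```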

    Deep-Learning-Based Multivariate Pattern Analysis (dMVPA): A Tutorial and a Toolbox

    In recent years, multivariate pattern analysis (MVPA) has been hugely beneficial for cognitive neuroscience by making new experiment designs possible and by increasing the inferential power of functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and other neuroimaging methodologies. In a similar time frame, “deep learning” (a term for the use of artificial neural networks with convolutional, recurrent, or similarly sophisticated architectures) has produced a parallel revolution in the field of machine learning and has been employed across a wide variety of applications. Traditional MVPA also uses a form of machine learning, but most commonly with much simpler techniques based on linear calculations. A number of studies have applied deep learning techniques to neuroimaging data, but we believe those have barely scratched the surface of the potential deep learning holds for the field. In this paper, we provide a brief introduction to deep learning for those new to the technique, explore the logistical pros and cons of using deep learning to analyze neuroimaging data – which we term “deep MVPA,” or dMVPA – and introduce a new software toolbox (the “Deep Learning In Neuroimaging: Exploration, Analysis, Tools, and Education” package, DeLINEATE for short) intended to facilitate dMVPA for neuroscientists (and indeed, scientists more broadly) everywhere.
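
    For readers new to the idea, the sketch below shows a dMVPA-style analysis in miniature: a small 1-D convolutional network classifying simulated two-condition EEG trials. This is a generic PyTorch illustration of the concept, not the DeLINEATE toolbox's API; the data, architecture, and injected condition effect are all hypothetical.

```python
# Hypothetical, minimal dMVPA-style sketch (not the DeLINEATE API):
# classify simulated two-condition EEG trials with a small 1-D CNN,
# in contrast to the linear classifiers typical of traditional MVPA.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 128
X = rng.standard_normal((n_trials, n_channels, n_times)).astype(np.float32)
y = rng.integers(0, 2, n_trials)
X[y == 1, :, 60:70] += 0.5        # injected condition effect in one window

model = nn.Sequential(
    nn.Conv1d(n_channels, 16, kernel_size=7), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
Xt, yt = torch.from_numpy(X), torch.from_numpy(y).long()
for epoch in range(50):           # full-batch training on the toy data
    opt.zero_grad()
    loss = loss_fn(model(Xt), yt)
    loss.backward()
    opt.step()
acc = (model(Xt).argmax(1) == yt).float().mean().item()
print(f"training accuracy: {acc:.2f}")
```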

    Rheological, In Situ Printability and Cell Viability Analysis of Hydrogels for Muscle Tissue Regeneration

    Advancements in additive manufacturing have made it possible to fabricate biologically relevant architectures from a wide variety of materials. Hydrogels have garnered increased attention for the fabrication of muscle tissue engineering constructs due to their resemblance to living tissue and their ability to function as cell carriers. However, there is a lack of systematic approaches to screen bioinks based on their inherent properties, such as rheology, printability, and cell viability. Furthermore, this study takes the critical first step toward connecting in-process sensor data with construct quality by studying the influence of printing parameters. Alginate-chitosan hydrogels were synthesized and subjected to a systematic rheological analysis. In situ print layer photography was utilized to identify the optimum printing parameters and also to characterize the fabricated three-dimensional structures. Additionally, the scaffolds were seeded with C2C12 mouse myoblasts to test their suitability for muscle tissue engineering. The results from the rheological analysis and print layer photography led to the development of a set of optimum processing conditions that produced a quality deposit, while the cell viability tests indicated the suitability of the hydrogel for muscle tissue engineering applications.
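
    As one example of the kind of rheological screening described here, the sketch below fits the power-law (Ostwald-de Waele) model η = K·γ̇^(n−1) to a simulated flow sweep; a flow-behavior index n < 1 indicates the shear-thinning behavior generally desired for extrusion bioprinting. The data, parameter values, and noise model are assumptions, not measurements from this study.

```python
# A minimal sketch, assuming shear-thinning bioink behavior: fit the
# power-law model eta = K * gamma_dot**(n - 1) to viscosity data, a
# common first screen for extrusion printability.
import numpy as np
from scipy.optimize import curve_fit

def power_law(gamma_dot, K, n):
    """Apparent viscosity as a function of shear rate (Ostwald-de Waele)."""
    return K * gamma_dot ** (n - 1.0)

# Simulated flow-sweep data for an alginate-chitosan-like ink.
gamma_dot = np.logspace(-1, 2, 20)                  # shear rate, 1/s
eta = power_law(gamma_dot, K=80.0, n=0.35)
eta *= np.exp(np.random.default_rng(4).normal(0, 0.05, eta.size))  # noise

(K_fit, n_fit), _ = curve_fit(power_law, gamma_dot, eta, p0=(10.0, 0.5))
print(f"K = {K_fit:.1f} Pa.s^n, n = {n_fit:.2f}  (n < 1 => shear-thinning)")
```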

    Feedforward control of thermal history in laser powder bed fusion: Toward physics-based optimization of processing parameters

    We developed and applied a model-driven feedforward control approach to mitigate thermally induced flaw formation in the laser powder bed fusion (LPBF) additive manufacturing process. The key idea was to avert heat buildup in an LPBF part before it is printed by adapting process parameters layer-by-layer based on insights from a physics-based thermal simulation model. The motivation was to replace cumbersome empirical build-and-test parameter optimization with a physics-guided strategy. The approach consisted of three steps: prediction, analysis, and correction. First, the temperature distribution of a part was predicted rapidly using a graph theory-based computational thermal model. Second, the model-derived thermal trends were analyzed to isolate layers of potential heat buildup. Third, heat buildup in the affected layers was corrected before printing by adjusting process parameters optimized through iterative simulations. The effectiveness of the approach was demonstrated experimentally on two separate build plates. On the first build plate, termed fixed processing, ten different nickel alloy 718 parts were produced under constant processing conditions. On a second, identical build plate, called controlled processing, the laser power and dwell time for each part were adjusted before printing based on thermal simulations to avoid heat buildup. To validate the thermal model predictions, the surface temperature of each part was tracked with a calibrated infrared thermal camera. Post-process, the parts were examined with non-destructive and destructive materials characterization techniques. Compared to fixed processing, parts produced under controlled processing showed superior geometric accuracy and resolution, finer grain size, increased microhardness, and reduced surface roughness.
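
    The predict-analyze-correct loop can be sketched schematically. In the toy code below, `simulate_layer_temperatures` is a hypothetical placeholder for the graph theory-based thermal model, and the threshold and parameter adjustments are invented for illustration; only the three-step feedforward structure mirrors the description above.

```python
# Schematic only: layerwise feedforward correction driven by a placeholder
# thermal model (not the authors' code or parameter values).
import numpy as np

T_LIMIT = 1000.0                     # assumed heat-buildup threshold, K

def simulate_layer_temperatures(power, dwell):
    """Placeholder thermal model: predicted peak temperature per layer."""
    buildup = np.linspace(0.0, 300.0, len(power))   # heat accumulating with height
    return 600.0 + 0.5 * power - 2.0 * dwell + buildup

n_layers = 50
power = np.full(n_layers, 950.0)     # laser power per layer, W
dwell = np.full(n_layers, 40.0)      # inter-layer dwell time per layer, s
for _ in range(20):                  # iterate simulations until no buildup
    peaks = simulate_layer_temperatures(power, dwell)   # 1. predict
    hot = peaks > T_LIMIT                               # 2. analyze
    if not hot.any():
        break
    power[hot] -= 25.0                                  # 3. correct before printing
    dwell[hot] += 5.0
print(f"layers adjusted: {(power < 950.0).sum()} of {n_layers}")
```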

    Closed-loop control of meltpool temperature in directed energy deposition

    The objective of this work is to mitigate flaw formation in the powder- and laser-based directed energy deposition (DED) additive manufacturing process through closed-loop control of the meltpool temperature. In this work, the meltpool temperature was controlled by modulating the laser power based on feedback signals from a coaxial two-wavelength imaging pyrometer. The utility of closed-loop control in DED is demonstrated in the context of practically inspired trapezoid-shaped stainless-steel parts (SS 316L). We demonstrate that parts built under closed-loop control have reduced variation in porosity and more uniform microstructure compared to parts built under open-loop conditions. For example, post-process characterization showed that closed-loop processed parts had a volume percent porosity ranging from 0.036% to 0.043%. In comparison, open-loop processed parts had a larger variation in volume percent porosity, ranging from 0.032% to 0.068%. Further, parts built with closed-loop processing exhibited a consistent dendritic microstructure. By contrast, parts built with open-loop processing showed microstructure heterogeneity, with the presence of both dendritic and planar grains, which in turn translated to a large variation in microhardness.
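
    A minimal sketch of the control idea, assuming a proportional-integral (PI) law and a toy first-order thermal plant: laser power is modulated so that a simulated meltpool temperature tracks a setpoint from pyrometer-style feedback. The setpoint, gains, and plant model are all assumptions; the abstract does not specify the control law used.

```python
# Not the authors' implementation: a PI-control sketch of laser power
# modulation from meltpool temperature feedback.
import numpy as np

SETPOINT = 1923.0            # target meltpool temperature, K (assumed)
KP, KI, DT = 0.8, 1.0, 0.01  # assumed controller gains and time step, s

def meltpool_response(T, power):
    """Toy first-order plant: temperature rises with power, loses heat."""
    return T + DT * (2.5 * power - 1.2 * (T - 300.0))

T, power, integral = 1500.0, 500.0, 0.0
for step in range(1000):
    error = SETPOINT - T                 # pyrometer-style feedback signal
    integral += error * DT
    power = np.clip(500.0 + KP * error + KI * integral, 0.0, 1000.0)
    T = meltpool_response(T, power)      # plant responds to new power
print(f"final temperature: {T:.0f} K")
```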

    Design Rules for Additive Manufacturing – Understanding the Fundamental Thermal Phenomena to Reduce Scrap

    The goal of this work is to predict the effect of part geometry and process parameters on the direction and magnitude of heat flux in parts made using metal additive manufacturing (AM) processes. As a step toward this goal, the objective of this paper is to develop and apply the mathematical concept of heat diffusion over graphs to approximate the heat flux in metal AM parts as a function of their geometry. Meeting this objective is consequential for overcoming the poor process consistency and part quality in AM. Currently, part build failure rates in metal AM often exceed 20%; the causal reason for this poor part yield is ascribed to the nature of the heat flux in the part. For instance, constrained heat flux causes defects such as warping and thermal stress-induced cracking. Hence, to alleviate these challenges in metal AM processes, there is a need for computational thermal models to estimate the heat flux and thereby guide part design and the selection of process parameters. Compared to moving heat source finite element analysis techniques, the proposed graph-theoretic approach facilitates layer-by-layer simulation of the heat flux within a few minutes on a desktop computer, instead of several hours on a supercomputer.
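
    The underlying idea admits a compact sketch: discretize the part into graph nodes and let temperature evolve under the graph Laplacian, T(t) = exp(−αLt)·T(0). The chain-of-nodes geometry, diffusivity, and initial condition below are toy assumptions, not the paper's actual discretization.

```python
# A minimal sketch of heat diffusion over a graph (assumed toy geometry):
# nodes discretize the part, and temperature evolves as dT/dt = -alpha*L*T,
# solved in closed form with the matrix exponential.
import numpy as np
from scipy.linalg import expm

# Toy part: a 1-D chain of nodes (e.g., successive layer zones), with the
# top node freshly melted and the rest near ambient temperature.
n = 20
A = np.zeros((n, n))
for i in range(n - 1):                  # chain adjacency
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

T0 = np.full(n, 300.0)                  # ambient start, K
T0[-1] = 1500.0                         # freshly melted top layer

alpha, t = 1.0, 0.5                     # assumed diffusivity and elapsed time
T = expm(-alpha * L * t) @ T0           # diffused temperature field
print(T.round(1))
```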