
    A digital computer program for the dynamic interaction simulation of controls and structure (DISCOS), volume 1

    A theoretical development and associated digital computer program system for the dynamic simulation and stability analysis of passive and actively controlled spacecraft are presented. The dynamic system (spacecraft) is modeled as an assembly of rigid and/or flexible bodies not necessarily in a topological tree configuration. The computer program system is used to investigate total system dynamic characteristics, including interaction effects between rigid and/or flexible bodies, control systems, and a wide range of environmental loadings. In addition, the program system is used for designing attitude control systems and for evaluating total dynamic system performance, including time domain response and frequency domain stability analyses.
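
    DISCOS itself is a full multibody simulation code; as a hedged, minimal illustration of the kind of time-domain attitude response it evaluates, the sketch below integrates a single-axis rigid body under a hypothetical PD attitude controller. The inertia value, gains, and initial error are assumptions for illustration and do not come from the report.

```python
# Toy illustration only: single-axis rigid-body attitude response under an
# assumed PD controller. DISCOS handles coupled rigid/flexible multibody
# systems; none of the numbers or symbols below come from that program.
from scipy.integrate import solve_ivp

I_axis = 50.0          # moment of inertia about one axis [kg m^2] (assumed)
kp, kd = 20.0, 60.0    # assumed PD attitude-control gains

def dynamics(t, x):
    theta, omega = x                      # attitude error [rad], rate [rad/s]
    torque = -kp * theta - kd * omega     # PD control torque
    return [omega, torque / I_axis]       # Euler's equation for a single axis

sol = solve_ivp(dynamics, (0.0, 60.0), [0.1, 0.0], max_step=0.1)
print(f"attitude error after 60 s: {sol.y[0, -1]:.2e} rad")
```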

    Bibliometric Perspectives on Medical Innovation using the Medical Subject Headings (MeSH) of PubMed

    Multiple perspectives on the nonlinear processes of medical innovations can be distinguished and combined using the Medical Subject Headings (MeSH) of the Medline database. Focusing on three main branches ("diseases," "drugs and chemicals," and "techniques and equipment"), we use base maps and overlay techniques to investigate the translations and interactions, and thus to gain a bibliometric perspective on the dynamics of medical innovations. To this end, we first analyze the Medline database, the MeSH index tree, and the various options for a static mapping from different perspectives and at different levels of aggregation. Following a specific innovation (RNA interference) over time, the notion of a trajectory which leaves a signature in the database is elaborated. Can the detailed index terms describing the dynamics of research be used to predict the diffusion dynamics of research results? Possibilities are specified for further integration between the Medline database, on the one hand, and the Science Citation Index and Scopus (containing citation information), on the other. Comment: forthcoming in the Journal of the American Society for Information Science and Technology.
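
    One building block of the base maps and overlays described above is a co-occurrence count of MeSH terms across records. The sketch below computes such counts from a few made-up article term lists; the record contents and term choices are placeholders, not Medline data.

```python
# Sketch of one ingredient of MeSH-based mapping: counting how often pairs of
# index terms are assigned to the same record. The records are invented; the
# paper works on the full Medline database and its MeSH tree.
from collections import Counter
from itertools import combinations

records = [
    ["RNA Interference", "Neoplasms", "Gene Silencing"],
    ["RNA Interference", "Gene Silencing", "Drug Delivery Systems"],
    ["Neoplasms", "Drug Delivery Systems"],
]

cooccurrence = Counter()
for mesh_terms in records:
    for a, b in combinations(sorted(set(mesh_terms)), 2):
        cooccurrence[(a, b)] += 1   # symmetric count of joint indexing

for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```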

    Direct Immersogeometric Fluid Flow and Heat Transfer Analysis of Objects Represented by Point Clouds

    Immersogeometric analysis (IMGA) is a geometrically flexible method that enables one to perform multiphysics analysis directly using complex computer-aided design (CAD) models. In this paper, we develop a novel IMGA approach for simulating incompressible and compressible flows around complex geometries represented by point clouds. The point cloud object's geometry is represented using a set of unstructured points in Euclidean space with (possible) orientation information in the form of surface normals. Due to the absence of topological information in the point cloud model, there are no guarantees for the geometric representation to be watertight or 2-manifold or to have consistent normals. To perform IMGA directly using point cloud geometries, we first develop a method for estimating the inside-outside information and the surface normals directly from the point cloud. We also propose a method to compute the Jacobian determinant for the surface integration (over the point cloud) necessary for the weak enforcement of Dirichlet boundary conditions. We validate these geometric estimation methods by comparing the geometric quantities computed from the point cloud with those obtained from analytical geometry and tessellated CAD models. In this work, we also develop thermal IMGA to simulate heat transfer in the presence of flow over complex geometries. The proposed framework is tested for a wide range of Reynolds and Mach numbers on benchmark problems of geometries represented by point clouds, showing the robustness and accuracy of the method. Finally, we demonstrate the applicability of our approach by performing IMGA on large industrial-scale construction machinery represented using a point cloud of more than 12 million points. Comment: 30 pages + references; accepted in Computer Methods in Applied Mechanics and Engineering.
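
    As a hedged sketch of the inside-outside estimation step (not the paper's actual estimator), the snippet below classifies a query point by which side of the nearest oriented cloud point it falls on, using the sign of the dot product with that point's normal. The sphere point cloud is synthetic.

```python
# Hedged sketch of a nearest-neighbor inside/outside test for an oriented point
# cloud: a query is "outside" if it lies on the side the nearest surface normal
# points toward. The paper's estimator and surface integration are more involved.
import numpy as np
from scipy.spatial import cKDTree

def inside_outside(points, normals, queries):
    """points, normals: (N, 3) arrays; queries: (M, 3); returns +1 outside, -1 inside."""
    tree = cKDTree(points)
    _, idx = tree.query(queries)                       # nearest cloud point per query
    to_query = queries - points[idx]                   # vector from surface to query
    side = np.einsum("ij,ij->i", to_query, normals[idx])
    return np.where(side >= 0.0, 1, -1)

# Synthetic data: points on the unit sphere with outward normals.
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(inside_outside(pts, pts.copy(), np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])))
# expected: [-1  1]  (origin inside, distant point outside)
```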

    Mitigating the effect of covariates in face recognition

    Current face recognition systems capture faces of cooperative individuals in a controlled environment as part of the face recognition process. It is therefore possible to control lighting, pose, background, and quality of images. However, in a real-world application, we have to deal with both ideal and imperfect data. The performance of current face recognition systems degrades in such non-ideal and challenging cases. This research focuses on designing algorithms to mitigate the effect of covariates in face recognition.

    To address the challenge of facial aging, an age transformation algorithm is proposed that registers two face images and minimizes the aging variations. Unlike the conventional method, the gallery face image is transformed with respect to the probe face image, and facial features are extracted from the registered gallery and probe face images. The variations due to disguises cause changes in visual perception, alter actual data, make pertinent facial information disappear, mask features to varying degrees, or introduce extraneous artifacts in the face image. To recognize face images with variations due to age progression and disguises, a granular face verification approach is designed which uses a dynamic feed-forward neural architecture to extract 2D log polar Gabor phase features at different granularity levels. The granular levels provide non-disjoint spatial information which is combined using the proposed likelihood ratio based Support Vector Machine match score fusion algorithm. The face verification algorithm is validated using five face databases, including the Notre Dame face database, the FG-Net face database, and three disguise face databases.

    The information in visible spectrum images is compromised due to improper illumination, whereas infrared images provide invariance to illumination and expression. A multispectral face image fusion algorithm is proposed to address the variations in illumination. The Support Vector Machine based image fusion algorithm learns the properties of the multispectral face images at different resolution and granularity levels to determine optimal information and combines them to generate a fused image. Experiments on the Equinox and Notre Dame multispectral face databases show that the proposed algorithm outperforms existing algorithms. We next propose a face mosaicing algorithm to address the challenge due to pose variations. The mosaicing algorithm generates a composite face image during enrollment using the evidence provided by frontal and semiprofile face images of an individual. Face mosaicing obviates the need to store multiple face templates representing multiple poses of a user's face image. Experiments conducted on three different databases indicate that face mosaicing offers significant benefits by accounting for the pose variations that are commonly observed in face images.

    Finally, the concept of online learning is introduced to address the problem of classifier re-training and update. A learning scheme for the Support Vector Machine is designed to train the classifier in online mode. This enables the classifier to update the decision hyperplane in order to account for newly enrolled subjects. On a heterogeneous near infrared face database, a case study using Principal Component Analysis and C2 feature algorithms shows that the proposed online classifier significantly improves the verification performance both in terms of accuracy and computational time.
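
    The thesis fuses match scores from several granularity levels with a likelihood ratio based SVM; the sketch below shows the general shape of SVM score fusion on synthetic genuine/impostor scores. The score distributions, number of granularity levels, and probe values are assumptions, not values from the work.

```python
# Simplified sketch of match-score fusion: an SVM combines scores from several
# granularity levels into one accept/reject decision. Scores are synthetic; the
# thesis uses likelihood-ratio-based fusion of 2D log polar Gabor phase features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
genuine = rng.normal(0.7, 0.1, size=(200, 3))    # scores at three granularity levels
impostor = rng.normal(0.4, 0.1, size=(200, 3))
X = np.vstack([genuine, impostor])
y = np.hstack([np.ones(200), np.zeros(200)])     # 1 = same identity

fusion = SVC(kernel="rbf", probability=True).fit(X, y)
probe_scores = np.array([[0.65, 0.72, 0.58]])    # one hypothetical comparison
print("accept probability:", fusion.predict_proba(probe_scores)[0, 1])
```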

    Previously Unidentified Changes in Renal Cell Carcinoma Gene Expression Identified by Parametric Analysis of Microarray Data

    BACKGROUND. Renal cell carcinoma is a common malignancy that often presents as a metastatic disease for which there are no effective treatments. To gain insights into the mechanism of renal cell carcinogenesis, a number of genome-wide expression profiling studies have been performed. Surprisingly, there is very poor agreement among these studies as to which genes are differentially regulated. To better understand this lack of agreement, we profiled renal cell tumor gene expression using genome-wide microarrays (45,000 probe sets) and compared our analysis to previous microarray studies.

    METHODS. We hybridized total RNA isolated from renal cell tumors and adjacent normal tissue to Affymetrix U133A and U133B arrays. We removed samples with technical defects and removed probe sets that failed to exhibit sequence-specific hybridization in any of the samples. We detected differential gene expression in the resulting dataset with parametric methods and identified keywords that are overrepresented in the differentially expressed genes with the Fisher exact test.

    RESULTS. We identify 1,234 genes that are more than three-fold changed in renal tumors by t-test, 800 of which have not been previously reported to be altered in renal cell tumors. Of the only 37 genes that have been identified as differentially expressed in three or more of five previous microarray studies of renal tumor gene expression, our analysis finds 33 (89%). A key to the sensitivity and power of our analysis is filtering out defective samples and genes that are not reliably detected.

    CONCLUSIONS. The widespread use of sample-wise voting schemes for detecting differential expression that do not control for false positives likely accounts for the poor overlap among previous studies. Among the many genes we identified using parametric methods that were not previously reported as differentially expressed in renal cell tumors are several oncogenes and tumor suppressor genes that likely play important roles in renal cell carcinogenesis. This highlights the need for rigorous statistical approaches in microarray studies. Funding: National Institutes of Health.
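
    The two statistical steps named above, a per-gene parametric test with a three-fold-change filter and a Fisher exact test for keyword overrepresentation, can be sketched as follows on synthetic expression values; the data, spiked genes, and annotation flags are placeholders, not the study's arrays.

```python
# Sketch of the analysis steps described above, on synthetic data:
# (1) per-gene Welch t-test plus a three-fold-change filter between tumor and
# normal samples, (2) a Fisher exact test for keyword overrepresentation.
import numpy as np
from scipy.stats import ttest_ind, fisher_exact

rng = np.random.default_rng(2)
tumor = rng.lognormal(mean=1.0, sigma=0.3, size=(500, 10))   # 500 genes x 10 samples
normal = rng.lognormal(mean=0.8, sigma=0.3, size=(500, 10))
tumor[:50] *= 4.0                                            # spike 50 genes so the sketch finds hits

t_stat, p_val = ttest_ind(tumor, normal, axis=1, equal_var=False)
fold = tumor.mean(axis=1) / normal.mean(axis=1)
hits = (p_val < 0.05) & ((fold >= 3.0) | (fold <= 1.0 / 3.0))
print("differentially expressed genes:", hits.sum())

# Keyword overrepresentation: 2x2 table of (hit vs. non-hit) x (annotated vs. not).
annotated = rng.random(500) < 0.1                            # hypothetical keyword annotation
table = [[int((hits & annotated).sum()), int((hits & ~annotated).sum())],
         [int((~hits & annotated).sum()), int((~hits & ~annotated).sum())]]
odds, pval = fisher_exact(table)
print("Fisher exact p-value:", pval)
```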

    Image Analysis System for Early Detection of Cardiothoracic Surgery Wound Alterations Based on Artificial Intelligence Models

    Funding Information: This work is part of a research project funded by Fundação para a Ciência e Tecnologia, which aims to design and implement a post-surgical digital telemonitoring service for cardiothoracic surgery patients. The main goals of the research project are to study the impact of daily telemonitoring on early diagnosis, to reduce hospital readmissions, and to improve patient safety during the 30-day period after hospital discharge. This remote follow-up involves a digital remote patient monitoring kit which includes a sphygmomanometer, a scale, a smartwatch, and a smartphone, allowing daily patient data collection. One of the daily outcomes was the photographs taken by patients of their surgical wounds. Every day, the clinical team had to analyze the image of each patient, which could take a long time. Automatic analysis of these images would allow implementing an alert for the detection of wound modifications that could represent a risk of infection. Such an alert would spare time for the clinical team in follow-up care. This research has been supported by Fundação para a Ciência e Tecnologia (FCT) under the CardioFollow.AI project (DSAIPA/AI/0094/2020), Lisboa-05-3559-FSE-000003, and UIDB/04559/2020. Publisher Copyright: © 2023 by the authors.

    Cardiothoracic surgery patients are at risk of developing surgical site infections, which cause hospital readmissions, increase healthcare costs, and may lead to mortality. This work aims to tackle the problem of surgical site infections by predicting the existence of worrying alterations in wound images with a wound image analysis system based on artificial intelligence. The developed system comprises a deep learning segmentation model (MobileNet-Unet), which detects the wound region area and categorizes the wound type (chest, drain, and leg), and a machine learning classification model, which predicts the occurrence of wound alterations (random forest, support vector machine, and k-nearest neighbors for chest, drain, and leg, respectively). The deep learning model segments the image and assigns the wound type. Then, color and textural features extracted from the output region of interest feed one of the three wound-type classifiers, which reaches the final binary decision on wound alteration. The segmentation model achieved a mean Intersection over Union of 89.9% and a mean average precision of 90.1%. Separating the final classification into different classifiers was more effective than a single classifier for all the wound types. The leg wound classifier exhibited the best results, with 87.6% recall and 52.6% precision.
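
    After the segmentation model has assigned the wound type and the color/texture features have been extracted, the decision stage simply routes each image to the per-type classifier named above. The sketch below shows that routing on synthetic features; the feature count, training data, and prediction are placeholders, not the study's results.

```python
# Sketch of the classification stage only: features from the segmented region
# of interest are routed to the wound-type-specific classifier. Training data
# are synthetic placeholders, not the study's wound images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X_train = rng.random((90, 16))                  # 16 color/texture features per image (assumed)
y_train = rng.integers(0, 2, size=90)           # 1 = worrying wound alteration

classifiers = {
    "chest": RandomForestClassifier().fit(X_train, y_train),
    "drain": SVC().fit(X_train, y_train),
    "leg":   KNeighborsClassifier().fit(X_train, y_train),
}

wound_type, features = "drain", rng.random((1, 16))   # stand-in segmentation output
print("alteration predicted:", bool(classifiers[wound_type].predict(features)[0]))
```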

    Design and analysis of adaptive noise subspace estimation algorithms

    Ph.D. thesis (Doctor of Philosophy).