
    Topology Optimization with Text-Guided Stylization

    We propose an approach for the generation of topology-optimized structures with text-guided appearance stylization. This methodology aims to enrich the concurrent design of a structure's physical functionality and aesthetic appearance. Users can effortlessly input descriptive text to govern the style of the structure. Our system employs a hash-encoded neural network as the implicit structure representation backbone, which serves as the foundation for the co-optimization of structural mechanical performance, style, and connectivity, to ensure full-color, high-quality 3D-printable solutions. We substantiate the effectiveness of our system through extensive comparisons, demonstrations, and a 3D printing test.
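
    The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the kind of hash-encoded implicit representation it describes: a single-level (rather than multiresolution) trainable hash table maps 3D coordinates to features, which a small MLP decodes into a material density (for topology) and an RGB color (for style). All sizes and names are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (hypothetical sizes): a hash-encoded implicit field that maps
# 3D points to a material density and a per-point RGB style color.
import torch
import torch.nn as nn

class HashEncodedField(nn.Module):
    def __init__(self, table_size=2**16, feat_dim=4, res=64, hidden=64):
        super().__init__()
        self.table = nn.Embedding(table_size, feat_dim)  # trainable hash table
        self.res = res
        self.table_size = table_size
        self.primes = torch.tensor([1, 2654435761, 805459861])  # spatial-hash primes
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density channel + 3 RGB channels
        )

    def forward(self, xyz):                       # xyz in [0, 1]^3, shape (N, 3)
        idx = (xyz * self.res).long()             # voxel indices at grid resolution
        h = (idx * self.primes).sum(-1) % self.table_size  # spatial hash
        out = self.mlp(self.table(h))
        density = torch.sigmoid(out[:, :1])       # material occupancy in [0, 1]
        color = torch.sigmoid(out[:, 1:])         # per-point RGB style
        return density, color

field = HashEncodedField()
pts = torch.rand(1024, 3)
rho, rgb = field(pts)   # both differentiable w.r.t. the hash table and the MLP
```

    Because the whole field is differentiable, mechanical, style, and connectivity losses can in principle all be backpropagated into the same hash table, which is what makes the co-optimization described above possible.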

    FPGA Implementation of Blob Recognition

    Real-time embedded vision systems are used in a wide range of applications, and demand for them has been increasing. In this thesis, an FPGA-based embedded vision system capable of recognizing objects in real time is presented. The proposed architecture consists of multiple Intellectual Property (IP) cores, which are used as a set of complex instructions by an integrated 32-bit MicroBlaze CPU. Each IP core is tailored to the needs of the application while consuming minimal FPGA logic resources. By integrating both hardware and software on a single FPGA chip, the system achieves real-time full-VGA video processing at 32 frames per second (fps). In addition, this work introduces a new method, Dual Connected Component Labelling (DCCL), suited to FPGA implementation.
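
    The DCCL method itself is not detailed in the abstract, so as background the sketch below shows only the classical two-pass connected component labelling algorithm (with union-find) on which blob recognition is conventionally built; DCCL is an FPGA-oriented variant of this idea, not this code.

```python
# Classical two-pass connected-component labelling, 4-connectivity:
# pass 1 assigns provisional labels and records equivalences in a
# union-find forest; pass 2 resolves each pixel to its root label.
import numpy as np

def label_components(img):
    """Label the foreground (nonzero) pixels of a 2D binary image."""
    labels = np.zeros(img.shape, dtype=np.int32)
    parent = [0]                              # union-find; index 0 = background

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    next_label = 1
    h, w = img.shape
    for y in range(h):                        # pass 1
        for x in range(w):
            if not img[y, x]:
                continue
            left = labels[y, x - 1] if x > 0 else 0
            up = labels[y - 1, x] if y > 0 else 0
            if left == 0 and up == 0:         # new component
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            elif left and up:                 # merge the two equivalence classes
                lo, hi = sorted((find(left), find(up)))
                parent[hi] = lo
                labels[y, x] = lo
            else:
                labels[y, x] = left or up
    for y in range(h):                        # pass 2
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

blob = np.array([[1, 1, 0, 1],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]])
print(label_components(blob))   # two blobs -> two distinct labels
```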

    An automated rule-based visual printed circuit board inspection system using mathematical morphology image processing algorithms

    Ankara: Department of Electrical and Electronics Engineering, Institute of Engineering and Science, Bilkent University, 1990. Thesis (Master's), Bilkent University, 1990. Includes bibliographical references (leaves 122-125). In this thesis, the design and implementation of an automated rule-based visual printed circuit board (PCB) inspection system are presented. The system makes use of image processing algorithms based on mathematical morphology and is designed to detect PCB defects related to the conducting structures on the board. For this purpose, four new algorithms are designed, three of which are defect detection algorithms, and an existing algorithm is modified for implementation in the system. The defect detection algorithms verify, respectively, the minimum conductor trace width, the minimum land width, and the minimum conductor trace spacing requirements on digital binary PCB images. A prototype system has been implemented in our image processing laboratory and the necessary computer programs have been developed. These programs control the image processor and apply the defect detection algorithms to discrete binary PCB test images. Oğuz, Seyfullah Halit (M.S.)
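
    As an illustration of this kind of morphology-based rule checking (a hedged sketch, not the thesis's algorithms), the code below implements a minimum-trace-width check on a binary conductor mask with SciPy: erosion by a structuring element of the minimum allowed size removes any trace thinner than the rule, and dilating the survivors back identifies the violating pixels. The structuring-element size and the toy board are illustrative.

```python
# Morphological minimum-width check: a trace survives erosion by a
# (2r+1)x(2r+1) square only where it is at least that wide; dilating the
# eroded core back and differencing flags the too-thin conductor pixels.
import numpy as np
from scipy import ndimage

def width_violations(conductors, min_halfwidth=2):
    """Mask of conductor pixels thinner than 2*min_halfwidth+1 pixels."""
    size = 2 * min_halfwidth + 1
    square = np.ones((size, size), dtype=bool)     # structuring element
    core = ndimage.binary_erosion(conductors, structure=square)
    wide_enough = ndimage.binary_dilation(core, structure=square) & conductors
    return conductors & ~wide_enough               # pixels no wide trace covers

pcb = np.zeros((20, 20), dtype=bool)
pcb[5:10, 2:18] = True     # a 5-px-wide trace: passes the 5-px rule
pcb[15, 2:18] = True       # a 1-px-wide trace: flagged entirely
print(width_violations(pcb).sum())   # 16 violating pixels
```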

    A two-step deep learning workflow for pulmonary embolism segmentation and classification

    Advisor: Lucas Ferrari de Oliveira. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 14/12/2021. Includes references: p. 57-60. Area of concentration: Computer Science.
    Abstract: Pulmonary embolism, a clot lodged in the pulmonary arterial vasculature, is one of the leading causes of death worldwide; according to Datasus, 22% of patients hospitalized for pulmonary embolism die. Establishing the right treatment and reducing the associated mortality requires a fast diagnosis, and computed tomography is the most widely used diagnostic modality owing to its acquisition speed, the broad availability of the equipment, and its high diagnostic accuracy. A computed tomography exam comprises hundreds of images that each require a radiologist's attention; given this volume of data, the analysis can be tiresome and fatigue can lead to diagnostic errors, and pulmonary embolism remains among the most frequently misdiagnosed diseases. Over the years, computer-aided diagnosis systems have been developed to help radiologists spot missed clots, and such systems have proven to be of great aid to faster and more effective diagnosis. Deep learning has become one of the most discussed topics in computer vision, especially in medical imaging and in image detection and recognition. This also holds for pulmonary embolism diagnosis: some works achieve state-of-the-art results by applying complex neural network models that identify clots in whole tomography exams while suppressing potential false positives. The purpose of this work is to develop a deep learning application capable of finding pulmonary embolisms in computed tomography volumes; the robustness of the model should allow it to detect clots in exams from different sources. If successful, the algorithm produced in this work will help radiologists reach a fast diagnosis with a high probability of success. Preliminary tests already show that a deep learning architecture can discriminate pulmonary embolisms: on a public dataset of lung computed tomography images, the network found several clots. Of 35 exams in total, 28 were used to train the model and validate its results, with hyperparameters tuned accordingly; the remaining 7 exams were used for testing, simulating how the system behaves on unseen data. The model achieved a Dice score of 0.81 and an accuracy of 84%; although these results are already good, there is still room for improvement, since many known optimization methods can still be applied to the architecture.
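
    For reference, the Dice score reported above (0.81 on the 7 test exams) measures the overlap between the predicted and reference clot masks; a minimal sketch with synthetic masks:

```python
# Dice coefficient: twice the intersection of prediction and ground truth,
# divided by their total size; 1.0 is perfect overlap, 0.0 is none.
import numpy as np

def dice(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool);  pred[24:44, 20:40] = True   # shifted 4 px
print(round(dice(pred, truth), 3))   # 0.8: 16 of 20 rows overlap
```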

    Liquid Water Transport in the Reactant Channels of Proton Exchange Membrane Fuel Cells

    Water management has been identified as a critical issue in the development of PEM fuel cells for automotive applications. Water is present inside the PEM fuel cell in three phases: liquid, vapor, and mist. Liquid water in the reactant channels causes flooding of the cell and blocks the transport of reactants to the reaction sites at the catalyst layer. Understanding the behavior of liquid water in the reactant channels would allow us to devise improved strategies for removing it. In situ fuel cell tests have been performed to identify and diagnose operating conditions which result in flooding of the fuel cell, and a relationship has been identified between the liquid water present in the reactant channels and the cell performance. A novel diagnostic technique has been established which utilizes the pressure drop multiplier in the reactant channels to predict flooding of the cell or drying-out of the membrane. An ex-situ study has been undertaken to quantify the liquid water present in the reactant channels. A new parameter, the Area Coverage Ratio (ACR), has been defined to identify the interfacial area of the reactant channel which is blocked for reactant transport by the presence of liquid water. A parametric study has been conducted on the effect of temperature and inlet relative humidity on the ACR. The ACR decreases with increasing current density, as the higher gas flow rates remove water more efficiently. With increasing temperature the ACR decreases rapidly, such that by 60°C there is no significant ACR to report. The inlet relative humidity of the gases changes their saturation in the channel, but showed no significant effect on the ACR. Automotive powertrains, the target application of this work, are continuously faced with transient changes. Water management under transient operating conditions is significantly more challenging and has not been investigated in detail. This study begins to investigate the effects of changing operating conditions on liquid water transport through the reactant channels. It has been identified that rapidly increasing temperature leads to dry-out of the membrane, while rapidly cooling the cell below 55°C results in the onset of cell flooding. In changing the operating load of the PEMFC, an overshoot in the pressure drop in the reactant channel has been identified for the first time as part of this investigation, and a parametric study has been conducted to identify the factors which influence this overshoot behavior.
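
    The ACR defined above reduces to a simple ratio once the liquid water in the channel has been segmented. A hedged sketch, assuming a binary water mask of the channel interface as might be obtained from ex-situ channel imaging (the mask here is synthetic):

```python
# Area Coverage Ratio: fraction of the reactant-channel interfacial area
# blocked for reactant transport by liquid water.
import numpy as np

def area_coverage_ratio(water_mask):
    """ACR = water-covered interfacial area / total channel interfacial area."""
    return water_mask.sum() / water_mask.size

channel = np.zeros((50, 400), dtype=bool)   # straight channel, 50 x 400 px
channel[10:30, 120:180] = True              # a slug of liquid water
print(f"ACR = {area_coverage_ratio(channel):.3f}")   # 0.060
```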

    CIRA annual report FY 2016/2017

    Reporting period: April 1, 2016 to March 31, 2017.

    Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control

    This paper provides an overview of the current state of the art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control, and highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning, and control. It also discusses the potential benefits of integrating AI, soft robotics, and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs. Comment: Preprint, to appear in Journal of Field Robotics.

    Contribution to the optimization of 4G mobile communications by means of advanced carrier aggregation strategies

    Mobile broadband subscriptions and data traffic have grown steadily in recent years with the deployment of 3G and 4G technologies and the massive use of mobile devices. In this context, LTE-A has been presented as the next step in wireless communications, targeting higher data rates and fully packet-switched services. The ultimate goal of 4G and the forthcoming 5G technology is to increase the users' Quality of Experience (QoE), and several challenges open up to meet the increased bandwidth demands in both the uplink (UL) and the downlink (DL). To this end, LTE-A has proposed the use of Carrier Aggregation (CA), which allows simultaneous data transmission in separate fragments of spectrum. The improvements brought by CA in the DL are almost immediately appreciable, since the evolved Node B (eNB) is in charge of transmissions and power availability is not typically an issue. Conversely, the UL presents many open challenges for aggregated transmissions, since it relies on the user terminal for transmission procedures: lower transmission power and increased interference variability make the UL more complex than the DL. For this reason, this Ph.D. thesis contributes to the field of CA for UL mobile systems. The novelties presented here address the main limitations the UL encounters when introducing CA; new methods and strategies are proposed with the final aim of enhancing UL communications through increased-bandwidth transmissions and reducing the data-rate imbalance between the UL and DL.
    Through an exhaustive literature review, the main research opportunities for successfully implementing CA in the UL were identified, and three main blocks can be recognized. The first is the need for intelligent Radio Resource Management procedures that provide the user with increased QoE, especially at the cell edge, where users are more likely to be power limited and CA is typically discarded. Consequently, the first part of this dissertation places emphasis on scheduling and on the user's power limitations in the face of increased bandwidth: mechanisms that improve throughput are proposed, and scheduling schemes that specifically assess the gain or deterioration brought by CA are designed. These strategies rely strongly on accurate Channel State Information (CSI), and possessing precise CSI is of utmost importance to support such assessments. Accordingly, the second part deals with imperfect CSI, where the efficient use of reference signals provides high value. Channel prediction techniques based on the splines method are proposed; however, the high variability of interference and the long delay between measurements still impair CSI accuracy, so interference management methods are introduced to support the CSI acquisition process. Finally, since CA constitutes the most transverse of the new features added to the 4G standard, the last block of research focuses on the opportunities that emerge from using CA in heterogeneous networks, and new system designs are addressed. Dual connectivity is proposed in the form of decoupled uplink and downlink connections in a CA context, where aggregated carriers may have different coverage footprints, and an analysis of the two cell association cases that arise has been carried out. Stochastic geometry is used to study the system analytically, propagation conditions in the different tiers and frequencies are considered, and the association cases are compared to a classical downlink received-power association rule. The conclusions show that decoupling the uplink provides the system with outstanding gains; however, being connected to the cell with the highest received power may not always be profitable, since issues such as interference and load conditions must also be considered.
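
    As an illustration of the spline-based CSI prediction mentioned in the second research block, the sketch below fits past channel-quality reports with a cubic spline and extrapolates over the reporting delay. The sample values, the report cadence, and the delay are illustrative assumptions, not taken from the thesis.

```python
# Spline-based channel prediction: fit past per-report SINR samples and
# extrapolate to the scheduling instant, compensating the CSI reporting delay.
import numpy as np
from scipy.interpolate import CubicSpline

t = np.array([0.0, 5.0, 10.0, 15.0, 20.0])       # report times (ms)
sinr = np.array([12.1, 11.4, 9.8, 10.5, 11.2])   # measured SINR (dB)

cs = CubicSpline(t, sinr, extrapolate=True)      # fit the past reports
t_next = 24.0                                    # scheduling instant (4 ms later)
print(f"predicted SINR at t={t_next} ms: {cs(t_next):.2f} dB")
```

    As the abstract notes, extrapolation of this kind degrades when interference varies quickly or the measurement delay grows, which is why the thesis pairs it with interference management.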

    Low-Memory Techniques for Routing and Fault-Tolerance on the Fat-Tree Topology

    Clusters of PCs are currently considered an efficient alternative for building supercomputers in which thousands of compute nodes are connected by an interconnection network. The interconnection network must be designed carefully, since it has a great influence on the overall performance of the system. Two of the main design parameters of interconnection networks are the topology and the routing. The topology defines how the network elements are interconnected, both among themselves and with the compute nodes, while the routing defines the paths that packets follow through the network. Performance has traditionally been the main metric for evaluating interconnection networks, but nowadays two additional metrics must be considered: cost and fault tolerance. Interconnection networks must scale in cost as well as in performance; that is, they must not only sustain their throughput as the network grows, but do so without excessively increasing their cost. Moreover, as the number of nodes in cluster machines increases, the interconnection network must grow accordingly, and this increase in the number of network elements raises the probability of faults; fault tolerance is therefore practically mandatory for current interconnection networks. This thesis focuses on the fat-tree topology, one of the most commonly used in clusters. Its goal is to exploit the particular characteristics of the fat-tree to provide fault tolerance and a routing algorithm capable of balancing the network load, offering a good trade-off between performance and cost. Gómez Requena, C. (2010). Low-Memory Techniques for Routing and Fault-Tolerance on the Fat-Tree Topology [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8856
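
    The thesis's exact low-memory routing algorithm is not reproduced in the abstract. As an illustration of why fat-trees admit table-free (and hence low-memory) routing, the sketch below computes a deterministic up/down path on a k-ary n-tree directly from the destination address digits: no forwarding tables are needed, and traffic to different destinations spreads across the upward links, balancing load. The port-selection rule shown is one common destination-based choice, an assumption rather than the thesis's algorithm.

```python
# Deterministic up/down routing on a k-ary n-tree: go up until reaching a
# common ancestor, picking each upward port from the destination's base-k
# digits, then follow the unique downward path to the destination.
def route(src, dst, k, n):
    """Per-stage port choices for src -> dst on a k-ary n-tree."""
    s = [(src // k**i) % k for i in range(n)]    # base-k digits, LSB first
    d = [(dst // k**i) % k for i in range(n)]
    top = n - 1
    while top > 0 and s[top] == d[top]:          # highest differing digit
        top -= 1                                 # = level of the common ancestor
    up = [d[i] for i in range(top + 1)]          # upward ports from dst digits
    down = [d[i] for i in reversed(range(top + 1))]  # unique downward path
    return up, down

up, down = route(src=3, dst=13, k=2, n=4)        # 16-node binary fat-tree
print("up ports:", up, "| down ports:", down)
```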

    Quantitative structure fate relationships for multimedia environmental analysis

    Key physicochemical properties of a wide spectrum of chemical pollutants are unknown. This thesis analyses the prospect of assessing the environmental distribution of chemicals directly with supervised learning algorithms fed with molecular descriptors, rather than with multimedia environmental models (MEMs) fed with physicochemical properties estimated from QSARs. Dimensionless compartmental mass ratios of 468 validation chemicals were compared, in logarithmic units, between: a) SimpleBox 3, a Level III MEM, propagating random property values within the statistical distributions of widely recommended QSARs; and b) Support Vector Regressions (SVRs), acting as Quantitative Structure-Fate Relationships (QSFRs), linking mass ratios to molecular weight and constituent counts (atoms, bonds, functional groups, and rings) for training chemicals. The best predictions were obtained for test and validation chemicals that fell within the domain of applicability of the QSFRs, as evidenced by low MAE and high q2 values (in air, MAE≤0.54 and q2≥0.92; in water, MAE≤0.27 and q2≥0.92).
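
    As an illustration of the QSFR idea (not the thesis's actual model, descriptors, or data), the sketch below trains a scikit-learn Support Vector Regression on simple constituent-count descriptors to predict a log-scale compartmental mass ratio. The tiny dataset is synthetic and illustrative only.

```python
# QSFR-style regression: SVR maps molecular-descriptor counts directly to a
# (log10) compartmental mass ratio, bypassing explicit property estimation.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: mol. weight, C atoms, O atoms, aromatic rings, rotatable bonds
X_train = np.array([
    [ 78.1,  6, 0, 1, 0],
    [ 92.1,  7, 0, 1, 1],
    [128.2, 10, 0, 2, 0],
    [ 94.1,  6, 1, 1, 0],
    [122.1,  7, 2, 1, 1],
])
y_train = np.array([-1.2, -1.4, -2.6, -0.8, -0.5])   # synthetic log10 mass ratios

qsfr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
qsfr.fit(X_train, y_train)
print(qsfr.predict([[106.2, 8, 0, 1, 1]]))   # query within the descriptor ranges
```

    Restricting queries to the descriptor ranges seen in training mirrors the domain-of-applicability condition under which the thesis reports its best MAE and q2 values.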