42 research outputs found

    On the use of word embedding for cross language plagiarism detection

    Full text link
    Cross-language plagiarism is the unacknowledged reuse of text across language pairs. It occurs when a passage of text is translated from a source language into a target language and no proper citation is provided. Although various methods have been developed for detecting cross-language plagiarism, less attention has been paid to measuring and comparing their performance, especially when dealing with different types of paraphrasing through translation. In this paper, we investigate various approaches to cross-language plagiarism detection. Moreover, we present a novel approach to cross-language plagiarism detection using word embedding methods and explore its performance against other state-of-the-art plagiarism detection algorithms. To evaluate the methods, we have constructed an English-Persian bilingual plagiarism detection corpus (referred to as HAMTA-CL) comprising seven types of obfuscation. The results show that the word embedding approach outperforms the other approaches with respect to recall when encountering heavily paraphrased passages. On the other hand, the translation-based approach performs well when precision is the main consideration of the cross-language plagiarism detection system.
    Asghari, H.; Fatemi, O.; Mohtaj, S.; Faili, H.; Rosso, P. (2019). On the use of word embedding for cross language plagiarism detection. Intelligent Data Analysis, 23(3), 661-680. https://doi.org/10.3233/IDA-183985
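
    The abstract above does not spell out the implementation, but the core of an embedding-based cross-language similarity check can be illustrated as follows. This is a minimal sketch under the assumption that word vectors for both languages have already been mapped into a shared space; the dictionaries `en_vectors` and `fa_vectors`, the averaging scheme, and the threshold are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch: cross-language passage similarity via averaged word embeddings.
# Assumes bilingual word vectors already aligned in one shared space
# (hypothetical dicts `en_vectors` / `fa_vectors`, mapping token -> numpy array).
import numpy as np

def sentence_vector(tokens, vectors, dim=300):
    """Average the embeddings of in-vocabulary tokens; zero vector if none."""
    known = [vectors[t] for t in tokens if t in vectors]
    return np.mean(known, axis=0) if known else np.zeros(dim)

def cross_language_similarity(src_tokens, tgt_tokens, src_vectors, tgt_vectors):
    """Cosine similarity between averaged source and target sentence vectors."""
    u = sentence_vector(src_tokens, src_vectors)
    v = sentence_vector(tgt_tokens, tgt_vectors)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Usage idea: flag a target-language passage as a plagiarism candidate when its
# similarity to a source-language passage exceeds an empirically chosen threshold.
# is_candidate = cross_language_similarity(en_tokens, fa_tokens, en_vectors, fa_vectors) > 0.7
```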

    Open Source Software for Integrated Library System: Relative Appropriateness in the Indian Context

    Get PDF
    Libraries in all fields of human activity are involved in the collection, preservation, management, and effective distribution of information that determines the quality of development in the sectors concerned, including higher education and research. The volume of information, and with it the recorded information to be managed, keeps growing, which makes library automation necessary if the information stored in collections is to remain useful and retrievable. Hitherto, the cost of commercial automation packages has prevented millions of libraries from using such tools. The recent emergence of Open Source Software has drastically reduced the cost of automation and has also provided tools for new and innovative information services. The present research work is a comparative study of library automation packages, with emphasis on the appropriateness of Open Source Integrated Library Systems (OSILS) for countries like India. The study is based on a survey of library professionals in India using commercial and OSILS packages. The sample users belong to 601 libraries, covering university, college, school, special, and research libraries, each using one of the integrated library systems. The packages covered are limited to the software and versions used in India. The survey found that the factors users of library automation packages consider are cost effectiveness, technical infrastructure, staff skills, software functionality, and the availability of support, documentation, and community. The study revealed that OSILS provide technological freedom and are therefore changing the landscape of library automation. The survey found Koha to be the most popular package in India. Solutions to improve the situation are suggested, and a few recommendations are provided to help libraries choose a suitable OSILS by understanding its advantages. The study argues that OSILS, being an attractive alternative to costly commercial packages for any type of library, and being free to experiment with and easy to use and customise for local requirements, need to be promoted in Indian libraries.

    Evaluating Information Retrieval and Access Tasks

    Get PDF
    This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew to be the search engines that provide access to content on the World Wide Web, today's smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life. Key to the success of the NTCIR endeavor was early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book. The chapters show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students—anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.
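
    The observation that some documents are more important than others is the motivation behind graded-relevance evaluation measures. Purely as an illustration (the book discusses many measures, and this particular one is not singled out in the abstract), the sketch below computes nDCG for an invented ranking with invented relevance grades.

```python
# Illustrative graded-relevance measure (nDCG); the ranking and grades below
# are invented for demonstration, not taken from any NTCIR collection.
import math

def dcg(gains):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg(ranked_gains, all_gains):
    """DCG of the system ranking, normalised by the ideal (sorted) ranking."""
    ideal = dcg(sorted(all_gains, reverse=True))
    return dcg(ranked_gains) / ideal if ideal else 0.0

# A run that places the most relevant document (grade 3) at rank 2:
print(round(ndcg([1, 3, 0, 2], [3, 2, 1, 0]), 3))  # ~0.788
```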

    IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages

    Full text link
    India has a rich linguistic landscape, with languages from 4 major language families spoken by over a billion people. The 22 of these languages that are listed in the Constitution of India (referred to as scheduled languages) are the focus of this work. Given this linguistic diversity, high-quality and accessible Machine Translation (MT) systems are essential in a country like India. Prior to this work, there were (i) no parallel training data spanning all 22 languages, (ii) no robust benchmarks covering all these languages and containing content relevant to India, and (iii) no existing translation models supporting all 22 scheduled languages of India. In this work, we aim to address this gap by focusing on the missing pieces required for enabling wide, easy, and open access to good machine translation systems for all 22 scheduled Indian languages. We identify four key areas of improvement: curating and creating larger training datasets, creating diverse and high-quality benchmarks, training multilingual models, and releasing models with open access. Our first contribution is the release of the Bharat Parallel Corpus Collection (BPCC), the largest publicly available collection of parallel corpora for Indic languages. BPCC contains a total of 230M bitext pairs, of which 126M are newly added, including 644K manually translated sentence pairs created as part of this work. Our second contribution is the release of the first n-way parallel benchmark covering all 22 Indian languages, featuring diverse domains, Indian-origin content, and source-original test sets. Next, we present IndicTrans2, the first model to support all 22 languages, surpassing existing models on multiple existing and new benchmarks created as part of this work. Lastly, to promote accessibility and collaboration, we release our models and associated data with permissive licenses at https://github.com/ai4bharat/IndicTrans2
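
    For readers who want to try the released models, the sketch below shows only the generic Hugging Face transformers recipe for loading a sequence-to-sequence translation checkpoint. The model identifier is a placeholder, and the actual IndicTrans2 checkpoints may require project-specific preprocessing (language tags, script normalisation) documented in the linked repository, so treat this as a starting point rather than the project's own usage example.

```python
# Generic seq2seq translation sketch with Hugging Face transformers.
# The model id is a PLACEHOLDER; consult https://github.com/ai4bharat/IndicTrans2
# for the actual checkpoints and the preprocessing they expect.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "org-name/translation-checkpoint"  # placeholder, not a real model name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The weather is pleasant today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```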

    The text classification pipeline: Starting shallow, going deeper

    Get PDF
    An increasingly relevant and crucial subfield of Natural Language Processing (NLP), tackled in this PhD thesis from a computer science and engineering perspective, is Text Classification (TC). In this field too, the exceptional success of deep learning has sparked a boom over the past ten years. Text retrieval and categorization, information extraction, and summarization all rely heavily on TC. The literature has presented numerous datasets, models, and evaluation criteria. Even though languages such as Arabic, Chinese, and Hindi are employed in several works, from a computer science perspective the most widely used and referenced language in the TC literature is English, and it is also the language mainly referenced in the rest of this PhD thesis. Although numerous machine learning techniques have shown outstanding results, classifier effectiveness depends on the capability to comprehend intricate relations and non-linear correlations in texts. To achieve this level of understanding, it is necessary to pay attention not only to the architecture of a model but also to the other stages of the TC pipeline. Within NLP, a range of text representation techniques and model designs have emerged, including large language models; these models can turn massive amounts of text into useful vector representations that effectively capture semantically significant information. A point of particular interest is that this field has been investigated by numerous communities, including data mining, linguistics, and information retrieval. These communities frequently overlap, but are mostly separate and conduct their research independently. Bringing researchers from these groups together to improve the multidisciplinary understanding of the field is one of the objectives of this dissertation. Additionally, this dissertation examines text mining from both a traditional and a modern perspective. The thesis covers the whole TC pipeline in detail, but its main contribution is to investigate how every element of the pipeline affects the final performance of a TC model. The TC pipeline is discussed, including both traditional and the most recent deep learning-based models; it consists of the State-Of-The-Art (SOTA) datasets used as benchmarks in the literature, text preprocessing, text representation, machine learning models for TC, evaluation metrics, and current SOTA results. Each chapter of the dissertation goes over one of these steps, covering both the technical advancements and my most significant and recent findings from experiments and novel models. The advantages and disadvantages of the various options are also listed, along with a thorough comparison of the approaches. Each chapter ends with my contributions, experimental evaluations, and a discussion of the results obtained during my three-year PhD course. The experiments and analysis related to each chapter (i.e., each element of the TC pipeline) are the main contributions I provide, extending the basic coverage of a regular survey on TC.
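
    To make the pipeline stages concrete, the following sketch wires together the "shallow" baseline end of it: preprocessing, a TF-IDF text representation, a linear classifier, and an evaluation metric. The tiny inline dataset is invented purely for illustration and stands in for the benchmark datasets discussed in the thesis.

```python
# Minimal "shallow" text classification pipeline: preprocessing + TF-IDF
# representation + linear model + evaluation. The toy dataset is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

train_texts = ["the match ended in a draw", "parliament passed the new bill",
               "the striker scored twice", "the senate debated the budget"]
train_labels = ["sports", "politics", "sports", "politics"]
test_texts = ["the goalkeeper saved a penalty", "ministers discussed the tax reform"]
test_labels = ["sports", "politics"]

clf = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
print("accuracy:", accuracy_score(test_labels, clf.predict(test_texts)))
```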

    Geographic information extraction from texts

    Get PDF
    A large volume of unstructured texts, containing valuable geographic information, is available online. This information, provided implicitly or explicitly, is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although considerable progress has been made in geographic information extraction from texts, there are still unsolved challenges and issues, ranging from methods, systems, and data to applications and privacy. Therefore, this workshop will provide a timely opportunity to discuss recent advances, new ideas, and concepts, and also to identify research gaps in geographic information extraction.
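
    As a concrete entry point to the task the workshop addresses, the sketch below runs off-the-shelf named entity recognition to pick out candidate place names (toponym recognition); resolving them to coordinates (toponym resolution) is a separate step not shown. The spaCy model named here is assumed to be installed and is only one of many possible tools.

```python
# Toponym recognition sketch with spaCy NER (assumes the small English model
# is installed: python -m spacy download en_core_web_sm). Toponym resolution
# to coordinates is a separate step not covered here.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Flooding was reported along the Danube between Vienna and Budapest.")
places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
print(places)  # model-dependent, e.g. ['Danube', 'Vienna', 'Budapest']
```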

    A knowledge-based framework to manage plastic waste in urban environments using multi-source data.

    Get PDF
    Owing to the continued worldwide growth in the amount of plastic waste, defining efficient urban planning policies together with proper management and collection of household waste can often be a very demanding challenge. Many cities and countries frequently face inadequate disposal of plastic waste, including the two countries on which this thesis focuses: India and the Philippines. India, in particular, has different policies depending on its geographic segmentation. Two of its states analysed in more detail are Punjab, where most cities lack adequate waste bins, and Gujarat, where the use and deployment of municipal bins has only just begun. This thesis presents an intelligent collaborative system focused on monitoring plastic waste through a novel approach for defining policies that support the management of this type of waste in urban environments. The proposed system consists of smart household bins equipped with weighing scales and an intelligent application that collects and forecasts the plastic waste that will accumulate in the bin over different time horizons. The system can also generate routes through a planning mechanism that helps waste pickers carry out proactive household waste collection with different means of transport. Waste pickers can use the locations of municipal plastic waste bins previously inferred by our system through the analysis of socio-economic and demographic open data. This intelligent system has been evaluated in two urban areas of India and the Philippines, showing convincing results. Thanks to the continued worldwide promotion of open data as a means of accessing transparent data, this study also used open data to retrieve the demographics, the number of premises in different categories, the number of street segments, and the bin locations of four western reference cities: New York, Málaga, Madrid, and Stavanger. The main goal of extracting open data from these four cities is to determine the distribution of bins as a function of these variables. As a proof of concept, we used these data to plan an urban waste management scenario in the target cities in the Philippines and India. Comparing the reference and target cities also shows that the Indian areas resemble Stavanger, owing to the distribution of premises, and that Quezon City has citizen activity similar to that of New York, Madrid, and Málaga. Specifically, a linear regression analysis was performed on the reference-city data to determine the relevant variables and the coefficient of determination that measures confidence in the models. Weighted least squares analysis was also applied to the variables obtained in the previous steps, such as population density, the number of street segments, and the four predominant land uses obtained by applying the principal component analysis algorithm. From this, the number of bins required and proposed in each of the target cities was identified.
Furthermore, waste collection in most countries still relies on traditional methods with fixed schedules. This is a problem, since inadequate and ineffective waste collection can lead to contamination and pollution. Serious public concern can also arise when plastic waste is handled inadequately because of collection problems such as irregular pickup. As an alternative, a smart bin with a high-resolution scale is used to monitor household plastic waste. A collaborative application was also designed to manage household waste collection in communities with special needs, such as residents affected by Covid-19, the elderly, or people with disabilities. This development further included an algorithm to forecast plastic waste generation in order to provide an optimised collection route for household waste collectors. In general terms, the system records the weight of household bins through the weight sensor. These data are sent to a backend server that includes a dashboard for visualising the sensor data, as well as a planning algorithm able to personalise the routes of the waste pickers registered in the system so that waste collection is proactive rather than traditional. The data used for the simulations were based on experiments conducted across different demographic characteristics such as household types and age groups. The weight forecast feeds the module used to build routes for the waste pickers. Three clusters based on those characteristics were also obtained, each representing a particular profile of plastic waste generation. The simulation was evaluated in Quezon City, the Philippines, where eight smart household bins and two waste picker locations were defined, with each bin linked to a particular cluster. An iterative approach was simulated in which a particular experiment was drawn and a specific number of sub-experiments was generated. The collection points, together with the waste pickers' time logs, were fed into an algorithm that optimises the collection routes required by the pickers. The collection rate is then computed, indicating the percentage of bins included in the route that are collected by the pickers before they fill up. The calculations for each route include the collection time and the actual fill-up time of each bin. Three different means of transport, car, bicycle, and on foot, were studied to assess this collection rate. The results show that the solution achieved an average collection rate of 80%. Moreover, when bicycles and cars are used, collection rates increase as more bin predictions become available. With the integration of the urban planning module and the route composition and smart bin module, the results show an average collection rate above 80% for bicycles, cars, and walking as the means of transport. It can also be observed that using waste pickers and municipal waste bins in the same area would support a sustainable system in which houses and bins are reached by bicycle or on foot rather than by car.
In short, a collaborative solution has been achieved that assists different groups in the collection of household plastic waste. A lightweight, high-resolution smart bin is proposed to capture and forecast the amount of plastic waste in each household's bins. In addition, different intelligent techniques are defined to generate optimised routes for household waste collectors and registered waste pickers, enabling them to carry out efficient waste collection. The number of plastic bins needed in a specific area is also determined through open data and different variables related to urban planning and plastics management extracted from cities that are benchmarks in urban waste management.
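
    The bin-estimation step described above (linear regression with weighted least squares over open-data variables) can be sketched as follows. All numbers are invented for illustration, and the principal component analysis step that the thesis applies to the land-use variables is omitted here.

```python
# Sketch of the bin-estimation idea: weighted least-squares regression of
# municipal bin counts on urban open-data variables from reference cities,
# then prediction for a target city. All values below are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Reference-city rows: [population density, street segments, land-use factor]
X_ref = np.array([[10_000, 12_000, 0.8],
                  [ 4_500,  6_500, 0.3],
                  [ 7_200,  9_000, 0.5],
                  [ 3_100,  4_000, 0.2]], dtype=float)
y_ref = np.array([5_200, 1_900, 3_100, 1_100], dtype=float)  # observed bin counts
weights = np.array([1.0, 0.8, 1.0, 0.6])                     # per-city confidence

model = LinearRegression().fit(X_ref, y_ref, sample_weight=weights)
print("R^2 on reference cities:", model.score(X_ref, y_ref, sample_weight=weights))

X_target = np.array([[6_000, 7_500, 0.4]])  # hypothetical target-city values
print("Estimated bins for target city:", int(round(model.predict(X_target)[0])))
```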

    NIAS Annual Report 2020-2021 (NIAS/U/AR/24/2021)

    Get PDF

    Sustainability in design: now! Challenges and opportunities for design research, education and practice in the XXI century

    Get PDF
    Copyright © 2010 Greenleaf Publications. LeNS project funded by the Asia Link Programme, EuropeAid, European Commission