Collected Papers (on Neutrosophics, Plithogenics, Hypersoft Set, Hypergraphs, and other topics), Volume X
This tenth volume of Collected Papers includes 86 papers in English and Spanish languages comprising 972 pages, written between 2014-2022 by the author alone or in collaboration with the following 105 co-authors (alphabetically ordered) from 26 countries: Abu Sufian, Ali Hassan, Ali Safaa Sadiq, Anirudha Ghosh, Assia Bakali, Atiqe Ur Rahman, Laura Bogdan, Willem K.M. Brauers, Erick González Caballero, Fausto Cavallaro, Gavrilă Calefariu, T. Chalapathi, Victor Christianto, Mihaela Colhon, Sergiu Boris Cononovici, Mamoni Dhar, Irfan Deli, Rebeca Escobar-Jara, Alexandru Gal, N. Gandotra, Sudipta Gayen, Vassilis C. Gerogiannis, Noel Batista Hernández, Hongnian Yu, Hongbo Wang, Mihaiela Iliescu, F. Nirmala Irudayam, Sripati Jha, Darjan Karabašević, T. Katican, Bakhtawar Ali Khan, Hina Khan, Volodymyr Krasnoholovets, R. Kiran Kumar, Manoranjan Kumar Singh, Ranjan Kumar, M. Lathamaheswari, Yasar Mahmood, Nivetha Martin, Adrian Mărgean, Octavian Melinte, Mingcong Deng, Marcel Migdalovici, Monika Moga, Sana Moin, Mohamed Abdel-Basset, Mohamed Elhoseny, Rehab Mohamed, Mohamed Talea, Kalyan Mondal, Muhammad Aslam, Muhammad Aslam Malik, Muhammad Ihsan, Muhammad Naveed Jafar, Muhammad Rayees Ahmad, Muhammad Saeed, Muhammad Saqlain, Muhammad Shabir, Mujahid Abbas, Mumtaz Ali, Radu I. Munteanu, Ghulam Murtaza, Munazza Naz, Tahsin Oner, Gabrijela Popović, Surapati Pramanik, R. Priya, S.P. Priyadharshini, Midha Qayyum, Quang-Thinh Bui, Shazia Rana, Akbara Rezaei, Jesús Estupiñán Ricardo, Rıdvan Sahin, Saeeda Mirvakili, Said Broumi, A. A. Salama, Flavius Aurelian Sârbu, Ganeshsree Selvachandran, Javid Shabbir, Shio Gai Quek, Son Hoang Le, Florentin Smarandache, Dragiša Stanujkić, S. Sudha, Taha Yasin Ozturk, Zaigham Tahir, The Houw Iong, Ayse Topal, Alptekin Ulutaș, Maikel Yelandi Leyva Vázquez, Rizha Vitania, Luige Vlădăreanu, Victor Vlădăreanu, Ștefan Vlăduțescu, J. Vimala, Dan Valeriu Voinea, Adem Yolcu, Yongfei Feng, Abd El-Nasser H. Zaied, Edmundas Kazimieras Zavadskas.
Precision livestock farming towards broiler welfare
Due to the intensification of the livestock system, the ratio between the number of broilers and the number of farmers has been increasing, making individualized attention to animals impossible without the use of appropriate tools. Growing societal concern about broiler welfare requires farmers to find means to improve the level of animal welfare. Precision livestock farming (PLF) emerges as a possible solution, as it enables the monitoring of animals and their environment 24/7. The present study aims to provide information on how PLF technologies can address broiler welfare and to evaluate the reasons for their adoption (or non-adoption) by farmers. The discussion and analysis of results are based on the three main pillars that guide the present research: animal welfare, PLF technologies, and innovation adoption. Methodologically, the study consists of two steps. First, a systematic review of the literature was carried out to identify the PLF technologies related to broiler welfare and to assess how they address birds' welfare. Results indicate that most PLF technologies rely on image analysis and focus mainly on improvements to broiler health. In the second stage, an empirical survey was carried out with broiler farmers in Southern Brazil. From this survey, information was gathered on broiler farmers' opinions about broiler welfare and the potential of PLF, as well as on the determinants of and limiting factors for technology adoption. In general, Brazilian broiler farmers attribute great importance to broiler welfare and perceive the current level of welfare as high; however, higher scores for importance than for perception indicate that there is room for welfare improvement. In broiler farmers' opinions, providing animals with food/water and good housing and health conditions is more important than providing means for the animals to express their natural behaviors.
Broiler farmers believe that technologies can help them improve welfare and are willing to adopt them even when no extra income comes from this. Broiler farmers with less experience, producing griller chickens, having another farm activity besides broiler production, and holding strong beliefs in the potential of PLF to improve animal welfare are more likely to adopt PLF technologies. The major limiting factors for the adoption of PLF technologies concern high technology prices, maintenance requirements, and the possible financial consequences of technical problems. It is hoped that the present thesis will help clarify the opportunities of PLF technologies from the broiler farmers' point of view, and that the results obtained will prove valuable in increasing PLF adoption, which can potentially improve the welfare of animals and farmers alike.
Deep learning for surface electromyography artifact contamination type detection
The quality of surface electromyography (sEMG) signals can be compromised by heavy contamination with Power Line Interference (PLI), electrocardiogram (ECG) signals, Movement Artifact (MOA), or White Gaussian Noise (WGN), which can lead to unsafe operation of devices controlled by sEMG data, such as electromechanical prostheses. Mitigation methods have been proposed for specific sEMG contaminants, and to use these methods efficiently it is important to identify the contaminant present in the sEMG signal. In this work we propose a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units in the hidden layer, requiring no feature extraction, with the objective of classifying the signal directly from sequences of band-pass filtered data. The proposed method uses the NinaPro database with amputee and non-amputee subjects; only non-amputee subjects are used for parameter selection, and the model is then tested on both groups. The results show that 98% of the uncontaminated sEMG data were correctly classified and that more than 95% of the contaminants were identified within the training SNR range. This work also presents an SNR sensitivity analysis of contamination in the range from −40 dB to 40 dB in 10 dB steps. The conclusion is that it is possible to classify the contamination type in sEMG signals with an RNN-LSTM using a 112.5 ms time window and, in some cases, to predict with small error the classification hit rate for each SNR level.
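The SNR sweep described above can be sketched by synthetically contaminating a signal with WGN at a target SNR. Everything here is illustrative (the sine wave is a stand-in for real sEMG data, and the function name is invented), not the authors' code:

```python
import numpy as np

def add_wgn(signal, snr_db, rng=None):
    """Contaminate a signal with white Gaussian noise at a target SNR (dB).

    The noise power is scaled so that 10 * log10(P_signal / P_noise) == snr_db.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    p_signal = np.mean(signal ** 2)                 # average signal power
    p_noise = p_signal / (10 ** (snr_db / 10.0))    # required noise power
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise, noise

# Sweep the SNR range used in the study: -40 dB to 40 dB in 10 dB steps.
snr_levels = range(-40, 50, 10)
semg = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 2000))  # stand-in for sEMG
contaminated = {snr: add_wgn(semg, snr)[0] for snr in snr_levels}
```

Windows of such contaminated sequences (e.g., 112.5 ms at the recording's sampling rate) would then be fed to the classifier.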
UMSL Bulletin 2020-2021
The 2020-2021 Bulletin and Course Catalog for the University of Missouri-St. Louis.
Inferring Complex Activities for Context-aware Systems within Smart Environments
The rising ageing population worldwide and the prevalence of age-related conditions such as physical frailty, mental impairments, and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. The resulting strain on healthcare providers is leading to a paradigm shift in public healthcare provisioning. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address the aforementioned problems.
Human Activity Recognition (HAR) is a critical component of AAL systems, enabling applications such as just-in-time assistance, behaviour analysis, anomaly detection, and emergency notification. This thesis investigates the challenges faced in accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented sensor data and investigates recognising human ADLs at multiple granularities of action: coarse-grained and fine-grained. At the coarse-grained level, semantic relationships between sensors, objects, and ADLs are deduced, whereas at the fine-grained level, object usage at a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, to handle the imprecise/vague interpretations of multimodal sensors and the challenges of data fusion, fuzzy set theory and the fuzzy Web Ontology Language (fuzzy-OWL) are leveraged. The third study focuses on incorporating the uncertainties introduced into HAR by factors such as technological failure, object malfunction, and human error. Uncertainty theories and approaches in existing studies are analysed and, based on the findings, a probabilistic ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL and proposes a microservices architecture with off-the-shelf and bespoke sensor-based sensing methods.
The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single-activity and mixed-activity scenarios, respectively. However, the average classification time taken to segment each sensor event was high: 3971 ms for single-activity and 62183 ms for mixed-activity scenarios. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules to detect two fine-grained movements using a dataset pre-collected from the real-time smart environment. Its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion-rule creation. The third study was evaluated by combining the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated, through a case study, the extension of single-user activity recognition to multi-user activity recognition by combining discriminative sensors (RFID tags and fingerprint sensors) to identify users and associate their actions with the aid of time-series analysis. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for the AAL system. Future research directions towards adopting fog/edge computing paradigms from cloud computing are discussed, aiming at higher availability, reduced network traffic/energy and cost, and a decentralised system.
As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the fine-grained level of user actions. This framework integrates three complementary ontologies to conceptualise factual, fuzzy, and uncertain aspects of the environment/ADLs, time-series analysis, and the discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and other supporting utility tools, such as a simulator and a synthetic ADL data generator, were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently supported by an Android mobile application and web-browser-based client interfaces for retrieving information such as live sensor events and HAR results.
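The fuzzy-rule verification of fine-grained actions described in the second study can be illustrated with a minimal Mamdani-style sketch. The sensors, thresholds, and the 'pick up cup' rule below are invented for illustration and are not taken from the thesis:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def verify_action(pressure, proximity):
    """Fuse two (hypothetical) sensor readings into a confidence that a
    'pick up cup' action occurred, using min for AND (Mamdani-style)."""
    grip = tri(pressure, 0.2, 0.6, 1.0)     # membership in 'firm grip'
    near = tri(proximity, 0.0, 0.1, 0.4)    # membership in 'hand near object'
    # Rule: IF grip is firm AND hand is near THEN action is verified.
    return min(grip, near)

confidence = verify_action(pressure=0.6, proximity=0.1)  # both at peaks -> 1.0
```

Real systems would fuse many such rules across multimodal sensors, which is where the scalability concern about fusion-rule creation noted above arises.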
Welcome to Hell on Earth - Artificial Intelligence, Babies, Bitcoin, Cartels, China, Democracy, Diversity, Dysgenics, Equality, Hackers, Human Rights, Islam, Liberalism, Prosperity, The Web
America and the world are in the process of collapse from excessive population growth, most of it for the last century and now all of it due to 3rd world people. Consumption of resources and the addition of one or two billion more ca. 2100 will collapse industrial civilization and bring about starvation, disease, violence and war on a staggering scale. Billions will die and nuclear war is all but certain. In America this is being hugely accelerated by massive immigration and immigrant reproduction, combined with abuses made possible by democracy. Depraved human nature inexorably turns the dream of democracy and diversity into a nightmare of crime and poverty. The root cause of collapse is the inability of our innate psychology to adapt to the modern world, which leads people to treat unrelated persons as though they had common interests. This, plus ignorance of basic biology and psychology, leads to the social engineering delusions of the partially educated who control democratic societies. Few understand that if you help one person you harm someone else—there is no free lunch and every single item anyone consumes destroys the earth beyond repair. Consequently, social policies everywhere are unsustainable and one by one all societies without stringent controls on selfishness will collapse into anarchy or dictatorship. Without dramatic and immediate changes, there is no hope for preventing the collapse of America, or any country that follows a democratic system. Hence my essay “Suicide by Democracy”. It is also now clear that the seven sociopaths who rule China are winning world war 3, and so my concluding essay on them. The only greater threat is Artificial Intelligence which I comment on briefly.
The key to everything about us is biology, and it is obliviousness to it that leads millions of smart educated people like Obama, Chomsky, Clinton, the Democratic Party and the Pope to espouse suicidal utopian ideals that inexorably lead straight to Hell on Earth. As W noted, it is what is always before our eyes that is the hardest to see. We live in the world of conscious deliberative linguistic System 2, but it is unconscious, automatic reflexive System 1 that rules. This is the source of the universal blindness described by Searle’s The Phenomenological Illusion (TPI), Pinker’s Blank Slate and Tooby and Cosmides’ Standard Social Science Model.
The first group of articles attempts to give some insight into how we behave that is reasonably free of theoretical delusions. In the next three groups I comment on three of the principal delusions preventing a sustainable world: technology, religion and politics (cooperative groups). People believe that society can be saved by them, so in the rest of the book I offer, via short articles and reviews of recent books by well-known writers, some suggestions as to why this is unlikely.
Another section describes the religious delusion – that there is some super power that will save us.
The next section describes the digital delusions, which confuse the language games of System 2 with the automatisms of System 1, and so cannot distinguish biological machines (i.e., people) from other kinds of machines (i.e., computers). Other digital delusions are that we will be saved from the pure evil (selfishness) of System 1 by computers/AI/robotics/nanotech/genetic engineering created by System 2. The No Free Lunch principle tells us there will be serious and possibly fatal consequences.
The last section describes The One Big Happy Family Delusion, i.e., that we are selected for cooperation with everyone, and that the euphonious ideals of Democracy, Diversity and Equality will lead us into utopia, if we just manage things correctly (the possibility of politics). Again, the No Free Lunch Principle ought to warn us it cannot be true, and we see throughout history and all over the contemporary world that, without strict controls, selfishness and stupidity gain the upper hand and soon destroy any nation that embraces these delusions. In addition, the monkey mind steeply discounts the future, and so we cooperate in selling our descendants' heritage for temporary comforts, greatly exacerbating the problems.
The Significance of Evidence-based Reasoning in Mathematics, Mathematics Education, Philosophy, and the Natural Sciences
In this multi-disciplinary investigation we show how an evidence-based perspective of quantification, in terms of algorithmic verifiability and algorithmic computability, admits evidence-based definitions of well-definedness and effective computability, which yield two unarguably constructive interpretations of the first-order Peano Arithmetic PA, over the structure N of the natural numbers, that are complementary, not contradictory. The first yields the weak, standard, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically verifiable Tarskian truth values to the formulas of PA under the interpretation. The second yields a strong, finitary, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically computable Tarskian truth values to the formulas of PA under the interpretation. We situate our investigation within a broad analysis of quantification vis-à-vis:
* Hilbert's epsilon-calculus
* Goedel's omega-consistency
* The Law of the Excluded Middle
* Hilbert's omega-Rule
* An Algorithmic omega-Rule
* Gentzen's Rule of Infinite Induction
* Rosser's Rule C
* Markov's Principle
* The Church-Turing Thesis
* Aristotle's particularisation
* Wittgenstein's perspective of constructive mathematics
* An evidence-based perspective of quantification
By showing how these are formally inter-related, we highlight the fragility of both the persisting, theistic, classical/Platonic interpretation of quantification grounded in Hilbert's epsilon-calculus, and the persisting, atheistic, constructive/Intuitionistic interpretation of quantification rooted in Brouwer's belief that the Law of the Excluded Middle is non-finitary. We then consider some consequences for mathematics, mathematics education, philosophy, and the natural sciences of an agnostic, evidence-based, finitary interpretation of quantification that challenges classical paradigms in all these disciplines.
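For orientation, Hilbert's omega-rule, one of the notions listed above, admits a standard textbook formulation (this is a common rendering, not quoted from the work itself):

```latex
% Hilbert's (infinitary) omega-rule: from proofs of F(n) for every
% numeral n, infer the universal statement.
\frac{\vdash F(0) \qquad \vdash F(1) \qquad \vdash F(2) \qquad \cdots}
     {\vdash (\forall x)\, F(x)}
```

On a natural reading, the "algorithmic" omega-rule in the list would additionally demand an effective procedure that, given any numeral n, produces the proof of F(n), which is what connects the rule to the evidence-based notions of verifiability and computability above.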
Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet - Articles and Reviews 2006-2019 Michael Starks 3rd Edition
This collection of articles and reviews is about human behavior (as are all articles by anyone about anything), and so about the limitations of having a recent monkey ancestry (8 million years or much less, depending on viewpoint) and of manifesting words and deeds within the framework of our innate psychology as presented in the table of intentionality. As the famous evolutionist Richard Leakey says, it is critical to keep in mind not that we evolved from apes, but that in every important way we are apes. If everyone were given a real understanding of this (i.e., of human ecology and psychology, to actually give them some control over themselves), maybe civilization would have a chance. As things are, however, the leaders of society have no more grasp of things than their constituents, and so collapse into anarchy and dictatorship appears inevitable.
Since philosophy proper is essentially the same as the descriptive psychology of higher-order thought (behavior), and philosophical problems are the result of our innate psychology, or, as Wittgenstein put it, of the lack of perspicuity of language, they run throughout human discourse and behavior; so there is endless need for philosophical analysis, not only in the ‘human sciences’ of philosophy, sociology, anthropology, political science, psychology, history, literature, religion, etc., but also in the ‘hard sciences’ of physics, mathematics, and biology. It is universal to mix the language-game questions with the genuinely scientific questions about what the empirical facts are. Scientism is ever present, and the master laid it before us long ago, i.e., Wittgenstein (hereafter W), beginning with the Blue and Brown Books in the early 1930s.
Although I separate the book into sections on philosophy and psychology, religion, biology, the ‘hard sciences’, and politics/sociology/economics, all the articles, like all behavior, are intimately connected if one knows how to look at them. As I note, The Phenomenological Illusion (oblivion to our automated System 1) is universal and extends not merely throughout philosophy but throughout life. I am sure that Chomsky, Obama, Zuckerberg, and the Pope would be incredulous if told that they suffer from the same problems as Hegel, Husserl, and Heidegger, or that they differ only in degree from drug and sex addicts in being motivated by stimulation of their frontal cortices by the delivery of dopamine (and over 100 other chemicals) via the ventral tegmentum and the nucleus accumbens, but it is clearly true. While the phenomenologists only wasted a lot of people's time, they are wasting the earth and their descendants' future.
I hope that these essays will help to separate the philosophical issues of language use from the scientific factual issues, and in some small way hinder the collapse of civilization, or at least make it clear why it is doomed.
Those wishing to read my other writings may see Talking Monkeys 2nd ed (2019), The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle 2nd ed (2019), Suicide by Democracy 3rd ed (2019), The Logical Structure of Human Behavior (2019), and Suicidal Utopian Delusions in the 21st Century 4th ed (2019).
Learning, conditionals, causation
This dissertation is on conditionals and causation. In particular, we (i) propose a method by which an agent learns conditional information, and (ii) analyse causation in terms of a new type of conditional. Our starting point is Ramsey's (1929/1990) test: accept a conditional when you can infer its consequent upon supposing its antecedent. Inspired by this test, Stalnaker (1968) developed a semantics of conditionals. In Ch. 2, we define and apply our new method of learning conditional information. It says, roughly, that you learn conditional information by updating on the corresponding Stalnaker conditional. By generalising Lewis's (1976) updating rule to Jeffrey imaging, our learning method becomes applicable to both certain and uncertain conditional information. The method generates the correct predictions for all of Douven's (2012) benchmark examples and Van Fraassen's (1981) Judy Benjamin Problem. In Ch. 3, we prefix Ramsey's test by suspending judgment on antecedent and consequent. Unlike the Ramsey Test semantics of Stalnaker (1968) and Gärdenfors (1978), our strengthened semantics requires the antecedent to be inferentially relevant to the consequent. We exploit this asymmetric relation of relevance in a semantic analysis of the natural-language conjunction 'because'. In Ch. 4, we devise an analysis of actual causation in terms of production, where production is understood along the lines of our strengthened Ramsey Test. Our analysis solves the problems of overdetermination, conjunctive scenarios, early and late preemption, switches, double prevention, and spurious causation, a set of problems that still challenges counterfactual accounts of actual causation in the tradition of Lewis (1973c). In Ch. 5, we translate our analysis of actual causation into Halpern and Pearl's (2005) framework of causal models. As a result, our analysis is considerably simplified, at the cost of losing its reductiveness.
The upshot is twofold: (i) Jeffrey imaging on Stalnaker conditionals emerges as an alternative to Bayesian accounts of learning conditional information; (ii) the analyses of causation in terms of our strengthened Ramsey Test conditional prove to be worthy rivals to contemporary counterfactual accounts of causation
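Lewis's (1976) imaging rule, which Ch. 2 generalises to Jeffrey imaging, can be illustrated on a toy model. The worlds, the prior, and the closest-world map below are invented for illustration and are not from the dissertation:

```python
# Toy illustration of Lewis (1976) imaging: on learning A, each world's
# probability mass moves to its closest A-world under a similarity ordering.
# Worlds, probabilities, and the closest-world map are all invented here.

P = {"w1": 0.5, "w2": 0.3, "w3": 0.2}              # prior over three worlds
A = {"w1", "w3"}                                    # proposition A holds at w1, w3
closest_A = {"w1": "w1", "w2": "w3", "w3": "w3"}    # nearest A-world to each world

def image_on(P, closest):
    """Return the imaged distribution: shift each world's probability mass
    to its designated closest A-world."""
    new = {w: 0.0 for w in P}
    for w, p in P.items():
        new[closest[w]] += p
    return new

P_A = image_on(P, closest_A)   # mass on w2 moves to w3; A-worlds keep theirs
```

Jeffrey imaging generalises this by allowing the learned information to be uncertain, so that only part of each world's mass is shifted; this is what makes the method applicable to uncertain conditional information.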
Copyright Policies of Scientific Publications in Institutional Repositories: The Case of INESC TEC
The progressive transformation of scientific practice, driven by the development of new Information and Communication Technologies (ICT), has increased access to information, gradually moving towards an opening of the research cycle. In the long term, this will help overcome a persistent obstacle for researchers: the existence of barriers, whether geographical or financial, that limit the conditions of access. Although scientific publishing is dominated largely by major commercial publishers, and is therefore subject to the rules they impose, the Open Access movement, whose first public statement was the Budapest Open Access Initiative (BOAI) declaration of 2002, proposes significant changes that benefit both authors and readers. The movement has been gaining importance in Portugal since 2003, when the first institutional repository in the country was established. Institutional repositories emerged as tools for disseminating an institution's scientific output, opening up research results both before peer review and publication (preprint) and afterwards (postprint), and consequently increasing the visibility of the work of researchers and their institutions. The present study, based on an analysis of the copyright policies of the most relevant scientific publications of INESC TEC, showed not only that publishers increasingly adopt policies that allow the self-archiving of publications in institutional repositories, but also that considerable awareness-raising work remains to be done, with researchers, with the institution, and with society at large.
The production of a set of recommendations, centred on the implementation of an institutional policy that encourages self-archiving in the repository of publications produced within the institution, serves as a starting point for a greater appreciation of the scientific output of INESC TEC.