4,108 research outputs found

    Reflections on Cultural Superiority and the Just War: A Neomodern Imperative

    If all cultures are morally equivalent, then all individuals are not endowed with the same human rights, because some cultures award some men more rights than are allotted to other men and women. If, on the other hand, all men and women are endowed with the same human rights, then all cultures are not morally equivalent, because cultures that acknowledge that "all men are created equal" are ethically superior to those that do not. These two statements are mutually contradictory and cannot both be true. Moreover, there is a natural conflict between them, leading to inevitable intra- and inter-civilizational clashes: relativism will confront evolutionism, and hierarchical theocracy will confront secularized republicanism. This essay takes sides and argues that cultural superiority can be asserted on two different levels: moral and epistemological. A culture that acknowledges a set of universal human rights is superior to one that does not, even if it often deviates from those very norms. A culture capable of probing nature and increasing life expectancy through scientific discovery is superior to one that cannot. Furthermore, waging war to defend a superior culture is a moral imperative.

    From Madness to Medicine: How Nazi Medical Experimentation Morphed into Today’s Medical Field

    It is no secret that many of our current scientific and medical advancements stem from a long history of research, trials, and experimentation, but not enough is known about the origins of our routine practices. The Holocaust enabled Nazi doctors to experiment on countless victims in search of the ultimate answer to the Jewish question. The answer: to alleviate the burden that those deemed “unworthy of life” placed on the greater society. The mass extermination practices that mark the atrocities of the Holocaust were the end result of constant scientific developments disguised as medicine. Tiergartenstrasse 4 (T4) marks the beginning of the euthanasia project, a secret initiative that strove to perfect the science behind extermination. This project quickly grew from a science experiment into a plague that invaded psychiatric asylums and pediatric wards, and eventually evolved into the main method of extermination in the Nazi concentration camps. In the years following the conclusion of the war, the world turned its face from the horrors associated with the Holocaust. Tactics, regimens, and beliefs established under the Nazi regime were abandoned and disregarded as inhumane – except for those discovered through the robust scientific experiments disguised in the name of medicine. How did we progress from dropping Zyklon B pellets into gas chambers to giving patients doses of anesthesia to sedate them for procedures? This paper analyzes the slow progression from madness to medicine, uncovering how Nazi medical experimentation slowly morphed into routine practices acknowledged in the medical field today.

    Linkage Knowledge Management and Data Mining in E-business: Case study


    Genomic insights into ayurvedic and western approaches to personalized medicine

    Ayurveda, an ancient Indian system of medicine documented and practised since 1500 B.C., follows a systems approach that has interesting parallels with contemporary personalized genomic medicine approaches to the understanding and management of health and disease. It is based on the trisutra, which are the three aspects of causes, features and therapeutics that are interconnected through a common organizing principle termed ‘tridosha’. Tridosha comprise three ascertainable physiological entities; vata (kinetic), pitta (metabolic) and kapha (potential) that are pervasive across systems, work in conjunction with each other, respond to the external environment and maintain homeostasis. Each individual is born with a specific proportion of tridosha that are not only genetically determined but also influenced by the environment during foetal development. Jointly they determine a person’s basic constitution, which is termed their ‘prakriti’. Development and progression of different diseases with their subtypes are thought to depend on the origin and mechanism of perturbation of the doshas, and the aim of therapeutic practice is to ensure that the doshas retain their homeostatic state. Similarly, western systems biology epitomized by translational P4 medicine envisages the integration of multiscalar genetic, cellular, physiological and environmental networks to predict phenotypic outcomes of perturbations. In this perspective article, we aim to outline the shape of a unifying scaffold that may allow the two intellectual traditions to enhance one another. Specifically, we illustrate how a unique integrative ‘Ayurgenomics’ approach can be used to integrate the trisutra concept of Ayurveda with genomics. We observe biochemical and molecular correlates of prakriti and show how these differ significantly in processes that are linked to intermediate patho-phenotypes, known to take a different course in different diseases.
We also observe a significant enrichment of highly connected hub genes that could explain differences in prakriti, focussing on EGLN1, a key oxygen sensor that differs between prakriti types and is linked to high-altitude adaptation. Integrating our observations with the current literature, we demonstrate how EGLN1 could qualify as a molecular equivalent of tridosha that can modulate different phenotypic outcomes in which hypoxia is a cause or a consequence, in both health and disease. Our studies affirm that integration of the trisutra framework through Ayurgenomics can guide the identification of predisposed groups of individuals and enable the discovery of actionable therapeutic points in an individualized manner.

    A semi-supervised Genetic Programming method for dealing with noisy labels and hidden overfitting

    Silva, S., Vanneschi, L., Cabral, A. I. R., & Vasconcelos, M. J. (2018). A semi-supervised Genetic Programming method for dealing with noisy labels and hidden overfitting. Swarm and Evolutionary Computation, 39(April), 323-338. DOI: 10.1016/j.swevo.2017.11.003

    Data gathered in the real world normally contains noise, stemming either from inaccurate experimental measurements or from human error. Our work deals with classification data where the attribute values were accurately measured, but the categories may have been mislabeled by a human annotator at several sample points, resulting in unreliable training data. Genetic Programming (GP) compares favorably with the Classification and Regression Trees (CART) method, but it is still highly affected by these errors. Despite consistently achieving high accuracy on both training and test sets, many classification errors are found in a later validation phase, revealing a previously hidden overfitting to the erroneous data. Furthermore, the evolved models frequently output raw values that are far from the expected range. To improve the behavior of the evolved models, we extend the original training set with additional sample points whose class label is unknown, and devise a simple way for GP to use this additional information and learn in a semi-supervised manner. The results are surprisingly good: in the presence of the exact same mislabeling errors, the additional unlabeled data allowed GP to evolve models that achieved high accuracy in the validation phase as well. This is a brand-new approach to semi-supervised learning that opens an array of possibilities for making the most of the abundance of unlabeled data available today, in a simple and inexpensive way.
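
The abstract does not spell out the semi-supervised mechanism, but one plausible sketch of how unlabeled points could enter a GP fitness function is to combine supervised error on the (possibly mislabeled) training data with a penalty for raw outputs that fall far outside the expected class-value range on the unlabeled points, addressing the out-of-range outputs the authors report. Everything below, including the penalty form and all names, is an illustrative assumption, not the paper's actual method.

```python
# Hypothetical semi-supervised fitness sketch (NOT the authors' method):
# supervised misclassification rate on labeled data, plus a penalty when
# raw outputs on unlabeled data stray outside the class-value range [0, 1].

def fitness(model, X_lab, y_lab, X_unlab, penalty_weight=0.5):
    # Supervised part: misclassification rate under a 0.5 threshold.
    errors = sum(1 for x, y in zip(X_lab, y_lab)
                 if (model(x) >= 0.5) != bool(y))
    sup_err = errors / len(X_lab)

    # Unsupervised part: mean distance of raw outputs from [0, 1],
    # discouraging wildly out-of-range values on unlabeled samples.
    def range_dist(v):
        return max(0.0, -v, v - 1.0)
    unsup_pen = sum(range_dist(model(x)) for x in X_unlab) / len(X_unlab)

    return sup_err + penalty_weight * unsup_pen  # lower is better

# Tiny usage example with a hand-written stand-in "model".
model = lambda x: 0.2 * x           # raw-output model, threshold at 0.5
X_lab, y_lab = [1, 4], [0, 1]       # both points classified correctly
X_unlab = [10]                      # raw output 2.0, far outside [0, 1]
print(round(fitness(model, X_lab, y_lab, X_unlab), 2))  # → 0.5
```

The penalty term only inspects raw outputs, so it needs no labels at all; in a GP setting it would simply be added to each candidate program's fitness during evolution.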

    APPLICATION OF A BACKPROPAGATION ARTIFICIAL NEURAL NETWORK FOR RAINFALL FORECASTING

    Rain is one of the weather elements most important to daily life. Many areas of life depend on the amount of rain that falls, known as rainfall. Nevertheless, rainfall that is too low or too high can cause disasters. Forecasting is therefore carried out to estimate future rainfall so that planning and mitigation are possible. In rainfall forecasting, two approaches can be taken: modeling the causal factors, or using historical rainfall data. One forecasting method that can be applied to historical data is the Artificial Neural Network (ANN). An ANN is an information-processing system whose characteristics resemble those of biological neural networks. An ANN requires a training process to obtain the connection weights that map each given input to the desired output. One ANN training algorithm is backpropagation. To make backpropagation training optimal, the Nguyen-Widrow weight-initialization technique is used together with an adaptive learning rate and momentum. Developing a rainfall-forecasting system with backpropagation involves several stages: data preprocessing, network-architecture design, construction of training and test data sets, initialization of the training data, modification of the backpropagation training algorithm, network testing, sensitivity analysis, and selection of the optimal network for rainfall forecasting. The study was conducted using a range of network architectures and data initializations. From the 36 cases examined, the network with the lowest test Mean Absolute Percentage Error (MAPE) was selected. Testing was performed with both the training and the test data. The results show a MAPE of 24.27% on the training data and 26.23% on the test data.
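
The training pipeline this abstract describes (backpropagation with momentum, evaluated by MAPE) can be illustrated with a deliberately tiny example. The network size, data, learning rate, and momentum below are illustrative assumptions rather than the study's actual configuration, and the Nguyen-Widrow initialization is approximated here by small random weights.

```python
# Minimal sketch: a one-hidden-layer network trained by backpropagation
# with momentum on a toy rainfall-like series, scored with MAPE.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mape(actual, predicted):
    # Mean Absolute Percentage Error, the evaluation metric named above.
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Toy series scaled into (0, 1): one lagged input per sample.
data = [(0.2, 0.25), (0.4, 0.45), (0.6, 0.62), (0.8, 0.78)]

# Two sigmoid hidden units; small random initial weights stand in for
# the Nguyen-Widrow initialization mentioned in the abstract.
w1 = [random.uniform(-0.5, 0.5) for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-0.5, 0.5) for _ in range(2)]
b2 = 0.0
v1 = [0.0, 0.0]  # momentum buffers
v2 = [0.0, 0.0]
lr, mom = 0.2, 0.8

for epoch in range(5000):
    for x, y in data:
        # Forward pass.
        h = [sigmoid(w1[i] * x + b1[i]) for i in range(2)]
        out = sigmoid(sum(w2[i] * h[i] for i in range(2)) + b2)
        # Backward pass (squared-error loss, sigmoid derivatives).
        d_out = (out - y) * out * (1 - out)
        for i in range(2):
            d_h = d_out * w2[i] * h[i] * (1 - h[i])
            # Momentum: blend the previous weight step into the new one.
            v2[i] = mom * v2[i] - lr * d_out * h[i]
            v1[i] = mom * v1[i] - lr * d_h * x
            w2[i] += v2[i]
            w1[i] += v1[i]
            b1[i] -= lr * d_h
        b2 -= lr * d_out

preds = [sigmoid(sum(w2[i] * sigmoid(w1[i] * x + b1[i]) for i in range(2)) + b2)
         for x, _ in data]
final_mape = mape([y for _, y in data], preds)
print(f"training MAPE: {final_mape:.2f}%")
```

An adaptive learning rate, the remaining technique the abstract names, would adjust `lr` between epochs based on whether the error decreased; it is omitted here to keep the sketch short.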

    Definition and limits of the legal protection of privacy in the era of big data analytics

    More than one hundred years after the first definitions of the right to privacy were formulated, the content of this right and the limits of its protection are still debated in the doctrine. Human rights systems tend to define privacy by means of an open catalogue of protected values. At the same time, in data protection law the scope of regulation is determined by the terms ‘personal data’ and ‘special categories of data’, whose definitions have remained essentially unchanged for over thirty years. The traditional division into vertical and horizontal privacy intrusions is no longer adequate for events taking place in cyberspace: the activities of public authorities and of specialized entities such as data brokers increasingly complement one another. Collecting vast amounts of data about hundreds of millions of users may intrude on the privacy not only of individuals but of entire societies. The purpose of this article is to determine whether the legal regulations already in force and being implemented, based on a definition of personal data adopted in the pre-Internet era, can effectively protect against the risks associated with modern data processing techniques such as Big Data. To this end, the most important features of Big Data are discussed, such as algorithmic knowledge building and the incremental effect, and it is explained how this technology allows the legal restrictions attached to different categories of personal data to be bypassed. The article concludes with a call for regulations dedicated to the market for the processing of large data sets.