
    Future of the Artificial Intelligence: Object of Law or Legal Personality?

    Objective: to reveal the problems associated with the legal regulation of public relations in which artificial intelligence systems are used, and to rationally assess the possibility, currently debated by legal scholars, of endowing such systems with the status of a legal subject. Methods: the methodological basis of the research is formed by the general scientific methods of analysis and synthesis, analogy, abstraction and classification. The legal methods applied are primarily the formal-legal, comparative-legal and systemic-structural methods, as well as the methods of legal interpretation and legal modeling. Results: the authors review the state of artificial intelligence development and its introduction into practice as of the time of the research. The legal framework in this sphere is considered, and the key current concepts of endowing artificial intelligence with legal personality (individual, collective and gradient legal personality of artificial intelligence) are reviewed. Each approach is assessed, and conclusions are drawn as to the most preferable amendments to the current legislation, which no longer corresponds to reality. The growing inconsistency stems from the accelerated development of artificial intelligence and its spread across sectors of the economy, the social sphere and, in the near future, public administration. All this points to an increased risk of a gap between the law and a changing social reality. Scientific novelty: the scientific approaches that endow artificial intelligence with legal personality are classified. Within each approach, the key elements are identified whose combination will make it possible to create legal constructs that avoid extremes and maintain a balance between the interests of all parties. The optimal way to define the legal status of artificial intelligence may be to include intelligent systems in the list of objects of civil rights, while differentiating between the legal regulation of artificial intelligence as an object of law and of the "electronic agent" as a quasi-subject of law. The demarcation line should be drawn according to the functional differences between intelligent systems; not only a robot but also a virtual intelligent system can be considered an "electronic agent". Practical significance: the research materials can be used in preparing proposals for amendments and additions to the current legislation, as well as in developing academic courses and writing tutorials on topics related to the regulation of artificial intelligence use.

    A stochastic event-based approach for flood estimation in catchments with mixed rainfall and snowmelt flood regimes

    The estimation of extreme floods is associated with high uncertainty, in part due to the limited length of streamflow records. Traditionally, statistical flood frequency analysis and an event-based model (PQRUT) using a single design storm have been applied in Norway. Here we propose a stochastic PQRUT model, as an extension of the standard application of the event-based PQRUT model, which considers different combinations of initial conditions, rainfall and snowmelt, from which a distribution of flood peaks can be constructed. The stochastic PQRUT was applied to 20 small- and medium-sized catchments in Norway, and the results give good fits to observed peak-over-threshold (POT) series. A sensitivity analysis of the method indicates (a) that the soil saturation level is less important than the rainfall input and the parameters of the PQRUT model for flood peaks with return periods higher than 100 years and (b) that excluding the snow routine can change the seasonality of the flood peaks. Estimates for the 100- and 1000-year return levels based on the stochastic PQRUT model are compared with results for (a) statistical frequency analysis and (b) a standard implementation of the event-based PQRUT method. The differences in flood estimates between the stochastic PQRUT and the statistical flood frequency analysis are within 50 % in most catchments. However, the differences between the stochastic PQRUT and the standard implementation of the PQRUT model are much higher, especially in catchments with a snowmelt flood regime.
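    To make the stochastic event-based idea concrete, the sketch below mimics the workflow in plain Python: sample initial soil saturation, event rainfall and snowmelt, push each combination through an event-runoff model, and read return levels off the resulting peak distribution. The distributions, the toy runoff function and all parameter values are invented for illustration and are not the calibrated PQRUT model.

```python
# Illustrative Monte Carlo sketch of a stochastic event-based flood model
# (not the actual PQRUT implementation).
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                      # number of synthetic events

saturation = rng.uniform(0.2, 1.0, N)            # initial soil saturation [-]
rainfall = rng.gumbel(30.0, 12.0, N).clip(0)     # event rainfall depth [mm]
snowmelt = rng.exponential(5.0, N)               # event snowmelt depth [mm]

def toy_event_peak(p, melt, sat, area_km2=50.0, tc_h=6.0):
    """Toy stand-in for an event model such as PQRUT: effective precipitation
    scaled by saturation, converted to a peak discharge via a time of
    concentration. Purely illustrative."""
    effective_mm = (p + melt) * sat              # crude runoff coefficient
    volume_m3 = effective_mm * 1e-3 * area_km2 * 1e6
    return volume_m3 / (tc_h * 3600.0)           # peak discharge [m^3/s]

peaks = toy_event_peak(rainfall, snowmelt, saturation)

# Empirical return levels, assuming one simulated event per year on average
for T in (100, 1000):
    q = np.quantile(peaks, 1.0 - 1.0 / T)
    print(f"{T:>5}-year peak: {q:8.1f} m^3/s")
```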

    Flood Vulnerability Assessment of Urban Traditional Buildings in Kuala Lumpur, Malaysia

    Flood hazard is increasing in frequency and magnitude in major Southeast Asian metropolitan areas due to fast urban development and changes in climate, threatening people's property and lives. Typically, flood management actions are focused on large-scale defences, such as river embankments or discharge channels and tunnels. However, these are difficult to implement in historic centres without disturbing their heritage value, and they might not provide sufficient mitigation in these areas. Urban heritage buildings may therefore be particularly exposed to flood events, even when they were originally designed and built with intrinsic resilience measures based on the local knowledge of the natural environment and its threats at the time. Their attractiveness and their cultural and economic values mean that they can represent a proportionally high contribution to the losses of any event. Hence it is worth pursuing more localised, tailored mitigation measures. Vulnerability assessment studies are essential to inform the feasibility and development of such strategies. In the present paper we propose a multi-level methodology to assess the flood vulnerability of residential buildings in an area of Kuala Lumpur, Malaysia, characterised by traditional timber housing. The multi-scale flood vulnerability model is based on a wide range of parameters, covering building-specific parameters, neighbourhood conditions and catchment area conditions. Parameters for 163 buildings were measured in detail by field surveys integrated with Google Street View. The vulnerability model is combined with high-resolution fluvial and pluvial flood maps providing likely water depths for a range of flood return periods. The obtained vulnerability index shows the ability to reflect the different exposure of different building types and their relative locations. The study provides evidence that results obtained for a small district can be scaled up to city level, to inform both generic and specific protection strategies. The paper discusses these in relation to a scenario event of 0.1 % annual exceedance probability (AEP), based on hydrological and hydraulic models developed for the Disaster Resilient Cities Project.
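    As a rough illustration of how such a multi-scale index can be assembled, the sketch below combines normalised building, neighbourhood and catchment parameters into a single weighted score. The parameter names, weights and design depth are hypothetical, not the paper's calibrated model.

```python
# Illustrative multi-scale flood vulnerability index (invented parameters).
from dataclasses import dataclass

@dataclass
class BuildingSurvey:
    # Hypothetical survey fields; the real model uses a wider parameter set
    # collected from field surveys and Google Street View.
    floor_height_m: float      # lowest occupied floor above ground level
    material_score: float      # 0 (resistant) .. 1 (highly susceptible)
    maintenance_score: float   # 0 (good) .. 1 (poor)
    drainage_score: float      # neighbourhood drainage, 0 (good) .. 1 (poor)
    catchment_score: float     # upstream catchment condition, 0 .. 1

WEIGHTS = {  # hypothetical weights summing to 1
    "elevation": 0.30, "material": 0.25, "maintenance": 0.15,
    "drainage": 0.20, "catchment": 0.10,
}

def vulnerability_index(b: BuildingSurvey, design_depth_m: float = 2.0) -> float:
    """Weighted 0-1 index; higher means more vulnerable."""
    # A raised floor reduces vulnerability until it exceeds the design depth.
    elevation_score = max(0.0, 1.0 - b.floor_height_m / design_depth_m)
    return (WEIGHTS["elevation"] * elevation_score
            + WEIGHTS["material"] * b.material_score
            + WEIGHTS["maintenance"] * b.maintenance_score
            + WEIGHTS["drainage"] * b.drainage_score
            + WEIGHTS["catchment"] * b.catchment_score)

house = BuildingSurvey(0.6, 0.8, 0.5, 0.7, 0.4)   # e.g. a raised timber house
print(f"Vulnerability index: {vulnerability_index(house):.2f}")
```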

    Flood vulnerability and risk assessment of urban traditional buildings in a heritage district of Kuala Lumpur, Malaysia

    Flood hazard is increasing in frequency and magnitude in major Southeast Asian metropolitan areas due to fast urban development and changes in climate, threatening people's property and lives. Typically, flood management actions are mostly focused on large-scale defences, such as river embankments or discharge channels and tunnels. However, these are difficult to implement in town centres without affecting the value of their heritage districts, and they might not provide sufficient mitigation. Therefore, urban heritage buildings may become vulnerable to flood events, even when they were originally designed and built with intrinsic resilience measures based on the local knowledge of the natural environment and its threats at the time. Their aesthetic, cultural and economic values mean that they can represent a proportionally high contribution to losses in any event. Hence it is worth investigating more localized, tailored mitigation measures. Vulnerability assessment studies are essential to inform the feasibility and development of such strategies. In this study we propose a multilevel methodology to assess the flood vulnerability and risk of residential buildings in an area of Kuala Lumpur, Malaysia, characterized by traditional timber housing. The multiscale flood vulnerability model is based on a wide range of parameters, covering building-specific parameters, neighbourhood conditions and catchment area conditions. The obtained vulnerability index shows the ability to reflect the different exposure of different building types and their relative locations. The vulnerability model is combined with high-resolution fluvial and pluvial flood maps providing scenario events with 0.1 % annual exceedance probability (AEP). A damage function of generic applicability is developed to compute the economic losses at the individual building and sample levels. The study provides evidence that results obtained for a small district can be scaled up to the city level, to inform both generic and specific protection strategies.
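    The loss computation can be pictured with a simple depth-damage sketch: building value times a damage ratio that grows with water depth and is scaled by the vulnerability index. The curve shape and constants below are assumptions for illustration, not the paper's generic damage function.

```python
# Illustrative depth-damage loss estimate (invented curve and constants).
import math

def damage_ratio(depth_m: float, vulnerability: float) -> float:
    """Fraction of building value lost, in [0, 1].
    A saturating curve in depth, scaled by the 0-1 vulnerability index."""
    if depth_m <= 0.0:
        return 0.0
    base = 1.0 - math.exp(-depth_m / 1.5)     # saturates for deep floods
    return min(1.0, base * (0.5 + 0.5 * vulnerability))

def expected_loss(value: float, depth_m: float, vulnerability: float) -> float:
    return value * damage_ratio(depth_m, vulnerability)

# Hypothetical building: RM 400,000 replacement value, 0.9 m of water in the
# 0.1 % AEP scenario, vulnerability index 0.62
print(f"Loss: RM {expected_loss(400_000, 0.9, 0.62):,.0f}")
```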

    Shell-Model Effective Operators for Muon Capture in ^{20}Ne

    It has been proposed that the discrepancy between the partially conserved axial current (PCAC) prediction and nuclear shell-model calculations of the ratio $C_P/C_A$ in muon-capture reactions can be resolved in the case of $^{28}$Si by introducing effective transition operators. Recently there has been experimental interest in measuring the needed angular correlations also in $^{20}$Ne. Inspired by this, we have performed a shell-model analysis employing effective transition operators for the transition $^{20}\mathrm{Ne}(0^+_{\mathrm{g.s.}}) + \mu^- \to {}^{20}\mathrm{F}(1^+;\,1.057\ \mathrm{MeV}) + \nu_\mu$. Comparison of the calculated capture rates with existing data supports the use of effective transition operators. Based on our calculations, limits on the ratio $C_P/C_A$ can be extracted as soon as experimental anisotropy data become available. Comment: 9 pages, 3 figures included.
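    For context, the PCAC (pion-pole dominance) benchmark against which such shell-model results are usually compared can be written as follows; the momentum transfer value is the standard one for muon capture and the numerical estimate is a textbook figure, not a result of this paper:

$$
\frac{C_P}{C_A} \;\simeq\; \frac{2\, m_\mu m_N}{m_\pi^2 + Q^2}\,, \qquad Q^2 \approx 0.88\, m_\mu^2 \;\Rightarrow\; \frac{C_P}{C_A} \approx 6.8 .
$$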

    Comparative bioavailability of a newly developed Irbesartan 300 mg containing preparation

    Introduction: Irbesartan (CAS registry: 138402-11-6) is a potent, orally active, selective antagonist of the angiotensin II receptors (type AT1) indicated for the treatment of arterial hypertension and chronic heart failure. Aim: The objective of the present study was to demonstrate the bioequivalence of an oral test preparation (Irbesartan 300 mg film-coated tablets, Tchaikapharma High Quality Medicines Inc., Bulgaria) and a reference (Aprovel 300 mg film-coated tablets, Sanofi Clir SNC, France) by comparing the rate and extent of absorption of both products upon a single oral administration of the tablets under fasting conditions in healthy volunteers. Methodology: The study was carried out as a single-center, open-label, randomised, two-period, single-dose, crossover oral bioequivalence study in 40 healthy male and female subjects under fasting conditions. During each study period, blood samples for the analysis of irbesartan were taken prior to dosing and at 0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.5, 3, 3.5, 4, 5, 6, 8, 12, 24, 36, 48 and 72 hours after dosing. The separated plasma was analyzed in the bioanalytical division of Anapharm Europe with a validated method using reversed-phase high-performance liquid chromatography coupled to a tandem mass spectrometry detector (RP-LC/MS/MS). Results: The point estimates with 90% confidence intervals of the geometric mean ratios of test and reference (T/R) were found to be 102.39% (95.55%-109.71%) for Cmax and 98.56% (92.72%-104.76%) for AUC0-72. Thus, the ratios for Cmax and AUC0-72 met the predetermined criterion for bioequivalence (90% confidence intervals of the geometric mean ratios of test and reference within the range 80.00%-125.00%). Both products were generally very well tolerated. Conclusions: Irbesartan 300 mg film-coated tablets (Tchaikapharma High Quality Medicines Inc., Bulgaria) and Aprovel 300 mg film-coated tablets (Sanofi Clir SNC, France) are bioequivalent with regard to the rate and extent of absorption.
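    The acceptance criterion can be made concrete with a short sketch: on the log scale, build a 90% confidence interval for the test/reference geometric mean ratio and compare it with the 80.00%-125.00% range. The paired-difference approach and the Cmax values below are simplifications and invented numbers; the study itself would use a full crossover ANOVA with sequence, period and subject effects.

```python
# Minimal sketch of the standard bioequivalence calculation on log scale
# (not the study's actual statistical code).
import numpy as np
from scipy import stats

def be_ratio_90ci(cmax_test, cmax_ref):
    """90% CI for the geometric mean ratio from paired log-scale differences."""
    d = np.log(np.asarray(cmax_test)) - np.log(np.asarray(cmax_ref))
    n = d.size
    mean, sem = d.mean(), d.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.95, df=n - 1)          # two-sided 90% interval
    return np.exp(mean), np.exp(mean - t * sem), np.exp(mean + t * sem)

# Hypothetical per-subject Cmax values (ng/mL), same subject order in both arms
test = [2100, 1850, 2400, 1990, 2250]
ref = [2050, 1900, 2300, 2010, 2200]
point, lo, hi = be_ratio_90ci(test, ref)
bioequivalent = 0.80 <= lo and hi <= 1.25     # 80.00%-125.00% acceptance range
print(f"GMR = {point:.4f}, 90% CI = ({lo:.4f}, {hi:.4f}), BE: {bioequivalent}")
```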

    Technical design and performance of the NEMO3 detector

    The development of the NEMO3 detector, which is now running in the Fréjus Underground Laboratory (LSM, Laboratoire Souterrain de Modane), began more than ten years ago. The NEMO3 detector uses a tracking-calorimeter technique to investigate double beta decay processes for several isotopes. A technical description of the detector is followed by a presentation of its performance. Comment: Preprint submitted to Nucl. Instrum. Methods A. Corresponding author: Corinne Augier ([email protected]).

    Handbook on the Carpathian Convention

    This volume describes the Carpathian Convention article by article: the principles of international environmental law behind each article, useful examples of best practices, and a detailed overview of the international documents providing guidance for its implementation. It is targeted at policy makers and all stakeholders involved in the implementation of the Convention itself.

    Adaptive Evolution in Zinc Finger Transcription Factors

    The majority of human genes are conserved among mammals, but some gene families have undergone extensive expansion in particular lineages. Here, we present an evolutionary analysis of one such gene family, the poly-zinc-finger (poly-ZF) genes. The human genome encodes approximately 700 members of the poly-ZF family of putative transcriptional repressors, many of which have associated KRAB, SCAN, or BTB domains. Analysis of the gene family across the tree of life indicates that it arose from a small ancestral group of eukaryotic zinc-finger transcription factors through repeated gene duplications accompanied by functional divergence. The ancestral gene family has probably expanded independently in several lineages, including mammals and some fishes. Investigation of adaptive evolution among recent paralogs using dN/dS analysis indicates that a major component of the selective pressure acting on these genes has been positive selection to change their DNA-binding specificity. These results suggest that the poly-ZF genes are a major source of new transcriptional repression activity in humans and other primates.
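    For intuition about the dN/dS analysis, the sketch below computes a naive, uncorrected pN/pS ratio for two aligned codon sequences; values above 1 point towards positive selection, for example on the DNA-contacting residues of a zinc finger. A real analysis (e.g. codon models with multiple-hit corrections, as in PAML) is more involved, and the sequences here are hypothetical.

```python
# Naive, uncorrected pN/pS for two aligned, gap-free coding sequences
# (intuition only; not the paper's actual selection analysis).
from itertools import product

BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODON_TABLE = {"".join(c): AMINO[i] for i, c in enumerate(product(BASES, repeat=3))}

def count_syn_sites(codon):
    """Approximate synonymous site count for one codon (0..3)."""
    syn = 0.0
    for pos in range(3):
        for b in BASES:
            if b == codon[pos]:
                continue
            mutant = codon[:pos] + b + codon[pos + 1:]
            if CODON_TABLE[mutant] == CODON_TABLE[codon]:
                syn += 1.0 / 3.0
    return syn

def naive_dn_ds(seq1, seq2):
    S = N = sd = nd = 0.0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        s = count_syn_sites(c1)
        S += s
        N += 3 - s
        if c1 != c2:  # one difference per changed codon (simplification)
            if CODON_TABLE[c1] == CODON_TABLE[c2]:
                sd += 1
            else:
                nd += 1
    pN, pS = nd / N, sd / S
    return pN / pS if pS > 0 else float("inf")

# Two hypothetical zinc-finger codon stretches differing at contact residues
a = "TGTGAAGAATGCGGCAAA"
b = "TGTCAAGATTGCGGAAAA"
print(f"naive dN/dS = {naive_dn_ds(a, b):.2f}")
```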

    Broad targeting of resistance to apoptosis in cancer

    Apoptosis, or programmed cell death, is the natural way of removing aged cells from the body. Most anti-cancer therapies trigger apoptosis induction and related cell death networks to eliminate malignant cells. However, in cancer, de-regulated apoptotic signaling, particularly the activation of anti-apoptotic systems, allows cancer cells to escape this program, leading to uncontrolled proliferation and resulting in tumor survival, therapeutic resistance and recurrence of cancer. This resistance is a complicated phenomenon that emanates from the interactions of various molecules and signaling pathways. In this comprehensive review we discuss the various factors contributing to apoptosis resistance in cancers. The key resistance targets discussed include (1) Bcl-2 and Mcl-1 proteins; (2) autophagy processes; (3) necrosis and necroptosis; (4) heat shock protein signaling; (5) the proteasome pathway; (6) epigenetic mechanisms; and (7) aberrant nuclear export signaling. The shortcomings of current therapeutic modalities are highlighted, and a broad-spectrum strategy using approaches including (a) gossypol, (b) epigallocatechin-3-gallate, (c) UMI-77, (d) triptolide and (e) selinexor that can be used to overcome cell death resistance is presented. This review provides a roadmap for the design of successful anti-cancer strategies that overcome resistance to apoptosis for better therapeutic outcomes in patients with cancer.