6 research outputs found

    Limits and Risks of Digital Transformation

    Get PDF
    Currently, the process of digital transformation is actively going on in the economy, science, education, and society as a whole. This process has a number of restrictions and risks we consider. The mathematical theory of complexity reveals a large class of the restrictions. The exact solution of a number of simple-looking problems with a small amount of input data requires resources many times greater than the capabilities of all available computers.On the "border" between natural and artificial intelligence lies the "cognitive barrier". This, as a rule, makes it impossible to use the results of a number of artificial intelligence systems to adjust our strategies. We and computers "think" differently. They have to be considered as "black boxes". It is very likely that the tester of artificial intelligence systems will become one of the mass professions in the close future.We give examples to show that the "translation" from "continuous" to "discrete" language can lead to qualitatively different behavior of mathematical models. In a number of problems associated with a computational experiment this can be quite significant.Great risks arise when passing to the "fast world", approaching the "Lem's barrier". It happens when artificial intelligence systems are assigned strategically important tasks that they must solve at a speed inaccessible to humans.The analysis shows that managing the risks of digital transformation and its limitations requires the attention of the scientific and expert community, as well as active participants in this process.В настоящее время процесс цифровой трансформации активно идет в экономике, науке, образовании и в обществе в целом. С ним связан ряд ограничений и рисков, рассматриваемых в статье. Большой класс ограничений позволяет выявить математическая теория сложности. 
Точное решение ряда простых по виду проблем с небольшим объемом входных данных требует ресурсов, многократно превышающих возможности всех доступных компьютеров.На «границе» между естественным и искусственным интеллектом имеет место «когнитивный барьер». Это приводит к тому, что мы, как правило, не можем воспользоваться результатами работы ряда систем с искусственным интеллектом, чтобы скорректировать свои стратегии. Мы и машины «думаем» по-разному. Их приходится рассматривать как «черные ящики». Весьма вероятно, что тестер систем искусственного интеллекта станет одной из массовых профессий уже в недалеком будущем.Приведены примеры, показывающие, что «перевод» с «непрерывного» на «дискретный» язык может приводить к качественно различному поведению математических моделей. В ряде задач, связанных с вычислительным экспериментом, это может быть весьма существенно.Большие риски возникают при переходе в «быстрый мир», при приближении к «барьеру Лема». Это происходит, когда системам искусственного интеллекта препоручаются стратегически важные задачи, которые они должны решать в темпе, недоступном для человека.Проведенный анализ показывает, что управление рисками цифровой трансформации и её ограничений требует внимания научного и экспертного сообщества, а также активных участников этого процесса
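The abstract's claim that "translation" from continuous to discrete language can change a model's behavior qualitatively can be illustrated with the classic logistic example (a sketch of our own, not taken from the article; the step sizes are illustrative). The continuous logistic equation dx/dt = r·x·(1 − x) has solutions that rise monotonically from any x₀ in (0, 1) toward the equilibrium x = 1, yet a naive explicit-Euler discretization of the same equation becomes chaotic once the step h·r is large:

```python
# Explicit Euler for the logistic ODE dx/dt = r*x*(1 - x).
# The continuous solution from x0 in (0, 1) rises monotonically to 1,
# but the discrete scheme x_{n+1} = x_n + h*r*x_n*(1 - x_n) behaves
# qualitatively differently once the product h*r is large.

def euler_logistic(x0, r, h, steps):
    x = x0
    traj = [x]
    for _ in range(steps):
        x = x + h * r * x * (1.0 - x)
        traj.append(x)
    return traj

small = euler_logistic(0.1, 1.0, 0.1, 200)   # h*r = 0.1: faithful to the ODE
large = euler_logistic(0.1, 1.0, 3.0, 200)   # h*r = 3.0: chaotic oscillations

print(small[-1])     # settles near 1.0, as the ODE predicts
print(max(large))    # overshoots above 1.0, which the ODE solution never does
```

With h·r = 3 the scheme is conjugate to the fully chaotic logistic map, so the discrete trajectory overshoots the equilibrium and never settles, a qualitative difference produced purely by discretization.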

    Philosophy and Ideology of the Future in the Context of Modern Science

    No full text
    The article analyzes the issues of the philosophy and ideology of the future from the point of view of the theory of self-organization, or synergetics. This interdisciplinary approach allows researchers to focus their efforts on the key problems of the development of civilization and, in particular, on designing the future. The theory of valuable information developed by D.S. Chernavsky shows that there is a body of knowledge, skills, and abilities that increases the probability that its holders will survive and convey essential information to future generations. In the 21st century, civilizational choice and ideology will be such knowledge. Several "brief histories of the future" are currently popular, from S. Lem and A. Toffler to J. Attali and S.P. Huntington. All of them focus on projecting onto society the technical changes already achieved or promised by industry. D. Bell's theory of post-industrial development, as well as the actively developing theory of the humanitarian and technological revolution, shows that this is not correct. The key factor will be the image of the desired future, which underlies the ideology adopted by the elites and by public consciousness. The article shows that a choice is now being made between a society that adopts a new leftist ideology and a New Middle Ages. The current socio-economic and military-political instability in the world, clearly demonstrated by the COVID-19 pandemic, shows that without such a choice the world will remain dependent on the establishment and other centers of power pursuing their own, far from common, interests.

    Strategic Digital Reality Risks Management Technologies

    Full text link
    A number of strategic risks must be taken into account in the formation and development of computer reality. We estimate that both the possibility of some of these risks materializing and the likelihood of others are significantly underestimated. Nassim Taleb identified such risks as "black swans" and "Mandelbrot grey swans". They are characterized by power-law probability density distributions, which makes their parameters difficult to predict. We find ourselves in a "fast world", and the problem of overcoming "Lem's barrier" requires extensive study. Entrusting artificial intelligence systems with "final decisions" on risks of this magnitude is an unacceptable use of digital technology. As Lem noted, "unheard-of fast machines make mistakes unheard-of fast", and the cost of an error affecting the fate of humanity can be too high. International control mechanisms are needed, similar to those that in the 20th century restrained the quantitative growth and qualitative improvement of nuclear charges and their means of delivery, as well as the creation of such weapons. The risk analysis is based on the theory of the humanitarian and technological revolution associated with the transition from the industrial to the post-industrial phase of civilizational development, a gradual transition from the world of things to the world of people. A bifurcation point lies here, after which different trajectories are possible. One of these trajectories, in the terminology of J. Attali, is associated with the era of hypercontrol and the formation of a hyperempire. The founder of the Davos economic forum, K. Schwab, considers this scenario the main one. The analysis of this trajectory clarifies the huge risks along this path, as well as the need for efforts to preserve the gains of culture and avoid this scenario. Within the framework of the risk management paradigm, specific measures are proposed to avoid many negative consequences of the introduction of digital technologies.
Keywords: digital reality, risk management, strategic deterrence, humanitarian and technological revolution, digital education
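The "black swan" point above rests on power-law probability distributions. A minimal sketch (our own illustration, not from the article; the function names and parameters are assumptions) contrasts a Pareto power-law survival function P(X > x) = x^(−α) with an exponential tail, showing why extreme events retain non-negligible probability under power laws:

```python
import math

# Survival functions P(X > x): a Pareto power law with tail index alpha
# versus an exponential tail with rate 1. Power-law tails decay so slowly
# that extreme events keep appreciable probability, which is one reason
# the parameters of "black swan" processes are hard to estimate from data.

def pareto_tail(x, alpha=2.0, x_min=1.0):
    """P(X > x) for a Pareto distribution with scale x_min."""
    return (x / x_min) ** (-alpha) if x >= x_min else 1.0

def exp_tail(x, rate=1.0):
    """P(X > x) for an exponential distribution."""
    return math.exp(-rate * x)

for x in (2, 10, 50):
    print(x, pareto_tail(x), exp_tail(x))
# At x = 50 the power-law tail is about 4e-4 while the exponential
# tail is about 2e-22: the extreme event is vastly more likely.
```

For tail indices α ≤ 2 the variance is infinite (and for α ≤ 1 even the mean is), so sample averages and empirical risk estimates fail to stabilize, which is the formal content of Taleb's difficulty-of-prediction claim.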
