64 research outputs found
The Ironies of Automation Law: Tying Policy Knots with Fair Automation Practices Principles
Rapid developments in sensors, computing, and robotics, including power, kinetics, control, telecommunication, and artificial intelligence, have presented opportunities to further integrate sophisticated automation across society. With these opportunities come questions about the ability of current laws and policies to protect important social values that new technologies may threaten. As sophisticated automation moves beyond the cages of factories and cockpits, the need for a legal approach suitable to guide an increasingly automated future becomes more pressing. This Article analyzes examples of legal approaches to automation thus far by legislative, administrative, judicial, state, and international bodies. The case studies reveal an interesting irony: while automation regulation is intended to protect and promote human values, by focusing on the capabilities of the automation, this approach results in less protection of human values. The irony is similar to those pointed out by Lisanne Bainbridge in 1983, when she described how designing automation to improve the life of the operator using an automation-centered approach actually made the operator's life worse and more difficult. The ironies that result from automation-centered legal approaches are a product of the neglect of the sociotechnical nature of automation: the relationships between man and machine are situated and interdependent, humans will always be in the loop, and reactive policies ignore the need for general guidance for ethical and accountable automation design and implementation. Like system engineers three decades ago, policymakers must adjust the focus of legal treatment of automation to recognize the interdependence of man and machine, to avoid the ironies of automation law, and to meet the goals of ethical integration. Meg Leta (Ambrose) Jones, J.D., Ph.D., is an Assistant Professor of Communication.
The Article proposes that the existing models utilized for the safe implementation of automated system design be supplemented with principles to guide ethical and sociotechnical legal approaches to automation.
A review of the use of artificial intelligence methods in infrastructure systems
The artificial intelligence (AI) revolution offers significant opportunities to capitalise on the growth of digitalisation and has the potential to enable the ‘system of systems’ approach required in increasingly complex infrastructure systems. This paper reviews the extent to which research in economic infrastructure sectors has engaged with fields of AI, to investigate the specific AI methods chosen and the purposes to which they have been applied both within and across sectors. Machine learning is found to dominate the research in this field, with methods such as artificial neural networks, support vector machines, and random forests among the most popular. The automated reasoning technique of fuzzy logic has also seen widespread use, due to its ability to incorporate uncertainties in input variables. Across the infrastructure sectors of energy, water and wastewater, transport, and telecommunications, the main purposes to which AI has been applied are network provision, forecasting, routing, maintenance and security, and network quality management. The data-driven nature of AI offers significant flexibility, and work has been conducted across a range of network sizes and at different temporal and geographic scales. However, there remains a lack of integration of planning and policy concerns, such as stakeholder engagement and quantitative feasibility assessment, and the majority of research focuses on a specific type of infrastructure, with an absence of work beyond individual economic sectors. To enable solutions to be implemented into real-world infrastructure systems, research will need to move away from a siloed perspective and adopt a more interdisciplinary perspective that considers the increasing interconnectedness of these systems
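To illustrate why fuzzy logic is well suited to uncertain input variables in infrastructure applications, the sketch below implements a minimal triangular-membership rule base in plain Python. The scenario (water-pipe maintenance), variable names, and thresholds are invented for illustration and are not drawn from the reviewed studies.

```python
def tri(x, a, b, c):
    """Triangular membership: 0 at a, rising to 1 at peak b, falling back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def maintenance_urgency(pipe_age_years, leak_rate):
    """Toy fuzzy rule base for a water network (hypothetical example).

    Rule 1: IF age is old AND leak rate is high THEN urgency is high (0.9).
    Rule 2: IF age is new                       THEN urgency is low  (0.1).
    Firing strengths (min models AND) weight the crisp urgency levels.
    """
    old = tri(pipe_age_years, 20, 50, 80)
    new = tri(pipe_age_years, -1, 0, 25)
    high_leak = tri(leak_rate, 0.3, 1.0, 1.7)

    r1 = min(old, high_leak)   # strength of rule 1
    r2 = new                   # strength of rule 2
    total = r1 + r2
    if total == 0:
        return 0.5  # no rule fires: fall back to a neutral default
    # Weighted-average defuzzification over the rule outputs.
    return (r1 * 0.9 + r2 * 0.1) / total
```

Because membership is graded rather than binary, inputs near a threshold (e.g. a 25-year-old pipe) contribute partially to several rules instead of flipping an all-or-nothing decision, which is the property the surveyed work exploits for uncertain data.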
Parking guiding system with occupation prediction
Parking availability is an increasingly scarce and expensive resource within large
cities, and this problem is considered one of the most critical transportation
management challenges in a big city. To approach this problem, a proof of concept
is presented as a way to guide a driver to a likely free parking lot through a
prediction process using past data, correlated with traffic, weather conditions and
time period features (year, month, day, holidays, and so on).
Feature selection was performed through the study of data patterns, in order to
understand parking lot affluence and how certain features influence it, as well
as to comprehend the sudden changes in the total occupation of the parking lot
and identify which features really matter and have an impact on total occupation.
These conclusions helped to create a robust and efficient predictive model to
predict the parking lot availability rate more accurately.
Three algorithms were used to build the predictive models in order to identify
the most efficient and accurate one, namely Gradient Boosting Machine, Decision
Random Forest and Neural Networks. Various types of models were tested with
the aim of improving the results obtained, as well as understanding the impact of
each data-processing step.
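A comparison of the three model families described above can be sketched with scikit-learn on synthetic occupancy data. This is a minimal illustration only: the feature names, the synthetic target, and the use of scikit-learn's standard Random Forest are assumptions, not the thesis's actual data, pipeline, or "Decision Random Forest" variant.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
hour = rng.integers(0, 24, n)
weekday = rng.integers(0, 7, n)
holiday = rng.integers(0, 2, n)
rain = rng.random(n)
# Synthetic occupancy rate in [0, 1]: busier at midday, lower on holidays/weekends.
occupancy = np.clip(
    0.5 + 0.3 * np.sin((hour - 6) * np.pi / 12)
    - 0.2 * holiday - 0.1 * (weekday >= 5) + 0.05 * rain
    + rng.normal(0, 0.05, n),
    0, 1)

X = np.column_stack([hour, weekday, holiday, rain])
X_tr, X_te, y_tr, y_te = train_test_split(X, occupancy, random_state=0)

# The three model families compared in the work, with default-ish settings.
models = {
    "gbm": GradientBoostingRegressor(random_state=0),
    "rf": RandomForestRegressor(n_estimators=100, random_state=0),
    "nn": MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
scores = {name: mean_absolute_error(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

Ranking the entries of `scores` by mean absolute error reproduces the kind of head-to-head comparison the abstract describes.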
To complement this, a decision algorithm was created to guide the driver to the
parking lot presenting the best conditions, taking into account the location and
driver characteristics, such as the lot most likely to have an available parking
space, the one closest to the user's current position, or the one with a more
attractive price for the driver. Finally, these developments are integrated into
a mobile application that serves as an interface with which the driver can
interact.
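A decision algorithm of the kind described, trading off availability, distance, and price, can be sketched as a weighted score over candidate lots. The weights, field names, and normalisation constants below are hypothetical; the thesis's actual scoring rule is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class ParkingLot:
    name: str
    p_free: float        # predicted probability of a free space (from the model)
    distance_km: float   # distance from the driver's current position
    price_per_hour: float

def best_lot(lots, w_free=0.5, w_dist=0.3, w_price=0.2,
             max_dist_km=5.0, max_price=5.0):
    """Rank lots by a weighted score; the weights encode driver preferences."""
    def score(lot):
        # Normalise distance and price into [0, 1], higher is better.
        dist_term = max(0.0, 1.0 - lot.distance_km / max_dist_km)
        price_term = max(0.0, 1.0 - lot.price_per_hour / max_price)
        return w_free * lot.p_free + w_dist * dist_term + w_price * price_term
    return max(lots, key=score)
```

Adjusting the weights changes the recommendation: a driver who values a short walk can raise `w_dist`, which may favour a nearby lot even when a farther lot is more likely to have free spaces.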
AI: Limits and Prospects of Artificial Intelligence
The emergence of artificial intelligence has triggered enthusiasm and promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attending economic hopes and fears, utopias and dystopias that are associated with the current and future development of artificial intelligence
Regulating by Robot: Administrative Decision Making in the Machine-Learning Era
Machine-learning algorithms are transforming large segments of the economy, underlying everything from product marketing by online retailers to personalized search engines, and from advanced medical imaging to the software in self-driving cars. As machine learning’s use has expanded across all facets of society, anxiety has emerged about the intrusion of algorithmic machines into facets of life previously dependent on human judgment. Alarm bells sounding over the diffusion of artificial intelligence throughout the private sector only portend greater anxiety about digital robots replacing humans in the governmental sphere. A few administrative agencies have already begun to adopt this technology, while others have the clear potential in the near-term to use algorithms to shape official decisions over both rulemaking and adjudication. It is no longer fanciful to envision a future in which government agencies could effectively make law by robot, a prospect that understandably conjures up dystopian images of individuals surrendering their liberty to the control of computerized overlords. Should society be alarmed by governmental use of machine learning applications? We examine this question by considering whether the use of robotic decision tools by government agencies can pass muster under core, time-honored doctrines of administrative and constitutional law. At first glance, the idea of algorithmic regulation might appear to offend one or more traditional doctrines, such as the nondelegation doctrine, procedural due process, equal protection, or principles of reason-giving and transparency. We conclude, however, that when machine-learning technology is properly understood, its use by government agencies can comfortably fit within these conventional legal parameters. We recognize, of course, that the legality of regulation by robot is only one criterion by which its use should be assessed. 
Obviously, agencies should not apply algorithms cavalierly, even if doing so might not run afoul of the law, and in some cases, safeguards may be needed for machine learning to satisfy broader, good-governance aspirations. Yet in contrast with the emerging alarmism, we resist any categorical dismissal of a future administrative state in which key decisions are guided by, and even at times made by, algorithmic automation. Instead, we urge that governmental reliance on machine learning should be approached with measured optimism over the potential benefits such technology can offer society by making government smarter and its decisions more efficient and just
Interrogating Datafication
What constitutes a data practice and how do contemporary digital media technologies reconfigure our understanding of practices in general? Autonomously acting media, distributed digital infrastructures, and sensor-based media environments challenge the conditions of accounting for data practices both theoretically and empirically. Which forms of cooperation are constituted in and by data practices? And how are human and nonhuman agencies distributed and interrelated in data-saturated environments? The volume collects theoretical, empirical, and historiographical contributions from a range of international scholars to shed light on the current shift from media to data practices
Interrogating Datafication: Towards a Praxeology of Data
What constitutes a data practice and how do contemporary digital media technologies reconfigure our understanding of practices in general? Autonomously acting media, distributed digital infrastructures, and sensor-based media environments challenge the conditions of accounting for data practices both theoretically and empirically. Which forms of cooperation are constituted in and by data practices? And how are human and nonhuman agencies distributed and interrelated in data-saturated environments? The volume collects theoretical, empirical, and historiographical contributions from a range of international scholars to shed light on the current shift from media to data practices
Anomalous behaviour detection for cyber defence in modern industrial control systems
A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. The fusion of pervasive internet connectivity and emerging technologies in smart cities creates fragile cyber-physical-natural ecosystems. Industrial Control Systems (ICS) are intrinsic parts of smart cities and critical to modern societies. Although ICS were not designed for interconnectivity or security, disruptor technologies enable ubiquitous computing in modern ICS. Aided by artificial intelligence and the industrial internet of things, they transform the ICS environment towards better automation, process control and monitoring. However, investigations reveal that leveraging disruptive technologies in ICS creates security challenges exposing critical infrastructure to sophisticated threat actors, including increasingly hostile, well-organised cybercrimes and Advanced Persistent Threats. Besides external factors, the prevalence of insider threats includes malicious intent, accidental hazards and professional errors. The sensing capabilities create opportunities to capture various data types. Apart from operational use, this data combined with artificial intelligence can be innovatively utilised to model anomalous behaviour as part of defence-in-depth strategies. As such, this research aims to investigate and develop a security mechanism to improve cyber defence in ICS.
Firstly, this thesis contributes a Systematic Literature Review (SLR), which helps analyse frameworks and systems that address CPS’ cyber resilience and digital forensic incident response in smart cities. The SLR uncovers emerging themes and concludes several key findings. For example, the chronological analysis reveals key influencing factors, whereas the data source analysis points to a lack of real CPS datasets with prevalent utilisation of software and infrastructure-based simulations.
Further in-depth analysis shows that cross-sector proposals or applications to improve digital forensics focusing on cyber resilience are addressed by a small number of research studies in some smart sectors.
Next, this research introduces a novel super learner ensemble anomaly detection and cyber risk quantification framework to profile anomalous behaviour in ICS and derive a cyber risk score. The proposed framework and associated learning models are experimentally validated. The produced results are promising, achieving an overall F1-score of 99.13% and an anomalous recall score of 99%, detecting anomalies lasting as little as 17 seconds and ranging from 0.5% to 89% of the dataset.
Further, a one-class classification model is developed, leveraging stream rebalancing followed by adaptive machine learning algorithms and drift detection methods. The model is experimentally validated, producing promising results, including an overall Matthews Correlation Coefficient (MCC) score of 0.999 and a Cohen's Kappa (K) score of 0.9986 on data streams limited to a single type of anomalous behaviour. Wide data streams achieve an MCC score of 0.981 and a K score of 0.9808 in the presence of multiple types of anomalous instances.
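For reference, the two metrics the thesis reports, MCC and Cohen's kappa, can both be computed directly from a binary confusion matrix. This is a generic stdlib sketch of the standard definitions, not the thesis's own evaluation code.

```python
import math

def mcc_and_kappa(tp, fp, fn, tn):
    """Matthews Correlation Coefficient and Cohen's kappa from 2x2 confusion counts."""
    n = tp + fp + fn + tn
    # MCC: correlation between predicted and actual labels.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    # Cohen's kappa: agreement corrected for chance.
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe) if pe != 1 else 0.0
    return mcc, kappa
```

Both metrics discount the trivially high accuracy a classifier can achieve on heavily imbalanced streams, which is why they are preferred over plain accuracy for anomaly detection.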
Additionally, the thesis scrutinises the applicability of the learning models to support digital forensic readiness. The research study presents the concept of digital witness and digital chain of custody in ICS. Following that, a use case integrating blockchain technologies into the design of ICS to support digital forensic readiness is discussed.
In conclusion, the contributions of this research thesis help towards developing the next generation of state-of-the-art methods for anomalous behaviour detection in ICS defence-in-depth
The Technological Emergence of AutoML: A Survey of Performant Software and Applications in the Context of Industry
With most technical fields, there exists a delay between fundamental academic
research and practical industrial uptake. Whilst some sciences have robust and
well-established processes for commercialisation, such as the pharmaceutical
practice of regimented drug trials, other fields face transitory periods in
which fundamental academic advancements diffuse gradually into the space of
commerce and industry. For the still relatively young field of
Automated/Autonomous Machine Learning (AutoML/AutonoML), that transitory period
is under way, spurred on by a burgeoning interest from broader society. Yet, to
date, little research has been undertaken to assess the current state of this
dissemination and its uptake. Thus, this review makes two primary contributions
to knowledge around this topic. Firstly, it provides the most up-to-date and
comprehensive survey of existing AutoML tools, both open-source and commercial.
Secondly, it motivates and outlines a framework for assessing whether an AutoML
solution designed for real-world application is 'performant'; this framework
extends beyond the limitations of typical academic criteria, considering a
variety of stakeholder needs and the human-computer interactions required to
service them. Thus, additionally supported by an extensive assessment and
comparison of academic and commercial case-studies, this review evaluates
mainstream engagement with AutoML in the early 2020s, identifying obstacles and
opportunities for accelerating future uptake