    Free-libre open source software as a public policy choice

    Free Libre Open Source Software (FLOSS) is characterised by a specific programming and development paradigm. The availability and freedom of use of source code are at the core of this paradigm and are the prerequisites for FLOSS features. Unfortunately, the fundamental role of code is often ignored by those who decide software purchases for Canadian public agencies. Source code availability and the connected freedoms are often seen as unrelated and accidental aspects, and the only advantage widely acknowledged, the absence of royalty fees, becomes paramount. In this paper we discuss some relevant legal issues and explain why public administrations should choose FLOSS for their technological infrastructure. We also present the results of a survey on the penetration and awareness of FLOSS within the Government of Canada. The data show that the Government of Canada has no enforced policy regarding the implementation of a specific technological framework (a choice with legal, economic, business, and ethical repercussions) in its departments and agencies.

    Social justice and an information democracy with free and open source software

    This paper offers some thoughts on the implications of proprietary software versus free and open source software with regard to social justice, capital, and notions of an information society versus an information democracy. It outlines what free and open source software is and why it matters for social justice, and it presents three cases that highlight two salient themes: one case about preference ordering and decision-making, and two cases about knowing and knowledge.

    "Influence Sketching": Finding Influential Samples In Large-Scale Regressions

    There is an especially strong need in modern large-scale data analysis to prioritize samples for manual inspection. For example, the inspection could target important mislabeled samples or key vulnerabilities exploitable by an adversarial attack. In order to solve the "needle in the haystack" problem of which samples to inspect, we develop a new scalable version of Cook's distance, a classical statistical technique for identifying samples which unusually strongly impact the fit of a regression model (and its downstream predictions). In order to scale this technique up to very large and high-dimensional datasets, we introduce a new algorithm which we call "influence sketching." Influence sketching embeds random projections within the influence computation; in particular, the influence score is calculated using the randomly projected pseudo-dataset from the post-convergence Generalized Linear Model (GLM). We validate that influence sketching can reliably and successfully discover influential samples by applying the technique to a malware detection dataset of over 2 million executable files, each represented with almost 100,000 features. For example, we find that randomly deleting approximately 10% of training samples reduces predictive accuracy only slightly, from 99.47% to 99.45%, whereas deleting the same number of samples with high influence sketch scores reduces predictive accuracy all the way down to 90.24%. Moreover, we find that influential samples are especially likely to be mislabeled. In the case study, we manually inspect the most influential samples and find that influence sketching pointed us to new, previously unidentified pieces of malware.
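    To make the idea above concrete, the sketch below shows one plausible way to compute sketched influence scores for a fitted logistic-regression GLM in NumPy: the weighted "pseudo-dataset" is randomly projected down to a low-dimensional sketch before leverage-based, Cook's-distance-style scores are computed. The function name, the Gaussian projection, and the logistic weighting are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def influence_sketch_scores(X, y, p_hat, k=512, seed=0):
            """Approximate per-sample influence scores for a fitted logistic GLM.
            X: (n, d) design matrix; y: 0/1 labels; p_hat: fitted probabilities.
            Hypothetical sketch of the "influence sketching" idea, not the paper's code."""
            rng = np.random.default_rng(seed)
            n, d = X.shape
            w = p_hat * (1.0 - p_hat)                  # IRLS weights at convergence
            Xw = X * np.sqrt(w)[:, None]               # weighted pseudo-dataset
            Omega = rng.standard_normal((d, k)) / np.sqrt(k)
            S = Xw @ Omega                             # (n, k) randomly projected pseudo-dataset
            G_inv = np.linalg.pinv(S.T @ S)            # (k, k) Gram inverse on the sketch
            lev = np.einsum('ij,jk,ik->i', S, G_inv, S)    # approximate leverages
            lev = np.clip(lev, 0.0, 1.0 - 1e-8)
            resid = (y - p_hat) / np.sqrt(w + 1e-12)   # Pearson-style residuals
            # Cook's-distance-style score: large residual and large leverage => influential.
            return (resid ** 2) * lev / (1.0 - lev) ** 2

    Samples with the largest returned scores would then be the natural candidates for manual inspection or label-quality review.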

    Regulating Habit-Forming Technology

    Tech developers, like slot machine designers, strive to maximize the user’s “time on device.” They do so by designing habit-forming products: products that draw consciously on the same behavioral design strategies that the casino industry pioneered. The predictable result is that most tech users spend more time on device than they would like, about five hours of phone time a day, while a substantial minority develop life-changing behavioral problems similar to problem gambling. Other countries have begun to regulate habit-forming tech, and American jurisdictions may soon follow suit. Several state legislatures today are considering bills to regulate “loot boxes,” a highly addictive slot-machine-like mechanic that is common in online video games. The Federal Trade Commission has also announced an investigation into the practice. As public concern mounts, it is surprisingly easy to envision consumer regulation extending beyond video games to other types of apps. Just as tobacco regulations might prohibit brightly colored packaging and fruity flavors, a social media regulation might limit the use of red notification badges or “streaks” that reward users for daily use. It is unclear how much of this regulation could survive First Amendment scrutiny; software, unlike other consumer products, is widely understood as a form of protected “expression.” But it is also unclear whether well-drawn laws to combat compulsive technology use would seriously threaten First Amendment values. At a very low cost to the expressive interests of tech companies, these laws may well enhance the quality and efficacy of online speech by mitigating distraction and promoting deliberation.

    Moving from a "human-as-problem" to a "human-as-solution" cybersecurity mindset

    Cybersecurity has gained prominence, with a number of widely publicised security incidents, hacking attacks and data breaches reaching the news over the last few years. The escalation in the number of cyber incidents shows no sign of abating, so it seems appropriate to look at the way cybersecurity is conceptualised and to consider whether a mindset change is needed. To consider this question, we applied a "problematization" approach to assess current conceptualisations of the cybersecurity problem by government, industry and hackers. Our analysis revealed that individual human actors, in a variety of roles, are generally considered to be "a problem". We also discovered that deployed solutions primarily focus on preventing adverse events by building resistance: i.e. implementing new security layers and policies that control humans and constrain their problematic behaviours. In essence, this treats all humans in the system as if they might well be malicious actors, and the solutions are designed to prevent their ill-advised behaviours. Given the continuing incidence of data breaches and successful hacks, it seems wise to rethink this status quo approach, which we refer to as "Cybersecurity, Currently". In particular, we suggest that there is a need to reconsider the core assumptions and characterisations of the well-intentioned human's role in the cybersecurity socio-technical system. Treating everyone as a problem does not seem to work, given the current cybersecurity landscape. Benefiting from research in other fields, we propose a new mindset, "Cybersecurity, Differently". This approach rests on recognising that the problem is actually the high complexity, interconnectedness and emergent qualities of socio-technical systems. The "differently" mindset acknowledges the well-intentioned human's ability to be an important contributor to organisational cybersecurity, as well as their potential to be "part of the solution" rather than "the problem". In essence, this new approach initially treats all humans in the system as if they are well-intentioned. The focus is on enhancing factors that contribute to positive outcomes and resilience. We conclude by proposing a set of key principles and, with the help of a prototypical fictional organisation, consider how this mindset could enhance and improve cybersecurity across the socio-technical system.

    Reading a Protoevangelium in the Context of Genesis

    This article proposes that the case for a ‘messianic’ reading of Gen. 3:15 is cumulative. No single argument is decisive, and it is virtually impossible to sustain a robust protevangelium interpretation of this text within the context of Gen. 3 alone. However, as already pointed out in the introduction, isolating Gen. 3 from its literary and historical context in the book of Genesis does not lead to a fruitful resolution of its meaning, but at best creates a hypothetical reconstructed meaning behind the text which becomes difficult to sustain in light of the interpretation of the ‘seed’ in the entire book. Though the lexical evidence by itself is somewhat ambiguous, the individual meaning for the term ‘seed’ is certainly plausible, as demonstrated by its usage within the book of Genesis and in the rest of the Hebrew Bible. Further, when the text is read in the context of the first and second toledots in the Primeval History, not to mention in light of the macro-toledot structure of the entire book of Genesis, we would agree with T. D. Alexander’s statement that, “in the light of Genesis as a whole, a messianic reading of this verse is not only possible but highly probable”.

    Endogenous space in the Net era

    Libre Software communities are among the most interesting and advanced socio-economic laboratories on the Net. In terms of directions for Regional Science research, this paper addresses a simple question: “Is the socio-economics of digital nets out of scope for Regional Science, or might the latter expand into a cybergeography of digitally enhanced territories?” As with most simple questions, the answers are neither obvious nor easy. The authors start drafting one in a positive sense, focussing on a fil rouge that runs across the paper: the endogenous spaces woven by socio-economic processes. The drafted answer unfolds into an Evolutionary Location Theory formulation, together with two computational modelling views. Keywords: Complex networks, Computational modelling, Economics of Internet, Endogenous spaces, Evolutionary location theory, Free or Libre Software, Path dependence, Positionality.

    Toward Open Science at the European Scale: Geospatial Semantic Array Programming for Integrated Environmental Modelling

    [Excerpt] Interfacing science and policy raises challenging issues when large spatial-scale (regional, continental, global) environmental problems need transdisciplinary integration within a context of modelling complexity and multiple sources of uncertainty. This is characteristic of science-based support for environmental policy at the European scale, and key aspects have also long been investigated by European Commission transnational research. Approaches (whether of computational science or of policy-making) suitable at a given domain-specific scale may not be appropriate for wide-scale transdisciplinary modelling for environment (WSTMe) and the corresponding policy-making. In WSTMe, the characteristic heterogeneity of available spatial information and the complexity of the required data-transformation modelling (D-TM) call for a paradigm shift in how computational science supports such peculiarly extensive integration processes. In particular, emerging wide-scale integration requirements of typical currently available domain-specific modelling strategies may include increased robustness and scalability along with enhanced transparency and reproducibility. This challenging shift toward open data and reproducible research (open science) is also strongly suggested by the huge - and sometimes neglected - potential impact of cascading errors within the impressively growing interconnection among domain-specific computational models and frameworks. Concise array-based mathematical formulation and implementation (with array programming tools) have proved helpful in supporting and mitigating the complexity of WSTMe when complemented with generalized modularization and terse array-oriented semantic constraints. This defines the paradigm of Semantic Array Programming (SemAP), where semantic transparency also implies free software use (although black boxes - e.g. legacy code - might easily be semantically interfaced). A new approach for WSTMe has emerged by formalizing unorganized best practices and experience-driven informal patterns. The approach introduces a lightweight (non-intrusive) integration of SemAP and geospatial tools, called Geospatial Semantic Array Programming (GeoSemAP). GeoSemAP exploits the joint semantics provided by SemAP and geospatial tools to split a complex D-TM into logical blocks which are easier to check by means of mathematical array-based and geospatial constraints. Those constraints take the form of precondition, invariant and postcondition semantic checks. This way, even complex WSTMe may be described as the composition of simpler GeoSemAP blocks. GeoSemAP allows intermediate data and information layers to be more easily and formally semantically described, so as to increase the fault-tolerance, transparency and reproducibility of WSTMe. This might also help to better communicate part of the policy-relevant knowledge, which is often difficult to transfer from technical WSTMe to the science-policy interface. [...]
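    As a concrete illustration of the block-level checks described above, the following sketch wraps a single, hypothetical data-transformation (D-TM) block - rescaling a raster layer - with precondition, invariant and postcondition checks in the spirit of Semantic Array Programming. The function names and the specific constraints are assumptions chosen for illustration, not code from the paper.

        import numpy as np

        def check(condition, message):
            # Minimal semantic-check helper: fail loudly instead of letting errors cascade.
            if not condition:
                raise ValueError(f"semantic check failed: {message}")

        def rescale_layer(raster, new_min=0.0, new_max=1.0):
            # Precondition: a finite, non-empty, non-constant 2-D array.
            check(raster.ndim == 2 and raster.size > 0, "raster must be a non-empty 2-D array")
            check(np.all(np.isfinite(raster)), "raster must contain only finite values")
            lo, hi = float(raster.min()), float(raster.max())
            check(hi > lo, "raster must not be constant")

            out = (raster - lo) / (hi - lo)
            # Invariant: the intermediate layer lies in [0, 1].
            check(np.all((out >= 0.0) & (out <= 1.0)), "intermediate layer outside [0, 1]")

            out = new_min + out * (new_max - new_min)
            # Postcondition: shape preserved and values within the requested range (small tolerance).
            check(out.shape == raster.shape, "output shape must match input shape")
            check(np.all((out >= min(new_min, new_max) - 1e-9) &
                         (out <= max(new_min, new_max) + 1e-9)),
                  "output values outside requested range")
            return out

    Composing a wide-scale model from blocks guarded in this way keeps each intermediate layer formally described and easier to verify, which is the fault-tolerance and transparency benefit the abstract refers to.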