Traffic Signal Control with Communicative Deep Reinforcement Learning Agents: a Case Study
In this work we theoretically and experimentally analyze Multi-Agent Advantage Actor-Critic (MA2C) and Independent Advantage Actor-Critic (IA2C), two recently proposed multi-agent reinforcement learning methods that can be applied to control traffic signals in urban areas. The two methods differ in whether the reward is calculated locally or globally and in how agent communication is managed. We analyze the methods theoretically within the framework of non-Markov decision processes, which yields useful insights into the behavior of the algorithms. Moreover, we analyze the efficacy and robustness of the methods experimentally by testing them in two traffic areas in Bologna, Italy, simulated with the SUMO traffic simulator. The experimental results indicate that MA2C achieves the best performance in the majority of cases, outperforming the alternative method considered, and displays sufficient stability during the learning process.
Comment: 41 pages, 16 figures
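As a purely illustrative sketch (not the authors' implementation), the core difference described above can be shown in a few lines: both methods compute an advantage against a learned value baseline, but the independent variant sees only the agent's own reward, whereas the multi-agent variant is assumed here to fold in spatially discounted rewards from neighboring intersections.

```python
# Toy n-step return/advantage computation contrasting a purely local reward
# (IA2C-style) with a neighbor-discounted reward (MA2C-style). Names and the
# discount scheme are illustrative assumptions, not the paper's code.

def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step return from a list of rewards plus a bootstrap value."""
    ret = bootstrap_value
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret

def local_rewards(agent, step_rewards):
    """IA2C-style: the agent only sees its own reward."""
    return [r[agent] for r in step_rewards]

def neighbor_discounted_rewards(agent, step_rewards, neighbors, alpha=0.75):
    """MA2C-style (assumed form): own reward plus spatially discounted
    rewards from immediate neighbors."""
    return [r[agent] + alpha * sum(r[n] for n in neighbors[agent])
            for r in step_rewards]

if __name__ == "__main__":
    # Three intersections; per-step rewards are negative queue lengths (toy numbers).
    step_rewards = [{"a": -3.0, "b": -5.0, "c": -2.0},
                    {"a": -2.0, "b": -4.0, "c": -1.0}]
    neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    value_baseline = -6.0  # critic's value estimate for agent "a" (toy number)

    for name, rewards in [("IA2C local", local_rewards("a", step_rewards)),
                          ("MA2C neighbor-discounted",
                           neighbor_discounted_rewards("a", step_rewards, neighbors))]:
        ret = n_step_return(rewards, bootstrap_value=0.0)
        print(f"{name}: return={ret:.2f}, advantage={ret - value_baseline:.2f}")
```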
Tracking Rodent Social Interactions Using Machine Learning
We have developed a video-annotation pipeline that can be used to automatically track the movement of particularly social rodents (degus) during interactive behavior. Using open-source software (DeepLabCut), our approach requires methodical training of DeepLabCut neural networks, along with custom post-processing scripts to ensure continuity of the annotation of individual degus. This tracking work is the first phase of a larger effort to automatically classify and label behaviors observed in video recordings of degu interactions. Such behavioral annotation will inform our understanding of social behavior in general, with possible long-term impacts on the diagnosis and treatment of autism spectrum disorder and other mental health conditions.
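A hedged sketch of the kind of post-processing step such a pipeline might use to keep individual identities continuous across frames (the function, threshold, and matching rule are illustrative assumptions, not the authors' scripts):

```python
# Illustrative identity-continuity step (assumed logic): assign each detected
# animal position in the current frame to the nearest track from the previous
# frame, holding the last position when the jump is implausibly large.
import math

def assign_identities(prev_positions, detections, max_jump=50.0):
    """prev_positions: {animal_id: (x, y)}; detections: list of (x, y).
    Returns {animal_id: (x, y)} for the current frame (greedy nearest match)."""
    assigned = {}
    remaining = list(detections)
    for animal_id, (px, py) in prev_positions.items():
        if not remaining:
            break
        # pick the closest unassigned detection to this animal's last position
        best = min(remaining, key=lambda d: math.hypot(d[0] - px, d[1] - py))
        if math.hypot(best[0] - px, best[1] - py) <= max_jump:
            assigned[animal_id] = best
            remaining.remove(best)
        else:
            assigned[animal_id] = (px, py)  # likely a missed detection; keep last position
    return assigned

prev = {"degu_1": (100.0, 120.0), "degu_2": (300.0, 310.0)}
current_detections = [(305.0, 312.0), (104.0, 118.0)]
print(assign_identities(prev, current_detections))
# {'degu_1': (104.0, 118.0), 'degu_2': (305.0, 312.0)}
```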
A Framework for Testing Regulatory Authority
In this article, we consider whether certain provisions of the Regulatory Package (including regulations proposed and issued under Section 385, Section 956, Section 7701(l), and Section 7874) are outside the scope of the IRS's authority. We approach these questions first by formulating our own set of "underlying principles" that we believe are necessary guideposts to any question of regulatory authority. We believe using our own "underlying principles" as the lens through which to focus the question allows us to develop tax-specific examples to illustrate the limits of regulatory authority, rather than rely on the facts of non-tax cases in which similar questions of regulatory authority were considered. Not surprisingly, the principles we outline below are echoed in the case law involving questions of statutory interpretation and the Administrative Procedure Act (the "APA").
Even with the changes to the Regulatory Package, as well as potential tax reform that may further limit the impact of any Section 385 regulations,[11] attempting to define the limits of regulatory authority is as important a task as ever. The expansion of executive power, the seeming impossibility of bipartisan legislation, and the potential of radical tax reform, which would inevitably involve Congress charging the Treasury Department and the IRS to fill in statutory gaps, mean the limits of administrative authority will continue to be tested in the near future.
Developing a Common Global Baseline for Nucleic Acid Synthesis Screening
Introduction: Nucleic acid synthesis is a powerful tool that has revolutionized the life sciences. However, the misuse of synthetic nucleic acids could pose a serious threat to public health and safety. There is a need for international standards for nucleic acid synthesis screening to help prevent the misuse of this technology. Methods: We outline current barriers to the adoption of screening, which include the cost of developing screening tools and resources, adapting to existing commercial practices, internationalizing screening, and adapting screening to benchtop nucleic acid synthesis devices. To address these challenges, we then introduce the Common Mechanism for DNA Synthesis Screening, which was developed in consultation with a technical consortium of experts in DNA synthesis, synthetic biology, biosecurity, and policy, with the aim of addressing current barriers. The Common Mechanism software uses a variety of methods to identify sequences of concern, identify taxonomic best matches to regulated pathogens, and identify benign genes that can be cleared for synthesis. Finally, we describe outstanding challenges in the development of screening practices. Results: The Common Mechanism is a step toward ensuring the safe and responsible use of synthetic nucleic acids. It provides a baseline capability that overcomes challenges to nucleic acid synthesis screening and provides a solution for broader international adoption of screening practices. Conclusion: The Common Mechanism is a valuable tool for preventing the misuse of synthetic nucleic acids. It is a critical step toward ensuring the safe and responsible use of this powerful technology.
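The Common Mechanism itself combines several screening methods; as a purely illustrative sketch of the general idea (hypothetical watch lists and thresholds, not its actual databases or algorithms):

```python
# Toy illustration of sequence-screening logic: flag orders sharing k-mers with
# a "sequence of concern" list, and clear orders matching a benign-gene list.
# All sequences and thresholds below are made up for illustration.

def kmers(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(query, concern_db, benign_db, k=20, hit_threshold=1):
    """Return 'clear', 'benign match', or 'flag for review' for a synthesis order."""
    q = kmers(query.upper(), k)
    if len(q & benign_db) >= hit_threshold:
        return "benign match"        # e.g. housekeeping gene; cleared for synthesis
    if len(q & concern_db) >= hit_threshold:
        return "flag for review"     # shares k-mers with a regulated-pathogen sequence
    return "clear"

# Hypothetical 20-mer sets standing in for curated databases.
concern_db = kmers("ATGCGTACGTTAGCCGATCGATTACGGCTAAGCTTGCAAT", 20)
benign_db = kmers("ATGGCTGCTGCTAAGGCTGGTGCTGCTAAGGCTGGTGCTA", 20)

order = "ATGCGTACGTTAGCCGATCGATTACGGCTAAGCTTGCAATGGG"
print(screen_order(order, concern_db, benign_db))  # flag for review
```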
Resource-constrained FPGA Design for Satellite Component Feature Extraction
The effective use of computer vision and machine learning for on-orbit applications has been hampered by limited computing capabilities and therefore limited performance. While embedded systems using ARM processors have been shown to meet acceptable but low performance standards, the recent availability of larger space-grade field programmable gate arrays (FPGAs) shows potential to exceed the performance of microcomputer systems. This work proposes the use of a neural network-based object detection algorithm that can be deployed on a comparably resource-constrained FPGA to automatically detect components of non-cooperative satellites on orbit. Hardware-in-the-loop experiments were performed on the ORION Maneuver Kinematics Simulator at Florida Tech to compare the performance of the new model deployed on a small, resource-constrained FPGA with that of an equivalent algorithm on a microcomputer system. Results show that the FPGA implementation increases throughput and decreases latency while maintaining comparable accuracy. These findings suggest future missions should consider deploying computer vision algorithms on space-grade FPGAs.
Comment: 9 pages, 7 figures, 4 tables, Accepted at IEEE Aerospace Conference 202
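A hedged sketch of the kind of throughput/latency comparison such a hardware-in-the-loop test involves; the back-end callables here are placeholders, not the paper's FPGA or microcomputer interfaces:

```python
# Illustrative benchmarking harness (assumed setup, not the paper's test code):
# compare mean latency and throughput of two inference back ends on the same frames.
import time
import statistics

def benchmark(run_inference, frames, warmup=5):
    """run_inference: callable taking one frame; returns (mean latency in s, FPS)."""
    for f in frames[:warmup]:
        run_inference(f)                      # warm-up, excluded from timing
    latencies = []
    for f in frames:
        t0 = time.perf_counter()
        run_inference(f)
        latencies.append(time.perf_counter() - t0)
    mean_latency = statistics.mean(latencies)
    return mean_latency, 1.0 / mean_latency   # throughput in frames per second

def fake_fpga(frame):
    time.sleep(0.005)     # placeholder: ~5 ms per frame

def fake_micro(frame):
    time.sleep(0.040)     # placeholder: ~40 ms per frame

if __name__ == "__main__":
    frames = [None] * 50
    for name, backend in [("FPGA", fake_fpga), ("microcomputer", fake_micro)]:
        lat, fps = benchmark(backend, frames)
        print(f"{name}: {lat * 1e3:.1f} ms/frame, {fps:.1f} FPS")
```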
Spatial Variability and Application of Ratios between BTEX in Two Canadian Cities
Spatial monitoring campaigns of volatile organic compounds were carried out in two similarly sized urban industrial cities, Windsor and Sarnia, ON, Canada. For Windsor, data were obtained for all four seasons at approximately 50 sites in each season (winter, spring, summer, and fall) over a three-year period (2004, 2005, and 2006), for a total of 12 sampling sessions. Sampling in Sarnia took place at 37 monitoring sites in fall 2005. In both cities, passive sampling was done using 3M 3500 organic vapor samplers. This paper characterizes benzene, toluene, ethylbenzene, o-xylene, and (m + p)-xylene (BTEX) concentrations and the relationships among BTEX species in the two cities during the fall sampling periods. BTEX concentration levels and the rank order among the species were similar between the two cities. In Sarnia, the relationships between the BTEX species varied depending on location. Correlation analysis between land use and concentration ratios showed a strong influence from local industries. Using any single ratio between the BTEX species to diagnose photochemical age may be biased by point source emissions, for example, 53 tonnes of benzene and 86 tonnes of toluene emitted in Sarnia. However, considering multiple ratios leads to better conclusions regarding photochemical aging. Ratios obtained in the sampling campaigns showed significant deviation from those obtained at central monitoring stations, with less difference in the (m + p)/E ratio but better overall agreement in Windsor than in Sarnia.
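For illustration, the multiple-ratio reasoning above can be made concrete with a short sketch (toy concentrations, not campaign measurements):

```python
# Illustrative computation of the BTEX ratios discussed above. Concentrations
# are made-up values in ug/m3, not data from the Windsor or Sarnia campaigns.
def btex_ratios(c):
    return {
        "T/B": c["toluene"] / c["benzene"],
        "X/B": (c["m+p-xylene"] + c["o-xylene"]) / c["benzene"],
        "(m+p)/E": c["m+p-xylene"] / c["ethylbenzene"],
    }

site = {"benzene": 1.2, "toluene": 3.0, "ethylbenzene": 0.6,
        "m+p-xylene": 1.8, "o-xylene": 0.7}
for name, value in btex_ratios(site).items():
    print(f"{name}: {value:.2f}")
# Interpreting several ratios together, rather than any single one, reduces the
# bias introduced when a local point source inflates one species.
```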
A New Field Protocol for Monitoring Forest Degradation
Forest degradation leads to the gradual reduction of forest carbon stocks, function, and biodiversity following anthropogenic disturbance. Whilst tropical degradation is a widespread problem, it is currently very under-studied and its magnitude and extent are largely unknown. This is due, at least in part, to the lack of developed and tested methods for monitoring degradation. Because the changes associated with degradation are relatively subtle and ongoing, and can include the removal of small trees for fuelwood or understory clearance for agricultural production, degradation is very hard to detect using Earth Observation. Furthermore, degrading activities are normally spatially heterogeneous and stochastic, so conventional forest inventory plots distributed across a landscape do not act as suitable indicators: at best only a small proportion of plots (often zero) will actually be degraded in a landscape undergoing active degradation. This problem is compounded because the metal tree tags used in permanent forest inventory plots likely deter tree clearance, biasing inventories toward under-reporting change. We have therefore developed a new forest plot protocol designed to monitor forest degradation. The plot can be set up quickly, so a large number can be established across a landscape, and easily remeasured, even though it does not use tree tags or other obvious markers. We present data from a demonstration plot network set up in Jalisco, Mexico, which was measured twice between 2017 and 2018. The protocol was successful, with one plot detecting degradation under our definition (losing greater than 10% AGB but remaining forest) and a further plot being deforested for avocado (Persea americana) production. Live AGB ranged from 8.4 Mg ha⁻¹ to 140.8 Mg ha⁻¹ in Census 1 and from 0 Mg ha⁻¹ to 144.2 Mg ha⁻¹ in Census 2, with four of ten plots losing AGB and the remainder staying stable or showing slight increases. We suggest this protocol has great potential for underpinning appropriate forest plot networks for degradation monitoring, potentially in combination with Earth Observation analysis, but also in isolation.
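A minimal sketch of how repeat-census plots could be classified under the degradation definition quoted above; the "still forest" AGB floor is an assumed illustrative threshold, not part of the protocol:

```python
# Toy classification of repeat-census plots using the stated definition
# (>10% AGB loss while remaining forest). The 5 Mg/ha forest floor is an
# assumption for illustration only; plot values are made up.
def classify_plot(agb_census1, agb_census2, forest_floor=5.0):
    if agb_census2 < forest_floor:
        return "deforested"
    loss_fraction = (agb_census1 - agb_census2) / agb_census1
    if loss_fraction > 0.10:
        return "degraded"
    return "stable or gaining"

plots = {"plot_1": (140.8, 144.2), "plot_2": (60.0, 52.0), "plot_3": (8.4, 0.0)}
for name, (c1, c2) in plots.items():
    print(name, classify_plot(c1, c2))
# plot_1: stable or gaining; plot_2: degraded (about 13% loss); plot_3: deforested
```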
HD CAGnome: A Search Tool for Huntingtin CAG Repeat Length-Correlated Genes
Background: The length of the huntingtin (HTT) CAG repeat is strongly correlated with both age at onset of Huntington's disease (HD) symptoms and age at death of HD patients. Dichotomous analysis comparing HD to controls is widely used to study the effects of HTT CAG repeat expansion. However, a potentially more powerful approach is a continuous analysis strategy that takes advantage of all of the different CAG lengths, to capture effects that are expected to be critical to HD pathogenesis. Methodology/Principal Findings: We used continuous and dichotomous approaches to analyze microarray gene expression data from 107 human control and HD lymphoblastoid cell lines. Of all probes found to be significant in a continuous analysis by CAG length, only 21.4% were so identified by a dichotomous comparison of HD versus controls. Moreover, of probes significant by dichotomous analysis, only 33.2% were also significant in the continuous analysis. Simulations revealed that the dichotomous approach would require substantially more than 107 samples to either detect 80% of the CAG length-correlated changes revealed by continuous analysis or to reduce the rate of significant differences that are not CAG length-correlated to 20% (n = 133 or n = 206, respectively). Given the superior power of the continuous approach, we calculated the correlation structure between HTT CAG repeat lengths and gene expression levels and created a freely available searchable website, "HD CAGnome," that allows users to examine continuous relationships between HTT CAG and expression levels of ~20,000 human genes. Conclusions/Significance: Our results reveal limitations of dichotomous approaches compared to the power of continuous analysis to study a disease where human genotype-phenotype relationships strongly support a role for a continuum of CAG length-dependent changes. The compendium of HTT CAG length-gene expression level relationships found at the HD CAGnome now provides convenient routes for discovery of candidates influenced by the HD mutation.
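A hedged sketch of the two analysis strategies on synthetic data (the variable names, effect size, and the 36-repeat cutoff used to dichotomize are illustrative assumptions, not the study's pipeline):

```python
# Illustrative contrast between continuous and dichotomous analysis on synthetic
# data; none of these values come from the study's lymphoblastoid cell lines.
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(0)
cag_lengths = rng.integers(15, 60, size=107)                 # per-sample CAG repeat length
expression = 0.02 * cag_lengths + rng.normal(0, 0.3, 107)    # gene weakly tracking CAG length

# Continuous analysis: correlate expression with CAG length across all samples.
r, p_cont = pearsonr(cag_lengths, expression)

# Dichotomous analysis: split into "control" vs "HD" (>= 36 repeats) and t-test.
hd = expression[cag_lengths >= 36]
control = expression[cag_lengths < 36]
t, p_dich = ttest_ind(hd, control)

print(f"continuous: r={r:.2f}, p={p_cont:.3g}")
print(f"dichotomous: t={t:.2f}, p={p_dich:.3g}")
# The continuous test exploits the full spread of repeat lengths, which is why it
# can detect CAG length-correlated changes that a two-group comparison misses.
```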
Accessing Justice II: A Model for Providing Counsel to New York Immigrants in Removal Proceedings
The New York Immigrant Representation Study ("NYIR Study") is a two-year project of the Study Group on Immigrant Representation to analyze and ameliorate the immigrant representation crisis: the acute shortage of qualified attorneys willing and able to represent indigent immigrants facing deportation. The crisis has reached epic proportions in New York and shows no signs of abating.
In its year-one report (issued in the fall of 2011), the NYIR Study analyzed the empirical evidence regarding the nature and scope of the immigrant representation crisis. In that report, we documented how many New Yorkers (27 percent of those not detained and 60 percent of those who were detained) face deportation, and the prospect of permanent exile from families, homes, and livelihoods, without any legal representation whatsoever. These unrepresented individuals are often held in detention and include many lawful permanent residents (green card holders), asylees and refugees, victims of domestic violence, and other classes of vulnerable immigrants with deep ties to New York. The study confirmed that the impact of having counsel cannot be overstated: people facing deportation in New York immigration courts with a lawyer are five times as likely to win their cases as those without representation. At one end, nondetained immigrants with lawyers have successful outcomes 74 percent of the time; at the other end, those who were detained and lacked counsel prevailed a mere 3 percent of the time.
In its second year, the NYIR Study convened a panel of experts to use the data from the year-one report to develop ambitious, yet realistic, near- to medium-term ways to mitigate the worst aspects of the immigrant representation crisis here in New York. The year-two analysis and proposals are set forth in detail here, in the NYIR Study Report: Part II.
A comprehensive solution to the nationwide immigrant representation crisis will require federal action. However, such federal action does not appear on the horizon. Meanwhile, the costs of needless deportations are felt most acutely in places like New York, with vibrant and vital immigrant communities. In addition to the injustice of seeing New Yorkers deported simply because they lack access to counsel, the impact of these deportations on the shattered New York families left behind is devastating. Moreover, the local community then bears the cost of these deportations in very tangible ways: when splintered families lose wage-earning members, they become dependent on a variety of City and State safety net programs to survive; the foster care system must step in when deportations cause the breakdown of families; and support networks for families and children must accommodate the myriad difficulties that result when federal policies are enforced without regard for local concerns. Put simply, the City and State of New York bear a heavy cost as a result of the immigrant representation crisis.
A novel approach to investigate tissue-specific trinucleotide repeat instability
Background: In Huntington's disease (HD), an expanded CAG repeat produces characteristic striatal neurodegeneration. Interestingly, the HD CAG repeat, whose length determines age at onset, undergoes tissue-specific somatic instability, predominant in the striatum, suggesting that tissue-specific CAG length changes could modify the disease process. Therefore, understanding the mechanisms underlying the tissue specificity of somatic instability may provide novel routes to therapies. However, progress in this area has been hampered by the lack of sensitive high-throughput instability quantification methods and global approaches to identify the underlying factors. Results: Here we describe a novel approach to gain insight into the factors responsible for the tissue specificity of somatic instability. Using accurate genetic knock-in mouse models of HD, we developed a reliable, high-throughput method to quantify tissue HD CAG repeat instability and integrated this with genome-wide bioinformatic approaches. Using tissue instability quantified in 16 tissues as a phenotype and tissue microarray gene expression as a predictor, we built a mathematical model and identified a gene expression signature that accurately predicted tissue instability. Using the predictive ability of this signature, we found that somatic instability was not a consequence of pathogenesis. In support of this, genetic crosses with models of accelerated neuropathology failed to induce somatic instability. In addition, we searched for genes and pathways that correlated with tissue instability. We found that expression levels of DNA repair genes did not explain the tissue specificity of somatic instability. Instead, our data implicate other pathways, particularly cell cycle, metabolism, and neurotransmitter pathways, acting in combination to generate tissue-specific patterns of instability. Conclusion: Our study clearly demonstrates that multiple tissue factors reflect the level of somatic instability in different tissues. In addition, our quantitative, genome-wide approach is readily applicable to high-throughput assays and opens the door to widespread applications with the potential to accelerate the discovery of drugs that alter tissue instability.
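As a rough illustration of the modelling idea (synthetic numbers, not the study's data or model), a per-tissue instability index can be regressed on tissue expression values to extract a small predictive signature:

```python
# Illustrative sketch: predict a per-tissue instability index from tissue gene
# expression with ridge-regularized least squares. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_tissues, n_genes = 16, 200
expression = rng.normal(size=(n_tissues, n_genes))           # tissue x gene matrix
true_weights = np.zeros(n_genes)
true_weights[:5] = [1.5, -1.0, 0.8, 0.6, -0.5]               # small "signature" of genes
instability = expression @ true_weights + rng.normal(0, 0.1, n_tissues)

# Ridge regression: w = (X^T X + lambda * I)^-1 X^T y
lam = 1.0
X, y = expression, instability
w = np.linalg.solve(X.T @ X + lam * np.eye(n_genes), X.T @ y)

signature = np.argsort(np.abs(w))[::-1][:5]
print("top signature genes (indices):", signature)
print("predicted vs observed correlation:",
      round(float(np.corrcoef(X @ w, y)[0, 1]), 3))
```

With only 16 tissues and many genes the fit is underdetermined, which is why regularization and a genome-wide correlation structure, rather than a single regression, matter in practice.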