A Dynamic Power Grid for a Dynamic Age
Introducing LES to a pricing mechanism that assesses demand and changes every hour
Let's say you knew someone who would buy a whole pizza when they were a little bit hungry, leave the sink running when they weren't using it, max out the A/C while they were on vacation, and keep the stove on just in case they wanted to cook something.
You would probably see this as horribly wasteful, and a terrible drain on their money. But every time we pay our power bill, our prices assume we share these wasteful habits. The rate we pay per kilowatt-hour assumes customers are eating the whole pizza, even though most of us may only eat a slice or two.
But this type of pricing seems justifiable: we are paying for a system that is always at the ready and can provide us a "whole pizza" of electricity if needed, such as on a boiling hot day in late July when we are running the A/C full blast all day.
But many other industries have handled varying levels of customer demand without raising prices for the whole customer base. For example, we pay different prices depending on our internet speeds, how much mobile data we use, and how much water we irrigate our lawns with. And back in the day (only a decade or two ago), when phone call minutes were scarce, many plans didn't count calls made during off-peak hours against us.
Both LES and OPPD have payment plans that charge flat rates regardless of the time we flip our lights on, with prices that make sure the grid is ready for peak electrical usage at all times. And the more you use, the lower your price per kilowatt-hour goes. This philosophy may work for buying things in bulk, or for buying the large popcorn instead of the medium, but water utilities price the opposite way: water costs $1.34 per unit (100 cubic feet, or 748 gallons) for the first 8 units, then jumps up to $2.96 per unit for all units above that amount. When the price increases the more we use, it convinces us to keep our usage low.
Below are a few graphs illustrating how you get more "bang for your buck" the more electricity you use with LES, while water pricing shows the opposite trend.
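To make the comparison concrete, here is a minimal sketch in Python. The water tiers ($1.34 per unit for the first 8 units, $2.96 per unit after) are the figures quoted above; the declining electricity block rates are hypothetical placeholders, since LES's actual tiers aren't given here.

```python
def water_bill(units: float) -> float:
    """Inclining-block water pricing: the marginal price RISES with usage.
    $1.34 per unit (748 gallons) for the first 8 units, $2.96 per unit after."""
    return min(units, 8) * 1.34 + max(units - 8, 0) * 2.96

def electric_bill(kwh: float) -> float:
    """Declining-block electricity pricing: the marginal price FALLS with usage.
    NOTE: these tier prices are hypothetical; LES's real rates are not given."""
    return min(kwh, 500) * 0.12 + max(kwh - 500, 0) * 0.08

for u in (4, 8, 16):
    print(f"{u:>3} water units -> ${water_bill(u):.2f} "
          f"(avg ${water_bill(u) / u:.2f}/unit)")   # average price rises
for k in (250, 500, 1000):
    print(f"{k:>4} kWh -> ${electric_bill(k):.2f} "
          f"(avg ${electric_bill(k) / k:.3f}/kWh)")  # average price falls
```

Running it shows the average water price climbing with usage while the average electricity price drops, which is exactly the opposite pair of incentives.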
Why can’t our electric utility pricing adopt a little bit of the same philosophy?
In this age, we can easily predict when our power grid will be under the most stress and which times of day people use the most power. To lower peak usage (AKA the whole-pizza price), we could vary electricity prices throughout the day, raising them during midday peak hours and lowering them during the overnight lull. We could also take a page from the water utilities' book and raise prices for exorbitant usage while lowering base rates, to encourage not only off-peak usage but conservative usage overall.
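As a rough sketch of what hourly, demand-based billing could look like, here is a small Python example; the rates and the peak window are invented for illustration and are not actual LES or OPPD tariffs.

```python
# Hypothetical time-of-use tariff. The rates and the peak window are
# invented for illustration; they are not actual LES or OPPD prices.
OFF_PEAK = 0.06  # $/kWh, overnight lull (midnight to 6 a.m.)
PEAK = 0.18      # $/kWh, midday grid stress (11 a.m. to 5 p.m.)
SHOULDER = 0.10  # $/kWh, all other hours

def rate_at(hour: int) -> float:
    """Price per kWh for a given hour of the day (0-23)."""
    if 0 <= hour < 6:
        return OFF_PEAK
    if 11 <= hour < 17:
        return PEAK
    return SHOULDER

def daily_bill(hourly_kwh: list[float]) -> float:
    """Charge for one day given 24 hourly meter readings."""
    return sum(kwh * rate_at(h) for h, kwh in enumerate(hourly_kwh))

# Two days with the SAME total usage (21 kWh), shifted in time:
midday_heavy = [0.5] * 11 + [2.0] * 6 + [0.5] * 7  # laundry at noon
overnight_heavy = [2.0] * 6 + [0.5] * 18           # laundry after midnight
print(f"midday-heavy:    ${daily_bill(midday_heavy):.2f}")
print(f"overnight-heavy: ${daily_bill(overnight_heavy):.2f}")
```

Identical consumption, but the overnight-heavy day comes out noticeably cheaper, which is the whole point: the price signal rewards moving load off the peak.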
With this method, people would be rewarded for drying their clothes at night and for keeping lights off during the day. With peak usage consistently lowered, our electrical grid, and our wallets, can rest a little easier knowing that we don't need to keep dozens of coal-fired power plants on standby just in case we decide to use power.
Our energy providers can also use their profits to steadily introduce more green energy producers and stabilize the market with low-risk, low-pollutant power.
Consistent power usage throughout the day means we can rely on more green energy sources like nuclear, solar, and wind, as they can be used in combination to provide steady energy 24/7. With widely varying usage, our grid has to rely on dirtier fuels that can be quickly ramped up to handle power peaks.
It's time we pay attention to our price per kilowatt-hour the same way we pay attention to our gas prices, data remaining, minutes spent, messages sent, and gallons drained. By rewarding customers for de-stressing the grid, we can efficiently shrink the size of our grid overall.
Instead of buying a whole pizza, we should buy the slices we need.
Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
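For readers unfamiliar with the baselines named above, here is a minimal, self-contained sketch of Okapi BM25 scoring in Python, using the standard formula with conventional defaults (k1 = 1.5, b = 0.75); it illustrates the technique generically and is not the consortium's implementation.

```python
import math
from collections import Counter

def bm25_scores(query: list[str], docs: list[list[str]],
                k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency: how many docs contain each query term.
    df = {t: sum(1 for d in docs if t in d) for t in set(query)}
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for t in query:
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            denom = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[t] * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = [["rotator", "cuff", "repair", "outcomes"],
        ["document", "similarity", "in", "literature", "search"],
        ["benchmark", "for", "literature", "search", "methods"]]
print(bm25_scores(["literature", "search"], docs))
```

The length normalization (the b term) is what separates BM25 from plain TF-IDF weighting, and is one reason the two baselines can surface distinct sets of articles.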
Functional and Radiologic Outcomes of Degenerative Versus Traumatic Full-Thickness Rotator Cuff Tears Involving the Supraspinatus Tendon.
BACKGROUND
Arthroscopic rotator cuff repair (ARCR) is among the most commonly performed orthopaedic procedures. Several factors, including age, sex, and tear severity, have been identified as predictors of outcome after repair. The influence of tear etiology on functional and structural outcome remains controversial.
PURPOSE
To investigate the influence of tear etiology (degenerative vs traumatic) on functional and structural outcomes in patients with supraspinatus tendon tears.
STUDY DESIGN
Cohort study; Level of evidence, 2.
METHODS
Patients undergoing ARCR from 19 centers were prospectively enrolled between June 2020 and November 2021. Full-thickness, nonmassive tears involving the supraspinatus tendon were included. Tears were classified as degenerative (chronic shoulder pain, no history of trauma) or traumatic (acute, traumatic onset, no previous shoulder pain). Range of motion, strength, the Subjective Shoulder Value, the Oxford Shoulder Score (OSS), and the Constant-Murley Score (CMS) were assessed before (baseline) and 6 and 12 months after ARCR. The Subjective Shoulder Value and the OSS were also determined at the 24-month follow-up. Repair integrity after 12 months was documented, as well as additional surgeries up to the 24-month follow-up. Tear groups were compared using mixed models adjusted for potential confounding effects.
RESULTS
From a cohort of 973 consecutive patients, 421 patients (degenerative tear, n = 230; traumatic tear, n = 191) met the inclusion criteria. The traumatic tear group had lower mean baseline OSS and CMS scores but significantly greater score changes 12 months after ARCR (OSS, 18 [SD, 8]; CMS, 34 [SD, 18] vs degenerative: OSS, 15 [SD, 8]; CMS, 22 [SD, 15]) (P < .001) and significantly higher 12-month overall scores (OSS, 44 [SD, 5]; CMS, 79 [SD, 9] vs degenerative: OSS, 42 [SD, 7]; CMS, 76 [SD, 12]) (P ≤ .006). At the 24-month follow-up, neither the OSS (degenerative, 44 [SD, 6]; traumatic, 45 [SD, 6]; P = .346) nor the rates of repair failure (degenerative, 14 [6.1%]; traumatic, 12 [6.3%]; P = .934) or additional surgery (7 [3%]; 7 [3.7%]; P = .723) differed between groups.
CONCLUSION
Patients with degenerative and traumatic full-thickness supraspinatus tendon tears who underwent ARCR show satisfactory short-term functional results. Although patients with traumatic tears have lower baseline functional scores, they rehabilitate over time and show comparable clinical results 1 year after ARCR. Similarly, degenerative and traumatic rotator cuff tears show comparable structural outcomes, which suggests that degenerated tendons retain healing potential.
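The group comparisons above rely on mixed models adjusted for confounders. As an illustration only (not the authors' actual analysis), a linear mixed model for the 12-month OSS change with fixed effects for tear etiology, age, and sex and a random intercept per treating center could be sketched as follows; the variable names and synthetic data are entirely hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data, one row per patient. Variable names and
# effect sizes are invented; this is not the study's dataset.
rng = np.random.default_rng(0)
n = 421  # cohort size reported in the abstract
df = pd.DataFrame({
    "tear_type": rng.choice(["degenerative", "traumatic"], size=n),
    "age": rng.normal(60, 8, size=n),
    "sex": rng.choice(["F", "M"], size=n),
    "center": rng.integers(1, 20, size=n),  # 19 centers, as in the study
})
df["oss_change"] = (15.0 + 3.0 * (df["tear_type"] == "traumatic")
                    + rng.normal(0, 8, size=n))

# Random-intercept model: does tear etiology predict OSS improvement
# after adjusting for age and sex, with center as the grouping factor?
model = smf.mixedlm("oss_change ~ tear_type + age + sex",
                    data=df, groups=df["center"])
print(model.fit().summary())
```

The random intercept absorbs center-to-center differences (surgeon, rehabilitation protocol) so the fixed effect for tear etiology is not confounded by which center a patient attended.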