Climate and energy

Feed reader

Why Are Nuclear Plants Losing Money at an Astonishing Rate?

Energy Collective - 18. august 2016 - 10:00

Joe Romm recently wrote a piece for Climate Progress titled Nuclear Power Is Losing Money At An Astonishing Rate.

In that post Romm exaggerates the amount of support that the New York Zero Emissions Credit (ZEC) will provide, absolves the massive build out of industrial scale wind and solar from any responsibility for contributing to the situation, offers an incomplete analysis of the real causes and effects of unsustainably low wholesale market prices and provides a slanted suggestion for implementing a solution.

How big is New York’s Zero Emission Credit?

Like many observers who do not like the idea of helping existing nuclear plants survive temporary market conditions, Romm accepts the very high end of the estimated range of costs for the New York ZEC. With a link to a Huff Post piece by an infamously antinuclear author named Karl Grossman, Romm describes the plan as a “$7.6 billion bailout of nuclear.”

That description overlooks the fact that the equation for the administratively-determined price of the ZECs includes a term that will automatically reduce the amount paid when the predicted wholesale market price of electricity rises above $39 per MWh. Unsurprisingly, given Grossman’s known antinuclear attitude, his article did not mention that wholesale electricity prices above $56 per MWh would result in zero-cost ZECs.
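
The mechanics described above can be sketched numerically. In the toy calculation below, the $17/MWh starting credit is simply the difference between the two thresholds cited in the text ($56 minus $39), not an official figure, and the real ZEC formula contains terms (such as the social cost of carbon) that this sketch ignores.

```python
# Toy model of the price offset described above: the credit shrinks
# dollar-for-dollar once the forecast wholesale price rises above $39/MWh and
# reaches zero near $56/MWh. The $17 starting value is implied by those two
# thresholds, not an official number.

def zec_payment(forecast_price, base_credit=17.0, offset_threshold=39.0):
    """Per-MWh ZEC payment after the wholesale-price adjustment."""
    reduction = max(0.0, forecast_price - offset_threshold)
    return max(0.0, base_credit - reduction)

for price in (30, 39, 45, 56, 65):
    print(f"forecast ${price}/MWh -> ZEC ${zec_payment(price):.2f}/MWh")
```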

Romm should have known better than to trust a single source, especially one with a long-standing agenda of eliminating nuclear energy.

Should we blame renewables?

Romm also takes issue with the notion that unreliable energy system growth is causing market price problems for nuclear energy. He admits that wind, solar, biomass, geothermal and waste-to-energy are still receiving substantial direct payments and market mandates from governments at the local, state and federal levels, but he claims that those subsidies are appropriate for emerging technologies that are still progressing down the cost curve. Besides, he says, those generous subsidies are scheduled to be phased out.

Romm is apparently banking on the fact that many readers don’t know that wind and solar subsidies have been in place since the early 1990s and have been “scheduled” to be phased out at least 5 times already.

Instead of blaming his favorite power systems, Romm says that we should blame “cheap natural gas.” He avoids acknowledging that the massive buildout of wind and solar enabled by the $25 billion that the Recovery Act has doled out to renewable energy programs has successfully increased the quantity of wind and solar electricity generated by an amount that has finally begun to be visible in EIA statistical reports.

Since weather-dependent sources displace mostly natural gas fired electricity if they are available, the increase in renewable generation has contributed to the current gas market oversupply and low price situation. If the electricity produced by new wind turbines in Texas, Illinois and Iowa had come from burning natural gas, there would not be a gas glut threatening to overflow available storage reservoirs.

The glut is the reason prices are low; fracking makes gas abundant, but it does not lower the cost of extraction compared to conventional drilling.

Blame market design

Romm turns to Peter Fox-Penner for an explanation of the market challenges facing nuclear power plants. His published quote from that conversation is intriguing and offers an opportunity to find common ground.

While I agree that premature closure of safely operating existing nuclear is a terrible idea from the climate policy standpoint, he overlooks the fact that this consequence is neither “unintended” nor the “fault” of solar and wind. This is the very-much-intended result of the way electric markets were designed, and you can be sure this design was not formulated by wind and solar producers and is in no sense their fault.

I concur. The current market structure was purposely invented by market-manipulating entities like Enron to give an advantage to natural gas traders, build an alliance between natural gas and renewable energy and crowd nuclear energy out of the market in order to enable increasing sales for both gas and renewable power systems.

Price on carbon

Romm once again blames the nuclear industry and independent nuclear advocates for not sufficiently supporting the fatally flawed 2009 climate and energy bill. He says the bill would have put a price on carbon, but it would have done so through the same “cap and trade” mechanism that has proven incapable of actually addressing carbon emissions in example markets like Europe and California.

The politically determined amount of available credits for incumbent producers has invariably been set so high that those credits usually trade for a value that is too low to influence system purchase decisions. They are often so low that they don’t even influence operational choices between burning high-emission fuels like brown coal and somewhat lower-emission fuels like natural gas.

Romm is right that a price on carbon would help nuclear energy from both established plants and new projects compete, but that price needs to be predictable and sufficiently high before it will influence decision making. The rising fee and dividend approach advocated by the Citizens Climate Lobby and James Hansen has a much better chance of successfully reducing CO2 emissions.

Effective solutions

Nuclear energy is not inherently uncompetitive. Fuel costs are low and predictable, equipment is durable and often needs little maintenance or repair, life cycle emissions levels are extremely low, waste is compact and easily managed and paying people fair salaries for productive work is an economic benefit, not a disadvantage.

The established nuclear industry, however, has done a lousy job of controlling its costs and marketing the benefits of its product.

It needs to work more effectively to ensure that regulators do not ratchet up requirements, especially when they are imposed for the purpose of producing the right political “optics” with no measurable improvement in public safety. Achieving that condition will require the industry to take a more adversarial approach to regulators. Under the American system of jurisprudence, regulators and the regulated are not supposed to be friends and partners.

There is also a need to recognize that innovative, advanced reactor systems that incorporate lessons learned during the past 60 years of commercial nuclear power generation offer the opportunity for long lasting cost reductions. Many of the smaller designs offer the same kind of volume-based learning curves that have so effectively reduced the cost of weather-dependent collector systems like wind turbines and solar panels.

I agree with Romm on the subject of nuclear energy subsidies. They are unaffordable because nuclear is far too productive to remain a small and emerging technology whose subsidies can be lost in the weeds. At this point, that same statement can be made about wind and solar energy. Subsidies for those systems need to actually phase out this time or they will become ever more unaffordable.

They will also become an increasingly important obstacle to the real goal of cleaning up our electricity generating system and enabling it to grow rapidly to provide an ever larger share of our total energy needs.

Photo Credit: IAEA Imagebank via Flickr

The post Why are nuclear plants losing money at an astonishing rate? appeared first on Atomic Insights.

Categories: Foreign media

We Have Almost Certainly Blown the 1.5-Degree Global Warming Target

Energy Collective - 18. august 2016 - 9:00

The United Nations climate change conference held last year in Paris had the aim of tackling future climate change. After the deadlocks and weak measures that arose at previous meetings, such as Copenhagen in 2009, the Paris summit was different. The resulting Paris Agreement committed to:

Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognising that this would significantly reduce the risks and impacts of climate change.

The agreement was widely met with cautious optimism. Certainly, some of the media were pleased with the outcome while acknowledging the deal’s limitations.

Many climate scientists were pleased to see a more ambitious target being pursued, but what many people fail to realise is that actually staying within a 1.5℃ global warming limit is nigh on impossible.

There seems to be a strong disconnect between what the public and climate scientists think is achievable. The problem is not helped by the media’s apparent reluctance to treat it as a true crisis.

The 1.5℃ limit is nearly impossible

In 2015, we saw global average temperatures a little over 1℃ above pre-industrial levels, and 2016 will very likely be even hotter. In February and March of this year, temperatures were 1.38℃ above pre-industrial averages.

Admittedly, these are individual months and years with a strong El Niño influence (which makes global temperatures more likely to be warmer), but the point is we’re already well on track to reach 1.5℃ pretty soon.

So when will we actually reach 1.5℃ of global warming?


Timeline showing best current estimates of when global average temperatures will rise beyond 1.5℃ and 2℃ above pre-industrial levels. Boxes represent 90% confidence intervals; whiskers show the full range. Image via Andrew King.

On our current emissions trajectory we will likely reach 1.5℃ within the next couple of decades (2024 is our best estimate). The less ambitious 2℃ target would be surpassed not much later.

This means we probably have only about a decade before we break through the ambitious 1.5℃ global warming target agreed to by the world’s nations in Paris.

A University of Melbourne research group recently published these spiral graphs showing just how close we are getting to 1.5℃ warming. Realistically, we have very little time left to limit warming to 2℃, let alone 1.5℃.

This is especially true when you bear in mind that even if we stopped all greenhouse gas emissions right now, we would likely experience about another half-degree of warming as the oceans “catch up” with the atmosphere.

Parallels with climate change scepticism

The public seriously underestimates the level of consensus among climate scientists that human activities have caused the majority of global warming in recent history. Similarly, there appears to be a lack of public awareness about just how urgent the problem is.

Many people think we have plenty of time to act on climate change and that we can avoid the worst impacts by slowly and steadily reducing greenhouse gas emissions over the next few decades.

This is simply not the case. Rapid and drastic cuts to emissions are needed as soon as possible.

In conjunction, we must also urgently find ways to remove greenhouse gases already in the atmosphere. At present, this is not yet viable on a large scale.

Is 1.5℃ even enough to avoid “dangerous” climate change?

The 1.5℃ and 2℃ targets are designed to avoid the worst impacts of climate change. It’s certainly true that the more we warm the planet, the worse the impacts are likely to be. However, we are already experiencing dangerous consequences of climate change, with clear impacts on society and the environment.

For example, a recent study found that many of the excess deaths reported during the summer 2003 heatwave in Europe could be attributed to human-induced climate change.

Also, research has shown that the warm seas associated with the bleaching of the Great Barrier Reef in March 2016 would have been almost impossible without climate change.

Climate change is already increasing the frequency of extreme weather events, from heatwaves in Australia to heavy rainfall in Britain.

These events are just a taste of the effects of climate change. Worse is almost certainly set to come as we continue to warm the planet.

It’s highly unlikely we will achieve the targets set out in the Paris Agreement, but that doesn’t mean governments should give up. It is vital that we do as much as we can to limit global warming.

The more we do now, the less severe the impacts will be, regardless of targets. The simple take-home message is that immediate, drastic climate action will mean far fewer deaths and less environmental damage in the future.

By , Climate Extremes Research Fellow, University of Melbourne and , Research Fellow in Climate and Water Resources, University of Melbourne.

This article has been cross-posted from The Conversation.

Photo: Pixabay

Original Post

Categories: Foreign media

American Way of Financing Energy Efficiency Projects Could Lead to Breakthrough in Europe

Energy Collective - 18. august 2016 - 8:00

Retrofitting (Photo: Province of British Columbia)

A retrofit project of the National Health Service (NHS) in the UK is the first in Europe to sign up to a new energy efficiency accreditation scheme, imported from the United States. This Investor Ready Energy Efficiency (IREE) certification gives investors and financial institutions guarantees that a project is environmentally and financially sound. It could pave the way for a huge expansion of energy efficiency projects across Europe: “the IREE represents the beginning of the creation of a recognisable standard investment class.”

A joint retrofit project carried out by a consortium of three National Health Service (NHS) Trusts in Liverpool has become the first in Europe to receive the Investor Ready Energy Efficiency (IREE) certification.

Launched on 30 June 2016 by the Environmental Defense Fund’s Investor Confidence Project (ICP) Europe, the IREE certification for commercial and multifamily residential buildings is granted to projects that follow ICP’s framework, and thereby provide investors with more confidence in financial and environmental results. ICP Europe is funded by the European Commission’s Horizon 2020 programme.

Developed by the Carbon and Energy Fund, the £13 million NHS retrofit project is predicted to achieve annual savings of £1.85 million over the fifteen-year span of the energy performance contract. Even by the standard of the NHS, a UK leader in ambitious retrofitting projects, this represents financial and energy savings on an impressive scale. Set to be the first of many accredited by ICP in Europe, this project will contribute to paving the way towards an expansion of the market in energy efficiency renovation projects. The ICP accreditation aims at increasing the bankability of these projects through the medium of more streamlined transactions and increased reliability of projected energy savings for investors.
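
For a rough sense of the economics quoted above (a £13 million project saving £1.85 million a year over a fifteen-year contract), a back-of-envelope calculation looks like this; the 6% discount rate is an arbitrary assumption for illustration, not a figure from the project.

```python
# Back-of-envelope on the figures quoted above: a £13m retrofit saving £1.85m
# per year over a 15-year energy performance contract. The 6% discount rate is
# an arbitrary assumption, not a number from the NHS project.

capex = 13_000_000
annual_saving = 1_850_000
years = 15
discount_rate = 0.06  # assumed

simple_payback = capex / annual_saving
npv = -capex + sum(annual_saving / (1 + discount_rate) ** t for t in range(1, years + 1))

print(f"Simple payback: {simple_payback:.1f} years")
print(f"Undiscounted savings over contract: £{annual_saving * years:,.0f}")
print(f"NPV at {discount_rate:.0%}: £{npv:,.0f}")
```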

At the moment, the market in Europe is not big enough to go down the refinancing and securitisation route

The energy efficiency market already employs around 136,000 people in the UK, is worth more than £18 billion annually, and delivers exports valued at nearly £1.9 billion per year. The UK’s commercial retrofit market potential is estimated at a further £9.7 billion.

Project investors in the Liverpool Energy Collaboration, Macquarie Group, stated: “Macquarie is pleased to support the NHS energy efficiency upgrade project in Liverpool. We look forward to financing future energy efficiency projects as the UK transitions to a clean, low-carbon economy.”

Opening up market potential

As it expands, ICP’s Investor Ready Energy Efficiency certification could point the way toward mass-scale financing of energy efficiency in the building sector – where €100 billion per year is needed to reach the EU’s 2020 energy efficiency target. The recently launched Investor Network has brought together investors with €1 billion in assets under management looking for energy efficiency opportunities.

Steven Fawkes, senior energy advisor to ICP Europe, sees this project as a significant first step. “We are working on certifying 18 projects and programmes in five European countries, ranging from retrofitting swimming pool dehumidifier systems in Portugal to the deep energy retrofit of a 2000 m2 student accommodation complex in Germany.”

ICP Europe itself is an offshoot of the Investor Confidence Project in the US, launched five years ago (by the Environmental Defense Fund), which has been accrediting an array of projects there, from an apartment block in California to a church in Connecticut, which expects to save $44,000 a year in energy bills.

The smallest project that investors would consider viable for refinancing would be a £200 million one

Fawkes explains the longer term aims of ICP Europe: “We are building a standard asset class. For financial institutions the value is twofold. Firstly, there is a reduced due diligence cost (they can be confident that the project is developed to a high standard). Secondly, it will be helpful when aiming to aggregate projects and issue green bonds.” He explains that, at the moment, the market in Europe is not big enough to go down the refinancing and securitisation route.

When it comes to the potential for aggregation and refinancing, the requirement for ongoing measurement and verification is very important to prove that energy is being saved. However, according to Fawkes, the smallest project that investors would consider viable for refinancing would be a £200 million one. At least the IREE represents the “beginning of the creation of a recognisable standard investment class.”

EU collaboration

The Energy Efficiency Financial Institutions Group (EEFIG) was convened three years ago by the United Nations Environment Program and the European Commission to examine barriers to investment in energy efficiency. EEFIG is working with ICP Europe on a derisking project with two main functions: building a database of energy efficiency project performance so that investors can look at the performance of certain projects, and the development of a guide to standardised underwriting procedures for financial institutions.

Fawkes points out that “when the European Commission’s DG Energy publishes this guide next year, financial institutions will sign up to use it, giving added value to the IREE standards, which will be incorporated in it.”

Interested parties are invited to contribute to ICP Europe’s efforts through the Technical Forum and help make energy efficiency a global asset class by joining the ICP Europe Ally Network.

by

Original Post

Categories: Foreign media

Is The Oil Production Efficiency Boom Coming To An End?

Energy Collective - 18. august 2016 - 7:00

“How much more productive can these new wells get?” I asked my host who had kindly invited me on a field trip to some of western Canada’s most prolific oil fields.

Looking down the aisle of bobbing pump jacks, seven in a row on one side of the immaculate gravel pad, the veteran oil executive replied, “We can get up to 1,000 barrels a day out of some of the new ones, but that’s not a limit; we’re improving the economics and productivity with each new well.”

Impressive I thought, subconsciously nodding my head in sync with the leading pump jack.

“Back when I was in field exploration,” I said sounding like a grey-haired guy, “we used to high-five if a new well put out 100 barrels a day.”

The stats certainly show that the industry’s ability to pull oil out of the ground from parts of North America has recently improved by an order of magnitude. In other words, ‘rig productivity’ – the amount of new oil production that an average rig can bring on in one month of drilling – has increased 10-fold in the last 6 years.

Both of us stood marvelling at the operation in front of us, tacitly thinking the same questions. What are the limits to this remarkable energy megatrend: 1,000? 2,000? 5,000 barrels per day per well?

“All this appears amazing,” I said, “like some sort of Moore’s Law for guys in hard hats and overalls.”

Moore’s Law – the observation that the number of transistors on a computer chip doubles about every two years – is the gold standard for benchmarking innovation. But at the same time I explained to my host that I’m always cautious about being duped by exuberant technology, trends and numbers. For example, we’ve all subscribed to “high-speed” Internet services that claim dozens of Megabits per second, but grind to a fraction of the advertised rate when half the neighborhood is binge-watching Netflix. “The notion of ‘rig productivity’ has to be taken with caution,” I noted. “We can’t assume that the best posted performance in the field is the norm for all wells.”

My thoughts turn to the US Energy Information Administration (EIA), which every month publishes a report on rig productivity.

July data from the Eagle Ford play in Texas – one of the most prolific in North America – shows that rig productivity in the area is now over 1,000 B/d per rig, similar to the Canadian play area I’m visiting. The rate of innovation has been impressive; between 2007 and 2014, when oil prices were above $100/B, rig productivity was increasing by 110 B/d per rig, every year.


But a curious thing is noticeable at the end of 2014: Rig productivity suddenly kicked up in all the US plays. The Eagle Ford was particularly impressive, going from 550 to 1,100 B/d per rig. I did a double take. Really? A doubling every 18 months? That’s better than Moore’s law!

The well pad I’m standing on is a testament to ongoing innovation, but there is a statistical distortion at play. Starting in late 2014, the severe downturn in oil prices forced the industry to park three-quarters of their rigs and “high-grade” their inventory of prospects. Producers focused on only their best rocks, drilling with only the most efficient rigs. All the low productivity stuff was culled out of the statistical sampling, skewing the average productivity numbers much higher.
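
The high-grading effect described above is easy to reproduce with a toy simulation: cull the weakest three-quarters of a fixed fleet and the reported average jumps even though no individual rig improves. The lognormal parameters below are arbitrary, chosen only to give a plausible spread, and are not fitted to any real well data.

```python
# Toy illustration of "high-grading": idle the least productive three-quarters
# of a fixed rig fleet and the reported average rises sharply, even though no
# individual rig got any better. The lognormal parameters are arbitrary.

import random
import statistics

random.seed(42)
fleet = [random.lognormvariate(mu=6.0, sigma=0.7) for _ in range(400)]  # B/d per rig

fleet.sort()
survivors = fleet[int(0.75 * len(fleet)):]  # keep only the top quarter

print(f"Average before high-grading: {statistics.mean(fleet):.0f} B/d per rig")
print(f"Average after parking 75% of rigs: {statistics.mean(survivors):.0f} B/d per rig")
```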


It’s only an estimate, but the average productivity in the Eagle Ford using a wider spectrum of rocks is probably at least 30% less, in the 700 B/d per rig range. Higher oil prices are needed to attract rigs to those lesser quality locations. The much-vaunted Permian only averages about 350, despite showing 500 B/d per rig in the EIA data.

But top-end numbers like 700 B/d per rig should still be a cold-shower wake up call to high-cost oil companies, or to the champions of rival energy systems trying to supplant oil. Notwithstanding the high-grading of the rig sample to premium “sweet spots”, the average productivity in the best North American plays is still improving by about 100 B/d per rig per year, with several years of running room left. It’s not exponential like Moore’s Law, but the pace of innovation is on par with many trends in the tech world.

“We’ve got some of the best acreage and best practices in Western Canada,” said the CEO, looking into the distance and panning his hand across the company’s leased land.

“Yeah, it’s amazing, by the numbers you’re as good as or better than some of the best in North America,” I replied. “But ‘best’ also reminds me to consider that other areas are not as good.”

By Peter Tertzakian for Oilprice.com

The post, Is The Oil Production Efficiency Boom Coming To An End?, was first published on OilPrice.com.

Categories: Foreign media

50(d): What Does It Mean for the Tax Credit Market?

Energy Collective - 18. august 2016 - 6:00

Recent guidance from the IRS will provide certainty to the tax credit market.

Last month, we wrote in SOURCE that after years of anticipation, a 50(d) income ruling would soon be released. Sure enough, the Internal Revenue Service (IRS) issued temporary regulations in the Federal Register on July 22. In its issuance, the IRS clarifies its recognition of the income associated with the tax credit for lease pass-through transactions and whether that income should be included in a partner’s outside basis calculations. It has broad implications for many market participants.

To take a step back, let’s look at the lease pass-through structure and understand how it has different capital accounting treatment for the investment tax credit (ITC) compared to the partnership flip.

Under the partnership flip structure, the regulations direct tax equity investors to deduct half of the value of the 30 percent investment tax credit (ITC) when calculating their outside basis in year 1. Solar tax equity investors utilize outside basis for the purposes of realizing the depreciation associated with the investment, and calculating a gain or loss upon exit from the partnership that owns the project after the recapture period. Reducing an investor’s outside basis therefore means reducing the investor’s ability to absorb losses and/or take a capital loss upon exit – both of which are benefits for tax equity investors.

In a lease pass-through structure, the ITC is instead passed through to the Master Tenant where it is then allocated to the partners. Section 50(d) requires the partner to include one-half the ITC ratably into income across the depreciable life of the asset (in this case, five years). The inclusion of one-half the ITC value into income results in greater taxable income for the partner.

But here’s the key. Under normal accounting treatment, any taxable income received increases the capital account. An increase to the capital account, you guessed it, allows a partner to absorb greater losses, offset taxable income, and enjoy a larger loss on exit (depending on the particular deal). In other words, for a tax-laden investor, 50(d) income is a good thing. The industry broadly followed this interpretation, making the lease pass-through structure quite popular despite its particular complexities.
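
A simplified set of numbers may help make the two treatments concrete. The sketch below assumes a hypothetical $1,000,000 of ITC-eligible basis and the 30% credit, and it ignores depreciation, allocations and the many other mechanics of a real deal; it only shows where the "one-half of the ITC" figure lands in each structure.

```python
# Simplified numbers for the two structures described above, using a
# hypothetical $1,000,000 of ITC-eligible basis and the 30% credit. This is
# only meant to show where "one-half of the ITC" shows up in each case; it is
# not tax advice and omits most real deal mechanics.

eligible_basis = 1_000_000
itc = 0.30 * eligible_basis                 # $300,000 investment tax credit

# Partnership flip: outside basis is reduced by half the ITC in year 1.
flip_basis_reduction_year1 = itc / 2        # $150,000

# Lease pass-through: half the ITC is taken into income ratably over the
# five-year depreciable life under section 50(d).
recovery_years = 5
annual_50d_income = (itc / 2) / recovery_years   # $30,000 per year

print(f"ITC: ${itc:,.0f}")
print(f"Flip structure, year-1 outside basis reduction: ${flip_basis_reduction_year1:,.0f}")
print(f"Lease pass-through, annual 50(d) income inclusion: ${annual_50d_income:,.0f}")
```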

Then, almost two years ago, the IRS sensed peace in the kingdom and announced it would issue clarity on this subject. In December, news broke that guidance was pending, creating uncertainty in the tax credit market, mainly in the form of price bifurcation as investors and syndicators priced these transactions differently depending on views on how the IRS would ultimately interpret the rule, variations on who would wear the risk, and bets on when the guidance would in fact be released.

In its ruling, the IRS states that:

  • Investors are not entitled to an increase in their capital accounts under 50(d)
  • 50(d) income is a partner item not a partnership item, and each partner in the lessee partnership is the taxpayer
  • 50(d) income does increase a partner’s outside basis

Now that the industry has more clarity on IRS intent, it is our expectation that the tax credit market will find a new equilibrium for transactions moving forward. Moreover, additional certainty may attract new investors to the solar ITC space, as historic and other tax credit markets also adapt to these changes.

If this all sounds wonky, it is. Don’t worry; we are here to help. To learn more, contact our tax structured team at finance@solsystems.com with the subject line “Tax Equity”. We have placed tax equity into over 200MW of solar assets across the country, and can explain what the temporary regulations mean for investors.

This is an excerpt from the August edition of SOURCE: the Sol Project Finance Journal, a monthly electronic newsletter analyzing the solar industry’s latest trends based on our unique position in the solar financing space. To view the full Journal or subscribe, please e-mail pr@solsystems.com.

By Sara Rafalson

ABOUT SOL SYSTEMS

Sol Systems is a leading solar energy investment and development firm with an established reputation for integrity and reliability. The company has financed approximately 450MW of solar projects, and manages over $500 million in assets on behalf of insurance companies, utilities, banks, and Fortune 500 companies.

Sol Systems works with its corporate and institutional clients to develop customized energy procurement solutions, and to architect and deploy structured investments in the solar asset class with a dedicated team of investment professionals, lawyers, accountants, engineers, and project finance analysts.

Original Post

Categories: Foreign media

Pacific sea level predicts global temperature changes

EurekAlert - Atmospheric - 18. august 2016 - 6:00
(University of Arizona) Sea level changes in the Pacific Ocean can be used to estimate future global surface temperatures, according to a new paper in Geophysical Research Letters. Scientists knew that both the rate at which global surface temperature is rising and sea level in the Pacific varied, but had not connected the two phenomena. The researchers estimate that by the end of 2016, average surface temperature will increase up to 0.5 F (0.28 C) more than in 2014.

NASA sees Tropical Depression 10W form near Guam

EurekAlert - Atmospheric - 18. august 2016 - 6:00
(NASA/Goddard Space Flight Center) NASA's Terra satellite captured an infrared image of the developing Tropical Storm 10W in the Northwestern Pacific Ocean near Guam.

Addressing changes in regional groundwater resources: Lessons from the high plains aquifer

EurekAlert - Atmospheric - 18. august 2016 - 6:00
(American Geosciences Institute) This Critical Issues Forum is a 1½-day meeting that will cover multiple aspects of groundwater depletion in the High Plains and will include abundant time for participant discussion.

NASA sees Tropical Storm 12W over the open Northwestern Pacific Ocean

EurekAlert - Atmospheric - 18. august 2016 - 6:00
(NASA/Goddard Space Flight Center) Tropical Storm 12W formed over the open waters of the Northwestern Pacific Ocean, far southeast of the big island of Japan. NASA's Aqua satellite captured an image of the small storm on Aug. 18, 2016.

NASA sees formation of Atlantic Ocean's Tropical Storm Fiona

EurekAlert - Atmospheric - 18. august 2016 - 6:00
(NASA/Goddard Space Flight Center) NASA-NOAA's Suomi NPP satellite passed over Tropical Storm Fiona as it developed in the northeastern Atlantic Ocean and captured a visible image of the strengthening storm.

Mussel flexing: Bivalves save drought-stricken marshes, research finds

EurekAlert - Atmospheric - 18. august 2016 - 6:00
(University of Florida) As coastal ecosystems feel the heat of climate change worldwide, new research shows the humble mussel and marsh grass form an intimate interaction known as mutualism that benefits both partner species and may be critical to helping these ecosystems bounce back from extreme climatic events such as drought.

Blue Cut Fire in California spreads quickly

EurekAlert - Atmospheric - 18. august 2016 - 6:00
(NASA/Goddard Space Flight Center) The Blue Cut Fire, just outside of Los Angeles, is a quickly growing fire that is currently an imminent threat to public safety, rail traffic and structures.

'Ecosystem canaries' provide early warning signs of catastrophic changes to ecosystems

EurekAlert - Atmospheric - 18. august 2016 - 6:00
(University of Southampton) New research, led by the University of Southampton, demonstrates that 'ecosystem canaries' can provide early warning signals of large, potentially catastrophic, changes or tipping points in ecosystems.

Urbanization affects diets of butterflies: NUS study

EurekAlert - Atmospheric - 18. august 2016 - 6:00
(National University of Singapore) A study led by researchers from the National University of Singapore revealed that most tropical butterflies feed on a variety of flower types, but those that are 'picky' about their flower diets tend to prefer native plants and are more dependent on forests. These 'picky' butterflies also have more conspicuous wings and shorter proboscises.

Carbon molecular sieve membranes could cut energy in hydrocarbon separations

EurekAlert - Atmospheric - 18. august 2016 - 6:00
(Georgia Institute of Technology) A research team from the Georgia Institute of Technology and ExxonMobil has demonstrated a new carbon-based molecular sieve membrane that could dramatically reduce the energy required to separate a class of hydrocarbon molecules known as alkyl aromatics.

News: ‘Yes Nukes’ in New York and Tennessee as Old Plants and New Get New Lease on Life

Energy Collective - 18. august 2016 - 5:00

Image courtesy of Tennessee Valley Authority.

Nuclear power generates about one-fifth of all electricity in the United States, but it’s had a rough 40 years or so. Between high upfront costs for installation, complicated permitting, rare but dramatic accidents, and general NIMBY opposition, development of new U.S. nuclear facilities all but halted in the last part of the twentieth century. But that might be changing. In the news this week: new nuclear projects in the Southeast, new nuke-supporting policy in New York, and small modular reactors unveiling some of their secrets. It’s a physics-filled news update from Advanced Energy Perspectives.

The Tennessee Valley Authority is doing final tests on Watts Bar 2, which, when it comes into commercial operation at summer’s end, will be the first new reactor in the United States since Watts Bar 1 came online in 1996. Last week brought a major milestone in testing: the Watts Bar 2 reactor reached 75% power, operators then deliberately lowered the generation level to 30%, and the unit then completed a deliberate and successful shutdown.

Watts Bar unit 2 actually began construction in 1973, with work halted for significant periods in the interim, both for safety concerns and economics. The latest round of construction kicked off again in 2007, and the reactor first came online in May. TVA is cutting no corners. The testing process is slow: ramping up the reactor bit by bit before beginning shutdown protocols. Watts Bar 2 also features the FLEX system developed by the nuclear industry in response to the Nuclear Regulatory Commission’s Fukushima task force. FLEX involves additional backup power and emergency equipment protecting against a major disruption threatening the cooling system.

In Georgia, more signs of the long-promised nuclear renaissance. As we reported last week, state regulators gave Georgia Power Co. permission to start laying groundwork for a new nuclear facility south of Columbus in rural Stewart County.

Public Service Commission staff had recommended putting off the decision until 2019 but, in light of what the Atlanta Business Chronicle characterized as “growing pressure from the federal government on states to reduce carbon emissions from coal-burning power plants coupled with the volatility of natural gas prices,” the Commission voted to let Georgia Power go ahead with preliminary site work and licensing now.

“If we’re going to close coal plants and add renewables, we have to add more base-load [capacity],” said Commissioner Tim Echols. “It has to be carbon-free nuclear power.”

New York certainly thinks so. Earlier this month the New York PSC approved a clean energy standard of 50% renewable energy by 2030 that includes guaranteed income for nuclear power plants. The subsidies for three aging upstate plants use a formula outlined by Utility Dive as “based on expected power costs and the social price on carbon” that federal agencies use. Basically, the PSC has decided to consider the positive externalities of nuclear energy – a reliable energy source that does not produce any emissions or pollutants, including greenhouse gases – at least while renewable energy capacity is built up. This is something that owners of existing nuclear plants, challenged by competition from low-priced natural gas, have been calling for, though mostly to no avail – until now.

Already this policy has borne fruit. Exelon has entered into an agreement to purchase the James A. FitzPatrick nuclear power plant located in Scriba, in upstate New York, which Entergy had planned to retire later this year or early next year if it couldn’t find a buyer. With the PSC’s decision to guarantee the plant’s ability to generate revenue, Exelon was ready to make a deal.

The $110 million price tag may turn out to be a screaming deal for Exelon. The plant does need to be refueled next year, which will cost money, but it’s nothing like the billions usually necessary to start up a nuclear plant. The Watts Bar 2 reactor, for instance, cost $4.7 billion to build, and that’s about half of what similar nuclear plants will cost in Georgia and South Carolina. But with New York’s commitment to keep its nukes going, that $110 million investment could pay off, at least through 2029.

Exelon’s CEO and President Chris Crane sure thinks so. “We are pleased to have reached an agreement for the continued operation of FitzPatrick,” Crane said in a statement. “We look forward to bringing FitzPatrick’s highly-skilled team of professionals into the Exelon Generation nuclear program, and to continue delivering to New York the environmental, economic and grid reliability benefits of this important energy asset.”

On the other end of the nuclear generation scale, small nuclear reactors are continuing to seek a spot in the market. TVA submitted the first-ever permit application to locate a small modular reactor (SMR) near the Clinch River earlier this summer. Transatomic, one of the companies offering SMR designs, recently released a white paper detailing the design of their reactor. “This design is the result of years of open, clearly communicated scientific progress,” said Dr. Leslie Dewan, Transatomic’s CEO. “Our research has demonstrated many-fold increases in fuel efficiency over existing technologies, and we’re really excited about the next steps in our development process.”

As Penn State professor Edward Klevans wrote in an Op-Ed for the Pittsburgh Post-Gazette this week, “there is an overwhelming case for continued reliance on, and expansion of, America’s nuclear energy infrastructure.”


Categories: Foreign media

Energy-Related CO2 Emissions from Natural Gas Surpass Coal as Fuel Use Patterns Change

Energy Collective - 18. august 2016 - 4:00

Source: U.S. Energy Information Administration, Short-Term Energy Outlook (August 2016) and Monthly Energy Review

 

Energy-associated carbon dioxide (CO2) emissions from natural gas are expected to surpass those from coal for the first time since 1972. Even though natural gas is less carbon-intensive than coal, increases in natural gas consumption and decreases in coal consumption in the past decade have resulted in natural gas-related CO2 emissions surpassing those from coal. EIA’s latest Short-Term Energy Outlook projects energy-related CO2 emissions from natural gas to be 10% greater than those from coal in 2016.

From 1990 to about 2005, consumption of coal and natural gas in the United States was relatively similar, but their emissions were different. Coal is more carbon-intensive than natural gas. The consumption of natural gas results in about 52 million metric tons of CO2 for every quadrillion British thermal units (MMmtCO2/quad Btu), while coal’s carbon intensity is about 95 MMmtCO2/quad Btu, or about 82% higher than natural gas’s carbon intensity. Because coal has a higher carbon intensity, even in a year when consumption of coal and natural gas were nearly equal, such as 2005, energy-related CO2 emissions from coal were about 84% higher than those from natural gas.

In 2015, natural gas consumption was 81% higher than coal consumption, and their emissions were nearly equal. Both fuels were associated with about 1.5 billion metric tons of energy-related CO2 emissions in the United States in 2015.
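
Those figures can be cross-checked against the carbon intensities quoted above. In the sketch below, the 2015 coal consumption value of roughly 15.5 quadrillion Btu is an approximation used only to make the ratios concrete; the intensities are the ones given in the text.

```python
# Cross-check of the ratios quoted above. The 2015 coal consumption of roughly
# 15.5 quadrillion Btu is an approximation used only to make the comparison
# concrete; the carbon intensities are the ones given in the text.

GAS_INTENSITY = 52    # million metric tons CO2 per quadrillion Btu
COAL_INTENSITY = 95

# 2005-style case: roughly equal consumption of coal and natural gas.
# Prints ~83%, i.e. the 82%-84% range quoted in the text (exact value depends on rounding).
print(f"Coal emissions at equal consumption: {COAL_INTENSITY / GAS_INTENSITY - 1:.0%} higher than gas")

# 2015-style case: natural gas consumption 81% higher than coal consumption.
coal_quads = 15.5                 # assumed, quadrillion Btu
gas_quads = coal_quads * 1.81
print(f"Coal: {coal_quads * COAL_INTENSITY:,.0f} MMmt CO2, gas: {gas_quads * GAS_INTENSITY:,.0f} MMmt CO2")
```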

Source: U.S. Energy Information Administration, Short-Term Energy Outlook (August 2016) and Monthly Energy Review.

 

Annual carbon intensity rates in the United States have generally been decreasing since 2005. The U.S. total carbon intensity rate reflects the relative consumption of fuels and those fuels’ relative carbon intensities. Petroleum, at about 65 MMmtCO2/quad Btu, is less carbon-intensive than coal but more carbon-intensive than natural gas. Petroleum accounts for a larger share of U.S. energy-related CO2 emissions because of its high levels of consumption.

Another contributing factor to lower carbon intensity is increased consumption of fuels that produce no carbon dioxide, such as nuclear-powered electricity and renewable energy. As these fuels make up a larger share of U.S. energy consumption, the U.S. average carbon intensity declines. Although use of natural gas and petroleum has increased in recent years, the decline in coal consumption and increase in nonfossil fuel consumption have lowered U.S. total carbon intensity from 60 MMmtCO2/quad Btu in 2005 to 54 MMmtCO2/quad Btu in 2015.

Republished on August 17, 2016 at 9:30 a.m. to correct the units for carbon dioxide intensities.

Principal contributors: Eliza Goren, Perry Lindstrom

Original Post

Categories: Foreign media

What the New NASA ‘Hot Spot’ Study Tells Us About Methane Leaks

Energy Collective - 18. august 2016 - 3:00

Look up in New Mexico and on most days you’ll see the unmistakable blue skies that make the Southwest so unique.

But there’s also something ominous hovering over the Four Corners that a naked eye can’t detect: A 2,500-square mile cloud of methane, the highest concentration of the heat-trapping pollution anywhere in the United States. The Delaware-sized hot-spot was first reported in a study two years ago.

At the time, researchers were confident the cloud was associated with fossil fuels, but unsure of the precise sources. Was it occurring naturally from the region’s coal beds or coming from a leaky oil and gas industry?

Now a team mainly funded by NASA and the National Oceanic and Atmospheric Administration has published a new paper in a top scientific journal that starts to provide answers. They find that many of the highest emitting sources are associated with the production, processing and distribution of oil and natural gas.

For this study, the authors flew over a roughly 1,200-square-mile portion of the San Juan Basin and found more than 250 high-emitting sites, including many oil and gas facilities. They also noted that a small portion of them, about 10%, were responsible for more than half of the studied emissions.

This does not come as a big surprise. In 2014, according to industry’s self-reported emissions data, oil and gas sources accounted for approximately 80% of methane pollution in the San Juan Basin. The findings are also very consistent with results from one of EDF’s methane studies in Texas’ Barnett Shale, which also found disproportionate emissions from super emitters.
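
The "small share of sites, large share of emissions" pattern falls naturally out of any heavy-tailed distribution of site-level emission rates. The illustration below samples from a lognormal distribution with arbitrary parameters, not fitted to the San Juan Basin data, simply to show the shape of the effect.

```python
# Illustration of the skewed, "super-emitter" pattern described above: with a
# heavy-tailed (here lognormal) distribution of site-level emission rates, a
# small fraction of sites accounts for most of the total. The parameters are
# arbitrary and not fitted to any measured data.

import random

random.seed(0)
sites = sorted((random.lognormvariate(mu=0.0, sigma=1.6) for _ in range(250)), reverse=True)

top_10pct = sites[: len(sites) // 10]
share = sum(top_10pct) / sum(sites)
print(f"Share of total emissions from the top 10% of sites: {share:.0%}")
```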

Finding super emitters

Since “super emitters” can and do appear anywhere at any time, it is critical to be constantly on the lookout for them so they can be fixed.

The good news: because of the outsized contribution of a fraction of sites, the authors note that reducing these emissions can be done cost-effectively through improved detection practices.

That’s consistent with what we know from a vast and growing body of methane research. And it means we can make a big dent in the methane cloud.

$100 million in wasted natural gas a year

Of course, that leaking methane isn’t just climate pollution; it’s also the waste of a finite natural resource.

In New Mexico, the vast majority of natural gas production takes place on public and tribally owned lands – meaning that when gas is wasted, it represents a tremendous amount of lost revenue for state and tribal governments. The San Juan Basin is responsible for only 4% of total natural gas production in the country, but is responsible for 17% of the nation’s overall natural gas waste on federal and tribal lands. In fact, nearly a third of all methane wasted on public and tribal lands occurs in New Mexico.

A report by ICF International found that venting, flaring and leaks from oil and gas sites on federal and tribal land in New Mexico, alone, effectively threw away $100 million worth of gas in 2013 – the worst record in the nation. That, in turn, represents more than $50 million in lost royalties to taxpayers over the last five years.

Nearby, we see a different story

Capturing methane and preventing waste at oil and gas operations on federal lands is an opportunity to save a taxpayer-owned energy resource while at the same time tackling a major source of climate pollution.

Over the past year, the Bureau of Land Management, which oversees federal and tribal lands such as those in the San Juan Basin, has moved to limit methane emissions. The agency based its action in large part on experiences from New Mexico’s neighboring states, which have started to use modern practices and technologies to dramatically reduce this waste.

In 2013, San Juan Basin operators reported almost 220,000 metric tons of methane emissions. By comparison, Wyoming’s Upper Green River Basin has almost twice the natural gas production of the San Juan Basin, but only half the emissions.

Why the difference? Wyoming, like Colorado, has worked to put strong new rules in place to reduce emissions. And they are working.

Strong rules from BLM can do the same, but they must be completed and implemented quickly – to better protect the Land of Enchantment and federal and tribal lands across the U.S.

Image credit: NASA/JPL-Caltech/University of Michigan

By Ramon Alvarez, Ph.D., Senior Scientist

Original Post

Categories: Foreign media

LEED Dynamic Plaque Performance: From Olympic Athletes to Buildings

Energy Collective - 18. august 2016 - 2:00

One of my favorite parts of the Olympics is the behind the scenes featurettes of different athletes. I love seeing home videos of their childhood athletic pursuits, hearing stories about how much time and effort went into honing their skill, and then finally witnessing their status as the high performing athletes they’ve become.

I’ve noticed that for much of the four-year gap between each Olympic games the efforts of these elite athletes go uncelebrated. For a brief period of 16 days, every four years, their performance is measured and scored and the world takes notice. But despite this gap in recognition, these athletes are measuring their performance every day.

Just like our athletes, we reward our high-performing buildings with Platinum, Gold, Silver and Certified designations. Our buildings are alive and implement initiatives year-round that keep their sustainability goals on track for record success. Training as an elite athlete for the Olympics and getting started with the LEED Dynamic Plaque both begin with measuring and scoring performance.

What is the period of time during which we tell the performance story of a building?

For projects engaging with the LEED Dynamic Plaque, the performance period is 365 days and a key element of the platform that defines the boundaries of the Performance Score. Why does the performance period matter? It’s the window of time during which measured building data is scored. The score reflected in the platform represents a rolling annual average of the data provided. Critically, it is also the period for which GBCI reviews a performance score.

Project teams, engaging with the platform, input measured building data over the course of the 365 day performance period, and then submit that data as well as supporting documentation to GBCI for review. Let’s look at a sample project that signed up for a five-year subscription with the LEED Dynamic Plaque on June 1, 2016. Their first performance period would extend from 6/1/2016 – 5/31/17. Year 2 would extend from 6/1/17 – 5/31/18 and so on until 2021.
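
Those yearly windows can be generated mechanically. The sketch below reproduces the sample project's schedule (a five-year subscription starting June 1, 2016), treating each performance period as running from one anniversary of the start date to the day before the next.

```python
# Sketch of the yearly performance-period windows for the sample project above:
# a five-year LEED Dynamic Plaque subscription starting 1 June 2016. Each
# window runs from the anniversary of the start date to the day before the
# next anniversary, matching the 6/1 - 5/31 example in the text.

from datetime import date, timedelta

start = date(2016, 6, 1)
subscription_years = 5

for year in range(subscription_years):
    period_start = date(start.year + year, start.month, start.day)
    period_end = date(start.year + year + 1, start.month, start.day) - timedelta(days=1)
    print(f"Year {year + 1}: {period_start} to {period_end}")
```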

Project teams should make sure they understand their performance period and that their documentation covers all data reported within the timeframe, before submitting documentation to GBCI. One of the most common review comments that GBCI provides to project teams that are engaging with the platform to maintain certification is the lack of sufficient documentation to cover the entire performance period.

Are you ready to share your performance story?

By David Marcus

Original Post

Categories: Foreign media

Infinite Solar?

Energy Collective - 17. august 2016 - 10:00

An infographic published earlier this year asks the question “Could the world be 100% solar?”. The question is answered in the affirmative by demonstrating that so much solar energy falls on the Earth’s surface, all energy needs could be met by covering just 500,000 km2 with solar PV. This represents an area a bit larger than Thailand, but still only ~0.3% of the total land surface of the planet. Given the space available in deserts in particular and the experience with solar PV in desert regions in places such as California and Nevada, the infographic argues that there are no specific hurdles to such an endeavor.
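
A quick order-of-magnitude check shows where a figure like 500,000 km2 comes from. The average insolation and panel efficiency used below are round assumed values (about 200 W/m2 incident and 20% conversion), not numbers taken from the infographic.

```python
# Order-of-magnitude check of the 500,000 km2 figure above. The insolation and
# panel efficiency are round assumed values, not numbers from the infographic.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

area_m2 = 500_000 * 1e6          # 500,000 km2 in m2
avg_insolation_w_m2 = 200        # assumed global average incident solar, W/m2
pv_efficiency = 0.20             # assumed

avg_power_w = area_m2 * avg_insolation_w_m2 * pv_efficiency
annual_energy_ej = avg_power_w * SECONDS_PER_YEAR / 1e18

print(f"Average output: {avg_power_w / 1e12:.0f} TW")
print(f"Annual electricity: {annual_energy_ej:.0f} EJ (global primary energy demand is ~500 EJ)")
```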

However, solar PV is both intermittent and only delivers electricity, which currently makes up just 20% of final energy use. Oil products make up the bulk of the remaining 80%. As I noted in a recent post, even in the Shell net-zero emissions scenario, electricity still makes up only 50% of final energy. In that case, what might a 100% solar world really look like and is it actually feasible beyond the simple numerical assessment?

The first task is of course to generate sufficient electricity, not just in terms of total gigawatt hours, but in gigawatt hours when and where it is needed. As solar is without question intermittent in a given location, this means building a global grid capable of distribution to the extent that any location can be supplied with sufficient electricity from a location that is in daylight at that time. In addition, the same system would likely need access to significant electricity storage, certainly on a scale that far eclipses even the largest pumped water storage currently available. Energy storage technologies such as batteries and molten salt (well suited to concentrated solar thermal) only operate on a very small scale today.

The Chinese State Grid has been busy building ultra-high voltage long distance transmission lines across China and they have imagined a world linked by a global grid (Wall Street Journal, March 30 2016 and Bloomberg, April 3rd 2016) with a significant proportion of electricity needs generated by solar from the equator and wind from the Arctic.

But could this idea be expanded to a grid which supplies all the electricity needs of the world? A practical problem here is that for periods of the day at certain times of the year the entire North and South American continents are in complete darkness, which means that the grid connection would have to extend across the Atlantic or Pacific Oceans. While the cost of a solar PV cell may be pennies in this world, the cost of deploying electricity from solar as a global 24/7 energy service could be considerable. The cost of the cells themselves may not even feature.

But as noted above, electricity only gets you part of the way there, albeit a substantial part. Different forms of energy will be needed for a variety of processes and services which are unlikely to run on direct or stored electricity, even by the end of this century. Examples are:

  • Shipping currently runs on hydrocarbon fuels, although large military vessels have their own nuclear reactors.
  • Aviation requires kerosene, with stored electricity a very unlikely alternative. The fuel-to-weight ratio of electro-chemical (battery) storage, even given advances in battery technology, makes this a distant option. Although a small electric plane carrying one person for 30 minutes of flight has been tested, extending this to an A380 flying for 14 hours would require battery technology that doesn’t currently exist. Still, some short-haul commuter aircraft might become electric.
  • While electricity may be suitable for many modes of road transport, it may not be practical for heavy goods transport and large scale construction equipment. Much will depend on the pace and scope of battery development.
  • Heavy industry requires considerable energy input, such as from furnaces powered by coal and natural gas. These reach the very high temperatures necessary for processes such as chemical conversion, making glass, converting limestone to cement and refining ores to metals. Economy of scale is also critical, so delivering very large amounts of energy into a relatively small space is important. In the case of the metallurgical industries, carbon (usually from coal) is also needed as a reducing agent to convert the ore to a refined metal. Electrification will not be a solution in all cases.

All the above argues for another energy delivery mechanism, potentially helping with (or even solving) the storage issue, offering high temperatures for industrial processes and the necessary energy density for transport. The best candidate appears to be hydrogen, which could be made by electrolysis of water in our solar world (although today it is made much more efficiently from natural gas and the resulting carbon dioxide can be geologically stored – an end-to-end process currently in service for Shell in Canada). Hydrogen can be transported by pipeline over long distances, stored for a period and combusted directly. Hydrogen could also feature within the domestic utility system, replacing natural gas in pipelines (where suitable) and being used for heating in particular. This may be a more cost-effective route than building sufficient generating capacity to heat homes with electricity on the coldest winter days. It is even possible to use hydrogen as the reducing agent in metallurgical processes instead of carbon, although the process to do so still only exists at laboratory scale.

But the scale of a global hydrogen industry to support the solar world would far exceed the global Liquefied Natural Gas (LNG) industry we have today. That industry includes around 300 million tonnes per annum of liquefaction capacity and some 400 LNG tankers. That amounts to about 15 EJ of final energy compared to the current global primary energy demand of 500 EJ. In a 1000 EJ world that we might see in 2100, a role for hydrogen as an energy carrier that reached 100 EJ would imply an industry that was seven times the size of the current LNG system. But hydrogen has 2-3 times the energy content of natural gas per unit of mass and liquid hydrogen is one sixth the density of LNG (important for ships), so a very different-looking industry would emerge. Nevertheless, the scale would be substantial.
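
Rough numbers illustrate the comparison. The heating values and liquid densities below are standard approximate figures (LNG at about 50 MJ/kg and 450 kg/m3, hydrogen at about 120 MJ/kg and 71 kg/m3), used only to size the comparison, and the 100 EJ hydrogen share is the hypothetical value from the paragraph above.

```python
# Rough scaling of the hydrogen-vs-LNG comparison above. Heating values and
# liquid densities are standard approximate figures, used only for sizing.

LNG_EJ = 15.0             # today's LNG trade, from the text (~300 Mt/yr)
H2_EJ = 100.0             # hypothetical hydrogen share of a 1000 EJ world

LNG_MJ_PER_KG, H2_MJ_PER_KG = 50.0, 120.0
LNG_KG_PER_M3, H2_KG_PER_M3 = 450.0, 71.0

h2_mt = H2_EJ * 1e12 / H2_MJ_PER_KG / 1e9      # EJ -> MJ -> kg -> million tonnes
lng_mt = LNG_EJ * 1e12 / LNG_MJ_PER_KG / 1e9

h2_volume_m3 = h2_mt * 1e9 / H2_KG_PER_M3      # liquid hydrogen volume
lng_volume_m3 = lng_mt * 1e9 / LNG_KG_PER_M3   # LNG volume

print(f"Energy ratio: {H2_EJ / LNG_EJ:.1f}x today's LNG trade")
print(f"Mass: {h2_mt:.0f} Mt/yr of hydrogen vs {lng_mt:.0f} Mt/yr of LNG")
print(f"Liquid volume ratio: {h2_volume_m3 / lng_volume_m3:.0f}x")
```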

Finally, but importantly, there are the things that we use, from plastic water bottles to the Tesla Model S. Everything has carbon somewhere in the supply chain or in the product itself. There is simply no escaping this. The source of carbon in plastics, in the components in a Tesla and in the carbon fibre panels in a Boeing 787 is crude oil (and sometimes natural gas). So our infinite solar world needs a source of carbon and on a very large scale. This could still come from crude oil, but if one objective of the solar world is to contain that genie, then an alternative would be required. Biomass is one and a bioplastics industry already exists. In 2015 it was 1-2 million tonnes per annum, compared to ~350 million tonnes for the traditional plastics industry.

Another source of carbon could be carbon dioxide removed directly from the atmosphere or sourced from industries such as cement manufacture. This could be combined with hydrogen and lots of energy to make synthesis gas (CO + H2), which can be a precursor for the chemical industry or an ongoing liquid fuels industry for sectors such as aviation. Synthesis gas is manufactured today on a large scale from natural gas in Qatar and then converted to liquid fuels in the Shell Pearl Gas to Liquids facility. Atmospheric extraction of carbon dioxide is feasible, but remains a pilot technology today, although some companies are looking at developing it further.

The solar world may be feasible as this century progresses, but it is far from the simple solution that it is often portrayed as. Vast new industries would need to emerge to support it and each of these would take time to develop. The LNG industry first started in the early 1960s and is now a major part of the global economy, but still only carries a small fraction of global energy needs.

The new Shell publication, A Better Life with a Healthy Planet: Pathways to Net Zero Emissions, shows that in 2100 solar could be a 300 EJ technology, compared to the roughly 2.5 EJ it provides today. This is in a world with primary energy demand of 1000 EJ.

Scenarios are part of an ongoing process used in Shell for more than 40 years to challenge executives’ perspectives on the future business environment. They are based on plausible assumptions and quantification and are designed to stretch management thinking and even to consider events that may only be remotely possible.

Original Post

Categories: Foreign media