Why Are Nuclear Plants Losing Money at an Astonishing Rate?

Energy Collective - 18 August 2016 - 10:00

Joe Romm recently wrote a piece for Climate Progress titled Nuclear Power Is Losing Money At An Astonishing Rate.

In that post Romm exaggerates the amount of support that the New York Zero Emissions Credit (ZEC) will provide, absolves the massive build-out of industrial-scale wind and solar of any responsibility for contributing to the situation, offers an incomplete analysis of the real causes and effects of unsustainably low wholesale market prices, and provides a slanted suggestion for implementing a solution.

How big is New York’s Zero Emission Credit?

Like many observers who do not like the idea of helping existing nuclear plants survive temporary market conditions, Romm accepts the very high end of the estimated range of costs for the New York ZEC. With a link to a Huff Post piece by an infamously antinuclear author named Karl Grossman, Romm describes the plan as a “$7.6 billion bailout of nuclear.”

That description overlooks the fact that the equation for the administratively-determined price of the ZECs includes a term that will automatically reduce the amount paid when the predicted wholesale market price of electricity rises above $39 per MWh. Unsurprisingly, given Grossman’s known antinuclear attitude, his article did not mention how wholesale electricity prices above $56 per MWh would result in zero-cost ZECs.
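
To make that mechanism concrete, here is a minimal sketch of how such an administratively determined credit behaves, assuming a simple dollar-for-dollar reduction above the $39/MWh threshold and an illustrative base credit of $17/MWh. The base credit value is an assumption chosen so the payment hits zero at $56/MWh; the actual New York formula is more detailed, and these are not the official tranche values.

```python
def zec_price(base_zec, forecast_price, threshold=39.0):
    """Simplified sketch of an administratively determined ZEC price ($/MWh).

    Assumes the payment shrinks dollar-for-dollar once the forecast wholesale
    price exceeds the threshold, reaching zero when the excess uses up the
    whole base credit. base_zec=17.0 below is illustrative, not official.
    """
    reduction = max(0.0, forecast_price - threshold)
    return max(0.0, base_zec - reduction)

for price in (35, 39, 45, 50, 56, 60):
    print(f"wholesale ${price}/MWh -> ZEC ${zec_price(17.0, price):.2f}/MWh")
```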

Romm should have known better than to trust a single source, especially one with a long-standing agenda of eliminating nuclear energy.

Should we blame renewables?

Romm also takes issue with the notion that unreliable energy system growth is causing market price problems for nuclear energy. He admits that wind, solar, biomass, geothermal and waste-to-energy are still receiving substantial direct payments and market mandates from governments at the local, state and federal level, but he claims that those subsidies are appropriate for emerging technologies that are still progressing down the cost curve. Besides, he says, those generous subsidies are scheduled to be phased out.

Romm is apparently banking on the fact that many readers don’t know that wind and solar subsidies have been in place since the early 1990s and have been “scheduled” to be phased out at least 5 times already.

Instead of blaming his favorite power systems, Romm says that we should blame “cheap natural gas.” He avoids acknowledging that the massive buildout of wind and solar enabled by the $25 billion that the Recovery Act has doled out to renewable energy programs has successfully increased the quantity of wind and solar electricity generated by an amount that has finally begun to be visible in EIA statistical reports.

Since weather-dependent sources, when available, displace mostly natural gas-fired electricity, the increase in renewable generation has contributed to the current gas market oversupply and low price situation. If the electricity produced by new wind turbines in Texas, Illinois and Iowa had come from burning natural gas, there would not be a gas glut threatening to overflow available storage reservoirs.

The glut is the reason prices are low; fracking makes gas abundant, but it does not lower the cost of extraction compared to conventional drilling.

Blame market design

Romm turns to Peter Fox-Penner for an explanation of the market challenges facing nuclear power plants. His published quote from that conversation is intriguing and offers an opportunity to find common ground.

While I agree that premature closure of safely operating existing nuclear is a terrible idea from the climate policy standpoint, he overlooks the fact that this consequence is neither “unintended” nor the “fault” of solar and wind. This is the very-much-intended result of the way electric markets were designed, and you can be sure this design was not formulated by wind and solar producers and is in no sense their fault.

I concur. The current market structure was purposely invented by market-manipulating entities like Enron to give an advantage to natural gas traders, build an alliance between natural gas and renewable energy and crowd nuclear energy out of the market in order to enable increasing sales for both gas and renewable power systems.

Price on carbon

Romm once again blames the nuclear industry and independent nuclear advocates for not sufficiently supporting the fatally flawed 2009 climate and energy bill. He says the bill would have put a price on carbon, but it would have done so through the same “cap and trade” mechanism that has proven incapable of actually addressing carbon emissions in example markets like Europe and California.

The politically determined amount of available credits for incumbent producers has invariably been set so high that those credits usually trade for a value that is too low to influence system purchase decisions. They are often so low that they don’t even influence operational choices between burning high-emission fuels like brown coal and somewhat lower-emission fuels like natural gas.

Romm is right that a price on carbon would help nuclear energy, from both established plants and new projects, compete, but that price needs to be predictable and sufficiently high before it will influence decision making. The rising fee and dividend approach advocated by the Citizens Climate Lobby and James Hansen has a much better chance of successfully reducing CO2 emissions.

Effective solutions

Nuclear energy is not inherently uncompetitive. Fuel costs are low and predictable, equipment is durable and often needs little maintenance or repair, life cycle emissions levels are extremely low, waste is compact and easily managed and paying people fair salaries for productive work is an economic benefit, not a disadvantage.

The established nuclear industry, however, has done a lousy job of controlling its costs and marketing the benefits of its product.

It needs to work more effectively to ensure that regulators do not ratchet up requirements, especially when they are imposed for the purpose of producing the right political “optics” with no measurable improvement in public safety. Achieving that condition will require the industry to take a more adversarial approach to regulators. Under the American system of jurisprudence, regulators and the regulated are not supposed to be friends and partners.

There is also a need to recognize that innovative, advanced reactor systems that incorporate lessons learned during the past 60 years of commercial nuclear power generation offer the opportunity for long lasting cost reductions. Many of the smaller designs offer the same kind of volume-based learning curves that have so effectively reduced the cost of weather-dependent collector systems like wind turbines and solar panels.

I agree with Romm on the subject of nuclear energy subsidies. They are unaffordable because nuclear is far too productive to remain a small and emerging technology whose subsidies can be lost in the weeds. At this point, that same statement can be made about wind and solar energy. Subsidies for those systems need to actually phase out this time or they will become ever more unaffordable.

They will also become an increasingly important obstacle to the real goal of cleaning up our electricity generating system and enabling it to grow rapidly to provide an ever larger share of our total energy needs.

Photo Credit: IAEA Imagebank via Flickr

The post Why are nuclear plants losing money at an astonishing rate? appeared first on Atomic Insights.


We Have Almost Certainly Blown the 1.5-Degree Global Warming Target

Energy Collective - 18 August 2016 - 9:00

The United Nations climate change conference held last year in Paris had the aim of tackling future climate change. After the deadlocks and weak measures that arose at previous meetings, such as Copenhagen in 2009, the Paris summit was different. The resulting Paris Agreement committed to:

Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognising that this would significantly reduce the risks and impacts of climate change.

The agreement was widely met with cautious optimism. Certainly, some of the media were pleased with the outcome while acknowledging the deal’s limitations.

Many climate scientists were pleased to see a more ambitious target being pursued, but what many people fail to realise is that actually staying within a 1.5℃ global warming limit is nigh on impossible.

There seems to be a strong disconnect between what the public and climate scientists think is achievable. The problem is not helped by the media’s apparent reluctance to treat it as a true crisis.

The 1.5℃ limit is nearly impossible

In 2015, we saw global average temperatures a little over 1℃ above pre-industrial levels, and 2016 will very likely be even hotter. In February and March of this year, temperatures were 1.38℃ above pre-industrial averages.

Admittedly, these are individual months and years with a strong El Niño influence (which makes global temperatures more likely to be warmer), but the point is we’re already well on track to reach 1.5℃ pretty soon.

So when will we actually reach 1.5℃ of global warming?


Timeline showing best current estimates of when global average temperatures will rise beyond 1.5℃ and 2℃ above pre-industrial levels. Boxes represent 90% confidence intervals; whiskers show the full range. Image via Andrew King.

On our current emissions trajectory we will likely reach 1.5℃ within the next couple of decades (2024 is our best estimate). The less ambitious 2℃ target would be surpassed not much later.

This means we probably have only about a decade before we break through the ambitious 1.5℃ global warming target agreed to by the world’s nations in Paris.

A University of Melbourne research group recently published these spiral graphs showing just how close we are getting to 1.5℃ warming. Realistically, we have very little time left to limit warming to 2℃, let alone 1.5℃.

This is especially true when you bear in mind that even if we stopped all greenhouse gas emissions right now, we would likely experience about another half-degree of warming as the oceans “catch up” with the atmosphere.

Parallels with climate change scepticism

The public seriously underestimates the level of consensus among climate scientists that human activities have caused the majority of global warming in recent history. Similarly, there appears to be a lack of public awareness about just how urgent the problem is.

Many people think we have plenty of time to act on climate change and that we can avoid the worst impacts by slowly and steadily reducing greenhouse gas emissions over the next few decades.

This is simply not the case. Rapid and drastic cuts to emissions are needed as soon as possible.

In conjunction, we must also urgently find ways to remove greenhouse gases already in the atmosphere. At present, this is not yet viable on a large scale.

Is 1.5℃ even enough to avoid “dangerous” climate change?

The 1.5℃ and 2℃ targets are designed to avoid the worst impacts of climate change. It’s certainly true that the more we warm the planet, the worse the impacts are likely to be. However, we are already experiencing dangerous consequences of climate change, with clear impacts on society and the environment.

For example, a recent study found that many of the excess deaths reported during the summer 2003 heatwave in Europe could be attributed to human-induced climate change.

Also, research has shown that the warm seas associated with the bleaching of the Great Barrier Reef in March 2016 would have been almost impossible without climate change.

Climate change is already increasing the frequency of extreme weather events, from heatwaves in Australia to heavy rainfall in Britain.

These events are just a taste of the effects of climate change. Worse is almost certainly set to come as we continue to warm the planet.

It’s highly unlikely we will achieve the targets set out in the Paris Agreement, but that doesn’t mean governments should give up. It is vital that we do as much as we can to limit global warming.

The more we do now, the less severe the impacts will be, regardless of targets. The simple take-home message is that immediate, drastic climate action will mean far fewer deaths and less environmental damage in the future.

By a Climate Extremes Research Fellow, University of Melbourne, and a Research Fellow in Climate and Water Resources, University of Melbourne.

This article has been cross-posted from The Conversation.

Photo: Pixabay



American Way of Financing Energy Efficiency Projects Could Lead to Breakthrough in Europe

Energy Collective - 18 August 2016 - 8:00

Retrofitting (Photo: Province of British Columbia)

A retrofit project of the National Health Service (NHS) in the UK is the first in Europe to sign up to a new energy efficiency accreditation scheme, imported from the United States. This Investor Ready Energy Efficiency (IREE) certification gives investors and financial institutions guarantees that a project is environmentally and financially sound. It could pave the way for a huge expansion of energy efficiency projects across Europe: “the IREE represents the beginning of the creation of a recognisable standard investment class.”

A joint retrofit project carried out by a consortium of three National Health Service (NHS) Trusts in Liverpool has become the first in Europe to receive the Investor Ready Energy Efficiency (IREE) certification.

Launched on 30 June 2016 by the Environmental Defense Fund’s Investor Confidence Project (ICP) Europe, the IREE certification for commercial and multifamily residential buildings is granted to projects that follow ICP’s framework, and thereby provide investors with more confidence in financial and environmental results. ICP Europe is funded by the European Commission’s Horizon 2020 programme.

Developed by the Carbon and Energy Fund, the £13 million NHS retrofit project is predicted to achieve annual savings of £1.85 million over the fifteen-year span of the energy performance contract. Even by the standard of the NHS, a UK leader in ambitious retrofitting projects, this represents financial and energy savings on an impressive scale. Set to be the first of many accredited by ICP in Europe, this project will contribute to paving the way towards an expansion of the market in energy efficiency renovation projects. The ICP accreditation aims at increasing the bankability of these projects through the medium of more streamlined transactions and increased reliability of projected energy savings for investors.

At the moment, the market in Europe is not big enough to go down the refinancing and securitisation route

The energy efficiency market already employs around 136,000 people in the UK, is worth more than £18 billion annually, and delivers exports valued at nearly £1.9 billion per year. The UK’s commercial retrofit market potential is estimated at a further £9.7 billion.

The project’s investor, Macquarie Group, stated: “Macquarie is pleased to support the NHS energy efficiency upgrade project in Liverpool. We look forward to financing future energy efficiency projects as the UK transitions to a clean, low-carbon economy.”

Opening up market potential

As it expands, ICP’s Investor Ready Energy Efficiency certification could point the way toward mass-scale financing of energy efficiency in the building sector – where €100 billion per year is needed to reach the EU’s 2020 energy efficiency target. The recently launched Investor Network has brought together investors with €1 billion in assets under management looking for energy efficiency opportunities.

Steven Fawkes, senior energy advisor to ICP Europe, sees this project as a significant first step. “We are working on certifying 18 projects and programmes in five European countries, ranging from retrofitting swimming pool dehumidifier systems in Portugal to the deep energy retrofit of a 2,000 m2 student accommodation complex in Germany.”

ICP Europe itself is an offshoot of the Investor Confidence Project in the US, launched five years ago by the Environmental Defense Fund, which has been accrediting an array of projects there, from an apartment block in California to a church in Connecticut that expects to save $44,000 a year in energy bills.

The smallest project that investors would consider viable for refinancing would be a £200 million one

Fawkes explains the longer term aims of ICP Europe: “We are building a standard asset class. For financial institutions the value is twofold. Firstly, there is a reduced due diligence cost (they can be confident that the project is developed to a high standard). Secondly, it will be helpful when aiming to aggregate projects and issue green bonds.” He explains that, at the moment, the market in Europe is not big enough to go down the refinancing and securitisation route.

When it comes to the potential for aggregation and refinancing, the requirement for ongoing measurement and verification is very important to prove that energy is being saved. However, according to Fawkes, the smallest project that investors would consider viable for refinancing would be a £200 million one. At least the IREE represents the “beginning of the creation of a recognisable standard investment class.”

EU collaboration

The Energy Efficiency Financial Institutions Group (EEFIG) was convened three years ago by the United Nations Environment Program and the European Commission to examine barriers to investment in energy efficiency. EEFIG is working with ICP Europe on a derisking project with two main functions: building a database of energy efficiency project performance so that investors can look at the performance of certain projects, and the development of a guide to standardised underwriting procedures for financial institutions.

Fawkes points out that “when the European Commission’s DG Energy publishes this guide next year, financial institutions will sign up to use it, giving added value to the IREE standards, which will be incorporated in it.”

Interested parties are invited to contribute to ICP Europe’s efforts through the Technical Forum and help make energy efficiency a global asset class by joining the ICP Europe Ally Network.




Is The Oil Production Efficiency Boom Coming To An End?

Energy Collective - 18 August 2016 - 7:00

“How much more productive can these new wells get?” I asked my host who had kindly invited me on a field trip to some of western Canada’s most prolific oil fields.

Looking down the aisle of bobbing pump jacks, seven in a row on one side of the immaculate gravel pad, the veteran oil executive replied, “We can get up to 1,000 barrels a day out of some of the new ones, but that’s not a limit; we’re improving the economics and productivity with each new well.”

Impressive I thought, subconsciously nodding my head in sync with the leading pump jack.

“Back when I was in field exploration,” I said sounding like a grey-haired guy, “we used to high-five if a new well put out 100 barrels a day.”

The stats certainly show that the industry’s ability to pull oil out of the ground from parts of North America has recently improved by an order of magnitude. In other words, ‘rig productivity’ – the amount of new oil production that an average rig can bring on in one month of drilling – has increased 10-fold in the last 6 years.

Both of us stood marvelling at the operation in front of us, tacitly thinking the same questions. What are the limits to this remarkable energy megatrend: 1,000? 2,000? 5,000 barrels per day per well?

“All this appears amazing,” I said, “like some sort of Moore’s Law for guys in hard hats and overalls.”

Moore’s Law – the observation that the number of transistors on a computer chip doubles about every two years – is the gold standard for benchmarking innovation. But at the same time I explained to my host that I’m always cautious about being duped by exuberant technology, trends and numbers. For example, we’ve all subscribed to “high-speed” Internet services that claim dozens of megabits per second but grind to a fraction of the advertised rate when half the neighborhood is binge-watching Netflix. “The notion of ‘rig productivity’ has to be taken with caution,” I noted. “We can’t assume that the best posted performance in the field is the norm for all wells.”

My thoughts turn to the U.S. Energy Information Administration (EIA), which every month publishes a report on rig productivity.

July data from the Eagle Ford play in Texas – one of the most prolific in North America – shows that rig productivity in the area is now over 1,000 B/d per rig, similar to the Canadian play area I’m visiting. The rate of innovation has been impressive; between 2007 and 2014, when oil prices were above $100/B, rig productivity was increasing by 110 B/d per rig, every year.


But a curious thing is noticeable at the end of 2014: Rig productivity suddenly kicked up in all the US plays. The Eagle Ford was particularly impressive, going from 550 to 1,100 B/d per rig. I did a double take. Really? A doubling every 18 months? That’s better than Moore’s law!
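
A quick computation of the implied compound annual growth rates, using only the figures quoted in this piece, puts the Moore’s Law comparison in perspective:

```python
# Implied compound annual growth rates from the figures quoted above.
rates = {
    "Moore's law (doubling every 2 years)": 2 ** (1 / 2) - 1,
    "Eagle Ford (550 -> 1,100 B/d per rig in ~18 months)": 2 ** (1 / 1.5) - 1,
    "Rig productivity (10-fold in 6 years)": 10 ** (1 / 6) - 1,
}
for label, rate in rates.items():
    print(f"{label}: ~{rate:.0%} per year")   # ~41%, ~59%, ~47%
```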

The well pad I’m standing on is a testament to ongoing innovation, but there is a statistical distortion at play. Starting in late 2014, the severe downturn in oil prices forced the industry to park three-quarters of their rigs and “high-grade” their inventory of prospects. Producers focused on only their best rocks, drilling with only the most efficient rigs. All the low productivity stuff was culled out of the statistical sampling, skewing the average productivity numbers much higher.


It’s only an estimate, but the average productivity in the Eagle Ford using a wider spectrum of rocks is probably at least 30% less, in the 700 B/d per rig range. Higher oil prices are needed to attract rigs to those lesser quality locations. The much-vaunted Permian only averages about 350 B/d per rig, despite showing 500 B/d per rig in the EIA data.

But top-end numbers like 700 B/d per rig should still be a cold-shower wake-up call to high-cost oil companies, and to the champions of rival energy systems trying to supplant oil. Notwithstanding the high-grading of the rig sample to premium “sweet spots”, the average productivity in the best North American plays is still improving by about 100 B/d per rig per year, with several years of running room left. It’s not exponential like Moore’s Law, but the pace of innovation is on par with many trends in the tech world.

“We’ve got some of the best acreage and best practices in Western Canada,” said the CEO, looking into the distance and panning his hand across the company’s leased land.

“Yeah, it’s amazing, by the numbers you’re as good as or better than some of the best in North America,” I replied. “But ‘best’ also reminds me to consider that other areas are not as good.”

By Peter Tertzakian for Oilprice.com

The post, Is The Oil Production Efficiency Boom Coming To An End?, was first published on OilPrice.com.


50(d): What Does It Mean for the Tax Credit Market?

Energy Collective - 18 August 2016 - 6:00

Recent guidance from the IRS will provide certainty to the tax credit market.

Last month, we wrote in SOURCE that after years of anticipation, a 50(d) income ruling would soon be released. Sure enough, the Internal Revenue Service (IRS) issued temporary regulations in the Federal Register on July 22. In its issuance, the IRS clarifies how the income associated with the tax credit in lease pass-through transactions is recognized and whether that income should be included in a partner’s outside basis calculations. It has broad implications for many market participants.

To take a step back, let’s look at the lease pass-through structure and understand how it has different capital accounting treatment for the investment tax credit (ITC) compared to the partnership flip.

Under the partnership flip structure, the regulations direct tax equity investors to deduct half of the value of the 30 percent ITC when calculating their outside basis in year 1. Solar tax equity investors utilize outside basis for the purposes of realizing the depreciation associated with the investment, and calculating a gain or loss upon exit from the partnership that owns the project after the recapture period. Reducing an investor’s outside basis therefore means reducing the investor’s ability to absorb losses and/or take a capital loss upon exit – both of which are benefits for tax equity investors.

In a lease pass-through structure, the ITC is instead passed through to the Master Tenant where it is then allocated to the partners. Section 50(d) requires the partner to include one-half the ITC ratably into income across the depreciable life of the asset (in this case, five years). The inclusion of one-half the ITC value into income results in greater taxable income for the partner.

But here’s the key. Under normal accounting treatment, any taxable income received increases the capital account. An increase to the capital account, you guessed it, allows a partner to absorb greater losses, offset taxable income, and enjoy a larger loss on exit (depending on the particular deal). In other words, for a tax-laden investor, 50(d) income is a good thing. The industry broadly followed this interpretation, making the lease pass-through structure quite popular despite its particular complexities.
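
A simplified numeric sketch, using a hypothetical $1 million eligible cost basis, illustrates the difference between the two structures described above. The dollar figures are illustrative only; real transactions involve many more terms, and none of this is tax advice.

```python
# Illustrative only: a hypothetical $1,000,000 eligible cost basis.
cost_basis = 1_000_000
itc = 0.30 * cost_basis                # $300,000 investment tax credit

# Partnership flip: half the ITC reduces outside basis in year 1.
flip_basis_reduction = itc / 2         # $150,000 less basis for absorbing losses

# Lease pass-through: no year-1 basis reduction; instead half the ITC is
# taken into income ratably over the 5-year depreciable life (section 50(d)).
annual_50d_income = (itc / 2) / 5      # $30,000 of extra taxable income per year

print(f"flip: year-1 outside basis reduction = ${flip_basis_reduction:,.0f}")
print(f"lease pass-through: 50(d) income = ${annual_50d_income:,.0f}/yr for 5 yrs")
```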

Then, almost two years ago, the IRS sensed peace in the kingdom and announced it would issue clarity on this subject. In December, news broke that guidance was pending, creating uncertainty in the tax credit market, mainly in the form of price bifurcation, as investors and syndicators priced these transactions differently depending on their views on how the IRS would ultimately interpret the rule, variations on who would wear the risk, and bets on when the guidance would in fact be released.

In its ruling, the IRS states that:

  • Investors are not entitled to an increase in their capital accounts under 50(d)
  • 50(d) income is a partner item, not a partnership item, and each partner in the lessee partnership is the taxpayer
  • 50(d) income does not increase a partner’s outside basis

Now that the industry has more clarity on IRS intent, it is our expectation that the tax credit market will find a new equilibrium for transactions moving forward. Moreover, additional certainty may attract new investors to the solar ITC space, as historic and other tax credit markets also adapt to these changes.

If this all sounds wonky, it is. Don’t worry; we are here to help. To learn more, contact our tax structured team at finance@solsystems.com with the subject line “Tax Equity”. We have placed tax equity into over 200MW of solar assets across the country, and can explain what the temporary regulations mean for investors.

This is an excerpt from the August edition of SOURCE: the Sol Project Finance Journal, a monthly electronic newsletter analyzing the solar industry’s latest trends based on our unique position in the solar financing space. To view the full Journal or subscribe, please e-mail pr@solsystems.com.

By Sara Rafalson

ABOUT SOL SYSTEMS

Sol Systems is a leading solar energy investment and development firm with an established reputation for integrity and reliability. The company has financed approximately 450MW of solar projects, and manages over $500 million in assets on behalf of insurance companies, utilities, banks, and Fortune 500 companies.

Sol Systems works with its corporate and institutional clients to develop customized energy procurement solutions, and to architect and deploy structured investments in the solar asset class with a dedicated team of investment professionals, lawyers, accountants, engineers, and project finance analysts.



News: ‘Yes Nukes’ in New York and Tennessee as Old Plants and New Get New Lease on Life

Energy Collective - 18 August 2016 - 5:00

Image courtesy of Tennessee Valley Authority.

Nuclear power generates about one-fifth of all electricity in the United States, but it’s had a rough 40 years or so. Between high upfront costs for installation, complicated permitting, rare but dramatic accidents, and general NIMBY opposition, development of new U.S. nuclear facilities all but halted in the last part of the twentieth century. But that might be changing. In the news this week: new nuclear projects in the Southeast, new nuke-supporting policy in New York, and small modular reactors unveiling some of their secrets. It’s a physics-filled news update from Advanced Energy Perspectives.

The Tennessee Valley Authority is doing final tests on Watts Bar 2, which, when it comes into commercial operation at summer’s end, will be the first new reactor in the United States since Watts Bar 1 came online in 1996. Last week brought a major milestone in testing: the Watts Bar 2 reactor reached 75% reactor power, then operators deliberately lowered the generation level to 30% before carrying out a deliberate and successful shutdown.

Watts Bar unit 2 actually began construction in 1973, with work halted for significant periods in the interim, both for safety concerns and economics. The latest round of construction kicked off again in 2007, and the reactor first came online in May. TVA is cutting no corners. The testing process is slow: ramping up the reactor bit by bit before beginning shutdown protocols. Watts Bar 2 also features the FLEX system developed by the nuclear industry in response to the Nuclear Regulatory Commission’s Fukushima task force. FLEX involves additional backup power and emergency equipment protecting against a major disruption threatening the cooling system.

In Georgia, more signs of the long-promised nuclear renaissance. As we reported last week, state regulators gave Georgia Power Co. permission to start laying groundwork for a new nuclear facility south of Columbus in rural Stewart County.

Public Service Commission staff had recommended putting off the decision until 2019 but, in light of what the Atlanta Business Chronicle characterized as “growing pressure from the federal government on states to reduce carbon emissions from coal-burning power plants coupled with the volatility of natural gas prices,” the Commission voted to let Georgia Power go ahead with preliminary site work and licensing now.

“If we’re going to close coal plants and add renewables, we have to add more base-load [capacity],” said Commissioner Tim Echols. “It has to be carbon-free nuclear power.”

New York certainly thinks so. Earlier this month the New York PSC approved a clean energy standard of 50% renewable energy by 2030 that includes guaranteed income for nuclear power plants. The subsidies for three aging upstate plants use a formula that Utility Dive outlines as “based on expected power costs and the social price on carbon” used by federal agencies. Basically, the PSC has decided to consider the positive externalities of nuclear energy – a reliable energy source that does not produce any emissions or pollutants, including greenhouse gases – at least while renewable energy capacity is built up. This is something that owners of existing nuclear plants, challenged by competition from low-priced natural gas, have been calling for, though mostly to no avail – until now.

Already this policy has borne fruit. Exelon has entered into an agreement to purchase the James A. FitzPatrick nuclear power plant located in Scriba, in upstate New York, which Entergy had planned to retire later this year or early next year if it couldn’t find a buyer. With the PSC’s decision to guarantee the plant’s ability to generate revenue, Exelon was ready to make a deal.

The $110 million price tag may turn out to be a screaming deal for Exelon. The plant does need to be refueled next year, which will cost money, but it’s nothing like the billions usually necessary to start up a nuclear plant. The Watts Bar 2 reactor, for instance, cost $4.7 billion to build, and that’s about half of what similar nuclear plants will cost in Georgia and South Carolina. But with New York’s commitment to keep its nukes going, that $110 million investment could pay off, at least through 2029.

Exelon’s CEO and President Chris Crane sure thinks so. “We are pleased to have reached an agreement for the continued operation of FitzPatrick,” Crane said in a statement. “We look forward to bringing FitzPatrick’s highly-skilled team of professionals into the Exelon Generation nuclear program, and to continue delivering to New York the environmental, economic and grid reliability benefits of this important energy asset.”

On the other end of the nuclear generation scale, small nuclear reactors are continuing to seek a spot in the market. TVA submitted the first-ever permit application to site a small modular reactor (SMR) near the Clinch River earlier this summer. Transatomic, one of the companies offering SMR designs, recently released a white paper detailing the design of their reactor. “This design is the result of years of open, clearly communicated scientific progress,” said Dr. Leslie Dewan, Transatomic’s CEO. “Our research has demonstrated many-fold increases in fuel efficiency over existing technologies, and we’re really excited about the next steps in our development process.”

As Penn State professor Edward Klevans wrote in an Op-Ed for the Pittsburgh Post-Gazette this week, “there is an overwhelming case for continued reliance on, and expansion of, America’s nuclear energy infrastructure.”



Energy-Related CO2 Emissions from Natural Gas Surpass Coal as Fuel Use Patterns Change

Energy Collective - 18 August 2016 - 4:00

Source: U.S. Energy Information Administration, Short-Term Energy Outlook (August 2016) and Monthly Energy Review

 

Energy-associated carbon dioxide (CO2) emissions from natural gas are expected to surpass those from coal for the first time since 1972. Even though natural gas is less carbon-intensive than coal, increases in natural gas consumption and decreases in coal consumption in the past decade have resulted in natural gas-related CO2 emissions surpassing those from coal. EIA’s latest Short-Term Energy Outlook projects energy-related CO2 emissions from natural gas to be 10% greater than those from coal in 2016.

From 1990 to about 2005, consumption of coal and natural gas in the United States was relatively similar, but their emissions were different. Coal is more carbon-intensive than natural gas. The consumption of natural gas results in about 52 million metric tons of CO2 for every quadrillion British thermal units (MMmtCO2/quad Btu), while coal’s carbon intensity is about 95 MMmtCO2/quad Btu, or about 82% higher than natural gas’s carbon intensity. Because coal has a higher carbon intensity, even in a year when consumption of coal and natural gas were nearly equal, such as 2005, energy-related CO2 emissions from coal were about 84% higher than those from natural gas.

In 2015, natural gas consumption was 81% higher than coal consumption, and their emissions were nearly equal. Both fuels were associated with about 1.5 billion metric tons of energy-related CO2 emissions in the United States in 2015.
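
The percentages above follow directly from multiplying consumption by carbon intensity. A minimal sketch using the stated intensities reproduces them; the 2015 consumption figures (in quadrillion Btu) are approximate values consistent with the article’s “81% higher” statement, not exact EIA data.

```python
# Carbon intensities from the article (MMmtCO2 per quadrillion Btu).
INTENSITY = {"natural gas": 52, "coal": 95}

def co2_mmmt(quads, fuel):
    """Million metric tons of CO2 from burning `quads` quad Btu of `fuel`."""
    return quads * INTENSITY[fuel]

# With equal consumption, coal's emissions exceed gas's by the intensity
# gap alone, ~82-83%:
print(co2_mmmt(22, "coal") / co2_mmmt(22, "natural gas") - 1)   # ~0.83

# Approximate 2015 levels: gas consumption ~81% higher, emissions ~equal,
# both around 1.5 billion metric tons (1,500 MMmt):
gas_2015, coal_2015 = 28.3, 15.6
print(gas_2015 / coal_2015 - 1)                                 # ~0.81
print(co2_mmmt(gas_2015, "natural gas"), co2_mmmt(coal_2015, "coal"))
```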

Source: U.S. Energy Information Administration, Short-Term Energy Outlook (August 2016) and Monthly Energy Review.

 

Annual carbon intensity rates in the United States have generally been decreasing since 2005. The U.S. total carbon intensity rate reflects the relative consumption of fuels and those fuels’ relative carbon intensities. Petroleum, at about 65 MMmtCO2/quad Btu, is less carbon-intensive than coal but more carbon-intensive than natural gas. Petroleum accounts for a larger share of U.S. energy-related CO2 emissions because of its high levels of consumption.

Another contributing factor to lower carbon intensity is increased consumption of fuels that produce no carbon dioxide, such as nuclear-powered electricity and renewable energy. As these fuels make up a larger share of U.S. energy consumption, the U.S. average carbon intensity declines. Although use of natural gas and petroleum have increased in recent years, the decline in coal consumption and increase in nonfossil fuel consumption have lowered U.S. total carbon intensity from 60 MMmtCO2/quad Btu in 2005 to 54 MMmtCO2/quad Btu in 2015.

Republished on August 17, 2016 at 9:30 a.m. to correct the units for carbon dioxide intensities.

Principal contributors: Eliza Goren, Perry Lindstrom



What the New NASA ‘Hot Spot’ Study Tells Us About Methane Leaks

Energy Collective - 18 August 2016 - 3:00

Look up in New Mexico and on most days you’ll see the unmistakable blue skies that make the Southwest so unique.

But there’s also something ominous hovering over the Four Corners that a naked eye can’t detect: a 2,500-square-mile cloud of methane, the highest concentration of the heat-trapping pollution anywhere in the United States. The Delaware-sized hot spot was first reported in a study two years ago.

At the time, researchers were confident the cloud was associated with fossil fuels, but unsure of the precise sources. Was it occurring naturally from the region’s coal beds or coming from a leaky oil and gas industry?

Now a team mainly funded by NASA and the National Oceanic and Atmospheric Administration has published a new paper in a top scientific journal that starts to provide answers. They find that many of the highest emitting sources are associated with the production, processing and distribution of oil and natural gas.

For this study, the authors flew over a roughly 1,200-square-mile portion of the San Juan Basin and found more than 250 high-emitting sites, including many oil and gas facilities. They also noted that a small portion of them, about 10%, were responsible for more than half of the studied emissions.

This does not come as a big surprise. In 2014, according to industry’s self-reported emissions data, oil and gas sources accounted for approximately 80% of methane pollution in the San Juan Basin. The findings are also very consistent with results from one of EDF’s methane studies in Texas’ Barnett Shale, which also found disproportionate emissions from super emitters.

Finding super emitters

Since “super emitters” can and do appear anywhere at any time, it is critical to be constantly on the lookout for them so they can be fixed.

The good news is because of the outsized contribution of a fraction of sites, the authors note that reducing these emissions can be done cost-effectively through improved detection practices.

That’s consistent with what we know from a vast and growing body of methane research. And it means we can make a big dent in the methane cloud.

$100 million in wasted natural gas a year

Of course, that leaking methane isn’t just climate pollution; it’s also the waste of a finite natural resource.

In New Mexico, the vast majority of natural gas production takes place on public and tribal lands – meaning that when gas is wasted, it represents a tremendous amount of lost revenue for state and tribal governments. The San Juan Basin is responsible for only 4% of total natural gas production in the country, but responsible for 17% of the nation’s overall natural gas waste on federal and tribal lands. In fact, nearly a third of all methane wasted on public and tribal lands occurs in New Mexico.

A report by ICF International found that venting, flaring and leaks from oil and gas sites on federal and tribal land in New Mexico, alone, effectively threw away $100 million worth of gas in 2013 – the worst record in the nation. That, in turn, represents more than $50 million in lost royalties to taxpayers over the last five years.
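
As a rough consistency check, applying the standard 12.5 percent federal onshore royalty rate to roughly $100 million per year of wasted gas lands in the same range as the quoted royalty loss. The royalty rate is an assumption here; the ICF report’s own method may differ.

```python
# Rough consistency check (royalty rate assumed; ICF's method may differ).
wasted_gas_per_year = 100e6     # dollars of gas vented, flared, or leaked yearly
federal_royalty_rate = 0.125    # standard onshore federal royalty rate
years = 5

lost_royalties = wasted_gas_per_year * federal_royalty_rate * years
print(f"~${lost_royalties / 1e6:.0f} million")   # ~$62M, in line with "more than $50M"
```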

Nearby, we see a different story

Capturing methane and preventing waste at oil and gas operations on federal lands is an opportunity to save a taxpayer-owned energy resource while at the same time tackling a major source of climate pollution.

Over the past year, the Bureau of Land Management, which oversees federal and tribal lands such as those in the San Juan Basin, has moved to limit methane emissions. The agency based its action in large part on experiences from New Mexico’s neighboring states, which have started to use modern practices and technologies to dramatically reduce this waste.

In 2013, San Juan Basin operators reported almost 220,000 metric tons of methane emissions. By comparison, Wyoming’s Upper Green River Basin has almost twice the natural gas production of the San Juan Basin, but only half the emissions.

Why the difference? Wyoming, like Colorado, has worked to put strong new rules in place to reduce emissions. And they are working.

Strong rules from BLM can do the same, but they must be completed and implemented quickly – to better protect the Land of Enchantment and federal and tribal lands across the U.S.

Image credit: NASA/JPL-Caltech/University of Michigan

By Ramon Alvarez, Ph.D., Senior Scientist



LEED Dynamic Plaque Performance: From Olympic Athletes to Buildings

Energy Collective - 18 August 2016 - 2:00

One of my favorite parts of the Olympics is the behind the scenes featurettes of different athletes. I love seeing home videos of their childhood athletic pursuits, hearing stories about how much time and effort went into honing their skill, and then finally witnessing their status as the high performing athletes they’ve become.

I’ve noticed that for much of the four-year gap between each Olympic games the efforts of these elite athletes go uncelebrated. For a brief period of 16 days, every four years, their performance is measured and scored and the world takes notice. But despite this gap in recognition, these athletes are measuring their performance every day.

Just like our athletes, we reward our high-performing buildings with Platinum, Gold, Silver and Certified designations. Our buildings are alive and implement initiatives year-round that keep their sustainability goals on track for record success. Both training as an elite Olympic athlete and getting started with the LEED Dynamic Plaque begin with measuring and scoring performance.

What is the period of time during which we tell the performance story of a building?

For projects engaging with the LEED Dynamic Plaque, the performance period is 365 days, and it is a key element of the platform that defines the boundaries of the Performance Score. Why does the performance period matter? It’s the window of time during which measured building data is scored. The score reflected in the platform represents a rolling annual average of the data provided. Critically, it is also the period for which GBCI reviews a performance score.

Project teams engaging with the platform input measured building data over the course of the 365-day performance period, and then submit that data as well as supporting documentation to GBCI for review. Let’s look at a sample project that signed up for a five-year subscription with the LEED Dynamic Plaque on June 1, 2016. Their first performance period would extend from 6/1/2016 – 5/31/17. Year 2 would extend from 6/1/17 – 5/31/18 and so on until 2021.
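
A minimal sketch of how those rolling windows line up for the sample project, using simple anniversary-date arithmetic (note that leap years make some calendar windows a day longer than 365 days; the dates below match the article’s example):

```python
from datetime import date, timedelta

def performance_periods(start: date, years: int = 5):
    """Yield (first_day, last_day) for each annual performance period,
    following the article's anniversary-to-anniversary example."""
    for i in range(years):
        first = start.replace(year=start.year + i)
        last = start.replace(year=start.year + i + 1) - timedelta(days=1)
        yield first, last

for first, last in performance_periods(date(2016, 6, 1)):
    print(first, "->", last)   # 2016-06-01 -> 2017-05-31, ... , ends 2021-05-31
```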

Project teams should make sure they understand their performance period and that their documentation covers all data reported within the timeframe, before submitting documentation to GBCI. One of the most common review comments that GBCI provides to project teams that are engaging with the platform to maintain certification is the lack of sufficient documentation to cover the entire performance period.

Are you ready to share your performance story?

By David Marcus



Infinite Solar?

Energy Collective - 17 August 2016 - 10:00

An infographic published earlier this year asks the question “Could the world be 100% solar?”. The question is answered in the affirmative by demonstrating that so much solar energy falls on the Earth’s surface, all energy needs could be met by covering just 500,000 km2 with solar PV. This represents an area a bit larger than Thailand, but still only ~0.3% of the total land surface of the planet. Given the space available in deserts in particular and the experience with solar PV in desert regions in places such as California and Nevada, the infographic argues that there are no specific hurdles to such an endeavor.
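
The infographic’s area figure can be checked with a back-of-envelope calculation. The insolation and efficiency inputs below are assumed round numbers, not values taken from the infographic:

```python
# Back-of-envelope check of the 500,000 km2 claim (all inputs assumed/rounded).
world_energy_ej = 500      # roughly current global primary energy use, EJ/yr
insolation_w_m2 = 200      # time-averaged solar flux reaching the surface
pv_efficiency = 0.20       # assumed module conversion efficiency

seconds_per_year = 365 * 24 * 3600
avg_power_w = world_energy_ej * 1e18 / seconds_per_year          # ~1.6e13 W
area_km2 = avg_power_w / (insolation_w_m2 * pv_efficiency) / 1e6
print(f"~{area_km2:,.0f} km2")   # ~400,000 km2, the same order as the infographic
```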

However, solar PV is both intermittent and only delivers electricity, which currently makes up just 20% of final energy use. Oil products make up the bulk of the remaining 80%. As I noted in a recent post, even in the Shell net-zero emissions scenario, electricity still makes up only 50% of final energy. In that case, what might a 100% solar world really look like and is it actually feasible beyond the simple numerical assessment?

The first task is of course to generate sufficient electricity, not just in terms of total gigawatt hours, but in gigawatt hours when and where it is needed. As solar is without question intermittent in a given location, this means building a global grid capable of distribution to the extent that any location can be supplied with sufficient electricity from a location that is in daylight at that time. In addition, the same system would likely need access to significant electricity storage, certainly on a scale that far eclipses even the largest pumped water storage currently available. Energy storage technologies such as batteries and molten salt (well suited to concentrated solar thermal) only operate on a very small scale today.

The Chinese State Grid has been busy building ultra-high voltage long distance transmission lines across China, and it has imagined a world linked by a global grid (Wall Street Journal, March 30, 2016, and Bloomberg, April 3, 2016) with a significant proportion of electricity needs generated by solar from the equator and wind from the Arctic.

But could this idea be expanded to a grid which supplies all the electricity needs of the world? A practical problem here is that for periods of the day at certain times of the year the entire North and South American continents are in complete darkness, which means that the grid connection would have to extend across the Atlantic or Pacific Oceans. While the cost of a solar PV cell may be pennies in this world, the cost of deploying electricity from solar as a global 24/7 energy service could be considerable. The cost of the cells themselves may not even feature.

But as noted above, electricity only gets you part of the way there, albeit a substantial part. Different forms of energy will be needed for a variety of processes and services which are unlikely to run on direct or stored electricity, even by the end of this century. Examples are:

  • Shipping currently runs on hydrocarbon fuels, although large military vessels have their own nuclear reactors.
  • Aviation requires kerosene, with stored electricity a very unlikely alternative. The fuel-to-weight ratio of electro-chemical (battery) storage, even given advances in battery technology, makes this a distant option. Although a small electric plane carrying one person for a 30-minute flight has been tested, extending this to an A380 flying for 14 hours would require battery technology that doesn’t currently exist. Still, some short haul commuter aircraft might become electric.
  • While electricity may be suitable for many modes of road transport, it may not be practical for heavy goods transport and large scale construction equipment. Much will depend on the pace and scope of battery development.
  • Heavy industry requires considerable energy input, such as from furnaces powered by coal and natural gas. These reach the very high temperatures necessary for processes such as chemical conversion, making glass, converting limestone to cement and refining ores to metals. Economy of scale is also critical, so delivering very large amounts of energy into a relatively small space is important. In the case of the metallurgical industries, carbon (usually from coal) is also needed as a reducing agent to convert the ore to a refined metal. Electrification will not be a solution in all cases.

All the above argues for another energy delivery mechanism, potentially helping with (or even solving) the storage issue, offering high temperatures for industrial processes and the necessary energy density for transport. The best candidate appears to be hydrogen, which could be made by electrolysis of water in our solar world (although today it is made much more efficiently from natural gas, and the resulting carbon dioxide can be geologically stored – an end-to-end process currently in service for Shell in Canada). Hydrogen can be transported by pipeline over long distances, stored for a period and combusted directly. Hydrogen could also feature within the domestic utility system, replacing natural gas in pipelines (where suitable) and being used for heating in particular. This may be a more cost-effective route than building sufficient generating capacity to heat homes with electricity on the coldest winter days. It is even possible to use hydrogen as the reducing agent in metallurgical processes instead of carbon, although the process to do so still only exists at laboratory scale.

But the scale of a global hydrogen industry to support the solar world would far exceed the global Liquefied Natural Gas (LNG) industry we have today. That industry includes around 300 million tonnes per annum of liquefaction capacity and some 400 LNG tankers. That amounts to about 15 EJ of final energy, compared to the current global primary energy demand of 500 EJ. In a 1000 EJ world that we might see in 2100, a role for hydrogen as an energy carrier that reached 100 EJ would imply an industry that was seven times the size of the current LNG system. But hydrogen has 2-3 times the energy content of natural gas per unit mass, and liquid hydrogen is one sixth the density of LNG (important for ships), so a very different looking industry would emerge. Nevertheless, the scale would be substantial.
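
Converting those energy figures into tonnage makes the scale tangible. The heating values below are assumed typical figures (hydrogen ~120 MJ/kg versus ~50 MJ/kg for natural gas), not numbers given in the text:

```python
# Tonnage implied by a 100 EJ/yr hydrogen trade (heating values assumed).
H2_MJ_PER_KG = 120      # lower heating value of hydrogen, ~2.4x natural gas's ~50
LNG_EJ = 15             # article: ~300 Mtpa of LNG capacity ~ 15 EJ of final energy
H2_TRADE_EJ = 100       # the hypothesized hydrogen carrier role in a 1000 EJ world

h2_megatonnes = H2_TRADE_EJ * 1e12 / H2_MJ_PER_KG / 1e9   # EJ -> MJ -> kg -> Mt
print(f"~{h2_megatonnes:,.0f} Mt/yr of hydrogen")          # ~830 Mt/yr
print(f"~{H2_TRADE_EJ / LNG_EJ:.1f}x today's LNG system in energy terms")
```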

Finally, but importantly, there are the things that we use, from plastic water bottles to the Tesla Model S. Everything has carbon somewhere in the supply chain or in the product itself. There is simply no escaping this. The source of carbon in plastics, in the components in a Tesla and in the carbon fibre panels in a Boeing 787 is crude oil (and sometimes natural gas). So our infinite solar world needs a source of carbon, and on a very large scale. This could still come from crude oil, but if one objective of the solar world is to contain that genie, then an alternative would be required. Biomass is one, and a bioplastics industry already exists. In 2015 it was 1-2 million tonnes per annum, compared to ~350 million tonnes for the traditional plastics industry.

Another source of carbon could be carbon dioxide removed directly from the atmosphere or sourced from industries such as cement manufacture. This could be combined with hydrogen and lots of energy to make synthesis gas (CO + H2), which can be a precursor for the chemical industry or an ongoing liquid fuels industry for sectors such as aviation. Synthesis gas is manufactured today on a large scale from natural gas in Qatar and then converted to liquid fuels in the Shell Pearl Gas to Liquids facility. Atmospheric extraction of carbon dioxide is feasible but remains a pilot technology today, although some companies are looking at developing it further.

The solar world may be feasible as this century progresses, but it is far from the simple solution that it is often portrayed as. Vast new industries would need to emerge to support it and each of these would take time to develop. The LNG industry first started in the early 1960s and is now a major part of the global economy, but still only carries a small fraction of global energy needs.

The new Shell publication, A Better Life with a Healthy Planet: Pathways to Net Zero Emissions, shows that in 2100 solar could be a 300 EJ technology, compared to a 2.5 EJ energy source today. This is in a world with primary energy demand of 1000 EJ.

Scenarios are part of an ongoing process used in Shell for more than 40 years to challenge executives’ perspectives on the future business environment. They are based on plausible assumptions and quantification and are designed to stretch management thinking and even to consider events that may only be remotely possible.



Why Oil Companies Must Look Beyond Oil To Survive

Energy Collective - 17 August 2016 - 9:00

Persistently low oil prices have had a devastating effect on the economies of all major oil producers/exporters who are accustomed to a price regime of over $100/b. The lifting of sanctions on Iran and its ability to quickly ramp up to pre-sanction (2012) levels of production and exports has made the market even more liquid and exerted downward pressure on oil prices.

Economic survival and grabbing market share – self-destructive

When oil prices suddenly collapsed, the major oil producers and exporters found themselves in a challenging situation, as falling oil revenues were no longer sufficient to balance government budgets. Finding it difficult to keep their economies growing at the desired pace, they had to take some unpopular measures. Austerity measures, downsizing, delays to some major projects, removal of energy subsidies, and the draining of sovereign wealth funds are some of the many immediate measures that oil producing/exporting countries are undertaking to cope with falling oil revenues. The question is how long they can survive if such a situation persists over an extended period of time.

OPEC and Non-OPEC need to collaborate

Oil prices are likely to remain below $50/b for at least a year or so, unless all stakeholders, including OPEC and major non-OPEC producers, cooperate on a production freeze/cut. This is a difficult task and even harder to implement, because all oil producers/exporters are in a catch-22 situation. Almost all oil producing/exporting countries face a budget deficit due to deteriorating oil revenues. Each producer is trying to produce/export more by offering discounts to grab market share and lift oil revenues to narrow its budget deficit. For example, the headline “Saudi oil output sets record despite global glut” signals that a silent war is going on among oil producers/exporters to produce and sell more in an effort to sustain/revive their sluggish economies. Their individual actions continue to push up the already overflowing inventories and further exert downward pressure on oil prices.

Oil Demand and Regulatory Reforms

On the global oil demand front, the international agencies’ forecasts consistently point to strong oil demand growth in Asia, linked particularly with China and India. For example, BP and the IEA respectively predicted that non-OECD Asian oil demand is likely to increase by 15 and 11 million bpd between 2015 and 2035.

It is quite possible that these forecasts will not materialize as predicted, due to the many ongoing initiatives in both countries. Such optimistic forecasts may lead to overinvestment in the upstream sector, further unbalancing the oil market equilibrium in the medium to long term.

In an effort to curb pollution, a number of regulatory and legislative initiatives are under way in China and India. For example, total vehicle sales in China grew by 4.7 percent in 2015 to 24.6 million, down from the 6.9 percent sales growth seen in 2014. This is partly attributable to weaker GDP growth, but mostly due to certain initiatives such as quotas imposed by the Chinese government in cities such as Beijing and Shanghai, where aspiring car owners must enter a ballot to get a license plate. Other measures include alternate-day driving restrictions and progressively tightening average fuel consumption standards.

Rapid penetration of electric vehicles in China is yet another factor that could dent projected oil demand growth. India's oil demand is still expected to grow as vehicle ownership increases significantly, but Indian courts have handed down rulings aimed at controlling air pollution, particularly in urban areas. In 2015, India's Supreme Court banned the sale of luxury diesel cars in New Delhi, and last month the National Green Tribunal, a special environmental court, directed the government to ban all diesel vehicles in the capital that are more than 10 years old.

Structural changes in auto-industry

Past periods of high oil prices, together with environmental pressures, have motivated the auto industry to move away from more than a century of internal combustion engine (ICE) dominance toward electric vehicles (EVs) and fuel cell vehicles (FCVs). What does this mean for oil companies? Do they need a new long-term strategy to remain successful, or can they live with the status quo? Ignoring such a threat and doing nothing may undermine their long-term objectives. One should not forget that more than 72 percent of oil demand comes from the transport sector, and over 80 percent of that is linked to road transport. Speedy penetration of EVs will displace sizeable oil demand in the decades to come and could be detrimental to the oil industry.

Until recently, the oil industry paid little attention to what was happening in the auto industry. But recent announcements by almost all ICE car manufacturers of plans to move from ICEs to EVs, with some planning to stop manufacturing ICEs entirely beyond 2050, should be alarming and eye-opening for the oil industry.

Volkswagen and Audi are aiming for EVs to make up 25 percent or more of sales by 2025, Mercedes is about to unveil an entire fleet of electric vehicles, and other automakers, including Hyundai, BMW, GM (with the Chevrolet Bolt), and Tesla, have unveiled big plans of their own.

In addition, the introduction of semi- and fully autonomous cars, and of drones to replace domestic delivery, will surely have a substantial impact on global oil demand. What does this paradigm shift mean for oil companies? Rapid penetration of EVs will displace sizeable oil demand in road transport, which accounts for the major share of oil consumed. In a recently published article, "Wake up call for oil companies: electric vehicles will deflate oil demand," the author and Andreas de Vries predicted that EV penetration will displace 13.8 million bpd by 2040 in the reference case, and as much as 39.5 million bpd in the high case.

Is the current sub-$50/b oil price regime part of a strategy by oil and gas companies to discourage the speedy development of the EV industry and defend the oil business from disintegration? Or is it simply the result of technological innovation and of self-destructive, market-share-grabbing policies of individual survival rather than industry protection? Some smart companies have already broadened their old philosophy of being purely an oil and gas business into an energy business, including the development of renewables. I think sooner or later most oil companies will need to develop a new strategy before it is too late; those sticking with the philosophy of "old is gold" could find themselves losing out.

By Salman Ghouri for Oilprice.com.

The post, Why Oil Companies Must Look Beyond Oil To Survive, was first published on OilPrice.com.

Kategorier: Udenlandske medier

Six Simple Policies that Can Give Corporate Purchasers the Advanced Energy They Want, and 11 States that Would Benefit Most from Adopting Them

Energy Collective - 17. august 2016 - 8:00

It’s no mystery that companies want advanced energy—they’re announcing major projects, making large purchases, and setting public goals. Even for companies without a specific target, advanced energy presents an attractive option to control and lower energy costs. Unsurprisingly, companies are pursuing advanced energy in growing numbers, with a record 3,100 MW of wind power purchases in 2015 signed by corporate customers—double the previous year.

But as companies sign power purchase agreements, build rooftop solar installations, install energy storage solutions, and develop fuel cell facilities, impressive national growth trends obscure the important fact that, in many states, the options to pursue such projects are limited at best. To identify how and where the list of purchasing options could be expanded, Advanced Energy Economy Institute commissioned Meister Consultants Group (MCG) to consider opportunities to increase corporate access to advanced energy through policy changes at the state level. What MCG found is six policies that would give corporate purchasers the renewable energy they are looking for, and 11 states that could reap the benefits of the advanced energy development that would result.

The report, Opportunities to Increase Corporate Access to Advanced Energy: A National Brief, looks at six policy options, identifying states with the largest corporate demand and strongest renewable resources in which the policies could expand options for corporate consumers. After considering states' regulatory and political environments, the report names the top five states for each policy on the basis of their potential to increase corporate access to renewable energy. Across the six policies, 11 states made a top-five list: Alabama, California, Florida, Georgia, Indiana, Kentucky, Michigan, Minnesota, North Carolina, Ohio, and Texas. Applicable policy options for each state are shown in the table below.

The report breaks down policy options into two major categories according to the types of access they enable: large-scale offsite purchases, and smaller onsite facilities.

Three policies outlined in the report enable companies to purchase electricity from large-scale renewable energy projects, opening up purchasing options generally not available to companies in states that do not allow electric choice. Specifically, Utility Renewable Energy Tariffs give companies the option to opt into a portfolio of renewable energy competitively sourced by their utility, “Back-to-back” Utility PPAs provide a mechanism for a customer to contract with a renewable energy developer with the utility acting as an intermediary, and Direct Access Tariffs allow certain customers in traditionally regulated markets to choose their electricity source. The states that could most benefit from these policies were distinguished by their large commercial and industrial sector and strong resource potential.

The three remaining policies allow companies to access energy from distributed energy resources. While many states around the country have policies in place that support onsite or distributed advanced energy, not all of these policies are structured to enable the participation of larger corporate users. The report identifies three policies by which states could expand corporate access to distributed energy resources: raising system size limits for programs that credit distributed generation, such as net metering; allowing third-party ownership of onsite generation systems; and allowing virtual or aggregated metering, so that companies can benefit from distributed energy even when their needs are not met by a single onsite system at a single building. The top five states for each of these policies were identified by their strong commercial and industrial sector load at facilities capable of hosting onsite resources, and by their strong solar potential, since distributed generation is currently dominated by solar PV.

In a report focused on how and where, it’s easy to lose track of why, but for states interested in attracting and retaining top corporations the motivation is clear. Commenting on release of the report, Rob Bernard, chief environmental strategist at Microsoft, summed it up by saying, “Expanding renewable energy is a top priority for Microsoft, and we’re committed to using more clean energy every year. That requires greater availability of renewable energy in the markets where we operate.” Other companies agree. Patrick Flynn, director of sustainability for Salesforce, said his company has “made a corporate commitment to power 100 percent of our operations with renewable energy,” while Steven Center, vice president of the environmental business development office at American Honda Motor Co., Inc., called Honda’s commitment to renewable energy “a driving force for the future.”

States that ignore this driving force do so to their own detriment, ignoring not only corporate needs but a huge opportunity to expand advanced energy markets. To put the opportunity of policy action (or opportunity cost of inaction) into perspective, if half of electricity demand from commercial and industrial customers nationally were met by renewable energy, this would drive development of nearly 450 gigawatts (GW) of renewable energy—more than double current capacity nationwide, and enough to power over 100 million houses. Those states that lead the charge have the best shot at hosting new advanced energy development and attracting more happy corporate citizens. Sounds to us like an easy policy choice.

Download the report, “Opportunities to Increase Corporate Access to Advanced Energy.”

Original Post

Kategorier: Udenlandske medier

Hydrogen Is Working and It’s Much Cheaper Than We Thought

Energy Collective - 17. august 2016 - 7:00

The technology that produces hydrogen using renewable electricity has already passed crucial regulatory tests for grid balancing in a commercial environment, despite what I said here a month ago.

For over 30 years the prophets of green energy have been promoting the idea that the “hydrogen age” is just around the corner. The gas is abundant in the form of water, molecules of which possess two hydrogen atoms for every oxygen atom.

Making it from water using electrolysis releases only oxygen and no pollutants. It can then be burnt in any suitable boiler, cooker or vehicle and used in fuel cells. All we have to do is get it to the right place at the right time at the right price.

The problem has always been the right price, which provides the market incentive for investment in the necessary infrastructure.

A month ago I wrote a piece on a proposal to convert the UK’s gas grid to hydrogen. The reports I covered judged that the most likely route to creating the hydrogen was through the steam reforming of methane. This is not a climate friendly way of doing it, although it is currently by far the most common.

In a low carbon future, producing hydrogen this way in the required quantities would be unlikely without the ability to capture the carbon released by this process and store it underground, a relatively unproven and expensive process dubbed Carbon Capture and Storage.

I had compared in my article the cost of steam reforming with CCS with the cost of producing hydrogen by the electrolysis of water using wind or solar power. My source for the latter was an apparently reliable one: the Energy Institute at University College London, which produced a report in April last year, authored by Samuel L Weeks, about using hydrogen as a fuel source in internal combustion engines. It states: "Hydrogen produced by electrolysis of water is extremely expensive, around US$1500/kWh [AU$1959/kWh]."

The editor of The Ecologist magazine, Oliver Tickell, pulled me up on this, observing that it struck him as being way too expensive. I tried to get Professor Weeks and the UCL Energy Institute to give me the source for the $1500 figure but so far have not had a response.

So instead I turned to a company that is already making hydrogen from renewable electricity for grid balancing and fuel cell powered cars: ITM Power. They provided me with another professor, Marcus Newborough, who is their development director. He gave me a much lower figure.

Much, much lower.

He said: “We are currently selling high purity hydrogen at our refuelling stations for fuel cell cars at £10/kg of hydrogen. Each kilogram contains 39.4kWh of energy, so that’s about 25 pence/kWh or $0.33/kWh. The ambition is to decrease the $/kWh value as more stations are manufactured and more FC cars are in circulation. So yes the $1500/kWh number looks absurd to us.”

Indeed it does. It is 4545 times larger, if we are comparing like with like.

And I apologise for not checking more thoroughly.

And I’m still mighty curious as to why UCL Energy Institute got it so wrong.
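For readers who want to check the comparison themselves, the arithmetic takes only a few lines of Python. The £10/kg price and 39.4 kWh/kg energy content are Professor Newborough's figures; the exchange rate is my assumption:

```python
# A quick check of the hydrogen price arithmetic, assuming a mid-2016
# exchange rate of about 1.30 USD/GBP (my assumption, not from the article).

price_per_kg_gbp = 10.0       # ITM pump price, GBP per kg of hydrogen
energy_per_kg_kwh = 39.4      # energy content of hydrogen, kWh per kg
gbp_to_usd = 1.30             # assumed exchange rate

price_per_kwh_gbp = price_per_kg_gbp / energy_per_kg_kwh
price_per_kwh_usd = price_per_kwh_gbp * gbp_to_usd

print(f"{price_per_kwh_gbp * 100:.1f} p/kWh")    # ~25.4 p/kWh
print(f"${price_per_kwh_usd:.2f}/kWh")           # ~$0.33/kWh
print(f"UCL figure is {1500 / price_per_kwh_usd:.0f}x higher")  # ~4546x
```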

Not only is ITM using the gas for hydrogen car filling stations, a chain of which it is opening in the UK (on a full tank of hydrogen a fuel cell car can drive up to 300 miles); it is also injecting hydrogen into the gas grid.

Power-to-gas

The process is called power-to-gas (P2G), and it is useful when more renewable electricity is being produced than demand can absorb at that moment. Instead of going to waste, the surplus can be used to produce hydrogen as a form of energy storage, to be used when required.

Professor Newborough said, “The power-to-gas approach is a form of energy storage and (in the UK) there are various assessments and discussions ongoing [through organisations such as BEIS (the new UK government department dealing with energy and industry), OFGEM (the British energy regulator), UK National Grid, DG Energy in Brussels (the European Commission’s department dealing with energy) and The European Association for Storage of Energy] but no conclusive economic framework yet for energy storage to operate within.”

He said P2G was particularly advantageous for its following abilities:

  • to respond to an instruction from the grid operator to charge up or absorb electricity
  • to hold on to the stored energy for a significant period without incurring energy losses
  • to discharge energy on demand at a desired rate
  • to be scaled up in number or capacity as we head towards a much more renewable electricity system

“P2G is part of this alongside batteries, pumped storage, etcetera,” he said. “Fundamentally the economic benefit is greatest for those technologies that possess the operational advantages of being able to respond very rapidly and/or hold onto the energy for a long period and/or discharge energy at a controllable rate across a very long period. Now power-to-gas is particularly advantageous in each of these respects.”

ITM has a pilot P2G system operational in Frankfurt with 12 other companies that together form the Thüga group.

At the end of 2013, this plant injected hydrogen for the first time into the Frankfurt gas distribution network. It therefore became the first plant to inject electrolytic generated hydrogen into the German gas distribution network, and possibly anywhere in the world. Final acceptance of the plant was achieved at the end of March 2014.

Overall efficiency is said to be over 70 per cent and the plant is now participating in Germany’s secondary control (grid balancing) market.

The conditions for being allowed to do this are extremely stringent. Systems have to respond in under one second when they receive a command to increase to maximum power or decrease to zero power to demonstrate that they are suitable for frequency regulation. The energy is discharged as hydrogen and should be available for as long as required.

The Frankfurt system has been shown to do this and can react to variable loads in the network.

Work is ongoing to see how the plant can be integrated into an increasingly intelligent future energy system.

“For the duration of the demonstration, we want to integrate the plant so that it actively contributes to compensating for the differences between renewable energy generation and power consumption,” Thüga chief executive Michael Riechel said.

The regulatory framework is playing catch-up

Professor Newborough told me that the payment levels for providing such services have yet to emerge.

In the UK, the national grid is introducing an Enhanced Frequency Response service to pay energy storage technology operators to provide sub-second response.

“ITM has already pre-qualified to provide such a service,” he said.

They are also introducing a Demand Turn Up service, which will pay operators £60/MWh (AU$102/MWh) for operating overnight and on summer afternoons to absorb excess wind and solar power.

“Clearly the economics of P2G are a function of such balancing services payments from the grid operator and the electricity tariff,” he said, “but in addition P2G offers a greening agent to the gas grid operator in the form of injecting hydrogen at low concentrations into natural gas.

“So the economics are also a function of the value placed on greening up the gas grid. By analogy we have seen in recent years in France, Germany and the UK, feed-in tariffs for injecting bio-methane into the gas grid as a greening agent and these have been up to four times the value of a kWh of natural gas.

“The economic case therefore depends on a combination of value propositions and costs – providing services to the electricity grid, the electricity tariff paid, the value of green gas for the gas grid and the capital cost of the plant. In this context it is not possible to state firm figures at this time, but equally it is important to state the underpinning factors as described above.”

It was at this point in our conversation that he gave me the price at which the company is currently selling high purity hydrogen at its fuel cell car refuelling stations.

Advantages of hydrogen over batteries

A report on energy storage by McKinsey and Co last year found that converting variable renewable electricity this way could absorb nearly all excess renewable energy in a future scenario with a high installed capacity of renewable electricity generation.

Reusing this stored energy in the gas grid, for transport or in industry, it said, would provide a valuable contribution to decarbonising these sectors. The European potential, in 2050, of this value would be “in the hundreds of gigawatts”.

That’s massive.

This future scenario, in which countries rely on renewables for much of their electricity, is likely to become common.

The McKinsey report contrasts the use of hydrogen with the use of batteries, which it calls power-to-power or P2P because it's electricity rather than gas that comes out.

In this situation hydrogen scores better as a storage medium, because batteries can be either empty (in which case they cannot supply the demand) or full (in which case they cannot be charged even if the generator is generating). By contrast, hydrogen can continue to be pumped into the grid or into vehicles, and the limiting factor instead is local demand, or the distance from the generator to that demand. This is shown in the following diagram:

How low energy storage capacity is a limiting factor for the use of batteries.
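A minimal sketch can make the contrast concrete. The numbers below are entirely hypothetical, chosen only to illustrate why a battery's fixed storage capacity caps how much surplus it can absorb, while a P2G plant is limited by electrolyser size (ignored here) rather than by storage; the ~70 per cent efficiency is the overall figure reported for the Frankfurt plant:

```python
# Hypothetical hourly surplus of renewable output, in MWh.
surplus = [5, 8, 6, 7, 9, 4]

battery_cap = 10.0            # MWh of usable storage (assumed)
battery_level = 0.0
battery_absorbed = 0.0
p2g_absorbed = 0.0
p2g_efficiency = 0.7          # ~70% overall, as reported for Frankfurt

for s in surplus:
    # The battery can only take what fits below its capacity ceiling.
    take = min(s, battery_cap - battery_level)
    battery_level += take
    battery_absorbed += take
    # P2G keeps converting surplus to hydrogen; no storage ceiling applies.
    p2g_absorbed += s * p2g_efficiency

print(f"Battery absorbed {battery_absorbed:.0f} of {sum(surplus)} MWh")  # 10 of 39
print(f"P2G stored {p2g_absorbed:.1f} MWh as hydrogen")                  # 27.3
```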

Nevertheless the McKinsey report warns that current regulations lag behind the potential of these technologies. Reviewing them is the key to unlocking this enormous opportunity.

So it now seems that the most likely route to creating the hydrogen that goes into our gas grids could be from electrolysis using renewables after all.

Yet, like many cutting-edge low carbon technologies, it’s early days. The Germans are pioneering this method as part of their transition strategy. It’s one part of the picture.

With the UK Met Office this week saying that we have already reached a 1.38°C temperature rise since the beginning of the industrial revolution, and the Paris Agreement aspiring to keep that rise to 1.5°C, the task of mainstreaming these technologies becomes even more urgent.

This article is republished from The Fifth Estate. Originally published on 9 August.

David Thorpe is the author of:

Original Post

Kategorier: Udenlandske medier

Shale Gas Production Drives World Natural Gas Production Growth

Energy Collective - 17. august 2016 - 6:00

Source: U.S. Energy Information Administration, International Energy Outlook 2016 and Annual Energy Outlook 2016

 

In the U.S. Energy Information Administration’s International Energy Outlook 2016 (IEO2016) and Annual Energy Outlook 2016 (AEO2016), natural gas production worldwide is projected to increase from 342 billion cubic feet per day (Bcf/d) in 2015 to 554 Bcf/d by 2040. The largest component of this growth is natural gas production from shale resources, which grows from 42 Bcf/d in 2015 to 168 Bcf/d by 2040. Shale gas is expected to account for 30% of world natural gas production by the end of the forecast period.
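A quick cross-check of those figures, taken directly from the paragraph above, confirms the 30% share and shows how much of the overall growth shale supplies:

```python
# Cross-checking the IEO2016 figures quoted above.
world_2015, world_2040 = 342.0, 554.0    # world natural gas, Bcf/d
shale_2015, shale_2040 = 42.0, 168.0     # shale gas, Bcf/d

share_2040 = shale_2040 / world_2040
growth_share = (shale_2040 - shale_2015) / (world_2040 - world_2015)

print(f"Shale share of world production in 2040: {share_2040:.0%}")   # 30%
print(f"Shale's share of 2015-2040 growth: {growth_share:.0%}")       # ~59%
```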

Although currently only four countries—the United States, Canada, China, and Argentina—have commercial shale gas production, technological improvements over the forecast period are expected to encourage development of shale resources in other countries, primarily in Mexico and Algeria. Together, these six countries are projected to account for 70% of global shale production by 2040.

In the United States, shale gas accounted for more than half of natural gas production in 2015 and is projected to more than double, from 37 Bcf/d in 2015 to 79 Bcf/d by 2040, when it makes up 70% of total U.S. natural gas production in the AEO2016 Reference case.

Several AEO2016 side cases illustrate the effect of technological improvements on cost and productivity. Shale gas production in 2040 is projected to be 50% higher under the High Oil and Gas Resources and Technology case, reaching 112 Bcf/d, while in the Low Oil and Gas Resources and Technology case, production is projected to be 50% lower than the Reference case, reaching 41 Bcf/d.

Canada has been producing shale gas since 2008, reaching 4.1 Bcf/d in 2015. Shale gas production in Canada is projected to continue increasing and to account for almost 30% of Canada’s total natural gas production by 2040.

China has been among the first countries outside of North America to develop shale resources. In the past five years, China has drilled more than 600 shale gas wells and produced 0.5 Bcf/d of shale gas as of 2015. Shale gas is projected to account for more than 40% of the country’s total natural gas production by 2040, which would make China the second-largest shale gas producer in the world after the United States.

Argentina’s commercial shale gas production was just 0.07 Bcf/d at the end of 2015, but foreign investment in shale gas production is increasing. Pipeline infrastructure in Argentina is adequate to support current levels of shale gas production, but it will need to be expanded as production grows. Current shortages of specialized rigs and fracturing equipment are expected to be resolved, and shale production is projected to account for almost 75% of Argentina’s total natural gas production by 2040.

Algeria’s production of both oil and natural gas has declined over the past decade, which prompted the government to begin revising investment laws that stipulate preferential treatment for national oil companies in favor of collaboration with international companies to develop shale resources. Algeria has begun a pilot shale gas well project and developed a 20-year investment plan to produce shale gas commercially by 2020. Algerian shale production is projected to account for one-third of the country’s total natural gas production by 2040.

Mexico is expected to gradually develop its shale resource basins after the recent opening of the upstream sector to foreign investors. At present, Mexico is expanding its pipeline capacity to import low-priced natural gas from the United States. Mexico is expected to begin producing shale gas commercially after 2030, with shale volumes contributing more than 75% of total natural gas production by 2040.

Source: U.S. Energy Information Administration, International Energy Outlook 2016 and Annual Energy Outlook 2016. Note: Other gas includes coalbed methane, tight gas and other (nontight) gas.

 

Principal contributors: Faouzi Aloulou, Victoria Zaretskaya

Original Post

Kategorier: Udenlandske medier

The Storage Story: What’s in Store?

Energy Collective - 17. august 2016 - 5:00

Energy storage can be used for a variety of applications, including utility infrastructure, grid stabilization, peak load shifting and peak demand offsetting. Photo credit: NREL

Storage has been no stranger to the energy conversation over the last several years. According to the White House, the United States doubled the installed capacity of energy storage to 500MW in 2015 alone. As the U.S. solar market continues to grow, the industry is beginning to envision a future where solar projects are built with storage units to help offset peak demand (just ask Elon Musk). Where do we stand currently? Is storage growing in the U.S., or is it just a buzzword?

The Current State: Looking at the State Level

Energy storage incentive programs have emerged in some of the top solar markets. California’s Pacific Gas & Electric (PG&E) has had the Self-Generation Incentive Program (SGIP) in place since 2001. Recently, the California Public Utilities Commission approved reforms that will require 75 percent of the program’s $83 million annual budget be used for energy storage. Previously, much of the budget had been consumed by fuel cells.

On the East Coast, New Jersey, whose program we have previously addressed, incentivizes qualifying projects at $300 per kWh of electricity produced, capped at $300,000 or 30 percent of the total cost, whichever is lower.

In addition to these incentives, some states have introduced mandates to encourage storage development. California, true to its history of ambitious renewable energy goals, led the way with a 1.3GW procurement mandate by 2020 for its three largest utilities. Oregon followed suit shortly after with a storage mandate of its own. Looking back East, comprehensive energy legislation passed in Massachusetts at the end of July orders the Department of Energy Resources (DOER) to develop a 2020 energy storage mandate.

On the Horizon: National Incentives

Storage's momentum has continued at the federal level. Early last month, Senator Martin Heinrich of New Mexico introduced an investment tax credit (ITC) for energy storage modeled on the solar ITC. While a good idea in theory, such a credit would face a practical constraint: the solar industry is already dealing with an undersupply of tax equity, and the number of investors willing and able to monetize the solar investment tax credit is limited, which drives up return requirements for these investors.

In addition to federal legislation, the White House recently took actions that they estimate will accelerate storage procurement or deployment to at least 1.3GW in the next five years.

Outlook for Developers

Energy storage can be used for a variety of applications, including utility infrastructure, grid stabilization, peak load shifting and peak demand offsetting.

As the storage market grows, we have seen increased demand for storage + solar solutions in requests for proposals (RFPs), and we are participating in these RFPs with partners as we look to meet consumers’ demand for this “hot” technology.

While mass adoption of solar + storage is at the very least one year away, growing interest in the topic makes it critical for solar developers to expand their knowledge on this topic. It is important to know when to add storage to a project, but it may be just as vital to know when to disqualify it. For now, energy storage feasibility is still highly incentive-driven (and thus, location-driven). Even in states with energy storage incentives, however, adding energy storage to a solar project sometimes fails to increase customer savings. Part of the challenge is that energy storage and solar offset different parts of customers’ energy bills. Because of this, optimal tariffs for solar and energy storage savings are rarely the same. Nevertheless, if customers in states with incentives have large, “spiky” loads and high demand charges, or are on tariffs with time of day based rates, it is worth evaluating the addition of an energy storage system to a solar project.
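As a rough illustration of that screening logic, the sketch below (all numbers hypothetical) shows why high demand charges and spiky loads matter: in this simplified view, the value of storage comes from shaving the monthly peak that sets the demand charge.

```python
# A minimal screening sketch (all numbers hypothetical) for whether
# storage pencils out against a demand charge.

demand_charge = 18.0        # $/kW-month (assumed high-demand-charge tariff)
monthly_peak_kw = 500.0     # customer's metered peak without storage
battery_power_kw = 100.0    # power rating of the storage system
shave_fraction = 0.8        # portion of rated power reliably shaving the peak

peak_reduction_kw = battery_power_kw * shave_fraction
annual_savings = peak_reduction_kw * demand_charge * 12

print(f"Peak cut from {monthly_peak_kw:.0f} to "
      f"{monthly_peak_kw - peak_reduction_kw:.0f} kW")
print(f"Demand-charge savings: ${annual_savings:,.0f}/year")   # $17,280
```

A real evaluation would also model the tariff's time-of-day rates and the battery's state of charge hour by hour, which is why the same system can pencil out for one customer and not for its neighbor.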

Stay tuned. As more energy storage incentives become available and the cost of energy storage systems continues to decline, it will become increasingly important for solar experts to be energy storage experts.

By William Patterson

This is an excerpt from the August edition of SOURCE: the Sol Project Finance Journal, a monthly electronic newsletter analyzing the solar industry’s latest trends based on our unique position in the solar financing space. To view the full Journal or subscribe, please e-mail pr@solsystems.com.

ABOUT SOL SYSTEMS

Sol Systems is a leading solar energy investment and development firm with an established reputation for integrity and reliability. The company has financed approximately 450MW of solar projects, and manages over $500 million in assets on behalf of insurance companies, utilities, banks, and Fortune 500 companies.

Sol Systems works with its corporate and institutional clients to develop customized energy procurement solutions, and to architect and deploy structured investments in the solar asset class with a dedicated team of investment professionals, lawyers, accountants, engineers, and project finance analysts.

Original Post

Kategorier: Udenlandske medier

2 Steps We Can Take Right Now to Modernize Pennsylvania’s Electric Grid

Energy Collective - 17. august 2016 - 4:00

Each year, dozens of utilities across the U.S. embark on a complicated process called a “rate case.” Presented to a state public utility commission (PUC), a rate case is a utility’s pitch for higher electricity prices for customers. For most utilities, a rate case only happens once every several years. So, all sides argue for the rules of the road by which the utility will operate until the next rate case. A rate case is also where state and local governments, along with consumer and environmental advocacy groups, seek cleaner, cheaper, and more customer-friendly prices, products, and policies.

The Pennsylvania Public Utilities Commission (PPUC) is currently hearing a rate case for Metropolitan Edison (Met-Ed), which serves 560,000 residential and commercial customers and is one of the Pennsylvania utility branches of Ohio-based mega-company FirstEnergy. Last month, Environmental Defense Fund (EDF) filed testimony in the case urging Pennsylvania to modernize its grid with both voltage optimization and customer data access. The PPUC should require Met-Ed to implement both programs so Pennsylvanians can benefit from a clean, modern electric grid.

Right-sizing voltage cuts cost, pollution

Voltage optimization enables a utility to send each customer the right amount of voltage. Think of it as “right-sizing” your electricity. Today, most utilities send customers more voltage than needed. Studies have shown customers routinely receive, on average, two to three percent higher voltage than needed to power their homes and businesses. Across an entire service area, this adds up to a lot of wasted electricity and unnecessary, harmful pollution from power plants. In short, using the right voltage is a proven way to save money and cut pollution.

Through a grant from the U.S. Department of Energy (DOE), Chicago-based ComEd found voltage optimization could eliminate 2,000 gigawatt-hours of electricity demand each year for less than 2 cents per kilowatt hour. That’s enough electricity to power 180,000 homes. Customer savings are estimated at $240 million a year, more than from ComEd’s other energy-efficiency programs.
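A back-of-envelope check of those ComEd numbers is straightforward; the average household consumption of about 11,000 kWh per year used below is my assumption, not from the article:

```python
# Back-of-envelope check of the ComEd voltage-optimization figures above.
saved_kwh = 2000e6            # 2,000 GWh a year, from the article

avg_home_kwh = 11_000         # assumed annual household consumption
print(f"Homes powered: {saved_kwh / avg_home_kwh:,.0f}")   # ~182,000

customer_savings = 240e6      # $/year, from the article
print(f"Implied retail value: "
      f"{customer_savings / saved_kwh * 100:.0f} cents/kWh")   # ~12
```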

FirstEnergy also received a $57 million DOE grant to test voltage optimization in Met-Ed’s service area. It reduced peak energy demand, saved electricity, and reduced line losses (electricity lost over transmission lines). But unlike ComEd, FirstEnergy abandoned the program, despite equipment already having been installed and ready to go. Even worse, FirstEnergy has not proposed to implement voltage optimization in Met-Ed’s rate case. EDF has been pushing FirstEnergy to use voltage optimization and we will not stop until it happens. As energy advocates (both environmental and consumer), we can encourage Pennsylvania to require it.

Customers need energy data

Blocking customer access to their own electricity-use data stifles the economy. It's hard for customers to know how much electricity they're using (and how they're using it) without easy, timely access to their energy data. It's also hard for third-party companies, like app developers and smart home appliance manufacturers, to develop products that make energy management easier if customers are unable to grant those products access to their energy data.

Moreover, blocking customer access to their own electricity-use data stifles personal freedom: It's been established that a customer owns her energy data. However, Met-Ed doesn't allow customers timely access to this data.

The next step in modernizing our grid so it can operate like an “energy internet” is for states to make this critical energy data easily accessible to and sharable by utility customers. This is why EDF is working with Mission:data, a national coalition of over 35 technology companies that work together for customer-friendly data access policies, to urge utilities across the country to adopt best practices around customer data access. As with voltage optimization, Met-Ed’s rate case misses the mark on this critical component as a new customer data access policy is nowhere to be found.

Better, smarter, cleaner

America’s electric grid is, undoubtedly, undergoing a transformation. We know it can and must be better, smarter, and cleaner. Proven technologies like voltage optimization are important pieces of a modern grid, and easy access to energy data will unlock further innovation and help connect customers to clean energy solutions. This is why Pennsylvania should ensure its citizens benefit from these programs, while they still have the chance to do so in Met-Ed’s rate case.

By John Finnigan, Senior Regulatory Attorney

Original Post

Kategorier: Udenlandske medier

Doubling Battery Power of Consumer Electronics

Energy Collective - 17. august 2016 - 3:00

SolidEnergy Systems’ battery (far right) is twice as energy-dense, yet just as safe and long-lasting as the lithium ion batteries used in consumer electronics. The battery uses a lithium metal foil for an anode, which can hold more ions and is several times thinner and lighter than traditional lithium metal, graphite, carbon, or silicon anodes. A novel electrolyte also keeps the battery from heating up and catching fire. Courtesy of SolidEnergy Systems

 

An MIT spinout is preparing to commercialize a novel rechargeable lithium metal battery that offers double the energy capacity of the lithium ion batteries that power many of today’s consumer electronics.

New lithium metal batteries could make smartphones, drones, and electric cars last twice as long.

By Rob Matheson | MIT News Office

Founded in 2012 by MIT alumnus and former postdoc Qichao Hu ’07, SolidEnergy Systems has developed an “anode-free” lithium metal battery with several material advances that make it twice as energy-dense, yet just as safe and long-lasting as the lithium ion batteries used in smartphones, electric cars, wearables, drones, and other devices.

“With two-times the energy density, we can make a battery half the size, but that still lasts the same amount of time, as a lithium ion battery. Or we can make a battery the same size as a lithium ion battery, but now it will last twice as long,” says Hu, who co-invented the battery at MIT and is now CEO of SolidEnergy.

The battery essentially swaps out a common battery anode material, graphite, for very thin, high-energy lithium-metal foil, which can hold more ions — and, therefore, provide more energy capacity. Chemical modifications to the electrolyte also make the typically short-lived and volatile lithium metal batteries rechargeable and safer to use. Moreover, the batteries are made using existing lithium ion manufacturing equipment, which makes them scalable.

In October 2015, SolidEnergy demonstrated the first-ever working prototype of a rechargeable lithium metal smartphone battery with double energy density, which earned them more than $12 million from investors. At half the size of the lithium ion battery used in an iPhone 6, it offers 2.0 amp hours, compared with the lithium ion battery’s 1.8 amp hours.

SolidEnergy plans to bring the batteries to smartphones and wearables in early 2017, and to electric cars in 2018. But the first application will be drones, coming this November. “Several customers are using drones and balloons to provide free Internet to the developing world, and to survey for disaster relief,” Hu says. “It’s a very exciting and noble application.”

Putting these new batteries in electric vehicles as well could represent “a huge societal impact,” Hu says: “Industry standard is that electric vehicles need to go at least 200 miles on a single charge. We can make the battery half the size and half the weight, and it will travel the same distance, or we can make it the same size and same weight, and now it will go 400 miles on a single charge.”
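The doubling claim can be checked from the prototype figures quoted earlier (2.0 amp hours at half the size of a 1.8 amp hour cell), under the simplifying assumption that cell voltage is equal, so amp-hours are proportional to energy:

```python
# Checking the "twice the energy density" claim from the prototype figures,
# assuming equal cell voltage so capacity (Ah) is proportional to energy.

li_ion_ah, solidenergy_ah = 1.8, 2.0
size_ratio = 0.5              # SolidEnergy cell is half the volume

density_ratio = (solidenergy_ah / li_ion_ah) / size_ratio
print(f"Energy density ratio: {density_ratio:.1f}x")   # ~2.2x
```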

Tweaking the “holy grail” of batteries

Researchers have for decades sought to make rechargeable lithium metal batteries, because of their greater energy capacity, but to no avail. “It is kind of the holy grail for batteries,” Hu says.

Lithium metal, for one, reacts poorly with the battery's electrolyte — a liquid that conducts ions between the cathode (positive electrode) and the anode (negative electrode) — and forms compounds that increase resistance in the battery and reduce cycle life. This reaction also creates mossy lithium metal bumps, called dendrites, on the anode, which lead to short circuits, generating high heat that ignites the flammable electrolyte and making the batteries generally nonrechargeable.

Measures taken to make the batteries safer come at the cost of the battery’s energy performance, such as switching out the liquid electrolyte with a poorly conductive solid polymer electrolyte that must be heated at high temperatures to work, or with an inorganic electrolyte that is difficult to scale up.

While working as a postdoc in the group of MIT professor Donald Sadoway, a well-known battery researcher who has developed several molten salt and liquid metal batteries, Hu helped make several key design and material advancements in lithium metal batteries, which became the foundation of SolidEnergy’s technology.

One innovation was using an ultrathin lithium metal foil for the anode, which is about one-fifth the thickness of a traditional lithium metal anode, and several times thinner and lighter than traditional graphite, carbon, or silicon anodes. That shrank the battery size by half.

But there was still a major setback: The battery only worked at 80 degrees Celsius or higher. “That was a showstopper,” Hu says. “If the battery doesn’t work at room temperature, then the commercial applications are limited.”

So Hu developed a solid and liquid hybrid electrolyte solution. He coated the lithium metal foil with a thin solid electrolyte that doesn’t need to be heated to function. He also created a novel quasi-ionic liquid electrolyte that isn’t flammable, and has additional chemical modifications to the separator and cell design to stop it from negatively reacting with the lithium metal.

The end result was a battery with energy-capacity perks of lithium metal batteries, but with the safety and longevity features of lithium ion batteries that can operate at room temperature. “Combining the solid coating and new high-efficiency ionic liquid materials was the basis for SolidEnergy on the technology side,” Hu says.

Blessing in disguise

On the business side, Hu frequented the Martin Trust Center for MIT Entrepreneurship to gain valuable insight from mentors and investors. He also enrolled in Course 15.366 (Energy Ventures), where he formed a team to develop a business plan around the new battery.

With their business plan, the team won the first-place prize at the MIT $100K Entrepreneurship Competition’s Accelerator Contest, and was a finalist in the MIT Clean Energy Prize. After that, the team represented MIT at the national Clean Energy Prize competition held at the White House, where they placed second.

In late 2012, Hu was gearing up to launch SolidEnergy, when A123 Systems, the well-known MIT spinout developing advanced lithium ion batteries, filed for bankruptcy. The landscape didn’t look good for battery companies. “I didn’t think my company was doomed, I just thought my company would never even get started,” Hu says.

But this was somewhat of a blessing in disguise: Through Hu’s MIT connections, SolidEnergy was able to use the A123’s then-idle facilities in Waltham — which included dry and clean rooms, and manufacturing equipment — to prototype. When A123 was acquired by Wanxiang Group in 2013, SolidEnergy signed a collaboration agreement to continue using A123’s resources.

At A123, SolidEnergy was forced to prototype with existing lithium ion manufacturing equipment — which, ultimately, led the startup to design novel, but commercially practical, batteries. Battery companies with new material innovations often develop new manufacturing processes around new materials, which are not practical and sometimes not scalable, Hu says. “But we were forced to use materials that can be implemented into the existing manufacturing line,” he says. “By starting with this real-world manufacturing perspective and building real-world batteries, we were able to understand what materials worked in those processes, and then work backwards to design new materials.”

After three years of sharing A123's space in Waltham, SolidEnergy this month moved its headquarters to a brand new, state-of-the-art pilot facility in Woburn that's 10 times larger — and "can house the wings of a Boeing 747," Hu says — with aims of ramping up production for their November launch.

Reprinted with permission of MIT News

Original Post

Kategorier: Udenlandske medier

Back to Basics: Reducing Waste in Office Buildings

Energy Collective - 17. august 2016 - 2:00

There are several strategies that building owners and managers can use to encourage recycling in office buildings.

Corporate responsibility and environmental efforts are becoming a much higher priority for a number of companies, regardless of industry. In fact, according to a 2012 Deloitte CFO survey, 93 percent of CFOs believe that there is a direct link between sustainability programs and business performance.

One of the most basic steps for an organization looking to enhance its sustainable practices is recycling, often referred to as resource management or waste management.

Although socially conscious companies can set recycling policies and goals, it’s up to the building ownership and management to ensure the right programs and avenues are in place to allow tenants to achieve those goals. Implementing a recycling program at an existing building is no small feat, as it oftentimes requires an overhaul of in-place systems and tenant engagement. Here are four common obstacles and strategies to overcome them, and to help building owners and tenants reach their recycling goals.

1) Knowledge

Before implementing a recycling program, it’s important to know what’s being thrown away and what portion of that could have been recycled. Armed with knowledge, property owners and managers can set up a plan of action.

Strategies: Perform a waste audit. Look through trash streams, pick out recyclable materials that were thrown in the trash and look through recycling receptacles to see if people are recycling appropriate materials. This hands-on analysis can show where holes exist in the current recycling program, as well as large-scale totals of how much recyclable material is getting thrown into a trash stream.

2) Infrastructure

In some cities, buildings were not designed with infrastructure in place for both trash and recycling storage. In many cases, there simply is not enough space to accommodate both.

Strategies: Work with buildings and waste management groups to adjust the size of on-site containers. Often, there will be a large trash compactor and really small recycling bins. Change up the size and amount of containers to accommodate more recycling, which in turn will limit trash. Think of it this way: if there is room for the cardboard in the recycling container, the cardboard is more likely to make it into the right bin.

3) Convenience

Encouraging a recycling mindset can be a challenge. Habits are hard to break, so make it easy for people to turn over a new leaf.

Strategies: Create ample opportunities to recycle. Make recycling containers easily accessible, such as placing recycling bins at employees’ desks. On the flip side, not having any bins (neither trash nor recycling) at employees’ desks can also promote recycling by requiring a concerted effort to seek out a receptacle for waste, whether trash or recycling. Having both containers in the same place helps individuals make the right decisions. Tenant space recycling is ultimately all about convenience.

4) Education

In general, people do not have an in-depth understanding of what can or cannot be recycled.

Strategies: Display materials prominently at trash and recycling receptacles that show what materials can go into recycling bins. Recycling rules can vary significantly between work and home, so education is important. Waste haulers are often available to educate building management and tenants about what is recyclable. Posters or signs double as both a form of education and a simple reminder: although someone may not have intended to recycle something, a sign could prompt them to once they get to the receptacle.

And why wait until the building is occupied? The recycling process can start much earlier, even when a building is being constructed. In new construction projects, much of the construction and demolition (C&D) waste ends up in landfills—even recyclable materials. There are ample opportunities for C&D waste to be turned into new resources, leading to job creation and industrial activity. Here are a few examples from Waste Management:

  • Inerts can become road base.
  • Cardboard, paper, plastics and metals can be converted into new goods.
  • Clean wood becomes mulch or biomass fuel.
  • Dirt, rock and sand turn into alternative daily cover (ADC) in landfills.
  • Crushed concrete can be turned into gravel or dry aggregate for new concrete.

Recycling C&D waste can be tricky, but there are several strategies to improve the percentage of building materials that are diverted from landfills.

  • Start early: By integrating a sustainability and waste management plan into the design and construction process early, all members of the construction and design team are able to contribute to reaching the final goal.
  • Get to know the waste management company: Recycling is handled very differently from state to state. It is very important to work directly with a waste management company to identify opportunities to recycle materials and set goals based on those opportunities.
  • Be vocal: Make recycling rates and sustainability goals known. Some owners will include these numbers in a waste management contract to stay on target.

By Dan Hartsig and Samantha Longshore, Transwestern

Original Post

Kategorier: Udenlandske medier

A Reality Check on Renewable Energy Potential

Energy Collective - 16. august 2016 - 11:00

Highlights

  • On a global level, the potential for renewable energy is more than sufficient.
  • Problems emerge on a regional level, however, especially in developing Asia and Africa.
  • Renewable energy technology forcing in these regions can have serious socio-economic consequences.

Introduction

We often see images like the one below which imply that the potential of renewable energy is essentially limitless. Thus, if we only had the will, we could easily power the world with clean and everlasting renewable energy.

Solar PV is generally viewed as the most limitless of all the renewable energy options. The little squares on the map below show just how easy it is to power the world with solar.

The reality is, however, that realistic renewable energy potential is orders of magnitude lower than these simplistic illustrations suggest.

Firstly, areas covered by urban developments, forests, protected zones, ice, dunes or rock need to be excluded, as do areas with excessive slope or elevation. After these eliminations, only areas with sufficiently strong solar irradiation or wind speeds can be considered.

From the remaining land area, only a small fraction can be used before serious social resistance or natural habitat interference is encountered. For example, only about 1-2% of available land area is covered by onshore wind in European countries like Denmark, Germany and the Netherlands, but these issues are already becoming significant.

All of these factors have recently been quantified in a very interesting study published in the Elsevier journal “Global Environmental Change”. Findings from this study are further discussed below.

Globally – more than enough

Even after all of these realistic adjustments, the total global wind and solar resource still easily meets projected demand in the year 2070, even under the most pessimistic assumptions (the dark bands in the graphs).

It is clear that PV, CSP and offshore wind hold the greatest potential. Onshore wind has a much smaller potential, however, especially under low (3%) and medium (6%) land availability assumptions. PV on buildings also has quite a large potential in the year 2070 due to assumptions of large urban buildouts and large gains in solar panel efficiency (35% in 2070).

The projected electricity demand by 2070 is set within the range of 24-40 GJ/person/year. For perspective, the average American currently consumes about 44 GJ of electricity per year and electricity accounts for only about 20% of final energy consumption.
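For readers more used to kilowatt-hours, a quick conversion of those figures (1 GJ is about 278 kWh) puts the range in perspective:

```python
# Converting the per-capita electricity figures above to kWh.
GJ_TO_KWH = 1e9 / 3.6e6      # joules per GJ / joules per kWh = 277.8

for gj in (24, 40, 44):      # 2070 demand range, plus current US average
    print(f"{gj} GJ/person/year ~= {gj * GJ_TO_KWH:,.0f} kWh/person/year")
# 24 -> ~6,700; 40 -> ~11,100; 44 -> ~12,200
```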

Regionally – problems arise

Unlike hydrocarbon fuels, electricity is not easily tradeable between different world regions. It is therefore very important to assess renewable energy resource availability on a regional basis. The following highly informative graphic tells the story:

 

It is clear that only North America, developing Europe and Australia have access to a well-balanced mix of renewable energy resources with more than enough potential. A well-balanced mix of resources is important to minimize the effects of intermittency in order to allow for higher renewable energy market shares. For example, the positive effect of mixing wind and solar in terms of preserving more value with increasing market share is shown below (the y-axis illustrates the value of generated electricity where 1 is the average market value):

When deploying only wind or only solar PV, the solar PV option is especially challenging. Because solar’s variability is very pronounced and highly correlated within a reasonable distance, its value falls rapidly with increasing market share. This is illustrated below:

Offshore wind and rooftop solar are about twice as expensive as onshore wind and utility-scale PV for obvious reasons. In addition, the development of CSP has been slower than anticipated. It should be mentioned, however, that the low-cost inclusion of thermal energy storage in CSP significantly increases its value.

Given these considerations, most regions around the world will have a very tough time achieving high market shares of renewable energy. The two most populous regions in 2070, Sub-Saharan Africa and South Asia, will have to rely heavily on solar power. If solar thermal technology can be greatly improved, this will help Sub-Saharan Africa, but South Asia will have to rely almost completely on solar PV – mostly the expensive distributed kind. North Africa and the Middle East face similar challenges.

The highly populous East Asian, South-East Asian and South American regions can achieve greater balance if they heavily rely on expensive offshore wind. South-East Asia will be especially dependent on offshore wind together with EU Europe.

It should be noted that further refinement of the data to a country or state level will further accentuate these challenges. The situation outlined above assumes lots of long distance electricity lines and excellent performance by politicians to establish cross-border regional electricity markets.

Special challenges for the developing world

The challenges outlined above are further augmented in the developing world, especially in Asia and Africa, which may well be home to 80% of the world population by the end of this century.

These regions and their enormous populations still have a lot of industrialization to do. Industrialization is critical to give these people a reasonable quality of life, to shield them against the effects of climate change, and to naturally curb population growth. Unfortunately, industrialization is also an incredibly expensive and resource intensive undertaking. Insisting on driving industrialization primarily through renewable energy will therefore come at a tremendous cost in terms of quality of life, especially given the challenges outlined in the previous section.

As a simple example, I estimated the effect of renewable energy technology forcing on economic growth in India at the bottom of this article. The example showed that deploying only solar and wind to grow Indian electricity production in support of economic growth would cut the Indian growth rate in half. After 20 years of this practice, the Indian economy would literally be only half the size it could otherwise have been. The situation would be worsened further by the fact that South Asia will have to rely heavily on expensive distributed solar PV, which rapidly loses value as its market share increases. Such a development strategy is simply not going to happen unless rich nations finance the necessary subsidies, and that is not going to happen any time soon.
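The compounding arithmetic behind the "half the size" claim is easy to verify; the 7% baseline growth rate below is illustrative, not a figure from the article:

```python
# Checking the compounding claim with an illustrative growth rate
# (roughly India's recent real GDP growth of ~7%/yr, halved to 3.5%).

full_rate, halved_rate = 0.07, 0.035
years = 20

ratio = ((1 + full_rate) / (1 + halved_rate)) ** years
print(f"After {years} years the unconstrained economy is {ratio:.2f}x larger")
# ~1.94x, i.e. the constrained economy is roughly half the size
```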

Final word

This article was definitely not written to write off renewable energy. As often stated before, I wholeheartedly support moderate wind and solar deployment in regions where they make sense. For example, the US is one country where renewable energy makes a lot of sense due to its vast available land areas, high quality wind and solar resources, and affluent population.

Wind and solar technology forcing in regions with much lower potential and much poorer populations is a completely different story though. I fear that this strategy will be highly inefficient at best and disastrous at worst.

Kategorier: Udenlandske medier

Fixing a Major Flaw in Cap-And-Trade

Energy Collective - 16. august 2016 - 9:00

While many Californians are spending August burning fossil fuels to travel to vacation destinations, the state legislature is negotiating with Gov. Brown over whether and how to extend California's cap-and-trade program to reduce carbon dioxide and other greenhouse gases (GHGs). The program, which began in 2013, is currently scheduled to run through 2020, so the state is now pondering what comes after 2020.

The program requires major GHG sources to buy “allowances” to cover their emissions, and each year reduces the total number of allowances available, the “cap”.  The allowances are tradeable and their price is the incentive for firms to reduce emissions.  A high price makes emitters very motivated to cut back, while a low price indicates that they can get down to the cap with modest efforts.

Before committing to a post-2020 plan, however, policymakers must understand why the cap-and-trade program thus far has been a disappointment, yielding allowance prices at the administrative price floor and having little impact on total state GHG emissions.  California’s price is a little below $13/ton, which translates to about 13 cents per gallon at the gas pump and raises electricity prices by less than one cent per kilowatt-hour.
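Those pass-through figures are easy to reproduce with standard emission factors. The factors below (roughly 8.9 kg CO2 per gallon of gasoline, and about 0.4 kg CO2/kWh for gas-fired generation) are my assumptions, not from the article:

```python
# Rough check of the carbon-price pass-through figures above.
allowance_price = 13.0        # $/metric ton CO2

gasoline_kg_per_gal = 8.9     # assumed emission factor for gasoline
print(f"Gasoline: {allowance_price * gasoline_kg_per_gal / 1000 * 100:.0f}"
      " cents/gallon")        # ~12 cents/gallon

gas_gen_kg_per_kwh = 0.4      # assumed factor for gas-fired generation
print(f"Electricity: {allowance_price * gas_gen_kg_per_kwh / 1000 * 100:.2f}"
      " cents/kWh")           # ~0.52 cents/kWh, i.e. less than one cent
```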

The low prices in the three major markets for GHGs mean little impact on behavior

And it's not just California. The two other major cap-and-trade markets for greenhouse gases – the EU's Emissions Trading System and the Regional Greenhouse Gas Initiative in the northeastern U.S. – have also seen very low prices (about $5/ton in both markets) and scant evidence that the markets have delivered emissions reductions. In fact, the low prices in the EU-ETS and RGGI have persisted even after those programs effectively lowered their emissions caps to try to goose up prices.

In all of these markets, some political leaders have argued that the outcomes demonstrate that other policies – such as increased auto fuel economy and requiring more electricity from renewable sources – have effectively reduced emissions without much help from a price on GHGs. That view is partially right, but a study that Jim Bushnell, Frank Wolak, Matt Zaragoza-Watkins and I released last Tuesday shows that a major predictor of variation in GHG emissions is the economy. While emissions aren’t perfectly linked to economic output, more jobs and more output mean generating more electricity and burning more gasoline, diesel and natural gas – the largest drivers of GHG emissions.

[Figure: Accurately predicting California’s GSP (gross state product) 10-15 years into the future is extremely difficult]

Because it is extremely difficult to predict economic growth a decade or more into the future, there is huge uncertainty about how much GHG an economy will emit over long periods, even in the absence of any climate policies – what climate wonks call the “Business As Usual” (BAU) scenario.

If the economy grows more slowly than anticipated – as happened in all three cap-and-trade market areas after the programs’ goals were set – then BAU emissions will be low and reaching a prescribed reduction will be much easier than expected. But if the economy suddenly takes off – as happened in California’s boom of the late 1990s – emissions will be much more difficult to restrain. Our study finds that the impact of variation in economic growth on emissions is much greater than any predictable response to a price on emissions, at least at a price within the bounds of political acceptability.

[Figure: California emissions since 1990 have fluctuated with economic growth]

Our finding has important implications for extending California’s program beyond 2020. If the state’s economy grows slowly, we will have no problem and the price in a cap-and-trade market will be very low. In that case, however, the program will do little to reduce GHGs, because BAU emissions will be below the cap. But if the economy does well, the cap will be very constraining and allowance prices could skyrocket, leading to calls for raising the emissions cap or shutting down the cap-and-trade program entirely.

Our study shows that the probability of hitting a middle ground — where allowance prices are not so low as to be ineffective, but not so high as to trigger a political backlash — is very low.  It’s like trying to guess how many miles you will drive over the next decade without knowing what job you’ll have or where you will live.

So, can California’s cap-and-trade program be saved? Yes. But it will require moderating the view that there is one single emissions target the state must hit. Instead, the program should be revised to have a price floor that is substantially higher than the current level, which is so low that it does not significantly change the behavior of emitters. And the program should have a credible price ceiling at a level that won’t trigger a political crisis. The current program has a small buffer of allowances that can be released at high prices, but it would still have risked skyrocketing prices if California’s economy had experienced more robust growth.

The state would enforce the price ceiling and floor by changing the supply of allowances in order to keep the price within the acceptable range. California would refuse to sell additional allowances at a price below the floor. This is already state policy, but the floor is too low. California would also stand ready to sell any additional allowances that emitters need to meet their compliance obligation at the price ceiling.

Essentially, the floor and ceiling would be a recognition that if the cost of reducing emissions is low, we should do more reductions rather than just letting the price fall to zero, and if the cost is high, we should do less rather than letting the price of the program shoot up to unacceptable levels.
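As a rough illustration of how such a collar could operate in an allowance auction, here is a minimal sketch; the floor and ceiling values are purely illustrative assumptions, since the article does not propose specific levels:

```python
# Minimal sketch of a price collar on allowance supply.
# Floor and ceiling values are illustrative assumptions, not proposed levels.
PRICE_FLOOR = 30.0    # $/ton: the state refuses to sell allowances below this
PRICE_CEILING = 60.0  # $/ton: the state sells any quantity needed at this price

def collar_outcome(market_price: float, quantity_demanded: float) -> tuple[float, float]:
    """Return (clearing_price, allowances_sold) under the collar rule."""
    if market_price < PRICE_FLOOR:
        # Weak demand: withhold new allowances rather than let the price collapse
        return PRICE_FLOOR, 0.0
    if market_price > PRICE_CEILING:
        # Strong demand: expand supply so compliance stays possible at the ceiling
        return PRICE_CEILING, quantity_demanded
    # Between floor and ceiling, the market clears normally
    return market_price, quantity_demanded

for price in (20.0, 45.0, 75.0):
    print(collar_outcome(price, quantity_demanded=100.0))
```

In effect, the allowance supply becomes elastic at both ends of the price range: the hard emissions cap binds only while the price stays between the floor and the ceiling.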

But should California’s cap-and-trade program be saved?  I think so.  My first choice would be to replace it with a tax on GHG emissions, setting a reliable price that would make it easier for businesses to plan and invest.  But cap-and-trade is already the law in California and with a credible price floor and ceiling it can still be an effective part of the state’s climate plan.

Putting a price on GHGs creates incentives for developing new technologies, and in the future might motivate large-scale switching from high-GHG to low-GHG energy sources as their relative costs change.  The magnitudes of these effects could be large, but they are extremely uncertain, which is why price ceilings and floors are so important in a cap-and-trade program.  With these adjustments, California can still demonstrate why market mechanisms should play a central role in fighting climate change while maintaining economic prosperity.

A shorter version of this post appeared in the Sacramento Bee on August 14 (online August 11).

I’m still tweeting energy news articles and studies @BorensteinS

