Category Archives: miscellaneous

Kenny Shields’ passing

Many of you may not be aware of my second career path – artist management – which I stepped away from four years ago.  I was privileged to work with Kenny Shields and Streetheart; Kenny passed away recently. He will be missed as a great person, artist, friend, father and husband.

Five years ago today, this photo was taken at the Travelodge in Saskatoon.  It shows a supper with the artists I was working with at the time: Kenny Shields/Streetheart, Jully Black, and Donny Parenteau.  See also http://www.ericandersonmanagement.com

Photo: supper with Kenny, Jully and Donny

To save the planet, we must ignore anti-nuclear ideologues


KONRAD YAKABUSKI

The Globe and Mail

Published Thursday, Jul. 20, 2017 5:01PM EDT

Last updated Thursday, Jul. 20, 2017 5:44PM EDT

 

There might be a way for the world to meet its carbon-reduction targets that does not involve building more nuclear power plants. The problem is, no one has come up with one. Until that happens, politicians need to get real about nuclear energy’s essential role in saving the planet.

Unfortunately, most of them still have their heads stuck in their solar panels.

The latest greener-than-thou politician to make the perfect the enemy of the good is France’s awkwardly titled Minister for the Ecological and Inclusive Transition, Nicolas Hulot. This month, Mr. Hulot announced the shutdown of as many as 17 of France’s 58 nuclear reactors over the next eight years as part of President Emmanuel Macron’s promise to cut his country’s reliance on nuclear-generated electricity to 50 per cent from 75 per cent by 2025.

Mr. Hulot says he has “absolute faith” in renewable power sources, mainly wind and solar energy, to fill the gap. But as Germany shows, closing emissions-free nuclear power plants, more often than not, leads to burning more fossil fuels to produce power. That’s because wind and solar remain intermittent power sources, while nuclear, coal and natural gas plants can run full-steam 24/7.

In a report last month, the International Energy Agency said “premature closure of operational nuclear power plants remains a major threat to meeting targets,” set under the 2015 Paris climate agreement, to prevent global temperatures from rising more than two degrees above preindustrial levels by the end of the century.

But don’t try telling that to Mr. Hulot. A former star television journalist tapped by Mr. Macron to boost his credibility with environmentalists, Mr. Hulot is France’s version of David Suzuki. In 2012, he sought the presidential nomination for France’s anti-nuclear Green Party. He appears unmoved by expert warnings that France will pay a heavy environmental and economic price if he sticks to his nuclear-reduction plan.

France has long been at the forefront of nuclear research, and its nuclear industry, led by state-owned Areva and Électricité de France, is a global leader. But just as some Canadian ideologues want to shut down the oil sands, France’s green ideologues want to shut the country’s reactors.

This promises to be hugely expensive and, ironically, make it much harder for France to meet its greenhouse gas reduction targets under the Paris climate agreement. Wind and solar are unreliable power sources, so “we are obligated to have something else to take over” from nuclear, French climate scientist François-Marie Bréon told Agence France-Presse following Mr. Hulot’s announcement. That “something else” is inevitably fossil-fuel-generated electricity.

The French paradox is being repeated across Europe, where Germany, Spain, Belgium and Switzerland have committed to phasing out nuclear power. This will not only prevent the closing of the continent’s coal plants, it will also increase Europe’s dependence on Russian natural gas, making Vladimir Putin even more powerful than he is now.

In the United States, nuclear power is up against not only opposition from environmentalists but also fierce lobbying by the powerful American Petroleum Institute. Without a carbon tax, cheap natural gas has hurt the competitiveness of existing nuclear plants. The API, which represents natural-gas producers, seeks to quash the financial incentives that some states provide to enable existing nuclear plants to stay open. Wind and solar power are heavily subsidized. So, the reasoning goes, why shouldn’t emissions-free nuclear power plants be similarly rewarded?

Keeping existing nuclear power plants open is only half the battle. The world needs more nuclear. China and India are adding nuclear power capacity but not fast enough to replace plants being closed in the developed world. Even Britain’s Hinkley nuclear station, set to open in 2026, won’t make up for British capacity reductions before then.

The IEA projects that nuclear capacity additions of 20 gigawatts annually are needed to meet the Paris accord targets by the year 2100, but the world is far off the mark. Nuclear “retirements due to phase-out policies in some countries, long-term operation limitations in others, or loss of competitiveness against other technologies” mean that as much as 50 GW of nuclear capacity could be lost by 2025 alone. Politicians who cave to the anti-nuclear lobby are deluding themselves or misleading voters when they insist wind and solar can make up the difference.

“Increasing nuclear capacity deployment could help bridge the [two-degree scenario] gap and fulfill the recognized potential of nuclear energy to contribute significantly to global decarbonization,” the IEA report said. It called for “clear and consistent policy support for existing and new capacity, including clean-energy incentive schemes for development of nuclear alongside other clean forms of energy.”

Are you listening, Monsieur Hulot?


Antarctica’s Larsen Ice Shelf Break-Up driven by Geological Heat Flow Not Climate Change

WRITTEN BY JAMES EDWARD KAMIS, GUEST POST ON JANUARY 19, 2017

http://climatechangedispatch.com/antarcticas-larsen-ice-shelf-break-up-driven-by-geological-heat-flow-not-climate-change/


Figure 1) North tip of Antarctic Continent including Larsen Ice Shelf Outline (black line), very active West Antarctica Rift / Fault System (red lines), and currently erupting or semi-active volcanoes (red dots).

Progressive bottom melting and break-up of West Antarctica’s seafloor-hugging Larsen Ice Shelf is fueled by heat and heated fluid flow from numerous very active geological features, not climate change.

This ice shelf break-up process has been the focus of a worldwide media frenzy contending that man-made atmospheric global warming is at work in the northwest peninsula of Antarctica.

As evidence, media articles typically include tightly edited close-up photos of cracks forming on the surface of the Larsen Ice Shelf (Figure 2) accompanied by text laced with global warming alarmist catch phrases.

This “advertising / marketing” approach does in fact produce beautiful-looking and expertly written articles. However, they lack substance: actual scientific data and observations supporting the purported strong connection to man-made atmospheric global warming are conspicuously absent.

Working level scientists familiar with, or actually performing research on, the Larsen Ice Shelf utilize an entirely different approach when speaking about or writing about what is fueling this glacial ice break-up.

They ascribe the break-up to poorly understood, undefined natural forces (see quote below). Unfortunately, comments by these scientists are often buried deep in media articles and never seem to match the alarmist tone of the article’s headline.

“Scientists have been monitoring the rift on the ice shelf for decades. Researchers told NBC News that the calving event was “part of the natural evolution of the ice shelf,” but added there could be a link to changing climate, though they had no direct evidence of it.” (see here)

Figure 2) An oblique view of the crack in the Antarctic’s Larsen C ice shelf on November 10, 2016. (NBC News article; credit: John Sonntag / NASA via EPA)

This article discusses a theory that more properly explains what is fueling the Larsen Ice Shelf break-up, one supported by actual scientific data and observations, strongly indicating that the above-mentioned undefined natural forces are in fact geological.

Let’s begin by reviewing the map atop this article (Figure 1). This map is a Google Earth image of the local area surrounding, and immediately adjacent to, the Larsen Ice Shelf, here amended with proven active geological features.

If ever a picture told a thousand words, this is it. The Larsen Ice Shelf lies in and among twenty-six semi-active (non-erupting but heat-flowing) land volcanoes, four actively erupting land volcanoes, two proven semi-active seafloor volcanoes (seamounts), and a proven actively heat-flowing major fault system named the West Antarctic Rift.

Not shown on this map are known seafloor hydrothermal vents (hot seafloor geysers), likely heat-emitting fractures, and prominent cone-shaped seafloor mountains that are most likely seamounts (ocean volcanoes).

This geological information paints a very clear and compelling picture that the Larsen Ice Shelf is positioned in an extremely active geological setting. In fact a strong case can be made that the Larsen Ice Shelf owes its very existence to a down-faulted low valley that has acted as a glacial ice container (see research on the Bentley Subglacial Trench of the West Antarctic Rift / Fault).

Next let’s review in more detail a few of the key very local areas on the Figure 1 map which will help clarify the power and recent activity of these areas.

First up is the Seal Nunataks area, which is labeled on the Figure 1 map as “16 Semi-Active Volcanoes”.  In general, these volcanoes lie within and push up through the northern portion of the Larsen Ice Shelf (Figure 3).

More specifically, the Larsen Ice Shelf is formally divided into three sub-areas: the northern “A” segment, the central “B” segment, and the southern “C” segment. The 16 Seal Nunataks volcanoes are strongly aligned west to east and are designated as the boundary between the Larsen “A” and “B” segments.

This 50-mile-long and 10-mile-wide chain of visible land volcanoes has likely been continuously volcanically active for at least the last 123 years based on limited amounts of data from this remote and largely unmonitored area.

Each time humans have visited this area, they have recorded obvious signs of heat and heated fluid flow in the form of fresh lava flows on volcanoes, volcanic ash on new snow, and volcanic debris in relatively new glacial ice. Remember, these observations only document volcanic activity on exposed land surfaces, not the associated volcanic activity occurring on the seafloor of this huge volcanic platform.

More modern research published in 2014 by Newcastle University is here interpreted to indicate that the Larsen “B” portion of the greater Larsen Ice Shelf pulsed a massive amount of heat in 2002. Research elevation instruments showed that a huge portion of the Larsen “B” area quickly rose up, likely in response to swelling of underlying deep earth lava pockets (mantle magma chambers).

This process heated the overlying uplifted ground. This heated ground then acted to bottom melt the overlying glaciers (quote below). This is an awesome display of the power geologically induced heat flow can have on huge expanses of glacial ice.

“Scientists led by Newcastle University in the UK studied the impact of the collapse of the giant Larsen B ice shelf in 2002, using Global Positioning System (GPS) stations to gauge how the Earth’s mantle responded to the relatively sudden loss of billions of tonnes of ice as glaciers accelerated. As expected, the bedrock rose without the weight, but at a pace – as much as 5 centimetres a year in places – that was about five times the rate that could be attributed to the loss of ice mass alone”, said Matt King, now at the University of Tasmania (UTAS), who oversaw the work.

“It’s like the earth in 2002 was prodded by a stick, a very big stick, and we’ve been able to watch how it responded,” Professor King said. “We see the earth as being tremendously dynamic and always changing, responding to the forces.”  Such dynamism – involving rocks hundreds of kilometres below the surface moving “like honey” – could have implications for volcanoes in the region, Professor King said. (see here)

Figure 3) Map of the Seal Nunataks 16 Semi-active volcanoes relative to the three Larsen Ice Shelf segments, “A”, “B”, and “C” (see here). Also, a historical aerial photo of several Seal Nunatak volcanic cones pushing up through the Larsen Ice Shelf.

It is clear that the vast Seal Nunataks volcanic plateau at the very least pulsed significant amounts of heat, and likely heated fluid flow, in the following years: 1893, 1968, 1982, 1985, 1988, 2002 and 2010.

The next key local area on the Figure 1 map is the portion labeled “6 Semi-Active Volcanoes”, of which two are seamounts (seafloor volcanoes) and four are land volcanoes. All of these geological features are known to be currently emitting heat and heated fluid flow; however, the rate and volume of this flow is not well understood. The most noteworthy feature is Deception Island, a huge six-mile-wide collapsed land volcano (caldera).

This volcanic feature extends a great distance outward and downward into the surrounding ocean, was seismically active in 1994, 1995 and 1996, and erupted moderately in 1820, 1906, 1910, 1967, 1969 and 1992.

Early explorers used the harbor created by this collapsed volcano; however, on occasion they had to abandon their moorings when the seawater in the harbor boiled (see quote below). More modern research stations in the 1960s had to temporarily abandon the island due to moderate eruptions.

“The fifth volcano, off the northern tip of the Antarctic Peninsula, is a crater that has been ruptured by the sea to form a circular harbor known as Deception Island. Beginning in the 1820s, it was used as shelter by sealing fleets from New England and later by whalers.

On occasion, water in that harbor has boiled, peeling off bottom paint from the hulls of ships that did not escape in time. An eruption a decade ago damaged research stations established there by both Britain and Chile.

This volcano and the two newly discovered ones on the opposite side of the peninsula, the longest on earth, are thought to be formed by lava released from a southeastward-moving section of the Pacific floor that is burrowing under the peninsula in the same process thought to have formed the Andean mountain system farther north, in South America.” (see here)

The last two key local areas on the Figure 1 map are labeled “1 Erupting and 5 Semi-Active Volcanoes” and “3 Erupting Volcanoes”. These two areas contain major currently erupting land volcanoes that are spewing huge amounts of ash into the atmosphere and, most importantly, massive amounts of heat and heated fluid flow into the surrounding ocean (see here and here).

These ongoing eruptions all lie along, and are generated by, deep earth faulting associated with the northern extension of the West Antarctic Rift / Fault System (red lines on the Figure 1 map). The reader is directed to previous Climate Change Dispatch articles detailing heat flow and volcanic activity along this 5,000-mile-long West Antarctic fault system (see here and here).

Reviewing how mega-geological forces drive Earth’s internal heat engine also has direct bearing on what is fueling the Larsen Ice Shelf Break-up as follows:

  • Earth has undergone an extremely active period of volcanic and earthquake activity during the last three years, especially along major deep-ocean fault systems such as those associated with the Pacific Ring of Fire and the Icelandic Mid-Atlantic Rift. It makes perfect sense that the West Antarctic Rift / Fault System, which underlies the Larsen Ice Shelf, has also become more active during this time frame.
  • The 2015-2016 El Niño Ocean “Warm Blob” has now been proven to be caused / generated by “natural forces”, and not man-made or other purely atmospheric forces. These natural forces are almost certainly geological, as per numerous previous Climate Change Dispatch articles (see here and here). If geological forces have the power to warm the entire Pacific Ocean, they can certainly act to warm the ocean beneath the Larsen Ice Shelf.
  • Climate scientists favoring man-made global warming continue to force-fit all anomalous warming events into an atmospheric framework, because this is the only abundant data source they have available. However, there is very little atmospheric data that supports rapid local Larsen Ice Shelf melting or rapid local ocean warming. Most global atmospheric data indicates that the Antarctic atmosphere is cooling or not changing temperature.
  • The surface of our planet is 70% water, and 90% of all active volcanoes are located on the floor of Earth’s oceans. Quite amazingly, only 3-5% of the ocean floors have been explored by human eyes, and virtually none of this area is monitored. This is especially true in the nearly unexplored and completely unmonitored deep regions of the oceans in and around the Larsen Ice Shelf.
  • It just makes sense that major rift / fault zones, which form the boundaries of Earth’s outer crustal plates and have the power to move entire continents 1-2 inches per year, certainly also have the power to warm oceans, as per the Plate Climatology Theory. The Weddell Sea, which surrounds the Larsen Ice Shelf, is no exception.

In summary, huge amounts of research and other readily available information clearly indicate that the Larsen Ice Shelf lies within a geologically active region. Media reports that do not mention this aspect relative to the potential cause of bottom melting, and the subsequent break-up of the Larsen “A”, “B”, and “C” glaciers, are best characterized as “Fake News” and not “97% Proven / the Debate is Over” news.

Thankfully, there are smaller media venues such as Climate Change Dispatch that provide scientists with a platform to present viable alternative explanations for complicated climate and climate-related events; specifically, in this case, that Antarctica’s Larsen Ice Shelf break-up is fueled by geological heat flow and not climate change.

James Edward Kamis is a geologist and AAPG member of 42 years, with a B.S. and M.S. in geology, who has always been fascinated by the connection between geology and climate. More than 12 years of research and observation have convinced him that the Earth’s heat flow engine, which drives the outer crustal plates, is also an important driver of the Earth’s climate. The Plate Climatology Theory (plateclimatology.com) was recently presented and published at the annual 2016 American Meteorological Society Conference in New Orleans, LA. (see here)

 

REFERENCES

http://www.nbcnews.com/science/environment/iceberg-size-delaware-poised-break-antarctica-n703821

http://volcano.si.edu/volcano.cfm?vn=390050

https://wattsupwiththat.com/2008/01/22/surprise-theres-an-active-volcano-under-antarctic-ice/

http://usatoday30.usatoday.com/news/science/2004-05-20-new-volcano_x.htm

http://news.nationalgeographic.com/2016/11/foehn-winds-melt-ice-shelves-antarctic-peninsula-larsen-c/ Warm Winds not Climate Change but from Geologically Warmed Ocean.

https://www.volcanodiscovery.com/seal-nunataks-group.html

http://www.seeker.com/three-volcanoes-erupting-nasa-satellite-2031377713.html Three Volcanoes North of Antarctica Erupt at Once

http://earthobservatory.nasa.gov/IOTD/view.php?id=87995 Bristol Island Eruption May 1, 2016

http://www.antarcticglaciers.org/glaciers-and-climate/shrinking-ice-shelves/antarctic-peninsula-ice-shelves/

https://www.researchgate.net/publication/231840457_Glacial_trough_under_Larsen_Ice_Shelf_Antarctic_Peninsula

https://www.nsf.gov/news/news_summ.jsp?cntn_id=100385

http://www.ldeo.columbia.edu/research/blogs/operation-icebridge-scientists-map-thinning-ice-sheets-antarctica Lamont Doherty Involvement in Operation Ice Bridge Antarctica

https://www.researchgate.net/publication/228444593_Pattern_of_retreat_and_disintegration_of_Larsen_B_Ice_Shelf_Antarctic_Peninsula

http://www.academia.edu/15165441/Late_Holocene_tephrochronology_of_the_northern_Antarctic_Peninsula

http://www.academia.edu/5715929/Volcanic_tremors_at_Deception_Island_South_Shetland_Islands_Antarctica Deception Island South Shetland Islands Antarctica

http://www.sciencedirect.com/science/article/pii/S0377027314002492

http://www.smh.com.au/environment/fire-and-ice-melting-antarctic-poses-risk-of-volcanic-activity-study-shows-20140520-zrj06.html Mantle Under Larsen Shelf Rises and Activates Volcanoes.

http://earthsky.org/earth/new-glimpse-of-geology-under-antarcticas-ice Bentley Subglacial Trench in West Antarctica

https://www.nasa.gov/pdf/121653main_ScambosetalGRLPeninsulaAccel.pdf  Map Larsen Ice Shelf

http://www.nytimes.com/1982/05/24/us/2-volcanoes-found-in-antarctica.html

http://www.sciencedirect.com/science/article/pii/089598119090022S Deception Island and Bransfield Strait

http://visibleearth.nasa.gov/view.php?id=87995


Why Commodity Traders Are Fleeing the Business


The number of trading houses has dwindled, and the institutional, pure-play commodity hedge funds that remain are few.

By Shelley Goldberg

Published July 12, 2017, 3:00 AM CST; updated July 12, 2017, 11:32 AM CST

Bloomberg


Copper, the “beast” of commodities.

 Photographer: John Guillemin/Bloomberg

Profiting from commodity trading often requires a combination of market knowledge, luck, and most importantly, strong risk management. But the number of commodity trading houses has dwindled over the years, and the institutional, pure-play commodity hedge funds that remain — and actually make money — can be counted on two hands. Here is a list of some of the larger commodity blow-ups:

1990 – Phillip Brothers: The largest and most successful commodity trading house of its day caved, triggered by copper trading.

1993 – Metallgesellschaft AG: The New York branch of this large German conglomerate lost $1.5 billion on heating oil and gasoline derivatives.

1995 – Sumitomo Corp.: Yasuo Hamanaka was blamed for a $2.6 billion loss in a copper scandal.

2001-2002 – Enron Corp.: Dissolved after misreporting natural gas trades, precipitating the fall from grace of Arthur Andersen, a “Big 5” accounting firm.

2005 – Refco: Broker of commodities and futures contracts filed for bankruptcy after accounting fraud.

2006 – Amaranth Advisors: Energy hedge fund folded after losing over $6 billion on natural gas futures.

2011 – BlueGold Capital: One of the best-performing hedge funds in 2011, it closed its doors in 2012 after shrinking from $2 billion to $1.2 billion on crude oil bets.

2014 – Brevan Howard Asset Management: One of the largest hedge funds globally; closed its $630 million commodity fund after having run well over $1 billion of a $42 billion fund.

2015 – Phibro: The sister and energy trading arm of Phillip Brothers, ranked (in 1980) the 15th-largest U.S. company, dissolved.

2015 – Vermillion Asset Management: Private-equity firm Carlyle Group LP split with the founders of its Vermillion commodity hedge fund, which shrank from $2 billion to less than $50 million.

Amid the mayhem, banks held tightly to their commodity desks in the belief that there was money to be made in this dynamic sector. The trend continued until the implementation of the Volcker rule, part of the Dodd-Frank Act, which went into effect in April 2014 and disallowed short-term proprietary trading of securities, derivatives, commodity futures and options for banks’ own accounts. As a result, banks pared down their commodity desks, but maintained the business.

Last week, however, Bloomberg reported that Goldman Sachs was “reviewing the direction of the business” after a multi-year slump and yet another quarter of weak commodity prices.

What happened?

In the 1990s boom years, commodity bid-ask spreads were so wide you could drive a freight truck through them. Volatility came and went, but when it came it was with a vengeance, and traders made and lost fortunes. Commodity portfolios could be up or down about 20 percent within months, if not weeks. Although advanced trading technologies and greater access to information have played a role in the narrowing of spreads, there are other reasons specific to the commodities market driving the decision to exit. Here are the main culprits:

  1. Low volatility: Gold bounces between $1,200 and $1,300 an ounce, WTI crude straddles $45 to $50 per barrel, and corn is wedged between $3.25 and $4 a bushel. Volatility is what traders live and breathe by, and the good old days of 60 percent and 80 percent swings are now hard to come by. Greater efficiency in commodity production and consumption, better logistics, substitutes and advancements in recycling have reduced the concern about global shortages. Previously, commodity curves could swing from a steep contango (normal curve) to a steep backwardation (inverted curve) overnight, and with seasonality added to the mix, curves resembled spaghetti.
  2. Correlation: Commodities have long been considered a good portfolio diversifier given their non-correlated returns with traditional asset classes. Yet today there’s greater evidence of positive correlations between equities and crude oil and Treasuries and gold.
  3. Crowded trades: These are positions that attract a large number of investors, typically in the same direction. Large commodity funds are known to hold huge positions, even if these represent only a small percentage of their overall portfolio. And a decision to reverse the trade in unison can wipe out businesses. In efforts to exploit market inefficiencies, more sophisticated traders will structure complex derivatives with multiple legs (futures, options, swaps), requiring high-level expertise.
  4. Leverage: Margin requirements for commodities are much lower than for equities, meaning the potential for losses (and profits) is much greater in commodities; see the sketch after this list.
  5. Liquidity: Some commodities lack liquidity, particularly when traded further out along the curve, to the extent that there may be little to no volume in certain contracts. Futures exchanges will bootstrap contract values when the markets close, resulting in valuations that may not reflect physical markets and can grossly swing the valuations of marked-to-market portfolios. Additionally, investment managers are restricted from exceeding a percentage of a contract’s open interest, meaning large funds are unable to trade the more niche commodities such as tin or cotton.
  6. Regulation: The Commodity Futures Trading Commission and the Securities and Exchange Commission have struggled and competed for years over how to better regulate the commodities markets. The financial side is far more straightforward, but the physical side poses many insurmountable challenges. As such, the acts of “squeezing” markets through hoarding and other mechanisms still exist. While the word “manipulation” is verboten in the industry, it has reared its head over time. Even with heightened regulation, there’s still room for large players to maneuver prices — for example, Russians in platinum and palladium, cocoa via a London trader coined “Chocfinger,” and a handful of Houston traders with “inside” information on natural gas.
  7. Cartels: Price control is not only a fact in crude oil, with prices influenced by the Organization of Petroleum Exporting Countries but with other, more loosely defined cartels that perpetuate in markets such as diamonds and potash.
  8. It’s downright difficult: Why was copper termed “the beast” of commodities, a name later applied to natural gas? Because it’s seriously challenging to make money trading commodities. For one, their idiosyncratic characteristics can make price forecasting practically impossible. Weather events such as hurricanes and droughts, and their ramifications, are difficult to predict. Unanticipated government policy, such as currency devaluation and the implementation of tariffs and quotas, can cause huge commodity price swings. And labor movements, particularly strikes, can turn an industry on its head. Finally, unlike equity prices, which tend to trend up gradually like a hot air balloon but face steep declines (typically from negative news), commodities have the reverse effect — prices typically descend gradually, but surge when there’s a sudden supply shortage.
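
To make the leverage arithmetic in item 4 concrete, here is a minimal sketch; the contract size, price and 5 percent margin rate are illustrative assumptions, not figures from the article:

```python
# Illustrative only: how a low margin requirement amplifies futures returns.
contract_size = 1000      # barrels per contract (hypothetical WTI-style contract)
price = 50.0              # $/barrel, in the range the article mentions
margin_rate = 0.05        # 5% initial margin (assumed)

notional = contract_size * price        # $50,000 of price exposure
margin_posted = notional * margin_rate  # only $2,500 committed up front

for move in (0.05, -0.05):              # a 5% price move in either direction
    pnl = notional * move
    print(f"{move:+.0%} move -> P&L ${pnl:+,.0f} "
          f"= {pnl / margin_posted:+.0%} of posted margin")
# A +/-5% price move gains or loses 100% of the margin posted --
# the 20x amplification that a 5% margin requirement implies.
```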

What are the impacts? The number of participants in the sector will likely drop further, but largely from the fundamental side, as there’s still a good number of systematic commodity traders who aren’t concerned with supply and demand, only with the market’s technical aspects. This will keep volatility low and reduce liquidity in some of the smaller markets. But this is a structural trend that feasibly could reverse over time. The drop in the number of market makers will result in inefficient markets, more volatility and thus more opportunity. And the reversal could come about faster should President Donald Trump succeed in jettisoning Dodd-Frank regulations.

(Corrects attribution of Goldman’s review of commodity operations in third paragraph.)

Bloomberg Prophets: Professionals offering actionable insights on markets, the economy and monetary policy. Contributors may have a stake in the areas they write about.

To contact the author of this story:
Shelley Goldberg at shelleyrg3@gmail.com

To contact the editor responsible for this story:
Max Berley at mberley@bloomberg.net

 

Fear of radiation is more dangerous than radiation itself


By David Ropeik

He is an instructor in the environmental programme of the Harvard Extension School, and an author, consultant and public speaker who focuses on risk perception, communication and management. His latest book is How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts (2010). He lives near Boston, Massachusetts.

https://aeon.co/ideas/fear-of-radiation-is-more-dangerous-than-radiation-itself


Photo by Gregg Webb / IAEA

The fear of ionising (nuclear) radiation is deeply ingrained in the public psyche. For reasons partly historical and partly psychological, we simply assume that any exposure to ionising radiation is dangerous. The dose doesn’t matter. The nature of the radioactive material doesn’t matter. The route of exposure – dermal, inhalation, ingestion – doesn’t matter. Radiation = Danger = Fear. Period.

The truth, however, is that the health risk posed by ionising radiation is nowhere near as great as commonly assumed. Instead, our excessive fear of radiation – our radiophobia – does more harm to public health than ionising radiation itself. And we know all this from some of the most frightening events in modern world history: the atomic bombings of Japan, and the nuclear accidents at Chernobyl and Fukushima.

Much of what we understand about the actual biological danger of ionising radiation is based on the joint Japan-US research programme called the Life Span Study (LSS) of survivors of Hiroshima and Nagasaki, now underway for 70 years. Within 10 kilometres of the explosions, there were 86,600 survivors – known in Japan as the hibakusha – and they have been followed and compared with 20,000 non-exposed Japanese. Only 563 of these atomic-bomb survivors have died prematurely of cancer caused by radiation, an increased mortality of less than 1 per cent.

While thousands of the hibakusha received extremely high doses, many were exposed to moderate or lower doses, though still far higher than those received by victims of the Chernobyl or Fukushima nuclear accidents. At these moderate or lower doses, the LSS found that ionising radiation does not raise rates of any disease associated with radiation above normal rates in unexposed populations. In other words, we can’t be sure that these lower doses cause any harm at all, but if they do, they don’t cause much.

And regardless of dose, the LSS has found no evidence that nuclear radiation causes multi-generational genetic damage. None has been detected in the children of the hibakusha.

Based on these findings, the International Atomic Energy Agency estimates that the lifetime cancer death toll from the Chernobyl nuclear accident might be as high as 4,000, two-thirds of 1 per cent of the 600,000 Chernobyl victims who received doses high enough to be of concern. For Fukushima, which released much less radioactive material than Chernobyl, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) predicts that ‘No discernible increased incidence of radiation-related health effects are expected among exposed members of the public or their descendants.’

Both nuclear accidents have demonstrated that fear of radiation causes more harm to health than radiation itself. Worried about radiation, but ignoring (or perhaps just unaware of) what the LSS has learned, 154,000 people in the area around the Fukushima Daiichi nuclear plants were hastily evacuated. The Japan Times reported that the evacuation was so rushed that it killed 1,656 people, 90 per cent of whom were 65 or older. The earthquake and tsunami killed only 1,607 in that area.

The World Health Organization found that the Fukushima evacuation increased mortality among elderly people who were put in temporary housing. The dislocated population, with families and social connections torn apart and living in unfamiliar places and temporary housing, suffered more obesity, heart disease, diabetes, alcoholism, depression, anxiety, and post-traumatic stress disorder, compared with the general population of Japan. Hyperactivity and other problems have risen among children, as has obesity among kids in the Fukushima prefecture, since they aren’t allowed to exercise outdoors.

Though Chernobyl released far more radioactive material than Fukushima, fear caused much more health damage still. In 2006, UNSCEAR reported: ‘The mental health impact of Chernobyl is the largest public health problem caused by the accident to date … Rates of depression doubled. Post-traumatic stress disorder was widespread, anxiety and alcoholism and suicidal thinking increased dramatically. People in the affected areas report negative assessments of their health and wellbeing, coupled with … belief in a shorter life expectancy. Life expectancy of the evacuees dropped from 65 to 58 years. Anxiety over the health effects of radiation shows no signs of diminishing and may even be spreading.’

The natural environment around the Chernobyl and Fukushima Daiichi accidents adds evidence that ionising radiation is less biologically harmful than commonly believed. With people gone, those ecosystems are thriving compared with how things were before the accidents. Radiation ecologists (a field of study that blossomed in the wake of Chernobyl) report that radiation had practically no impact on the flora and fauna at all.

The risk from radiophobia goes far beyond the impacts in the immediate area around nuclear accidents. Despite the fact that radiation released from Fukushima produced no increase in radiation-associated diseases, fear of radiation led Japan and Germany to close their nuclear power plants. In both nations, the use of natural gas and coal increased, raising levels of particulate pollution and greenhouse gas emissions.

Neither country will meet its 2020 greenhouse gas emissions-reduction targets. Across Europe, fear of radiation has led Germany, France, Spain, Italy, Austria, Sweden and Switzerland to adopt policies that subsidise solar, wind and hydropower over nuclear as a means of reducing greenhouse gas emissions, despite the fact that most energy and climate-change experts say that intermittent renewable energy sources are insufficient to solve the problem. In the United States, 29 state governments subsidise wind and solar power, but only three offer incentives for nuclear, which produces far more clean power, far more reliably.

Fear of radiation has deep roots. It goes back to the use of atomic weapons, and our Cold War worry that they might be used again. Modern environmentalism was founded on fear of radioactive fallout from atmospheric testing of such weapons. A whole generation was raised on movies and literature and other art depicting nuclear radiation as the ultimate bogeyman of modern technology. Psychologically, research has found that we worry excessively about risks that we can’t detect with our own senses, risks associated with catastrophic harm or cancer, risks that are human-made rather than natural, and risks that evoke fearful memories, such as those evoked by the very mention of Chernobyl or Three Mile Island. Our fear of radiation is deep, but we should really be afraid of fear instead.

 

USA Canada Trade Infographic


Liberals vow greater Indigenous input, tougher environmental hurdles for resource projects


SHAWN MCCARTHY

OTTAWA — The Globe and Mail

Published Thursday, Jun. 29, 2017 4:36PM EDT

Last updated Thursday, Jun. 29, 2017 4:48PM EDT

 

The Liberal government is proposing new rules that would require resource companies to consult with Ottawa and indigenous communities on major projects well before the firms finalize their plans and apply for regulatory approval.

The companies would also be expected to provide greater opportunity for partnership with aboriginal Canadians, and seek indigenous peoples’ consent over whether developments that impact their traditional territory should proceed, though they would not have a veto.

The Liberals released a discussion paper Thursday that proposes sweeping changes to the federal regulatory review process for major projects, undoing many of the controversial changes put in place by the Conservatives just five years ago. The government plans to introduce legislation by the end of the year.

Its proposals call for additional environmental protection and increase public and indigenous involvement in decision-making in the review period, while insisting the new regime will “ensure good projects go ahead and resources get to market.”

The former Conservative government overhauled the assessment process in 2012 with the aim of accelerating decision-making, limiting the environmental considerations that would be taken into account, and preventing project opponents from bogging down hearings.

Echoing concerns of environmentalists and First Nations, the Liberals argued the Conservatives tilted the process in favor of pipelines and other resource projects. In its discussion paper, the government says the current process is not transparent and lacks scientific rigor, fails to protect waterways and fisheries, and does not allow for sufficient participation by the public and indigenous Canadians.

Despite those shortcomings, the Liberals decided when they took power in late 2015 to review pipeline projects already in the queue under the existing process, saying it would be unfair to make the companies wait for the reforms. Instead, the government adopted “interim measures” that included increased consultations and an assessment of the greenhouse-gas-emission impacts of pipeline expansion.

The oil industry’s proposed pipeline expansions have drawn the most opposition, and have prompted an often-heated national debate over the risks and benefits of resource development, pitting Alberta and its oil industry against vocal opponents in British Columbia and Quebec.

However, the proposed reforms – which are to some degree driven by that debate – would not impact the major oil pipelines. Ottawa approved two projects last November – including the hugely controversial expansion of the Trans Mountain line to Vancouver – and is currently reviewing TransCanada Corp.’s Energy East project.

Industry officials reacted cautiously to the proposals on Thursday, saying they needed to review the discussion paper to understand the full impact. The government proposes to keep the National Energy Board (NEB) as a regulator of ongoing operations, and maintain its office in Calgary, but the board would have a diminished role in the environmental assessment process.

The government proposes to create a single review agency that would replace the NEB in assessing pipeline and major energy projects, but the agency would work with the NEB to benefit from its technical expertise.

It aims to eliminate federal-provincial overlap by promoting a “one project, one assessment” approach, and would have legislated timelines for reviews, though ministers could approve exceptions.

The Canadian Energy Pipeline Association (CEPA) – which represents companies like TransCanada Corp – had proposed a two-step approval process, with an initial finding on whether a project was in the national interest and a more technical environmental assessment.

The Liberals’ discussion paper rejects that approach, but CEPA president Chris Bloomer said there are elements in the plan that could ensure political debates and decisions are made outside the environmental assessment process. “There’s a lot in the discussion paper that requires further definition,” Mr. Bloomer said. “We think we can work with it but this is not the end of the process.”

Mining projects are reviewed under the Canadian Environmental Assessment Act, which will also be updated. Canadian Mining Association president Pierre Gratton said he was pleased to see Ottawa rejected recommendations made this spring by a government-appointed advisory panel that industry feared would have greatly bogged down the review process.

However, one environmental lawyer said the government’s proposals “fall far short” of establishing a process that would ensure sustainable development for the benefit of communities. The measures “would not give Canada a leading-edge, world-class environmental review process,” said Anna Johnston, staff lawyer at West Coast Environmental Law. “It would still let short-term economic benefits that go to a few trump environmental harm.”


How news organizations unintentionally misinform the public on guns, nuclear power, climate change, etc.

This story could just as easily be about nuclear power, greenhouse gases, climate change, etc.

Eric


___________________________________

How news organizations, including this one, unintentionally misinformed the public on guns

June 27, 2017

Mike Wilson, Editor

The Dallas Morning News

https://www.dallasnews.com/opinion/commentary/2017/06/27/news-organizations-including-one-unintentionally-misinformed-public-guns

 

Steve Doud, a subscriber from Plano, emailed me to say he’d read something in the June 21 Dallas Morning News that couldn’t possibly be true.

An eight-paragraph Washington Post article on page 10A reported on a national study about kids and guns. The last sentence said 4.2 percent of American kids have witnessed a shooting in the past year.

“Really?” Doud wrote. “Does it really sound believable that one kid out of every 24 has witnessed a shooting in the last year? I think not, unless it was on TV, in a movie, or in a video game. In that case it would probably be more like 100 percent.”

His instincts were right. The statistic was not.

Here is the unfortunate story of how a couple of teams of researchers and a whole bunch of news organizations, including this one, unintentionally but thoroughly misinformed the public.

It all started in 2015, when University of New Hampshire sociology professor David Finkelhor and two colleagues published a study called “Prevalence of Childhood Exposure to Violence, Crime, and Abuse.” They gathered data by conducting phone interviews with parents and kids around the country.

The Finkelhor study included a table showing the percentage of kids “witnessing or having indirect exposure” to different kinds of violence in the past year. The figure under “exposure to shooting” was 4 percent.

Those words — exposure to shooting — are going to become a problem in just a minute.

Earlier this month, researchers from the CDC and the University of Texas published a nationwide study of gun violence in the journal Pediatrics. They reported that, on average, 7,100 children under 18 were shot each year from 2012 to 2014, and that about 1,300 a year died. No one has questioned those stats.

The CDC-UT researchers also quoted the “exposure to shooting” statistic from the Finkelhor study, changing the wording — and, for some reason, the stat — just slightly:

“Recent evidence from the National Survey of Children’s Exposure to Violence indicates that 4.2 percent of children aged 0 to 17 in the United States have witnessed a shooting in the past year.”

The Washington Post wrote a piece about the CDC-UT study. Why not? Fascinating stuff! The story included the line about all those kids witnessing shootings.

The Dallas Morning News picked up a version of the Washington Post story.

And Steve Doud sent me an email.

When I got it, I asked editorial writer Michael Lindenberger to do some research. He contacted Finkelhor, who explained the origin of the 4 percent “exposure to shooting” stat.

According to Finkelhor, the actual question the researchers asked was, “At any time in (your child’s/your) life, (was your child/were you) in any place in real life where (he/she/you) could see or hear people being shot, bombs going off, or street riots?”

So the question was about much more than just shootings. But you never would have known from looking at the table.

Finkelhor said he understood why “exposure to shooting” might have misled the CDC-UT researchers even though his team provided the underlying question in the appendices. Linda Dahlberg, a CDC violence prevention researcher and co-author of the study featured in The Post and this newspaper, said her team didn’t notice anything indicating the statistic covered other things.

Then again, the Finkelhor study didn’t say anything about kids “witnessing” shootings; that wording was added by the CDC-UT team. Dahlberg said she’ll ask Pediatrics about running a correction.

All of this matters because scientific studies — and the way journalists report on them — can affect public opinion and ultimately public policy. The idea that one in 25 kids witnessed a shooting in the past year was reported around the world, and some of the world probably believed it.

No matter where you stand on guns or any other issue, we ought to be making decisions based on good information.

Finkelhor’s team caused confusion by mislabeling a complicated stat. The CDC-UT researchers should have found the information suspect. The Washington Post should have asked more questions about that line from the CDC-UT study.

And we should have been as skeptical of the Washington Post report as Steve Doud was.


Fracking rarely causes earthquakes—except in Oklahoma: U of A research


June 26, 2017

Deborah Jaremko

http://www.jwnenergy.com/article/2017/6/fracking-rarely-causes-earthquakesexcept-oklahoma-u-research/

 

It has become accepted that a recent surge in seismic activity in Oklahoma is related to fracking and wastewater injection as a result of increased oil and gas production, but new research from the University of Alberta says this doesn’t mean that earthquakes follow fracks everywhere.

In fact, the team of researchers, led by U of A geophysicist Mirko Van der Baan, concluded that Oklahoma is the only one of the nine top hydrocarbon-producing regions in the US and Canada where the trend exists.

Before 2009, Oklahoma might have experienced one to two low-magnitude earthquakes per year, but since 2014 the state has experienced one to two low-magnitude earthquakes per day, according to a report last week from the US Energy Information Administration (EIA).

The EIA notes that most of these earthquakes are small, measuring in the magnitude-three to magnitude-four range on the moment magnitude scale: large enough to be felt by most people, but not often causing damage to structures.

Since 2014 there have been a few instances of higher magnitude earthquakes in Oklahoma (between magnitude 5 and 6) that have caused some damage, the EIA reports.

The U of A says the increase in seismic activity in Oklahoma has an 85 percent correlation to increased oil production, likely primarily due to saltwater disposal.

However, after studying the last three to five decades of data (depending on data availability), Van der Baan’s team found that Oklahoma is an anomaly.

Researchers examined data from Oklahoma, Ohio, Pennsylvania, Texas, West Virginia, Alberta, British Columbia and Saskatchewan, the top oil and gas producing regions in the US and Canada.

“The other areas do not display state/province-wide correlations between increased seismicity and production, despite 8- to 16-fold increases in production in some states,” reads a paper by Van der Baan and U of A postdoctoral fellow Frank Calixto that appeared in the scientific journal Geochemistry, Geophysics, Geosystems. However, the researchers acknowledged that in various cases seismicity has increased locally.

“It’s not as simple as saying ‘we do a hydraulic fracturing treatment, and therefore we are going to cause felt seismicity.’ It’s actually the opposite. Most of it is perfectly safe,” Van der Baan said in a statement released by the U of A.

“What we need to know first is where seismicity is changing as it relates to hydraulic fracturing or saltwater disposal. The next question is why is it changing in some areas and not in others,” Van der Baan said.

For example, the researchers said that while data shows that human-caused seismic activity is less likely in areas with lower existing seismic risk, the opposite is not necessarily true.

“If we can understand why seismicity changes, then we can start thinking about mitigation strategies.”


Visualizing Canada’s carbon flows

It’s the carbon, stupid: Visualizing Canada’s carbon flows

June 26, 2017, 1:16 p.m.

http://www.jwnenergy.com/article/2017/6/its-carbon-stupid-visualizing-canadas-carbon-flows/

Energy systems—the production and use of fuels and electricity—are under intense pressure to change for the good of the environment and the economy.

However, the flow of energy through these systems is not the problem.

Rather it is the flow of carbon, especially when those flows bring carbon dioxide (CO2) and methane (CH4) to the atmosphere where they become greenhouse gases (GHGs).

Six months ago, Canadian Energy Systems Analysis Research (CESAR) released Sankey diagrams for Canada showing how energy flows, from the sources we extract from nature to the demands of society for fuels and electricity.

Today, CESAR released a new set of Sankey visualizations—the first of their kind—showing how carbon flows through the same fuel and electricity systems.

These new Sankey diagrams come with a brand new and improved web portal where the user can toggle between the energy and the carbon Sankeys for a given province or year to explore 1,056 different Sankey diagrams.

The portal has a number of other new and useful features that users can learn about by checking out the brand new User’s Guide to Energy-Carbon Sankeys. Literally hours of fun for energy systems nerds!

The title of this post (originally published on the CESAR website) gives a nod to the 1992 US presidential election, when Bill Clinton strategist James Carville outlined one of the campaign’s three pillars for its workers as “the economy, stupid.”

CESAR tries to highlight a few of the things one can learn by focusing on carbon. For the record, in saying “stupid,” CESAR is pointing at itself: its focus, and even the CESAR name, has been about energy, but the core problem is carbon.

Comparing the Energy and Carbon Sankeys

Figure 1 provides a side-by-side comparison of energy (Panel A) and carbon (Panel B) flows associated with the production and use of fuels and electricity in Canada in 2013. There are many similarities and some important differences between the two visualizations.

In CESAR Sankey diagrams, the nodes on the left-hand side represent the energy (Figure 1A) or carbon (Figure 1B) content of all the energy resources produced in (i.e. Primary energy or carbon) or imported into Canada.

Figure 1: The flows of energy (A) and carbon (B) through the fuel and electricity systems of Canada in 2013.

The nodes in the middle portion of the Sankey represent the companies in the energy sectors. They convert the energy (and carbon, if present) into fuels and electricity that are then either exported or passed to a number of demand sectors.

These include transportation demands (personal and freight), buildings (residential, commercial and institutional), energy-using industries (cement, agriculture, chemicals, etc.), and non-energy uses (plastics and asphalt). Some recovered energy and carbon has little or no commercial value (e.g. petcoke), so it is stockpiled (mostly in northern Alberta) and allocated to a node called “stored energy” in these Sankey diagrams.

Since neither energy nor carbon can be created or destroyed (putting aside some nuclear reactions), the sum of all flows on the right side of each diagram equals the sum of all flows on the left side.

In the energy Sankey (Figure 1A), the energy that was consumed (or lost by venting) through domestic processes is depicted as orange and grey flow lines, either attributed to delivering the valuable end-use service (i.e. useful energy) or consumed as a “conversion loss” in the process of providing the fuel/electricity or end-use service. While all this energy still exists, it is highly dispersed (i.e. low exergy) and therefore of little economic value.

Exergy is the energy that is available to be used. A fuel that can create, on combustion, very high temperatures relative to the surroundings has high exergy since the differential temperature can do some valuable work.

In the carbon Sankey (Figure 1B), the grey flows are predominantly in the form of carbon dioxide (CO2), the end product of fossil fuel combustion. However, some fossil carbon may be in the form of methane (CH4), a potent GHG that can leak to the atmosphere.

The dark grey flows on the right side of Figure 1B represent the flows of CO2 from the combustion of bio-based fuels.

Although similarities exist between the two Sankeys shown in Figure 1, there are substantial differences that reflect variations among feedstocks in their carbon-to-energy content.

As shown in Figure 2, the carbon content of energy feedstocks varies widely, from zero for nuclear, hydro, wind and solar to about 24 or 25 kgC/GJ for coal or biomass.


Figure 2: The carbon content of energy feedstocks in Canada’s fuel and electricity systems. The black lines represent the range of values for each feedstock type, while the bar shows the typical value.

Oil and most refined petroleum products tend to have 19-20 kgC/GJ, while natural gas is about 14 kgC/GJ. Consequently, some of the energy flows do not appear on the carbon Sankeys (nuclear, hydro, wind) while others (esp. coal, biomass) become proportionately larger relative to the flow for natural gas.

It is important to note that this C only accounts for the C that is contained within the energy feedstock itself; it does not provide information on how that C may contribute to GHG emissions. Nor does it reflect the life cycle emissions associated with recovering, refining and transporting each energy resource.
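
As a worked illustration of what these carbon intensities imply for combustion emissions, the sketch below converts the rounded kgC/GJ values quoted above into kilograms of CO2 released per GJ of fuel burned. The 44/12 factor is the CO2-to-carbon molar mass ratio; the feedstock values are the typical figures read off Figure 2, so treat the outputs as approximate:

```python
# Convert carbon intensity (kgC per GJ of fuel) into kg of CO2 released per GJ
# on full combustion. One kg of carbon becomes 44/12 kg of CO2.
CO2_PER_C = 44.0 / 12.0

feedstocks_kgC_per_GJ = {
    "coal": 25.0,          # typical values quoted in the text / Figure 2
    "biomass": 24.0,
    "oil": 19.5,
    "natural gas": 14.0,
}

for fuel, c_intensity in feedstocks_kgC_per_GJ.items():
    print(f"{fuel:12s}: {c_intensity:4.1f} kgC/GJ -> "
          f"{c_intensity * CO2_PER_C:5.1f} kg CO2/GJ")
# Natural gas (~51 kg CO2/GJ) emits a little over half as much CO2 per unit
# of energy as coal (~92 kg CO2/GJ), which is why the coal flows widen so
# much in the carbon Sankey relative to the energy Sankey.
```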

Insights from CESAR’s Carbon Sankeys 

So what can we learn from a closer study of the Carbon Sankey for Canada (Figure 1B), insights that could not be gained from the Energy Sankey (Figure 1A)?

  1. Visualizing CO2 equivalents (CO2e).

To extract the maximum amount of energy from fossil fuels, they need to be combusted in the presence of oxygen to produce CO2. This reality is reflected in the light grey flows to ‘fossilCO2’ in the carbon Sankey (Figure 1B). Each tC as CO2 contributes one tC as CO2 equivalent (CO2e) to GHG emissions.

Figure 3: Details of the C Sankey showing the origin of CO2 equivalents.

However, fugitive emissions of fossil C (especially gaseous methane) do occur in Canada’s fuel and electricity systems, and those flows are tracked within the CanESS model. When displayed in the Sankey diagram (Figure 1B, or Figure 3 for a higher-resolution perspective), the flows are small compared to “fossilCO2.”

However, methane is a potent GHG, with a global warming potential of 9.1 tC as CO2e per tC as CH4 (see footnote 1). Therefore, the CH4 C flows are multiplied by 9.1 to calculate their contribution to GHG emissions measured as tC as CO2e.
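
The 9.1 factor can be reconstructed from footnote 1’s assumption of 25 tCO2e per tCH4 by putting both sides on a carbon basis (molar masses: C = 12, CH4 = 16, CO2 = 44). This is a sketch of the arithmetic, not CESAR’s published derivation:

\[
\underbrace{\tfrac{16}{12}}_{\text{t CH}_4\text{ per tC as CH}_4} \times \underbrace{25}_{\text{t CO}_2\text{e per t CH}_4} \times \underbrace{\tfrac{12}{44}}_{\text{tC per t CO}_2\text{e}} \;=\; \tfrac{400}{44} \;\approx\; 9.1 \;\text{tC as CO}_2\text{e per tC as CH}_4
\]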

It is important to note that the C Sankey in Figure 1B only shows the GHG emissions coming from fossil C feedstocks. Process-based CO2 emissions, such as those associated with cement or steel making, are not shown in these Sankey diagrams, nor are the CH4 or N2O emissions associated with Canada’s agricultural systems or landfills.

  2. Biogenic carbon is treated differently.

In Figure 1B (or Figure 3), the dark grey flows on the right side of the Sankey diagram show the magnitude of biomass flows to the atmosphere as a result of their use as fuels. However, in agreement with international convention, these emissions do not contribute to Canada GHG emissions (CO2e in Figure 1B or 3).

The convention assumes that the plants from which this bio-carbon is obtained had recently (i.e. within the last year for crop plants, or the last century for trees) removed it from the atmosphere. Under international agreement, countries are expected to report C stock changes in their managed biological systems to ensure they are being sustainably managed.

Canada does report these numbers (see footnote 2), but the reported carbon stock changes are not counted in national totals for GHG emissions, nor are they shown in Figure 1B, since those flows only include the biocarbon associated with fuel and electricity production and use.

  3. Following the carbon from source to demand.

Starting from the left side of the carbon Sankey for 2013 (Figure 1B) we can see that a total of 392 MtC entered the Canadian energy system that year. Canadian oil, gas and coal producers took 300 million tonnes of carbon out of the ground — 182 MtC in the form of crude oil, 78 MtC in the form of natural gas and 38 MtC in the form of coal.

In addition, 21 MtC in wood and agricultural biomass were used for energy, and an additional 72 MtC were imported in the form of petroleum, natural gas and coal.

The nodes on the right side of the diagram show where all that carbon went. Half of the total carbon – fully 194 MtC – was exported as oil (130 MtC), gas (41 MtC) or coal (23 MtC). This carbon would almost all have ended up in the atmosphere when the exported fuels were burned, but the responsibility for those emissions rests with the importing country (primarily the US). Another 178 MtC (including 21 MtC of “biogenic” carbon from biomass combustion) was emitted into the atmosphere in Canada, from power plant stacks, residential and commercial building chimneys, personal and commercial vehicle tailpipes, and industrial energy consumption, including the fossil fuel industry itself.

All totalled, 95% of the carbon taken out of the ground ends up in the atmosphere. The remainder is stored or sequestered, mostly in plastics and other non-consumable petrochemicals (16 MtC), with smaller quantities in oil sands tailings and petroleum coke stockpiles (7 MtC).
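
Because carbon is conserved, the flows quoted above must balance across the Sankey. A quick tally of the rounded MtC figures from this section (a sketch; the small mismatch is rounding in the published numbers):

```python
# Tally Canada's 2013 carbon flows (MtC) as quoted in the text above.
inputs = {
    "domestic crude oil": 182,
    "domestic natural gas": 78,
    "domestic coal": 38,
    "biomass": 21,
    "imports": 72,
}
outputs = {
    "exports (oil, gas, coal)": 194,
    "emitted in Canada": 178,
    "plastics & petrochemicals": 16,
    "tailings & petcoke stockpiles": 7,
}

print(sum(inputs.values()), "MtC in;", sum(outputs.values()), "MtC out")
# -> 391 MtC in; 395 MtC out: equal to within about 1%, consistent with the
# quoted 392 MtC total once the rounding of each published flow is allowed for.
```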

  4. Tracking fossil fuel CO2 emissions.

By focusing on the light grey flows in any of CESAR’s carbon Sankeys, users have a rapid and easy way to explore the origin of fossil carbon emissions. Hint: on the web portal, hover your cursor over the flow of interest and a pop-up will appear giving you the number of MtC flowing through that part of the fuel and electricity system.

For example, in the 2013 carbon Sankey for Canada, the fuel and electricity industries can be seen to emit 59 MtC as CO2, or 38% of the 156 Mt of fossil C that Canada emitted to the atmosphere as CO2 in 2013.

The energy industry’s emissions were split evenly between power plants (29 Mt) and the oil and gas extraction and processing industry (30 Mt).

The remaining 62% of fossil CO2 emissions in 2013 (97 MtC) were associated with the demand sectors, and more than half of that (34% of the national total, or 53 MtC) came from transportation, both personal and freight. Fossil carbon emissions from buildings accounted for 13%, or 20 MtC, and the remaining 16% (24 MtC) came from the energy-using industries of Canada.
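
As a consistency check, the sector shares quoted in this section can be recomputed from the MtC flows against the 156 MtC total (a sketch using the rounded figures above):

```python
# Recompute each sector's share of the 156 MtC of fossil carbon Canada
# emitted to the atmosphere as CO2 in 2013.
TOTAL_MTC = 156

sectors_mtc = {
    "fuel & electricity industries": 59,
    "transportation": 53,
    "buildings": 20,
    "energy-using industries": 24,
}

for sector, mtc in sectors_mtc.items():
    print(f"{sector:30s}: {mtc:3d} MtC = {mtc / TOTAL_MTC:.0%}")
# -> 38%, 34%, 13% and 15%; the text's 16% for energy-using industries is
# the same 24 MtC flow rounded the other way (24/156 = 15.4%).
```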

Overall, these visualizations reveal the rivers of carbon from their headwaters in the primary production of fossil fuels and biomass through to their final emission into the atmosphere from the smokestacks, chimneys and tailpipes of our industrial plants, buildings and vehicles.

In this post, CESAR provided a few directions for navigating these rivers and encourages readers to use the new portal to compare energy and carbon Sankeys, explore how they have changed with time, and how they compare among provinces (especially using the per capita option).

Footnotes 

  1. The 9.1 value assumes 25 tCO2e per tCH4; see User’s Guide for details.
  2. According to Environment and Climate Change Canada’s 2017 National Inventory Report, if we don’t count the C stock losses that occur on managed lands as a result of major forest fires, Canada’s managed biological systems annually accumulate about 9.3 MtC/yr. That is equivalent to about 34 Mt CO2/yr of net removal from the atmosphere.