What Makes a Great Cleantech Team?

tl;dr: Winning cleantech start-up teams are complete at founding, have strong pre-existing relationships, and include the inventor of the core technology.

This post was co-written with Josh Rogers, a former Venrock intern who’s now in National Grid’s Strategic Planning and Corporate Development group. A version of it also appeared at GigaOM.

A year ago I published a post called “What It Takes to Build A Cleantech Winner” based on an analysis of 18 cleantech success stories – venture-backed start-ups that executed big IPOs. The conclusion was that it’s not the technology (the best one rarely wins) and it’s not the market (if the market’s already big and attractive, you’re probably too late); instead, it’s the team that determines success.

That raises the question: What makes a great team?

To answer this question, you’d need to do two things. First, you’d need to analyze the personal histories of core team members at a slew of successful cleantech start-ups to figure out what they had in common. Second, you’d need to compare these people against their peers at unsuccessful companies in the same domains, to learn whether the winning teams differed from the losing ones.

Taking up the challenge was Josh Rogers – then a student at Tufts’ Fletcher School of Law and Diplomacy – who interned with me and conducted this research for his master’s thesis. Josh went about it like this:

  • First, he established a set of 27 winners – VC-backed cleantech start-ups that had either gone public on a major exchange since 2000 or filed an outstanding S-1 at the time of the analysis, and for which we could build fine-grained histories of the executive team. Examples: Tesla, Color Kinetics, Silver Spring.
  • Second, he assembled a set of matched-pair companies that were in the same industries as the winners and were founded at about the same time, but which unambiguously failed: They either went bankrupt or sold in a fire sale. We would have liked a counterpart for every winner, but because so few companies tank completely rather than limp forward, we were limited to ten matches. Examples: Solyndra, GreenFuel, WebGen.
  • Then he collected exhaustive data about the backgrounds of every key executive in each of these 37 businesses – 122 people total – including their age, education, country of origin, past work experience, and a host of other variables (39 altogether).

When Josh began his work, we joked that maybe he’d crack a hidden code: Perhaps I’d hear “Well, Matthew, at all the winning start-ups the CEOs were in their 40s and joined from large companies, while the CTOs hailed from the following five universities.” If so, I could simply ignore all the other business plans I get and focus on the ones that matched the template. Hey, a man can dream, right?

That didn’t happen.

In fact, when we looked at the winners, we found that nothing at all seemed to correlate with success. Founding team members’ ages were all over the map, from Genomatica CEO Chris Schilling (26 years old at company founding) to First Solar impresario Harold McMaster (an octogenarian at 83):

No variety of undergraduate education dominated (although Ivy League degree-holders should perhaps beware):

Among graduate degree-holders, no university stood out. In fact, across the 51 successful team members with advanced technical degrees, 39 universities were represented with only three appearing more than twice (MIT, U. Illinois, and CMU):

And so on. In fact, the only interesting correlation we found was that team members at winning companies tended to be industry outsiders: A mere 28% of them had direct work experience in their start-up’s industry. However, this attribute didn’t predict success because it was the same for our sample of failed companies too (where 26% of execs had prior direct work experience).
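As a sanity check, a 28%-versus-26% gap is nowhere near meaningful at these sample sizes. Here's a minimal two-proportion z-test sketch, using insider counts I back-solved from the percentages (25 of the 88 winning-team executives, 9 of the 34 at failed companies – my reconstruction, not figures from the study):

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# ~28% of 88 winning-team execs vs. ~26% of 34 failed-company execs
z, p = two_proportion_z_test(25, 88, 9, 34)
print(f"z = {z:.2f}, p = {p:.2f}")  # z well under 1: no detectable difference
```

The z-statistic is a small fraction of one standard error – exactly the "didn't predict success" result described above.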

At this point, we changed our approach. Perhaps we were asking the wrong questions? Instead of studying the individuals, Josh began looking at the relationships between them. It’s here that we found the trends hiding in plain sight:

Winning teams were complete at company founding. Of the 88 key executives profiled in the 27 successful companies, 74% were present at founding and another 9% joined during the first year. Only one out of six joined after that.

CEOs changed rarely. MBA orthodoxy holds that different stages of a company’s life require different leadership skills, so the CEO should be swapped out as companies develop. Our data didn’t support that. Eleven out of 27 successful companies had a CEO at founding who stayed through the IPO or S-1 filing; another eight were founded without a CEO, but recruited one (usually in the first year) who stayed for the long haul. Only eight winning companies changed CEOs, with only one clearly hostile transition (namely Elon Musk’s takeover at Tesla).

Successful founding teams had strong pre-existing relationships. At 74% of successful companies, at least two of the founding team members had strong relationships before the company was formed – either from working together in past lives (e.g. the four Color Kinetics co-founders, who shared lab space at CMU) or knowing one another well outside of work (e.g. Solazyme’s CEO and CTO, who became close friends as freshmen at Emory).

Winning start-ups included the accomplished core scientist who invented the technology as part of the founding team. Two-thirds of the winning companies exhibited this trait – think Frances Arnold at Gevo or Yet-Ming Chiang at A123Systems. I frequently see start-ups out of universities where the key technologist declines to join the founding team, choosing to remain in academia instead and consult with the company at most; this behavior doesn’t seem to correlate with success.

When Josh examined our matched-pair set of failed companies, they exhibited the opposite trends:

  • Six out of 10 failed companies replaced their CEOs along the way (versus three out of ten).
  • Only half had strong pre-existing relationships (versus three out of four).
  • Only three out of ten had the accomplished core scientist as part of the founding team (versus two out of three).
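With only ten failures against 27 winners, these gaps are suggestive rather than statistically conclusive. A quick Fisher exact test on the CEO-replacement comparison – 6 of 10 failures versus 8 of 27 winners; this is a sketch of mine, not part of Josh's analysis – shows how wide the error bars are:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test on the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def prob(x):  # hypergeometric probability of x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    # Sum over all tables at least as extreme as the observed one
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# CEO replaced: 6 of 10 failed companies vs. 8 of 27 winners
p = fisher_exact_two_sided(6, 4, 8, 19)
print(f"p = {p:.2f}")  # ~0.13: a big gap in rates, but not conclusive at n=37
```

A doubling of the CEO-replacement rate still doesn't clear conventional significance thresholds at these sample sizes – which is why the trends below are framed as patterns, not proofs.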

The conclusion: Great founders hail from every age, background, and school. What differentiates winning teams is their relationships. Successful cleantech companies tend to be bands of brothers and sisters – including the core inventor – that come together on their own, form a complete team, and have a leader fit for the long haul. In contrast, here’s the recipe for a failure: Find an interesting technology, assemble a team of competent people around it who didn’t previously know one another, and don’t worry about bringing the original inventor along.

I don’t want to present false absolutism here: There’s a great deal of subjectivity involved, the sample sizes are small, and the error bars are wide. But these trends whacked me over the head hard enough that they changed the way I look at energy and environmental start-ups. It’s the team – and relationships make the team.

Posted in Numbers | 5 Comments

Bright Future for the Marginal Megawatt

tl;dr: Life is about to get a lot better for demand response and energy efficiency companies.

One of the challenges of venture capital is that you invest in companies now based on what you know now, but the world may look very different by the time the company exits (i.e., when it’s bought or goes public).

When people talk about this, they usually cite the investment bets that look dumb in retrospect – where investors deployed capital at a time of heady expectations and woke up to cold reality later on. (Amidst dot-com hysteria, otherwise-smart people could envision their morning coffee delivered by Kozmo and paid for with Flooz; afterward, not so much.)

However, one can also make the opposite blunder: Deciding not to place bets in a downer environment, and then missing the opportunity to reap returns when things look up.

This is the milieu that demand response and energy efficiency start-ups face today.

Whether they are reducing electricity demand at peak times (Enernoc, Gridium), deploying energy-efficient retrofits (NextStep Living, Ameresco), or doing high-tech real-time stuff to balance the grid (Enbala, CUE), these companies all have one thing in common. They traffic in what I call marginal megawatts – the MW at the very top of the load curve that determine whether the peaker plant gets turned on or whether a new transmission line must be built. The demand response players do this by clipping peaks while the energy efficiency ones do it by dropping the baseline, but they deliver a similar net result. (You could add grid-scale energy storage to this grouping if you wanted to.)

Such companies are poorly valued today. Public stocks tell the tale – for example, as I write this, Enernoc, Ameresco, and PowerSecure are all trading at less than 1x sales and 12x EBITDA. (For those of you who don’t often think about valuations: That’s bad for a growth company.)

This situation is about to change.

What’s the value of a marginal megawatt? In my mind, it should be proportional to two things – 1) the cost to deliver that same MW from conventional generation resources, and 2) the amount of free capacity that’s available to do the generating. Both are hitting inflection points right now.

First, let’s take the marginal cost per MW. For this analysis, let’s consider the market for “frequency regulation,” a horrible misnomer of utility-speak that means “injecting or removing power on the grid over fine time scales to balance supply and demand.” (The name comes from the fact that imbalances cause the grid to deviate from its 60 Hz AC frequency.) Frequency regulation is traded in open marketplaces on a $/MW/hr basis, and its price is probably the purest measure of a marginal megawatt.

As it turns out, the price of frequency regulation correlates very closely with the price of natural gas, because gas plants are usually the market price-setters. See the chart below, which plots the clearing price for frequency regulation (in the United States’ biggest electricity market, the 13-state PJM region) against the price of natural gas (as measured at the Henry Hub distribution center). The r² on this is 0.80, meaning that natural gas accounts for 80% of the variance in frequency regulation price:
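For readers who want the mechanics: the r² here is just the squared Pearson correlation between the two monthly price series. A minimal sketch of the computation (the series below are placeholders for illustration, not the actual PJM and Henry Hub data):

```python
def r_squared(xs, ys):
    """Squared Pearson correlation between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov * cov / (var_x * var_y)

# Placeholder series: a noisy linear relationship, not real market data
gas = [2.5, 3.1, 4.0, 4.8, 5.5, 6.2, 7.1]          # $/MMBtu
freq_reg = [14.0, 18.0, 21.0, 27.0, 29.0, 35.0, 37.0]  # $/MW/hr
print(f"r^2 = {r_squared(gas, freq_reg):.2f}")
```

An r² of 0.80 on real data means that knowing the gas price alone gets you four-fifths of the way to predicting the frequency regulation price.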

Natural gas prices started plummeting in 2008 due to the hydrofracking revolution and reached a 12-year low of $1.82/MMBtu this past April. As that price was below most producers’ breakeven levels, many folks speculated that drilling only continued because the exploration companies would lose their land leases if they didn’t keep making holes. Since then, new drilling in gas plays has cratered and the price has started climbing back up – it’s at $3.15 as of this writing, and the futures market has it north of $4 by the end of next year.

As the price of natural gas rises, so will the value of marginal megawatts. And there’s reason to believe that the price will increase sharply beyond 2013 if U.S. natural gas starts getting used in new ways – like being exported. Export applications currently filed at the DOE would ship out 16 billion cubic feet per day, which is two-thirds of current U.S. shale gas production!

So higher gas price = more valuable marginal megawatts. Now let’s look at generating capacity.

As goes GDP, so goes electricity demand. When U.S. GDP peaked in 2007, so did our electricity consumption. And when the economy tanked, electricity consumption fell. 2012 should be the first year that these indicators exceed their 2007 levels.

When there’s idle generating capacity around, the companies that own it get hammered. Consider independent power producers, the companies that operate conventional power plants. Their share prices closely track total electricity generation, which in turn tracks GDP – all of which dropped sharply after 2007:

So do I need to write this next paragraph? Only now is electricity demand getting back to its 2007 peak. Plants that were under construction five years ago have doubtless since been completed and sit underused, so excess capacity will likely persist for a couple more years. But, inexorably, that capacity will get mopped up as GDP rises and electricity demand grows with it, and sooner or later we’ll find ourselves bumping into a new ceiling. Just as predictably, the value of companies that resolve this supply/demand imbalance – those that deliver marginal megawatts – will jump. Note that when Enernoc went public right before the 2007 electricity demand peak, it did so at 20x the previous year’s revenues. It’s now trading at 0.6x. I’ll bet that looks really different in, say, 2016.

The kicker: Demand response and energy efficiency companies will slaughter conventional generators on cost. A new fossil generator costs $1 million per MW in capex, plus or minus, and requires fuel and transmission on top of that. Setting a big user of electricity up to curtail its demand by 1 MW costs maybe $50k – and that’s it. As we climb to a new electricity peak, generators will lose the battle for the marginal megawatt.

So whether your start-up is trimming peaks, lowering baselines, or synchronizing supply and demand, take heart. It’s been a long, hard five years. But a brighter day is just around the corner.

Posted in Demand response, Energy efficiency | 9 Comments

“Financing Your Start-up 101” in 45 Minutes

For this year’s ARPA-E Summit I was asked to give a talk about different ways to finance an energy start-up. The challenge as it was given to me was “cover all sources of financing – VC, angels, grants, debt, everything – in 45 minutes.” The ARPA-E folks have now posted the presentation publicly, embedded below.

If you’re looking for a high-speed primer on start-up financing, feel free to watch the whole thing. If you’d prefer to laugh, check out:

  • Live-playing NES Zelda as a start-up CEO analogy at 3:55
  • The investor reward system in one image at 12:30
  • A visual on venture capital demographics at 16:50

Posted in Unsolicited advice, Venture capital | Leave a comment

Why I Suspend Disbelief

tl;dr: Arrogant humans think we have the natural world all figured out. We don’t.

I spend a small but meaningful amount of my time at Venrock intentionally looking at crazy stuff. If you are developing a cold fusion generator or a zero point energy harvester and you have spoken to a venture capitalist, there’s a high probability that it’s me.

These topics account for, at most, a few percentage points of my investment scouting activity. But they’re enduring percentage points. There’s a lot of this kind of work out there, and I recognize that 99%+ of it falls somewhere on the spectrum between experimental error and deliberate fraud – so I narrow the funnel very quickly, much more so than in other domains. With that said, I endeavor to treat innovators with integrity and respect throughout, and on those exceedingly rare occasions where extraordinary claims hold up under initial scrutiny, I dig in with the same diligence I’d devote to the most-credentialed academic. (Of course, sometimes it’s a credentialed academic who brings the crazy idea.)

There are a few fellow travelers in the venture capital/angel financier community who share these investment interests and devote resources to them. Most, however, view these possibilities with derision – or simply feel they’re so improbable that every last second of one’s time is better spent elsewhere.

Let me give you an example of why I choose to suspend disbelief.

I got my first dose of middle-school biology in the mid-80s. And the living world as I learned it was pretty simple: DNA makes RNA, RNA makes proteins, and proteins do stuff. Information flows only in one direction, so the idea that you could pass on a characteristic that you acquired during your life was silly talk. We’d already figured out the handful of letters in the genetic code (easy!) and the sequences that corresponded to each amino acid (no prob!), so the only thing left was to decode the genome and the proteome, and then match the DNA up with the proteins.

Congratulations, you’ve solved life! Stanley Cohen and Herbert Boyer’s creation of the first recombinant organism in 1973 seemed to drive the point home – if we could insert foreign DNA into a living being to make it do what we wanted, certainly we had everything figured out? There were a few little things left to clear up – it wasn’t obvious why DNA was so often chemically modified, or why it was wrapped around these things we called histones, or why so much of it appeared to be non-coding junk – but surely those were minor points.

As we now know, that view of the world wasn’t wrong per se. It was just radically oversimplified.

The holes in the story began appearing almost immediately after Cohen and Boyer’s landmark achievement. In 1975 Robin Holliday and John Pugh (and independently, Arthur Riggs) proposed that the methyl groups regularly seen hanging off cytosine and adenine weren’t, as previously thought, errors in DNA’s signal: They formed a vital mechanism by which cells ramped expression of genes up and down. Shortly thereafter Michael Grunstein and his collaborators demonstrated that the histone proteins around which DNA winds were not simply passive spools, but that histones regulated gene activation depending on how they were chemically altered. In 1999, David Baulcombe showed that short strands of RNA could silence the effect of otherwise-activated genes – information flowed backward; the product of genetic expression could affect the expression itself! Finally, in the last decade, work by researchers such as Larry Feig and David Sweatt has controversially suggested that a mother’s life experiences can endow her developing fetus with features that weren’t in the fetus’s DNA at conception.

These exceptions to the rule have piled so high that we have a name for them: Epigenetics. And a whole slew of start-up companies are aiming to profit from what was heterodoxy 30 years ago.

I have a sneaking suspicion that physics today is something like biology in the 1970s.

Once you split atoms with such destructive force as to kill tens of thousands of people, it’s pretty easy to convince yourself that you’ve got it all figured out. And, as I see it, that’s what the academy did post-World War II after the nuclear genie left the bottle. Sure, there were some minor details to clear up – like the particulars of the units of matter and force that shape atomic interactions, and how to harmonize the way things work at large scales with how they work at small ones – but for the most part, we had it nailed.

Half a century onward, our list of known and suspected subatomic particles exceeds 200, and it continues to grow. We can’t precisely predict the size, structure, or properties of anything more complex than hydrogen. We’re no closer to integrating quantum mechanics with general relativity than we were when I was a child. (Flame shield up: I realize there will be healthy disagreement on these points).

Perhaps these anomalies aren’t anomalies at all. Maybe they are evidence that we don’t, in fact, have everything figured out.

Improbable, yes. Impossible, no. So with a wink to Jean-Baptiste Lamarck, who is doubtless shaking his fist from the grave – mocked for decades for suggesting that acquired characteristics could be inherited, and now facing an ounce of vindication through epigenetics – I suspend disbelief. I trust that there are improbable breakthroughs in the physical sciences yet to be made: breakthroughs which will transform how energy is produced and used. I’m with Bill Gates – we need more crazy energy entrepreneurs!

Takeaway: Whether you are a university scientist or a garage inventor, if you’re working on a way-out-there energy idea and you have data that shows something extraordinary, call me. Casimir effect, solar antennae, low-energy nuclear reactions, electricity crops: The door is open! I won’t (and can’t) promise you time or engagement a priori; if I did, I couldn’t do the other 98% of my job. But I do promise you a hearing – and respect.

Posted in Leprechauns | 3 Comments

How I Missed My Window to Short Natural Gas

tl;dr: Saw it coming. Didn’t act. D’oh.

I did a podcast recently about water treatment in oil and gas for Platt’s, the veteran trade publisher in the sector. We focused specifically on flowback water and produced water from shale sites. You can listen to it here:


I’ve spent a lot of time hunting for new technologies that address shale oil and gas, water treatment included. I think it’s possible to build large, independent technology companies in this domain – most likely with services business models – and we’re eager at Venrock to deploy some capital into the sector. But putting my professional life aside, I missed my opportunity to make a personal buck here three years ago.

I first realized that something was up in shale plays back in mid-2008 (prior to joining Venrock, when I was at Lux Research). I’d been tracking two numbers – on one hand, the Baker Hughes rig count for natural gas rigs (which tells us how much drilling for natural gas is going on in the U.S.), and on the other hand U.S. dry natural gas production (which tells us how much gas is coming out of the ground). Based on that data, I started presenting the following two charts (originally sourced from The Oil Drum, which I read daily and you should too):

On the left, you see the number of natural gas drilling rigs in operation. The x-axis is months, so every year is a line. And every year the number of rigs goes up – until late 2007, when it’s flat.

On the right, you see the amount of natural gas extracted. Same deal; every year is a line. And every year gas extraction is roughly flat – until late 2007, when it kicks upward.

Huh? Less drilling, but more gas?

This, my friends, is the impact of a technology disruption – specifically, the combination of horizontal drilling and hydraulic fracturing applied to shale formations.

I vividly remember presenting this data in the boardroom of a prominent east coast venture capital firm in early 2009. The partners there knew a lot more than I did about the oil and gas industry, so I approached the talk humbly. If this increase in supply persisted, I said, think of the possibilities! The 10-year average price of natural gas was about $7/mmbtu, but the price going forward could be more like $4.50:

And if that occurred, it would have a big impact on the world:

  • Coal generation would lose its cost advantage. We’d become a nation of baseload gas.
  • Natural gas as a transportation fuel – and a feedstock for chemicals – would become increasingly attractive. At the margin, cheap gas could swing siting decisions for manufacturing facilities toward the U.S.
  • Renewables would get kneecapped on an unsubsidized basis. Fuel costs account for about 70% of the levelized cost of electricity (LCoE) from natural gas plants, so a 33% decrease in gas prices would mean a ~23% decrease in LCoE – further raising the bar that solar and wind would have to clear to compete with gas-fired generation.

It was at this point that my hosts started shaking their heads. As much as they wanted to believe my thesis, they said, they’d heard it all before. Every time the price of natural gas dropped into the $4-5/mmbtu range, they patiently explained, everybody thought it would stay there forever. But it never did – it always went back up, dashing hopes, dreams, and business plans.

I listened carefully. And as I did, I mentally shelved my plan to short UNG, the exchange-traded fund that tracks the price of natural gas – because my expectation of long-term gas pricing in the $4-5/mmbtu range clearly wouldn’t come to pass.

That was three years ago. As of this morning, natural gas was at $2.11/mmbtu. The futures curve currently has the price under $4.50 through 2015, and it even pegs the 2020 price at a mere $5.33. (I keep this market data permanently open in a browser tab.)

Live and learn, right?

Posted in Oil and gas | 1 Comment