Le grand hack!

It happened! The Subsurface Hackathon drew to a magnificent close on Sunday, in an intoxicating cloud of code, creativity, coffee, and collaboration. It will take some beating.

Nine months in gestation, the hackathon was on a scale we have not attempted before. Total E&P joined us as co-organizers and made this new reach possible. They also let us use their amazing Booster — a sort of intrapreneurship centre — which was perfect for the event. Their team (thanks especially to Marine and Caroline!) did an amazing job of hosting, as well as providing several professionals from their subsurface software (thanks Jonathan and Yannick!) and data science teams (thanks Victor and David!). Arnaud Rodde and Frédéric Broust, who had to do some organization hacking of their own to make something as weird as a hackathon happen, should be proud of their teams.

Instead of trying to describe the indescribable, here are some photos:

BY THE NUMBERS

16 hours of code
13 teams
62 hackers
44 students
4 robots
568 croissants
0 lost-time incidents

I won't say much about the projects for now. The diversity was high — there were projects in thin section photography, 3D geological modeling, document processing, well log prediction, seismic modeling and inversion, and fault detection. All of the projects included some kind of machine learning, and again there was diversity there, including several deep learning applications. Neural networks are back!

Feel the buzz!

If you are curious, Gram and I recorded a quick podcast and interviewed a few of the teams:

It's going to take a few days to decompress and come down from the high. In a couple of weeks I'll tell you more about the projects themselves, and we'll edit the photos and post the best ones to Flickr (and in the meantime there are a few more pics there already). 

Thank you to the sponsors!

Last thing: we couldn't have done any of this without the support of Dell EMC. David Holmes has been a rock for the hackathon project over the last couple of years, and we appreciate his love of community and code! Thank you too to Duncan and Jane at Teradata, Francois at NVIDIA, Peter and Jon at Amazon AWS, and Gram at Sandstone for all your support. Dear reader: please support these organizations!


Looking forward to EAGE

Evan, Diego and I are flying to Paris today for the EAGE Conference and Exhibition. It's exciting. We're excited. 

But the excitement starts before the conference. The Subsurface Hackathon is this weekend!

My diary

Even the hackathon excitement starts before the weekend, because tomorrow, Friday, we're running the hacker's bootcamp — a sort of short course appetizer for the hackathon. We have about 25 geoscientists coming to the Booster TOTAL (an event space at TOTAL's La Défense offices) to get some hands-on practice with Python and the latest in machine learning tools. It's especially exciting because we'll also have engineers from NVIDIA on hand to help with the coaching. The idea is to help people hit the ground running when the hackathon starts on Saturday.

After that, on Saturday and Sunday, it's the hackathon itself. We have no fewer than 60 geoscientists and engineers registered for this breakout event. They're coming to the Booster to work on a wide array of machine learning ideas for the subsurface. It's going to be epic. You can read all about what happens here next week, I promise. 

Then on Monday it's the Data Science for Geoscience workshop, at which I'm giving a keynote. Since I'm far from being an expert, I'm using it as a chance to get people jazzed about helping make the coming AI revolution in geoscience a positive experience. I'm really looking forward to it.

The conference itself starts on Tuesday. In the afternoon I'm co-chairing a session on machine learning (have you spotted the theme yet?) in seismic interpretation, along with Victor Aare of Schlumberger. It will be awesome to see what kind of progress our community is making in this field — it's fun to imagine what seismic interpretation might be like in a few years. There are so many fascinating problems to work on! Here are the talks in that session:

On Wednesday we'll be taking in some more talks and posters, then in the afternoon I'm reprising my keynote talk at IFPEN, a subsurface research institute in the Bois de Boulogne. I've never been there before, although I have met a few IFP scientists. I'm looking forward to it very much. 

It all ends for us on Thursday. Evan and Diego fly home and I'm off to Cambridge (the old one in the fens, not the one in Massachusetts) for a few days with family (and bookshops). Until then, expect much blogging!


Going to EAGE?

If you're reading this and would like to meet up with us at Agile or some of the Software Underground crowd — the friendliest bunch of coding geoscientists you could hope for — let's plan to meet at the end of the workshop, at the workshop location. Look for the Software Underground shirts.

What should national data repositories do?

Right now there's a conference happening in Stavanger, Norway: National Data Repository 2017. My friend David Holmes of Dell EMC, a long-time supporter of Agile's recent hackathons and a general geocomputing infrastructure superhero, is there. He's giving a talk, I think, and chairing at least one session. He asked a question today on Software Underground:

If anyone has any thoughts or ideas as to what the regulators should be doing differently now is a good time to speak up :)

My response

For me it's about raising their aspirations. Collectively, they are sitting on one of the most valuable — or invaluable — datasets in the world, comparable to Hubble, or the LHC. Better yet, the data are (in most cases) already open, and the repositories actually want to share them. And the community (us) is better tooled than ever, and perhaps also more motivated, to get cracking. So the possibility is there to see a revolution in subsurface science and exploration (in the broadest sense of the word), and my challenge to them is:

Can they now create the conditions for this revolution in earth science?

Some things I think they can do right now:

  • Properly fund the development of an open data platform. I'll expand on this topic below.
  • Don't get too twisted off on formats (go primitive), platforms (pick one), licenses (go generic), and other busy work that committees love to fret over. Articulate some principles (e.g. public first, open source, small footprint, no lock-in, componentize, no single provider, let-users-choose, or what have you), and stay agile. 
  • Lobby NOCs and IOCs hard to embrace integrated and high-quality open data as an advantage that society, as well as industry, can share in. It's an important piece in the challenge we face to modernize the industry. Not so that it can survive for survival's sake, but so that it can serve society for as long as it's needed. 
  • Get involved in the community: open up their processes and collaborate a lot more with the technical societies — like show up and talk about their programs. (How did I not hear about the CDA's unstructured data challenge — a subject I'm very much into — till it was over? How many other potential participants just didn't know about it?)

An open data platform

The key piece here is the open data platform. Here are the features I'd like to see in such a platform:

  • Optimized for users, not the data provider, hosting provider, or system administrator.
  • Clear rights: well-known, documented, obvious, clearly expressed open licenses for re-use.
  • Meaningful levels of access that are free of charge for most users and most use cases.
  • Access for humans (a nice mappy web interface) with no awkward or slow registration processes.
  • Access for machines (a nice API, perhaps even a couple of libraries expressing it); see the sketch after this list.
  • Tools for query, discovery, and retrieval; ideally with user feedback paths ('more like this, less like that').
  • Ways to report, or even fix, problems in the data. This relieves you of "the data's not ready" procrastination.
  • Good documentation of all of this, ideally in a wiki or something that people can improve.
  • Support for a community of users and developers that want to do things with the data.
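
To make the 'access for machines' point concrete, here's a minimal sketch of the kind of interaction I have in mind. Everything in it is hypothetical: the host, the endpoints, and the parameters are made up, and the point is only that a few lines of Python, using nothing fancier than requests, should be enough to discover and fetch data.

    # Minimal sketch of machine-friendly access to an open data platform.
    # The host, endpoints, and parameters are made up; only the pattern matters.
    import requests

    API = "https://data.example.org/api/v1"   # hypothetical open data API

    # Discover surveys intersecting an area of interest.
    resp = requests.get(f"{API}/surveys", params={
        "bbox": "57.0,1.5,58.0,2.5",          # minlat, minlon, maxlat, maxlon
        "type": "2D seismic",
        "license": "open",
    })
    resp.raise_for_status()
    surveys = resp.json()["results"]          # assume JSON with a 'results' list

    # Fetch the first matching dataset to a local file.
    survey_id = surveys[0]["id"]
    with requests.get(f"{API}/surveys/{survey_id}/download", stream=True) as r:
        r.raise_for_status()
        with open(f"{survey_id}.sgy", "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)

If a repository offered something like that, with clear licensing and no login hoops, most of the other items on this list would follow naturally.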

Building this platform is not trivial. There's the massive file storage, the database back end, the web front end, the licensing, and so on. Then there's the community of developers and users to engage and support. It will take years, and never be finished. It sounds hard... but people are doing it. Prototypes for seismic data exist already, and there are countless models in other verticals (just check out the Registry of Research Data Repositories, or look at the list on PLOS). 

The contract to build data infrastructure is often awarded to the likes of Schlumberger, Halliburton, or CGG. In theory, these companies have the engineering depth to pull it off (though this too is debatable, especially in today's web-first, native-never world). But they completely lack the culture required: there's no corporate understanding of what 'open' means. So the model is broken in subtle but fatal ways and the whole experiment fails. 

I'm excited to hear what comes out of this conference. If you're there, please tell!

Conversation not discussion

It's a while since we had a 'conferences are broken' rant on the Agile blog!

Five or six of the sessions at this year's conference were... different. I already mentioned the Value In Geophysics session, which was a cross between a regular series of talks and a panel discussion. I went to another, The modern geoscientist, which was structured the same way. A third one, Fundamentals of Professional Career Branding, was a mini workshop with Jackie Rafter of Higher Landing. There were at least a couple of other such sessions.

It's awesome to see the societies experimenting with something outside the usual plethora of talks and posters. I hope they were well received, because we need more of this in our discipline, now more than ever. If you went to one and enjoyed it, please let the organizers know.

But... the sessions — especially the panel discussion sessions — lacked something. One thing really:

The sessions we saw were nowhere near participatory enough. Not even close.

The 'expert-panel-enlightens-audience' pattern is slowing us down, perpetuating broken models of leadership and hierarchy. There isn't an expert in Calgary or the universe that knows how or when this downturn is going to end, or what we need to do to improve our chances of continuing to contribute to society and make a living in our profession. So please, stop throwing people up on a stage, making them give 5 minute presentations, and occasionally asking for questions from the audience. That is nothing like a discussion. Tune in to a political debate show to see what those look like: rapid-fire, punchy, controversial. In short: interesting. And, from an organizer's point of view, really hard, which is why we should stop.

Real conversation

What I think is really needed right now, more than half-baked expert discussion, is conversation. Conversations happen between small groups of people, all sitting on the same plane, around a table, with napkins to draw on and time to draw on them. They connect people and spread awesome ideas like viruses. What's more, great conversations have outcomes.

I don't want to claim that Agile has all this figured out, but we have demonstrated various ways of connecting scientists in meaningful ways and with lasting outcomes. We've also written extensively on the subject (e.g. here and here and here and here). Other verticals have conducted many more experiments, and documented the results. Humans know how to do this.

So there's no excuse — it's not too dramatic to call the current 'situation' a crisis in our profession in Canada — we need to get beyond tinkering at the edges and half-hearted attempts at change. Our societies need to pay attention to what's needed, and get on with making it happen.

Still more ranting...

We talked about this topic at some length on the Undersampled Radio podcast yesterday. Here's the uncut video version:

Unweaving the rainbow

Last week at the Canada GeoConvention in Calgary I gave a slightly silly talk on colourmaps with Matteo Niccoli. It was the longest, funnest, and least fruitful piece of research I think I've ever embarked upon. And that's saying something.

Freeing data from figures

It all started at the Unsession we ran at the GeoConvention in 2013. We asked a roomful of geoscientists, 'What are the biggest unsolved problems in petroleum geoscience?'. The list we generated was topped by Free the data, and that one topic alone has inspired several projects, including this one. 

Our goal: recover digital data from any pseudocoloured scientific image, without prior knowledge of the colourmap.

I subsequently proffered this challenge at the 2015 Geophysics Hackathon in New Orleans, and a team from Colorado School of Mines took it on. Their first step was to plot a pseudocoloured image in (red, green, blue) space, which reveals the colourmap and brings you tantalizingly close to retrieving the data. Or so it seems...
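
Here's roughly what that first step looks like in Python, as a minimal sketch. The file name is made up, and a real attempt has to contend with anti-aliasing, annotation pixels, compression artifacts, and so on.

    # Sketch: look at a pseudocoloured image's pixels in RGB space.
    # 'pseudocoloured_section.png' is a made-up file name; any RGB image will do.
    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the '3d' projection

    img = plt.imread("pseudocoloured_section.png")
    if img.dtype == np.uint8:                 # JPEGs come back as 0-255 integers
        img = img / 255.0
    pixels = img[..., :3].reshape(-1, 3)      # drop alpha; one row per pixel (r, g, b)

    # Subsample so the scatter plot stays responsive.
    idx = np.random.choice(len(pixels), size=min(5000, len(pixels)), replace=False)
    sample = pixels[idx]

    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.scatter(sample[:, 0], sample[:, 1], sample[:, 2], c=sample, s=2)
    ax.set_xlabel("red")
    ax.set_ylabel("green")
    ax.set_zlabel("blue")
    plt.show()

If the image really is pseudocoloured, the points trace out a one-dimensional curve in RGB space (the colourmap), and recovering the data amounts to parameterizing that curve and assigning each pixel a position along it.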

Here's our talk:

The new reality

In Calgary last week I heard the phrase "when the industry recovers" several times. Dean Potter even went so far as to say:

Don’t believe anyone who says ‘It’s different this time’. It isn’t.

He knows what he's talking about — the guy sold his company to Vermilion in 2014 for $427 million.

But I think he's dead wrong.

What's different this time?

A complete, or at least non-glacially-slow, recovery seems profoundly unlikely to me. We might possibly be through the 'everything burns to the ground' phase, but the frenzy of mergers and takeovers has barely started. That will take at least a couple of years. If and when any stability returns to operations, it seems highly probable that it will have these features:

  1. It will be focused on shale. (Look at the Permian Basin today.)
  2. It will need fewer geoscientists. (There are fewer geological risks.)
  3. It will be driven by data. (We have barely started on this.)
  4. It will end in another crash. (Hungry animals bolt their food.)

If you're a geoscientist and have never worked fine-grained plays, I think the opportunities in front of you are going to be different from the ones you're used to. And by 'different', I mean 'scarcer'.

Where else can you look?

It may be time to think about a pivot, if you haven't already. (Pivot is lean-startup jargon for 'plan B' (or C). And I don't think it's a bad idea to think of yourself, or any business, as a start-up. Indeed, if you don't, you're headed for obsolescence.)

What would you pivot to? What's your plan B? If you think of petroleum geoscience as having a position in a matrix, think about our neighbours in that matrix. Industries are vertical; disciplines are horizontal.

Opportunities in neighbouring cells are probably within relatively easy reach. Think about:

  • Near surface: archaeology, UXO detection, engineering geophysics.
  • Geomatics, remote sensing, and geospatial analysis. Perhaps in mining or geothermal energy.
  • Stepping out of industry into education or government. People with applied knowledge have a lot to offer.
  • Making contacts in a new industry like finance or medicine. Tip: go to a conference. Talk to everyone you can find.

Think about your technical skills more broadly

I don't know where those new opportunities will come from, but I think it only takes a small shift in perspective to spot them. Think of your purpose, not your tasks. For example:

  • Many geophysicists are great quantitative scientists. If you know linear algebra or geostatistics and write code too, you have much sought-after skills in any industry.
  • Many geologists are great at spatial analysis. If you can wield geodatabases and GIS software like a wizard, you are a valuable asset to any industry.
  • Many engineers are great at project management and analytics. If you have broken out of Excel and can drive Spotfire or Tableau, you are gold in any industry.

If you forgot to keep your skills up to date and are locked into clicking buttons in Petrel, or making PowerPoint maps of the Cardium, or fiddling with charts in Excel, I'm not sure what to tell you. Everyone has those skills. You're yesterday's geoscientist and you don't have a second to lose. 

GeoConvention highlights

We were in Calgary last week at the Canada GeoConvention 2017. The quality of the talks seemed more variable than usual but, as usual, there were some gems in there too. Here are our highlights from the technical talks...

Filling in gaps

Mauricio Sacchi (University of Alberta) outlined a new reconstruction method for vector field data. In other words, filling in gaps in multicomponent seismic records. I've got a soft spot for Mauricio's relaxed speaking style and the simplicity with which he presents linear algebra, but there are two other reasons that make this talk worthy of a shout-out:

  1. He didn't just show equations in his talk, he used pseudocode to show the algorithm.
  2. He linked to his lab's seismic processing toolkit, SeismicJulia, on GitHub.

I am sure he'd be the first to admit that it is early days for this library and it is very much under construction. But what isn't? All the more reason to showcase it openly. We all need a lot more of that.

Update on 2017-06-07 13:45 by Evan Bianco: Mauricio has posted the slides from his talk.

Learning about errors

Anton Biryukov (University of Calgary & graduate intern at Nexen) gave a great talk in the induced seismicity session. It was a lovely mashing-together of three of our favourite topics: seismology, machine learning, and uncertainty. Anton is researching how to improve microseismic and earthquake event detection by framing it as a machine-learning classification problem. He's using Monte Carlo methods to compute myriad synthetic seismic events by making small velocity variations, and then using those synthetic events to teach a model how to be more accurate about locating earthquakes.

Figure 2 from Anton Biryukov's abstract. An illustration of the signal classification concept. The signals originating from the locations on the grid (a) are then transformed into a feature space and labeled by the class containing the event origin. From Biryukov (2017). Event origin depth uncertainty - estimation and mitigation using waveform similarity. Canada GeoConvention, May 2017.
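
To give a flavour of the approach (and to be clear, this is my toy reconstruction of the idea, not Anton's code), here's a sketch of the 'simulate labelled events under velocity uncertainty, then train a classifier' workflow. The features, numbers, and choice of classifier are all invented stand-ins.

    # Toy sketch: Monte Carlo synthetic events with perturbed velocities,
    # used to train a classifier that assigns events to source-location classes.
    # Everything here is a stand-in, not the method from the talk.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n_cells, n_receivers, n_events = 25, 12, 5000

    # Invented 'true' travel times from each grid cell to each receiver.
    base_times = rng.uniform(0.5, 2.0, size=(n_cells, n_receivers))

    # Each synthetic event gets a small random velocity perturbation.
    labels = rng.integers(0, n_cells, size=n_events)
    perturbation = 1 + 0.03 * rng.standard_normal((n_events, n_receivers))
    features = base_times[labels] * perturbation   # noisy arrival-time features

    X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
    clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))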

The bright lights of geothermal energy
Matt Hall

Two interesting sessions clashed on Wednesday afternoon. I started off in the Value of Geophysics panel discussion, but left after James Lamb's report from the mysterious Chief Geophysicists' Forum. I had long wondered what went on in that secretive organization; it turns out they mostly worry about how to make important people like your CEO think geophysics is awesome. But the large room was a little dark, and — in keeping with the conference in general — so was the mood.

Feeling a little down, I went along to the Diversification of the Energy Industry session instead. The contrast was abrupt and profound. The bright room was totally packed with a conspicuously young audience numbering well over 100. The mood was hopeful, exuberant even. People were laughing, but not wistfully or ironically. I think I saw a rainbow over the stage.

If you missed this uplifting session but are interested in contributing to Canada's geothermal energy scene, which will certainly need geoscientists and reservoir engineers if it's going to get anywhere, there are plenty of ways to find out more or get involved. Start at cangea.ca and follow your nose.

We'll be writing more about the geothermal scene — and some of the other themes in this post — so stay tuned. 


Running away from easy

Matt and I are in Calgary at the 2017 GeoConvention. Instead of writing about highlights from Day 1, I wanted to pick out one awesome thing I saw. Throughout the convention, there is an air of sadness, of nostalgia, of struggle. But I detect a divide among us. There are people who are waiting for things to return to how they were, when life was easy. Others are exploring how to be a part of the change, instead of a victim of it. Things are no longer easy, but easy is boring. 


Want to start an oil and gas company? What resources are you going to need? Computers, pricey software applications, data. Purchase all of this stuff as a one-time capital expense, build a team, get an office lease, buy desks and a Keurig. Then if all goes well, 18 months later you'll have a slide deck outlining a play that you could pitch to investors. 

Imagine getting started without laying down a huge amount of capital for all those things. What if you could rent a desk at a co-working space, access the suite of software tools that you're used to, and use their Keurig? The computer infrastructure and software are managed and maintained by an IT service company, so you don't have to worry about it. 

Yesterday at the Calgary GeoConvention I heard all about how ReSourceYYC, a co-working space catering to oil and gas professionals, has introduced ResourceNET, a subscription-based cloud workstation environment for freelancers, consultants, startups, and the newly (and not-so-newly) underemployed community of subsurface professionals.

In making this offering, ReSourceYYC has partnered up with a number of software companies: Entero, Seisware, Surfer, ValNav, geoLOGIC, and Divestco, to name a few. The limitations and restrictions around this environment, if any, weren't totally clear. I wondered: Could I append or swap my own tools with this stack? Can I access this environment from anywhere?

It could be awesome. I think it could serve just as many freelancers and consultants as "oil and gas startups". It seems a bit too early to say, but I reckon there are literally thousands of geoscientists and engineers in Calgary that'd be all over this.

I think it's interesting and important and I hope they get it right.

The Computer History Museum

Mountain View, California, looking northeast over US 101 and San Francisco Bay. The Computer History Museum sits between the Googleplex and NASA Ames. Hangar 1, the giant airship hangar, is visible on the right of the image. Imagery and map data © Google, Landsat/Copernicus.

A few days ago I was lucky enough to have a client meeting in Santa Clara, California. I had not been to Silicon Valley before, and it was more than a little exciting to drive down US Route 101 past the offices of Google, Oracle and Amazon and basically every other tech company, marvelling at Intel’s factory and the hangars at NASA Ames, and seeing signs to places like Stanford, Mountain View, and Menlo Park.

I had a spare day before I flew home, and decided to visit Stanford’s legendary geophysics department, where there was a lecture that day. With an hour or so to kill, I thought I’d take in the Computer History Museum on the way… I never made it to Stanford.

The museum

The Computer History Museum was founded in 1996, building on an ambition of über-geek Gordon Bell. It sits in the heart of Mountain View, surrounded by the Googleplex, NASA Ames, and Microsoft. It’s a modern, airy building with the museum and a small café downstairs, and meeting facilities on the upper floor. It turns out to be an easy place to burn four hours.

I saw a lot of computers that day. You can see them too because much of the collection is in the online catalog. A few things that stood out for me were:

No seismic

I had been hoping to read more about the early days of Texas Instruments, because it was spun out of a seismic company, Geophysical Service or GSI, and at least some of their early integrated circuit research was driven by the needs of seismic imaging. But I was surprised not to find a single mention of seismic processing in the place. We should help them fix this!

SEG-Y Rev 2 again: little-endian is legal!

Big news! Little-endian byte order is finally legal in SEG-Y files.

That's not all. I already spilled the beans on 64-bit floats. You can now have up to 18 quintillion traces (18 exatraces?) in a seismic line. And, finally, the hyphen confusion is cleared up: it's 'SEG-Y', with a hyphen. All this is spelled out in the new SEG-Y specification, Revision 2.0, which was officially released yesterday after at least five years in the making. Congratulations to Jill Lewis, Rune Hagelund, Stewart Levin, and the rest of the SEG Technical Standards Committee.

Back up a sec: what's an endian?

Whenever you have to think about the order of bytes (the 8-bit chunks in a 'word' of 32 bits, for example) — for instance when you send data down a wire, or store bytes in memory, or in a file on disk — you have to decide if you're Roman Catholic or Church of England.

What?

It's not really about religion. It's about eggs.

In one of the more obscure satirical analogies in English literature, Jonathan Swift wrote about the ideological tussle between two factions of Lilliputians in Gulliver's Travels (1726). The Big-Endians liked to break their eggs at the big end, while the Little-Endians preferred the pointier option. Chaos ensued.

Two hundred and fifty years later, Danny Cohen borrowed the terminology in his 1 April 1980 paper, On Holy Wars and a Plea for Peace — in which he positioned the Big-Endians, preferring to store the big bytes first in memory, against the Little-Endians, who naturally prefer to store the little ones first. Big bytes first is how the Internet shuttles data around, so big-endian is sometimes called network byte order. The drawing (right) shows how the 4 bytes in a 32-bit 'word' (the hexadecimal codes 0A, 0B, 0C and 0D) sit in memory.

Because we write ordinary numbers big-endian style — 2017 has the thousands first, the units last — big-endian might seem intuitive. Then again, lots of people write dates as, say, 31-03-2017, which is analogous to little-endian order. Cohen reviews the computational considerations in his paper, but really these are just conventions. Some architectures pick one, some pick the other. It just happens that the x86 architecture that powers most desktop and laptop computers is little-endian, so people have been illegally (and often accidentally) writing little-endian SEG-Y files for ages. Now it's actually allowed.
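
If you want to poke at byte order yourself, Python's struct module makes it easy. Here's a quick sketch using the same four bytes as in the drawing:

    # Pack the 32-bit word 0x0A0B0C0D both ways and inspect the resulting bytes.
    import struct

    value = 0x0A0B0C0D                   # decimal 168496141

    big = struct.pack(">i", value)       # '>' means big-endian (network byte order)
    little = struct.pack("<i", value)    # '<' means little-endian (what x86 does)

    print(big.hex())                     # 0a0b0c0d : big end stored first
    print(little.hex())                  # 0d0c0b0a : little end stored first

    # Read the bytes back with the wrong byte order and you get a different number:
    print(struct.unpack("<i", big)[0])   # 218893066, not 168496141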

Still other byte orders are possible. Some processors, notably ARM and other RISC architectures, are bi-endian: they can run in either byte order. And there are mixed, or 'middle-endian', orderings too, in which, for example, the two halves of a 32-bit word are stored in the reverse of their 'pure' endian order. You can think of this as analogous to the month-first American date format: 03-31-2017. I guess this is like breaking your boiled egg in the middle. Swift did not tell us which religious denomination these hapless folks subscribe to.

OK, that's enough about byte order

I agree. So I'll end with this handy SEG-Y cheatsheet. Click here for the PDF.


References and acknowledgments

Cohen, Danny (April 1, 1980). On Holy Wars and a Plea for Peace. IETF, IEN 137. "...which bit should travel first, the bit from the little end of the word, or the bit from the big end of the word? The followers of the former approach are called the Little-Endians, and the followers of the latter are called the Big-Endians." Also published in IEEE Computer, October 1981 issue.

Thumbnail image: “Remember, people will judge you by your actions, not your intentions. You may have a heart of gold -- but so does a hard-boiled egg.” by Kate Ter Haar is licensed under CC BY 2.0