Dynamic geology at AAPG

Brad Moorman stands next to his 48-inch (122 cm) Omni Globe spherical projection system on the AAPG exhibition floor, greeting passers-by drawn in by its cycling animations of Getech's dynamic plate reconstructions. His map-lamp projects evolutionary visions of geologic processes like a beacon of inspiration for petroleum explorers.

I've attended several themed sessions over the first day and a half at AAPG and the ones that have stood out for me have had this same appeal.

Computational stratigraphy

The rates of processes like accommodation creation and sedimentation can be difficult to unpeel from stratal geometries. In his PhD work, Impact of non-uniqueness on sequence stratigraphy, Guy Prince ran forward numerical models with a variety of input parameters and produced key stratigraphic surfaces with striking similarity. By forward modeling the depositional dynamics, he showed that there are at least two ways to make a maximum flooding surface, a sequence boundary, and topset aggradation. Non-uniqueness implies that there isn't just one model that fits the data, nor even two, but Guy cleverly made simple comparisons to illustrate such ambiguities. The next step in this methodology, and it is a big step, is to express the entire model space: just how many solutions are there?
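To get a feel for what non-uniqueness means here, consider a toy calculation (nothing to do with Guy's actual models; the rates below are invented) in which two quite different accommodation and sedimentation histories preserve exactly the same thickness of section:

    import numpy as np

    # Toy illustration of non-uniqueness: a steady history and a pulsed
    # history end up preserving the same total thickness.
    t = np.linspace(0, 1, 101)                       # dimensionless time

    sed_a = np.full_like(t, 1.0)                     # steady sediment supply
    acc_a = np.full_like(t, 0.8)                     # steady accommodation creation

    sed_b = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)    # pulsed supply
    acc_b = 0.8 + 0.5 * np.sin(2 * np.pi * 3 * t)    # pulsed accommodation

    # Where supply exceeds accommodation, only the accommodation gets filled.
    thick_a = np.trapz(np.minimum(sed_a, acc_a), t)
    thick_b = np.trapz(np.minimum(sed_b, acc_b), t)
    print(thick_a, thick_b)                          # both come out at 0.8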

If you were a farmer here, you lost your land

Henry Posamentier, seismic geomorphologist at Chevron, showed extremely high-resolution 3D sparker seismic imaging just beneath the seafloor in the Gulf of Thailand. Because this locale is more than 1000 km from the nearest continental shelf, it has been essentially unaffected by sea-level change, making it an ideal place to study pure fluvial depositional patterns. Such fluvial systems result in reservoirs in their accretionary point bars, but they are hard to predict.

To make his point, Henry showed a satellite image of the Ping River, in the north of Chiang Mai, where meander loops had shifted in response to a single flood season a few years ago: "If you were a farmer here, you lost your land."

Wells can tell you about channel thickness, and seismic may resolve the channel width and the sinuosity, but only a dynamic model of the environment can suggest how well connected the sand is.

The evolution of a single meandering channel belt

Ron Boyd from ConocoPhillips showed a four-step process investigating the evolution of a single channel belt in his talk, Tidal-Fluvial Sedimentology and Stratigraphy of the McMurray Formation.

  1. Start with a cartoon facies interpretation of channel evolution.
  2. Trace out the static geomorphological model on seismic time slices.
  3. Identify directions of fluvial migrations point by point, time step by time step.
  4. Distribute petrophysical properties within each channel element in chronological sequence.

Mapping the dynamics of a geologic scenario along a timeline gives you access to all the pieces of a single geologic puzzle. But what really matters is how that puzzle compares with the handful of pieces in your hand.

More tomorrow — stay tuned.

Google Earth imagery ©2014 DigitalGlobe, maps ©2014 Google

This post was edited on April 16, 2014, to add mentions of and links to Getech.

Hacking logs

Over the weekend, 6 intrepid geologist-geeks gathered in a coworking space in the East Downtown area of Houston. With only six people, I wasn't sure we could generate the same kind of creative buzz we had at the geophysics hackathon last September. But sitting with other geoscientists and solving problems with code works at any scale. 

The theme of the event was 'Doing cool things with log data'. There were no formal teams and no judging round. Nonetheless, some paired up in loose alliances, according to their interests. Here's a taste of what we got done in 2 days...

Multi-scale display

Jacob Foshee and Ben Bougher worked on some JavaScript to display logs with the sort of adaptive scrolling feature you often see on finance sites for displaying time series. The challenge was to display not just one log with its zoomed version, but multiple logs at multiple scales — and ideally core photos too. They got the multiple logs, though not yet at multiple scales, and they got the core photo. The example (right) shows some real logs from Panuke, a real core photo from the McMurray, and a fake synthetic seismogram. 

Click on the image for a demo. And the code is all open, all the way. Thanks guys for an awesome effort!

Multi-scale log attributes

Evan and Mark Dahl (ConocoPhillips) — who was new to Python on Friday — built some fascinating displays (right). The idea was to explore stratigraphic stacking patterns in scale space. It's a little like spectral decomposition for 1D data. They averaged a log at a range of window sizes, increasing exponentially (musicians and geophysicists know that scale is best thought of in octaves). Then they made a display that ranges from short windows on the left-hand side to long ones on the right. Once you get your head around what exactly you're looking at here, you naturally want to ask questions about what these gothic-window patterns mean geologically (if anything), and what we can do with them. Can we use them to help train a facies classifier, for example? [Get Evan's code]
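Here's a rough sketch of the idea in Python (my own reconstruction, not Evan and Mark's actual code; the window sizes are arbitrary):

    import numpy as np

    def multiscale_average(log, n_octaves=8):
        """Average a log over windows that double in size at each step.
        Returns a 2D array: rows are depth samples, columns are scales,
        short windows on the left, long windows on the right."""
        scales = [2**i for i in range(1, n_octaves + 1)]
        out = np.empty((len(log), len(scales)))
        for j, w in enumerate(scales):
            kernel = np.ones(w) / w
            out[:, j] = np.convolve(log, kernel, mode='same')
        return out

    # e.g. gr = np.loadtxt('gamma_ray.txt')      # hypothetical input file
    #      img = multiscale_average(gr)
    #      plt.imshow(img, aspect='auto')        # the 'gothic window' display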

Facies from logs

In between running for tacos, I worked on computing grey-level co-occurrence matrices (GLCMs) for logs, which are a prerequisite for computing certain texture attributes. Why would anyone do this? We'd often like to predict facies from well logs; maybe log textures (spiky vs flat, upwards-fining vs barrel-shaped) can help us discriminate facies better. [Download my IPython Notebook]
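In case you're curious, the gist is something like this (a bare-bones sketch, not the notebook itself; the number of grey levels and the lag are arbitrary):

    import numpy as np

    def glcm(log, n_levels=16, lag=1):
        """Grey-level co-occurrence matrix for a 1D log: quantize the log
        into n_levels bins, then count how often level i is followed by
        level j at the given lag. Assumes no NaNs in the log."""
        lo, hi = log.min(), log.max()
        q = np.floor((log - lo) / (hi - lo) * (n_levels - 1)).astype(int)
        m = np.zeros((n_levels, n_levels))
        for i, j in zip(q[:-lag], q[lag:]):
            m[i, j] += 1
        return m / m.sum()            # normalize to joint probabilities

    # Texture attributes follow from the matrix, e.g. GLCM contrast:
    # contrast = sum over i, j of (i - j)**2 * m[i, j]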

Wassim Benhallam (of Lisa Stright's Rocks to Models lab at University of Utah) worked on machine learning algorithms for computing facies from core. He started pursuing self-organizing maps as an interesting line of attack, and plans to use MATLAB to get something working. I hope he tells us how it goes!

We didn't have a formal contest at this event, but our friend Maitri Erwin was kind enough to stop by with some excellent wine and her characteristically insightful and inquisitive demeanour. After two days rattling around with nothing but geeks and tacos for company, she provided some much-needed objectivity and gave us all good ideas about how to develop our efforts in the coming weeks. 

We'll be doing this again in Denver this autumn, some time around the SEG Annual Meeting. If it appeals to your creativity — maybe there's a tool you've always wished for — why not plan to join us?  

As I get around to it, I'll be dumping more info and pictures over on the wiki.

Looking forward to AAPG

Today we're en route to the AAPG Annual Convention & Exhibition (the ACE) in Houston. We have various things going on before it and after it too, so we're in Houston for 10 days of geoscience. Epic!

The appetizers

On Friday we're hosting a 'learning geoscience programming' bootcamp at START, our favourite Houston coworking space. Then we roll straight into our weekend programming workshop — Rock Hack — also at START. Everyone is welcome — programming newbies, established hackers. We want to build tools for working with well logs. You don't need any special skills, just ideas. Bring whatever you have! We'll be there from 8 am on Saturday. (Want more info?)

At least come for the breakfast tacos.

Conference highlight forecast

Regular readers will know that I'm a bit of a jaded conference-goer. But I haven't been to AAPG since Calgary in 2005, and I am committed to reporting the latest in geoscience goodness — so I promise to go to some talks and report back on this very blog. I'm really looking forward to it since Brian Romans whetted my appetite with a round-up of his group's research offerings last week.

I thought I'd share what else I'll be trying to get to. I can't find a way to link to the abstracts — you'll have to hunt them down in the Itinerary Planner... 

  • Monday am. Communicating our science. Jim Reilly, Iain Stewart, and others.
  • Monday pm. Case Studies of Geological and Geophysical Integration sounds okay, but might under-deliver. And there's a talk called 3-D Printing Artificial Reservoir Rocks to Test Their Petrophysical Properties, by Sergey Ishutov that should be worth checking out.
  • Tuesday am. Petroleum Geochemistry and Source Rock Characterization, in honour of Wally Dow.
  • Tuesday pm. Turbidites and Contourites, Room 360, is the place to be. Zane Jobe is your host.
  • Wednesday am. I'll probably end up in Seismic Visualization of Hydrocarbon Play Fairways.
  • Wednesday pm. Who can resist Space and Energy Frontiers? Not me.

That's about it. I'm teaching my geoscience writing course at a client's offices on Friday, then heading home. Evan will be hanging out and hacking some more I expect. Expect some updates to modelr.io!

If you're reading this, and you will be at AAPG — look out for us! We'll be the ones sitting on the floor near electrical outlets, frantically typing blog posts.

Getting started with Modelr

Let's take a closer look at modelr.io, our new modeling tool. Just like real seismic experiments, there are four components:

  • Make a framework. Define the geometries of rock layers.
  • Make an earth. Assign a set of rock properties to each layer.
  • Make a kernel. Define the seismic survey.
  • Make a plot. Set the output parameters.

Modelr takes care of the physics of wave propagation and reflection, so you don't have to stick with normal incidence acoustic impedance models if you don't want to. You can explore the full range of possibilities.

3 ways to slice a wedge

To the uninitiated, the classic 3-layer wedge model may seem ridiculously trivial. Surely the earth looks more complicated than that! But we can leverage such geometric simplicity to systematically study how seismic waveforms change across spatial and non-spatial dimensions. 

Spatial domain. In cross-section (right), a seismic wedge model lets you analyse the resolving power of a given wavelet. In this display the onset of tuning is marked by the vertical red line, and the thickness at which maximum tuning occurs is shown in blue. Reflection profiles can be shown for any incidence angle, or range of incidence angles (offset stack).
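If you want a feel for what's going on under the hood, here is a minimal zero-offset wedge synthetic in Python (not Modelr's code; the impedances and the 30 Hz Ricker wavelet are made up):

    import numpy as np

    def ricker(f, dt=0.001, length=0.128):
        """Ricker wavelet of dominant frequency f Hz."""
        t = np.arange(-length/2, length/2, dt)
        return (1 - 2*(np.pi*f*t)**2) * np.exp(-(np.pi*f*t)**2)

    # Hypothetical shale-sand-shale wedge: acoustic impedances
    imp = np.array([6.0e6, 4.5e6, 6.0e6])
    rc_top = (imp[1] - imp[0]) / (imp[1] + imp[0])
    rc_base = (imp[2] - imp[1]) / (imp[2] + imp[1])

    dt, w = 0.001, ricker(30.0)
    thicknesses = np.arange(0, 0.060, dt)        # two-way time, 0 to 60 ms

    amps = []
    for thick in thicknesses:
        rc = np.zeros(200)
        rc[50] = rc_top
        rc[50 + int(round(thick / dt))] += rc_base
        syn = np.convolve(rc, w, mode='same')
        amps.append(np.abs(syn).max())           # composite amplitude

    # The tuning curve: amplitude grows to a maximum near the wavelet's
    # peak-to-trough time, then falls off as the bed thickens.
    print(thicknesses[np.argmax(amps)])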

Amplitude versus angle (AVA) domain. Maybe you are working on a seismic inversion problem, so you want to see what a CDP angle gather looks like above and below tuning thickness. Will a tuned AVA response change your quantitative analysis? This 3-layer model looks like a two-layer AVA gather, except that the original wavelet appears to have undergone a 90-degree phase rotation. Looks can be deceiving.
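The angle-dependent reflectivity behind such a gather can be approximated with the two-term Shuey equation. Here's a sketch (not necessarily what Modelr does internally, and the rock properties are invented):

    import numpy as np

    def shuey(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
        """Two-term Shuey approximation to P-wave reflectivity:
        R(theta) ~ R0 + G * sin^2(theta)."""
        theta = np.radians(theta_deg)
        vp, vs, rho = (vp1 + vp2)/2, (vs1 + vs2)/2, (rho1 + rho2)/2
        dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
        r0 = 0.5 * (dvp/vp + drho/rho)
        g = 0.5 * dvp/vp - 2 * (vs/vp)**2 * (drho/rho + 2 * dvs/vs)
        return r0 + g * np.sin(theta)**2

    # Hypothetical shale over gas sand, 0 to 40 degrees of incidence:
    angles = np.arange(0, 41)
    r = shuey(2400, 1100, 2350, 2550, 1500, 2100, angles)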

Amplitude versus frequency domain. If you are trying to design a seismic source for your next survey, and you want to ensure you've got sufficient bandwidth to resolve a thin bed, you can compute a frequency gather — right, bottom — and explore a swath of wavelets with regard to critical thickness in your prospect. The tuning frequency (blue) and resolving frequency (red) are revealed in this domain as well. 

Wedges are tools for seismic waveform classification. We aren't just interested in digitizing peaks and troughs, but in the subtle interplay of amplitude tuning and apparent phase rotation across the range of angles and bandwidths in the seismic experiment. We need to know what to expect from the data, given our supposed geology.

In a nutshell, all seismic models are about illustrating the band-limited nature of seismic data on specific geologic scenarios. They help us calibrate our intuition when bandwidth causes ambiguity in interpretation. Which is nearly all of the time.

How to load SEG-Y data

Yesterday I looked at the anatomy of SEG-Y files. But it's pathology we're really interested in. Three times in the last year, I've heard from frustrated people. In each case, the frustration stemmed from the same problem. The epic email trails led directly to these posts. Next time I can just send a URL!

In a nutshell, the specific problem these people experienced was missing or bad trace location data. Because I've run into this so many times before, I never trust location data in a SEG-Y file. You just don't know where it's been, or what has happened to it along the way — what's the datum? What are the units? And so on. So all you really want to get from the SEG-Y are the trace numbers, which you can then match to a trustworthy source for the geometry.
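For example, with the segyio library (my choice here, not something from the original post), grabbing just the CDP numbers looks something like this:

    import segyio

    # Open without trying to infer 3D geometry; we only want the headers.
    with segyio.open('line_001.sgy', ignore_geometry=True) as f:
        cdps = [h[segyio.TraceField.CDP] for h in f.header]

    # Now match these CDP numbers against a trusted navigation file
    # to get coordinates you can actually believe.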

Easy as 1-2-3, er, 4

This is my standard approach to loading data. Your mileage will vary, depending on your software and your data. 

  1. Find the survey geometry information. For 2D data the geometry is usually in a separate navigation ('nav') file. For 3D you are just looking for cornerpoints, and something indicating how the lines and crosslines are numbered (they might not start at 1, and might not be oriented how you expect). This information may be in the processing report or, less reliably, in the EBCDIC text header of the SEG-Y file.
  2. Now define the survey geometry. You need a location for every trace for a 2D, and the survey's cornerpoints for a 3D. The geometry is a description of where the line goes on the earth, in surface coordinates: where the starting trace is, how many traces there are, and what the trace spacing is. In other words, it tells you where the traces go. It's variously called 'navigation', 'survey', or some other synonym.
  3. Finally, load the traces into their homes, one vintage (survey and processing cohort) at a time for 2D. The cross-reference between the geometry and the SEG-Y file is the trace or CDP number for a 2D, and the line and crossline numbers for a 3D. (There's a sketch of this cross-referencing step just after this list.)
  4. Check everything twice. Does the map look right? Is the survey the right shape and size? Is the line spacing right? Do timeslices look OK?
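Here's what step 3 might look like for a 2D line in Python, matching each trace's CDP number to coordinates from a trusted nav file (the nav file's column layout is assumed, and cdps comes from the segyio sketch above):

    import numpy as np

    nav = np.loadtxt('line_001.nav')             # assumed columns: CDP, X, Y
    coords = {int(cdp): (x, y) for cdp, x, y in nav}

    # cdps is the list of trace CDP numbers read from the SEG-Y file
    trace_xy = [coords[cdp] for cdp in cdps]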

Where to get the geometry data?

So, where to find cornerpoints, line spacings, and so on? Sadly, the header cannot be trusted, even in newly-processed data. If you have it, the processing report is a better bet. It often helps to talk to someone involved in the acquisition and processing too. If you can corroborate with data from the acquisition planning (line spacings, station intervals, and so on), so much the better — but remember that some acquisition parameters may have changed during the job.

Of vital importance is some independent corroboration — a map, ideally — of the geometry and the shape and orientation of the survey. I can't count the number of back-to-front surveys I've seen. I even saw one upside-down (in the z dimension) once, but that's another story.

Next time, I'll break down the loading process a bit more, with some step-by-step for loading the data somewhere you can see it.

What is SEG-Y?

The confusion starts with the name, but whether you write SEGY, SEG Y, or SEG-Y, it's definitely pronounced 'segg why'. So what is this strange substance?

SEG-Y means seismic data. For many of us, it's the only type of seismic file we have much to do with — we might handle others, but for the most part they are closed, proprietary formats that 'just work' in the application they belong to (Landmark's brick files, say, or OpendTect's CBVS files). Processors care about other kinds of data — the SEG has defined formats for field data (SEG-D) and positional data (SEG-P), for example. But SEG-Y is the seismic file for everyone. Kind of.

The open SEG-Y "standard" (those air quotes are an important feature of the standard) was defined by SEG in 1975. The first revision, Rev 1, was published in 2002. The second revision, Rev 2, was announced by the SEG Technical Standards Committee at the SEG Annual Meeting in 2013 and I imagine we'll start to see people using it in 2014. 

What's in a SEG-Y file?

SEG-Y files have lots of parts:

The important bits are the EBCDIC header (green) and the traces (light and dark blue).

The EBCDIC text header is a rich source of accurate information that provides everything you need to load your data without problems. Yay standards!

Oh, wait. The EBCDIC header doesn't say what the coordinate system is. Oh, and the datum is different from the processing report. And the dates look wrong, and the trace length is definitely wrong, and... aargh, standards!

The other important bit — the point of the whole file really — is the traces themselves. They also have two parts: a header (light blue, above) and the actual data (darker blue). The data are stored on the file in (usually) 4-byte 'words'. Each word has its own address, or 'byte location' (a number), and a meaning. The headers map the meaning to the location, e.g. the crossline number is stored in byte 21. Usually. Well, sometimes. OK, it was one time.

According to the standard, here's where the important stuff is supposed to be:
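Here's a bare-bones way to peek at a trace header word with nothing but the Python standard library, plus some of the Rev 1 byte locations you end up using most. It assumes big-endian 4-byte integer header words and a constant trace length, which is the usual case but by no means guaranteed:

    import struct

    TEXT, BINARY, TRACE_HEADER = 3200, 400, 240     # bytes, per the standard

    def header_word(path, trace=0, byte=21, n_samples=1001):
        """Read the 4-byte big-endian integer at the given (1-indexed)
        byte location in the header of the given trace, assuming 4-byte
        samples. n_samples should really come from bytes 3221-3222 of
        the binary header."""
        offset = TEXT + BINARY + trace * (TRACE_HEADER + 4 * n_samples)
        with open(path, 'rb') as f:
            f.seek(offset + byte - 1)
            return struct.unpack('>i', f.read(4))[0]

    # Some of the usual suspects (Rev 1): CDP ensemble number at byte 21,
    # CDP X and Y at 181 and 185, inline at 189, crossline at 193.
    print(header_word('line_001.sgy', trace=0, byte=21))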

I won't go into the unpleasantness of poking around in SEG-Y files right now — I'll save that for next time. Suffice to say that it's often messy, and if you have access to a data-loading guru, treat them exceptionally well. When they look sad — and they will look sad — give them hugs and hot tea. 

What's so great about Rev 2?

The big news in the seismic standards world is Revision 2. According to this useful presentation by Jill Lewis (Troika International) at the Standards Leadership Council last month, here are the main features:

  • Allow 240 byte trace header extensions.
  • Support up to 2³¹ (that's about 2.1 billion!) samples per trace and traces per ensemble.
  • Permit arbitrarily large and small sample intervals.
  • Support 3-byte and 8-byte sample formats.
  • Support microsecond date and time stamps.
  • Provide for additional precision in coordinates, depths, elevations.
  • Synchronize coordinate reference system specification with SEG-D Rev 3.
  • Backward compatible with Rev 1, as long as undefined fields were filled with binary zeros.

Two billion samples at µs intervals is over 30 minutes of data. Clearly, the standard is aimed at <ahem> Big Data, and accommodating the massive amounts of data coming from techniques like variable timing acquisition, permanent 4D monitoring arrays, and microseismic.
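That figure is easy to check with a line of Python:

    minutes = 2**31 * 1e-6 / 60    # 2**31 samples at 1 microsecond each
    print(minutes)                 # about 35.8 minutes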

Next time, we'll look at loading one of these things. Not for the squeamish.

Calibrate your seismic intuition

On Tuesday we announced our new web app, modelr.io. Why are we so excited about it? 

  • We love the idea that subsurface software can cost dollars, not thousands of dollars.
  • We love the idea of subsurface software being online, not on the desktop.
  • We love the idea that subsurface software can be open source. Here's our code!
  • We love the idea of subsurface software that doesn't need a manual to master.
  • We love the idea of subsurface software that runs on a tablet or a phone.
  • We see software as an important way to share knowledge and connect people.

OK, that's enough reasons. There are more. Those are the main ones.

The point is: we love these ideas. And we hope that you, dear reader, at least like some of them a bit. Because we really want to keep developing modelr. We think it can be awesome. Imagine 3D earth models, imagine full waveform modeling, imagine gravity and magnetic models. We get very excited when we think about all the possibilities. There's no better way to calibrate your seismic intuition than modeling, and modelr is a great place to start modeling.

Here's a challenge: take 3 minutes and see if you can generate...

  • A wedge model and tuning curve
  • An AVA gather for a Class 4 sand
  • A stochastic AVA crossplot

The most important thing nobody does

A couple of weeks ago, we told you we were up to something. Today, we're excited to announce modelr.io — a new seismic forward modeling tool for interpreters and the seismically inclined.

Modelr is a web app, so it runs in the browser, on any device. You don't need permission to try it, and there's never anything to install. No licenses, no dongles, no not being able to run it at home, or on the train.

Later this week, we'll look at some of the things Modelr can do. In the meantime, please have a play with it.
Just go to modelr.io and hit Demo, or click on the screenshot below. If you like what you see, then think about signing up — the more support we get, the faster we can make it into the awesome tool we believe it can be. And tell your friends!

If you're intrigued but unconvinced, sign up for occasional news about Modelr:

This will add you to the email list for the modeling tool. We never share user details with anyone. You can unsubscribe any time.

A long weekend of creative geoscience computing

The Rock Hack is in three weeks. If you're in Houston, for AAPG or otherwise, this is going to be a great opportunity to learn some new computer skills, build some tools, or just get some serious coding done. The Agile guys — me, Evan, and Ben — will be hanging out at START Houston, laptops open, all day 5 and 6 April, about 8:30 till 5. The breakfast burritos and beers are on us.

Unlike the geophysics hackathon last September, this won't be a contest. We're going to try a more relaxed, unstructured event. So don't be shy! If you've always wanted to try building something but don't know where to start, or just want to chat about The Next Big Thing in geoscience or technology — please drop in for an hour, or a day.

Here are some ideas we're kicking around for projects to work on:

  • Sequence stratigraphy calibration app to tie events to absolute geologic time and to help interpret systems tracts.
  • Wireline log 'attributes'.
  • Automatic well-to-well correlation.
  • Facies recognition from core.
  • Automatic photomicrograph interpretation: grain size, porosity, sorting, and so on.
  • A mobile app for finding and capturing data about outcrops.
  • An open source basin modeling tool.

Short course

If you feel like a short course would get you started faster, then come along on Friday 4 April. Evan will be hosting a 1-day course, leading you through getting set up for learning Python, learning some syntax, and getting started on the path to scientific computing. You won't have super-powers by the end of the day, but you'll know how to get them.


The course includes food and drink, and lots of code to go off and play with. If you've always wanted to get started programming, this is your chance!

Purposeful discussion in geoscience

Regular readers will remember the Unsolved Problems Unsession at the GeoConvention in Calgary last May. We think these experiments in collaboration are one possible way to get people more involved in progressing geoscience at conferences, and having something to show for it. We plan to do more — and are here to support you if you'd like to try one in your community.

Last Thursday was the 2014 CSEG Symposium. The organizers asked me for a short video to sum up what happened at the unsession for the crowd, and to help get them in the mood for some discussion. I hope it helped...

Getting better

Conferences seem so crammed with talks these days. No time for good conversation, in or out of the sessions. The only decent discussion I remember recently (apart from the unsession, obvsly) was at EAGE in 2012, when a talk finished early and the space filled with a fascinating discussion between two compressed sensing clever-clogs.

I think there are a few ways to get better at it:

  • Make more time for it, preferably at least 40 minutes.
  • Get people into smaller groups, about 4–12 people is good.
  • Facilitate with some ground rules, provocative questions, and conversation management.
  • Capture what was said, preferably in real time and using the participants' own words.
  • Use lots of methods: drawing, sticky notes, tweets, video, and so on.
  • Reflect the conversation back at the participants, and let them respond.
  • Read up on open space, knowledge café, charrettes, and other methods.
  • Don't shut it down with "I guess we're out of time..." — review or sum up first.

Think about when you have been part of a really good conversation: how it feels, how it flows, how you remember it for days afterwards and mention it to others later. I think we can have more of those about our work, and conferences are a great place to help them happen.

Stay tuned for details of the next unsession — again, at the Calgary GeoConvention.