Geothermal facies from seismic

Here is a condensed video of the talk I gave at the SEG IQ Earth Forum in Colorado. Much like the tea-towel mock-ups I blogged about in July, this method illuminates physical features in seismic by exposing hidden signals and textures. 

This approach is useful for photographs of rocks and core, for satellite imagery, and for any geophysical dataset where there is more information to be had than isolated, rectangular arrangements of pixel values.

Interpretation has become an empty word in geoscience. Like so many other buzzwords, instead of being descriptive and specific jargon, it seems that everyone has their own definition or (mis)use of the word. If interpretation is the art and process of making mindful leaps between unknowns in data, I say let's quantify, to the best of our ability, the data we have. Your interpretation should be iterable, it should be systematic, and it should be cast as an algorithm. It should be verifiable, it should be reproducible. In a word, scientific.

You can download a copy of the presentation with speaking notes, and access the clustering and texture codes on GitHub.
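The codes on GitHub implement the full workflow. As a rough sketch of the general idea — compute texture attributes in windows, then cluster them into 'facies' — here is a minimal example, not the published implementation. It assumes scikit-image ≥ 0.19 (for the graycomatrix spelling) and scikit-learn; the window size, GLCM properties, and number of clusters are arbitrary illustrative choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def texture_facies(amplitude, window=32, step=16, n_facies=4):
    """Cluster a 2D seismic section into texture 'facies' (illustrative sketch)."""
    # Scale amplitudes to 8-bit grey levels for the co-occurrence matrix.
    img = np.uint8(255 * (amplitude - amplitude.min()) / np.ptp(amplitude))

    positions, features = [], []
    for i in range(0, img.shape[0] - window, step):
        for j in range(0, img.shape[1] - window, step):
            patch = img[i:i + window, j:j + window]
            glcm = graycomatrix(patch, distances=[2], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            # A handful of classic GLCM texture attributes per window.
            features.append([graycoprops(glcm, p).mean()
                             for p in ('contrast', 'homogeneity', 'energy')])
            positions.append((i, j))

    labels = KMeans(n_clusters=n_facies, n_init=10, random_state=0).fit_predict(features)
    return positions, labels
```

The labels can then be mapped back to the window positions to paint a facies overlay on the section.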

What technology?

This is my first contribution to the Accretionary Wedge geology-themed community blog. Charles Carrigan over at Earth-like Planet is hosting this month's topic, where he poses the question, "how do you perceive technology impacting the work that you do?" My perception of technology has matured, and will likely continue to change, but here are a few ways in which technology works for us at Agile. 

My superpower

I was at a session in December where one of the activities was to come up with one (and only one) defining superpower. A comic-bookification of my identity. What is the thing that defines you? The thing that you are or will be known for? It was an awkward experience for most: a bold introspection to quickly pull out a memorable, but not too cheesy, superpower that fit our lives. I contemplated my superhuman intelligence, and freakish strength... too immodest. The right choice was invisibility. That's my superpower. Transparency, WYSIWYG, nakedness, openness. And I realize now that my superpower is, not coincidentally, aligned with Agile's approach to technology. 

For some, technology is the conspicuous interface between us and our work. But conspicuous technology constrains your work, ordains it even. The real challenge is to use technology in a way that makes it invisible. Matt reminds me that how I did it isn't as important as what I did. Making the technology seem invisible means the user must be invisible as well. Ultimately, tools don't matter—they should slip away into the whitespace. Successful technology implementation is camouflaged. 

I is for iterate

Technology is not a source of ideas or insights, such as you'd find in the mind of an experienced explorationist or in a detailed cross-section or map. I'm sure you could draw a better map by hand. Technology is only a vehicle that can deliver the mind's inner constructs; it's not a replacement for vision or wisdom. Language or vocabulary has nothing to do with it. Technology is the enabler of iteration. 

So why don't we iterate more in our scientific work? Because it takes too long? Maybe that's true for a hand-drawn contour map, but technology is reducing the burden of iteration. Because we have never been taught humility? Maybe that stems from the way we learned to learn: homework assignments have exact solutions (and are done only once), and re-writing an exam is unheard of (unless you flunked it the first time around).

What about writing an exam twice to demonstrate mastery? What about reading a book twice, in two different ways? Once passively in your head, and once actively—at a slower pace, taking notes. I believe the more ways you can interact with your media, data, or content, the better the work will be. Students assume that the cost required to iterate outweighs the benefits, but that is no longer the case with digital workflows. Embracing technology's capacity to iterate seamlessly and reliably is what makes a grand impact in our work.

What do we use?

Agile strives to be open as a matter of principle, so when it comes to software we go for open source by default. Matt wrote recently about the applications and workstations that we use. 

Checklists for everyone

Avoidable failures are common and persistent, not to mention demoralizing and frustrating, across many fields — from medicine to finance, business to government. And the reason is increasingly evident: the volume and complexity of what we know has exceeded our individual ability to deliver its benefits correctly, safely, or reliably. Knowledge has both saved and burdened us.

I first learned about Atul Gawande from Bill Murphy's talk at the 1IWRP conference last August, where he offered the surgeon's research model for all imperfect sciences, casting the spectrum of problems in a simple–complicated–complex ternary space. In The Checklist Manifesto, Gawande writes about a topic that is relevant to all geoscience: the problem of extreme complexity. And I have been batting around the related ideas of cookbooks, flowcharts, recipes, and to-do lists for maximizing professional excellence ever since. After all, it is challenging and takes a great deal of wisdom to separate the wheat from the chaff and reduce a problem to its essential, irreducible bits. Then I finally read this book.

The creation of the now heralded 19-item surgical checklist found its roots in three places — the aviation industry, restaurant kitchens, and building construction:

Thinking about averting plane crashes in 1935, or stopping infections in central lines in 2003, or rescuing drowning victims today, I realized that the key problem in each instance was essentially a simple one, despite the number of contributing factors. One needed only to focus attention on the rudder and elevator controls in the first case, to maintain sterility in the second, and to be prepared for cardiac bypass in the third. All were amenable, as a result, to what engineers call "forcing functions": relatively straightforward solutions that force the necessary behavior — solutions like checklists.

What is amazing is that it took more than two years, and a global project sponsored by the World Health Organization, to devise such a seemingly simple piece of paper. But what a change it has had. Major complications fell by 36%, and deaths fell by 47%. Would you adopt a technology that reduced complications by 36%? Most would without batting an eye.

But the checklist paradigm is not without skeptics. There is resistance to the introduction of the checklist because it threatens our autonomy as professionals, and the ego and intelligence that we have trained hard to attain. An individual must surrender being the virtuoso. In return, the checklist enables teamwork and communication, engaging subordinates and empowering them at crucial points in the activity. The secret is that a checklist, done right, is more than just tick marks on a piece of paper — it is a vehicle for delivering behavioural change.

I can imagine huge potential for checklists in the problems we face in petroleum geoscience. But what would such checklists look like? Do you know of any in use today?

Polarity cartoons

...it is good practice to state polarity explicitly in discussions of seismic data, with some phrase such as: 'increase in impedance downwards gives rise to a red (or blue, or black) loop.'

Bacon, Simm & Redshaw (2003), 3D Seismic Interpretation, Cambridge

Good practice, but what a mouthful. And perhaps because it is such a cumbersome convention, it is often ignored, assumed, or skipped. We'd like to change that. Polarity is important: everyone needs to know what the wiggles (or colours) of seismic data mean.

Two important things

Seismic data is about contrasts. The data are an abstraction of geological contrasts in the earth. To connect the data to the geology, there are two important things you need to know about your data:

  1. What do the colours mean in terms of digits?

  2. What do the digits mean in terms of impedance?

So whenever you show seismic to someone, you need to answer these questions for them. Show the colourbar, and the sign of the digits (the magnitude of the digits is not very important; amplitudes are relative). Show the relationship between the sign of the digits and impedance.
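If you're ever unsure which convention a volume follows, a quick synthetic makes it explicit. This is a minimal sketch of my own (not from the original post): a hard layer below a soft one gives a positive reflection coefficient, which under the American–Canadian convention plots as a peak. The layer properties are made-up illustrative numbers.

```python
import numpy as np

def ricker(f=25, dt=0.002, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f in Hz."""
    t = np.arange(-length / 2, length / 2, dt)
    return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

# Two-layer model: a softer unit over a harder (faster, denser) one.
z_upper = 2400 * 2650          # impedance = velocity * density (illustrative values)
z_lower = 3200 * 2700

rc = (z_lower - z_upper) / (z_lower + z_upper)   # reflection coefficient, about +0.15

reflectivity = np.zeros(251)
reflectivity[125] = rc
trace = np.convolve(reflectivity, ricker(), mode='same')

# American-Canadian convention: increase in impedance downwards -> positive digits,
# so this interface appears as a peak (red, blue, or black, depending on your colourbar).
print(trace.max(), trace.min())
```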

Really useful

To help you show these things, we have created a table of polarity cartoons for some common colour scales.

  1. Decide if you want to use the American–Canadian convention of a downwards increase in impedance resulting in a positive amplitude, or the opposite European–Australian convention. Sometimes people talk about SEG Normal polarity — the de facto SEG standard is the American convention.

  2. Choose whether you want to show a high impedance layer sandwiched between low impedance ones, or vice versa. To make this decision, inspect your well ties or plot impedance against lithology. For example, if your sands are relatively fast and dense, you may want to choose the hard layer option.

  3. Select a colourmap that matches your displays. If you need another, you can download and edit the SVG file, or email us and we'll add it for you.

  4. Right-click on a thumbnail, copy it to your clipboard, and paste it into the corner of your section or timeslice in PowerPoint, Word, or wherever. If the thumbnail is too small or pixelated, click the thumbnail for higher resolution.

With so many options to choose from, we hope this little tool can help make your seismic discussions a little more transparent. What's more, if you see a seismic section without a legend like this, can you be sure the presenter knows the polarity of their data? Perhaps they do, but it is an oversight to assume that the audience knows it too. 

What do you make your audience assume?


UPDATE [2020] — we built a small web app to serve up fresh polarity cartoons, whenever you need them! Check it out.

Source rocks from seismic

A couple of years ago, Statoil's head of exploration research, Ole Martinsen, told AAPG Explorer magazine about a new seismic analysis method. Not just another way to discriminate between sand and shale, or water and gas, this was a way to assess source rock potential. Very useful in under-explored basins, and Statoil developed it for that purpose, but only the very last sentence of the Explorer article hints at its real utility today: shale gas exploration.

Calling the method Source Rocks from Seismic, Martinsen was cagey about details, but the article made it clear that it's not rocket surgery: “We’re using technology that would normally be used, say, to predict sandstone and fluid content in sandstone,” said Marita Gading, a Statoil researcher. Last October Helge Løseth, along with Gading and others, published a complete account of the method (Løseth et al, 2011).

Because they are actively generating hydrocarbons, source rocks are usually overpressured. Geophysicists have used this fact to explore for overpressured zones and even shale before. For example, Mukerji et al (2002) outlined the rock physics basis for low velocities in overpressured zones. Applying the physics to shales, Liu et al (2007) suggested a three-step process for evaluating source rock potential in new basins: (1) sequence stratigraphic interpretation; (2) seismic velocity analysis to determine source rock thickness; (3) source rock maturity prediction from seismic. Their method is also a little hazy, but the point is that people are looking for ways to get at source rock potential via seismic data. 

The Løseth et al article was exciting to see because it was the first explanation of the method that Statoil had offered. This was exciting enough that the publication was even covered by Greenwire, by Paul Voosen (@voooos on Twitter). It turns out to be fairly straightforward: acoustic impedance (AI) is inversely and non-linearly correlated with total organic carbon (TOC) in shales, though the relationship is rather noisy in the paper's examples (Kimmeridge Clay and Hekkingen Shale). This means that an AI inversion can be transformed to TOC, if the local relationship is known—local calibration is a must. This is similar to how companies estimate bitumen potential in the Athabasca oil sands (e.g. Dumitrescu 2009). 
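The paper gives the actual relationships for its examples. Purely to illustrate the shape of the workflow — calibrate locally at wells, then transform the inverted volume — here is a sketch that fits a log-linear trend. The functional form and all the numbers are my assumptions, not Løseth et al's calibration.

```python
import numpy as np

# Hypothetical well calibration data: acoustic impedance vs measured TOC (%).
ai_well = np.array([6500., 7000., 7500., 8200., 9000., 9800.])
toc_well = np.array([8.0, 6.5, 5.0, 3.5, 2.2, 1.5])

# Fit TOC as a linear function of log(AI) — one simple way to capture an
# inverse, non-linear relationship. The scatter tells you how noisy it is.
slope, intercept = np.polyfit(np.log(ai_well), toc_well, 1)

def toc_from_ai(ai):
    """Transform inverted acoustic impedance to TOC using the local fit."""
    return slope * np.log(ai) + intercept

# Apply to an inverted AI volume (any ndarray), then clip to physical values.
ai_volume = np.random.uniform(6000, 10000, size=(10, 10, 10))  # stand-in for a real inversion
toc_volume = np.clip(toc_from_ai(ai_volume), 0, None)
```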

Figure 6 from Løseth et al (2011): (A) seismic section; (B) acoustic impedance; (C) inverted seismic section where the source rock interval is converted to total organic carbon (TOC) percent. Seismically derived TOC values in source rock intervals can be imported into basin modeling software to evaluate the hydrocarbon generation potential of a basin.

The result is that thick, rich source rocks tend to have a strong negative amplitude at the top, at least in subsiding, mud-rich basins like the North Sea and the Gulf of Mexico. Of course, amplitudes also depend on stratigraphy, tuning, and so on. The authors expect amplitudes to dim with offset, because of elastic and anisotropic effects, giving a Class 4 AVO response.
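To see what a Class 4 'dim with offset' response looks like in numbers, here is a small sketch using the two-term Shuey approximation. The intercept and gradient are made-up values chosen to show the shape of the response, not taken from the paper.

```python
import numpy as np

theta = np.radians(np.arange(0, 40))

# Two-term Shuey approximation: R(theta) ~ R0 + G * sin^2(theta)
R0 = -0.15   # strong negative intercept at the top of a thick, rich source rock
G = 0.20     # positive gradient -> amplitude magnitude decreases (dims) with angle

r = R0 + G * np.sin(theta) ** 2

# Class 4: negative intercept, positive gradient — |R| shrinks with offset.
print(abs(r[0]), abs(r[-1]))   # about 0.15 at zero offset vs about 0.07 at 39 degrees
```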

This is a nice piece of work and should find application worldwide. There's a twist though: if you're interested in trying it out yourself, you might be interested to know that it is patent-pending: 

WO/2011/026996
INVENTORS:  Løseth,  H;  Wensaas, L; Gading, M; Duffaut, K; Springer, HM
Method of assessing hydrocarbon source rock candidate
A method of assessing a hydrocarbon source rock candidate uses seismic data for a region of the Earth. The data are analysed to determine the presence, thickness and lateral extent of candidate source rock based on the knowledge of the seismic behaviour of hydrocarbon source rocks. An estimate is provided of the organic content of the candidate source rock from acoustic impedance. An estimate of the hydrocarbon generation potential of the candidate source rock is then provided from the thickness and lateral extent of the candidate source rock and from the estimate of the organic content.

References

Dumitrescu, C (2009). Case study of a heavy oil reservoir interpretation using Vp/Vs ratio and other seismic attributes. Proceedings of SEG Annual Meeting, Houston. Abstract is online

Liu, Z, M Chang, Y Zhang, Y Li, and H Shen (2007). Method of early prediction on source rocks in basins with low exploration activity. Earth Science Frontiers 14 (4), p 159–167. DOI 10.1016/S1872-5791(07)60031-1

Løseth, H, L Wensaas, M Gading, K Duffaut, and M Springer (2011). Can hydrocarbon source rocks be identified on seismic data? Geology 39 (12), p 1167–1170. First published online 21 October 2011. DOI 10.1130/G32328.1

Mukerji, T, N Dutta, M Prasad, and J Dvorkin (2002). Seismic detection and estimation of overpressures. CSEG Recorder, September 2002. Part 1 and Part 2 (Dutta et al, same issue). 

The figure is reproduced from Løseth et al (2011) according to The Geological Society of America's fair use guidelines. Thank you GSA! The flaming Kimmeridge Clay photograph is public domain. 

Please sir, may I have some processing products?

Just like your petrophysicist, your seismic processor has some awesome stuff that you want for your interpretation. She has velocities, fold maps, and loads of data. For some reason, processors almost never offer them up — you have to ask. Here is my processing product checklist:

A beautiful seismic volume to interpret. Of course you need a volume to tie to wells and pick horizons on. These days, you usually want a prestack time migration. Depth migration may or may not be something you want to pay for. But there's little point in stopping at poststack migration because if you ever want to do seismic analysis (like AVO for example), you're going to need a prestack time migration. The processor can smooth or enhance this volume if they want to (with your input, of course). 

Unfiltered, attribute-friendly data. Processors like to smooth things with filters like fxy and fk. They can make your data look nicer, and easier to pick. But they mix traces and smooth potentially important information out—they are filters after all. So always ask for the unfiltered data, and use it for attributes, especially for computing semblance and any kind of frequency-based attribute. You can always smooth the output if you want.

Limited-angle stacks. You may or may not want the migrated gathers too—sometimes these are noisy, and they can be cumbersome for non-specialists to manipulate. But limited-angle stacks are just like the full stack, except with fewer traces. If you did prestack migration they won't be expensive, so get them exported while you have the processor's attention and your wallet open. Which angle ranges you ask for depends on your data and your needs, but get at least three volumes, and be careful when you get past about 35° of offset. 

Rich, informative headers. Ask to see the SEG-Y file header before the final files are generated. Ensure it contains all the information you need: acquisition basics, processing flow and parameters, replacement velocity, time datum, geometry details, and geographic coordinates and datums of the dataset. You will not regret this and the data loader will thank you.
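If you have Python handy, checking what you've been given is painless with the segyio library. A minimal sketch — the file name is a placeholder, and EBCDIC-encoded headers may need a different codec:

```python
import segyio

# Open without trying to infer 3D geometry, so any SEG-Y file will load.
with segyio.open('final_pstm_stack.sgy', ignore_geometry=True) as f:
    raw = f.text[0]   # the 3200-byte textual header: 40 'cards' of 80 characters
    # Decode if it comes back as bytes (use 'cp500' instead of 'ascii' for EBCDIC headers).
    text = raw.decode('ascii', errors='replace') if isinstance(raw, bytes) else raw
    for i in range(0, len(text), 80):
        print(text[i:i + 80])

    # A couple of binary header entries worth eyeballing too.
    print('Sample interval (us):', f.bin[segyio.BinField.Interval])
    print('Samples per trace:  ', f.bin[segyio.BinField.Samples])
```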

Processing report. Often, they don't write this until they are finished, which is a shame. You might consider asking them to keep a shared Google Doc or a private wiki as they go. That way, you can ensure you stay engaged and informed, and can even help with the documentation. Make sure it includes all the acquisition parameters as well as all the processing decisions. Those who come after you need this information!

Parameter volumes. If you used any adaptive or spatially varying parameters, like anisotropy coefficients for example, make sure you have maps or volumes of these. Don't forget time-varying filters. Even if it was a simple function, get it exported as a volume. You can visualize it with the stacked data as part of your QC. Other parameters to ask for are offset and azimuth diversity.

Migration velocity field (get to know velocities). Ask for a SEG-Y volume, because then you can visualize it right away. It's a good idea to get the actual velocity functions as well, since they are just small text files. You may or may not use these for anything, but they can be helpful as part of an integrated velocity modeling effort, and for flagging potential overpressure. Use with care—these velocities are processing velocities, not earth measurements.
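If the velocity functions do arrive as little text files, they are easy to put to work. Here is a sketch that reads a hypothetical two-column time–velocity file and resamples it onto a regular time axis for gridding or QC; the file format and names are assumptions, so adjust the parsing to whatever your processor actually delivers.

```python
import numpy as np

def read_velocity_function(path):
    """Read a hypothetical two-column file: time (ms), RMS velocity (m/s)."""
    t, v = np.loadtxt(path, unpack=True)
    return t, v

def resample_velocity(t, v, dt=4.0, tmax=4000.0):
    """Resample sparse time-velocity picks onto a regular time axis."""
    t_reg = np.arange(0.0, tmax + dt, dt)
    # Linear interpolation between picks; values beyond the ends are held constant.
    return t_reg, np.interp(t_reg, t, v)

# t, v = read_velocity_function('cdp_1001_vels.txt')   # hypothetical file name
# t_reg, v_reg = resample_velocity(t, v)
```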

The SEG's salt model, with velocities. Image: Sandia National Labs.

Surface elevation map. If you're on land, or on the sea floor, this comes from the survey and should be very reliable. It's a nice thing to add to fancy 3D displays of your data. Ask for it in depth and in time. The elevations are often tucked away in the SEG-Y headers too—you may already have them.

Fold data. Ask for fold or trace density maps at important depths, or just get a cube of all the fold data. While not as illuminating as illumination maps, fold is nevertheless a useful thing to know and can help you make some nice displays. You should use this as part of your uncertainty analysis, especially if you are sending difficult interpretations on to geomodelers, for example. 

I bet I have missed something... is there anything you always ask for, or forget and then have to extract or generate yourself? What's on your checklist?

Bad Best Practice

Applied scientists get excited about Best Practice. New professionals and new hires often ask where 'the manual' is, and senior technical management or chiefs often want to see such documentation being spread and used by their staff. The problem is that the scientists in the middle strata of skill and influence think Best Practice is a difficult, perhaps even ludicrous, concept in applied geoscience. It's too interpretive, too creative.

But promoting good ideas and methods is important for continuous improvement. At the 3P Arctic Conference in Halifax last week, I saw an interesting talk about good seismic acquisition practice in the Arctic of Canada. The presenter was Michael Enachescu of MGM Energy, well known in the industry for his intuitive and integrated approach to petroleum geoscience. He listed some problems with the term best practice, advocating instead phrases like good practice:

  • There's a strong connotation that it is definitively superlative
  • The corollary to this is that other practices are worse
  • Its existence suggests that there is an infallible authority on the subject (an expert)
  • Therefore the concept stifles innovation and even small steps towards improvement

All this is reinforced by the way Best Practice is usually written and distributed:

  • Out of frustration, a chief commissions a document
  • One or two people build a tour de force, taking 6 months to do it
  • The read-only document is published on the corporate intranet alongside other such documents
  • Its existence is announced and its digestion mandated

Unfortunately, the next part of the story is where things go wrong:

  • Professionals look at the document and find that it doesn't quite apply to their situation
  • Even if it does apply, they are slightly affronted at being told how to do their job
  • People know about it but lack the technology or motivation to change how they were already working
  • Within 3 years there is enough new business, new staff, and new technology that the document is forgotten and obsolete, until a high-up commissions a document...

So the next time you think to yourself, "We need a Best Practice for this", think about trying something different:

  • Forget top-down publishing, and instead seed editable, link-rich documents like wiki pages
  • Encourage discussion and ownership by the technical community, not by management
  • Request case studies, which emphasize practical adaptability, not theory and methodology
  • Focus first on the anti-pattern: common practice that is downright wrong

How do you spread good ideas and methods in your organization? Does it work? How would you improve it?

How to cheat at spot the difference

Yesterday I left you, dear reader, with a spot the difference puzzle. Here it is again, with my answer:

SpotTheDiff_result.png

Notice how my answer (made with GIMP) is not just a list of differences or a squiggly circle around each one. It's an exact map of the location and nature of every difference. I like the effect of seeing which 'direction' the difference goes in: blue things are in the left image but not the right. One flaw in this method is that I have reduced the image to monochrome; changes in colour alone would not show up. 

Another way to do it, a way that would catch even a subtle colour change, is to simply difference the images. Let's look at a detail from the image—the yellow box; the difference is the centre image:

SpotDiff_More_examples.png

The right-hand image here is a further processing of the difference, using a process in ImageJ that inverts the pixels' values, making dark things bright and vice versa. This reveals a difference we would probably never have otherwise noticed: the footprint of the lossy JPEG compression kernel. Even though the two input images were compressed with 98% fidelity, we have introduced a subtle, but pervasive, artifact.
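If you want to try this on your own image pairs, the whole trick fits in a few lines of Python. This is my NumPy and Pillow version rather than the GIMP/ImageJ steps described above, and the file names are placeholders:

```python
import numpy as np
from PIL import Image

# Load the two images as floating-point greyscale arrays (same dimensions assumed).
left = np.asarray(Image.open('left.png').convert('L'), dtype=float)
right = np.asarray(Image.open('right.png').convert('L'), dtype=float)

diff = left - right                     # signed difference: where, and in which 'direction'
mag = np.abs(diff)
stretched = np.uint8(255 * mag / max(mag.max(), 1))   # stretch to use the full 0-255 range

Image.fromarray(stretched).save('difference.png')
Image.fromarray(255 - stretched).save('inverted_difference.png')  # dark differences on white
```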

So what? Is this just an image processing gimmick? It depends how much you care about finding these differences. Not only was it easier to find all the differences this way, but now I know for certain that I have not missed any. We even see one or two very tiny differences that were surely unintentional (there's one just next to the cat's right paw). If differences (or similarities) mean a lot to you, because a medical prognosis or well location depends on their identification, the small ones might be very important!

Here's a small tutorial showing how I made the line difference, in case you are interested →

Visual crossplotting

To clarify, add detail
Edward Tufte

Pyroclastic flow on Nabro, Eritrea. Image: NASA.

Recently, the prolific geoblogger Brian Romans posted a pair of satellite images of a pyroclastic flow on Nabro in Eritrea. One image was in the visible spectrum, the other was a thermal image. Correlating them by looking back and forth between the images is unsatisfying, so I spent 10 minutes merging the data into a single view, making the correlation immediate and intuitive. 

Maps like this are always better than abstractions of data like graphs or crossplots (or scatter plots, if you prefer). Plots get unwieldy with more than three dimensions, and there are almost always more dimensions to the data, especially in geoscience. In the image above there are at least half a dozen dimensions to the data: x and y position, elevation, slope, rugosity, vegetation (none!), heat intensity, heat distribution,... And these other dimensions, however tenuous or qualitative, might actually be important—they provide context, circumstantial evidence, if you will.

When I review papers, one of the comments I almost always make is: get all your data into one view—help your reader make the comparison. Instead of two maps showing slightly different seismic attributes, make one view and force the comparison. Be careful with colours: don't use them all up for one of the attributes, leaving nothing for the other. Using greys and blues for one leaves reds and yellows for the other. This approach is much more effective than a polygon around your anomaly, say, because then you have indelibly overlain your interpretation too early in the story: wait until you have unequivocally demonstrated the uncanny correlation.
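For instance — a minimal matplotlib sketch of the idea, not from the original post, with synthetic stand-ins for the two attributes — reserve greys for the background attribute and a semi-transparent warm colourmap for the anomaly, so both live in one view:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-ins for two co-registered attributes (e.g. structure and a thermal anomaly).
x, y = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
base = np.sin(x) * np.cos(y)                      # 'background' attribute
anomaly = np.exp(-((x - 6) ** 2 + (y - 4) ** 2))  # 'foreground' attribute

fig, ax = plt.subplots()
ax.imshow(base, cmap='gray', origin='lower', extent=(0, 10, 0, 10))

# Mask the weak values so only the meaningful part of the second attribute shows,
# then render it with a warm, semi-transparent colourmap on top of the greys.
masked = np.ma.masked_where(anomaly < 0.2, anomaly)
overlay = ax.imshow(masked, cmap='hot', alpha=0.6, origin='lower', extent=(0, 10, 0, 10))

fig.colorbar(overlay, ax=ax, label='anomaly strength')
plt.show()
```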

If you're still not convinced that the richer image conveys more information, see how long it takes you to do this Spot The Difference. Come back tomorrow for the answer (and the point!)...

Creative Commons licensed image from Wikimedia Commons, work of User Muband (Japan)

GIMP is your friend!

Pair picking

Even the Lone Ranger didn't work alone all of the time.

Imagine that you are totally entrained in what you are doing: focused, dedicated, and productive. If you've lost track of time, you are probably feeling flow. It's an awesome experience when one person gets it; imagine the power when teams get it. Because there are so many interruptions that can cause turbulence, it can be especially difficult to establish coherent flow for the subsurface team. But if you learn how to harness and hold onto it, it's totally worth it. 

Seismic interpreters can seek out flow by partnering up and practising pair picking. Having a partner in the passenger seat is not only ideal for training, but it is a superior way to get real work done. In other industries, this has become routine because it works. Software developers sometimes code in pairs, and airline pilots share control of an aircraft. When one person is in charge of the controls, the other is monitoring, reviewing, and navigating. One person for tactical jobs, one for strategic surveillance.

Here are some reasons to try pair picking:

Solve problems efficiently — If you routinely have a partner, you will have someone to talk to when you run into a challenging problem. Mundane or sticky workarounds become less tenuous when you have a partner. You'll adopt more sensible solutions to your fit-for-purpose hacks.

Integrate smoothly — There's a time for hand-over, and there will be times when you must call upon other people's previous work to get your job done. 'No! Don't use Top_Cretaceous_candidate_final... use Evan_K_temp_DO-NOT-USE.' Pairing with the predecessors and successors of your role will get you better-aligned.

Minimize interruptionitis — if you have to run to a meeting, or the phone rings, your partner can keep plugging away. When you return, you can quickly rejoin the flow. It is best to get into a visualization room, or some other distraction-free room with a large screen, to keep your attention up and minimize the effect of interruptions.

Mutual accountability — build allies based on science, technology, and critical thinking, not gossip or politics. Your team will have no one to blame, and you'll feel more connected around the office. Is knowledge hoarded and privileged or is it open and shared? If you pick in pairs, there is always someone who can vouch for your actions.

Mentoring and training — by pair picking, newcomers quickly get to watch the flow of work, not just a schematic flow-chart. Instead of just an end-product, they see the clicks, the indecision, the iteration, and the pace at which tasks unfold.

Practising pair picking is not just about sharing tasks; it is about channelling our natural social energies in the pursuit of excellence. It may not be practical all of the time, and it may make you feel vulnerable, but pairing up for seismic interpretation might bring more flow to your workflow.

If you give it a try, please let us know how it goes!