Must-read geophysics blogs

Tuesday's must-read list was all about traditional publishing channels. Today, it's all about new media.

If you're anything like me before Agile, you don't read a lot of blogs. At least, not ones about geophysics. But they do exist! Get these in your browser favourites, or use a reader like Google Reader (anywhere) or Flipboard (on iPad).

Seismos

Chris Liner, a geophysics professor at the University of Arkansas, recently moved from the University of Houston. He's been writing Seismos, a parallel universe to his occasional Leading Edge column, since 2008.

MyCarta

Matteo Niccoli (@My_Carta on Twitter) is an exploration geoscientist in Stavanger, Norway, having recently moved from Calgary, Canada. He's been writing MyCarta: Geophysics, visualization, image processing and planetary science since 2011. This blog is a must-read for MATLAB hackers and image processing nuts. Matteo was one of our 52 Things authors.

GeoMika

Mika McKinnon (@mikamckinnon), a geophysicist in British Columbia, Canada, has been writing GeoMika: Fluid dynamics, disasters, geophysics, and fieldwork since 2008. She's also into education outreach and the maker-hacker scene.

The Way of the Geophysicist

Jesper Dramsch (@JesperDramsch), a geophysicist in Hamburg, Germany, has written the wonderfully personal and philosophical The Way of The Geophysicist since 2011. His tales of internships at Fugro and Schlumberger provide great insights for students.

VatulBlog

Maitri Erwin (@maitri) is an exploration geoscientist in Texas, USA. She has been blogging since 2001 (surely some kind of record), and both she and her unique VatulBlog: From Kuwait to Katrina and beyond defy categorization. Maitri was also one of our 52 Things authors.

There are other blogs on topics around seismology and exploration geophysics — shout outs go to Hypocentre in the UK, the Laboratoire d'imagerie et acquisition des mesures géophysiques in Quebec, occasional seismicky posts from sedimentologists like @zzsylvester, and the panoply of bloggery at the AGU. Stick those in your reader!

Must-read geophysics

If you had to choose your three favourite, most revisited, best remembered papers in all of exploration geophysics, what would you choose? Are they short? Long? Full of math? Well illustrated? 

Keep it honest

Barnes, A (2007). Redundant and useless seismic attributes. Geophysics 72 (3). DOI:10.1190/1.2716717
Engaging papers are rare, but they do crop up occasionally. I love Art Barnes's Redundant and useless seismic attributes. In this business, I sometimes feel like our opinions — at least our public ones — have been worn down by secrecy and marketing. So Barnes's directness is doubly refreshing:

There are too many duplicate attributes, too many attributes with obscure meaning, and too many unstable and unreliable attributes. This surfeit breeds confusion and makes it hard to apply seismic attributes effectively. You do not need them all.

And keep it honest

Blau, L (1936). Black magic in geophysical prospecting. Geophysics 1 (1). DOI:10.1190/1.1437076
I can't resist Ludwig Blau's wonderful Black magic in geophysical prospecting, published 77 years ago this month in the very first issue of Geophysics. The language is a little dated, and the technology mostly sounds rather creaky, but the point, like Blau's wit, is as fresh as ever. You might not learn a lot of geophysics from this paper, but it's an enlightening history lesson, and a study in engaging writing the likes of which we rarely see in Geophysics today...

And also keep it honest

Bond, C, A Gibbs, Z Shipton, and S Jones (2007), What do you think this is? "Conceptual uncertainty" in geoscience interpretation. GSA Today 17 (11), DOI: 10.1130/GSAT01711A.1
I like to remind myself that interpreters are subjective and biased. I think we have to recognize this to get better at interpretation. There was a wonderful reaction on Twitter yesterday to a recent photo from Mars Curiosity (right) — a volcanologist thought it looked like a basalt, while a generalist thought it looked more like a sandstone. This terrific paper by Clare Bond and others will help you remember your biases!

My full list is right here. If you think there's something missing, please edit the wiki, or put your personal favourites in the comments.

The attribute figure is adapted from Barnes (2007) and is copyright of SEG. It may only be used in accordance with their Permissions guidelines. The Mars Curiosity figure is public domain.

O is for Offset

Offset is one of those jargon words that geophysicists kick around without a second thought, but which might bewilder more geological interpreters. Like most jargon words, offset can mean a couple of different things: 

  • Offset distance, which is usually what is meant by simply 'offset'.
  • Offset angle, which is often what we really care about.
  • We are not talking about offset wells or fault offset.

What is offset?

Sheriff's Encyclopedic Dictionary is characteristically terse:

Offset: The distance from the source point to a geophone or to the center of a geophone group.

The concept of offset only really makes sense in the pre-stack world — to field data and gathers. The traces in stacked data (everyday seismic volumes) combine data from many offsets. So let's look at the geometry of seismic acquisition. A map shows the layout of shots (red) and receivers (blue). We can define offset and azimuth A at the midpoint of every shot–receiver pair, on a map (centre) and in section (right):

Offset distance applies to traces. The offset distance is the straight-line distance from the vibrator, shot-hole or air-gun (or any other source) to the particular receiver that recorded the trace in question. If we know the geometry of the acquisition, and the size of the recording patch or length of the streamers, then we can calculate offset distance exactly. 
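
If you have the shot and receiver coordinates, the computation is trivial. Here's a minimal sketch in Python (the function name and the example coordinates are made up for illustration):

```python
import numpy as np

def offset_and_midpoint(src, rcv):
    """Offset distance and midpoint for one source-receiver pair.

    src, rcv: (x, y) map coordinates in metres.
    """
    src, rcv = np.asarray(src, dtype=float), np.asarray(rcv, dtype=float)
    offset = np.hypot(*(rcv - src))   # straight-line source-receiver distance
    midpoint = (src + rcv) / 2        # where the trace plots on the map
    return offset, midpoint

# A shot at the origin, recorded by a receiver 2 km to the east:
d, m = offset_and_midpoint((0, 0), (2000, 0))
print(d, m)   # 2000.0 [1000.    0.]
```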

Offset angle applies to specific samples on a trace. The offset angle is the incidence angle of the reflected ray that a given sample represents. Samples at the top of a trace have larger offset angles than those at the bottom, even though they have the same offset distance. To compute these angles, we need to know the vertical distances, and this requires knowledge of the velocity field, which is mostly unknown. So offset angle is not an objective measurement, but a partly interpreted quantity.
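
To get a feel for the numbers, here's a rough sketch under the crudest possible assumptions: straight rays, constant velocity, one flat reflector. A real calculation would trace rays through a velocity model, but the trend is the same; shallow samples see larger angles.

```python
import numpy as np

def incidence_angle(offset, depth):
    """Approximate offset angle in degrees for a flat reflector,
    assuming straight rays in a constant-velocity earth."""
    return np.degrees(np.arctan2(offset / 2.0, depth))

# The same 2000 m offset, at two depths on the same trace:
print(incidence_angle(2000, 500))    # ~63 degrees near the top
print(incidence_angle(2000, 3000))   # ~18 degrees near the bottom
```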

Why do we care?

Acquiring longer offsets can help undershoot gaps in a survey, or image beneath salt canopies and other recumbent features. Longer offsets also help with velocity estimation, because we see more moveout.
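
The moveout connection is the textbook normal moveout relation for a flat reflector, reproduced here for reference:

```latex
% Two-way traveltime t at offset x, for zero-offset time t_0
% and velocity v (flat reflector, constant velocity):
t(x) = \sqrt{t_0^2 + \frac{x^2}{v^2}}
% The moveout \Delta t = t(x) - t_0 grows with offset, so long
% offsets constrain the velocity much better than short ones.
```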

Looking at how the amplitude of a reflection changes with offset is the basis of AVO analysis. AVO analysis, in turn, is the basis of many fluid and lithology prediction techniques.

Offset is one of the five canonical dimensions of pre-stack seismic data, along with inline, crossline, azimuth, and frequency. As such, it is a key part of the search for sparsity in the 5D interpolation method perfected by Daniel Trad at CGGVeritas. 

Recently, geophysicists have become interested not just in the angle of a reflection, but in the orientation of a reflection too. This is because, in some geological circumstances, the amplitude of a reflection depends on the orientation with respect to the compass, as well as the incidence angle. For example, looking at data in both of these dimensions can help us understand the earth's stress field.

Offset is the characteristic attribute of pre-stack seismic data. Seismic data would be nothing without it.

News of the month

The last news of the year. Here's what caught our eye in December.

Online learning, at a price

There was an online university revolution in 2012 — think Udacity (our favourite), Coursera, edX, and others. Paradigm, often early to market with good new ideas, launched the Paradigm Online University this month. It's a great idea — but the access arrangement is the usual boring oil-patch story: only customers have access, and they must pay $150/hour — more than most classroom- and field-based courses! Imagine the value-add if it were open to all, or free to customers.

Android apps on your PC

BlueStacks is a remarkable new app for Windows and Mac that allows you to run Google's Android operating system on the desktop. This is potentially awesome news — there are over 500,000 apps on this platform. But it's only potentially awesome because it's still a bit... quirky. I tried running our Volume* and AVO* apps on my Mac and they do work, but they look rubbish. Doubtless the technology will evolve rapidly — watch this space. 

2PFLOPS HPC 4 BP

In March, we mentioned Total's new supercomputer, delivering 2.3 petaflops (quadrillion floating point operations per second). Now BP is building something comparable in Houston, aiming for 2 petaflops and 536 terabytes of RAM. To build it, the company has allocated 0.1 gigadollars to high-performance computing over the next 5 years.

Haralick textures for everyone

Matt wrote about OpendTect's new texture attributes just before Christmas, but the news is so exciting that we wanted to mention it again. It's exciting because Haralick textures are among the most interesting and powerful of multi-trace attributes — right up there with coherency and curvature. Their appearance in the free and open-source core of OpendTect is great news for interpreters.

That's it for 2012... see you in 2013! Happy New Year.

This regular news feature is for information only. We aren't connected with any of these organizations, and don't necessarily endorse their products or services. Except OpendTect, which we definitely do endorse.

Cope don't fix

Some things genuinely are broken. International financial practices. Intellectual property law. Most well tie software. 

But some things are the way they are because that's how people like them. People don't like sharing files, so they stash their own. Result: shared-drive cancer — no, it's not just your shared drive that looks that way. The internet is similarly wild, chaotic, and wonderful — but no-one uses Yahoo! Directory to find stuff. When chaos is inevitable, the only way to cope is fast, effective search.

So how shall we deal with the chaos of well log names? There are tens of thousands — someone at Schlumberger told me last week that they alone have over 50,000 curve and tool names. But these names weren't dreamt up to confound the geologist and petrophysicist — they reflect decades of tool development and innovation. There is meaning in the morass.

Standards are doomed

Twelve years ago POSC had a go at organizing everything. I don't know for sure what became of the effort, but I think it died. Most attempts at standardization are doomed. Standards are awash with compromise, so they aren't perfect for anything. And they can't keep up with changes in technology, because they take years to change. Doomed.

Instead of trying to fix the chaos, cope with it.

A search tool for log names

We need a search tool for log names. Here are some features it should have:

  • It should be free, easy to use, and fast
  • It should contain every log and every tool from every formation evaluation company
  • It should provide human- and machine-readable output to make it more versatile
  • You should get a result for every search, never drawing a blank
  • Results should include lots of information about the curve or tool, and links to more details
  • Users should be able to flag or even fix problems, errors, and missing entries in the database

To my knowledge, there are only two tools a little like this: Schlumberger's Curve Mnemonic Dictionary, and the SPWLA's Mnemonics Data Search. Schlumberger's widget only includes their tools, naturally. The SPWLA database does at least include curves from Baker Hughes and Halliburton, but it's at least 10 years out of date. Both fail if the search term is not found. And they don't provide machine-readable output, only HTML tables, so it's difficult to build a service on them.

Introducing fuzzyLAS

We don't know how to solve this problem, but we're making a start. We have compiled a database containing 31,000 curve names, and a simple interface and web API for fuzzily searching it. Our tool is called fuzzyLAS. If you'd like to try it out, please get in touch. We'd especially like to hear from you if you often struggle with rogue curve mnemonics. Help us build something useful for our community.
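
To give a flavour of what fuzzy searching means here, this is a minimal sketch (not the fuzzyLAS code) using Python's standard difflib, with a toy five-entry dictionary standing in for the 31,000-name database:

```python
from difflib import get_close_matches

# A toy mnemonic dictionary; the real database has ~31,000 entries.
MNEMONICS = {
    "DT":   "Sonic transit time (slowness)",
    "DTCO": "Compressional slowness",
    "GR":   "Gamma ray",
    "RHOB": "Bulk density",
    "NPHI": "Neutron porosity",
}

def fuzzy_lookup(query, n=3, cutoff=0.5):
    """Return the closest known mnemonics, so no search draws a blank."""
    hits = get_close_matches(query.upper(), MNEMONICS, n=n, cutoff=cutoff)
    return [(h, MNEMONICS[h]) for h in hits]

print(fuzzy_lookup("dtc"))
# [('DTCO', 'Compressional slowness'), ('DT', 'Sonic transit time (slowness)')]
```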

Seismic texture attributes — in the open at last

I read Brian West's paper on seismic facies a shade over ten years ago (West et al., 2002, right). It's a very nice story of automatic facies classification in seismic — in a deep-water setting, presumably in the Gulf of Mexico. I have re-read it, and handed it to others, countless times.

Ever since, I've wanted to be able to reproduce this workflow. It's one of the frustrations of the non-programming geophysicist that such reproduction is so hard (or expensive!). So hard that you may never quite manage it. Indeed, it took until this year, when Evan implemented the workflow in MATLAB, for a geothermal project. Phew!

But now we're moving to SciPy for our scientific programming, so Evan was looking at building the workflow again... until Paul de Groot told me he was building texture attributes into OpendTect, dGB's awesome, free, open source seismic interpretation tool. And this morning, the news came: OpendTect 4.4.0e is out, and it has Haralick textures! Happy Christmas, indeed. Thank you, dGB.

Parameters

There are four parameters to set, besides selecting an attribute. Choose a time gate, a kernel size, and the number of grey levels to reduce the image to (either 16 or 32 — more options might be nice here). You also have to choose the dynamic range of the data — don't go too wide with only 16 grey levels, or you'll throw almost all your data into one or two levels. Only the time gate and kernel size affect the run time substantially, and you'll want them to be big enough to capture your textures.
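
Under the hood, a Haralick attribute is a statistic of the grey-level co-occurrence matrix (GLCM) in each kernel. Here's a rough sketch of the computation in Python with scikit-image. This is not OpendTect's implementation, just the same idea; note these functions were spelled greycomatrix and greycoprops before scikit-image 0.19.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_kernel(img, levels=16):
    """Texture attributes for one small kernel of seismic amplitudes.

    img: 2D array. Quantize to `levels` grey levels (cf. OpendTect's
    16 or 32) over the data's dynamic range, then build the GLCM.
    """
    lo, hi = img.min(), img.max()      # the 'dynamic range' choice
    q = ((img - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "energy", "homogeneity")}

# Try it on random noise; swap in a kernel from a real amplitude slice.
print(haralick_kernel(np.random.rand(32, 32)))
```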

Reference
West, B, S May, J Eastwood, and C Rossen (2002). Interactive seismic facies classification using textural attributes and neural networks. The Leading Edge, October 2002. DOI: 10.1190/1.1518444

The seismic dataset is the F3 offshore Netherlands volume from the Open Seismic Repository, licensed CC-BY-SA.

2012 retrospective

The end of the year is nigh — time for our self-indulgent look-back at 2012. First, the most popular posts, not counting appearances on the main page. Remarkably, Shale vs tight got about twice as many hits as the second-place post.

  1. Shale vs tight, 1984 visits

  2. G is for Gather, 1090 visits (to permalink)

  3. What do you mean by average?, 1008 visits (to permalink)

The most commented-on posts are not necessarily the most-read. This is partly because posts get read for months after they're written, but comments tend to come right away. 

  1. Are conferences failing you too? (16 comments)

  2. Your best work(space) (13 comments)

  3. The Agile toolbox (13 comments)

Personal favourites

Evan

Matt

Where our readers come from

Our readership is global, but follows a power law distribution. About 75% of our readers this year were from one of nine countries: USA, Canada, UK, Australia, Norway, India, Germany, Indonesia, and Russia. Some of those are big countries, so we should correct for population — let's look at the number of Agile blog readers per million citizens:

  1. Norway — 292

  2. Canada — 283

  3. Australia — 108

  4. UK — 78

  5. Qatar — 72

  6. Brunei — 67

  7. Ireland — 57

  8. Iceland — 56

  9. Denmark — 46

  10. Netherlands — 46

So we're kind of a big deal in Norway. Hei hei Norge! Maybe we should write in Norwegian from here on.

Google Analytics tells us when people visit too. The busiest days are Tuesday, Wednesday, and Thursday, then Monday and Friday. Weekends are just crickets. Not surprisingly, the average reading time rises monotonically from Monday to Friday — reaching a massive 2:48 on Fridays. (Don't worry, dear manager, those are minutes!)

What we actually do

We don't write much about our work on this blog. In brief, here's what we've been up to:

  • Volume interpretation and rock physics for a geothermal field in southern California

  • Helping the Government of Canada get some of its subsurface data together

  • Curating subsurface content in a global oil & gas company's corporate wiki

  • Getting knowledge sharing off the ground at a Canadian oil & gas company

Oh yeah, we did launch this awesome little book too. That was a proud moment. 

We're looking forward to a fun-filled, idea-jammed, bee-busy 2013 — and wish the same for you. Thank you for your support and encouragement this year. Have a fantastic Yuletide.

Ten ways to spot pseudogeophysics

Geophysicists often try to predict rock properties using seismic attributes — an inverse problem. It is difficult, and it can be complicated. It can seem like black magic, or at least a black box. Geophysicists can pull the wool over their own eyes in the process, so don't be surprised if it seems like they are trying to pull the wool over yours. Instead, ask a lot of questions.

Questions to ask

  1. What is the reliability of the logs that are inputs to the prediction? Ask about hole quality and log editing.
  2. What about the seismic data? Ask about signal:noise, multiples, bandwidth, resolution limits, polarity, maximum offset angle (for AVO studies), and processing flow (e.g. Emsley, 2012).
  3. What is the quality of the well ties? Is the correlation good enough for the proposed application?
  4. Is there any physical reason why the seismic attribute should predict the proposed rock property? Was this explained to you? Were you convinced?
  5. Is the proposed attribute redundant (sensu Barnes, 2007)? Does it really give better results than a less sexy approach? I’ve seen 5-minute trace integration outperform month-long AVO inversions (Hall et al. 2006).
  6. What are the caveats and uncertainties in the analysis? Is there a quantitative, preferably Bayesian, treatment of the reliability of the predictions being made? Ask about the probability of a prediction being wrong.
  7. Is there a convincing relationship between the rock property (shear impedance, say) and some geologically interesting characteristic that you actually make decisions with, e.g. frackability?
  8. Is there a convincing relationship between the rock property and the seismic attribute at the wells? In other words, does the attribute actually correlate with the property where we have data?
  9. What does the low-frequency model look like? How was it made? Its maximum frequency should be about the same as the seismic data's minimum, no more.
  10. Does the geophysicist compute errors from the training error or the validation error? Training errors are not helpful because they are circular: they compare the input training data to the result you get when you use those very data in the model. Funnily enough, most geophysicists like to show the training error (right), but if the model is over-fit then of course it will predict very nicely at the well! It's the reliability away from the wells we are interested in, so we should examine the error we get when we pretend a well isn't there (see the sketch below). I prefer this to withholding 'blind' wells from the modeling — you should use all the data.
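
To make the distinction concrete, here's a minimal sketch of leave-one-well-out validation with scikit-learn. Everything in it (the attributes, the property, the three wells) is synthetic, just to show the mechanics:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                # seismic attributes at 3 wells
y = X[:, 0] + 0.1 * rng.normal(size=300)     # rock property to predict
wells = np.repeat(["A", "B", "C"], 100)      # which well each sample is from

model = LinearRegression().fit(X, y)
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(f"training error (flattering): {rmse:.3f}")

# Validation: pretend each well isn't there, and predict it from the others.
for train, test in LeaveOneGroupOut().split(X, y, groups=wells):
    m = LinearRegression().fit(X[train], y[train])
    err = np.sqrt(np.mean((m.predict(X[test]) - y[test]) ** 2))
    print(f"validation error at well {wells[test][0]}: {err:.3f}")
```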

Lastly, it might seem harsh but we could also ask if the geophysicist has a direct financial interest in convincing you that their attribute is sound, as well as the normal direct professional interest. It’s not a problem if they do, but be on your guard — people who are selling things are especially prone to bias. It's unavoidable.

What do you think? Are you bamboozled by the way geophysicists describe their predictions?

References
Barnes, A (2007). Redundant and useless seismic attributes. Geophysics 72 (3), P33–P38. DOI: 10.1190/1.2716717.
Emsley, D (2012). Know your processing flow. In: Hall & Bianco, eds, 52 Things You Should Know About Geophysics. Agile Libre.
Hall, M, B Roy, and P Anno (2006). Assessing the success of pre-stack inversion in a heavy oil reservoir: Lower Cretaceous McMurray Formation at Surmont. Canadian Society of Exploration Geophysicists National Convention, Calgary, Canada, May 2006. 

The image of the training error plot — showing predicted logs in red against input logs — is from Hampson–Russell's excellent EMERGE software. I'm claiming the use of the copyrighted image is fair use.  

The digital well scorecard

In my last post, I ranted about the soup of acronyms that refer to well log curves; a too-frequent book-keeping debacle. This pain, along with others before it, has motivated me to design a solution. At this point all I have is this sketch, a wireframe of should-be software that allows you to visualize every bit of borehole data you can think of:

The goal is: show me where the data is in the domain of the wellbore. I don't want to see the data explicitly (yet), just its whereabouts in relation to all other data: data from many disaggregated files, reports, and so on. It is part inventory, part book-keeping, part content management system. Clear the fog before the real work begins, because not even experienced folks can see clearly in a fog.

The scorecard doesn't yield a number or a grade point like a multiple choice test. Instead, you build up a quantitative display of your data extents. With the example shown above, I don't even have to look at the well log to tell you that you are in for a challenging well tie, given the absence of sonic measurements in the top half of the well.

The people that I showed this to immediately understood what was being expressed. They got it right away, so that bodes well for my preliminary sketch. Can you imagine using a tool like this, and if so, what features would you need?

Swimming in acronym soup

In a few rare instances, an abbreviation can become so well-known that it is adopted into everyday language; more familiar than the words it used to stand for. It's embarrassing, but I needed to actually look up LASER, and you might feel the same way about SONAR. These acronyms are the exception. Most are obscure barriers to entry in technical conversations. They can be constructs for wielding authority and exclusivity. Welcome to the club, if you know the password.

No domain of subsurface technology is riddled with more acronyms than well log analysis and formation evaluation. This is a big part of — perhaps too much of a part of — why petrophysics is hard. Last week, I came across a well with an extended suite of logs, and I wanted to make a synthetic. Have a glance at the image and see which curve names you recognize (the size represents how frequently each name is encountered across many files from the same well).

I felt like I was being spoken to by some earlier delinquent: I got yer well logs right here buddy. Have fun sorting this mess out.

The Log ASCII Standard (*.LAS) file format goes a long way toward exposing descriptive information in the header. But this information is often incomplete or missing, and it says nothing about the quality or completeness of the data. I had to scan 5 files to compile this soup. A micro-travesty and a failure, in my opinion. How does one turn this into meaningful information for geoscience?
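
Compiling the soup is at least scriptable. Here's a rough sketch of that inventory step using the open-source lasio library (an assumption on my part; any LAS reader, including the one mentioned at the end of this post, would do):

```python
from collections import Counter
from glob import glob

import lasio  # assumed LAS reader; swap in your own

counts = Counter()
for fname in glob("*.las"):              # the 5 files from this well, say
    las = lasio.read(fname)
    counts.update(curve.mnemonic for curve in las.curves)

# The raw material for the word cloud: each mnemonic and its frequency.
for mnemonic, n in counts.most_common():
    print(f"{mnemonic:12s} {n}")
```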

Whose job is it to sort this out? The service company that collected the data? The operator that paid for it? A third party down the road?

What I need is not only an acronym look-up table, but also a data range tool to show me what I've got in the file (or files), and at which locations and depths I've got it. A database to give me more information about these acronyms would be nice too, and a feature that allows me to compare multiple files, wells, and directories at once. It would be like a life preserver. Maybe we should build it.

I made the word cloud by pasting text into wordle.net. I extracted the text from the data files using the wonderful LASReader written by Warren Weckesser. Yay, open source!