Are conferences failing you too?

I recently asked an executive at a big software company whether big exhibitions are good marketing value. The reply:

It's not a waste of money. It's a colossal waste of money.

So that's a 'no'.

Is there a problem here?

Next week I'll be at the biggest exhibition (and conference) in our sector: the SEG Annual Meeting. Thousands of others will be there, but far more won’t. Clearly it’s not indispensable or unmissable. Indeed, it’s patently missable — I did just fine in my career as a geophysicist without ever going. Last year was my first time.

Is this just the nature of mass market conferences? Is the traditional academic format necessarily unremarkable? Do the technical societies try too hard to be all things to all people, and thereby miss the mark for everyone? 

I don't know the answer to any of these questions; I can only speak for myself. I'm getting tired of conferences. Perhaps I've reached some new loop in the meandering of my career, or perhaps I'm just grumpy. But as I've started to whine, I'm finding more and more allies in my conviction that conferences aren't awesome.

What are conferences for?

  • They make lots of money for the technical societies that organize them.
  • A good way to do this is to provide marketing and sales opportunities for the exhibiting vendors.
  • A good way to do this is to attract lots of scientists there, baiting with talks by all the awesomest ones.
  • A good way to do this, apparently, is to hold it in Las Vegas.

But I don't think the conference format is great at any of these things, except possibly the first one. The vendors get prospects (that's what sales folk call people) that are only interested in toys and beer — they might be users, but they aren't really customers. The talks are samey and mostly not memorable (and you can only see 5% of them). Even the socializing is limited by the fact that the conference is gigantic and run on a tight schedule. And don't get me started on Las Vegas. 

If we're going to take the trouble of flying 8000 people to Las Vegas, we had better have something remarkable to show for it. Do we? What do we get from this giant conference? By my conservative back-of-the-envelope calculation, we will burn through about 210 person-years of productivity in Las Vegas next week. That's about 6 careers' worth. Six! Are we as a community satisfied that we will produce 6 careers' worth of insight, creativity, and benefit?
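For the curious, here's that back-of-the-envelope calculation as a tiny script. The attendance, days away, and career length are my assumptions; change them and see how the answer moves.

```python
# Back-of-the-envelope cost of a big conference, in person-years.
# All of these inputs are assumptions; tweak them and see what you get.

attendees = 8000              # people converging on Las Vegas
days_away = 6                 # conference days plus travel
working_days_per_year = 230   # roughly 46 working weeks of 5 days
career_years = 35             # one working career

person_years = attendees * days_away / working_days_per_year
careers = person_years / career_years

print(f"{person_years:.0f} person-years, or about {careers:.0f} careers' worth")
# With these numbers: roughly 210 person-years, about 6 careers.
```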

You can probably tell that I am not convinced. Tomorrow, I will put away the wrecking ball of bellyaching, and offer some constructive ideas, and a promise. Meanwhile, if you have been to an amazing conference, or can describe one from your imagination, or think I'm just being a grouch — please use the comments below.


N is for Nyquist

In yesterday's post, I covered a few ideas from Fourier analysis for synthesizing and processing information. It serves as a primer for the next letter in our A to Z blog series: N is for Nyquist.

In seismology, the goal is to propagate a broadband impulse into the subsurface, and measure the reflected wavetrain that returns from the series of rock boundaries. A question that concerns the seismic experiment is: What sample rate should I choose to adequately capture the information from all the sinusoids that comprise the waveform? Sampling is the capturing of discrete data points from the continuous analog signal — a necessary step in recording digital data. Oversample it, using too high a sample rate, and you might run out of disk space. Undersample it and your recording will suffer from aliasing.

What is aliasing?

Aliasing is a phenomenon observed when the sample interval is not sufficiently brief to capture the higher range of frequencies in a signal. In order to avoid aliasing, each constituent frequency has to be sampled at least twice per cycle. So the Nyquist frequency is defined as half of the sampling frequency of a digital recording system. Nyquist has to be higher than all of the frequencies in the observed signal to allow perfect reconstruction of the signal from the samples.

Above Nyquist, the signal frequencies are not sampled twice per cycle, and they fold about Nyquist down to lower frequencies. So not obeying Nyquist is a double blow: not only do you fail to record the frequencies above Nyquist, but their aliases corrupt the frequencies you do record. Can you see this happening in the seismic reflection trace shown below? You may need to flip back and forth between the time domain and frequency domain representations of this signal.

Figure: Nyquist_trace, a seismic reflection trace shown in both the time domain and the frequency domain.

Seismic data is usually acquired with either a 4 millisecond sample interval (250 Hz sample rate) if you are offshore, or a 2 millisecond sample interval (500 Hz) if you are on land. A recording system with a 250 Hz sample rate has a Nyquist frequency of 125 Hz. So information coming in at 150 Hz will wrap around, or fold, to 100 Hz; 175 Hz will fold to 75 Hz, and so on.
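Here's a minimal sketch of that folding, assuming the 4 ms sampling above: a 150 Hz cosine sampled at 250 Hz lands on exactly the same sample values as a 100 Hz one.

```python
import numpy as np

dt = 0.004                 # 4 ms sample interval, i.e. a 250 Hz sample rate
fs = 1 / dt                # 250 Hz
nyquist = fs / 2           # 125 Hz
print(f"Nyquist: {nyquist:.0f} Hz")

t = np.arange(0, 0.2, dt)  # 200 ms of samples

f_in = 150.0               # a frequency above Nyquist
f_alias = fs - f_in        # it folds to 100 Hz

high = np.cos(2 * np.pi * f_in * t)
low = np.cos(2 * np.pi * f_alias * t)

# At this sample rate the two sinusoids are indistinguishable.
print(np.allclose(high, low))   # True
```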

It's important to note that the sample rate of the recording system has nothing to do with the frequencies present in the signal being observed. It turns out that most seismic acquisition systems are safe with Nyquist at 125 Hz, because seismic sources such as Vibroseis and dynamite don't send high frequencies very far; the earth filters and attenuates them before they arrive at the receiver.

Space alias

Aliasing can happen in space, as well as in time. When the pixels in this image are larger than half the width of the bricks, we see these beautiful curved artifacts. In this case, the aliasing patterns are created by the very subtle perspective warping of the curved bricks across a regularly sampled grid of pixels. It creates a powerful illusion, a wonderful distortion of reality: the observations were not sampled at a high enough rate to adequately capture the scene. Watch for this kind of thing on seismic records and sections: spatial aliasing.

Click for the full demonstration (or adjust your screen resolution). You may also have seen this dizzying illusion in video of an accelerating wheel, which suddenly appears to change direction once it rotates faster than the frame rate can capture. The classic example is the wagon wheel effect in old Western movies.

Aliasing is just one phenomenon to worry about when transmitting and processing geophysical signals. Anti-alias filters remove the offending frequencies before sampling, but they throw that information away; if you really care about recovering all the information the earth is spitting out at you, you probably need to oversample: at least two samples per cycle for the highest frequencies you care about.

The blind geoscientist

Last time I wrote about using randomized, blind, controlled tests in geoscience. Today, I want to look a bit closer at what such a test or experiment might look like. But before we do anything else, it's worth taking 20 minutes, or at least 4, to watch Ben Goldacre's recent talk on the subject, given at Strata in London:

How would blind testing work?

It doesn't have to be complicated, or much different from what you already do. Here’s how it could work for the biostrat study I mentioned last time:

  1. Collect the samples as normal. There is plenty of nuance here too: do you sample regularly, or do you target ‘interesting’ zones? Only regular sampling is free from bias, but it’s expensive.
  2. Label the samples with unique identifiers, perhaps well name and depth.
  3. Give the samples to a disinterested, competent person. They repackage the samples and assign new identifiers to them at random (there's a minimal relabelling sketch after this list).
  4. Send the samples for analysis. Provide no other data. Ask for the most objective analysis possible, without guesswork about sample identification or origin. The samples should all be treated in the same way.
  5. When you get the results, analyse the data for quality issues. Perform any analysis that does not depend on depth or well location — for example, cluster analysis.
  6. If you want to be really thorough, ask the disinterested party to provide depths only, allowing you to sort by well and by depth but without knowing which wells are which. Perform any analysis that doesn’t depend on spatial location.
  7. Finally, ask for the key that reveals well names. Hopefully, any problems with the data have already revealed themselves. At this point, if something doesn’t fit your expectations, maybe your expectations need adjusting!
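Here's the minimal relabelling sketch mentioned in step 3: roughly what the disinterested person might do. The sample names are made up; the key stays with them until step 7.

```python
import csv
import random

# Original sample identifiers (well name and depth); these are made up.
samples = ['Well-A_1200m', 'Well-A_1250m', 'Well-B_980m', 'Well-B_1010m']

# Make anonymous identifiers and shuffle them so the order carries no clues.
blind_ids = [f'SAMPLE-{i:03d}' for i in range(1, len(samples) + 1)]
random.shuffle(blind_ids)

key = dict(zip(blind_ids, samples))

# The disinterested person keeps the key; only the blind IDs go to the lab.
with open('blind_key.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['blind_id', 'original_id'])
    writer.writerows(sorted(key.items()))

print(sorted(key))   # the labels the lab sees
```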

Where else could we apply these ideas?

  1. Random selection of some locations in a drilling program, perhaps in contraindicated locations
  2. Blinded, randomized inspection of gathers, for example with different processing parameters
  3. Random selection of wells as blind control for a seismic inversion or attribute analysis
  4. Random selection of realizations from geomodel simulation, for example for flow simulation
  5. Blinded inspection of the results of a 'turkey shoot' or vendor competition (e.g. Hayles et al, 2011)

It strikes me that we often see some of this — one or two wells held back for blind testing, or one well in a program that targets a non-optimal location. But I bet they are rarely selected randomly (more like grudgingly), and blind samples are often peeked at ('just to be sure'). It's easy to argue that "this is a business, not a science experiment", but that's fallacious. It's because it's a business that we must get the science right. Scientific rigour serves the business.

I'm sure there are dozens of other ways to push in this direction. Think about the science you're doing right now. How could you make it a little less prone to bias? How can you make it a shade less likely that you'll pull the wool over your own eyes?

Experimental good practice

Like hitting piñatas, scientific experiments need blindfolds. Image: Juergen. CC-BY.

I once sent some samples to a biostratigrapher, who immediately asked for the logs to go with the well. 'Fair enough,' I thought, 'he wants to see where the samples are from'. Later, when we went over the results, I asked about a particular organism. I was surprised it was completely absent from one of the samples. He said, 'oh, it’s in there, it’s just not important in that facies, so I don’t count it.' I was stunned. The data had been interpreted before it had even been collected.

I made up my mind to do a blind test next time, but moved to another project before I got the chance. I haven’t ordered lab analyses since, so haven't put my plan into action. To find out if others already do it, I asked my Twitter friends:

Randomized, blinded, controlled testing should be standard practice in geoscience. I mean, if you can randomize trials of government policy, then rocks should be no problem. If there are multiple experimenters involved, like me and the biostrat guy in the story above, perhaps there’s an argument for double-blinding too.

Designing a good experiment

What should we be doing to make geoscience experiments, and the reported results, less prone to bias and error? I'm no expert on lab procedure, but for what it's worth, here are my seven Rs:

  • Randomized blinding or double-blinding. Look for opportunities to fight confirmation bias. There’s some anecdotal evidence that geochronologists do this, at least informally — can you do it too, or can you do more?
  • Regular instrument calibration, per manufacturer instructions. You should be doing this more often than you think you need to do it.
  • Repeatability tests. Does your method give you the same answer today as yesterday? Does an almost identical sample give you the same answer? Of course it does! Right? Right??
  • Report errors. Error estimates should be based on known problems with the method or the instrument, and on the outcomes of calibration and repeatability tests. What is the expected variance in your result? (There's a minimal error-summary sketch after this list.)
  • Report all the data. Unless you know there was an operational problem that invalidated an experiment, report all your data. Don’t weed it, report it. 
  • Report precedents. How do your results compare to others’ work on the same stuff? Most academics do this well, but industrial scientists should report this rigorously too. If your results disagree, why is this? Can you prove it?
  • Release your data. Follow Hjalmar Gislason's advice — use CSV and earn at least 3 Berners-Lee stars. And state the license clearly, preferably a copyfree one. Open data is not altruistic — it's scientific.
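And here's the minimal error-summary sketch mentioned above: made-up repeat measurements of one sample, summarized the way I'd like to see them reported.

```python
import statistics

# Repeat measurements of the same quantity on the same sample; made-up numbers
# (say, grain density in g/cm3).
repeats = [2.31, 2.28, 2.35, 2.30, 2.33, 2.29]

mean = statistics.mean(repeats)
stdev = statistics.stdev(repeats)            # sample standard deviation
sem = stdev / len(repeats) ** 0.5            # standard error of the mean

# Report the spread alongside the answer, not just the answer.
print(f"{mean:.3f} ± {stdev:.3f} (1 s.d., n = {len(repeats)}); s.e.m. {sem:.3f}")
```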

Why go to all this trouble? Listen to Richard Feynman:

The first principle is that you must not fool yourself, and you are the easiest person to fool.

Thank you to @ToriHerridge, @mammathus, @volcan01010, and @ZeticaLtd for the stories about blinded experiments in geoscience. There are at least a few out there. Do you know of others? Have you tried blinding? We'd love to hear from you in the comments!

M is for Migration

One of my favourite phrases in geophysics is the seismic experiment. I think we call it that to remind everyone, especially ourselves, that this is science: it's an experiment, it will yield results, and we must interpret those results. We are not observing anything, or remote sensing, or otherwise peering into the earth. When seismic processors talk about imaging, they mean image construction, not image capture.

The classic cartoon of the seismic experiment shows flat geology. Rays go down, rays refract and reflect, rays come back up. Simple. If you know the acoustic properties of the medium—the speed of sound—and you know the locations of the source and receiver, then you know where a given reflection came from. Easy!

But... some geologists think that the rocks beneath the earth's surface are not flat. Some geologists think there are tilted beds and faults and big folds all over the place. And, more devastating still, we just don't know what the geometries are. All of this means trouble for the geophysicist, because now the reflection could have come from an infinite number of places. This makes choosing a finite number of well locations more of a challenge. 

What to do? This is a hard problem. Our solution is arm-wavingly called imaging. We wish to reconstruct an image of the subsurface, using only our data and our sharp intellects. And computers. Lots of those.

Imaging with geometry

Agile's good friend Brian Russell wrote one of my favourite papers (Russell, 1998) — an imaging tutorial. Please read it (grab some graph paper first). He walks us through a simple problem: imaging a single dipping reflector.

Remember that in the seismic experiment, all we know is the location of the shots and receivers, and the travel time of a sound wave from one to the other. We do not know the reflection points in the earth. Once we allow for dipping geology, the reflection point is no longer pinned beneath the source–receiver midpoint; all we can compute, via the NMO equation, is the locus of all possible reflection points for each recorded travel time. Given the source–receiver distance, the travel time, and the speed of sound, that locus is an ellipse with the source and receiver at its foci, shown here in blue.
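If you would rather play along in code than on graph paper, here's a minimal sketch of that locus: one shot, one receiver, one travel time, and a constant velocity, all with made-up numbers.

```python
import numpy as np

v = 2500.0           # speed of sound, m/s (assumed constant)
t = 0.8              # recorded two-way travel time, s
xs, xr = 0.0, 600.0  # shot and receiver positions on the surface, m

path = v * t                  # total path length: shot to reflector to receiver
xm = 0.5 * (xs + xr)          # midpoint
c = 0.5 * abs(xr - xs)        # half-offset (distance from centre to each focus)
a = 0.5 * path                # semi-major axis of the ellipse
b = np.sqrt(a**2 - c**2)      # semi-minor axis

theta = np.linspace(0, np.pi, 200)
x = xm + a * np.cos(theta)    # lateral positions of possible reflection points
z = b * np.sin(theta)         # depths of possible reflection points

# Every (x, z) pair honours the recorded travel time exactly.
d = np.hypot(x - xs, z) + np.hypot(x - xr, z)
print(np.allclose(d, path))   # True
```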

Clearly, knowing all possible reflection points is interesting, but not very useful. We want to know which reflection point our recorded echo came from. It turns out we can do something quite easy, if we have plenty of data. Fortunately, we geophysicists always bring lots and lots of receivers along to the seismic experiment. Thousands usually. So we got data.

Now for the magic. Remember Huygens' principle? It says we can imagine a wavefront as a series of little secondary waves, the sum of which shows us what happens to the wavefront. We can apply this idea to the problem of the tilted bed. We have lots of little wavefronts — one for each receiver. Instead of trying to figure out the location of each reflection point, we just compute all possible reflection points, for all receivers, then add them all up. The wavefronts add constructively at the reflector, and we get the solution to the imaging problem. It's kind of a miracle. 
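And here's a minimal sketch of that summation. It is not Brian's exercise, and not a real migration, just the idea: smear each recorded arrival along its ellipse of possible reflection points and stack. The geometry, velocity, and reflector below are all made up.

```python
import numpy as np

v = 2500.0                                # constant velocity, m/s
xs = 0.0                                  # one shot at the origin
receivers = np.linspace(100, 2000, 40)    # forty receivers along the surface

def reflector_z(x):
    """A dipping reflector, unknown to the 'processor': z = 600 + 0.2 x."""
    return 600.0 + 0.2 * x

# Forward model: travel time for each trace, via Fermat's principle
# (brute-force minimum over densely sampled reflector points).
rx = np.linspace(-2000, 4000, 6000)
rz = reflector_z(rx)
times = []
for xr in receivers:
    paths = np.hypot(rx - xs, rz) + np.hypot(rx - xr, rz)
    times.append(paths.min() / v)

# Imaging: for each trace, add a fuzzy ellipse of possible reflection
# points to the image. The ellipses add up constructively on the reflector.
x = np.linspace(0, 2000, 201)
z = np.linspace(0, 1500, 151)
X, Z = np.meshgrid(x, z)
image = np.zeros_like(X)
for xr, t in zip(receivers, times):
    dist = np.hypot(X - xs, Z) + np.hypot(X - xr, Z)
    image += np.exp(-((dist - v * t) / 20.0) ** 2)   # a 20 m wide 'ellipse'

# The brightest point of the stacked image should sit on, or very near, the reflector.
iz, ix = np.unravel_index(image.argmax(), image.shape)
print(f"image peak at x = {x[ix]:.0f} m, z = {z[iz]:.0f} m; "
      f"the reflector there is at z = {reflector_z(x[ix]):.0f} m")
```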

Try it yourself. Brian Russell's little exercise is (geeky) fun. It will take you about an hour. If you're not a geophysicist, and even if you are, I guarantee you will learn something about how the miracle of the seismic experiment works.

Reference
Russell, B (1998). A simple seismic imaging exercise. The Leading Edge 17 (7), 885–889. DOI: 10.1190/1.1438059

L is for Lambda

Hooke's law says that the force F exerted by a spring depends only on its displacement x from equilibrium, and the spring constant k of the spring:

F = −kx

We can think of k—and experience it—as stiffness. The spring constant is a property of the spring. In a sense, it is the spring. Rocks are like springs, in that they have some elasticity. We'd like to know the spring constant of our rocks, because it can help us predict useful things like porosity. 

Hooke's law is the basis for elasticity theory, in which we express the law as

stress [force per unit area] is equal to strain [deformation] times a constant

This time the constant of proportionality is called the elastic modulus. And there isn't just one of them. Why more complicated? Well, rocks are like springs, but they are three dimensional.

In three dimensions, assuming isotropy, the shear modulus μ plays the role of the spring constant for shear waves. But for compressional waves we need λ+2μ, a quantity called the P-wave modulus. So λ is one part of the term that tells us how rocks get squished by P-waves.

These mysterious quantities λ and µ are Lamé's first and second parameters. They are intrinsic properties of all materials, including rocks. Like all elastic moduli, they have units of force per unit area, or pascals [Pa].
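In practice we usually get at λ and µ from velocities and density, using the standard isotropic relationships µ = ρVs² and λ = ρVp² − 2µ. Here's a minimal sketch with made-up, vaguely sandstone-like log values:

```python
# Lamé parameters from P-wave velocity, S-wave velocity, and bulk density.
# The numbers below are made up; substitute your own log values.

vp = 3500.0      # P-wave velocity, m/s
vs = 2000.0      # S-wave velocity, m/s
rho = 2400.0     # bulk density, kg/m3

mu = rho * vs**2          # shear modulus, Lamé's second parameter, Pa
m = rho * vp**2           # P-wave modulus, M = lambda + 2 mu
lam = m - 2 * mu          # Lamé's first parameter, Pa

print(f"mu     = {mu / 1e9:.1f} GPa")
print(f"lambda = {lam / 1e9:.1f} GPa")
print(f"M      = {m / 1e9:.1f} GPa")
```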

So what is λ?

Matt and I have spent several hours discussing how to describe lambda. Unlike Young's modulus E, or Poisson's ratio ν, our friend λ does not have a simple physical description. Young's modulus just determines how much longer something gets when I stretch it. Poisson's ratio tells how much fatter something gets if I squeeze it. But lambda... what is lambda?

  • λ is sometimes called incompressibility, a name best avoided because it's sometimes also used for the bulk modulus, K.  
  • If we apply stress σ1 along the 1 direction to this linearly elastic isotropic cube (right), then λ represents the 'spring constant' that scales the strain ε along the directions perpendicular to the applied stress.
  • The derivation of Hooke's law in 3D requires tensors, which we're not getting into here. The point is that λ and μ help give the simplest form of the equations (right, shown for one dimension).

The significance of elastic properties is that they determine how a material is temporarily deformed by a passing seismic wave. Shear waves propagate by orthogonal displacements relative to the propagation direction—this deformation is determined by µ. In contrast, P-waves propagate by displacements parallel to the propagation direction, and this deformation is inversely proportional to the P-wave modulus M = λ + 2µ.

Lambda rears its head in seismic petrophysics and AVO inversion, and it supplies the first letter in the acronym of Bill Goodway's popular LMR inversion method (Goodway, 2001). Even though it is fundamental to seismic, there's no doubt that λ is not intuitively understood by most geoscientists. Have you ever tried to explain lambda to someone? What description of λ do you find useful? I'm open to suggestions.

Goodway, B (2001). AVO and Lamé constants for rock parameterization and fluid detection. CSEG Recorder 26 (6), 39–60.

Cross plots: a non-answer

On Monday I asked whether we should make crossplots according to statistical rules or natural rules. There was some fun discussion, and some awesome computation from Henry Herrera, and a couple of gems:

Physics likes math, but math doesn't care about physics — @jeffersonite

But... when I consider the intercept point I cannot possibly imagine a rock that has high porosity and zero impedance — Matteo Niccoli, aka @My_Carta

I tried asking on Stack Overflow once, but didn’t really get to the bottom of it, or perhaps I just wasn't convinced. The consensus seems to be that the statistical answer is to put porosity on the y-axis, because that way you minimize the prediction error on porosity. But I feel—and this is just my flaky intuition talking—like this fails to represent nature (whatever that means), and so maybe that error reduction is spurious somehow.

Reversing the plot to what I think of as the natural, causation-respecting plot may not be that unreasonable. It's effectively the same as reducing the error on what was x (that is, impedance), instead of y. Since impedance is our measured data, we could say this regression respects the measured data more than the statistical, non-causation-respecting plot.

So must we choose? Minimize the error on the prediction, or minimize the error on the predictor? Let's see. In the plot on the right, I used the two methods to predict porosity at the red points from the blue. That is, I did the regression on the blue points; the red points are my blind data (new wells, perhaps). Surprisingly, the statistical method gives an RMS error of 0.034, the natural method 0.023. So my intuition is vindicated!

Unfortunately if I reverse the datasets and instead model the red points, then predict the blue, the effect is also reversed: the statistical method does better with 0.029 instead of 0.034. So my intuition is wounded once more, and limps off for an early bath.
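If you want to run this kind of test on your own data, here's a minimal sketch of the comparison. The arrays are placeholders; substitute your own impedance and porosity, split into a fitting set and a blind set.

```python
import numpy as np

# Placeholder data: impedance (x) and porosity (y) for the fitting and blind wells.
imp_fit = np.array([6500., 7200., 8100., 8800., 9500., 10300.])
phi_fit = np.array([0.24, 0.21, 0.18, 0.16, 0.13, 0.10])
imp_blind = np.array([7000., 9000., 9900.])
phi_blind = np.array([0.22, 0.15, 0.12])

def rms(error):
    return np.sqrt(np.mean(error**2))

# 'Statistical' plot: regress porosity on impedance, minimizing the porosity misfit.
a1, b1 = np.polyfit(imp_fit, phi_fit, 1)
pred_stat = a1 * imp_blind + b1

# 'Natural' plot: regress impedance on porosity, then rearrange for porosity.
a2, b2 = np.polyfit(phi_fit, imp_fit, 1)
pred_nat = (imp_blind - b2) / a2

print("statistical RMS error:", rms(pred_stat - phi_blind))
print("natural RMS error:    ", rms(pred_nat - phi_blind))
```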

Irreducible error?

Here's what I think: there's an irreducible error of prediction. We can beg, borrow or steal error from one variable, but then it goes on the other. It's reminiscent of Heisenberg's uncertainty principle, but in this case, we can't have arbitrarily precise forecasts from imperfectly correlated data. So what can we do? Pick a method, justify it to yourself, test your assumptions, and then be consistent. And report your errors at every step. 

I'm reminded of the adage 'Correlation does not equal causation.' Indeed. And, to borrow @jeffersonite's phrase, it seems correlation also does not care about causation.

Cross plot or plot cross?

I am stumped. About once a year, for the last nine years or so, I have failed to figure this out.

What could be simpler than predicting porosity from acoustic impedance? Well, lots of things, but let’s pretend for a minute that it’s easy. Here’s what you do:

1.   Measure impedance at a bunch of wells
2.   Measure the porosity — at seismic scale of course — at those wells
3.   Make a crossplot with porosity on the y-axis and impedance on the x-axis
4.   Plot the data points and plot the regression line (let’s keep it linear)
5.   Find the equation of the line, which is of the form y = ax + b, or porosity = gradient × impedance + constant
6.   Apply the equation to a map (or volume, if you like) of impedance, and Bob's your uncle.

Easy!
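In code, the recipe above really is only a few lines. Here's a minimal sketch, with placeholder numbers standing in for your well data and your impedance map:

```python
import numpy as np

# Steps 1 and 2: impedance and porosity at a handful of wells (placeholder values).
impedance = np.array([6500., 7200., 8100., 8800., 9500., 10300.])
porosity = np.array([0.24, 0.21, 0.18, 0.16, 0.13, 0.10])

# Steps 3 to 5: regress porosity (y) against impedance (x): porosity = a * Z + b.
a, b = np.polyfit(impedance, porosity, 1)

# Step 6: apply the equation to a map of impedance (here, a tiny 2 x 3 'map').
imp_map = np.array([[7000., 8000., 9000.],
                    [7500., 8500., 9500.]])
phi_map = a * imp_map + b

print(f"porosity = {a:.2e} * impedance + {b:.2f}")
print(phi_map.round(3))
```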

But, wait a minute. Is Bob your uncle after all? The parameter on the y-axis is also called the dependent variable, and that on the x-axis the independent. In other words, the crossplot represents a relationship of dependency, or causation. Well, porosity certainly does not depend on impedance — it’s the other way around. To put it another way, impedance is not the cause of porosity. So the natural relationship should put impedance, not porosity, on the y-axis. Right?

Therefore we should change some steps:

3.   Make a crossplot with impedance on the y-axis and porosity on the x-axis
4.   Plot the data points and plot the regression line
5a. Find the equation of the line, which is of the form y = ax + b, or impedance = gradient × porosity + constant
5b. Rearrange the equation for what we really want:
porosity = (impedance – constant)/gradient

Not quite as easy! But still easy.

More importantly, this gives a different answer. Bob is not your uncle after all. Bob is your aunt. To be clear: you will compute different porosities with these two approaches. So then we have to ask: which is correct? Or rather, since neither is going to give us the ‘correct’ porosity, which is better? Which is more physical? Do we care about physicality?

I genuinely do not know the answer to this question. Do you?

If you're interested in playing with this problem, the data I used are from Imaging reservoir quality seismic signatures of geologic effects, report number DE-FC26-04NT15506 for the US Department of Energy by Gary Mavko et al. at Stanford University. I digitized their figure D-8; you can download the data as a CSV here. I have only plotted half of the data points, so I can use the rest as a blind test. 

Great geophysicists #5: Huygens

Christiaan Huygens was a Dutch physicist. He was born in The Hague on 14 April 1629, and died there on 8 July 1695. It's fun to imagine these times: he was a little older than Newton (born 1643), a little younger than Fermat (1601), and about the same age as Hooke (1635). He lived in England and France and must have met these men.

It's also fun to imagine the intellectual wonder life must have held for a wealthy, educated person in those early Enlightenment years. Everyone, it seems, was a polymath: Huygens made substantial contributions to probability, mechanics, astronomy, optics, and horology. He was the first to describe Saturn's rings. He invented the pendulum clock.

Then again, he also tried to build a combustion engine that ran on gunpowder. 

Geophysicists (and most other physicists) know him for his work on wave theory, which prevailed over Newton's corpuscles—at least until quantum theory. In his Treatise on Light, Huygens described a model for light waves that predicted the effects of reflection and refraction. Interference had to wait more than a century for Fresnel. He even explained birefringence, the anisotropy that gives rise to the double refraction in calcite.

The model that we call the Huygens–Fresnel principle consists of spherical waves emanating from every point in a light source, such as a candle's flame. The sum of these manifold wavefronts predicts the distribution of the wave everywhere and at all times in the future. It's a sort of infinitesimal calculus for waves. I bet Newton secretly wished he'd thought of it.

Fold for sale

A few weeks ago I wrote a bit about seismic fold, and why it's important for seeing through noise. But how do you figure out the fold of a seismic survey?

The first thing you need to read is Norm Cooper's terrific two-part land seismic tutorial. One of his main points is that it's not really fold we should worry about, it's trace density. Essentially, this normalizes the fold by the area of the natural bins (the areal patches into which we will gather traces for the stack). To compute trace density, you need the effective maximum offset Xmax (or the depth, in a pinch), the source and receiver line spacings S and R, and the source and receiver station intervals s and r.
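Here's a minimal sketch of that calculation. I'm assuming the usual approximation for an orthogonal geometry, trace density ≈ π Xmax² / (S R s r), so check it against Cooper's tutorial before trusting it; the geometry numbers are made up.

```python
import math

# Made-up orthogonal land 3D geometry; substitute your own numbers.
xmax = 2000.0    # effective maximum offset, m (or target depth, in a pinch)
S = 400.0        # source line spacing, m
R = 300.0        # receiver line spacing, m
s = 50.0         # source station interval, m
r = 50.0         # receiver station interval, m

# Assumed trace density formula: traces per unit area of midpoints.
density = math.pi * xmax**2 / (S * R * s * r)      # traces per m2
print(f"trace density: {density * 1e6:,.0f} traces per km²")

# Fold over the natural bins (s/2 by r/2), for comparison.
fold = density * (s / 2) * (r / 2)
print(f"fold: {fold:.0f}")
```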

Cooper helpfully gave ballpark ranges for increasingly hard imaging problems. I've augmented it, based on my own experience. Your mileage may vary!

Traces cost money

So we want more traces. The trouble is, traces cost money. The chart below reflects my experiences in the bitumen sands of northern Alberta (as related in Hall 2007). The model I'm using is a square land 3D with an orthogonal geometry and no overlaps (that is, a single swath), and 2007 prices. A trace density of 50 traces/km² is equivalent to a fold of 5 at 500 m depth. As you see, the cost of seismic increases as we buy more traces for the stack. Fun fact: at a density of about 160 000 traces/km², the cost is exactly $1 per trace. The good news is that the cost increases with the square root of the trace density (more or less), so the incremental cost of adding more traces gets progressively cheaper:

Given that you have limited resources, your best strategy for hitting the 'sweet spot'—if there is one—is lots and lots of testing. Keep careful track of what things cost, so you can compute the probable cost benefit of, say, halving the trace density. With good processing, you'll be amazed what you can get away with, but of course you risk coping badly with unexpected problems in the near surface.

What do you think? How do you make decisions about seismic geometry and trace density?

References

Cooper, N (2004). A world of reality—Designing land 3D programs for signal, noise, and prestack migration, Parts 1 and 2. The Leading Edge. October and December, 2004. 

Hall, M (2007). Cost-effective, fit-for-purpose, lease-wide 3D seismic at Surmont. SEG Development and Production Forum, Edmonton, Canada, July 2007.