Touring vs tunnel vision

My experience with software started, and still largely sits, at the user end. More often than not, interacting with another's design. One thing I have learned from the user experience is that truly great interfaces are engineered to stay out of the way. The interface is only a skin atop the real work that software does underneath — taking inputs, applying operations, producing outputs. I'd say most users of computers don't know how to compute without an interface. I'm trying to break free from that camp. 

In The dangers of default disdain, I wrote about the power and control that the technology designer has over his users. A kind of tunnel is imposed that restricts the choices for interacting with data. And for me, maybe for you as well, the tunnel has been a welcome structure, directing my focus towards that distant point; the narrow aperture compels at least some forward motion. I've unknowingly embraced the tunnel vision as a means of interacting without substantial choices, without risk, without wavering digressions. I think it's fair to say that without this tunnel, most travellers would find themselves stuck, incapacitated by the hard graft of touring over or around the mountain.

Tour guides instead of tunnels

But there is nothing to do inside the tunnel, no scenery to observe, just a black void between input and output. For some tasks, taking the tunnel is the only obvious and economic choice — all you want is to get stuff done. But choosing the tunnel means you will be missing things along the way. It's a trade-off.

For getting from A to B, there are engineers to build tunnels, there are travellers to travel the tunnels, and there is a third kind of person altogether: tour guides take the scenic route. Building your own tunnel is a grand task, only worthwhile if you can find enough passengers to use it. The scenic route isn't just a casual, lackadaisical approach. It's necessary for understanding the landscape; by taking it the traveller becomes connected with the territory. The challenge for software and technology companies is to expose people to the richness of their environment while moving them through at an acceptable pace. Is it possible to have a tunnel with windows?

Oil and gas operating companies are good at purchasing the tunnel access pass, but not very good at building a robust set of tools to navigate the landscape of their data environment. After all, that is the thing we travellers need to be in constant contact with. Touring or tunneling? The two approaches may or may not arrive at the same destination, and they have different costs along the way; they are different businesses altogether.

Segmentation and decomposition

Day 4 of the SEG Annual Meeting in Las Vegas was a game of two halves: talks in the morning and workshops in the afternoon. I caught two signal processing talks, two image processing talks, and two automatic interpretation talks, then spent the afternoon in a new kind of workshop for students. My highlights:

Anne Solberg, DSB, University of Oslo

Evan and I have been thinking about image segmentation recently, so I'm drawn to those talks (remember Halpert on Day 2?). Angélique Berthelot et al. have been doing interesting work on salt body detection. Solberg (Berthelot's supervisor) showed some remarkable results. Their algorithm:

  1. Compute texture attributes, including Haralick and wavenumber textures (Solberg 2011)
  2. Supervised Bayesian classification (we've been using fuzzy c-means)
  3. 3D regularization and segmentation (okay, I got a bit lost at this point)

The results are excellent, echoing human interpretation well (right) — but having the advantage of being objective and repeatable. I was especially interested in the wavenumber textures, and think they'll help us in our geothermal work. 
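
To make the flavour of steps 1 and 2 a little more concrete, here is a rough sketch (emphatically not Berthelot and Solberg's code): GLCM, or Haralick-style, textures from scikit-image, with a Gaussian naive Bayes classifier standing in for their supervised Bayesian step. The window sizes, the two texture properties, and the classifier are all illustrative choices of mine.

```python
# A rough sketch of the flavour of steps 1 and 2 (not the authors' code):
# GLCM (Haralick-style) textures with a Gaussian naive Bayes stand-in for
# the supervised Bayesian classification step.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
from sklearn.naive_bayes import GaussianNB

def glcm_textures(img, win=32, step=16):
    """Sliding-window texture attributes (contrast, homogeneity) on a 2D image."""
    img8 = np.uint8(255 * (img - img.min()) / (np.ptp(img) + 1e-12))
    feats = []
    for i in range(0, img8.shape[0] - win, step):
        for j in range(0, img8.shape[1] - win, step):
            glcm = graycomatrix(img8[i:i+win, j:j+win],
                                distances=[1], angles=[0, np.pi/2],
                                levels=256, symmetric=True, normed=True)
            feats.append([graycoprops(glcm, 'contrast').mean(),
                          graycoprops(glcm, 'homogeneity').mean()])
    return np.array(feats)

# With interpreter-labelled windows (salt / not salt) you would then do something like:
#   clf = GaussianNB().fit(train_features, train_labels)
#   salt_prob = clf.predict_proba(glcm_textures(seismic_slice))
```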

Jiajun Han, BLISS, University of Alberta

The first talk of the day was that classic oil-industry thing: a patented technique with an obscure relationship to theory. But Jiajun Han and Mirko van der Baan of the University of Alberta gave us the real deal — a special implementation of empirical mode decomposition, which is a way to analyse time scales (frequencies, essentially) without leaving the time domain. The result is a set of intrinsic mode functions (IMFs), a bit like Fourier components, from which Han extracts instantaneous frequency. It's a clever idea, and the results are impressive. Time–frequency displays usually show smearing in either the time or frequency domain, but Han's method pinpoints the signals precisely.
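
If you want to try the general recipe yourself (this is not Han and van der Baan's implementation), the third-party PyEMD package will give you the IMFs, and a Hilbert transform gives instantaneous frequency. The toy trace and all parameters below are made up.

```python
# A minimal sketch of the general idea, not the authors' implementation.
# Assumes the third-party PyEMD package (pip install EMD-signal) is available.
import numpy as np
from PyEMD import EMD                 # empirical mode decomposition
from scipy.signal import hilbert      # analytic signal for instantaneous attributes

dt = 0.002                            # 2 ms sample interval
t = np.arange(0, 1, dt)
trace = np.sin(2*np.pi*30*t) + 0.5*np.sin(2*np.pi*70*t)   # toy 'seismic' trace

imfs = EMD().emd(trace, t)            # intrinsic mode functions, one per row

for k, imf in enumerate(imfs):
    phase = np.unwrap(np.angle(hilbert(imf)))
    inst_freq = np.diff(phase) / (2 * np.pi * dt)          # Hz
    print(f"IMF {k}: median instantaneous frequency ~ {np.median(inst_freq):.1f} Hz")
```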

That's it from me for SEG — I fly home tomorrow. It's tempting to stay for the IQ Earth workshop tomorrow, but I miss my family, and I'm not sure I can crank out another post. If you were in Vegas and saw something amazing (at SEG I mean), please let us know in the comments below. If you weren't, I hope you've enjoyed these posts. Maybe we'll see you in Houston next year!

More posts from SEG 2012.

The images adapted from Berthelot and Han are from the 2012 Annual Meeting proceedings. They are copyright of SEG, and used here in accordance with their permissions guidelines.

Brittleness and robovibes


Day 3 of the SEG Annual Meeting was just as rammed with geophysics as the previous two days. I missed this morning's technical program, however, as I've taken on the chairpersonship (if that's a word) of the SEG Online Committee. So I had fun today getting to grips with that business. Aside: if you have opinions about SEG's online presence, please feel free to send them my way.

Here are my highlights from the rest of the day — both were footnotes in their respective talks:

Brittleness — Lev Vernik, Marathon

Evan and I have had a What is brittleness? post in our Drafts folder for almost two years. We're skeptical of the prevailing view that a shale's brittleness is (a) a tangible rock property and (b) a function of Young's modulus and Poisson's ratio, as proposed by Rickman et al. 2008, SPE 115258. To hear such an intellect as Lev declare the same today convinced me that we need to finish that post — stay tuned for that. Bottom line: computing shale brittleness from elastic properties is not physically meaningful. We need to find more appropriate measures of frackability, [Edit, May 2015; Vernik tells me the following bit is the opposite of what he said, apologies for my cloth ears...] which Lev pointed out is, generally speaking, inversely proportional to organic content. This poses a basic conflict for those exploiting shale plays. [End of public service announcement.]

Robovibes — Guus Berkhout, TU Delft

At least 75% of Berkhout's talk went over my head today. I stopped writing notes, which I only do when I'm defeated. But once he'd got his blended source stuff out of the way, he went rogue and asked the following questions:

  1. Why do we combine all seismic frequencies into one device? Audio got over this years ago (right).
  2. Why do we put all the frequencies at the same location? Cf. 7.1 surround sound.
  3. Why don't we try more crazy things in acquisition?

I've wondered the same thing myself — thinking more about the receiver side than the sources — after hearing, at a PIMS Lunchbox Lecture, about the brilliant sampling strategy the Square Kilometer Array is using. But Berkhout didn't stop at just spreading a few low-frequency vibrators around the place. No, he wants robots. He wants an autonomous army of flying and/or floating narrow-band sources, each on its own grid, each with its own ghost matching, each with its own deblending code. This might be the cheapest million-channel acquisition system possible. Berkhout's aeronautical vibrator project starts in January. Seriously.

More posts from SEG 2012.

Speaker image is licensed CC-BY-SA by Tobias Rütten, Wikipedia user Metoc.

Smoothing, unsmoothness, and stuff

Day 2 at the SEG Annual Meeting in Las Vegas continued with 191 talks and dozens more posters. People are rushing around all over the place — there are absolutely no breaks, other than lunch, so it's easy to get frazzled. Here are my highlights:

Adam Halpert, Stanford

Image segmentation is an important class of problems in computer vision. An application to seismic data is to automatically pick a contiguous cloud of voxels from the 3D seismic image — a salt body, perhaps. Before trying to do this, it is common to reduce noise (e.g. roughness and jitter) by smoothing the image. The trick is to do this without blurring geologically important edges. Halpert did the hard work and assessed a number of smoothers for both efficacy and efficiency: median (easy), Kuwahara, maximum homogeneity median, Hale's bilateral [PDF], and AlBinHassan's filter. You can read all about his research in his paper online [PDF]. 
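
Here is a minimal way to play with two of these smoothers at home (not Halpert's code): a plain median filter from scipy versus an edge-preserving bilateral filter from scikit-image, applied to a noisy synthetic edge. The image and parameters are arbitrary.

```python
# A toy comparison of two of the smoothers (not Halpert's code): median filter
# versus edge-preserving bilateral filter on a noisy image with a sharp edge.
import numpy as np
from scipy.ndimage import median_filter
from skimage.restoration import denoise_bilateral

rng = np.random.default_rng(0)
step = np.where(np.arange(200) < 100, 0.25, 0.75)        # a sharp 'geological' edge
img = np.clip(np.tile(step, (200, 1))
              + 0.1 * rng.standard_normal((200, 200)), 0, 1)

smooth_median = median_filter(img, size=5)               # blunt but cheap
smooth_bilateral = denoise_bilateral(img, sigma_color=0.1, sigma_spatial=3)

# The bilateral filter should keep the step crisp while suppressing the jitter:
# compare the sharpness of the edge in the two smoothed images.
edge = lambda a: np.abs(np.diff(a.mean(axis=0))).max()
print(edge(smooth_median), edge(smooth_bilateral))
```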

Dave Hale, Colorado School of Mines

Automatic fault detection is a long-standing problem in interpretation. Methods tend to focus on optimizing a dissimilarity image of some kind (e.g. Bø 2012 and Dorn 2012), or on detecting planar discontinuities in that image. Hale's method is, I think, a new approach. And it seems to work well, finding fault planes and their throw (right).

Fear not, it's not complete automation — the method can't organize fault planes, interpret their meaning, or discriminate artifacts. But it is undoubtedly faster, more accurate, and more objective than a human. His test dataset is the F3 dataset from dGB's Open Seismic Repository. The shallow section, which resembles the famous polygonally faulted Eocene of the North Sea and elsewhere, contains point-up conical faults that no human would have picked. He is open to explanations of this geometry. 
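
To give a sense of what a dissimilarity image is (and this is nothing like Hale's method, just the crude starting point such workflows share), here is a windowed-variance attribute on a toy faulted cube. The cube, window, and fault throw are all invented for the sketch.

```python
# Nothing like Hale's method: just a crude windowed-variance attribute, the sort
# of 'dissimilarity' image that fault-detection workflows often start from.
import numpy as np
from scipy.ndimage import uniform_filter

def discontinuity(cube, window=(3, 3, 9)):
    """Local variance of a seismic amplitude cube (inline, xline, samples).
    High values flag laterally inconsistent reflections, e.g. at faults."""
    mean = uniform_filter(cube, size=window)
    mean_sq = uniform_filter(cube * cube, size=window)
    return mean_sq - mean * mean                     # var = E[x^2] - E[x]^2

# Toy cube: flat 'reflectors', offset by a vertical fault half way across.
rng = np.random.default_rng(1)
cube = np.tile(np.sin(2 * np.pi * np.arange(64) / 16), (32, 32, 1))
cube[:, 16:, :] = np.roll(cube[:, 16:, :], 4, axis=-1)   # throw of 4 samples
attr = discontinuity(cube + 0.05 * rng.standard_normal(cube.shape))
print(attr.mean(axis=(0, 2)))   # the variance profile peaks around xline 16
```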

Other good bits

John Etgen and Chandan Kumar of BP made a very useful tutorial poster about the differences and similarities between pre-stack time and depth migration. They busted some myths about PreSTM:

  • Time migration is actually not always more amplitude-friendly than depth migration.
  • Time migration does not necessarily produce less noisy images.
  • Time migration does not necessarily produce higher frequency images.
  • Time migration is not necessarily less sensitive to velocity errors.
  • Time migration images do not necessarily have time units.
  • Time migrations can use the wave equation.
  • But time migration is definitely less expensive than depth migration. That's not a myth.

Brian Frehner of Oklahoma State presented his research [PDF] to the Historical Preservation Committee, whose meeting I happened to be in this morning. Check out his interesting-looking book, Finding Oil: The Nature of Petroleum Geology.

Jon Claerbout of Stanford gave his first talk in several years. I missed it, unfortunately, but Sergey Fomel said it was his highlight of the day, and that's good enough for me. Jon is a big proponent of openness in geophysics, so no surprise that he put his talk on YouTube days ago.

The image from Hale is copyright of SEG, from the 2012 Annual Meeting proceedings, and used here in accordance with their permissions guidelines. The DOI links in this post don't work at the time of writing — SEG is on it. 

Resolution, anisotropy, and brains

Day 1 of the SEG Annual Meeting continued with the start of the regular program — 96 talks and 71 posters, not to mention the 323 booths on the exhibition floor. Instead of deciding where to start, I wandered around the bookstore and bought Don Herron's nice-looking new book, First Steps in Seismic Interpretation, which we will review some time soon.

Here are my highlights from the rest of the day.

Chuck Ursenbach, Arcis

Calgary is the home of seismic geophysics. There's a deep tradition of signal processing, and getting the basics right. Sometimes there's snake oil too, but mostly it's good, honest science. And mathematics. So when Jim Gaiser suggested last year at SEG that PS data might offer as good resolution as SS or PP — as good, and possibly better — you know someone in Calgary will jump on it with MATLAB. Ursenbach, Cary, and Perz [PDF] did some jumping, and conclude: PP-to-PS mapping can indeed increase bandwidth, but the resolution is unchanged, because the wavelength is unchanged — 'conservation of resolution', as Ursenbach put it. Resolution isn't everything. 

Gabriel Chao, Total E&P

Chao showed a real-world case study starting with a PreSTM gather with a decent Class 2p AVO anomaly at the top of the reservoir interval (TTI Kirchhoff with 450–4350 m offset). There was residual NMO in the gather, as Leon Thomsen himself later forced Chao to admit, but there did seem to be a phase reversal at about 25°. The authors compared the gather with three synthetics: isotropic convolutional, anisotropic convolutional, and full waveform. The isotropic model was fair, but the phase reversal was out at 33°. The anisotropic convolutional model matched well right up to about 42°, beyond which only the full waveform model was close (right). Anisotropy made a similar difference to wavelet extraction, especially beyond about 25°.

Canada prevails

With no hockey to divert them, Canadians are focusing on geophysical contests this year. With the Canadian champions Keneth Silva and Abdolnaser Yousetz Zadeh denied the chance to go for the world title by circumstances beyond their control, Canada fielded a scratch team of Adrian Smith (U of C) and Darragh O'Connor (Dalhousie). So much depth is there in the boreal Americas that the pair stormed home with the trophy, the cash, and the glory.

The Challenge Bowl event was a delight — live music, semi-raucous cheering, and who can resist MC Peter Duncan's cheesy jests? If you weren't there, promise yourself you'll go next year. 

The image from Chao is copyright of SEG, from the 2012 Annual Meeting proceedings, and used here in accordance with their permissions guidelines. The image of Herron's book is also copyright of SEG; its use here is proposed to be fair use.

The tepidity of social responsibility

Like last year, the 2012 SEG Forum was the only organized event on the morning of Day 1. And like last year, it was thinly attended. The title wasn't exactly enticing — Corporate and Academic Social Responsibility: Engagement or Estrangement — and to be honest I had no idea what we were in for. This stuff borders on sociology, and there's plenty of unfamiliar jargon. Some highlights:  

  • Part of our responsibility to society is professional excellence — Isabelle Lambert
  • At least one company now speaks of a 'privilege', not 'license', to operate — Isabelle Lambert
  • Over-regulation is harmful, but we need regulation to promote disclosure and transparency — Steve Silliman
  • The cheapest, easiest way to look like you care is to actually care

What they said

Mary Lou Zoback of Stanford moderated graciously throughout, despite being clearly perturbed by the thin audience. Jonathan Nyquist of Temple University was first up, and told how he is trying to get things done with $77k/year grad students and $50k grants, when most donors want results, not research.

Isabelle Lambert of CGGVeritas (above) eloquently described the company's principles. They actually seem to walk the walk: they were the only corporation to reply to the invitation to this forum, they seem very self-aware and open on the issue, and they have a policy of 'no political donations' — something that undermines a lot of what certain companies say about the environment, according to one questioner. 

Steve Silliman of Gonzaga University, a hydrologist, stressed the importance of the long-term view. One of his most successful projects has taken 14 years to reach its most impactful work, and has required funding from a wide range of sources — he had a terrific display of exactly when and how all this funding came in. 

Finally Michael Oxman, of Business for Social Responsibility, highlighted some interesting questions about stakeholder engagement, such as 'What constitutes informed consultation?' and 'What constitutes consent?'. He was on the jargony end of things, so I got a bit lost after that.

What do you think, is social responsibility part of the culture where you work? Should it be? 

A footnote about the forum

"Social responsibility has become a popular topic these days", proclaimed the program. Not that popular, it turned out, with less than 2% of delegates showing up. Perhaps this is just the wrong venue for this particular conversation — Oxman pointed out that there is plenty of engagement in more specific venues. But maybe there's another reason for the dearth — this expert-centric, presentation-driven format felt dated somehow. Important people on stage, the unwashed, unnamed masses asking questions at the end. There was a nod to modernity: you could submit questions via Twitter or email, as well as on cards. But is this format, this approach to engagement, dead?

There's nothing to lose: let's declare it dead right now and promise ourselves that the opening morning of SEG in 2013 will be something to get our teeth into.

Ways to experiment with conferences

Yesterday I wrote about why I think technical conferences underdeliver. Coincidentally, Evan sent me this quote from Seth Godin's blog yesterday:

We've all been offered access to so many tools, so many valuable connections, so many committed people. What an opportunity.

What should we do about it? 

If we are collectively spending 6 careers at the SEG Annual Meeting every autumn, as I asserted yesterday, let's put some of that cognitive surplus to work!

I suggest starting to experiment with our conferences. There are so many tools: unconferences, idea jams, hackdays, wikithons, and other participative activities. Anything to break up sitting in the dark watching 16 lectures a day, slamming coffee and cramming posters in between. Anything to get people not just talking and drinking, but working together. What a way to build collaborations, friendships, and trust. Connecting with humans, not business cards. 

Unconvinced? Consider which of these groups of people looks like they're learning, being productive, and having fun:

This year I've been to some random (for me) conferences — Science Online, Wikimania, and Strata. Here are some engaging, fun, and inspiring things happening in meetings of those communities:

  • Speaker 'office hours' during the breaks so you can find them and ask questions. 
  • Self-selected topical discussion tables at lunch. 
  • Actual time for actual discussion after talks (no, really!).
  • Cool giveaways: tattoos and stickers, funky notebooks, useful mobile apps, books, scientific toys.
  • A chance to sit down and work with others — hackathons, co-writing, idea jams, and so on. 
  • Engaged, relevant, grounded social media presence, not more marketing.
  • An art gallery, including graphics captured during sessions.
  • No posters! Those things epitomize the churn of one-way communication.

Come to our experiment!

Clearly there's no shortage of things to try. Converting a session here, a workshop there — it's easy to do something in a sandbox, alongside the traditional. And by 'easy', I mean uncertain, risky and uncomfortable. It will require a new kind of openness. I'm not certain of the outcome, but I am certain that it's worth doing. 

On this note, a wonderful thing happened to us recently. We were — and still are — planning an unconference of our own (stay tuned for that). Then, quite unprovoked, Carmen Dumitrescu asked Evan if we'd like to chair a session at the Canada GeoConvention in May. And she invited us to 'do something different'. Perfect timing!

So — mark your calendar! GeoConvention, Calgary, May 2013. Something different.

The photo of the lecture, from the depressing point of view of the speaker, is licensed CC-BY-SA by Flickr user Pierre-Alain Dorange. The one of the unconference is licensed CC-BY-SA-NC by Flickr user aforgrave.

Are conferences failing you too?

I recently asked a big software company executive if big exhibitions are good marketing value. The reply:

It's not a waste of money. It's a colossal waste of money.

So that's a 'no'.

Is there a problem here?

Next week I'll be at the biggest exhibition (and conference) in our sector: the SEG Annual Meeting. Thousands of others will be there, but far more won’t. Clearly it’s not indispensable or unmissable. Indeed, it’s patently missable — I did just fine in my career as a geophysicist without ever going. Last year was my first time.

Is this just the nature of mass market conferences? Is the traditional academic format necessarily unremarkable? Do the technical societies try too hard to be all things to all people, and thereby miss the mark for everyone? 

I don't know the answer to any of these questions; I can only speak for myself. I'm getting tired of conferences. Perhaps I've reached some new loop in the meandering of my career, or perhaps I'm just grumpy. But as I've started to whine, I'm finding more and more allies in my conviction that conferences aren't awesome.

What are conferences for?

  • They make lots of money for the technical societies that organize them.
  • A good way to do this is to provide marketing and sales opportunities for the exhibiting vendors.
  • A good way to do this is to attract lots of scientists there, baiting with talks by all the awesomest ones.
  • A good way to do this, apparently, is to hold it in Las Vegas.

But I don't think the conference format is great at any of these things, except possibly the first one. The vendors get prospects (that's what sales folk call people) who are only interested in toys and beer — they might be users, but they aren't really customers. The talks are samey and mostly not memorable (and you can only see 5% of them). Even the socializing is limited by the fact that the conference is gigantic and run on a tight schedule. And don't get me started on Las Vegas.

If we're going to take the trouble of flying 8000 people to Las Vegas, we had better have something remarkable to show for it. Do we? What do we get from this giant conference? By my conservative back-of-the-envelope calculation, we will burn through about 210 person-years of productivity in Las Vegas next week. That's about 6 careers' worth. Six! Are we as a community satisfied that we will produce 6 careers' worth of insight, creativity, and benefit?
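
For what it's worth, here is one set of assumptions that reproduces those numbers. The 8000 comes from the paragraph above; the days per person, working days per year, and career length are illustrative guesses, not figures from the post.

```python
# The 8000 attendees is from the post; the other inputs are assumed for this
# sketch and are not the author's figures.
attendees = 8000
days_each = 6.5              # conference plus travel, in working days (assumed)
working_days_per_year = 250  # (assumed)
career_years = 35            # (assumed)

person_years = attendees * days_each / working_days_per_year
print(round(person_years), round(person_years / career_years, 1))
# about 208 person-years, or roughly 6 careers
```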

You can probably tell that I am not convinced. Tomorrow, I will put away the wrecking ball of bellyaching, and offer some constructive ideas, and a promise. Meanwhile, if you have been to an amazing conference, or can describe one from your imagination, or think I'm just being a grouch — please use the comments below.

Map data ©2012 Google, INEGI, MapLink, Tele Atlas. 

News of the month

Another month flies by, and it's time for our regular news round-up! News tips, anyone?

Knowledge sharing

At the start of the month, SPE launched PetroWiki. The wiki has been seeded with one part of the 7-volume Petroleum Engineering Handbook, a tome that normally costs over $600. They started with Volume 2, Drilling Engineering, which includes lots of hot topics, like fracking (right). Agile was involved in the early design of the wiki, which is being built by Knowledge Reservoir.

Agile stuff

Our cheatsheets are consistently some of the most popular things on our site. We love them too, so we've been doing a little gardening — there are new, updated editions of the rock physics and geophysics cheatsheets.

Thank you so much to the readers who've let us know about typos! 

Wavelets

Nothing else really hit the headlines this month — perhaps people are waiting for SEG. Here are some nibbles...

  • We just upgraded a machine from Windows to Linux, sadly losing Spotfire in the process. So we're on the lookout for another awesome analytics tool. VISAGE isn't quite what we need, but you might like these nice graphs for oil and gas.
  • Last month we missed the newly awarded exploration licenses in the inhospitable Beaufort Sea [link opens a PDF]. Franklin Petroleum of the UK might have been surprised by the fact that they don't seem to have been bidding against anyone, as they picked up all six blocks for little more than the minimum bid.
  • It's the SEG Annual Meeting next week... and Matt will be there. Look out for daily updates from the technical sessions and the exhibition floor. There's at least one cool new thing this year: an app!

This regular news feature is for information only. We aren't connected with any of these organizations, and don't necessarily endorse their products or services. 

N is for Nyquist

In yesterday's post, I covered a few ideas from Fourier analysis for synthesizing and processing information. It serves as a primer for the next letter in our A to Z blog series: N is for Nyquist.

In seismology, the goal is to propagate a broadband impulse into the subsurface, and measure the reflected wavetrain that returns from the series of rock boundaries. A question that concerns the seismic experiment is: What sample rate should I choose to adequately capture the information from all the sinusoids that comprise the waveform? Sampling is the capturing of discrete data points from the continuous analog signal — a necessary step in recording digital data. Oversample it, using too high a sample rate, and you might run out of disk space. Undersample it and your recording will suffer from aliasing.

What is aliasing?

Aliasing is a phenomenon observed when the sample interval is not sufficiently brief to capture the higher range of frequencies in a signal. In order to avoid aliasing, each constituent frequency has to be sampled at least twice per cycle. The Nyquist frequency is defined as half of the sampling frequency of a digital recording system, and it has to be higher than all of the frequencies in the observed signal to allow perfect reconstruction of the signal from its samples.

Above Nyquist, the signal frequencies are not sampled twice per cycle, and they fold about the Nyquist frequency, reappearing as lower frequencies. So not obeying Nyquist delivers a double blow: not only do you fail to record all the frequencies, but the ones you leave out fold back and contaminate the frequencies you do record. Can you see this happening in the seismic reflection trace shown below? You may need to traverse back and forth between the time-domain and frequency-domain representations of this signal.

Nyquist_trace.png

Seismic data is usually acquired with either a 4 millisecond sample interval (250 Hz sample rate) if you are offshore, or a 2 millisecond sample interval (500 Hz) if you are on land. A recording system with a 250 Hz sample rate has a Nyquist frequency of 125 Hz, so energy coming in above 125 Hz will wrap around, or fold: a 150 Hz signal, for example, shows up at 100 Hz, and so on.
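
Here is a toy check of that folding arithmetic, using the same numbers as the example above.

```python
# A toy check of the folding arithmetic: a 150 Hz cosine sampled every 4 ms
# (250 Hz sample rate, so Nyquist is 125 Hz) is indistinguishable from 100 Hz.
import numpy as np

fs = 250.0                        # sample rate, Hz (4 ms interval)
t = np.arange(0, 0.2, 1 / fs)     # 200 ms of sample times
folded = abs(150 - fs)            # |150 - 250| = 100 Hz

print(np.allclose(np.cos(2 * np.pi * 150 * t),
                  np.cos(2 * np.pi * folded * t)))   # True
```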

It's important to note that the sampling rate of the recording system has nothing to do with the native frequencies being observed. It turns out that most seismic acquisition systems are safe with Nyquist at 125 Hz, because seismic sources such as Vibroseis and dynamite don't send high frequencies very far; the earth filters and attenuates them out before they arrive at the receiver.

Space alias

Aliasing can happen in space, as well as in time. When the pixels in this image are larger than half the width of the bricks, we see these beautiful curved artifacts. In this case, the aliasing patterns are created by the very subtle perspective warping of the curved bricks across a regularly sampled grid of pixels. It creates a powerful illusion, a wonderful distortion of reality: the observations were not sampled at a high enough rate to adequately capture the nature of the scene. Watch for this kind of thing on seismic records and sections. Spatial aliasing.

Click for the full demonstration (or adjust your screen resolution). You may also have seen the dizzying illusion of an accelerating wheel that suddenly appears to change direction once it rotates faster than the frame rate of the video capturing it. The classic example is the wagon wheel effect in old Western movies.
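
The wagon wheel is the same folding arithmetic in disguise. In this little sketch (the frame rate and rotation rates are made up), a wheel turning slightly faster than the frame rate appears to crawl forward, and one turning slightly slower appears to spin backwards.

```python
# The apparent rotation is the true rate folded into the range +/- half the
# frame rate (a wheel with a single marked spoke; all numbers are made up).
def apparent_rate(true_rps, frame_rate=24.0):
    half = frame_rate / 2.0
    return (true_rps + half) % frame_rate - half

print(apparent_rate(26.0))   #  2.0 rev/s: slow forward crawl
print(apparent_rate(22.0))   # -2.0 rev/s: the wheel seems to spin backwards
```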

Aliasing is just one phenomenon to worry about when transmitting and processing geophysical signals. Anti-aliasing filters, which remove energy above Nyquist before it is sampled, are routinely employed, but if you really care about recovering all the information that the earth is spitting out at you, you probably need to oversample. At least two samples per cycle for the shortest wavelengths.