Petrophysics cheatsheet

Geophysical logging is magic. After drilling, a set of high-tech sensors is lowered to the bottom of the hole on a cable, then slowly pulled up collecting data as it goes. A sort of geological endoscope, the tool string can measure manifold characteristics of the rocks the drillbit has penetrated: temperature, density, radioactivity, acoustic properties, electrical properties, fluid content, porosity, to name a few. The result is a set of well logs or wireline logs.

The trouble is there are a lot of different logs, each with its own idiosyncrasies. The tools have different spatial resolutions, for example, and are used for different geological interpretations. Most exploration and production companies have specialists, called petrophysicists, to interpret logs. But these individuals are sometimes (usually, in my experience) thinly spread, and besides, every geologist and geophysicist is occasionally faced with interpreting logs alone.

We wanted to make something to help the non-specialist. Like our previous efforts, our new cheatsheet is a small contribution, but we hope that you will want to stick it into the back of your notebook. We have simplified things quite a bit: almost every single entry in this table needs a lengthy footnote. But we're confident we're giving you the 80% solution. Or 70% anyway. 

Please let us know if and how you use this. We love hearing from our users, especially if you have enhancements or comments about usability. You can use the contact form, or leave a comment here.

News of the week

We hope you're having a great summer. Our website has been quieter than usual this week, but we're busy building things—stay tuned. And we haven't done a news post for a few weeks, so here are some things that have caught our eye.

A new imaging paradigm

Lytro has begun what may be a revolution for photography with the light field camera, putting the choice of the focal point and depth of field in the hands of the viewer, not the photographer. Try it yourself: click on these examples to change the focal point of the images.

The radical new sensor works by not only capturing the intensity of light, but also its direction. This means the full visual field can be reconstructed. You can view the inspiring gallery of dynamic images or read more about the methods behind computational photography from Ian Hopkinson's blog post. The analogy to full wavefield imaging is obvious, but perhaps the most exciting story is not the technology, but the shift of control from imager (processor) to viewer (interpreter). 

Don't compress the data, expand the medium

Wolfram, makers of Mathematica among other things, are a deeply innovative bunch. This week they launched the Computable Document Format, or CDF, for interactive documents. These new documents could make reports, presentations, e-textbooks, and journal articles much more interesting. 

INT releases Geo Toolkit 4.2

Interactive Network Technologies, makers of the INTViewer interpretation software, have released version 4.2 of their GeoToolkit. It's a proprietary C++ library for developers of geoscience software, and is used by many of the major exploration companies. New features include:

  • Improved seismic display, with support for anti-aliasing, transparency, and image rotation
  • New indexed seismic data support for rapid access to large datasets
  • Enhancements to the chart libraries, including multiple selection within charts and the ability to link charts

TimeScale Creator gets a major upgrade

We have written before about this handy application from a Purdue consortium; it should be in every geoscientist's toolbox. Keep an eye out over the summer and fall for new datapacks (including Arctic Canada, Australia, NE Russia), and an all-new web version. Version 5 has some great enhancements:

  • A new data input format, and some limits on user data in the free version
  • Database and display improvements for hominids, dinocysts, and passive margins, plus new datapacks
  • Improved geographic interface, now with index maps

This regular news feature is for information only. We aren't connected with any of these organizations, and don't necessarily endorse their products or services.

Geophysical stamps 3: Geophone

Back in May I bought some stamps on eBay. I'm not really a stamp collector, but when I saw these in all their geophysical glory, I couldn't resist them. They are East German stamps from 1980, and they are unusual because they aren't schematic illustrations so much as precise, technical drawings. I have already written about the gravimeter and the sonic tool stamps; today I thought I'd tell you a bit about the most basic seismic sensor, the geophone.

The 35 pfennig stamp in the series of four shows a surface geophone, with a schematic cross-section and a cartoon of the seismic acquisition process, complete with ray-paths and a recording truck. Erdöl and Erdgas are oil and gas; Erkundung translates as surveying or exploration. The actual size of the stamp is 43 × 26 mm.

There are four basic types of seismic sensor (sometimes generically referred to as receivers in exploration geophysics):

Seismometers — precision instruments not used in exploration seismology because they are usually quite bulky and require careful set-up and calibration. Most modern models are accelerometers, much like relative gravimeters, measuring ground acceleration from the force on a proof mass. Seismometers can detect frequencies in a very broad band, on the order of 0.001 Hz to 500 Hz: that's 19 octaves!

Geophones — are small, cheap, and intended for rapid deployment in large numbers. The one illustrated on the stamp, like the modern cut-away example shown here, would be about 4 × 20 cm, with a total mass of about 400 g. The design has barely changed in decades. The mean-looking spike helps ensure good contact with the ground (coupling). A frame-mounted magnet is surrounded by a proof mass affixed to a copper coil. This analog instrument measures particle velocity, not acceleration, as the differential motion induces a current in the coil. Because of the small proof mass, the lower practical frequency limit is usually only about 6 Hz, the upper about 250 Hz (5 octaves). Geophones are used on land, and on the sea-floor. If repeatability over time is important, as with a time-lapse survey, phones like this may be buried in the ground and cemented in place.

Hydrophones — as the name suggests, are for deployment in the water column. Naturally, there is a lot of non-seismic motion in water, so measuring displacement will not do. Instead, hydrophones contain two piezoelectric components, which generate a current when deformed by pressure, and use cunning physics to mute spurious, non-seismic pressure changes. Hydrophones are usually towed in streamers behind a boat. They have a similar response band to geophones.

MEMS accelerometers — exactly like the accelerometer chip in your laptop or cellphone, these tiny mechanical systems can be housed in a robust casing and used to record seismic waves. Response frequencies range from 4–1000 Hz (8 octaves; theoretically they will measure down to 0 Hz, or DC in geophysish, but not in my experience). These are sometimes referred to as digital receivers, but they are really micro-analog devices with built-in digital conversion. 
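These octave counts are easy to check: an octave is a doubling of frequency, so a band from f_low to f_high spans log2(f_high/f_low) octaves. A quick sketch in Python (the function name is mine):

```python
import math

def octaves(f_low, f_high):
    """Number of octaves spanned by a frequency band; each octave is a doubling."""
    return math.log2(f_high / f_low)

# The bandwidths quoted above, rounded to the nearest octave
print(round(octaves(0.001, 500)))  # seismometer: 19
print(round(octaves(6, 250)))      # geophone: 5
print(round(octaves(4, 1000)))     # MEMS accelerometer: 8
```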

I think the geophone is the single most important remote sensing device in geoscience. Is that justified hyperbole? A couple of recent stories from Scotland and Spain have highlighted the incredible clarity of seismic images, which can be awe-inspiring as well as scientifically and economically important.

Next time I'll look at the 50 pfennig stamp, which depicts deep seismic tomography. 

Building Tune*

Last Friday, I wrote a post on tuning effects in seismic, which serves as the motivation behind our latest app for Android™ devices, Tune*. I have done technical and scientific computing in the past, but I am a newcomer to 'consumer' software programming, so like Matt in a previous post about the back of the digital envelope, I thought I would share some of my experiences trying to put geo-computing on a mobile, tactile, always-handy platform like a phone.

Google's App Inventor tool has two parts: the interface designer and the blocks editor. Programming with the blocks involves defining and assembling a series of procedures and variables that respond to the user interface. I made very little progress doing the introductory demos online, and only made real progress when I programmed the tuning equation itself—the science. The equation only accounts for about 10% of the blocks. But the logic, control elements, and defaults that (I hope) result in a pleasant design and user experience, take up the remainder of the work. This supporting architecture, enabling someone else to pick it up and use it, is where most of the sweat and tears go. I must admit, I found it an intimidating mindset to design for somebody else, but perhaps being a novice means I can think more like a user? 

This screenshot shows the blocks that build the tuning equation I showed in last week's post. It makes a text block out of an equation with variables, and the result is passed to a graph to be plotted. We are making text because the plot is actually built by Google's Charts API, which is called by passing this equation for the tuning curve in a long URL. 
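For the curious, the plot-by-URL idea can be sketched in a few lines of Python. This is only an illustration: the function is mine, and the parameter names (cht, chs, chd, from the classic Google Chart Image API) are assumptions about the call App Inventor ends up making, not a transcription of it:

```python
from urllib.parse import urlencode

def tuning_chart_url(amplitudes, width=400, height=250):
    """Build a chart URL: the data series travels as a query parameter.
    Parameter names follow the classic Google Chart Image API (an assumption)."""
    peak = max(amplitudes)
    # Scale to the API's 0-100 'simple text' data encoding
    scaled = [round(100 * a / peak, 1) for a in amplitudes]
    params = {
        "cht": "lc",                                     # line chart
        "chs": f"{width}x{height}",                      # chart size in pixels
        "chd": "t:" + ",".join(str(v) for v in scaled),  # the data itself
    }
    return "https://chart.apis.google.com/chart?" + urlencode(params)

url = tuning_chart_url([0.2, 0.5, 1.0, 0.8, 0.6])
```

The point is that the entire plot request, data included, is just a string, which is why passing an evaluated equation as text works at all.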

Agile Tune* app screenshot.

Upcoming versions of this app will handle the 3-layer case, in which the acoustic properties above and below the wedge can differ. In the future, I would like to incorporate a third dimension into the wedge space, so that the acoustic properties or the wavelet can vary laterally and the seismic response and sensitivity can be tested dynamically.

The Ricker wavelet is the most commonly used, but I am working on extending the app to include other wavelets, such as Klauder, Ormsby, and Butterworth filters. I would like to build a wavelet toolbox in which any type of wavelet can be defined by its frequency and phase spectra.

Please let me know if you have had a chance to play with this app and if there are other features you would like to see. You can read more about the science in this app on the wiki, or get it from the Android Market. At the risk (and fun) of nakedly exposing my lack of programming prowess to the world, I have put a copy of the package on the DOWNLOAD page, so you can grab Tune.zip, load it into App Inventor and check it out for yourself. It's a little messy; I am learning more elegant and parsimonious ways to build these blocks. But hey, it works!

Tuning geology

It's summer! We will be blogging a little less often over July and August, but have lots of great posts lined up so check back often, or subscribe by email to be sure not to miss anything. Our regular news feature will be a little less regular too, until the industry gets going again in September. But for today... here's the motivation behind our latest app for Android devices, Tune*.

Geophysicists like wedges. But why? I can think of only a few geological settings with a triangular shape: a stratigraphic pinchout, say, or an angular unconformity. Is there more to the geophysicist's ubiquitous wedge than first appears?

Seismic interpretation is partly the craft of interpreting artifacts, and a wedge model illustrates several examples of artifacts found in seismic data. In Widess' famous paper, How thin is a thin bed? he set out a formula for vertical seismic resolution, and constructed the wedge as an aid for quantitative seismic interpretation. Taken literally, a synthetic seismic wedge has only a few real-world equivalents. But as a purely quantitative model, it can be used to calibrate seismic waveforms and interpret data in any geological environment. In particular, seismic wedge models allow us to study how the seismic response changes as a function of layer thickness. For fans of simplicity, most of the important information from a wedge model can be represented by a single function called a tuning curve.

In this figure, a seismic wedge model is shown for a 25 Hz Ricker wavelet. The effects of tuning (or interference) are clearly seen as variations in shape, amplitude, and travel time along the top and base of the wedge. The tuning curve shows the amplitude along the top of the wedge (thin black lines). Interestingly, the apex of the wedge straddles the top and base reflections, an apparent mis-timing of the boundaries.

On a tuning curve there are (at least) two values worth noting: the onset of tuning, and the tuning thickness. The onset of tuning (marked by the green line) is the thickness at which the bottom of the wedge begins to interfere with the top of the wedge, perturbing the amplitude of the reflections, and the tuning thickness (blue line) is the thickness at which amplitude interference is a maximum.

For a Ricker wavelet the amplitude along the top of the wedge is given by:

A(t) = R [1 − (1 − 2π²f²t²) e^(−π²f²t²)]

where R is the reflection coefficient at the boundary, f is the dominant frequency and t is the wedge thickness (in seconds of two-way time). Building the seismic expression of the wedge helps to verify this analytic solution.
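If you'd like to experiment, here is a minimal sketch of the tuning curve in Python with NumPy. It assumes a zero-phase Ricker wavelet and equal-magnitude, opposite-polarity reflection coefficients at the top and base of the wedge:

```python
import numpy as np

def ricker(t, f):
    """Zero-phase Ricker wavelet of dominant frequency f (Hz), unit amplitude at t=0."""
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def tuning_curve(thickness, f, R=1.0):
    """Amplitude at the top of the wedge: the top reflection (R) plus the
    interfering, opposite-polarity base reflection (-R), delayed by the
    two-way thickness."""
    return R * ricker(0.0, f) - R * ricker(thickness, f)

f = 25.0                           # dominant frequency, Hz, as in the figure
t = np.linspace(1e-4, 0.06, 2000)  # two-way thickness, s
amp = tuning_curve(t, f)
t_tune = t[np.argmax(amp)]         # thickness of maximum constructive interference
print(f"Tuning thickness ~ {1000 * t_tune:.1f} ms")
```

For a 25 Hz wavelet this puts the tuning thickness near 15.6 ms, in agreement with the wedge model shown above.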

Wedge artifacts

The synthetic seismogram and the tuning curve reveal some important artifacts that the seismic interpreter needs to know about, because they could be pitfalls, or they could provide geological information:

Bright (and dim) spots: A bed thickness equal to the tuning thickness (in this case 15.6 ms) has considerably more reflective power than any other thickness, even though the acoustic properties are constant along the wedge. Below the tuning thickness, the amplitude is approximately proportional to thickness.

Mis-timed events: Below 15 ms the apparent wedge top changes elevation: for a bed below the tuning thickness, and with this wavelet, the apparent elevation of the top of the wedge is higher by about 7 ms. If you picked the blue event as the top of the structure, you'd be picking it too high at the thinnest part of the wedge. Tuning can make it challenging to account for amplitude changes and time shifts simultaneously when picking seismic horizons.

Limit of resolution: For a bed thinner than about 10 ms, the travel time between the absolute reflection maxima—where you would pick the bed boundaries—is not proportional to bed thickness. The bed appears thicker than it actually is.

Bottom line: if you interpret seismic data, and you are mapping beds around 10–20 ms thick, you should take time to study the effects of thin beds. We want to help! On Monday, I'll write about our new app for Android mobile devices, Tune*. 

Reference

Widess, M (1973). How thin is a thin bed? Geophysics, 38, 1176–1180. 

Well worth showing off

Have you ever had difficulty displaying a well log in a presentation? Now, instead of cycling through slides, you can fluidly move across a digital, zoomable canvas using Prezi. I think it could be a powerful visual tool and presentation aid for geoscientists. Prezi allows users to construct intuitive, animated visualizations, using size to denote emphasis or scale, and proximity to convey relevance. You navigate through the content simply by moving the field of view and zooming in and out through scale space. In geoscience, scale isn't just a concept for presentation design, it is a fundamental property that can now be properly tied in and shown in a dynamic way.

I built this example to illustrate how geoscience images, spread across several orders of magnitude, can be traversed seamlessly for a better presentation. In a matter of seconds, one can navigate a complete petrophysical analysis, a raw FMI log, a segment of core, and thin section microscopy embedded at its true location. Explore heterogeneity and interpret geology with scale in context. How could you use a tool like this in your work?

Clicking on the play button will steer the viewer step by step through a predefined set of animations, but you can break off and roam around freely at any time (click and drag with your mouse, try it!). Prezi could be very handy for workshops, working meetings, or any place where it is appropriate to be transparent and thorough in your visualizations.

You can also try roaming Prezi by clicking on the image of this cheatsheet. Let us know what you think!

Thanks to Burns Cheadle for Prezi enthusiasm, and to Neil Watson for sharing the petrophysical analysis he built from public data in Alberta.

News of the week

Happy Canada Day! Here is the news.

Scotian basin revival?

Geologist–reporter Susan Eaton has a nice piece in the AAPG Explorer this month, explaining why some operators still see promise in the Scotian Basin, on Canada's Atlantic margin. The recent play fairway analysis mentioned in the report, however, is long overdue and still not forthcoming. When it is, we hope the CNSOPB and government promoters fully embrace openness and get more data into the public domain.

Yet another social network!

In the wake of LinkedIn's IPO, in which the first day of trading valued the company at over 500 times its 2010 net earnings, many other social networks are starting to pop up. Last month we mentioned SEG's new Communities. Finding Petroleum is a new social network, supported by the publishers of the Digital Energy Journal, aimed at oil and gas professionals. These sites are an anti-trust anomaly, since they almost have to be monopolies to succeed, and with so much momentum carried by LinkedIn and Facebook, new entrants will struggle for attention. Most of the Communities in SEG seem to be essentially committee-based and closed, and LinkedIn micro-networks are getting chaotic, so maybe there's a gap here. Our guess is that there isn't.

The oil & gas blogosphere

Companies are increasingly turning to blogging and social media tools to expand their reach and promote their pursuits. Here are a couple of industry blogs that have caught our eye recently. If you are looking to read more about what's happening in subsurface oil and gas technology, these blogs are a good place to start.

If you use a microblogging service like Yammer, you may not know that you can also follow Twitter feeds. For example, here's a Twitter list of various companies in oil & gas.

Job security in geoscience

Historically, the oil and gas industry follows hot and cold (or, if you prefer, boom and bust) cycles, but the US Bureau of Labor Statistics predicts geoscience jobs will be increasingly in demand. A recent article from The Street reports on these statistics, suggesting that the earth science sector is shaping up to be genuinely recession-proof. If there is such a thing.

Agile* apps update

We're happy to report that all of Agile's apps have been updated in the last week, and we have a brand new app in the Android Market! The newest app, called Tune*, is a simple calculator for wedge modeling and estimating the amplitude tuning response of thin-beds, as shown here.

In our other apps, the biggest new feature is the ability to save cases or scenarios to a database on the device, so you can pull them up later.

Read more on our Apps page.

This regular news feature is for information only. Apart from Agile*, obviously, we aren't connected with any of these organizations, and don't necessarily endorse their products or services.

Reliable predictions of unlikely geology

A puzzle

Imagine you are working in a newly-accessible and under-explored area of an otherwise mature basin. Statistics show that on average 10% of structures are filled with gas; the rest are dry. Fortunately, you have some seismic analysis technology that allows you to predict the presence of gas with 80% reliability. In other words, four out of five gas-filled structures test positive with the technique, and when it is applied to water-filled structures, it gives a negative result four times out of five.


You acquire some undrilled acreage—the grey polygon—then delineate some structures and perform the analysis. One of the structures tests positive. If this is the only information you have, what is the probability that it is gas-filled?

This is a classic problem of embracing Bayesian likelihood and ignoring your built-in 'representativeness heuristic' (Kahneman et al, 1982, Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press). Bayesian probability combination does not come very naturally to most people but, once understood, can at least help you see the way to approach similar problems in the future. The way the problem is framed here, it is identical to the original formulation of Kahneman et al, the Taxicab Problem. This takes place in a town with 90 yellow cabs and 10 blue ones. A taxi is involved in a hit-and-run, witnessed by a passer-by. Eye witness reliability is shown to be 80%, so if the witness says the taxi was blue, what is the probability that the cab was indeed blue? Most people go with 80%, but in fact the witness is probably wrong. To see why, let's go back to the exploration problem and look at 100 test cases.

Break it down

Looking at the rows in this table of outcomes, we see that there are 90 water cases and 10 gas cases. Eighty percent of the water cases test negative, and 80% of the gas cases test positive:

             Test positive   Test negative   Total
  Water           18              72           90
  Gas              8               2           10

The table shows that when we get a positive test, the probability that the test is true is not 0.80, but much less: 8/(8 + 18) = 0.31. In other words, a test that is mostly reliable is probably wrong when applied to an event that doesn't happen very often (a structure being gas charged). It's still good news for us, though, because a probability of discovery of 0.31 is much better than the 0.10 that we started with.

Here is Bayes' theorem for calculating the probability P of event A (say, a gas discovery) given event B (say, a positive test in our seismic analysis):

P(A|B) = P(B|A) P(A) / [P(B|A) P(A) + P(B|not A) P(not A)]

So we can express our problem in these terms:

P(gas|positive) = (0.80 × 0.10) / (0.80 × 0.10 + 0.20 × 0.90) = 0.08 / 0.26 ≈ 0.31
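The arithmetic is easy to script. Here is a minimal sketch in Python (the function is mine, and it assumes a symmetric test, i.e. one that is equally reliable on positives and negatives):

```python
def posterior(prior, reliability):
    """P(event | positive test), for a test with symmetric reliability:
    P(+|event) = reliability and P(+|no event) = 1 - reliability."""
    true_pos = reliability * prior
    false_pos = (1 - reliability) * (1 - prior)
    return true_pos / (true_pos + false_pos)

print(round(posterior(0.10, 0.80), 2))    # gas-filled structure: 0.31
print(round(posterior(0.0001, 0.99), 4))  # the seismitis example below: 0.0098
```

Notice how quickly the posterior collapses as the prior shrinks, even for a very reliable test.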

Are you sure about that?

This result is so counter-intuitive, for me at least, that I can't resist illustrating it with another well-known example that takes it to extremes. Imagine you test positive for a very rare disease, seismitis. The test is 99% reliable. But the disease affects only 1 person in 10 000. What is the probability that you do indeed have seismitis?

Notice that the unreliability (1%) of the test is much greater than the rate of occurrence of the disease (0.01%). This is a red flag. It's not hard to see that there will be many false positives: only 1 person in 10 000 is ill, and that person tests positive 99% of the time (almost always). The problem is that 1% of the 9 999 healthy people, about 100 people, will test positive too. So for every 10 000 people tested, 101 test positive even though only 1 is ill. So the probability of being ill, given a positive test, is only about 1/101!

Lessons learned

Predictive power (in Bayesian jargon, the posterior probability) as a function of test reliability and the base rate of occurrence (also called the prior probability of the event or phenomenon in question). The position of the scenario in the exploration problem is shown by the white square.

Thanks to UBC Bioinformatics for the heatmap software, heatmap.notlong.com.


Next time you confidently predict something with a seismic attribute, stop to think not only about the reliability of the test you have made, but the rate of occurrence of the thing you're trying to predict. The heatmap shows how prediction power depends on both test reliability and the occurrence rate of the event. You may be doing worse (or better!) than you think.

Fortunately, in most real cases, there is a simple mitigation: use other, independent, methods of prediction. Mutually uncorrelated seismic attributes, well data, engineering test results, if applied diligently, can improve the odds of a correct prediction. But computing the posterior probability of event A given independent observations B, C, D, E, and F, is beyond the scope of this article (not to mention this author!).

This post is a version of part of my article The rational geoscientist, The Leading Edge, May 2010.

News of the week

CGGVeritas moves towards a million channels

Sercel, a subsidiary of CGGVeritas, has introduced new data transmission technology, Giga Transverse, an add-on to the 428XL land acquisition system. The technology increases the maximum channels per line from 10 000 to 100 000, and brings them a big step closer to the possibility of one million channels on a single job. It will immediately benefit their UltraSeis offering for high-density point-receiver land acquisition. They have also refreshed the DSU1 receiver (left), making it smaller and sharper. Young geophysicists must be salivating over the data they will be processing and interpreting in the decades to come.

Petrophysics coming to OpendTect

dGB has built a comprehensive software suite for the seismic world, but OpendTect is a little light on petrophysics and log analysis. Not any more! There's a new plugin coming to OpendTect, from Argentinian company Geoinfo: CLAS, or Computer Log Analysis Software. This will make the software attractive to a wider section of the subsurface community. dGB are on a clear path to creating a full-featured, deeply integrated platform. And OpendTect is open source, so petrophysicists may enjoy creating their own programs and plugins for working with well log data.

Petrel 2011 incorporates knowledge sharing

In Petrel, Schlumberger is introducing a multi-faceted knowledge environment for the entire spectrum of subsurface specialists. The announced improvements for the 2011 version include coordinate conversion for seismic data, better seismic flattening, more interpretation functions, and, most interesting of all, the new Studio™ environment. Geoscientists and engineers can search and browse projects, select data, and customize their screens by creating personal collections of often-used processes. It doesn't sound as interactive or social as the awaited Convofy for GeoGraphix, but it is good to see software companies thinking about large-scale, long-term knowledge issues, and this one already exists!

Open source visualization virtualization

High-end visualization performance on a laptop... perhaps even a tablet! TurboVNC in action in the US government. Image: US Data Analysis & Assessment Center wiki.

Australian E&P company Santos Ltd recently won the 2011 Red Hat Innovator of the Year award. From the award submission: "Santos has been burnt in the past by hanging its hat on proprietary solutions only to have them rendered uneconomical through being acquired by bigger fish. So for Santos, the move to open source—and to Red Hat—also proved to be a security blanket, as they could be assured that no one could walk in and take its solution away." Borne out of an explosion of geo-computing costs, and a desire to push the limits of technology, the company sponsored the TurboVNC and VirtualGL projects. The result: users can interpret from anywhere using a standard-issue laptop (with dual 24" monitors when at their desks), achieving better performance than traditional workstations. Great foresight! What are you doing about your geo-computing problems?

This regular news feature is for information only. We aren't connected with any of these organizations, and don't necessarily endorse their products or services. Petrel and Studio are trademarks of Schlumberger. Giga Transverse is a trademark of Sercel. Low res DSU1 image from Sercel marketing material.

Can you do science on a phone?

Mobile geo-computing presentation. Click the image to download the PDF (3.5M) in a new window. The PDF includes slides and notes.

Yes! Perhaps the real question should be: would you want to? Isn't the very idea just an extension of the curse of mobility, never being away from your email, work, commitments? That's the glass-half-empty view; it takes discipline to use your cellphone on your own terms, picking it up when it's convenient. And there's no doubt that sometimes it is convenient, like when your car breaks down, or you're out shopping for groceries and you can't remember if it was Winnie-the-Pooh or Disney Princess toothpaste you were supposed to get.

So smartphones are convenient. And everywhere. And most people seem to have a data plan or ready access to WiFi. And these devices are getting very powerful. So there's every reason to embrace the fact that these little computers will be around the office and lab, and get on with putting some handy, maybe even fun, geoscience on them. 

My talk, the last one of the meeting I blogged about last week, was a bit of an anomaly in the hardcore computational geophysics agenda. But maybe it was a nice digestif. You can read something resembling the talk by clicking on the image (above), or if you like, you can listen to me in this 13-minute video version:

So get involved, learn to program, or simply help and inspire a developer to build something awesome. Perhaps the next killer app for geologists, whatever that might be. What can you imagine...?

Just one small note to geoscience developers out there: we don't need any more seismographs or compass-clinometers!