Try an outernship

In my experience, consortiums under-deliver. We can get the best of both worlds by making the industry–academia interface more permeable.

At one of my clients, I have the pleasure of working with two smart, energetic young geologists. One recently finished, and the other recently started, a 14-month super-internship. Neither one had more than a BSc in geology when they started, and both are going on to do a postgraduate degree after they finish with this multinational petroleum company.

This is 100% brilliant — for them and for the company. After this gap-year-on-steroids, what they accomplish in their postgraduate studies will be that much more relevant, to them, to industry, and to the science. And corporate life, the good bits anyway, can teach smart and energetic people about time management, communication, and collaboration. So by holding back for a year, I think they've actually got a head-start.

The academia–industry interface

Chatting to these young professionals, it struck me that there's a bigger picture. Industry could get much better at interfacing with academia. Today, it tends to happen through a few key relationships, in recruitment, and in a few long-lasting joint industry projects (often referred to as JIPs or consortiums). Most of these interactions happen on an annual timescale, strictly via presentations and research reports. In a distributed company, most of the relationships run through R&D or corporate headquarters, so the benefits to the other 75% or more of the company are quite limited.

Less secrecy, free the data! This worksheet is from the Unsolved Problems Unsession in 2013.

Instead, I think the interface should be more permeable and dynamic. I've sat through several JIP meetings as researchers have shown work of dubious relevance, using poor or incomplete data, with little understanding of the implications or practical possibilities of their insights. This isn't their fault — the petroleum industry sucks at sharing its goals, methods, uncertainties, and data (a great unsolved problem!).

Increasing permeability

Here's my solution: ordinary human collaboration. Send researchers to intern alongside industry scientists for a month or two. Let them experience the incredible data and the difficult problems first hand. But don't stop there. Send the industry scientists to outern (yes, that is probably a word) alongside the academics, even if only for a week or two. Let them experience the freedom of sitting in a laboratory playground all day, working on problems with brilliant researchers. Let's help people help each other with real side-by-side collaboration, building trust and understanding in the process. A boring JIP meeting once a year is not knowledge sharing.

Have you seen good examples of industry, government, or academia striving for more permeability? How do the high-functioning JIPs do it? Let us know in the comments.


If you liked this, check out some of my other posts on collaboration and knowledge sharing...

Ternary diagrams

I like spectrums (or spectra, if you must). It's not just because I like signals and Fourier transforms, or because I think frequency content is the most under-appreciated attribute of seismic data. They're also an important thinking tool. A spectrum represents a continuum between two end-member states, both of which are rare or unlikely; in between there are shades of ambiguity, and this is usually where nature lives.

Take the sport–game continuum. Sports are pure competition — a test of strength and endurance, with few rules and unequivocal outcomes. Surely marathon running is pure sport. Contrast that with a pure game, like darts: no fitness, pure technique. (Establishing where various pastimes lie on this continuum is a good way to start an argument in a pub.)

There's a science purity continuum too, with mathematics at one end and social sciences somewhere near the other. I wonder where geology and geophysics lie...

Degrees of freedom 

The thing about a spectrum is that it's two-dimensional, like a scatter plot, but it has only one degree of freedom, so we can map it onto one dimension: a line.

The three-dimensional equivalent of the spectrum is the ternary diagram: 3-parameter space mapped onto 2D. Not a projection, like a 3D scatter plot, because there are only two degrees of freedom — the parameters of a ternary diagram cannot be independent. This works well for volume fractions, which must sum to one. Hence their popularity for the results of point-count data, like this Folk classification from Hulka & Heubeck (2010).

We can go a step further, natch. You can always go a step further. How about four parameters with three degrees of freedom mapped onto a tetrahedron? Fun to make, not so fun to look at. But not as bad as a pentachoron.

How to make one

The only tools I've used on the battlefield, so to speak, are Trinity, for ternary plots, and TetLab, for tetrahedrons (yes, I went there), both Mac OS X only, and both from Peter Appel of Christian-Albrechts-Universität zu Kiel. But there are more...
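If you'd rather roll your own, the mapping from three fractions onto two coordinates is only a couple of lines of math. Here's a minimal sketch in Python, using nothing but NumPy and Matplotlib; the point-count fractions are made up for illustration:

import numpy as np
import matplotlib.pyplot as plt

def ternary_xy(a, b, c):
    # Map fractions (a, b, c), which sum to 1, onto 2D coordinates
    total = a + b + c
    x = 0.5 * (2*b + c) / total
    y = (np.sqrt(3) / 2) * c / total
    return x, y

# Made-up quartz, feldspar, lithic fractions for three samples
q = np.array([0.7, 0.2, 0.5])
f = np.array([0.2, 0.5, 0.3])
l = np.array([0.1, 0.3, 0.2])

x, y = ternary_xy(q, f, l)

# Draw the triangle itself, then the data
corners = np.array([[0, 0], [1, 0], [0.5, np.sqrt(3)/2], [0, 0]])
plt.plot(corners[:, 0], corners[:, 1], 'k-')
plt.scatter(x, y)
plt.axis('equal')
plt.axis('off')
plt.show()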

Do you use ternary plots, or are they nothing more than a cute way to show some boring data? How do you make them? Care to share any? 

The cartoon is from xkcd.com, licensed CC-BY-NC. The example diagram and example data are from Hulka, C and C Heubeck (2010). Composition and provenance history of Late Cenozoic sediments in southeastern Bolivia: Implications for Chaco foreland basin evolution and Andean uplift. Journal of Sedimentary Research 80, 288–299. DOI: 10.2110/jsr.2010.029 and available online from the authors. 

To make up microseismic

I am not a proponent of making up fictitious data, but for the purposes of demonstrating technology, why not? This post is the third in a three-part follow-up from the private beta I did in Calgary a few weeks ago. You can check out the IPython Notebook version too. If you want more of this in person, sign up at the bottom or drop us a line. We want these examples to be easily readable, especially if you aren't a coder, so please let us know how we are doing.

Start by importing some packages that you'll need into the workspace,

%pylab inline
import numpy as np
from scipy.interpolate import splprep, splev
import matplotlib.pyplot as plt
import mayavi.mlab as mplt
from mpl_toolkits.mplot3d import Axes3D

Define a borehole path

We define the trajectory of a borehole using a series of x, y, z points, and make each component of the borehole an array. If we had a real well, we would load the numbers from the deviation survey in just the same way.

trajectory = np.array([[   0,   0,    0],
                       [   0,   0, -100],
                       [   0,   0, -200],
                       [   5,   0, -300],
                       [  10,  10, -400],
                       [  20,  20, -500],
                       [  40,  80, -650],
                       [ 160, 160, -700],
                       [ 600, 400, -800],
                       [1500, 960, -800]])
x = trajectory[:,0]
y = trajectory[:,1]
z = trajectory[:,2]

But since we want the borehole to be continuous and smoothly shaped, we can up-sample the borehole by finding the B-spline representation of the well path,

smoothness = 3.0
spline_order = 3
nest = -1 # estimate of number of knots needed (-1 = maximal)
knot_points, u = splprep([x, y, z], s=smoothness, k=spline_order, nest=nest)

# Evaluate spline, including interpolated points
x_int, y_int, z_int = splev(np.linspace(0, 1, 400), knot_points)

ax = plt.axes(projection='3d')
ax.plot(x_int, y_int, z_int, color='grey', lw=3, alpha=0.75)
plt.show()

Define frac ports

Let's define a completion program so that our wellbore has 6 frac stages,

number_of_fracs = 6

and let's make it so that each one emanates from equally spaced frac ports spanning the bottom two-thirds of the well.

x_frac, y_frac, z_frac = splev(np.linspace(0.33, 1, number_of_fracs), knot_points)

Make a set of 3D axes, so we can plot the well path and the frac ports.

ax = plt.axes(projection='3d')
ax.plot(x_int, y_int, z_int, color='grey',
        lw=3, alpha=0.75)
ax.scatter(x_frac, y_frac, z_frac,
        s=100, c='grey')
plt.show()

Set a colour for each stage by cycling through red, green, and blue,

stage_color = []
for i in np.arange(number_of_fracs):
    color = (1.0, 0.1, 0.1)
    stage_color.append(np.roll(color, i))
stage_color = tuple(map(tuple, stage_color))

Define microseismic points

One approach is to create some dimensions for each frac stage and generate 100 points randomly within each zone. Each frac has an x half-length, y half-length, and z half-length. Let's also vary these randomly for each of the 6 stages. Define the dimensions for each stage:

frac_dims = []
half_extents = [500, 1000, 250]  # maximum x, y, z half-lengths
for i in range(number_of_fracs):
    for j in range(len(half_extents)):
        dim = np.random.rand() * half_extents[j]
        frac_dims.append(dim)
frac_dims = np.reshape(frac_dims, (number_of_fracs, 3))

Plot microseismic point clouds with 100 points for each stage. The following code should launch a 3D viewer scene in its own window:

size_scalar = 100000
mplt.plot3d(x_int, y_int, z_int, tube_radius=10)
for i in range(number_of_fracs):
    x_cloud = frac_dims[i,0] * (np.random.rand(100) - 0.5)
    y_cloud = frac_dims[i,1] * (np.random.rand(100) - 0.5)
    z_cloud = frac_dims[i,2] * (np.random.rand(100) - 0.5)

    x_event = x_frac[i] + x_cloud
    y_event = y_frac[i] + y_cloud     
    z_event = z_frac[i] + z_cloud
    
    # Let's make the size of each point fall off (gently)
    # with distance from the frac port
    size = size_scalar / ((x_cloud**2 + y_cloud**2 + z_cloud**2)**0.002)
    
    mplt.points3d(x_event, y_event, z_event, size, mode='sphere', colormap='jet')

You can swap out the last line in the code block above with mplt.points3d(x_event, y_event, z_event, size, mode='sphere', color=stage_color[i]) to colour each event by its corresponding stage.

A day of geocomputing

I will be in Calgary in the new year and running a one-day version of this new course. To start building your own tools, pick a date and sign up:


To make a wedge

We'll need a wavelet like the one we made last time. We could import it if we've made one, but SciPy also has one, so we can save ourselves the trouble. Remember to put %pylab inline at the top if you're using IPython Notebook.

import numpy as np
from scipy.signal import ricker
import matplotlib.pyplot as plt

Now we need to make a physical earth model with three rock layers. In this example, let's make an acoustic impedance earth model. To keep it simple, let's define the earth model with two-way travel time along the vertical axis (as opposed to depth). There are a number of ways you could describe a wedge using math, and you could probably come up with a way that is better than mine. Here's one way:

n_samples, n_traces = 600, 500
rock_names = ['shale 1', 'sand', 'shale 2']
rock_grid = np.zeros((n_samples, n_traces))

def make_wedge(n_samples, n_traces, layer_1_thickness, start_wedge, end_wedge):
    for j in np.arange(n_traces):
        for i in np.arange(n_samples):
            if i <= layer_1_thickness:
                rock_grid[i][j] = 1
            if i > layer_1_thickness:
                rock_grid[i][j] = 3
            if j >= start_wedge and i - layer_1_thickness < j - start_wedge:
                rock_grid[i][j] = 2
            if j >= end_wedge and i > layer_1_thickness + (end_wedge - start_wedge):
                rock_grid[i][j] = 3
    return rock_grid

Let's insert some numbers into our wedge function and make a particular geometry.

layer_1_thickness = 200
start_wedge = 50
end_wedge = 250
rock_grid = make_wedge(n_samples, n_traces, 
            layer_1_thickness, start_wedge, 
            end_wedge)

plt.imshow(rock_grid, cmap='copper_r')

Now we can give each layer in the wedge properties.

vp = np.array([3300., 3200., 3300.]) 
rho = np.array([2600., 2550., 2650.]) 
AI = vp*rho
AI = AI / 10e6 # re-scale (optional step)

Then we assign them accordingly to every sample in the rock model.

model = np.copy(rock_grid)
model[rock_grid == 1] = AI[0]
model[rock_grid == 2] = AI[1]
model[rock_grid == 3] = AI[2]
plt.imshow(model, cmap='Spectral')
plt.colorbar()
plt.title('Impedances')

Now we can compute the reflection coefficients. I have left out a plot of the reflection coefficients, but you can check it out in the full version in the nbviewer.

upper = model[:-1, :]
lower = model[1:, :]
rc = (lower - upper) / (lower + upper)
maxrc = np.amax(np.abs(rc))

Now we make the wavelet interact with the model using convolution. The convolution function already exists in the SciPy signal library, so we can just import it.

from scipy.signal import convolve
def make_synth(f):
    wavelet = ricker(512, 1e3 / (4. * f))
    wavelet = wavelet / np.amax(wavelet)   # normalize
    synth = np.zeros((rc.shape[0] + len(wavelet) - 1, n_traces))
    for k in range(n_traces):
        synth[:, k] = convolve(rc[:, k], wavelet)
    # Trim the convolution tails so the synthetic lines up with the model
    synth = synth[len(wavelet)//2 : -len(wavelet)//2, :]
    return synth

Finally, we plot the results.

frequencies = np.array([5, 10, 15])
plt.figure(figsize=(15, 4))
for i in np.arange(len(frequencies)):
    this_plot = make_synth(frequencies[i])
    plt.subplot(1, len(frequencies), i+1)
    plt.imshow(this_plot, cmap='RdBu', vmax=maxrc, vmin=-maxrc, aspect=1)
    plt.title('%d Hz wavelet' % frequencies[i])
    plt.grid()
    plt.axis('tight')
    # Add some labels
    for k, name in enumerate(rock_names):
        plt.text(400, 100 + (end_wedge - start_wedge)*k + 1, name,
                 fontsize=14, color='gray',
                 horizontalalignment='center', verticalalignment='center')

 

That's it. As you can see, the marriage of mathematical functions and plotting is a really powerful tool you can apply to almost any physical problem you find yourself working on.

You can access the full version in the nbviewer. It has a few more figures than are shown in this post.

A day of geocomputing

I will be in Calgary in the new year and running a one-day version of this new course. To start building your own tools, pick a date and sign up:


To plot a wavelet

As I mentioned last time, a good starting point for geophysical computing is to write a mathematical function describing a seismic pulse. The IPython Notebook is designed to be used seamlessly with Matplotlib, which is nice because we can throw our function on a graph and see if we were right. When you start your own notebook, type

ipython notebook --pylab inline

We'll make use of a few functions within NumPy, a workhorse to do the computational heavy-lifting, and Matplotlib, a plotting library.

import numpy as np
import matplotlib.pyplot as plt

Next, we can write some code that defines a function called ricker. It computes a Ricker wavelet for a range of discrete time values, t, and a dominant frequency, f:

def ricker(f, length=0.512, dt=0.001):
    t = np.linspace(-length/2, (length-dt)/2, int(length/dt))
    y = (1. - 2.*(np.pi**2)*(f**2)*(t**2)) * np.exp(-(np.pi**2)*(f**2)*(t**2))
    return t, y

Here the function needs three input parameters: the frequency, f; the length of time over which we want it to be defined; and the sample rate of the signal, dt. Calling the function returns two arrays: the time axis, t, and the value of the function, y.

To create a 5 Hz Ricker wavelet, assign the value of 5 to the variable f, and pass it into the function like so,

f = 5
t, y = ricker(f)

To plot the result,

plt.plot(t, y)

But with a few more commands, we can improve the cosmetics,

plt.figure(figsize=(7,4))
plt.plot( t, y, lw=2, color='black', alpha=0.5)
plt.fill_between(t, y, 0, where=y > 0.0, interpolate=False, color='blue', alpha=0.5)
plt.fill_between(t, y, 0, where=y < 0.0, interpolate=False, color='red', alpha=0.5)

# Axes configuration and settings (optional)
plt.title('%d Hz Ricker wavelet' %f, fontsize = 16 )
plt.xlabel( 'two-way time (s)', fontsize = 14)
plt.ylabel('amplitude', fontsize = 14)
plt.ylim((-1.1,1.1))
plt.xlim((min(t),max(t)))
plt.grid()
plt.show()

Next up, we'll make this wavelet interact with a model of the earth using some math. Let me know if you get this up and running on your own.

Let's do it

It's short notice, but I'll be in Calgary again early in the new year, and I will be running a one-day version of this new course. To start building your own tools, pick a date and sign up:


Coding to tell stories

Last week, I was in Calgary on family business, but I took an afternoon to host a 'private beta' for a short course that I am creating for geoscience computing. I invited about twelve familiar faces who would provide gentle and constructive feedback. In the end, thirteen geophysicists turned up, seven of whom I hadn't met before. So much for familiarity.

I spent about two and a half hours stepping through the basics of the Python programming language, which I consider essential material — getting set up with Python via Enthought Canopy, basic syntax, and so on. In the last hour of the afternoon, I steamed through a number of geoscientific examples to showcase exercises for this would-be course.

Here are three that went over well. Next week, I'll reveal the code for making these images. I might even have a go at converting some of my teaching materials from IPython Notebook to HTML:

To plot a wavelet

The Ricker wavelet is a simple analytic function that is used throughout seismology. This curvaceous waveform is easily described by a single variable: the dominant frequency of its many constituent frequencies. Every geophysicist and their cat should know how to plot one:

To make a wedge

Once you can build a wavelet, the next step is to make that wavelet interact with the earth. The convolution of the wavelet with this 3-layer impedance model yields a synthetic seismogram suitable for calibrating seismic signals to subtle stratigraphic geometries. Every interpreter should know how to build a wedge, with site-specific estimates of wavelet shape and impedance contrasts. Wedge models are important in all instances of dipping and truncated layers at or below the limit of seismic resolution. So basically they are useful all of the time. 

To make a 3D viewer

The capacity of Python to create stunning graphical displays with merely a few (thoughtful) lines of code seemed to resonate with people. But make no mistake, it is not easy to wade through the hundreds of function arguments to access this power and richness. It takes practice. It appears to me that practicing and training to search for, and then read, documentation is the bridge that carries people from the mundane to the empowered.

This dry run suggested to me that there are at least two markets for training here. One is a place for showing what's possible — "Here's what we can do, now let's go and build it". The other, more arduous path is the coaching, support, and resources to motivate students through the hard graft that follows. The former is centered on problem solving; the latter on problem finding, where the work and the creativity and the sweat are.

Would you take this course? What would you want to learn? What problem would you bring to solve?

Which brittleness index?

A few weeks ago I looked at the concept — or concepts — of brittleness. There turned out to be lots of ways of looking at it. We decided to call it a rock behaviour rather than a property. And we determined to look more closely at some different ways to define it. Here they are...

Some brittleness indices

There are lots of 'definitions' of brittleness in the literature. Several of them capture the relationship between compressive and tensile strength, σC and σT respectively. This is potentially useful, because we measure uniaxial compressive strength in the standard triaxial rig tests that have become routine in shale studies... but we don't usually find the tensile strength, because it's much harder to measure. This is unfortunate, because hydraulic fracturing is initially a tensile failure (though reactivation and other failure modes do occur — see Williams-Stroud et al. 2012).

Altindag (2003) gave the following three examples of different brittleness indices. In turn, they are the strength ratio, a sort of relative strength contrast, and the mean strength (his favourite):
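The equations themselves appeared as images in the original post; here they are reconstructed in the same notation (my reading of Altindag 2003, so do check the paper):

B1 = σC / σT
B2 = (σC − σT) / (σC + σT)
B3 = (σC × σT) / 2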

This is just the start; once you start digging, you'll find lots of others. As with the measures in Hucka & Das's (1974) round-up I wrote about last time, one thing they have in common is that they capture some characteristic of rock failure. That is, they do not rely on implicit rock properties.

Another point to note. Bažant & Kazemi (1990) gave a way to de-scale empirical brittleness measures to account for sample size — not surprisingly, this sort of 'real world adjustment' starts to make things quite complicated. Not so linear after all.

What not to do

The prevailing view among many interpreters is that brittleness is proportional to Young's modulus and/or Poisson's ratio, and/or a linear combination of these. We've reported a couple of times on what Lev Vernik (Marathon) thinks of the prevailing view: we need to question our assumptions about isotropy and linear strain, and computing shale brittleness from elastic properties is not physically meaningful. For one thing, you'll note that elastic moduli don't have anything to do with rock failure.

The Young–Poisson brittleness myth started with Rickman et al. 2008, SPE 115258, who presented a rather ugly representation of a linear relationship (I gather this is how petrophysicists like to write equations). You can see the tightness of the relationship for yourself in the data.

If I understand the notation, this is the same as writing B = 7.14E − 200ν + 72.9, where E is (static) Young's modulus and ν is (static) Poisson's ratio. It's an empirical relationship, based on the data shown, and is perhaps useful in the Barnett (or wherever the data are from, we aren't told). But, as with any kind of inversion, the onus is on you to check the quality of the calibration in your rocks.
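For what it's worth, here's how that linear form drops out, assuming the usual reading of Rickman's recipe: normalize static E (in Mpsi) over the range 1 to 8, normalize ν over the range 0.4 to 0.15, and average the two:

YM_BRIT = 100 (E − 1) / (8 − 1)
PR_BRIT = 100 (ν − 0.4) / (0.15 − 0.4)
B = (YM_BRIT + PR_BRIT) / 2 ≈ 7.14E − 200ν + 72.9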

What's left?

Here's Altindag (2003) again:

Brittleness, defined differently from author to author, is an important mechanical property of rocks, but there is no universally accepted brittleness concept or measurement method...

This leaves us free to worry less about brittleness, whatever it is, and focus on things we really care about, like organic matter content or frackability (not unrelated). The thing is to collect good data, examine it carefully with proper tools (Spotfire, Tableau, R, Python...) and find relationships you can use, and prove, in your rocks.

References

Altindag, R (2003). Correlation of specific energy with rock brittleness concepts on rock cutting. The Journal of The South African Institute of Mining and Metallurgy. April 2003, p 163ff. Available online.

Hucka, V, and B Das (1974). Brittleness determination of rocks by different methods. Int J Rock Mech Min Sci Geomech Abstr 11 (10), 389–392. DOI: 10.1016/0148-9062(74)91109-7.

Rickman, R, M Mullen, E Petre, B Grieser, and D Kundert (2008). A practical use of shale petrophysics for stimulation design optimization: all shale plays are not clones of the Barnett Shale. SPE 115258, DOI: 10.2118/115258-MS.

Williams-Stroud, S, W Barker, and K Smith (2012). Induced hydraulic fractures or reactivated natural fractures? Modeling the response of natural fracture networks to stimulation treatments. American Rock Mechanics Association 12–667. Available online.

Great geophysicists #10: Joseph Fourier

Joseph Fourier, the great mathematician, was born on 21 March 1768 in Auxerre, France, and died in Paris on 16 May 1830, aged 62. He's the reason I didn't get to study geophysics as an undergraduate: Fourier analysis was the first thing that I ever struggled with in mathematics.

Fourier was one of 12 children of a tailor, and had lost both parents by the age of 9. After studying under Lagrange at the École Normale Supérieure, Fourier taught at the École Polytechnique. At the age of 30, he was an invited scientist on Napoleon's Egyptian campaign, along with 55,000 other men, mostly soldiers:

Citizen, the executive directory having in the present circumstances a particular need of your talents and of your zeal has just disposed of you for the sake of public service. You should prepare yourself and be ready to depart at the first order.
Herivel, J (1975). Joseph Fourier: The Man and the Physicist, Oxford Univ. Press.

He stayed in Egypt for two years, helping found the modern era of Egyptology. He must have liked the weather, because his next major work, and the one that made him famous, was Théorie analytique de la chaleur (1822), on the physics of heat. The topic was incidental though, because it was really his analytical methods that changed the world. His approach of decomposing arbitrary functions into trigonometric series was novel and profoundly useful, and not just for solving the heat equation.

Fourier as a geophysicist

Late last year, Evan wrote about the reason Fourier's work is so important in geophysical signal processing in Hooray for Fourier! He showed how we can decompose time-based signals like seismic traces into their frequency components. And I touched on the topic in K is for Wavenumber (decomposing space) and The spectrum of the spectrum (decomposing frequency itself, which is even weirder than it sounds). But this GIF (below) is almost all you need to see both the simplicity and the utility of the Fourier transform.

In this example, we start with something approaching a square wave (red), and let's assume it's in the time domain. This wave can be approximated by summing the series of sine waves shown in blue. The amplitudes of the sine waves required are the Fourier 'coefficients'. Notice that we need lots of time samples to represent this signal smoothly, but only 6 Fourier coefficients to carry the same information. Mathematicians call this a 'sparse' representation. Sparsity is a handy property because we can do clever things with sparse signals. For example, we can compress them (the basis of the JPEG scheme), or interpolate them (as in CGG's REVIVE processing). Hooray for Fourier indeed.
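If you'd rather see code than a GIF, here's a minimal NumPy sketch of the same idea (mine, not from the original post): summing the first six odd harmonics of a sine series to approximate a square wave.

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 500)
square = np.sign(np.sin(2 * np.pi * t))   # the target signal (red)

# Fourier series of a square wave: odd harmonics with amplitude 4/(pi*n).
# Six coefficients carry almost all of the information.
approx = np.zeros_like(t)
for n in [1, 3, 5, 7, 9, 11]:
    approx += (4 / (np.pi * n)) * np.sin(2 * np.pi * n * t)

plt.plot(t, square, 'r', t, approx, 'b')
plt.show()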

The watercolour caricature of Fourier is by Julien-Léopold Boilly from his Album de 73 Portraits-Charge Aquarellés des Membres de l'Institut (1820); it is in the public domain.

Read more about Fourier on his Wikipedia page — and listen to this excellent mini-biography by Marcus du Sautoy. And check out Mostafa Naghizadeh's chapter in 52 Things You Should Know About Geophysics. Download the chapter for free!

What is brittleness?

Brittleness is an important rock characteristic, but impossible to define formally because there are so many different ways of looking at it. For this reason, Tiryaki (2006) suggests we call it a rock behaviour, not a rock property.

Indeed, we're not really interested in brittleness, per se, because it's not very practical information on its own. Mining engineers are concerned with a property called cuttability — and we can think of this as conceptually analogous to the property that interests lots of geologists, geophysicists, and engineers in petroleum, geothermal, and hydrology: frackability. In materials science, the inverse property — the ability of a rock to resist fracture — is called fracture toughness.

What is brittleness not?

  • It's not the same as frackability, or other things you might be interested in.
  • It's not a simple rock property like, say, density or velocity. Those properties are condition-dependent too, but we agree on how to measure them.
  • It's not proportional to any elastic moduli, or a linear combination of Young's modulus and Poisson's ratio, despite what you might have heard.

So what is it then?

It depends a bit what you care about. How the rock deforms under stress? How much energy it takes to break it? What happens when it breaks? Hucka and Das (1974) rounded up lots of ways of looking at it. Here are a few:

  • Brittle rocks undergo little to no permanent deformation before failure, which, depending on the test conditions, may occur suddenly and catastrophically.
  • Brittle rocks undergo little or no ductile deformation past the yield point (or elastic limit) of the rock. Note that some materials, including many rocks, have no well-defined yield point because they have non-linear elasticity.
  • Brittle rocks absorb relatively little energy before fracturing. The energy absorbed is equal to the area under the stress-strain curve (see figure).
  • Brittle rocks have a strong tendency to fracture under stress.
  • Brittle rocks break with a high ratio of fine to coarse fragments.

All of this is only made more complicated by the fact that there are lots of kinds of stress: compression, tension, shear, torsion, bending, and impact... and all of these can operate in multiple dimensions, and on multiple time scales. Suddenly a uniaxial rig doesn't quite seem like enough kit.

It will take a few posts to really get at brittleness and frackability. In future posts we'll look at relevant rock properties and how to measure them, the difference between static and dynamic measurements, and the multitude of brittleness indices. Eventually, we'll get on to what all this means for seismic waves, and ask whether frackability is something we can reasonably estimate from seismic data.

Meanwhile, if you have observations or questions to share, hit us in the comments. 

References and further reading
Hucka, V, and B Das (1974). Brittleness determination of rocks by different methods. Int J Rock Mech Min Sci Geomech Abstr 11 (10), 389–392. DOI: 10.1016/0148-9062(74)91109-7.

Tiryaki, B (2006). Evaluation of the indirect measures of rock brittleness and fracture toughness in rock cutting. The Journal of The South African Institute of Mining and Metallurgy 106, June 2006. Available online.

P is for Phase

Seismic is about acoustic vibration. The archetypal oscillation, the sine wave, describes the displacement y of a point around a circle. You only need three pieces of information to describe it perfectly: the size of the circle, the speed at which it rotates around the circle, and where it starts from expressed as an angle. These quantities are better known as the amplitude, frequency, and phase respectively. These figures show how varying each of them affects the waveform:
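The original figures aren't reproduced here, but a few lines of Python make the same comparison: vary one of amplitude A, frequency f, or phase φ at a time and plot the result (a sketch, with arbitrary values):

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 500)

# Baseline, then change amplitude, frequency, and phase in turn
params = [(1.0, 4, 0),          # A, f (Hz), phase (rad)
          (0.5, 4, 0),          # half the amplitude
          (1.0, 8, 0),          # double the frequency
          (1.0, 4, np.pi/2)]    # 90 degree phase shift
for A, f, phi in params:
    plt.plot(t, A * np.sin(2 * np.pi * f * t + phi))
plt.show()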

So phase describes the starting point as an angle, but notice that this manifests itself as an apparent lateral shift in the waveform. For seismic data, this means a time shift. More on this later. 

What about seismic?

We know seismic signals are not so simple — they are not repetitive oscillations — so why do the words amplitude, frequency, and phase show up so often? Aren't these words horribly inadequate?

Not exactly. Fourier's methods allow us to construct (and deconstruct) more complicated signals by adding up a series of sine waves, as long as we get the amplitude, frequency, and phase values right for each one of them. The tricky part, and where much of the confusion lies, is that even though you can place your finger on any point along a seismic trace and read off a value for amplitude, you can't do that for frequency or phase. The information for those is only unlocked through spectroscopy.
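That unlocking is the job of the Fourier transform. Here's a hedged sketch with NumPy's FFT, using a random array as a stand-in for a real trace:

import numpy as np

dt = 0.002                          # sample interval in seconds
trace = np.random.randn(1000)       # stand-in for a seismic trace

spectrum = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(len(trace), d=dt)

amplitude = np.abs(spectrum)        # amplitude of each sinusoid
phase = np.angle(spectrum)          # phase of each sinusoid, in radians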

Phase shifts or time shifts?

The Ricker wavelet is popular because it can easily be written analytically, and it is composed of a considerable number of sinusoids of varying amplitudes and frequencies. We might refer to a '20 Hz Ricker wavelet', but really it contains a range of frequencies. The blue curve shows the wavelet with phase = 0°; the purple curve shows the wavelet with a phase shift of π/3 = 60° (across all frequencies). Notice how the frequency content remains unchanged.
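If you want to reproduce that experiment, one way to apply a constant phase rotation across all frequencies is with the analytic signal. A minimal sketch, assuming the ricker function from earlier posts, and using scipy.signal.hilbert:

import numpy as np
from scipy.signal import hilbert

def rotate_phase(w, phi):
    # Rotate wavelet w by phi radians, constant across all frequencies
    analytic = hilbert(w)
    return np.real(analytic * np.exp(1j * phi))

# For example, rotate a zero-phase wavelet by 60 degrees:
# w60 = rotate_phase(y, np.pi / 3)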

So for a seismic reflection event (below), phase takes on a new meaning. It expresses a time offset between the reflection and the maximum value on the waveform. When the amplitude maximum is centered at the reflecting point, it is equally shaped on either side — we call this zero phase. Notice how variations in the phase of the event alter the relative position of the peak and sidelobes. The maximum amplitude of the event at 90° is only about 80% of the amplitude at zero phase. This is why I like to plot traces along with their envelope (the grey lines). The envelope contains all possible phase rotations. Any event whose maximum value does not align with the maximum on the envelope is not zero phase.

Understanding the role of phase in time series analysis is crucial both for data processors aiming to create reliable data, and for interpreters who operate under the assumption that subtle variations in waveform shape can be attributed to underlying geology. Waveform classification is a powerful attribute... but how reliable is it?

In a future post, I will cover the concept of instantaneous phase on maps and sections, and some other practical interpretation tips. If you have any of your own, share them in the comments.

Additional reading
Liner, C (2002). Phase, phase, phase. The Leading Edge 21, p 456–7. Abstract online.