Lots of news!

I can't believe it's been a month since my last post! But I've now recovered from the craziness of the spring — with its two hackathons, two conferences, two new experiments, as well as the usual courses and client projects — and am ready to start getting back to normal. My goal with this post is to tell you all the exciting stuff that's happened in the last few weeks.

Meet our newest team member

There's a new Agilist! Robert Leckenby is a British–Swiss geologist with technology tendencies. Rob has a PhD in Dynamic characterisation and fluid flow modelling of fractured reservoirs, and has worked in various geoscience roles in large and small oil & gas companies. We're stoked to have him in the team!

Rob lives near Geneva, Switzerland, and speaks French and several other human languages, as well as Python and JavaScript. He'll be helping us develop and teach our famous Geocomputing course, among other things. Reach him at robert@agilescientific.com.


Geocomputing Summer School

We have trained over 120 geoscientists in Python so far this year, but most of our training is in private classes. We wanted to fix that, and bring the Geocomputing class back into the open for anyone to take. Well, anyone in the Houston area :) It's called Summer School, it's happening the week of 13 August, and it's a 5-day crash course in scientific Python and the rudiments of machine learning. It's designed to get you a long way up the learning curve. Read more and enroll.


A new kind of event

We have several more events happening this year, including hackathons in Norway and in the UK. But the event in Anaheim, right before the SEG Annual Meeting, is going to be a bit different. Instead of the usual Geophysics Hackathon, we're going to try a sprint around open source projects in geophysics. The event is called the Open Geophysics Sprint, and you can find out more here on events.agilescientific.com.

That site — events.agilescientific.com — is our new events portal, and our attempt to stay on top of the community events we are running. Soon, you'll be able to sign up for events on there too (right now, most of them are still handled through Eventbrite), but for now it's at least a place to see everything that's going on. Thanks to Diego for putting it together!

Murphy's Law for Excel

Where would scientists and engineers be without Excel? Far, far behind where they are now, I reckon. Whether it's a quick calculation, or making charts for a thesis, or building elaborate numerical models, Microsoft Excel is there for you. And it has been there for 32 years, since Douglas Klunder — now a lawyer at ACLU — gave it to us (well, some of us: the first version was Mac only!).

We can speculate about reasons for its popularity:

  • It's relatively easy to use, and most people started long enough ago that they don't have to think too hard about it.
  • You have access to it, and you know that your collaborators (boss, colleagues, future self) have access to it.
  • It's flexible enough that it can do almost anything.

Figure 1 from 'Predicting bed thickness with cepstral decomposition'.

For instance, all the computation and graphics for my two 2006 articles on signal processing were done in Excel (plus the FFT add-on). I've seen reservoir simulators, complete with elaborate user interfaces, in Excel. An infinity of business-critical documents are stored in Excel (I just filled out a vendor registration form for a gigantic multinational in an Excel spreadsheet). John Nelson at ESRI made a heatmap in Excel. You can even play Pac Man.

Maybe it's gone too far:


So what's wrong with Excel?

Nothing is wrong with it, but it's not the best tool for every number-crunching task. Why?

  • Excel files are just that — files. Sometimes you want to do analysis across datasets, and a pool of data (a database) becomes more useful. And sometimes you wish nine different people didn't have nine different versions of your spreadsheet, each emailing their version to nine other people...
  • The charts are rather clunky and static. They don't do well with large datasets, or with data you'd like to filter or slice dynamically.
  • In large datasets, scrolling around a spreadsheet gets old pretty quickly.
  • The tool is so flexible that people get carried away with pretty tables, annotating their sheets in ways that make the printed page look nice, but analysis impossible.

What are the alternatives?

Excel is a wonder-tool, but it's not the only tool. There are alternatives, and you should at least know about them.

For everyday spreadsheeting needs, I now use Google Sheets. Collaboration is built-in. Being able to view and edit a sheet at the same time as someone else is a must-have (probably Office 365 does this now too, so if you're stuck with Excel I urge you to check). Version control — another thing I'm not sure I can live without — is built in. For real nerds, there's even a complete API. I also really like the native 'webbiness' of Google Docs, such as being able to make web API calls natively, like getting the current CAD–USD exchange rate with GoogleFinance("CURRENCY:CADUSD").

If it's graphical analysis you want, try Tableau or Spotfire. I'm especially looking at you, reservoir engineers — you are seriously missing out if you're stuck in Excel, especially if you have a lot of columns of different types (time series, categories and continuous variables for example). The good news is that the fastest way to get data into Spotfire is... Excel. So it's easy to get started.

If you're gathering information from people, like registering the financial details of vendors for instance, then a web form is your best bet. You can set one up in Google Forms in minutes, and there are lots of similar services. If you want to use your own servers, no problem: any dev worth their wages can throw one together in a few hours.

If you're doing geoscience in Excel, like my 2006 self — filtering logs, or generating synthetics, or computing spectra — your mind will be blown by spending a few hours learning a programming language. Your first day in Python (or Julia or Octave or R) will change your quantitative life forever.
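
To give a flavour of that first day, here's a minimal sketch that computes and plots an amplitude spectrum with NumPy and matplotlib. The 'trace' is synthetic, invented purely for illustration:

    import numpy as np
    import matplotlib.pyplot as plt

    dt = 0.002                                   # sample interval in seconds
    t = np.arange(0, 2, dt)
    trace = np.sin(2 * np.pi * 25 * t) + 0.5 * np.random.randn(t.size)  # toy 'seismic' trace

    freq = np.fft.rfftfreq(t.size, d=dt)         # frequency axis in Hz
    spec = np.abs(np.fft.rfft(trace))            # amplitude spectrum

    plt.plot(freq, spec)
    plt.xlabel('Frequency (Hz)')
    plt.ylabel('Amplitude')
    plt.show()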

Excel is great at some things, but for most things, there's a better way. Take some time to explore them the next time you have some slack in your schedule.

References

Hall, M (2006). Resolution and uncertainty in spectral decomposition. First Break 24, December 2006, p 43–47.

Hall, M (2006). Predicting stratigraphy with cepstral decomposition. The Leading Edge 25 (2, Special Issue on Spectral Decomposition). doi:10.1190/1.2172313


UPDATE

As a follow-up example, I couldn't resist sharing this recent story about an artist who draws anime characters in Excel.

The sound of the Software Underground

If you are a geoscientist or subsurface engineer, and you like computery things — in other words, if you read this blog — I have a treat for you. In fact, I have two! Don't eat them all at once.

Software Underground

Sometimes (usually) we need more diversity in our lives. Other times we just want a soul mate. Or at least someone friendly to ask about that weird new seismic attribute, where to find a Python library for seismic imaging, or how to spell Kirchhoff. Chat rooms are great for those occasions, Slack is where all the cool kids go to chat, and the Software Underground is the Slack chat room for you. 

It's free to join, and everyone is welcome. There are over 130 of us in there right now — you probably know some of us already (apart from me, obvsly). Just go to http://swung.rocks/ to sign up, and we will welcome you at the door with your choice of beverage.

To give you a flavour of what goes on in there, here's a listing of the active channels:

  • #python — for people developing in Python
  • #sharp-rocks — for people developing in C# or .NET
  • #open-geoscience — for chat about open access content, open data, and open source software
  • #machinelearning — for those who are into artificial intelligence
  • #busdev — collaboration, subcontracting, and other business opportunities 
  • #general — chat about anything to do with geoscience and/or computers
  • #random — everything else

Undersampled Radio

If you have a long commute, or occasionally enjoy being trapped in an aeroplane while it flies around, you might have discovered the joy of audiobooks and podcasts. You've probably wished many times for a geosciencey sort of podcast, the kind where two ill-qualified buffoons interview hyper-intelligent mega-geoscientists about their exploits. I know I have.

Well, wish no more because Undersampled Radio is here! Well, here:

The show is hosted by New Orleans-based geophysicist Graham Ganssle and me. Don't worry, it's usually not just us — we talk to awesome guests like geophysicists Mika McKinnon and Maitri Erwin, geologist Chris Jackson, and geopressure guy Mark Tingay. The podcast is recorded live every week or three in Google Hangouts on Air; the link to that, along with show notes and everything else, is posted by Gram in the #undersampled Software Underground channel. You see? All these things are connected, albeit in a nonlinear, organic, highly improbable way. Pseudoconnection: the best kind of connection.

Indeed, there is another podcast pseudoconnected to Software Underground: the wonderful Don't Panic Geocast — hosted by John Leeman and Shannon Dulin — also has a channel: #dontpanic. Give their show a listen too! In fact, here's a show we recorded together!

Don't have an hour right now? OK, you asked for it, here's a clip from that show to get you started. It starts with John Leeman explaining what Fun Paper Friday is, and moves on to one of my regular rants about conferences...

In case you're wondering, neither of these projects is explicitly connected to Agile — I am just involved in both of them. I just wanted to clear up any confusion. Agile is not a podcast company, for the time being anyway.

Toolbox wishlist

Earlier this week, the conversation on Software Underground* turned to well-tie software.

Someone was complaining that, despite having several well-tie tools at their disposal, none of them was quite right. I've written about this phenomenon before. We, as a discipline, do not know how to tie wells. I don't mean that you don't know, I know you know, but I bet if you compared the workflows of ten geoscientists, they would all be different. That's why every legacy well in every project has thirty time-depth tables, including at least three endearingly hopeful ones called final, and the one everyone uses, called test.

As a result of all this, the topic of "what tools do people need?" came up. Leo Uieda, a researcher in Brazil, asked:

I just about remembered that I had put up this very question on Tricider some time ago. Tricider is not a website about apple-based beverages, but a site for sharing and voting on ideas. You can start with a few ideas, get votes and comments on them, and even get new ideas. Here's the top idea as of right now: an open-source petrophysics tool.

Do check out the list, and vote or comment if you like. It might help someone find a project to work on, or spark an idea for a new app or even a new company.

Another result of the well-tie software conversation was, "What are the features of the one well-tie app to rule them all?" I'll leave you to stew on that one for a while. Meanwhile, please share your thoughts in the comments.


* Software Underground is an open Slack team. In essence, it's a chat room for geocomputing geeks: software, underground, geddit? It's completely free and open to anyone — pop along to http://swung.rocks/ to sign up.

It even has its own radio station!

Tools for drawing geoscientific figures

This is a response to Boyan Vakarelov's useful post on LinkedIn about tools for creating geological figures. I especially liked his SketchUp tip.

It's a while since we wrote about our toolset, so I thought I'd document what we're currently using for making figures. You won't be surprised to hear that they're mostly open source. 

Our figure creation toolbox

  • QGIS — if it's a map, you should make it in a GIS, it's as simple as that.
  • Inkscape — for most drawing and figure creation tasks. It's just as good as Illustrator.
  • GIMP — for raster editing tasks. Rasters are no good for editable figures or line art though.
  • TimeScale Creator — a little-known tool for making editable chronostratigraphic columns. Here's an example from way back on this very blog. The best thing: you can export SVG files, then edit them in Inkscape.
  • Python, R, etc. — the best way to make reproducible scientific figures is not to draw them at all. Instead, create data visualizations programmatically.
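
For instance, here's a minimal matplotlib sketch (the data are invented) that writes an SVG you can go on to polish in Inkscape:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 10, 500)
    fig, ax = plt.subplots(figsize=(6, 3))
    ax.plot(x, np.sin(x), label='signal')
    ax.set_xlabel('x')
    ax.set_ylabel('amplitude')
    ax.legend()
    fig.savefig('figure.svg')   # editable vector graphics, straight into Inkscape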

To really appreciate how fantastic the programmatic approach is, check out Sergey Fomel's treasure trove of reproducible documents, in which every figure is really just the output of a little program that anyone can run. Here's one of my own, adapted from a previous post and a sneak peek of an upcoming Leading Edge tutorial:

Different sample interpolation styles give different amplitudes for inter-sample positions, as shown at the red 'horizon' time pick. From an upcoming tutorial in the April edition of The Leading Edge.

Everything you wanted to know about images

Screenshots often form part of a figure, because they're so much easier than trying to figure out how to export an image, or trying to wrangle the data from scratch. If you find yourself grabbing a screenshot, and any time you're providing an image for someone else — especially if it's destined for print — you need to know all about image resolution. Read my post Save the samples for my advice. 

If you still save your images as JPEG, you also need to read my post about How to choose an image format. One day you might need the fidelity you are throwing away! Here's the short version: save everything as a PNG.

Last thing: know the difference between vector and raster graphics. Make vectors when you can.

Stop using PowerPoint!

The only bit of Boyan's post I didn't like was the bit about PowerPoint. I admit, fifteen years ago I was a bit of a slave to PowerPoint. I'd have preferred to use Illustrator at the time, but it was well beyond corporate IT's ken, and I hadn't yet discovered Inkscape. But I'm over it now — and just as well because it's a horrible drawing tool. The main limitation is not having layers, which is a show-stopper for me, but there's also the generic typography, simplistic spline editing, the inability to handle standard formats like SVG, and no scripting or plug-ins.

Getting good

If you want to learn about making effective scientific figures, I strongly recommend reading anything you can by Edward Tufte, Robert Kosara, Alberto Cairo, and Mike Bostock. For some quick inspiration check out the #dataviz hashtag on Twitter, or feast your eyes on this amazing collection of graphics, or Mike Bostock's interactive examples, or... there are too many resources to choose from.

How about you? Share your favourite tools in the comments or on Boyan's post.

Is subsurface software too pricey?

Amy Fox of Enlighten Geoscience in Calgary wrote a LinkedIn post about software pricing a couple of weeks ago. I started typing a comment... and it turned into a blog post.


I have no idea if software is 'too' expensive. Some of it probably is. But I know one thing for sure: we subsurface professionals are the only ones who can do anything about the technology culture in our industry.

Certainly most technical software is expensive. As someone who makes software, I can see why it got that way: good software is really hard to make. The market is small, compared to consumer apps, games, etc. Good software takes awesome developers (who can name their price these days), and it takes testers, scientists, managers.

But all is not lost. There are alternatives to the expensive software. We — practitioners in industry — just do not fully explore them. OpendTect is a great seismic interpretation tool, but many people don't take it seriously because it's free. QGIS is an awesome GIS application, arguably better than ArcGIS and definitely easier to use.

Sure, there are open source tools we have embraced, like Linux and MediaWiki. But on balance I think this community is overly skeptical of open source software. As evidence of this, how many oil and gas companies donate money to open source projects they use? There's just no culture for supporting Linux, MediaWiki, Apache, Python, etc. Why is that?

If we want awesome tools, someone, somewhere, has to pay the people who made them, somehow.


So why is software expensive and what can we do about it?

I used to sell Landmark's GeoProbe software in Calgary. At the time, it was USD 140k per seat, plus 18% annual maintenance. A lot, in other words. It was hard to sell. It needed a sales team, dinners, and golf. A sale of a few seats might take a year. There was a lot of overhead just managing licenses and test installations. Of course it was expensive!

In response, on the customer side, the corporate immune system kicked in, spawning machine lockdowns, software spending freezes, and software selection committees. These were (well, are) secret organizations of non-users that did (do) difficult and/or pointless things like workflow mapping and software feature comparisons. They have to be secret because there's a bazillion dollars and a 5-year contract on the line.

Catch 22. Even if an ordinary professional would like to try some cheaper and/or better software, there is no process for this. Hands have been tied. Decisions have been made. It's not approved. It can't be done.

Well, it can be done. I call it the 'computational geophysics manoeuvre', because that lot have known about it for years. There is an easy way to reclaim your professional right to the tools of the trade, to rediscover the creativity and fun of doing new things:

Bring or buy your own technology, install whatever the heck you want on it, and get on with your work.

If you don't think that's a possibility for you right now, then consider it a medium term goal.

Old skool plot tool

It's not very glamorous, but sometimes you just want to plot a SEG-Y file. That's why we crafted seisplot. OK, that's why we cobbled seisplot together out of various scripts and functions we had lying around, after a couple of years of blog posts and Leading Edge tutorials and the like.

Pupils of the old skool — when everyone knew how to write a bash script, pencil crayons and lead-filled beanbags ruled the desktop, and Carpal Tunnel Syndrome was just the opening act to the Beastie Boys — will enjoy seisplot. For a start, it's command line only: 

    python seisplot.py -R -c config.py ~/segy_files -o ~/plots

Isn't that... reassuring? In this age of iOS and Android and Oculus Rift... there's still the command line interface.
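
If you just want a quick look without the script, something like this gets you a basic variable density plot. It's a hedged sketch using the segyio and matplotlib libraries, not seisplot's own code, and the filename is hypothetical:

    import numpy as np
    import segyio
    import matplotlib.pyplot as plt

    with segyio.open('line.sgy', ignore_geometry=True) as f:
        data = f.trace.raw[:]                  # array of shape (n_traces, n_samples)

    clip = np.percentile(np.abs(data), 99)     # clip the colour scale at the 99th percentile
    plt.imshow(data.T, cmap='gray', aspect='auto', vmin=-clip, vmax=clip)
    plt.xlabel('Trace')
    plt.ylabel('Sample')
    plt.savefig('line.png', dpi=200)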

Features galore

So what sort of features can you look forward to? Other than all the usual things you've come to expect of subsurface software, like a complete lack of support or documentation. (LOL, I'm kidding.) Only these awesome selling points:

  • Make wiggle traces or variable density plots... or don't choose — do both!
  • If you want, the script will descend into subdirectories and make plots for every SEG-Y file it finds.
  • There are plenty of colourmaps to choose from, or if you're insane you can make your own.
  • You can make PNGs, JPGs, SVGs or PDFs. But not CGM, sorry about that.

Well, I say 'selling points', but the tool is 100% free. We think this is a fair price. It's also open source of course, so please — seriously, please — improve the source code, then share it with the world! The code is in GitHub, natch.

Never go full throwback

There is one more feature: you can go full throwback and add scribbles and coffee stains. Here's one for your wall:


The 2D seismic line in this post is from the USGS NPRA Seismic Data Archive, and is in the public domain. This is line number 31-81-PR (links directly to SEG-Y file).

Not picking parameters

I like socks. Bright ones. I've liked bright socks since Grade 6. They were the only visible garment not governed by school uniform, or at least not enforced, and I think that was probably the start of it. The tough boys wore white socks, and I wore odd red and green socks. These days, my favourites are Cole & Parker, and the only problem is: how to choose?

Last Tuesday I wrote about choosing parameters for geophysical algorithms — window lengths, velocities, noise levels, and so on. Like choosing socks, it's subjective, and it's hard to find a pair for every occasion. The comments from Matteo, Toastar, and GuyM raised an interesting question: maybe the best way to pick parameters is to not pick them? I'm not talking about automatically optimizing parameters, because that's still choosing. I'm talking about not choosing at all.

How many ways can we think of to implement this non-choice? I can think of four approaches, but I'm not 100% sure they're all different, or if I can even describe them...

Is the result really optimal, or just a hard-to-interpret patchwork?

Adaptivity

Well, okay, we still choose, but we choose a different value everywhere depending on local conditions. A black pair for a formal function, white for tennis, green for work, and polka dots for special occasions. We can adapt to any property (rather like automatic optimization), along any dimension of our data: spatially, azimuthally, temporally, or frequentially (there's a word you don't see every day).

Imagine computing seismic continuity. At each sample, we might evaluate some local function — such as contrast — for a range of window sizes. Or, when smoothing, we might specify some minimum signal loss compared to the original. We end up using a different value everywhere, and expect an optimal result.
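
As a toy sketch of the idea (the 'contrast' cost function and the candidate windows here are pure assumptions, not a recommended recipe), you could score a few window lengths at every sample and keep the winner:

    import numpy as np

    def adaptive_window(trace, windows=(8, 16, 32, 64)):
        """For each sample, return the window length (in samples) that
        maximizes a crude local contrast measure. Purely illustrative."""
        best = np.zeros(trace.size, dtype=int)
        for i in range(trace.size):
            scores = []
            for w in windows:
                seg = np.abs(trace[max(0, i - w // 2):i + w // 2])
                scores.append(seg.std() / (seg.mean() + 1e-12))
            best[i] = windows[int(np.argmax(scores))]
        return best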

One problem is that we still have to choose a cost function. And to be at all useful, we would need to produce two new data products, besides the actual result: a map of the parameter's value, and a map of the residual cost, so to speak. In other words, we need a way to know what was chosen, and how satisfactory the choice was.

Stochastic shotgun

We could fall back on that geostatistical favourite and pick the parameter values stochastically, grabbing socks at random out of the drawer. This works, but I need a lot of socks to have a chance of getting even a local maximum. And we run into the old problem of really not knowing what to do with all the realizations. Common approaches are to take the P50, P10, and P90, or to average them. Both of these approaches make me want to ask: Why did I generate all those realizations?

Experimental design methods

The design of experiments is a big deal in the life sciences, but for some reason rarely (never?) talked about in geoscience. Applying a cost function, or even just visual judgment, to a single parameter is perhaps trivial, but what if you have two variables? Three? What if they are non-linear and covariant? Then the optimization process amounts to a sticky inverse problem.

Fortunately, lots of clever people have thought about these problems. I've even seen them implemented in subsurface software. Cool-sounding combinatorial reduction techniques like Greco-Latin squares or Latin hypercubes offer ways to intelligently sample the parameter space and organize the results. We could do the same with socks, evaluating pattern and toe colour separately...
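
If you want to experiment, SciPy ships a quasi-Monte Carlo module that makes Latin hypercube sampling easy to try. This is a sketch; the three parameters and their ranges are invented:

    from scipy.stats import qmc   # needs SciPy >= 1.7

    # Hypothetical parameters: window length (ms), Gaussian sigma, threshold.
    lower, upper = [8, 0.5, 0.1], [64, 4.0, 0.9]

    sampler = qmc.LatinHypercube(d=3, seed=42)
    designs = qmc.scale(sampler.random(n=20), lower, upper)

    for window, sigma, threshold in designs:
        pass   # run the algorithm and record the cost function for each design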

The mixing board

There is another option: the mixing board. Like a music producer, a film editor, or the Lytro camera, I can leave the raw data in place, and always work from the masters. Given the right tools, I can make myself just the right pair of socks whenever I like.

This way we can navigate the parameter space, applying views, processes, or other tools on the fly. Clearly this would mean changing everything about the way we work. We'd need a totally different approach not just to interpretation, but to the entire subsurface characterization workflow.

Are there other ways to avoid choosing? What are people using in other industries, or other sciences? I think we need to invite some experimental design and machine learning people to SEG...

Cole & Parker socks are awesome. The quilt image is by missvancamp on Flickr and licensed CC-BY. The spools are by surfzone on Flickr, licensed CC-BY. Many thanks to Cole & Parker for permission to use the sock images, despite not knowing what on earth I was going to do with them. Buy their socks! They're Canadian and everything.

Picking parameters

One of the reasons I got interested in programming was to get smarter about broken workflows like this one from a generic seismic interpretation tool (I'm thinking of Poststack-PAL, but does that even exist any more?)...

  1. I want to make a coherence volume, which requires me to choose a window length.
  2. I use the default on a single line and see how it looks, then try some other values at random.
  3. I can't remember what I did so I get systematic: I try 8 ms, 16 ms, 32 ms, and 64 ms, saving each one as a result with _XXms appended so I can remember what I did
  4. I display them side by side but the windows are annoying to line up and resize, so instead I do it once, display them one at a time, grab screenshots, and import the images into PowerPoint because let's face it I'll need that slide eventually anyway
  5. I can't decide between 16 ms and 32 ms so I try 20 ms, 24 ms, and 28 ms as well, and do it all again, and gaaah I HATE THIS STUPID SOFTWARE

There has to be a better way.

Stumbling towards optimization

Regular readers will know that this is the time to break out the IPython Notebook. Fear not: I will focus on the outcomes here — for the real meat, go to the Notebook. Or click on these images to see larger versions, and code.

Let's run through using the Canny edge detector in scikit-image, a brilliant image processing Python library. The algo uses the derivative of a Gaussian to compute the gradient, and I have to choose three parameters. First, we'll try to optimize 'sigma', the width of the Gaussian. Let's try the default value of 1:
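
Something like this reproduces that first step. The notebook used a seismic image; here I'm substituting scikit-image's built-in sample photo, so treat it as a stand-in:

    import matplotlib.pyplot as plt
    from skimage import data, feature

    image = data.camera()                      # stand-in for the seismic section
    edges = feature.canny(image, sigma=1)      # sigma = 1 is the default

    plt.imshow(edges, cmap='gray')
    plt.title('Canny edges, sigma = 1')
    plt.show()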

Clearly, there is too much noise in the result. Let's try the interval method that drove me crazy in desktop software:
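
In code, the brute-force interval survey is just a loop over sigma (again a sketch, with the same stand-in image):

    import matplotlib.pyplot as plt
    from skimage import data, feature

    image = data.camera()                      # stand-in image again
    sigmas = [1, 2, 4, 8, 16]

    fig, axes = plt.subplots(1, len(sigmas), figsize=(15, 3))
    for ax, s in zip(axes, sigmas):
        ax.imshow(feature.canny(image, sigma=s), cmap='gray')
        ax.set_title('sigma = {}'.format(s))
        ax.axis('off')
    plt.show()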

Well, I think something between 8 and 16 might work. I could compute the average intensity of each image, choose a value in between them, and then use the sigma that gives that result. OK, it's a horrible hack, but it turns out to be 10:
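
You can even automate the hack with a root finder. This is a sketch of the idea rather than the notebook's code, so the answer on the stand-in image won't be 10:

    from scipy.optimize import brentq
    from skimage import data, feature

    image = data.camera()                      # stand-in image again

    dens8 = feature.canny(image, sigma=8).mean()
    dens16 = feature.canny(image, sigma=16).mean()
    target = 0.5 * (dens8 + dens16)            # the edge density we want to hit

    # Edge density generally falls as sigma rises, so bracket the root between 8 and 16.
    sigma_opt = brentq(lambda s: feature.canny(image, sigma=s).mean() - target, 8, 16)
    print(sigma_opt)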

But the whole point of scientific computing is the efficient application of informed human judgment. So let's try adding some interactivity — then we can explore the 3D parameter space in a near-parallel instead of purely serial way:
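
In a notebook, a few lines of ipywidgets (an assumption on my part; any interactive widget library will do) put sliders on all three parameters at once:

    import matplotlib.pyplot as plt
    from ipywidgets import interact
    from skimage import data, feature

    image = data.camera()                      # stand-in image again

    @interact(sigma=(1, 16, 1), low=(0.0, 0.5, 0.02), high=(0.05, 1.0, 0.02))
    def show(sigma=4, low=0.1, high=0.2):
        edges = feature.canny(image, sigma=sigma,
                              low_threshold=low, high_threshold=high)
        plt.imshow(edges, cmap='gray')
        plt.show()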

I finally feel like we're getting somewhere... But it still feels a bit arbitrary. I still don't know I'm getting the optimal result.

What can I try next? I could try to extend the 'goal seek' option, and come up with a more sophisticated cost function. If I could define something well enough — for edge detection, like coherence, I might be interested in contrast — then I could potentially just find the best answers, in much the same way that a digital camera autofocuses (indeed, many of them look for the highest contrast image). But goal seeking, if the cost function is too literal, in a way begs the question. I mean, you sort of have to know the answer — or something about the answer — before you find it.

Social machines

Social machines are the hot new thing in computing (Big Data is so 2013). Perhaps instead I can turn to other humans, in my social and professional networks. I could...

  • Ask my colleagues — perhaps my company has a knowledge sharing network I can go to.
  • Ask t'Internet — I could ask Twitter, or my friends on Facebook, or a seismic interpretation group in LinkedIn. Better yet, Earth Science Stack Exchange!
  • What if the software I was using just told me what other people had used for these parameters? Maybe this is only one step up from the programmer's default... especially if most people just use the programmer's default.
  • But what if people could rate the outcome of the algorithm? What if their colleagues or managers could rate the outcome? Then I could weight the results with these ratings.
  • What if there was a game that involved optimizing images (OK, maybe a bit of a stretch... maybe more like a Mechanical Turk). Then we might have a vast crowd of people all interested in really pushing the edge of what is intuitively reasonable, and maybe exploring the part of the parameter space I'm most interested in.

What if I could combine the best of all these approaches? Interactive exploration, with guided optimization, constrained by some cost function or other expectation. That could be interesting, but unfortunately I have absolutely no idea how that would work. I do think the optimization workflow of the future will contain all of these elements.

What do you think? Do you have an awesome way to optimize the parameters of seismic attributes? Do you have a vision for how it could be better? It occurs to me this could be a great topic for a future hackathon...

Click here for an IPython Notebook version of this blog post. If you don't have it, IPython is easy to install. The easiest way is to install all of scientific Python, or use Canopy or Anaconda.

Cross sections into seismic sections

We've added to the core functionality of modelr. Instead of creating an arbitrarily shaped wedge (which is plenty useful in its own right), users can now create a synthetic seismogram out of any geology they can think of, or extract from their data.

Turn a geologic-section into an earth model

We implemented a colour picker within an image processing scheme, so that each unique colour gets mapped to an editable rock type. Users can create and manage their own rock property catalog, and save models as templates to share and re-use. You can use as many or as few colours as you like, and you'll never run out of rocks.
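
Under the hood, the mapping is conceptually simple. Here's a hedged sketch of turning an image's unique colours into an integer earth model; it is not modelr's actual code, and the filename and property values are invented:

    import numpy as np
    from skimage import io

    img = io.imread('strat_section.png')[..., :3]    # assumes an RGB image; drop any alpha

    # Every unique colour becomes an integer rock index.
    colours, idx = np.unique(img.reshape(-1, 3), axis=0, return_inverse=True)
    earth_model = idx.reshape(img.shape[:2])

    # A toy rock catalog: map each index to acoustic properties (Vp in m/s, rho in kg/m3).
    catalog = {i: {'vp': 2500 + 250 * i, 'rho': 2300 + 50 * i} for i in range(len(colours))}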

To give an example, let's use the stratigraphic diagram that Bruce Hart used in making synthetic seismic forward models in his recent Whither seismic stratigraphy article. There are 7 unique colours, so we can generate an earth model by assigning a rock to each of the colours in the image.

If you can imagine it, you can draw it. If you can draw it, you can model it.

Modeling as an interactive experience

We've exposed parameters in the interface so you can interact with the multidimensional seismic data space. Why is this important? Well, modeling shouldn't be a one-shot deal. It's an iterative process: a feedback cycle where you turn knobs, pull levers, and learn about the behaviour of a physical system; in this case, the interplay between geologic units and seismic waves.

A model isn't just a single image, but a swath of possibilities teased out by varying a multitude of inputs. With modelr, the seismic experiment can be manipulated, so that the gamut of geologic variability can be explored. That process is how we train our ability to see geology in seismic.
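
For what it's worth, the core of such a forward model is just a convolution. Here's a minimal 1D sketch with made-up impedances, not modelr's implementation:

    import numpy as np

    def ricker(f, length=0.128, dt=0.001):
        """Ricker wavelet with peak frequency f in Hz."""
        t = np.arange(-length / 2, length / 2, dt)
        return (1.0 - 2.0 * (np.pi * f * t)**2) * np.exp(-(np.pi * f * t)**2)

    # Toy three-layer earth model: acoustic impedance (velocity x density).
    imp = np.concatenate([np.full(100, 6.0e6), np.full(80, 7.5e6), np.full(120, 5.5e6)])

    rc = np.diff(imp) / (imp[1:] + imp[:-1])   # reflection coefficients
    synthetic_25 = np.convolve(rc, ricker(25), mode='same')   # low frequency
    synthetic_75 = np.convolve(rc, ricker(75), mode='same')   # high frequency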

Hart's paper doesn't specifically mention the rock properties used, so it's difficult to match amplitudes, but you can see here how modelr stands up next to Hart's images for high (75 Hz) and low (25 Hz) frequency Ricker wavelets.

There are some cosmetic differences too... I've used fewer wiggle traces to make it easier to see the seismic waveforms. And I think Bruce forgot the blue strata on his 25 Hz model. But I like this display, with the earth model in the background, and the wiggle traces on top — geology and seismic blended in the same graphical space, as they are in the real world, albeit briefly.


Subscribe to the email list to stay in the loop with modelr news, or sign up at modelr.io and get started today.


Seismic models: Hart, B S (2013). Whither seismic stratigraphy? Interpretation 1 (1). The image is copyright of SEG and AAPG.