What makes a good benchmark dataset?

Last week I mentioned that we need more open benchmark datasets in geoscience. I think benchmarks are important for researchers to work on, as a teaching aid, and as a way for us to objectively measure how well we’re doing on a particular problem. How else can we know how we’re doing, or compare Company X’s claim with Company Y’s?

What makes a good benchmark?

I haven’t unearthed any guides from other domains to help answer this question, and we don’t yet have enough experience to know for ourselves. But here’s what I’m thinking:

  • It must address at least one clear machine learning task. The more obviously useful the task, the more useful (and important) the benchmark. The benchmark dataset should be well suited to the task (but does not have to be comprehensive or definitive).

  • It must be open. That means explicitly licensed with an open, and preferably permissive, license. I think we need to avoid non-permissive (so-called ‘copyleft’) licenses, because it’s not clear how the ‘sharealike’ clause would affect works that depended on the dataset. And we definitely need to avoid restrictive non-commercial clauses.

  • It must be discoverable and accessible. In other words, it needs to be easy to find, and anyone should be able to get it, without registering on a website or waiting for an email or doing anything else that slows down the pace of their research. A properly open dataset can be replicated anywhere, so openness should take care of this.

  • It must have enough features to be interesting. This might mean different things for different tasks, but in general we’d like to see a few physical measurements (e.g. seismic, well logs, RockEval, core photos, field observations, flow rates, and so on). The features should be independent — we can always generate derivatives.

  • It must have labels. As well as some interesting features, the dataset must have some interpretive information with high information value (e.g. seismic facies, lithologies, depositional environment, sequence boundaries, EURs, and so on). Usually, these are expensive to acquire (which is partly why we’d like to be able to predict them).

  • It should name suitable prediction error evaluation methods, with reference implementations, for the intended task (see the sketch after this list). If people are to use it as a scoring benchmark, they need to know how to score their own implementations of the task.

  • It can be de-localized, but not completely. We don’t need to know the exact whereabouts of the dataset, but if we remove the relative spatial relationships between wells, say, or don’t know which basin we’re in, then the questions we can ask about the data get a lot less interesting, and the whole situation gets much less realistic.

  • It should not be too big. More than about 1GB means unwieldy. It means difficult to download. It means too much room for nuance. And it means it’s probably impossible to explore in the space of a tutorial. It’s also much harder to get a big dataset into shape than a smaller one. A few thousand records, maybe 100,000 in some cases, is probably plenty.

  • It should be clean, but not too clean. No-one wants to spend hours processing a dataset before it can be used, or — worse — be bitten by some esoteric data problem only a domain expert would spot. But, on the other hand, a dataset with no issues at all might be a bit boring. And, in subsurface at least, completely unrepresentative!

  • It should be well documented. The dataset needs to be described to non-technical people, who know little or nothing about the subsurface. Remember that many users will not be proficient programmers either, so…

  • It should have an accompanying demonstration. For example, a script or notebook, preferably in at least a couple of languages, that shows how to load and inspect the data. Ideally this would include a demonstration of how to pose, and answer, a straightforward question as a machine learning task.

I’m not sure we can call this last one a criterion, but maybe in an ideal world…

  • It should be launched with a data science contest. If you’re feeling really brave, what better way to attract attention to the new open dataset than with a Kaggle-style contest?
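
On the scoring point above, here's a minimal sketch of what a reference scoring implementation might look like for, say, a facies classification benchmark. The file names, the 'facies' column, and the choice of micro-averaged F1 are all hypothetical; the point is simply that the benchmark should ship something this explicit.

# Hypothetical reference scorer for a facies classification benchmark.
# The CSV names and the 'facies' column are invented for illustration.
import pandas as pd
from sklearn.metrics import f1_score

def score(predictions_csv, labels_csv='benchmark_test_labels.csv'):
    """Return the micro-averaged F1 score for a set of predictions."""
    y_pred = pd.read_csv(predictions_csv)['facies']
    y_true = pd.read_csv(labels_csv)['facies']
    return f1_score(y_true, y_pred, average='micro')

# Usage: print(score('my_predictions.csv'))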

It’s certainly true that there are several datasets around. Unfortunately, the openness criterion eliminates most of them, so they fall at the first hurdle. For example, the very nice dataset that Brendon Hall used in the SEG machine learning contest is not open.

If you know of a dataset that could be coerced into meeting most of these criteria, we’d like to hear about it. I know a small army of people that would love to help get it into the open, and into the hands of machine learning researchers all over the world.


The thumbnail image for this post was adapted from an image by user arg_flickr on Flickr, licensed CC-BY.

Thanks to several people on Software Underground, for the discussion on this topic. In particular, Justin Gosses and Lukas Mosser pointed out the need for transparent error evaluation.

The next thing

Over the last several years, Agile has been testing some of the new ways of collaborating, centered on digital connections:

2010-2019-timeline.png
  • It all started with this blog, which started in 2010 with my move from Calgary to Nova Scotia. It’s become a central part of my professional life, but we’re all about collaboration and blogs are almost entirely one-way, so…

  • In 2011 we launched SubSurfWiki. It didn’t really catch on, although it was a good basis for some other experiments and I still use it sometimes. Still, we realized we had to do more to connect the community, so…

  • In 2012 we launched our 52 Things collaborative, open access book series. There are well over 5000 of these out in the wild now, but it made us crave a real-life, face-to-face collaboration, so…

  • In 2013 we held the first ‘unsession’, a mini-unconference, at the Canada GeoConvention. Over 50 people came to chat about unsolved problems. We realized we needed a way to actually work on problems, so…

  • Later that year, we followed up with the first geoscience hackathon. Around 15 or so of us gathered in Houston for a weekend of coding and tacos. We realized that the community needed more coding skills, so…

  • In 2014 we started teaching a one-day Python course aimed squarely at geoscientists. We only teach with subsurface data and algorithms, and the course is now 5 days long. We now needed a way to connect all these new hackers and coders, so…

  • In 2014, together with Duncan Child, we also launched Software Underground, a chat room for discussing topics related to the earth and computers. Initially it was a Google Group but in 2015 we relaunched it as an open Slack team. We wanted to double down on scientific computing, so…

  • In 2015 and 2016 we launched a new web app, Pick This (returning soon!), and grew our bruges and welly open source Python projects. We also started building more machine learning projects, and getting really good at it.

Growing and honing

We have spent the recent years growing and honing these projects. The blog gets about 10,000 readers a month. The sixth 52 Things book is on its way. We held two public unsessions this year. The hackathons have grown to 60 or so hackers each and have had about 400 participants in total; there have been five of them this year already (plus three to come!). We have also taught Python to 400 geoscientists, including 250 this year alone. And the Software Underground has over 1000 members.

In short, geoscience has gone digital, and we at Agile are grateful and excited to be part of it. At no point in my career have I been more optimistic and energized than I am right now.

So it’s time for the next thing.

The next thing is starting with a new kind of event. The first one is 5 to 11 May 2019, and it’s happening in France. I’ll tell you all about it tomorrow.

Volve: not open after all

Back in June, Equinor made the bold and exciting decision to release all its data from the decommissioned Volve oil field in the North Sea. Although the intent of the release seemed clear, the dataset did not carry a license of any kind. Since you cannot use unlicensed content without permission, this was a problem. I wrote about this at the time.

To its credit, Equinor listened to the concerns from me and others, and considered its options. Sensibly, it chose an off-the-shelf license. It announced its decision a few days ago, and the dataset now carries a Creative Commons Attribution-NonCommercial-ShareAlike license.

Unfortunately, this license is not ‘open’ by any reasonable definition. The non-commercial stipulation means that a lot of people, perhaps most people, will not be able to legally use the data (which is why non-commercial licenses are not open licenses). And the ShareAlike part means that we’re in for some interesting discussion about what derived products are, because any work based on Volve will have to carry the CC BY-NC-SA license too.

Non-commercial licenses are not open

Here are some of the problems with the non-commercial clause:

  • NC licenses come at a high societal cost: they provide a broad protection for the copyright owner, but strongly limit the potential for re-use, collaboration and sharing in ways unexpected by many users.

  • NC licenses are incompatible with CC-BY-SA. This means that the data cannot be used on Wikipedia, SEG Wiki, or AAPG Wiki, or in any openly licensed work carrying that license.

  • NC-licensed data cannot be used commercially. This is obvious, but far-reaching. It means, for example, that nobody can use the data in a course or event for which they charge a fee. It means nobody can use the data as a demo or training data in commercial software. It means nobody can use the data in a book that they sell.

  • The boundaries of the license are unclear. It's arguable whether any business can use the data for any purpose at all, because many of the boundaries of the scope have not been tested legally. What about a course run by AAPG or SEG? What about a private university? What about a government, if it stands to realize monetary gain from, say, a land sale? All of these uses would be illegal, because it’s the use that matters, not the commercial status of the user.

Now, it seems likely, given the language around the release, that Equinor will not sue people for most of these use cases. They may even say this. Goodness knows, we have enough nudge-nudge-wink-wink agreements like that already in the world of subsurface data. But these arrangements just shift the onus onto the end user and, as we’ve seen with GSI, things can change and one day you wake up with lawsuits.

ShareAlike means you must share too

Creative Commons licenses are, as the name suggests, intended for works of creativity. Indeed, the whole concept of copyright depends on creativity: copyright protects works of creative expression. If there’s no creativity, there’s no basis for copyright. So, for example, a gamma-ray log is unlikely to be copyrightable, but seismic data is (follow the GSI link above to find out why). Non-copyrightable works are not covered by Creative Commons licenses.

All of which is just to help explain some of the language in the CC BY-NC-SA license agreement, which you should read. But the key part is in paragraph 4(b):

You may distribute, publicly display, publicly perform, or publicly digitally perform a Derivative Work only under the terms of this License

What’s a ‘derivative work’? It’s anything ‘based upon’ the licensed material, which is pretty vague and therefore all-encompassing. In short, if you use or show Volve data in your work, no matter how non-commercial it is, then you must attach a CC BY-NC-SA license to your work. This is why SA licenses are sometimes called ‘viral’.

By the way, the much-loved F3 and Penobscot datasets also carry the ShareAlike clause, so any work (e.g. a scientific paper) that uses them is open-access and carries the CC BY-SA license, whether the author of that work likes it or not. I’m pretty sure no-one in academic publishing knows this.

By the way again, everything in Wikipedia is CC BY-SA too. Maybe go and check your papers and presentations now :)

problems-dont-have.png

What should Equinor do?

My impression is that Equinor is trying to satisfy some business partner or legal edge case, but they are forgetting that they have orders of magnitude more capacity to deal with edge cases than the potential users of the dataset do. The principle at work here should be “Don’t solve problems you don’t have”.

Encumbering this amazing dataset with such tight restrictions effectively kills it. It more or less guarantees it cannot have the impact I assume they were looking for. I hope they reconsider their options. The best choice for any open data is CC-BY.

Big open data... or is it?

Huge news for data scientists and educators. Equinor, the company formerly known as Statoil, has taken a bold step into the open data arena. On Thursday last week, it 'disclosed' all of its subsurface and production data for the Volve oil field, located in the North Sea. 

What's in the data package?

A lot! The 40,000-file package contains 5 TB of data: that's 5,000 GB!

volve_data.png

This collection is substantially larger, both deeper and broader, than any other open subsurface dataset I know of. Most excitingly, Equinor has released a broad range of data types, from reports to reservoir models: 3D and 4D seismic, well logs and real-time drilling records, and everything in between. The only slight problem is that the seismic data are bundled in very large files at the moment; we've asked for them to be split up.

Questions about usage rights

Regular readers of this blog will know that I like open data. One of the cornerstones of open data is access, and there's no doubt that Equinor have done something incredible here. It would be preferable not to have to register at all, but free access to this dataset — which I'm guessing cost more than USD500 million to acquire — is an absolutely amazing gift to the subsurface community.

Another cornerstone is the right to use the data for any purpose. This involves the owner granting certain privileges, such as the right to redistribute the data (say, for a class exercise) or to share derived products (say, in a paper). I'm almost certain that Equinor intends the data to be used this way, but I can't find anything actually granting those rights. Unfortunately, if they aren't explicitly granted, the only safe assumption is that you cannot share or adapt the data.

For reference, here's the language in the CC-BY 4.0 licence:

 

Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:

  1. reproduce and Share the Licensed Material, in whole or in part; and
  2. produce, reproduce, and Share Adapted Material.
 

You can dig further into the requirements for open data in the Open Data Handbook.

The last thing we need is yet another industry dataset with unclear terms, so I hope Equinor attaches a clear licence to this dataset soon. Or, better still, just uses a well-known licence such as CC-BY (this is what I'd recommend). This will clear up the matter and we can get on with making the most of this amazing resource.

More about Volve

The Volve field was discovered in 1993, but not developed until 15 years later. It produced oil and gas for 8.5 years, starting on 12 February 2008 and ending on 17 September 2016, though about half of that came in the first 2 years (see below). The facility was the Maersk Inspirer jack-up rig, standing in 80 m of water, with an oil storage vessel in attendance. Gas was piped to Sleipner A. In all, the field produced 10 million Sm³ (63 million barrels) of oil, so is small by most standards, with a peak rate of 56,000 barrels per day.

Volve production over time in standard m³ (i.e. at 20°C). Multiply by 6.29 for barrels.

The production was from the Jurassic Hugin Formation, a shallow-marine sandstone with good reservoir properties, at a depth of about 3000 m. The top reservoir depth map from the discovery report in the data package is shown here. (I joined Statoil in 1997, not long after this report was written, and the sight of this page brings back a lot of memories.)

 

The top reservoir depth map from the discovery report. The Volve field (my label) is the small closure directly north of Sleipner East, with 15/9-19 well on it.

 

Get the data

To explore the dataset, you must register in the 'data village', which Equinor has committed to maintaining for 2 years. It only takes a moment. You can get to it through this link.

Let us know in the comments what you think of this move, and do share what you get up to with the data!

Weekend worship in Salt Lake City

The Salt Lake City hackathon — only the second we've done with a strong geology theme — is a thing of history, but you can still access the event page to check out who showed up and who did what. (This events page is a new thing we launched in time for this hackathon; it will serve as a public document of what happens at our events, in addition to being a platform for people to register, sponsor, and connect around our events.) 

In true seat-of-the-pants hackathon style we managed to set up an array of webcams and microphones to record the finale. The demos are the icing on the cake. Teams were selected at random and were given 4 minutes to wow the crowd. Here is the video, followed by a summary of what each team got up to... 


Unconformist.ai

Didi Ooi (University of Bristol), Karin Maria Eres Guardia (Shell), Alana Finlayson (UK OGA), Zoe Zhang (Chevron). The team used machine learning to automate the mapping of unconformities in subsurface data. One of the trickiest parts is building up a catalog of data-model pairs for GANs to train on. Instead of relying on thousands or hundreds of thousands of human-made seismic interpretations, the team generated training images by programmatically labelling pixels on synthetic data as being either above (white) or below (black) the unconformity (something like the sketch below). Project page. Slides.

unconformist.ai_preso
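
The labelling step they describe boils down to something like this numpy sketch (this is not the team's code): a synthetic unconformity surface, one depth per trace, marks every pixel as above (white) or below (black). The array shapes and the sinusoidal surface are invented for illustration.

# Sketch: label pixels above or below a synthetic unconformity to make
# binary training images. Shapes and surface are arbitrary.
import numpy as np

n_traces, n_samples = 256, 256

# A made-up, gently undulating unconformity: one depth (in samples) per trace.
x = np.linspace(0, 2 * np.pi, n_traces)
horizon = (n_samples // 2 + 20 * np.sin(3 * x)).astype(int)

# Compare every pixel's depth with the horizon depth in its trace.
depths = np.arange(n_samples)[:, None]        # column vector of sample indices
label = np.where(depths < horizon, 1.0, 0.0)  # 1 = above (white), 0 = below (black)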

Outcrops Gee Whiz

Thomas Martin (soon Colorado School of Mines), Zane Jobe (Colorado School of Mines), Fabien Laugier (Chevron), and Ross Meyer (Colorado School of Mines). The team wrote some programs to evaluate facies variability along drone-derived digital outcrop models. They did this by processing UAV point-cloud data in Python and classifying different rock facies using weathering profiles, local cliff-face morphology, and rock colour variations as attributes. This research will help in the development of drone-assisted 3D scanning to automate the mapping of facies boundaries and rock characterization. Repo. Slides.

SLC_outcrop_geewhiz_blog.png

Jet Loggers

Eirik Larsen and Dimitrios Oikonomou (Earth Science Analytics), and Steve Purves (Euclidity). This team of European geoscientists, with their circadian clocks all out of whack, investigated whether a language of stratigraphy can be extracted from the rock record and, if so, whether it can be used as another tool for classifying rocks. They applied natural language processing (NLP) to an alphabetic encoding of well logs as a means to assist or augment the labour-intensive tasks of classifying stratigraphic units and picking tops. Slides.

 

 

SLC_NLP_jet_loggers.png
SLC_bookcliffs.png

Book Cliffs Bandits

Tom Creech (ExxonMobil) and Jesse Pisel (Wyoming State Geological Survey). The team started munging datasets in the Book Cliffs. Unfortunately, they really did not have the perfect, ready-to-go data, and by the time they pivoted to some workable open data from Alaska, their team name had already become a thing. The goal was to build a tool to assist with lithology and stratigraphic correlation. They settled on change-point detection using Bayesian statistics, which they used to build richer feature sets and to test whether it could produce more robust automatic stratigraphic interpretation. Repo, and presentation.

 

 

A channel runs through it

Nam Pham (UT Austin), Graham Brew (Dynamic Graphics), Nathan Suurmeyer (Shell). Because morphologically realistic 3D synthetic seismic data is scarce, this team wrote a Python program that can take seismic horizon interpretations from real data, then construct richer training data sets for building an AI that can automatically delineate geological entities in the subsurface. The pixels enclosed by any two horizons are labelled with ones; pixels outside this region are labelled with zeros. This work was in support of Nam's thesis research, which uses the SegNet architecture and aims to extract not only major channel boundaries in seismic data, but also the internal channel structure and variability – details that many seismic interpreters, armed even with state-of-the-art attribute toolboxes, would be unable to resolve. Project page, and code.
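
Again, this is not the team's code, but the labelling rule they describe (ones between two horizons, zeros everywhere else) amounts to a couple of lines of numpy. The horizon arrays here stand in for real picks, one depth per trace.

# Sketch: label the pixels enclosed by two horizon picks with 1, the rest with 0.
import numpy as np

def between_horizons(top, base, n_samples):
    """top, base: 1D arrays of sample indices, one per trace, with top < base."""
    depths = np.arange(n_samples)[:, None]    # (n_samples, 1) column of depths
    return ((depths >= top) & (depths < base)).astype(np.uint8)

# e.g. labels = between_horizons(top_pick, base_pick, n_samples=512)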

GeoHacker

Malcolm Gall (UK OGA), Brendon Hall and Ben Lasscock (Enthought). Innovation happens when hackers have the ability to try things... but they also need data to try things out on. There is a massive shortage of geoscience datasets that have been staged and curated for machine learning research. Team Geohacker's project wasn't a project per se, but a platform aimed at the sharing, distribution, and long-term stewardship of geoscience data benchmarks: a place where we can take geoscience data and put it online in a form that's ready to use for machine learning. The subsurface realm is swimming with disparate data types across a dizzying range of length scales, and indeed community efforts may be the only way to prove machine learning's usefulness and keep the hype in check. It's not only about being open, online and accessible, either. Good datasets, like good software, need to be hosted by individuals, properly documented, enriched with tutorials and getting-started guides, not to mention properly funded. Website.

SLC_petrodict.png

Petrodict

Mark Mlella (Univ. Louisiana, Lafayette), Matthew Bauer (Anschutz Exploration), Charley Le (Shell), Thomas Nguyen (Devon). Petrodict is a machine-learning driven, cloud-based lithology prediction tool that takes petrophysics measurements (well logs) and gives back lithology. Users upload a triple combo log to the app, and the app returns that same log with volumetric fractions for its various lithologic or mineralogical constituents. For training, the team selected several dozen wells that had elemental capture spectroscopy (ECS) logs – a premium tool that is run only in a small fraction of wells – as well as triple combo measurements, and used them to build a model for predicting lithology. Repo.
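
To give a feel for the shape of that workflow, here's a hedged sketch of the kind of model you might fit for this task. It is not Petrodict itself: the curve mnemonics, the target fractions, and the random forest are all assumptions.

# Sketch of a lithology-fraction predictor: triple-combo logs in,
# ECS-style mineral volume fractions out. All column names are invented.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

logs = pd.read_csv('training_wells.csv')      # hypothetical training table
X = logs[['GR', 'RHOB', 'NPHI']]              # triple-combo measurements
y = logs[['clay', 'quartz', 'carbonate']]     # target volume fractions

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200).fit(X_train, y_train)
print('Held-out R²:', model.score(X_test, y_test))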

Seismizor

George Hinkel, Vivek Patel, and Alex Waumann (all from University of Texas at Arlington). Earthquakes are hard. This team of computer science undergraduate students drove in from Texas to spend their weekend with all the other geo-enthusiasts. What problem in subsurface oil and gas did they identify as being important, interesting, and worthy of their relatively unvested attention? They took on the problem of induced seismicity, testing whether machine learning and analytics can be used to predict the likelihood that injected waste water from fracking will cause an earthquake like the ones that have been making news in Oklahoma. The majority of this team's time was spent doing what all good scientists do – understanding the physical system they were trying to investigate – unabashedly pulling a number of the more geomechanically inclined hackers from neighbouring teams and peppering them with questions. Induced seismicity is indeed a complex phenomenon, but George's realization that "we massively overestimated the availability of data" struck a chord, I think, with the judges and the audience. Another systemic problem. The dynamic earth – incredible in its complexity and forces – coupled with the fascinating and politically charged technologies we use for drilling and fracking might be one of the hardest problems for machine learning to attack in the subsurface.


AAPG next year is in San Antonio. If it runs, the hackathon will be 18–19 May. Mark your calendar and stay tuned!

Unweaving the rainbow

Last week at the Canada GeoConvention in Calgary I gave a slightly silly talk on colourmaps with Matteo Niccoli. It was the longest, funnest, and least fruitful piece of research I think I've ever embarked upon. And that's saying something.

Freeing data from figures

It all started at the Unsession we ran at the GeoConvention in 2013. We asked a roomful of geoscientists, 'What are the biggest unsolved problems in petroleum geoscience?'. The list we generated was topped by Free the data, and that one topic alone has inspired several projects, including this one. 

Our goal: recover digital data from any pseudocoloured scientific image, without prior knowledge of the colourmap.

I subsequently proffered this challenge at the 2015 Geophysics Hackathon in New Orleans, and a team from Colorado School of Mines took it on. Their first step was to plot a pseudocoloured image in (red, green, blue) space, which reveals the colourmap and brings you tantalizingly close to retrieving the data. Or so it seems...

Here's our talk:

Copyright and seismic data

Seismic company GSI has sued a lot of organizations recently for sharing its copyrighted seismic data, undermining its business. A recent court decision found that seismic data is indeed copyrightable, but Canadian petroleum regulations can override the copyright. This allows data to be disclosed by the regulator and copied by others — made public, effectively.


Seismic data is not like other data

Data is uncopyrightable. Like facts and ideas, data is considered objective, uncreative — too cold to copyright. But in an important ruling last month, the Honourable Madam Justice Eidsvik established at the Alberta Court of Queen's Bench that seismic data is not like ordinary data. According to this ruling:

 

...the creation of field and processed [seismic] data requires the exercise of sufficient skill and judgment of the seismic crew and processors to satisfy the requirements of [copyrightability].

 

These requirements were established in the case of the publisher CCH Canadian Limited vs The Law Society of Upper Canada (2004) in the Supreme Court of Canada. Quoting from that ruling:

 

What is required to attract copyright protection in the expression of an idea is an exercise of skill and judgment. By skill, I mean the use of one’s knowledge, developed aptitude or practised ability in producing the work. By judgment, I mean the use of one’s capacity for discernment or ability to form an opinion or evaluation by comparing different possible options in producing the work.

 

Interestingly:

 

There exist no cases expressly deciding whether Seismic Data is copyrightable under the American Copyright Act [in the US].

 

Fortunately, Justice Eidsvik added this remark to her ruling — just in case there was any doubt:

 

I agree that the rocks at the bottom of the sea are not copyrightable.

It's really worth reading through some of the ruling, especially sections 7 and 8, entitled Ideas and facts are not protected and Trivial and purely mechanical respectively. 

Why are we arguing about this?

This recent ruling about seismic data was the result of an action brought by Geophysical Service Incorporated against pretty much anyone they could accuse of infringing their rights in their offshore seismic data, by sharing it or copying it in some way. Specifically, the claim was that data they had been required to submit to regulators like the C-NLOPB and the C-NSOPB was improperly shared, undermining its business of shooting seismic data on spec.

You may not have heard of GSI, but the company has a rich history as a technical and business innovator. The company was the precursor to Texas Instruments, a huge player in the early development of computing hardware — seismic processing was the 'big data' of its time. GSI still owns the largest offshore seismic dataset in Canada. Recently, however, the company seems to have focused entirely on litigation.

The Calgary company brought more than 25 lawsuits in Alberta alone against corporations, petroleum boards, and others. There have been other actions in other jurisdictions. This ruling is just the latest one; here's the full list of defendants in this particular suit (there were only 25 defendants, but some are named as multiple entities):

  • Devon Canada Corporation
  • Statoil Canada Ltd.
  • Anadarko Petroleum Corporation
  • Anadarko US Offshore Corporation
  • NWest Energy Corp.
  • Shoal Point Energy Ltd.
  • Vulcan Minerals Inc.
  • Corridor Resources Inc.
  • CalWest Printing and Reproductions
  • Arcis Seismic Solutions Corp.
  • Exploration Geosciences (UK) Limited
  • Lynx Canada Information Systems Ltd.
  • Olympic Seismic Ltd.
  • Canadian Discovery Ltd.
  • Jebco Seismic UK Limited
  • Jebco Seismic (Canada) Company
  • Jebco Seismic, LP
  • Jebco/Sei Partnership LLC
  • Encana Corporation
  • ExxonMobil Canada Ltd.
  • Imperial Oil Limited
  • Plains Midstream Canada ULC
  • BP Canada Energy Group ULC
  • Total S.A.
  • Total E&P Canada Ltd.
  • Edison S.P.A.
  • Edison International S.P.A.
  • ConocoPhillips Canada Resources Corp.
  • Canadian Natural Resources Limited
  • MGM Energy Corp
  • Husky Oil Limited
  • Husky Oil Operations Limited
  • Nalcor Energy – Oil and Gas Inc.
  • Suncor Energy Inc.
  • Murphy Oil Company Ltd.
  • Devon ARL Corporation

Why did people share the data?

According to Section 101 (Disclosure of Information) of the Canada Petroleum Resources Act (1985), geophysical data should be released to regulators — and thus, effectively, the public — five years after acquisition:

 

(2) Subject to this section, information or documentation is privileged if it is provided for the purposes of this Act [...]
(2.1) Subject to this section, information or documentation that is privileged under subsection 2 shall not knowingly be disclosed without the consent in writing of the person who provided it, except for the purposes of the administration or enforcement of this Act [...]

(7) Subsection 2 does not apply in respect of the following classes of information or documentation obtained as a result of carrying on a work or activity that is authorized under the Canada Oil and Gas Operations Act, namely, information or documentation in respect of

(d) geological work or geophysical work performed on or in relation to any frontier lands,
    (i) in the case of a well site seabed survey [...], or
    (ii) in any other case, after the expiration of five years following the date of completion of the work;

 

As far as I can tell, this does not necessarily happen, by the way. There seems to be a great deal of confusion in Canada about what 'seismic data' actually is — companies submit paper versions, sometimes with poor processing, or perhaps only every 10th line of a 3D. But the Canada Oil and Gas Geophysical Operations Regulations are quite clear. This is from the extensive and pretty explicit 'Final Report' requirements: 

 

(j) a fully processed, migrated seismic section for each seismic line recorded and, in the case of a 3-D survey, each line generated from the 3-D data set;

 

The intent is quite clear: the regulators are entitled to the stacked, migrated data. The full list is worth reading; it covers a large amount of data. If this is enforced, it is not done very rigorously. If these datasets ever make it into the hands of the regulators, and I doubt they ever all do, then they are still subject to the haphazard data management practices that this industry has ubiquitously adopted.

GSI argued that 'disclosure', as set out in Section 101 of the Act, does not imply the right to copy, but the court was unmoved:

 

Nonetheless, I agree with the Defendants that [Section 101] read in its entirety does not make sense unless it is interpreted to mean that permission to disclose without consent after the expiry of the 5 year period [...] must include the ability to copy the information. In effect, permission to access and copy the information is part of the right to disclose.

 

So this is the heart of the matter: the seismic data was owned and copyrighted by GSI, but the regulations specify that seismic data must be submitted to regulators, and that they can disclose that data to others. There's obvious conflict between these ideas, so which one prevails? 

The decision

There is a principle in law called Generalia Specialibus Non Derogant. Quoting from another case involving GSI:

 

Where two provisions are in conflict and one of them deals specifically with the matter in question while the other is of more general application, the conflict may be avoided by applying the specific provision to the exclusion of the more general one. The specific prevails over the more general: it does not matter which was enacted first.

 

Quoting again from the recent ruling in GSI vs Encana et al.:

 

Parliament was aware of the commercial value of seismic data and attempted to take this into consideration in its legislative drafting. The considerations balanced in this regard are the same as those found in the Copyright Act, i.e. the rights of the creator versus the rights of the public to access data. To the extent that GSI feels that this policy is misplaced, its rights are political ones – it is not for this Court to change the intent of Parliament, unfair as it may be to GSI’s interests.

 

Finally:

 

[...the Regulatory Regime] is a complete answer to the suggestion that the Boards acted unlawfully in disclosing the information and documentation to the public. The Regulatory Regime is also a complete answer to whether the copying companies and organizations were entitled to receive and copy the information and documentation for customers. For the oil companies, it establishes that there is nothing unlawful about accessing or copying the information from the Boards [...]

 

So that's it: the data was copyright, but the regulations override the copyright, effectively. The regulations were legal, and — while GSI might find the result unfair — it must operate under them. 

The decision must be another step towards the end of this ugly matter. Maybe it's the end. I'm sure those (non-lawyers) involved can't wait for it to be over. I hope GSI finds a way back to its technical core and becomes a great company again. And I hope the regulators find ways to better live up to the fundamental idea behind releasing data in the first place: that the availability of the data to the public should promote better science and better decisions for Canada's offshore. As things stand today, the whole issue of 'public subsurface data' in Canada is, frankly, a mess.

New open data and a competition

First, a quick announcement. EMC, the data storage and cloud computing company, has stepped up to sponsor the Subsurface Hackathon in Vienna in a few weeks. Their generous help will ensure a fun event with some awesome prizes — so get signed up and start planning your project!


New open data

A correspondent got in touch last week about an exciting new open seismic dataset. In the late summer and early autumn of 2015, WesternGeco acquired a large new 2D seismic survey in the Rockall Basin and the Mid North Sea High for the UK Government. The survey cost about £20 million and consists of 20,000 km of new broadband PreSTM data. At the end of March, the dataset will be released to the public for free download, along with about 20,000 km of legacy 2D data, 40,000 km of new gravity and magnetic data, and wells.

© Crown Copyright — Used under fair use provision.

If you are interested in downloading the data, the government is asking that you fill out this form — it will help them figure out what to make available, and how much infrastructure to provision. Excitingly, they are asking about angle stacks and PSTM gathers, not just the full stack. It sounds like it will be an important resource for our community.

They are even asking about interest in the field data — all 60TB of it. There will almost certainly be a fee associated with the larger datasets, by the way. I asked about this and it sounds like it will likely be on the order of several thousand pounds to handle the full SEGD data, because of course it will be on physical media. But the government is open to suggestions if the geophysical community would like to find another way to distribute the data — do let me know if you'd like to talk about this.

New seed funding

Along with the data package, the government has announced an exciting new competition for 'seed funding':

The £500,000 competition has been designed to encourage geoscientists and engineers to develop innovative interpretations and products potentially using [this new open data]...

The motivation for the competition is clear:

It is hoped the competition will not only significantly increase the understanding of these frontier areas in respect of the 29th Seaward Licensing Round later in the year, but also retain talent in the oil and gas community which has been affected by the oil and gas industry downturn.

The parameters of the competition are spelled out in the Word document on this tender notice. It sounds like almost anything goes: data analysis, product development, even exploration activity. So get creative — and pitch the coolest thing you can think of!

You'll have to get cracking though, because applications to take part must be in by 1 April. If selected, the project must be delivered on 11 November.

The Rock Property Catalog again

Do you like data? Data about rocks? Open, accessible data that you can use for any purpose without asking? Read on.

After writing about anisotropy back in February, and then experimenting with storing rock properties in SubSurfWiki later that month, a few things happened:

  • The server I run the wiki on — legacy Amazon AWS infrastructure — crashed, and my backup strategy turned out to be <cough> flawed. It's now running on state-of-the-art Amazon servers. So my earlier efforts were mostly wiped out... Leaving the road clear for a new experiment!
  • I came across an amazing resource called Mudrock Anisotropy, or — more appealingly — Mr Anisotropy. Compiled by Steve Horne, it contains over 1000 records of rocks, gathered from the literature. It is also public domain and carries only a disclaimer. But it's a spreadsheet, and emailing a spreadsheet around is not sustainable.
  • The Common Ground database, built by John A. Scales, Hans Ecke and Mike Batzle at Colorado School of Mines in the late 1990s, is now defunct: it was officially discontinued about two weeks ago. It contains over 4000 records, and is public domain. The trouble is, you have to restore a SQLite database to use it.

All this was pointing towards a new experiment. I give you: the Rock Property Catalog again! This time it contains not 66 rocks, but 5095 rocks. Most of them have \(V_\mathrm{P}\), \(V_\mathrm{S}\) and  \(\rho\). Many of them have Thomsen's parameters too. Most have a lithology, and they all have a reference. Looking for Cretaceous shales in North America to use as analogs on your crossplots? There's a rock for that.

As before, you can query the catalog in various ways, either via the wiki or via the web API. Let's say we want to find shales with a velocity over 5000 m/s. You have a few options:

  1. Go to the semantic search form on the wiki and type [[lithology::shale]][[vp::>5000]]
  2. Make a so-called inline query on your own wiki page (you need an account for this).
  3. Make a query via the web API with a rather long URL: http://www.subsurfwiki.org/api.php?action=ask&query=[[RPC:%2B]][[lithology::shale]][[Vp::>5000]]|%3FVp|%3FVs|%3FRho&format=jsonfm

I updated the Jupyter Notebook I published last time with a new query. It's pretty hacky. I'll work on this to produce a more robust method, with some error handling and cleaner code — stay tuned.
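
In the meantime, if you just want to hit the API from Python, the query in option 3 can be wrapped in a few lines with requests. This is only a sketch based on the URL above; note that it asks for plain json rather than jsonfm, and the property names must match how they are defined on the wiki.

# Sketch: query the catalog via the Semantic MediaWiki 'ask' API,
# mirroring the URL in option 3 above.
import requests

params = {
    'action': 'ask',
    'query': '[[RPC:+]][[lithology::shale]][[Vp::>5000]]|?Vp|?Vs|?Rho',
    'format': 'json',
}
r = requests.get('http://www.subsurfwiki.org/api.php', params=params)
results = r.json()['query']['results']    # a dict keyed by page title
print(len(results), 'matching rocks')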

The database supports lots of properties, including:

  • Citation and reference
  • Description, lithology, colour (you can have pictures if you want!)
  • Location, lat/lon, basin, age, depth
  • Vp, Vs, \(\rho\), as well as \(\rho_\mathrm{dry}\) and \(\rho_\mathrm{grain}\)
  • Thomsen's \(\epsilon\), \(\delta\), and \(\gamma\)
  • Static and dynamic Young's modulus and Poisson ratio
  • Confining pressure, pore pressure, effective stress, axial stress
  • Frequency
  • Fluid, saturation type, saturation
  • Porosity, permeability, temperature
  • Composition

There is more from the Common Ground data to add, especially photographs. But for now, I'd love some feedback: is this the right set of properties? Do we need more? I want this to be useful — what kind of data and metadata would you like to see? 

I'll end with the usual appeal — I'm open to any kind of suggestions or help with this. Perhaps you can contribute new rocks, or a paper containing data? Or maybe you have some wiki skills, or can help write bots to improve the data? What can you bring? 

Rock property catalog

RPC.png

One of the first things I do on a new play is to start building a Big Giant Spreadsheet. What goes in the big giant spreadsheet? Everything — XRD results, petrography, geochemistry, curve values, elastic parameters, core photo attributes (e.g. RGB triples), and so on. If you're working in the Athabasca or the Eagle Ford then one thing you have is heaps of wells. So the spreadsheet is Big. And Giant. 

But other people's spreadsheets are hard to use. There's no documentation, no references. And how to share them? Email just generates obsolete duplicates and data chaos. And while XLS files are not hard to put on the intranet or Internet,  it's hard to do it in a way that doesn't involve asking people to download the entire spreadsheet — duplicates again. So spreadsheets are not the best choice for collaboration or open science. But wikis might be...

The wiki as database

Regular readers will know that I'm a big fan of MediaWiki. One of the most interesting extensions for the software is Semantic MediaWiki (SMW), which essentially turns a wiki into a database — I've written about it before. Of course we can read any wiki page over the web, but you can query an SMW-powered wiki, which means you can, for example, ask for the elastic properties of a rock, such as this Mesaverde sandstone from Thomsen (1986). And the wiki will send you a JSON response, which looks like this once parsed into a Python dictionary:

{u'exists': True,
 u'fulltext': u'Mesaverde immature sandstone 3 (Kelly 1983)',
 u'fullurl': u'http://subsurfwiki.org/wiki/Mesaverde_immature_sandstone_3_(Kelly_1983)',
 u'namespace': 0,
 u'printouts': {
    u'Lithology': [{u'exists': True,
      u'fulltext': u'Sandstone',
      u'fullurl': u'http://www.subsurfwiki.org/wiki/Sandstone',
      u'namespace': 0}],
    u'Delta': [0.148],
    u'Epsilon': [0.091],
    u'Rho': [{u'unit': u'kg/m\xb3', u'value': 2460}],
    u'Vp': [{u'unit': u'm/s', u'value': 4349}],
    u'Vs': [{u'unit': u'm/s', u'value': 2571}]
  }
}

This might look horrendous at first, or even at last, but it's actually perfectly legible to Python. A little bit of data wrangling and we end up with data we can easily plot. It takes no more than a few lines of code to read the wiki's data, and construct this plot of \(V_\text{P}\) vs \(V_\text{S}\) for all the rocks I have so far put in the wiki — grouped by gross lithology:

A page from the Rock Property Catalog in Subsurfwiki.org. Very much an experiment, rocks contain only a few key properties today.

If you're interested in seeing how to make these queries, have a look at this IPython Notebook. It takes you through reading the data from my embryonic catalogue on Subsurfwiki, processing the JSON response from the wiki, and making the plot. Once you see how easy it is, I hope you can imagine a day when people are publishing open data on the web, and sharing tools to query and visualize it.
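
In condensed form, that workflow looks something like the sketch below. The query string and the property names are assumptions; adjust them to match the wiki.

# Sketch: query the wiki, pull Vp and Vs out of each record (structured
# like the response shown above), and crossplot them.
import requests
import matplotlib.pyplot as plt

params = {'action': 'ask', 'query': '[[Vp::+]]|?Vp|?Vs|?Lithology', 'format': 'json'}
response = requests.get('http://www.subsurfwiki.org/api.php', params=params).json()

vp, vs = [], []
for rock in response['query']['results'].values():
    p = rock['printouts']
    if p.get('Vp') and p.get('Vs'):           # skip incomplete records
        vp.append(p['Vp'][0]['value'])        # values carry units, as shown above
        vs.append(p['Vs'][0]['value'])

plt.scatter(vp, vs, alpha=0.5)
plt.xlabel('Vp [m/s]')
plt.ylabel('Vs [m/s]')
plt.show()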

Imagine it, then figure out how you can help build it!


References

Thomsen, L (1986). Weak elastic anisotropy. Geophysics 51 (10), 1954–1966. DOI 10.1190/1.1442051.