Café con leche

At the weekend, 28 digital geoscientists gathered at MAZ Café in Santa Ana, California, to sprint on some open geophysics software projects. Teams and individuals pushed pull requests — code contributions to open source projects — left, right, and centre. Meanwhile, Senah and her team at MAZ kept us plied with coffee and horchata, with fantastic food on the side.

Because people were helping each other and contributing where they could, I found it a bit hard to stay on top of what everyone was working on. But here are some of the things I heard at the project breakdown on Sunday afternoon:

Gerard Gorman, Navjot Kukreja, Fabio Luporini, Mathias Louboutin, and Philipp Witte, all from the devito project, continued their work to bring Kubernetes cluster management to devito. Trying to balance ease of use and unlimited compute turns out to be A Hard Problem! They also supported the other teams hacking on devito.

Thibaut Astic (UBC) worked on implementing DC resistivity models in devito. He said he enjoyed the expressiveness of devito’s symbolic equation definitions, but that there were some challenges with implementing the grad, div, and curl operator matrices for EM.

Vitor Mickus and Lucas Cavalcante (Campinas) continued their work implementing a CUDA framework for devito. Again, all part of the devito project trying to give scientists easy ways to scale to production-scale datasets.

That wasn’t all for devito. Alongside all these projects, Stephen Alwon worked on adapting segyio to read shot records, Robert Walker worked on poro-elastic models for devito, and Mohammed Yadecuri and Justin Clark (California Resources) contributed too. On the second day, the devito team was joined by Felix Hermann (now Georgia Tech), with Mengmeng Yang, and Ali Siakoohi (both UBC). Clearly there’s something to this technology!

Brendon Hall and Ben Lasscock (Enthought) hacked on an open data portal concept, modelled on the UCI Machine Learning Repository, which happens to be based just down the road from our location. The team successfully got some examples of open data and code snippets working.

Jesper Dramsch (Heriot-Watt), Matteo Niccoli (Birchcliffe), Yuriy Ivanov (NTNU) and Adriana Gordon and Volodymyr Vragov (U Calgary), hacked on bruges for the weekend, mostly on its documentation and the example notebooks in the in-bruges project. Yuriy got started on a ray-tracing code for us.

Nathan Jones (California Resources) and Vegard Hagen (NTNU) did some great hacking on an interactive plotting framework for geoscience data, based on Altair. What they did looked really polished and will definitely come in useful at future hackathons.

All in all, an amazing array of projects!

This event was low-key compared to recent hackathons, and I enjoyed the slightly more relaxed atmosphere. The venue was also incredibly supportive, making my life very easy.

A big thank you as always to our sponsors, Dell EMC and Enthought. The presence of the irrepressible David Holmes and Chris Lenzsch (both Dell EMC), and Enthought’s new VP of Energy, Charlie Cosad, was greatly appreciated.


We will definitely be revisiting the sprint concept in the future; einmal ist keinmal ('once is never'), as they say. Devito and bruges both got a boost from the weekend, and I think all the developers did too. So stay tuned for the next edition!

Volve: not open after all

Back in June, Equinor made the bold and exciting decision to release all its data from the decommissioned Volve oil field in the North Sea. Although the intent of the release seemed clear, the dataset did not carry a license of any kind. Since you cannot use unlicensed content without permission, this was a problem. I wrote about this at the time.

To its credit, Equinor listened to the concerns from me and others, and considered its options. Sensibly, it chose an off-the-shelf license. It announced its decision a few days ago, and the dataset now carries a Creative Commons Attribution-NonCommercial-ShareAlike license.

Unfortunately, this license is not ‘open’ by any reasonable definition. The non-commercial stipulation means that a lot of people, perhaps most people, will not be able to legally use the data (which is why non-commercial licenses are not open licenses). And the ShareAlike part means that we’re in for some interesting discussion about what derived products are, because any work based on Volve will have to carry the CC BY-NC-SA license too.

Non-commercial licenses are not open

Here are some of the problems with the non-commercial clause:

  • NC licenses come at a high societal cost: they provide a broad protection for the copyright owner, but strongly limit the potential for re-use, collaboration, and sharing in ways unexpected by many users.

  • NC licenses are incompatible with CC-BY-SA. This means that the data cannot be used on Wikipedia, SEG Wiki, or AAPG Wiki, or in any openly licensed work carrying that license.

  • NC-licensed data cannot be used commercially. This is obvious, but far-reaching. It means, for example, that nobody can use the data in a course or event for which they charge a fee. It means nobody can use the data as a demo or training data in commercial software. It means nobody can use the data in a book that they sell.

  • The boundaries of the license are unclear. It's arguable whether any business can use the data for any purpose at all, because many of the boundaries of the scope have not been tested legally. What about a course run by AAPG or SEG? What about a private university? What about a government, if it stands to realize monetary gain from, say, a land sale? All of these uses would be illegal, because it’s the use that matters, not the commercial status of the user.

Now, it seems likely, given the language around the release, that Equinor will not sue people for most of these use cases. They may even say this. Goodness knows, we have enough nudge-nudge-wink-wink agreements like that already in the world of subsurface data. But these arrangements just shift the onus onto the end user and, as we’ve seen with GSI, things can change and one day you wake up facing lawsuits.

ShareAlike means you must share too

Creative Commons licenses are, as the name suggests, intended for works of creativity. Indeed, the whole concept of copyright depends on creativity: copyright protects works of creative expression. If there’s no creativity, there’s no basis for copyright. So for example, a gamma-ray log is unlikely to be copyrightable, but seismic data is (follow the GSI link above to find out why). Non-copyrightable works are not covered by Creative Commons licenses.

All of which is just to help explain some of the language in the CC BY-NC-SA license agreement, which you should read. But the key part is in paragraph 4(b):

You may distribute, publicly display, publicly perform, or publicly digitally perform a Derivative Work only under the terms of this License

What’s a ‘derivative work’? It’s anything ‘based upon’ the licensed material, which is pretty vague and therefore all-encompassing. In short, if you use or show Volve data in your work, no matter how non-commercial it is, then you must attach a CC BY-NC-SA license to your work. This is why SA licenses are sometimes called ‘viral’.

By the way, the much-loved F3 and Penobscot datasets also carry the ShareAlike clause, so any work (e.g. a scientific paper) that uses them is open-access and carries the CC BY-SA license, whether the author of that work likes it or not. I’m pretty sure no-one in academic publishing knows this.

By the way again, everything in Wikipedia is CC BY-SA too. Maybe go and check your papers and presentations now :)


What should Equinor do?

My impression is that Equinor is trying to satisfy some business partner or legal edge case, but they are forgetting that they have orders of magnitude more capacity to deal with edge cases than the potential users of the dataset do. The principle at work here should be “Don’t solve problems you don’t have”.

Encumbering this amazing dataset with such tight restrictions effectively kills it. It more or less guarantees it cannot have the impact I assume they were looking for. I hope they reconsider their options. The best choice for any open data is CC-BY.

Reproduce this!


There’s a saying in programming: untested code is broken code. Is unreproducible science broken science?

I hope not, because geophysical research is — in general — not reproducible. In other words, we have no way of checking the results. Some of it, hopefully not a lot of it, could be broken. We have no way of knowing.

Next week, at the SEG Annual Meeting, we plan to change that. Well, start changing it… it’s going to take a while to get to all of it. For now we’ll be content with starting.

We’re going to make geophysical research reproducible again!

Welcome to the Repro Zoo!

If you’re coming to SEG in Anaheim next week, you are hereby invited to join us in Exposition Hall A, Booth #749.

We’ll be finding papers and figures to reproduce, equations to implement, and data tables to digitize. We’ll be hunting down datasets, recreating plots, and dissecting derivations. All of it will be done in the open, and all the results will be public and free for the community to use.

You can help

There are thousands of unreproducible papers in the geophysical literature, so we are going to need your help. If you’ll be in Anaheim, and even if you’re not, here are some things you can do:

That’s all there is to it! Whether you’re a coder or an interpreter, whether you have half an hour or half a day, come along to the Repro Zoo and we’ll get you started.


Figure 1 from Connolly’s classic paper on elastic impedance. This is the kind of thing we’ll be reproducing.

FORCE ML Hackathon: project round-up

The FORCE Machine Learning Hackathon last week generated hundreds of new relationships and nine new projects, including seven new open source tools. Here’s the full run-down, in no particular order…


Predicting well rates in real time

Team Virtual Flow Metering: Nils Barlaug, Trygve Karper, Stian Laagstad, Erlend Vollset (all from Cognite) and Emil Hansen (AkerBP).

Tech: Cognite Data Platform, scikit-learn. GitHub repo.

Project: An engineer from AkerBP brought a problem: testing the rate from a well reduces the pressure and therefore reduces the production rate for a short time, costing about $10k per day. His team investigated whether they could instead predict the rate from other known variables, thereby reducing the number of expensive tests.

This project won the Most Commercial Potential award.

The predicted flow rate (blue) compared to the true flow rate (orange). The team used various models, from multilinear regression to boosted trees.
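For the curious, the gist of that kind of model is easy to express in scikit-learn. This is my sketch of the idea, not the team's code (their repo has the real thing), and all the variables here are invented:

```python
# A minimal sketch of rate prediction from other sensor channels, assuming
# a boosted-tree regressor like the one the team tried. Data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical stand-ins for real channels: pressure, temperature, choke.
X = rng.normal(size=(n, 3))
rate = 2*X[:, 0] - X[:, 1] + 0.5*X[:, 2] + rng.normal(scale=0.1, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, rate, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R2 on held-out data: {model.score(X_test, y_test):.2f}")
```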


Reinforcement learning tackles interpretation

Team Gully Attack: Steve Purves, Eirik Larsen, JB Bonas (all Earth Analytics), Aina Bugge (Kalkulo), Thormod Myrvang (NTNU), Peder Aursand (AkerBP).

Tech: keras-rl. GitHub repo.

Project: Deep reinforcement learning has proven adept at learning, and winning, games, and at other tasks including image segmentation. The team tried training an agent to pick channels in the Parihaka 3D volume, as well as some other automatic interpretation approaches.

The agent learned something, but in the end it did not prevail. The team learned lots, and did prevail!

This project won the Most Creative Idea award.

Early in training, the learning agent wanders around the image (top left). After an hour of training, the agent tends to stick to the gullies (right).


A new kind of AVO crossplot?

Team ASAP: Per Avseth (Dig), Lucy MacGregor (Rock Solid Images), Lukas Mosser (Imperial), Sandeep Shelke (Emerson), Anders Draege (Equinor), Jostein Heredsvela (DEA), Alessandro Amato del Monte (ENI).

Tech: t-SNE, UMAP, VAE. GitHub repo.

Project: If you were trying to come up with a new approach to AVO analysis, these are the scientists you’d look for. The idea was to reduce the dimensionality of the input traces, using first t-SNE and UMAP, then a VAE. This resulted in a new 2D space in which interesting clusters could be probed, chiefly by processing synthetics with known variations (e.g. in thickness or porosity).

This project won the Best In Show award. Look out for the developments that come from this work!

Top: Illustration of the variational autoencoder, which reduces the input data (top left) into some abstract representation — a crossplot, essentially (top middle) — and can also reconstruct the data, but without the features that did not discriminate between the datasets, effectively reducing noise (top right).

The lower image shows the interpreted crossplot (left) and the implied distribution of rock properties (right).
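If you want to try the first step of this yourself, here's a minimal t-SNE sketch (UMAP and the VAE slot into the same place); the input array is a made-up stand-in for real gathers:

```python
# Sketch of the dimensionality-reduction step, using t-SNE from scikit-learn.
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical stand-in for the real input: 500 AVO traces, 64 samples each.
X = np.random.rand(500, 64)

# Project each trace into two dimensions: the new 'crossplot' space.
embedding = TSNE(n_components=2, perplexity=30).fit_transform(X)
# embedding has shape (500, 2); clusters in it can be probed with synthetics
# of known thickness or porosity.
```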


Acquiring seismic with crayons

Team: Jesper Dramsch (Technical University of Denmark), Thilo Wrona (University of Bergen), Victor Aare (Schlumberger), Arno Lettman (DEA), Alf Veland (NPD).

Tech: pix2pix GAN (TensorFlow). GitHub repo.

Project: Not everything that looks like a toy is a toy. The team spent a few hours drawing cartoons of small seismic sections, then re-trained the pix2pix GAN on them. The result — an app (try it!) that turns sketches into seismic!

This project won the People’s Choice award.


A sketch of a salt diapir penetrating geological layers (left) and the inferred seismic expression, generated by the neural network. In principle, the model could also be trained to work in the other direction.


Extracting show depths and confidence from PDFs

Team: Florian Basier (Emerson), Jesse Lord (Kadme), Chris Olsen (ConocoPhillips), Anne Estoppey (student), Kaouther Hadji (Accenture).

Project: A couple of decades ago, the last great digital revolution gave us PDFs. Lots of PDFs. But these pseudodigital documents still need to be wrangled into Proper Data. This team took on that project, trying in particular to extract both the depth of a show, and the confidence in its identification, from well reports.

This project won the Best Presentation award.


Kaouther Hadji (left), Florian Basier, Jesse Lord, and Anne Estoppey (right).


Grain size and structure from core images

Team: Eirik Time, Xiaopeng Liao, Fahad Dilib (all Equinor), Nathan Jones (California Resources Corp), Steve Braun (ExxonMobil), Silje Moeller (Cegal).

Tech: sklearn, skimage, fast.ai. GitHub repo.

Project: One of the many teams composed of professionals from all over the industry — it’s amazing to see this kind of collaboration. The team did a great job of breaking the problem down, going after what they could and getting some decent results. An epic task, but so many interesting avenues — we need more teams on these problems!

The pipeline was as ambitious as it looks. But this is a hard problem that will take some time to get good at. Kudos to this team for starting to dig into it and for making amazing progress in just 2 days.


Learning geological age from bugs

Team: David Wade (Equinor), Per Olav Svendsen (Equinor), Bjoern Harald Fotland (Schlumberger), Tore Aadland (University of Bergen), Christopher Rege (Cegal).

Tech: scikit-learn (random forest). GitHub repo.

Project: The team used DEX files from five wells from the recently released Volve dataset from Equinor. The goal was to learn to predict geological age from biostratigraphic species counts. They made substantial progress — and highlighted what a great resource Volve will be as the community explores it and publishes results like these.

David Wade and Per Olav Svendsen of Equinor (top), and some results (bottom).
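The shape of the workflow was probably something like this sketch. The data here is random and hypothetical; the team's repo has the real thing:

```python
# A guess at the workflow: predict an age label from species counts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
counts = rng.poisson(3, size=(300, 40))  # hypothetical: 300 samples, 40 species
ages = rng.integers(0, 5, size=300)      # hypothetical: five age classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, counts, ages, cv=5).mean())
```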


Lost in 4D space!

Team: Andres Hatloey, Doug Hakkarinen, Mike Brhlik (all ConocoPhillips), Espen Knudsen, Raul Kist, Robin Chalmers (all Cegal), Einar Kjos (AkerBP).

Tech: scikit-learn (random forest regressor). GitHub repo.

Project: Another cross-industry collaboration. In their own words, the team set out to “identify trends between 4D seismic and well measurements in order to calculate reservoir pressures and/or thickness between well control”. They were motivated by real data from Valhall, and did a great job making sense of a lot of real-world data. One nice innovation: using the seismic quality as a weighting factor to try to understand the role of uncertainty. See the team’s presentation.
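That weighting idea is easy to express in scikit-learn, which accepts per-sample weights at fit time. Here's a minimal sketch with invented variables, not the team's code:

```python
# Pass seismic quality as sample_weight so that low-quality observations
# count for less during training. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))              # hypothetical 4D seismic attributes
pressure = X @ np.array([3.0, -1.0, 0.5, 2.0]) + rng.normal(size=200)
quality = rng.uniform(0.1, 1.0, size=200)  # hypothetical seismic quality, 0-1

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X, pressure, sample_weight=quality)  # poor data counts for less
```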


Clustering reveals patterns in 4D maps

Team: Tetyana Kholodna, Simon Stavland, Nithya Mohan, Saktipada Maity, Jone Kristoffersen Bakkevig (all CapGemini), Reidar Devold Midtun (ConocoPhillips).

Project: The team worked on real 4D data from an operating field. Reidar provided a lot of maps computed with multiple seismic attributes. Groups of maps represent different reservoir layers, and thirteen different time-lapse acquisitions. So… a lot of maps. The team attempted to correlate 4D effects across all of these dimensions — attributes, layers, and production time. Reidar, the only geoscientist on a team of data scientists, also provided one of the quotes of the hackathon: “I’m the geophysicist, and I represent the problem”.
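Here's one way such a clustering could be set up; this is a sketch with made-up maps, not the team's actual workflow:

```python
# Treat each map pixel as a sample with a multi-survey time-lapse signature,
# then let k-means find groups of pixels that behave alike.
import numpy as np
from sklearn.cluster import KMeans

n_maps, ny, nx = 13, 100, 100          # e.g. thirteen time-lapse acquisitions
maps = np.random.rand(n_maps, ny, nx)  # hypothetical attribute maps

pixels = maps.reshape(n_maps, -1).T    # shape (ny*nx, n_maps)
labels = KMeans(n_clusters=4, random_state=0).fit_predict(pixels)
label_map = labels.reshape(ny, nx)     # spatial pattern of 4D behaviour
```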


That’s it for the FORCE Hackathon for 2018. I daresay there may be more in the coming months and years. If they can build on what we started last week, I think more remarkable things are on the way!



One more thing…

I mentioned the UK hackathons last time, but I went and forgot to include the links to the events. So here they are again, in case you couldn’t find them online…

What are you waiting for? Get signed up and tell your friends!

Machine learning goes mainstream

At our first machine-learning-themed hackathon, in New Orleans in 2015, we had fifteen hackers. Times were hard in the industry. Few were willing or able to come out and play. Well, it’s now clear that times have changed! After two epic ML hacks last year (in Paris and Houston), at which we hosted about 115 scientists, it’s clear this year is continuing the trend. Indeed, by the end of 2018 we expect to have welcomed at least 240 more digital scientists to hackathons in the US and Europe.

Conclusion: something remarkable is happening in our field.

The FORCE hackathon

Last Tuesday and Wednesday, Agile co-organized the FORCE Machine Learning Hackathon in Stavanger, Norway. FORCE is a cross-industry geoscience organization, coordinating meetings and research in the subsurface. The event preceded a 1-day symposium on the same theme: machine learning in geoscience. And it was spectacular.

Get a flavour of the spectacularness in Alessandro Amato’s beautiful photographs.

Fifty geoscientists and engineers spent two days at the Norwegian Petroleum Directorate (NPD) in Stavanger. Our hosts were welcoming, accommodating, and generous with the waffles. As usual, we gently nudged the participants into teams, and encouraged them to define projects and find data to work on. It always amazes me how smoothly this potentially daunting task goes; I think this says something about the purposefulness and resourcefulness of our community.

Here’s a quick run-down of the projects:

  • Biostrat! Geological ages from species counts.

  • Lost in 4D Space. Pressure drawdown prediction.

  • Virtual Metering. Predicting wellhead pressure in real time.

  • 300 Wells. Extracting shows and uncertainty from well reports.

  • AVO ML. Unsupervised machine learning for more geological AVO.

  • Core Images. Grain size and lithology from core photos.

  • 4D Layers. Classification engine for 4D seismic data.

  • Gully Attack. Strat trap picking with deep reinforcement learning.

  • sketch2seis. Turning geological cartoons into seismic with pix2pix.

I will do a complete review of the projects in the coming few days, but notice the diversity here. Five of the projects straddle geological topics, and five are geophysical. Two or three involve petroleum engineering issues, while two or three move into sed/strat. We saw natural language processing. We saw random forests. We saw GANs, VAEs, and deep reinforcement learning. In terms of input data, we saw core photos, PDF reports, synthetic seismograms, real-time production data, and hastily assembled label sets. In short — we saw everything.

Takk skal du ha

Many thanks to everyone that helped the event come together:

  • Peter Bormann, the mastermind behind the symposium, was instrumental in making the hackathon happen.

  • Grete Block Vargle (AkerBP) and Pernille Hammernes (Equinor) kept everyone organized and inspired.

  • Tone Helene Mydland (NPD) and Soelvi Amundrud (NPD) made sure everything was logistically honed.

  • Eva Halland (NPD) supported the event throughout and helped with the judging.

  • Alessandro Amato del Monte (Eni) took some fantastic photos — as seen in this post.

  • Diego Castaneda and Rob Leckenby helped me on the Agile side of things, and helped several teams.

And a huge thank you to the sponsors of the event — too many to name, but here they all are:

[Image: the sponsors’ logos.]

There’s more to come!

If you’re reading this thinking, “I’d love to go to a geoscience hackathon”, and you happen to live in or near the UK, you’re in luck! There are two machine learning geoscience hackathons coming up this fall.

Don’t miss out! Get signed up and we’ll see you there.

How good is what?

Geology is a descriptive science, which is to say, geologists are label-makers. We record observations by assigning labels to data. Labels can either be numbers or they can be words. As such, of the numerous tasks that machine learning is fit for attacking, supervised classification problems are perhaps the most accessible – the most intuitive – for geoscientists. Take data that already has labels. Build a model that learns the relationships between the data and labels. Use that model to make labels for new data. The concept is the same whether a geologist or an algorithm is doing it, and in both cases we want to test how good our classifier is at its label-making.

[Figure: a two-class classifier separating dolomite (purple) from sandstone (orange) in gamma-ray vs density space.]

Say we have a classifier that will tell us whether a given combination of rock properties is either a dolomite (purple) or a sandstone (orange). Our classifier could be a person named Sally, who has seen a lot of rocks, or it could be a statistical model trained on a lot of rocks (e.g. this one on the right). For the sake of illustration, say we only have two tools to measure our rocks – that will make visualizing things easier. Maybe we have the gamma-ray tool that measures natural radioactivity, and the density tool that measures bulk density. Give these two measurements to our classifier, and it returns a label.

How good is my classifier?

Once you've trained your classifier – you've done the machine learning and all that – you've got yourself an automatic label maker. But that's not even the best part. The best part is that we get to analyze our system and get a handle on how good we can expect our predictions to be. We do this by seeing if the classifier returns the correct labels for samples that it has never seen before, using a dataset for which we know the labels. This dataset is called validation data.

Using the validation data, we can generate a suite of statistical scores to tell us unambiguously how this particular classifier is performing. In scikit-learn, this information is compiled into a so-called classification report, and it’s available to you with a few simple lines of code. It’s a window into the behaviour of the classifier that warrants deeper inquiry.
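Those few lines look something like this, assuming y_true holds the known validation labels and y_pred holds the classifier's predictions for the same samples (the five labels below are made up for illustration):

```python
# Generate a per-class precision/recall/F1/support table in one call.
from sklearn.metrics import classification_report

y_true = ['sandstone', 'dolomite', 'sandstone', 'dolomite', 'sandstone']
y_pred = ['sandstone', 'sandstone', 'sandstone', 'dolomite', 'dolomite']

print(classification_report(y_true, y_pred))
```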

To describe various elements in a classification report, it will be helpful to refer to some validation data:


Our Two-class Classifier (left) has not seen the Validation Data (middle). We can calculate a classification report by analyzing the intersection of the two (right).

Accuracy is not enough

When people straight up ask about a model’s accuracy, it could be that they aren't thinking deeply enough about the performance of the classifier. Accuracy is a measure of the entire classifier. It tells us nothing about how well we are doing with one class compared to another, but there are other metrics that tell us this:


Support — how many instances there were of that label in the validation set.

Precision — the fraction of correct predictions for a given label. Also known as positive predictive value.

Recall — the proportion of the class that we correctly predicted. Also known as sensitivity.

F1 score — the harmonic mean of precision and recall. It's a combined metric for each class.

Accuracy — the total fraction of correct predictions for all classes. You can calculate this for each class, but it will be the same value for each class.

DIY classification report

If you're like me and you find the grammar of true positives and false negatives confusing, it might help to treat each class within the classifier as its own mini diagnostic test, and build up data for the classification report row by row. Then it's as simple as counting hits and misses from the validation data and computing some fractions. Inspired by this diagram on the Wikipedia page for the F1 score, I've given both text and pictorial versions of the equations:

[Worksheet: text and pictorial versions of the equations, with space to score the dolomite and sandstone classes.]
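If you'd rather count in code than on paper, here is the same row-by-row procedure in a few lines of Python, using the same toy labels as the example above:

```python
# Score one class, say dolomite, by brute-force counting.
y_true = ['sandstone', 'dolomite', 'sandstone', 'dolomite', 'sandstone']
y_pred = ['sandstone', 'sandstone', 'sandstone', 'dolomite', 'dolomite']

cls = 'dolomite'
tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))  # hits
fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))  # false alarms
fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))  # misses

support = tp + fn
precision = tp / (tp + fp)              # 1/2 in this toy example
recall = tp / (tp + fn)                 # 1/2 as well
f1 = 2 * precision * recall / (precision + recall)
```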

Have a go at filling in the scores for the two classes above. After that, copy your answers into your own hand-drawn version of the empty table below. Notice that there is only a single accuracy score for the entire classifier, and that there may be a richer story among the various other scores in the table. Do you want to optimize accuracy overall? Or perhaps you care about maximizing recall in one class above all else? What matters most to you? Should you penalize some mistakes more heavily than others?

[Figure: an empty classification report table, ready to fill in.]

When data sets get larger, either in the number of samples or in the dimensionality of the data, this scoring-by-hand technique becomes impractical, but the implementation stays the same. In classification problems with more than two classes, we can add a confusion matrix to our reporting, which is something that deserves a whole other post.

If you were to ask Sally the stratigrapher, "How accurate are your facies?" just as she finished logging a slab of core, she might dismiss your inquiry outright, or point to some samples she's not completely confident in. Or she might tell you that she was extra diligent in the transition zones, or point to regions of very sandy sand, or very hydrothermally altered rock. Sadly, we in geoscience – emphasis on the science – seldom take the extra steps to test and report our own performance. But we totally could.


The ANSWERS. Upside Down. To two Decimal places.

Are there benefits to pseudoscience?

No, of course there aren't. 


Balance! The scourge of modern news. CC-BY by SkepticalScience.com

Unless... unless you're a journalist, perhaps. Then a bit of pseudoscience can provide some much-needed balance — just to be fair! — to the monotonous barrage of boring old scientific consensus. Now you can write stories about flat-earthers, anti-vaxxers, homeopathy, or the benefits of climate change!*

So far, so good. It's fun to pillory the dimwits who think the moon landings were filmed in a studio in Utah, or that humans have had no impact on Earth's climate. The important thing is for the journalist to have a clear and unequivocal opinion about it. If an article doesn't make it clear that the deluded people at the flat-earth convention ("Hey, everyone thought Copernicus was mad!") have formed their opinions in spite of, not because of, the overwhelming evidence before them, then readers might think the journalist — and the publisher — agree with them.

In other words, if you report on hogwash, then you had better say that it's hogwash, or you end up looking like one of the washers of the hog.


Fake geoscience?

AAPG found this out recently, when the August issue of its Explorer magazine published an article by Ken Milam called Are there benefits to climate change? Ken was reporting on a talk by AAPG member Greg Wrightstone at URTeC in July. Greg wrote a book called Inconvenient Facts: The Science That Al Gore Doesn't Want You To Know. The gist: no need to be concerned about carbon dioxide because, "The U.S. Navy’s submarines often exceed 8,000 ppm (20 times current levels) and there is no danger to our sailors" — surely some of the least watertight reasoning I've ever encountered. Greg's basic idea is that, since the earth has been warmer before, with higher levels of CO2, there's nothing to worry about today (those Cretaceous conurbations and Silurian civilizations had no trouble adapting!) So he thinks, "the correct policy to address climate change is to have the courage to do nothing".

So far, so good. Except that Ken — in reporting 'just the facts' — didn't mention that Greg's talk was full of half-truths and inaccuracies, and that few earth scientists agree with him. He forgot to remark upon the real news story: how worrying it is that URTeC 2018 put on a breakfast promoting Greg and his marginal views. He omitted to point out that this industry needs to grow up and face the future with responsibility, supporting society with sound geoscience.

So it looked a bit like Explorer and AAPG were contributing to the washing of this particular hog.


Discussion

As you might expect, there was some discussion about the article — both on aapg.org and on Twitter (and probably elsewhere). For example, Mark Tingay (University of Adelaide) called AAPG and SPE out on Twitter, as did Brian Romans (Virginia Tech).

And there was further discussion (sort of) involving Greg Wrightstone himself. Trawl through Mark Tingay's timeline, especially his systematic dismantling of Greg's 'evidence', if your curiosity gets the better of you.


Response

Of course AAPG noticed the commotion. The September issue of Explorer contains two statements from AAPG staff. David Curtiss, AAPG Executive Director, said this in his column:

Milam was assigned to report on an invited presentation by Greg Wrightstone, a past president of AAPG’s Eastern Section, based on a recently self-published book on climate change, at the Unconventional Resources Technology Conference in July. Here was an AAPG Member and past section officer speaking about climate change – an issue of interest to many of our members, who had been invited by a group of his geoscience and engineering peers to present at a topical breakfast – not a technical session – at a major conference.

This sounds fine, on the face of it, but details matter. A glance at the book in question should have been enough to indicate that the content of the talk could only have been presented in a non-technical session, with a side of hash browns.

Anyway, David does go on to point out the tension between the petroleum industry's activities and society's environmental concerns. The tension is real, and AAPG and its members are in the middle of it. We can contribute scientifically to the conversations that need to happen to resolve that tension. But pushing junk science and polemical bluster is definitely not going to help. I believe that most of the officers and members of AAPG agree.

The editor of Explorer, Brian Ervin, had this to say:

For the record, none of our coverage of any issue or any given perspective on an issue should be taken as an endorsement — explicit or implicit — of that perspective. Also, the EXPLORER is — quite emphatically — not a scientific journal. Our content is not peer-reviewed. [...] No, the EXPLORER exists for an entirely different purpose. We provide news about Earth science, the industry and the Association, so our mission is different and unrelated to that of a scientific publication.

He goes on to say that he knew that Wrightstone's views are not popular and that it would provoke some reaction, but wanted to present it impartially and "give [readers] the opportunity to evaluate his position for themselves".

I just hope Explorer doesn't start doing this with too many other marginal opinions.


I'd have preferred to see AAPG back-pedal a bit more energetically. Publishing this article was a mistake. AAPG needs to think about the purpose, and influence, of its reporting, as well as its stance on climate change (which, according to David Curtiss, hasn't been discussed substantially in more than 10 years). This isn't about pushing agendas, any more than talking about the moon landings is about pushing agendas. It's about being a modern scientific association with high aspirations for itself, its members, and society.

What is a sprint?

In October we're hosting our first 'code sprint'! What is that?

A code sprint is a type of hackathon, in which efforts are focused around a small number of open source projects. They are related to, but not really the same as, sprints in the Scrum software development framework. They are non-competitive — the only goal is to improve the software in question, whether it's adding functionality, fixing bugs, writing tests, improving documentation, or doing any of the other countless things that good software needs. 

On 13 and 14 October, we'll be hacking on 3 projects:

  • Devito: a high-level finite difference library for Python. Devito featured in three Geophysical Tutorials at the end of 2017 and beginning of 2018 (see Witte et al. for Part 3). The project needs help with code, tests, model examples, and documentation. There will be core devs from the project at the sprint. GitHub repo is here. (There's a tiny taste of the devito style in the sketch after this list.)
  • Bruges: a simple collection of Python functions representing basic geophysical equations. We built this library back in 2015, and have been chipping away ever since. It needs more equations, better docs, and better tests — and the project is basic enough for anyone to contribute to it, even a total Python newbie. GitHub repo is here.
  • G3.js: a JavaScript wrapper for D3.js, a popular plotting toolkit for web developers. When we tried to adapt D3.js to geoscience data, we found we wanted to simplify basic tasks like making vertical plots, and plotting raster-like data (e.g. seismic) with line plots on top (e.g. horizons). Experience with JavaScript is a must. GitHub repo is here.
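To show what hacking on devito might feel like, here is a tiny diffusion example in the style of the project's own tutorials. Treat it as a sketch: the API details may have moved on since this was written.

```python
# Solve the 2D diffusion equation with devito's symbolic machinery.
from devito import Grid, TimeFunction, Eq, Operator, solve

grid = Grid(shape=(100, 100), extent=(1., 1.))
u = TimeFunction(name='u', grid=grid, space_order=2)

# State the equation symbolically...
eqn = Eq(u.dt, 0.5 * u.laplace)

# ...rearrange for the forward timestep, and let devito generate
# and run optimized C code for the resulting stencil.
op = Operator(Eq(u.forward, solve(eqn, u.forward)))
op.apply(time=100, dt=1e-5)
```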

The sprint will be at a small joint called MAZ Café Con Leche, located in Santa Ana, about 10 km or 15 minutes from the Anaheim Convention Center, where the SEG Annual Meeting is happening the following week.

Thank you, as ever, to our fantastic sponsors: Dell EMC and Enthought. These two companies are powered by amazing people doing amazing things. I'm very grateful to them both for being such enthusiastic champions of the change we're working for in our science and our industry. 

If you like the sound of spending the weekend coding, talking geophysics, and enjoying the best coffee in southern California, please join us at the Geophysics Sprint! Register on Eventbrite and we'll see you there.

Get out of the way

This tweet from the Ecological Society of America conference was interesting.

This kind of thing is not new — many conferences have 'No photos' signs around the posters and the talk sessions. 'No tweeting' seems pretty extreme though. I'm not sure if that's what the ESA was pushing for in this case, but either way the message is: 'No sharing stuff'. They do have a hashtag though, so...

Anyway, I tweeted this in response:

I think this tells you just as much about how broken the conference model is, as about how naïve/afraid our technical societies are.

I think there's a general rule: if you're trying to control the flow of information, you're getting in the way. You're also going to be disappointed because you can't control the flow of information — perhaps because it's not yours to control. I want to say to the organizers: The people you invited into your society are, thankfully, enthusiastic collaborators who can't wait to share the exciting things they heard at your conference. Why on earth would you try to shut that down? Why wouldn't you go out of your way to support them, amplify them, and find more people like them?

But wait, the no-tweeting society asks, what if the author didn't want anyone to share their work? My first question is: why did you give a talk then? My second question is: did the sharer give you proper attribution? If not — you are right to be annoyed and your society should help set this norm in your community. If so — see my first question.

Technical societies need to get over the idea that they own their communities and the knowledge their communities produce. They fret about revenue and membership numbers, but they just need to focus on making their members' technical and professional lives richer and more connected. The rest will take care of itself.


Interested in this topic? Here's a great post about tweeting at conferences, by Jacquelyn Gill. It also links to lots of other opinions, and there are lots of comments.

Image by Rob Salguero-Gómez.

Life lessons from a neural network

The latest Geophysical Tutorial came out this week in The Leading Edge. It's by my friend Gram Ganssle, and it's about neural networks. Although the example in the article is not, strictly speaking, a deep net (it only has one hidden layer), it concisely illustrates many of the features of deep learning.

Whilst editing the article, it struck me that some of the features of deep learning are really features of life. Maybe humans can learn a few life lessons from neural networks! 

Seek nonlinearity

Activation functions are one of the most important ingredients in a neural network. They are the reason neural nets are able to learn complex, nonlinear relationships without a gigantic number of parameters.
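Here's the idea in a few lines of numpy: stack two layers without an activation and you still have a single linear map; add one and you don't. (A toy sketch, not a real network.)

```python
# Why nonlinearity matters: composing linear layers is still linear.
import numpy as np

relu = lambda z: np.maximum(0, z)   # a popular activation function

x = np.random.rand(3)
W1, W2 = np.random.rand(4, 3), np.random.rand(2, 4)

linear = W2 @ (W1 @ x)          # no activation: collapses to one linear map
nonlinear = W2 @ relu(W1 @ x)   # the nonlinearity is what makes depth pay off
```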

Life lesson: look for nonlinearities in your life. Go to an event aimed at another profession. Take a new route to work. Buy a random volume at your local bookshop. Pick that ice-cream flavour you've never dared try (durian, anyone?).

Iterate

Neural networks learn by repetition. They start with random guesses about what might work, then they process each data point a hundred, maybe a hundred thousand, times: check the answer, adjust the weights, get a little better each time.
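That loop, in miniature: a toy gradient descent fitting a single weight, nothing to do with any particular network.

```python
# Guess, check, adjust, repeat: fit w so that y ≈ w * x.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)  # the 'truth' to learn: w = 3

w, lr = rng.normal(), 0.1                  # start with a random guess
for _ in range(100):                       # then iterate:
    grad = 2 * np.mean((w * x - y) * x)    # check the answer...
    w -= lr * grad                         # ...and adjust the weight a little
print(round(w, 2))                         # ends up very close to 3.0
```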

Life lesson: practice makes perfect. You won't get anything right the first time (if you do, celebrate!). The important thing is that you pay attention, figure out what to change, and tweak it. Then try again.

More data

One of the things we know for sure about neural networks is that they work best when they train on a lot of data. They need to see as much of the problem domain as possible, including the edge cases and the worst cases.

Life lesson: seek data. If you're a geologist, get out into the field and see more rocks. Geophysicists: look at more seismic. Whoever you are, read more. Afterwards, share what you find with others, and listen to what they have learned.

Stretch metaphors

Yes, well, I could probably go on. Convolutional networks teach us to create new things by mixing ideas from different parts of our experience. Long training times for neural nets teach us to be patient, and invest in GPUs. Hidden layers with many units teach us to... er, expect a lot of parameters in our lives...?

Anyway, the point is that life is like a neural net. Or maybe, no less interestingly, neural nets are like life. My impression is that most of the innovations in deep learning have come from people looking at their own interpretive and discriminatory powers and asking, "What do I do here? How do I make these decisions?" — and then trying to approximate that heuristic or thought process in code.

What's the lesson here? I have no idea. Enjoy your weekend!


Thumbnail image by Flickr user latteda, licensed CC-BY. The Leading Edge cover is copyright of SEG, used here under fair-use terms.