Abstract horror
This isn't really a horror story, more of a Grimm fairy tale. Still, I thought it worthy of a Hallowe'eny title.
I've been reviewing abstracts for the 2018 AAPG annual convention. It's fun, because you get to read about new research months ahead of the rest of the world. But it's also not fun because... well, most abstracts aren't that great. I have no idea what proportion of abstracts the conference accepts, but I hope it's not much above 50%. (There was some speculation at SEG that there are so many talks now — 18 parallel sessions! — because giving a talk is the only way for many people to get permission to travel to the conference. I hope this isn't true.)
Some of the abstracts were great; at least 1 in 4 was better than 'good'. So what's wrong with the others? Here are the three main issues I saw:
- Lots of abstracts were uninteresting.
- Even more of them were vague.
- Almost all of them were about unreproducible research.
Let's look at each of these in turn and ask what we can do about it.
Uninteresting
Let's face it, not all research is interesting research. That's OK — it might still be useful or otherwise important. I think you can still write an interesting abstract about it. Here are some tips:
- Don't be vague! Details are interesting. See the next section.
- Break things up a bit. Use at least 2 paragraphs, maybe 3 or 4. Maybe a list or two.
- Use natural, everyday language. Try reading your abstract aloud.
- In the first sentence, tell me why I should come to your talk or visit your poster.
Vague
I scribbled 'Vague' on nearly every abstract. In almost every case, either the method or the results, and usually both, were described in woolly language. For example (this is not a direct quote, but paraphrased):
Machine learning was used to predict the reservoir quality in most of the wells in the area, using millions of training examples and getting good results. The inputs were wireline log data from nearby wells.
This is useless information — which algorithm? How did you optimize it? How much training data did you have, and how many data instances did you validate against? How many features did you use? What kind of validation did you do, and what scores did you achieve? Which competing methods did you compare with? Use numbers, be specific:
We used a 9-dimensional support vector machine, implemented in scikit-learn, to model the permeability. With over 3 million training examples from logs in 150 nearby wells in the training set, and 1 million in cross-validation, we achieved an F1 score of 0.75 or more in 18 of the 20 wells.
A roughly 50% increase in the number of words, but an ∞% increase in the information content.
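Incidentally, that level of specificity isn't hard to back up. Here's a minimal sketch, in Python with scikit-learn, of the kind of cross-validated F1 measurement the second version reports. Everything in it — the synthetic 'logs', the 9 features, the labels — is invented for illustration; it is not anyone's actual workflow:

```python
# A minimal sketch of a cross-validated classification workflow in scikit-learn.
# All data and numbers are synthetic, purely to show where specific figures
# (feature count, folds, F1 score) come from.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Pretend wireline-log features (GR, RHOB, NPHI, ...) and a binary
# reservoir-quality label for each depth sample.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 9))   # 5000 samples, 9 log-derived features
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000)) > 0).astype(int)

# Scale the logs, then fit a support vector classifier.
model = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))

# 5-fold cross-validated F1 score: the kind of number an abstract can report.
scores = cross_val_score(model, X, y, cv=5, scoring='f1')
print(f"F1: {scores.mean():.2f} ± {scores.std():.2f}")
```

Reporting the mean and spread of those fold scores, along with the algorithm and the software, is exactly what turns a vague abstract into a specific one.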
Unreproducible
Maybe I'm being unfair on this one, because I can't really tell if something is going to be reproducible or not from an abstract... or can I?
I'd venture to say that, if the formations are called A, B, C, and D, and the wells are called 1, 2, 3, and 4, then I'm pretty sure I'm not going to find out much about your research. (I had a long debate with someone in Houston recently about whether this sort of thing even qualifies as science.)
So what can you do to make a more useful abstract?
- Name your methods and algorithms. Where did they come from? Which other work did you build on?
- Name the dataset and tell me where it came from. Don't obfuscate the details — they're what make you interesting! Share as much of the data as you can.
- Name the software you're using. If it's open source, naming it is the least you can do. If it's not open source, it's not reproducible, but I'd still like to know how you're doing what you do.
I realize not everyone is in a position to do 100% reproducible research, but you can aim for something over 50%. If your work really is top secret (<50% reproducible), then you might think twice about sharing your work at conferences, since no-one can really learn anything from you. Ask yourself if your paper is really just an advertisement.
So what does a good abstract look like?
Well, I do like this one-word abstract from Gardner & Knopoff (1974), in the Bulletin of the Seismological Society of America:
Is the sequence of earthquakes in Southern California, with aftershocks removed, Poissonian?
Yes.
A classic, but I'm not sure it would get your paper accepted at a conference. I don't collect awesome abstracts — maybe I should — but here are some papers with great abstracts that caught my interest recently:
- Dean, T (2017). The seismic signature of rain. Geophysics 82 (5). The title is great too; what curious person could resist this paper?
- Durkin, P et al. (2017) on their beautiful McMurray Fm interpretation in JSR 87 (10). It could arguably be improved by a snappier first sentence that gives the punchline of the paper.
- Doughty-Jones, G, et al (2017) in AAPG Bulletin 101 (11). There's maybe a bit of an assumption that the reader cares about intraslope minibasins, but the abstract has meat.
Becoming a better abstracter
The number one way to improve as a writer is probably to ask other people — friendly but critical ones — for honest feedback. So start there.
As I mentioned in my post More on brevity way back in March 2011, you should probably read Landes (1966) once every couple of years:
Landes, K (1966). A scrutiny of the abstract II. AAPG Bulletin 50 (9). Available online. (An update of his original 1951 piece, A scrutiny of the abstract, AAPG Bulletin 35 (7).)
There's also this plea from geophysicist Paul Lowman to stop turning abstracts into introductions:
Lowman, P (1988). The abstract rescrutinized. Geology 16 (12). Available online.
Give those a read — they are very short — and maybe pay extra attention to the next dozen or so abstracts you read. Do they tell you what you need to know? Are they either useful or interesting? Do they paint a vivid picture? Or are they too... abstract?