The new open geophysics tools

The hackathon in Denver was more than 6 weeks ago. I kept thinking, "Oh, I must post a review of what went down" (beyond the quick wrap-up I did at the time), but while I'm a firm believer in procrastination, six weeks seems unreasonable... Maybe it's taken this long to scrub down to the lasting lessons. Before those, I want to tell you who the teams were, what they did, and where you can find their (100% open source!) stuff. Enjoy!

Geophys Wiz

Andrew Pethick, Josh Poirier, Colton Kohnke, Katerina Gonzales, and Elijah Thomas — GitHub repo

This team had no trouble coming up with ideas — perhaps a reflection of their composition, which was more heterogeneous than the other teams. Josh is at NEOS, the consulting and software firm, and Andrew is a postdoc at Curtin in Perth, Australia, while the other 3 are students at Mines. The team eventually settled on building MT Black Box, a magnetotellurics modeling web application. 

Last thing: Don't miss Andrew Pethick's write-up of the event. 

Seemingly Concerned Neighbours

Elias Arias, Brent Putman, Thomas Rapstine, and Gabriel Martinez — GitHub repo

These four young geophysicists from the Colorado School of Mines impressed everyone with their work ethic. Their tight-knit team came in with a plan, and proceeded to scribble up the coolest-looking whiteboard of the weekend. After learning some Android development skills 'earlier this week', they pulled together a great little app for forward modeling magnetotelluric responses. 


Well tie guys

Michaël Montouchet, Graham Dawes, Mark Roberts

It was terrific to have pro coders Graham and Michaël with us — they flew from the UK to be with us, thanks to their employer and generous sponsor ffA GeoTeric. They hooked up with Mark, a Denver geophysicist and developer, and hacked on a well-tie web application, rightly identifying a gap in the open source market, so to speak (there is precious little out there for well-based workflows). They may have bitten off more than they could chew in just 2 days, so I hope we can get together with them again to finish it off. Who's up for a European hackathon? 

These two characters from UBC didn't get going till Sunday morning, but in just five hours they built a sweet web app for forward modeling the DC resistivity response of a buried disk. They weren't starting from scratch, because Rowan and others have spent months honing SimPEG, a rich open-source geophysical library, but minds were nonetheless blown.

Key takeaway: interactivity beyond sliders for the win.

Pick This!

Ben Bougher, Jacob Foshee, Evan Bianco, and an immiscible mixture of Chris Chalcraft and me — GitHub repo

Wouldn't you sometimes like to know how other people would interpret the section you're working on? This team, a reprise of the dream team from Houston in 2013, built a simple way to share images and invite others to interpret them. When someone has completed their interpretation, only then do they get to see the ensemble — everyone else's interpretations — in a heatmap. Not only did this team demo live software at pickthis.io, but the audience provided the first crowdsourced picks in real time. 
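For the curious, here's a minimal sketch of the aggregation idea, not the Pick This codebase itself: each interpreter's picks are rasterized onto the image grid, summed, and lightly smoothed so the consensus shows up as a heatmap. The pick coordinates and blur width below are invented for illustration.

    # Toy sketch of the aggregation idea, not the Pick This code: rasterize each
    # interpreter's picks onto the image grid, sum, and blur so that nearby but
    # non-identical picks reinforce each other.
    import numpy as np

    def picks_to_heatmap(picks, shape, blur_px=2):
        """picks: list of (rows, cols) integer arrays, one per interpreter."""
        heat = np.zeros(shape)
        for rows, cols in picks:
            heat[rows, cols] += 1                       # one vote per pixel
        kernel = np.ones(2 * blur_px + 1) / (2 * blur_px + 1)
        for axis in (0, 1):                             # cheap separable box blur
            heat = np.apply_along_axis(np.convolve, axis, heat, kernel, mode='same')
        return heat

    # Three hypothetical interpreters picking roughly the same event on a 100 x 200 image.
    cols = np.arange(200)
    picks = [(np.clip(50 + cols // 20 + jitter, 0, 99), cols) for jitter in (-2, 0, 3)]
    heatmap = picks_to_heatmap(picks, (100, 200))
    print(heatmap.shape, heatmap.max())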

We'll be blogging more about Pick This soon. We're actively seeking ideas, images, interpreters, and financial support. Keep an eye out.

What I learned at this hackathon

  • Potential fields are an actual thing! OK, kidding, but three out of five teams built potential field modeling tools. I wasn't expecting that, and I think the judges were impressed at the breadth. 
  • 30 hours is easily enough time to build something pretty cool. Heck, 5 hours is enough if you're made of the right stuff. 
  • Students can happily build prototypes alongside professional developers, and even teach them a thing or two. And vice versa. Are hackathons a leveller of playing fields?
  • We need to remove the roadblocks to more people enjoying this event. To help with this, next time there will be a 1-day bootcamp before the hackathon.
  • After virtually doubling in size from 2013 to 2014, it's clear that the 2015 Hackathon in New Orleans is going to be awesome! Mark your calendar: 17 and 18 October 2015.

Thank you!

Thank you to the creative, energetic geophysicists that came. It was a privilege to meet and hack with you!

Thank you to the judges who gave up their Sunday teatime to watch the demos and give precious feedback to the teams: Steve Adcock, Jamie Allison, Maitri Erwin, Dennis Cooke, Chris Krohn, Shannon Bjarnason, David Holmes, and Tracy Stark. Amazing people, one and all.

A final Thank You to our sponsors — dGB Earth Sciences, ffA GeoTeric, and OpenGeoSolutions. You guys are totally awesome! Seriously.


It's the GGGG (giant geoscience gift guide)

I expect you've been wondering what to get me and Evan for Christmas. Wonder no more! Or, if you aren't that into Agile, I suppose other geoscientists might even like some of this stuff. If you're feeling more needy than generous, just leave this post up on a computer where people who love you will definitely see it, or print it out and mail it to everyone you know with prominent red arrows pointing to the things you like best. That's what I do.

Geology in the home


Art!

Museums and trips and stuff

Image is CC-BY by Greg Westfall on Flickr


Geo-apparel

Blimey... books!

Who over the age of 21 or maybe 30 doesn't love getting books for Christmas? I don't!... not love it. Er, anyway, here are some great reads!

  • How about 156 things for the price of three? Yeah, that is a deal.
  • They're not geological but my two favourite books of the year were highly geeky — What If? by Randall Munroe and Cool Tools by Kevin Kelly.
  • Let's face it, you're going to get books for the kids in your life too (I hope). You can't do better than Jon Tennant's Excavate! Dinosaurs.
  • You're gonna need some bookends for all these books.

Still stuck? Come on!


All of the smaller images in this post are copyright of their respective owners, and I'm hoping they don't mind me using them to help sell their stuff.

Update on 2014-12-12 01:41 by Matt Hall
In case you're still struggling, Evelyn Mervine has posted her annual list over on the AGU Blogosphere. If you find any more geo-inspired gift lists, or have ideas for others, please drop them in the comments.

Neglected near-surface workhorses

Yesterday afternoon, I attended a talk at Dalhousie by Peter Cary who has begun the CSEG distinguished lecture tour series. Peter's work is well known in the seismic processing world, and he's now spreading his insights to the broader geoscience community. This was only his fourth stop out of 26 on the tour, so there's plenty of time to catch it.

Three steps of seismic processing

In the head-spinning jargon of seismic processing, if you're lost, it's probably not your fault. Sometimes it might even seem like you're going in circles.

Ask the vendor or processing specialist first to keep it simple, and second to tell you which of the three processing stages you are in. Seismic data processing has three steps:

  • Attenuate all types of noise.
  • Remove the effects of the near surface.
  • Migrate the data, a step sometimes called imaging.

If time migration is the workhorse of seismic processing, and fk filtering (or f–anything filtering) is the workhorse of noise attenuation, then surface-consistent deconvolution is the workhorse of the near surface. These topics aren't as sexy or as new as FWI or compressed sensing, but Peter has been questioning the basics of surface-consistent scaling, and the approximations we make when processing land seismic data. 
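To make 'surface-consistent' a little more concrete, here's a toy sketch of the usual decomposition (my illustration, not Peter's algorithm): model each trace's log-amplitude as the sum of a shot term and a receiver term, then recover those terms by least squares. All the numbers are made up.

    # Toy surface-consistent decomposition: log-amplitude of trace (i, j) is
    # modelled as shot_term[i] + receiver_term[j], solved by least squares.
    # This is an illustration of the idea, not any production algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    n_shots, n_recs = 20, 30
    true_s = rng.normal(0, 0.3, n_shots)        # shot-side gains (log domain)
    true_r = rng.normal(0, 0.3, n_recs)         # receiver-side gains (log domain)
    log_amp = true_s[:, None] + true_r[None, :] + rng.normal(0, 0.05, (n_shots, n_recs))

    # One row per trace; one column per shot term plus one per receiver term.
    A = np.zeros((n_shots * n_recs, n_shots + n_recs))
    for i in range(n_shots):
        for j in range(n_recs):
            A[i * n_recs + j, i] = 1.0
            A[i * n_recs + j, n_shots + j] = 1.0

    est, *_ = np.linalg.lstsq(A, log_amp.ravel(), rcond=None)
    # The split between shot and receiver terms is only defined up to a constant,
    # so compare the estimates after removing the means.
    print(np.allclose(est[:n_shots] - est[:n_shots].mean(),
                      true_s - true_s.mean(), atol=0.05))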

The ambiguity of phase and travel-time corrections

To the processor, removing the effects of the near surface means making things flat in the CMP domain. It turns out you can do this with travel-time corrections (static shifts), with phase corrections, or with both.

A simple synthetic example showing (a) a gather with surface-consistent statics and phase variations; (b) the same gather after surface-consistent residual statics correction; and (c) after simultaneous surface-consistent statics and phase correction. Image © Cary & Nagarajappa and CSEG.

It's troubling that there is more than one way to achieve flatness. Peter's advice is to use shot stacks and receiver stacks to compare the efficacy of static corrections. They eliminate doubt about whether surface-consistent scaling is working, and are a better QC tool than other data domains.
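Before moving on, here's a toy numerical illustration of the statics-versus-phase ambiguity (my own sketch, nothing from Peter's talk): a bulk time shift is a linear phase ramp in the frequency domain, while a constant phase rotation changes the wavelet's shape and apparent timing in a way that, on band-limited data, can mimic a small static.

    # Toy illustration: a static (time) shift versus a constant phase rotation
    # applied to the same zero-phase wavelet. Both move the apparent event.
    import numpy as np

    def ricker(f0, dt, n):
        """Zero-phase Ricker wavelet with peak frequency f0 (Hz)."""
        t = (np.arange(n) - n // 2) * dt
        a = (np.pi * f0 * t) ** 2
        return (1 - 2 * a) * np.exp(-a)

    def apply_static(trace, dt, shift_s):
        """Shift a trace by shift_s seconds via a linear phase ramp."""
        f = np.fft.rfftfreq(trace.size, dt)
        spec = np.fft.rfft(trace) * np.exp(-2j * np.pi * f * shift_s)
        return np.fft.irfft(spec, n=trace.size)

    def apply_phase(trace, phi_deg):
        """Rotate the phase of a trace by a constant phi_deg degrees."""
        spec = np.fft.rfft(trace) * np.exp(1j * np.radians(phi_deg))
        return np.fft.irfft(spec, n=trace.size)

    dt, n = 0.002, 256
    w = ricker(30.0, dt, n)
    shifted = apply_static(w, dt, 0.004)     # a 4 ms static
    rotated = apply_phase(w, 45.0)           # a 45 degree phase rotation

    # Both operations move the peak away from the original sample position,
    # so 'flatness' alone cannot tell you which correction is the right one.
    print(np.argmax(w), np.argmax(shifted), np.argmax(rotated))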

Deeper than shallow

It may sound trivial, but the hardest part about using seismic waves for imaging is that they have to travel down and back up through the near surface on their path to the target. It might seem counter-intuitive, but the geometric configurations that work well for the deep earth are not well suited to characterizing the shallow earth, or to correcting for its effects. I can imagine that two surveys could be useful, one for the target and one for characterizing the shallow stuff that gets in the way of the target, but seismic experiments are already expensive enough when there is only one target to be concerned with.

Still, the near surface is something we can't avoid. Much like astronomers using ground-based telescopes shooting for the stars, seismic processors too have to get the noisy stuff that is sitting closest to the detectors out of the way.

Another 52 Things hits the shelves

The new book is out today: 52 Things You Should Know About Palaeontology. Having been up for pre-order in the US, it is now shipping. The book will appear on Amazon sites globally in the next 24 hours or so, perhaps a bit longer for Canada.

I'm very proud of this volume. It shows that 52 Things has legs, and the quality is as high as ever. Euan Clarkson knows a thing or two about fossils and about books, and here's what he thought of it: 

This is sheer delight for the reader, with a great range of short but fascinating articles; serious science but often funny. Altogether brilliant!

Each purchase benefits The Micropalaeontological Society's Educational Trust, a UK charity, for the furthering of postgraduate education in microfossils. You should probably go and buy it now before it runs out. Go on, I'll wait here...

1000 years of fossil obsession

So what's in the book? There's too much variety to describe. Dinosaurs, plants, foraminifera, arthropods — they're all in there. There's a geographical index, as before, and also a chronostratigraphic one. The geography shows some distinct clustering, which partly reflects the emphasis on the science of applied fossil-gazing: biostratigraphy. 

The book has 48 authors, a new record for these collections. It's an honour to work with each of them — their passion, commitment, and professionalism positively shines from the pages. Geologists and fossil nuts alike will recognize many of the names, though some will, I hope, be new to you. As a group, these scientists represent 1000 years of experience!


Amazingly, and completely by chance, it is one year to the day since we announced 52 Things You Should Know About Geology. Sales of that book benefit The AAPG Foundation, so today I am delighted to be sending a cheque for $1280 to them in Tulsa. Thank you to everyone who bought a copy, and of course to the authors of that book for making it happen.

Imaging with vectors

Even though it took way too long (I had been admiring it for quite some time), I recently became the first kid on the block to own a Lytro. The Lytro, if you haven't heard, is sort of like a camera, except that it definitely isn't. Apart from a viewfinder on one end, a piece of glass on the other, and a shutter release button on top, it doesn't really look or feel like a point-and-shoot or SLR either. It actually bears a closer resemblance to a pocket-sized telescope. So don't you dare call it a camera. Indeed, the thing that the Lytro is built to do is what makes it completely different from any camera, and this, perhaps, is the best mark of its identity. It captures not only the intensity of the light rays hitting the sensor (or film), but the directionality of those light rays as well.

So what, right? What does this mean? Why is this interesting? It means that with a light-field camera, the focal point and depth of field are parameters that can be controlled by the viewer. It is interesting because it frees up space, and the physical atoms of hardware, by deliberately removing the motorized auto-focus mechanism and placing that control instead into the capable and powerful hands of software. I find it particularly elegant that this technology was achieved by harnessing light's true nature better than any camera that came before it. A device designed to record light as light is: a physical property defined by both a magnitude and a direction.
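If you want to see how 'intensity plus direction' turns into viewer-controlled focus, here's the textbook shift-and-sum idea for refocusing a 4D light field. This is a generic sketch of the principle, not Lytro's actual processing, and the array below is just random stand-in data.

    # Shift-and-sum refocusing of a 4D light field L(u, v, y, x), where (u, v)
    # index the sub-aperture views (ray direction) and (y, x) the image pixels.
    # A generic illustration of the principle, not Lytro's implementation.
    import numpy as np

    def refocus(lightfield, shift_per_view):
        """Average the views after shifting each one in proportion to its
        angular offset; shift_per_view = 0 reproduces the capture-time focus."""
        nu, nv, ny, nx = lightfield.shape
        out = np.zeros((ny, nx))
        for u in range(nu):
            for v in range(nv):
                dy = int(round((u - nu // 2) * shift_per_view))
                dx = int(round((v - nv // 2) * shift_per_view))
                out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
        return out / (nu * nv)

    lf = np.random.rand(5, 5, 64, 64)         # stand-in: 5 x 5 views of a 64 x 64 scene
    near = refocus(lf, shift_per_view=2.0)    # refocus on one side of the capture plane
    far = refocus(lf, shift_per_view=-2.0)    # ...and the other (sign convention arbitrary)
    print(near.shape, far.shape)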

How do I interact with this picture? 

Normally this would be a weird question to ask, but with the Lytro the viewer can take part in the imaging process in three ways. Try it out on the samples above:

  • Point to focus: collecting the light field from a scene is a technical thing. Creating images by deciding what to focus on, and what not to focus on, is an artistic thing. It is an interpretive thing. It's a narrative that the viewer has with the data. The goal of the light field camera is not to impose a narrative, but instead to get entirely out of the way.
  • Extended focus: for artistic reasons, the viewer might want to have some parts of the image in focus and other parts out of focus. It's how our eyes work, with our peripheral vision. But in cases where you want to see the full depth of field, where everything is in focus, the software has an algorithm for that (to try it out, press 'E' on your keyboard).
  • Stereo viewing: this speaks to the multidimensional nature of the vector field data. In the real world, when we move our head, the foreground moves faster than the background. So too with light-field images: you can simulate parallax by moving your cursor, and better understand the spatial relationships between objects in the scene.

These capabilities aren't just components of the device, they are technological paradigms embodied by the device. That, to me, is what is so incredibly beautiful about this technology. It's the best example of what technology should be: a material thing that improves the work of the mind.

A call to the seismic industry

The seismic wavefield is what we should be giving to the interpreter. This probably means engineering a seismic system where less work is done by the processor, and more control is given to the interpreter through software that does the heavy lifting. Interpreters need to have direct feedback with the medium they are interpreting. How does seismic have to change to allow that narrative?

R is for Resolution

Resolution is becoming a catch-all term for various aspects of the quality of a digital signal, whether it's a photograph, a sound recording, or a seismic volume.

I got thinking about this on seeing an ad in AAPG Explorer magazine, announcing an 'ultra-high-resolution' 3D in the Gulf of Mexico (right), aimed at site-survey and geohazard detection. There's a nice image of the 3D, but the only evidence offered for the 'ultra-high-res' claim is the sample interval in space and time (3 m × 6 m bins and 0.25 ms sampling). This is analogous to the obsession with megapixels in digital photography, but it is only one of several ways to look at resolution. The effect of increasing the sample interval of some digital images is shown in the second column here, compared to the 200 × 200 pixel originals (click to zoom):

Another aspect of resolution is spatial bandwidth, which gets at resolving power, perhaps analogous to focus for a photographer. If the range of frequencies is too narrow, then broadband features like edges cannot be represented. We can simulate poor frequency content by bandpassing the data, for example smoothing it with a Gaussian filter (column 3).

Yet another way to think about resolution is precision (column 4). Indeed, when audiophiles talk about resolution, they are talking about bit depth. We usually record seismic with 32 bits per sample, which allows us to discriminate between a large number of values — but we often view seismic with only 6 or 8 bits of precision. In the examples here, we're looking at 2 bits. Fewer bits means we can't tell the difference between some values, especially as it usually results in clipping.

If it comes down to our ability to tell events (or objects, or values) apart, then another factor enters the fray: signal-to-noise ratio. Too much noise (column 5) impairs our ability to resolve detail and discriminate between things, and to measure the true value of, say, amplitude. So while we don't normally talk about the noise level as a resolution issue, it is one. And it may have the most variety: in seismic acquisition we suffer from thermal noise, line noise, wind and helicopters, coherent noise, and so on.
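As an aside for anyone who wants to play along, here's a minimal stand-in for the kind of notebook that generated these figures (not the actual images.ipynb mentioned below): the four impairments discussed so far, each applied to a dummy image.

    # Four of the impairments discussed above, applied to a stand-in image:
    # coarser sampling, reduced spatial bandwidth, reduced bit depth, and noise.
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def downsample(img, factor):
        """Coarser sample interval: decimate, then blow back up for display."""
        return zoom(img[::factor, ::factor], factor, order=0)

    def bandlimit(img, sigma):
        """Reduced spatial bandwidth: Gaussian low-pass filter."""
        return gaussian_filter(img, sigma)

    def quantize(img, bits):
        """Reduced precision: keep only 2**bits distinct amplitude levels."""
        levels = 2 ** bits
        lo, hi = img.min(), img.max()
        q = np.round((img - lo) / (hi - lo) * (levels - 1))
        return q / (levels - 1) * (hi - lo) + lo

    def add_noise(img, snr):
        """Reduced signal-to-noise ratio: additive Gaussian noise."""
        return img + np.random.randn(*img.shape) * img.std() / snr

    img = np.random.rand(200, 200)            # stand-in for a 200 x 200 image
    panels = [downsample(img, 4), bandlimit(img, 3.0), quantize(img, 2), add_noise(img, 1.0)]
    print([p.shape for p in panels])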

I can only think of one more impairment to the signals we collect, and it may be the most troubling: the total duration or extent of the observation (column 6). How much information can you afford to gather? Uncertainty resulting from a small window is the basis of the game Name That Tune. If the scale of observation is not appropriate to the scale we're interested in, we risk a kind of interpretation 'gap' — related to a concept we've touched on before — and it's why geologists' brains need to be helicoptery. A small 3D is harder to interpret than a large one. 

The final consideration is not a signal effect at all. It has to do with the nature of the target itself. Notice how tolerant the brick wall image is to the various impairments (especially if you know what it is), and how intolerant the photomicrograph is. In the astronomical image, the galaxy is tolerant; the stars are not. Notice too that trying to 'resolve' the galaxy (into a point, say) would be a mistake: it is inherently low-resolution. Indeed, its fuzziness is one of its salient features.

Have I missed anything? Are there other ways in which the recorded signal can suffer and targets can be confused or otherwise unresolved? How does illumination fit in here, or spectral bandwidth? What do you mean when you talk about resolution?


This post is an excerpt from my talk at SEG, which you can read about in this blog post. You can even listen to it if you're really bored. The images were generated by one of my IPython Notebooks that I point to in the talk, specifically images.ipynb.

Astute readers with potent memories will have noticed that we have skipped Q in our A to Z. I just cannot seem to finish my post about Q, but I will!

The Safe Band ad is copyright of NCS SubSea. This low-res snippet qualifies as fair use for comment.

All the time freaks

Thursday was our last day at the SEG Annual Meeting. Evan and I took in the Recent developments in time-frequency analysis workshop, organized by Mirko van der Baan, Sergey Fomel, and Jean-Baptiste Tary (Vienna). The workshop came out of an excellent paper I reviewed this summer, which was published online a couple of weeks ago:

Tary, JB, RH Herrera, J Han, and M van der Baan (2014), Spectral estimation—What is new? What is next?, Rev. Geophys. 52. doi:10.1002/2014RG000461.

The paper compares the results of several time–frequency transforms on a suite of 'benchmark' signals. The idea of the workshop was to invite further investigation of these and other transforms. The organizers did a nice job of inviting contributors with diverse interests and backgrounds. The following people gave talks, several of them sharing their code (*):

  • John Castagna (Lumina) with a review of the applications of spectral decomposition for seismic analysis.
  • Steven Lin (NCU, Taiwan) on empirical methods and the Hilbert–Huang transform.
  • Hau-Tieng Wu (Toronto) on the application of transforms to monitoring respiratory patterns in animals.*
  • Marcílio Matos (SISMO) gave an entertaining talk about various aspects of the problem.
  • Haizhou Yang (Stanford) on synchrosqueezing transforms applied to problems in anatomy.*
  • Sergey Fomel (UT Austin) on Prony's method... and how things don't always work out.*
  • Me, talking about the fidelity of time–frequency transforms, and some 'unsolved problems' (for me).*
  • Mirko van der Baan (Alberta) on the results from the Tary et al. paper.

Some interesting discussion came up in the two or three unstructured parts of the session, organized as mini-panel discussions with groups of authors. Indeed, it felt like the session could have lasted longer, because I don't think we got very close to resolving anything. Some of the points I took away from the discussion:

  • My observation: there is no existing survey of the performance of spectral decomposition (or AVO) — these would be great risking tools.
  • Castagna's assertion: there is no model that predicts the low-frequency 'shadow' effect (confusingly it's a bright thing, not a shadow).
  • There is no agreement on whether the so-called 'Gabor limit' of time–frequency localization is a lower bound on spectral decomposition. I will write more about this in the coming weeks; a small sketch of the basic trade-off follows this list.
  • Should we even be attempting to use reassignment, or other 'sharpening' tools, on broadband signals? To put it another way: does instantaneous frequency mean anything in seismic signals?
  • What statistical measures might help us understand the amount of reassignment, or the precision of time–frequency decompositions in general?
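Here's the sketch promised above: the same synthetic signal analysed with a short and a long STFT window, just to show the basic trade-off that underlies the 'Gabor limit' discussion. The signal and window lengths are arbitrary choices of mine.

    # The time-frequency trade-off in one loop: short windows give fine time
    # sampling but coarse frequency bins; long windows give the reverse.
    import numpy as np
    from scipy.signal import spectrogram

    dt = 0.001
    t = np.arange(0, 2.0, dt)
    # Two transient events: a 30 Hz burst at 0.5 s and an 80 Hz burst at 1.5 s.
    sig = (np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) / 0.05) ** 2)
           + np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 1.5) / 0.05) ** 2))

    for nperseg in (32, 256):
        f, tau, Sxx = spectrogram(sig, fs=1 / dt, nperseg=nperseg, noverlap=nperseg // 2)
        print(f"window = {nperseg:3d} samples:  "
              f"frequency bin = {f[1] - f[0]:6.2f} Hz,  time step = {tau[1] - tau[0]:.3f} s")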

The fidelity of time–frequency transforms

My own talk was one of the hardest I've ever done, mainly because I don't think about these problems very often. I'm not much of a mathematician, so when I do think about them, I tend to have more questions than insights, so I made my talk into a series of questions for the audience. I'm not sure I got much closer to any answers, but I have a better idea of my questions now... which is a kind of progress I suppose.

Here's my talk (latest slides, GitHub repo). Comments and feedback are, as always, welcome.


Two sides to every story?

We all have our biases.

Ovation, a data management company, set up a sexy shoeshine stand again this year at the SEG Annual Meeting, a science & technology meeting for subsurface professionals. This cynical and spurious subordination of women by a technology company in our community should be addressed by the immediate adoption of a code of conduct by SEG.

Ovation wants to liven up a boring tradeshow. They hired a small business, owned and run by women, to provide their customers and prospects with shiny shoes. The women are smart to capitalize on their looks to make a living. Anyone who thinks they're being exploited, or that this is an inappropriate way to attract customers at a scientific conference, needs to get over themselves.
       
Last year I picked on one of the marketing strategies employed by SeisWare, a Calgary software company. I implied that the women in fitted dresses handing out beer tickets were probably marketing consultants, not scientists, and I was not alone in my misgivings. My interpretation was that the sexy gimmick was a stand-in for more geophysics-based engagement, something many vendors are afraid of.

On Tuesday, one of SeisWare's geologists called me out on this. On Twitter, in the open, where these conversations belong. She was one of the women in tight dresses; the others were also geoscientists. She had chosen the dresses, felt great about them, and been excited about the chance to represent the company and look awesome doing it. She was saddened and frustrated by the negative remarks about those choices. I need to check my assumptions next time.
       
Evan and I went to the excellently named Euclid Hall on Monday evening. It was full; whilst waiting, the maître d' told us the place was full of exploration geophysicists, to which we replied that we were geophysicists too. She went on to say that she was studying the subject at CU, prompting a high-five from Evan. Then she said, "I shouldn't say this, but I worry that I won't be taken seriously, because I'm a girl."      
 

What's the other side to this story?

 

Big imaging, little imaging, and telescopes

I caught three lovely talks at the special session yesterday afternoon, Recent Advances and the Road Ahead. Here are my notes...

The neglected workhorse

If you were to count up all the presentations on seismic migration at this convention, you'd find that only 6% of them are about time migration. Even though it is the workhorse of seismic data processing, it is the most neglected topic in migration. It's old technology, it's a commodity. Who needs to do research on time migration anymore? Sergey does.

Speaking as an academic, Fomel said, "we are used to the idea that most of our ideas are ignored by industry," even though many transformative ideas in the industry can be traced back to academics. He noted that it takes at least 5 years to get traction, and the 5 years are up for his time migration ideas, "and I'm starting to lose hope". Here are five things you probably didn't know about time migration:

  • Time migration does not need travel times.
  • Time migration does not need velocity analysis.
  • Single offsets can be used to determine velocities.
  • Time migration does need approximations, but the approximation can be made increasingly accurate.
  • Time migration distorts images, but the distortion can be removed with regularized inversion.

It was a joy to listen to Sergey describe these observations through what he called beautiful equations: "the beautiful part about this equation is that it has no parameters", or "the beauty of this equation is that it does not contain velocity", and so on. Mad respect.

Seismic adaptive optics

Alongside seismic multiples, poor illumination, and bandwidth limitations, John Etgen (BP) submitted that, in complex overburden, velocity is the number one problem for seismic imaging. Correct velocity model equals acceptable image. His (perhaps controversial) point was that when velocities are complex, multiples, no matter how severe, are second order thorns in the side of the seismic imager. "It's the thing that's killing us, and that's the frontier." He also posited that full waveform inversion may not save us after all, and image gather analysis looks even less promising.

While FWI looks to catch the wavefield and look at it in the space of the data, migration looks to catch the wavefield and look at it at the image point itself. He elegantly explained these two paradigms, and suggested that both may be flawed.

John urged, "We need things other than what we are working on", and shared his insights from another field. In ground-based optical astronomy, for example, when the image of a star is distorted by turbulence in our atmosphere, astronomers numerically warp the curvature of the lens to correct for rapid variations in the phase of the incoming wavefront. The lenses we use for seismic focusing, velocities, can be tweaked just the same by looking at the wavefield part of the way through its propagation. He quoted Jon Claerbout:

If you want to understand how a horse runs, you gotta run along with it.

Big imaging, little imaging, and combination of the two

There are a number of ways one could summarize what petroleum seismologists do. But hearing CGG researcher Sam Gray's talk yesterday was a bit of an awakening. His talk was a reflection on the notion of big imaging versus little imaging, and the need for convergence.

Big imaging is the structural stuff. Structural migration, stratigraphic imaging, wide-azimuth acquisition, and so on. It includes the hardware and compute innovations of broadband, blended sources, deblending processing, anisotropic imaging, and the beginnings of viscoacoustic reverse-time migration. 

Little imaging is inversion. It's reservoir characterization. It's AVO and beyond. Azimuthal velocities (fast and slow directions) hint at fracture orientations, azimuthal amplitudes hint even more subtly at fracture compliance.

Big imaging is hard because it's computationally expensive, and velocities are unknown. Little imaging is hard because features like fractures, faults, and pores are at the centimetre scale, but on land we lay out inlines and crosslines hundreds of metres apart, and use signals that carry only a few bits of information from an area the size of a football field.

What we've been doing with imaging is what he called a separated workflow. We use gathers to make big images. We use gathers to make rock properties. But seldom do the two meet. How often have you tested whether the rock properties from the little imaging explain the wiggles in the big imaging? Our work needs to become a cycle between the two if we want our relevance and impact to improve.

The figures are copyright of the authors and SEG, and used in accordance with SEG's permission guidelines.

The most epic geophysics hackathon in the world, ever

Words can't express how awesome the 2014 Geophysics Hackathon was. The spirit embodied by the participants is shared by our generous sponsors... the deliberate practice of creativity and collaboration. 

We convened at Thrive, a fantastic coworking space in the hip Lower Downtown district of Denver. Their friendly staff went well beyond their duty in accommodating our group. The abundance of eateries and bars makes it perfect for an event like this, especially when the organization is a bit, er, spontaneous.

We opened the doors at 8 on Saturday morning and put the coffee and breakfast out, without any firm idea of how many people would show up. But by 9 a sizeable cohort of undergrads and grad students from the Colorado School of Mines had already convened around projects, while others trickled in. The way these students showed up, took ownership, and rolled up their sleeves was inspiring. A few folks even spent last week learning Android in order to put their ideas on a mobile device. While at times we encounter examples that make us wonder if we are going to be alright, these folks, with their audacity and wholesomeness, revive our faith that we will. 

The theme of the event was resolution, but really the brief was wide open. There was a lot of non-seismic geophysics, a lot of interactive widgets ('slide this to change the thickness; slide that to change the resistivity'), and a lot of novel approaches. In a week or two we'll be posting a thorough review of the projects the 6 teams built, so stay tuned for that.

The photos are all on Flickr, or you can visit our Hashpi.pe for the captions and other tweetage.

Another great outcome was that all of the projects are open source. Several of the projects highlighted the escape-velocity innovation that is possible when you have an open platform behind you. The potential impact of tools like Mines JTK, SimPEG, and Madagascar is huge. Our community must not underestimate the super-powers these frameworks give us.

The hackathon will be back next year in New Orleans (17 and 18 October: mark your calendars!). We will find a way to add a hacker bootcamp for those wanting to get into this gig. And we're looking for ways to make something happen in Europe. If you have a bright idea about that, please get in touch.