Le meilleur hackathon du monde

[Image: calendar of 2017 hackathons.]

Hackathons are short bursts of creative energy, making things that may or may not turn out to be useful. In general, people work in small teams on new projects with no prior planning. The goal is to find a great idea, then manifest that idea as something that (barely) works, but might not do very much, then show it to other people.

Hackathons are intellectually and professionally invigorating. In my opinion, there's no better team-building, networking, or learning event.

The next event will be 10 & 11 June 2017, right before the EAGE Conference & Exhibition in Paris. I hope you can come.

The theme for this event will be machine learning. We had the same theme in New Orleans in 2015, but suffered a bit from a lack of data. This time we will have a collection of open datasets for participants to build off, and we'll prime hackers with a data-and-skills bootcamp on Friday 9 June. We did this once before in Calgary – it was a lot of fun. 

Can you help?

It's my goal to get 52 participants to this edition of the event. But I'll need your help to get there. Please share this post with any friends or colleagues you think might be up for a weekend of messing about with geoscience data and ideas. 

Besides participants, the thing we always need is sponsors. So far we have three organizations sponsoring the event. Dell EMC is stepping up once again, thanks to the unstoppable David Holmes and his team. And we welcome Sandstone — thank you to Graham Ganssle, my Undersampled Radio co-host, whom I did not coerce in any way.

[Image: sponsors so far.]

If your organization might be awesome enough to help make amazing things happen in our community, I'd love to hear from you. There's info for sponsors here.

If you're still unsure what a hackathon is, or what's so great about them, check out my November article in the Recorder (Hall 2015, CSEG Recorder, vol 40, no 9).

x lines of Python: AVO plot

Amplitude vs offset (or, more properly, angle) analysis is a core component of quantitative interpretation. The AVO method is based on the fact that the reflectivity of a geological interface does not depend only on the acoustic rock properties (velocity and density) on both sides of the interface, but also on the angle of the incident ray. Happily, this angular reflectivity encodes elastic rock property information. Long story short: AVO is awesome.

As you may know, I'm a big fan of forward modeling — predicting the seismic response of an earth model. So let's model the response of the interface in a very simple model of only two rock layers. And we'll do it in only a few lines of Python. The workflow is straightforward:

  1. Define the properties of a model shale; this will be the upper layer.
  2. Define a model sandstone with brine in its pores; this will be the lower layer.
  3. Define a gas-saturated sand for comparison with the wet sand. 
  4. Define a range of angles to calculate the response at.
  5. Calculate the brine sand's response at the interface, given the rock properties and the angle range.
  6. For comparison, calculate the gas sand's response with the same parameters.
  7. Plot the brine case.
  8. Plot the gas case.
  9. Add a legend to the plot.
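To make the workflow concrete, here's a minimal sketch of those nine lines, assuming bruges and matplotlib are installed. The rock properties below are invented for illustration; the notebook uses its own values.

    import bruges
    import matplotlib.pyplot as plt

    # Invented rock properties: Vp (m/s), Vs (m/s), density (kg/m³).
    vp0, vs0, ρ0 = 2400, 1000, 2450    # 1. model shale (upper layer)
    vp1, vs1, ρ1 = 2500, 1300, 2350    # 2. brine sand (lower layer)
    vp2, vs2, ρ2 = 2350, 1400, 2150    # 3. gas sand for comparison
    θ = range(0, 31)                   # 4. incidence angles, 0 to 30 degrees
    shuey_brine = bruges.reflection.shuey2(vp0, vs0, ρ0, vp1, vs1, ρ1, θ)   # 5. brine response
    shuey_gas = bruges.reflection.shuey2(vp0, vs0, ρ0, vp2, vs2, ρ2, θ)     # 6. gas response
    plt.plot(θ, shuey_brine, label='brine')    # 7. plot the brine case
    plt.plot(θ, shuey_gas, label='gas')        # 8. plot the gas case
    plt.legend()                               # 9. add a legend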

That's it — nine lines! Here's the result:

[Figure: the resulting AVO plot of reflectivity vs angle for the brine and gas cases.]

Once we have rock properties, the key bit is in the middle:

    θ = range(0, 31)   # incidence angles from 0 to 30 degrees
    shuey = bruges.reflection.shuey2(vp0, vs0, ρ0, vp1, vs1, ρ1, θ)

shuey2 is one of the many functions in bruges — here it provides the two-term Shuey approximation, and the library contains lots of other useful equations. Virtually everything else in our AVO plotting routine is just accounting and plotting.
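For reference, the two-term Shuey approximation boils down to R(θ) ≈ R(0) + G sin²θ, where R(0) is the zero-offset (intercept) reflectivity and G is the AVO gradient.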


As in all these posts, you can follow along with the code in the Jupyter Notebook. You can view this on GitHub, or run it yourself in the increasingly flaky MyBinder (which is down at the time of writing... I'm working on an alternative).

What would you like to see in x lines of Python? Requests welcome!

St Nick's list for the geoscientist

It's that time again. Perhaps you know a geoscientist that needs a tiny gift, carefully wrapped, under a tiny tree. Perhaps that geoscientist has subtly emailed you this blog post, or non-subtly printed it out and left copies of it around your house and/or office and/or person. Perhaps you will finally take the hint and get them something awesomely geological.

Or perhaps 2016 really is the rubbish year everyone says it is, and it's gonna be boring non-geological things for everyone again. You decide.

Science!

I have a feeling science is going to stick around for a while. Get used to it. Better still, do some! You can get started on a fun science project for well under USD 100 — how about these spectrometers from Public Lab? Or these amazing aerial photography kits?

All scientists must have a globe. It's compulsory. Nice ones are expensive, and they don't get much nicer than this one (right) from Real World Globes (USD 175 to USD 3000, depending on size). You can even draw on it. Check out their extra-terrestrial globes too: you can have Ganymede for only USD 125!

If you can't decide what kind of science gear to get, you could inspire someone to make their own with a bunch of Arduino accessories from SparkFun. When you need something to power your gadget in the field, get a fuel cell — just add water! Or if it's all just too much, play with some toy science like this UNBELIEVABLE Lego volcano, drone, crystal egg scenario.

Stuff for your house

Just because you're at home doesn't mean you have to stop loving rocks. Relive those idyllic field lunches with this crazy rock sofa that looks exactly like a rock but is not actually a rock (below left). Complete the fieldwork effect with a rainhead shower and some mosquitoes.

No? OK, check out these very cool Livingstone bouldery cushions and seats (below right, EUR 72 to EUR 4750).

If you already have enough rocks and/or sofas to sit on, there are some earth sciencey ceramics out there, like this contour-based coffee cup by Polish designer Kina Gorska, who's based in Oxford, UK. You'll need something to put it on; how about a nice absorbent sandstone coaster?

Wearables

T-shirts can make powerful statements, so don't waste yours on tired old tropes like "schist happens" or "it's not my fault". Go for bold design before nerdy puns... check out these beauties: one pretty bold one containing the text of Lyell's Principles of Geology (below left), one celebrating Bob Moog with waveforms (perfect for a geophysicist!), and one featuring the lonely Chrome T-Rex (about her). Or if you don't like those, you can scour Etsy for volcano shirts.

Books

You're probably expecting me to lamely plug our own books, like the new 52 Things you Should Know About Rock Physics, which came out a few weeks ago. Well, you'd be wrong. There are lots of other great books about geoscience out there!

For example, Brian Frehner (a historian at Oklahoma State) has Finding Oil (2016, U Nebraska Press) coming out on Thursday this week. It covers the early history of petroleum geology, and I'm sure it'll be a great read. Or how about a slightly 'deeper history' book: the new one from Walter Alvarez (the Alvarez), A Most Improbable Journey: A Big History of Our Planet and Ourselves (2016, WW Norton), which is getting good reviews. Or for something a little lighter, check out my post on scientific comic books — all of which are fantastic — or this book, which I don't think I can describe.

Dry your eyes

If you're still at a loss, you could try poking around in the prehistoric giftological posts from 2011, 2012, 2013, 2014, or 2015. They contain over a hundred ideas between them, I mean, come on.

Still nothing? Never mind, dry your eyes in style with one of these tissue box holders. Paaarp!


The images in this post are all someone else's copyright and are used here under fair use guidelines. I'm hoping the owners are cool with people helping them sell stuff!

The disappearing lake trick

On Sunday 20 November it's the 36th anniversary of the 1980 Lake Peigneur drilling disaster. The shallow lake — almost just a puddle at about 3 m deep — disappeared completely when the Texaco wellbore penetrated the Diamond Crystal Salt Company mine at a depth of about 350 m.

Location, location, location

It's thought that the rig, operated by Wilson Brothers Ltd, was in the wrong place. It seems a calculation error or misunderstanding resulted in the incorrect coordinates being used for the well site. (I'd love to know if anyone knows more about this as the Wikipedia page and the video below offer slightly different versions of this story, one suggesting a CRS error, the other a triangulation error.)

The entire lake sits on top of the Jefferson Island salt dome, but the steep sides of the salt dome, and a bit of bad luck, meant that a few metres were enough to spoil everyone's day. If you have 10 minutes, it's worth watching this video...

Apparently the accident happened at about 0430, and the crew abandoned the subsiding rig before breakfast. The lake was gone by dinner time. Here's how John Warren, a geologist and proprietor of Saltworks, describes the emptying in his book Evaporites (Springer 2006, and repeated on his awesome blog, Salty Matters):

Eyewitnesses all agreed that the lake drained like a giant unplugged bathtub—taking with it trees, two oil rigs [...], eleven barges, a tugboat and a sizeable part of the Live Oak Botanical Garden. It almost took local fisherman Leonce Viator Jr. as well. He was out fishing with his nephew Timmy on his fourteen-foot aluminium boat when the disaster struck. The water drained from the lake so quickly that the boat got stuck in the mud, and they were able to walk away! The drained lake didn’t stay dry for long, within two days it was refilled to its normal level by Gulf of Mexico waters flowing backwards into the lake depression through a connecting bayou...

The other source that seems reliable is Oil Rig Disasters, a nice little collection of data about various accidents. It ends with this:

Federal experts from the Mine Safety and Health Administration were not able to apportion blame due to confusion over whether Texaco was drilling in the wrong place or that the mine’s maps were inaccurate. Of course, all evidence was lost.

If the bit about the location is true, it may be one of the best stories of the perils of data management errors. If anyone (at Chevron?!) can find out more about it, please share!

x lines of Python: web scraping and web APIs

The Web is obviously an incredible source of information, and sometimes we'd like access to that information from within our code. Indeed, if the information keeps changing — like the price of natural gas, say — then we really have no alternative.

Fortunately, Python provides tools to make it easy to access the web from within a program. In this installment of x lines of Python, I look at getting information from Wikipedia and requesting natural gas prices from Yahoo Finance. All that in 10 lines of Python — total.

As before, there's a completely interactive, live notebook version of this post for you to run, right in your browser. Quick tip: Just keep hitting Shift+Enter to run the cells. There's also a static repo if you want to run it locally.

Geological ages from Wikipedia

Instead of writing the sentences that describe the code, I'll just show you the code. Here's how we can get the duration of the Jurassic period fresh from Wikipedia:

import re
import requests

url = "http://en.wikipedia.org/wiki/Jurassic"
r = requests.get(url).text
start, end = re.search(r'<i>([\.0-9]+)–([\.0-9]+)&#160;million</i>', r).groups()
duration = float(start) - float(end)
print("According to Wikipedia, the Jurassic lasted {:.2f} Ma.".format(duration))

The output:

According to Wikipedia, the Jurassic lasted 56.30 Ma.

There's the opportunity for you to try writing a little function to get the age of any period from Wikipedia. I've given you a spot of help, and you can even complete it right in your browser — just click here to launch your own copy of the notebook.
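If you get stuck, here's one shape such a function might take. This is just a sketch: it reuses the regex from above, and Wikipedia's markup isn't guaranteed to be identical on every period's page.

def duration(period):
    """Return the duration of a geological period, in Ma, scraped from Wikipedia."""
    url = "http://en.wikipedia.org/wiki/" + period
    html = requests.get(url).text
    start, end = re.search(r'<i>([\.0-9]+)–([\.0-9]+)&#160;million</i>', html).groups()
    return float(start) - float(end)

print("The Jurassic lasted {:.2f} Ma.".format(duration('Jurassic')))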

Gas price from Yahoo Finance

url = "http://download.finance.yahoo.com/d/quotes.csv"
params = {'s': 'HHG17.NYM', 'f': 'l1'}
r = requests.get(url, params=params)
price = float(r.text)
print("Henry Hub price for Feb 2017: ${:.2f}".format(price))

Again, the output is fast, and pleasingly up-to-the-minute:

Henry Hub price for Feb 2017: $2.86

I've added another little challenge in the notebook. Give it a try... maybe you can even adapt it to find other live financial information, such as stock prices or interest rates.

What would you like to see in x lines of Python? Requests welcome!

Welly to the wescue

I apologize for the widiculous title.

Last week I described some headaches I was having with well data, and I introduced welly, an open source Python tool that we've built to help cure the migraine. The first versions of welly were built — along with the first versions of striplog — for the Nova Scotia Department of Energy, to help with their various data wrangling efforts.

Aside — all software projects funded by government should in principle be open source.

Today we're using welly to get data out of LAS files and into so-called feature vectors for a machine learning project we're doing for Canstrat (kudos to Canstrat for their support for open source software!). In our case, the features are wireline log measurements. The workflow looks something like this:

  1. Read LAS files into a welly 'project', which contains all the wells. This bit depends on lasio.
  2. Check what curves we have with the project table I showed you on Thursday.
  3. Check curve quality by passing a test suite to the project, and making a quality table (see below).
  4. Fix problems with curves with whatever tricks you like. I'm not sure how to automate this.
  5. Export as the X matrix, all ready for the machine learning task.

Let's look at these key steps as Python code.

1. Read LAS files

 
from welly import Project
p = Project.from_las('data/*.las')

2. Check what curves we have

Now we have a project full of wells and can easily make the table we saw last week. This time we'll use aliases to simplify things a bit — this trick allows us to refer to all GR curves as 'Gamma', so for a given well, welly will take the first curve it finds in the list of alternatives we give it. We'll also pass a list of the curves (called keys here) we are interested in:
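In code, the aliases and keys look something like this. The mnemonics are examples only, not a definitive list, and I'm assuming curve_table_html is happy without the tests argument (we'll add that in a moment):

from IPython.display import HTML

# Example aliases only: for each alias, welly takes the first mnemonic it finds.
alias = {
    'Gamma': ['GR', 'GAM', 'SGR'],
    'Density': ['RHOB', 'DEN', 'RHOZ'],
    'Sonic': ['DT', 'AC', 'DT4P'],
}

keys = ['Gamma', 'Density', 'Sonic']

html = p.curve_table_html(keys=keys, alias=alias)
HTML(html)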

The project table. The name of the curve selected for each alias is shown, along with its mean and units as a quick QC. A couple of those RHOB curves definitely look dodgy, and they turned out to be DRHO correction curves.


3. Check curve quality

Now we have to define a suite of tests. Lists of tests to run on each curve are held in a Python data structure called a dictionary. As well as tests for specific curves, there are two special test lists: Each and All, which are run on each curve encountered, and on all curves together, respectively. (The latter is needed to, for example, compare curves to each other to look for duplicates.) The welly module quality contains some predefined tests, but you can also define your own test functions — these functions take a curve as input, and return either True (for a pass) or False (for a fail).

 
import welly.quality as qty

tests = {
    'All': [qty.no_similarities],             # run on all curves together
    'Each': [qty.no_monotonic],               # run on every curve encountered
    'Gamma': [
        qty.all_positive,
        qty.mean_between(10, 100),            # API units
    ],
    'Density': [qty.mean_between(1000, 3000)],   # expecting kg/m³
    'Sonic': [qty.mean_between(180, 400)],       # expecting µs/m
}

html = p.curve_table_html(keys=keys, alias=alias, tests=tests)
HTML(html)

The green dot means that all tests passed for that curve. Orange means some tests failed. If all tests fail, the dot is red. The quality score shows a normalized score for all the tests on that well. In this case, RHOB and DT are failing the 'mean_between' test because they have Imperial units.

4. Fix problems

Now we can fix any problems. This part is not yet automated, so it's a fairly hands-on process. Here's a very high-level example of how I fix one issue, just as an example:

 
import numpy as np

def fix_negs(c):
    c[c < 0] = np.nan
    return c

# Glossing over some details, we give a mnemonic, a test
# to apply, and the function to apply if the test fails.
fix_curve_if_bad('GAM', qty.all_positive, fix_negs)
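For what it's worth, fix_curve_if_bad is not a welly function; it's the kind of little helper you write yourself. Here's a sketch of what it might look like, assuming the project p from step 1 and test functions that take a curve and return True or False:

def fix_curve_if_bad(mnemonic, test, fix):
    """Apply `fix` to the named curve in every well where `test` fails."""
    for well in p:
        curve = well.data.get(mnemonic)
        if curve is not None and not test(curve):
            well.data[mnemonic] = fix(curve)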

What I like about this workflow is that the code itself is the documentation. Everything is fully reproducible: load the data, apply some tests, fix some problems, and export or process the data. There's no need for intermediate files called things like DT_MATT_EDIT or RHOB_DESPIKE_FINAL_DELETEME. The workflow is completely self-contained.

5. Export

The data can now be exported as a matrix, specifying a depth step that all data will be interpolated to:

 
X, _ = p.data_as_matrix(X_keys=keys, step=0.1, alias=alias)

That's it. We end up with a 2D array of log values that will go straight into, say, scikit-learn*. I've omitted here the process of loading the Canstrat data and exporting that, because it's a bit more involved. I will try to look at that part in a future post. For now, I hope this is useful to someone. If you'd like to collaborate on this project in the future — you know where to find us.
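To make the hand-off concrete, here's a minimal sketch of where X might go next. The label vector y (the lithology codes from Canstrat, aligned with the rows of X) is assumed; building it is the part I've left out.

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X, y)   # y: one lithology label per row of X (not shown in this post)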

* For more on scikit-learn, don't miss Brendon Hall's tutorial in October's Leading Edge.


I'm happy to let you know that agilegeoscience.com and agilelibre.com are now served over HTTPS — so connections are private and secure by default. This is just a matter of principle for the Web, and we go to some lengths to ensure our web apps modelr.io and pickthis.io are served over HTTPS too. Find out more about SSL from DigiCert, the provider of Squarespace's (and Agile's) certs, which are implemented with the help of the non-profit Let's Encrypt, which we use and support with dollars.

Well data woes

I probably shouldn't be telling you this, but we've built a little tool for wrangling well data. I wanted to mention it, because it's doing some really useful things for us — and maybe it can help you too. But I probably shouldn't because it's far from stable and we're messing with it every day.

But hey, what software doesn't have a few or several or loads of bugs?

Buggy data?

It's not just software that's buggy. Data is as buggy as heck, and subsurface data is, I assert, the buggiest data of all. Give units or datums or coordinate reference systems or filenames or standards or basically anything at all a chance to get corrupted in cryptic ways, and they take it. Twice if possible.

By way of example, we got a package of 10 wells recently. It came from a "data management" company. There are issues... Here are some of them:

  • All of the latitude and longitude data were in the wrong header fields. No coordinate reference system in sight anywhere. This is normal of course, and the only real side-effect is that YOU HAVE NO IDEA WHERE THE WELL IS.
  • Header chaos aside, the files were non-standard LAS sort-of-2.0 format, because tops had been added in their own little completely illegal section. But the LAS specification has a section for stuff like this (it's called OTHER in LAS 2.0).
  • Half the porosity curves had units of v/v, and half %. No big deal...
  • ...but a different half of the porosity curves were actually v/v. Nice.
  • One of the porosity curves couldn't make its mind up and changed scale halfway down. I am not making this up.
  • Several of the curves were repeated with other names, e.g. GR and GAM, DT and AC. Always good to have a spare, if only you knew if or how they were different. Our tool curvenam.es tries to help with this, but it's far from perfect.
  • One well's RHOB curve was actually the PEF curve. I can't even...

The remarkable thing is not really that I have this headache. It's that I expected it. But this time, I was out of paracetamol.

Cards on the table

Our tool welly, which I stress is very much still in development, tries to simplify the process of wrangling data like this. It has a project object for collecting a lot of wells into a single data structure, so we can get a nice overview of everything: 

[Image: the welly project table.]
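Getting that far is only a couple of lines; here's a sketch, with a made-up path:

from welly import Project

p = Project.from_las('data/*.las')   # gather all the LAS files into one project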

Our goal is to include these curves in the training data for a machine learning task to predict lithology from well logs. The trained model can make really good lithology predictions... if we start with non-terrible data. Next time I'll tell you more about how welly has been helping us get from this chaos to non-terrible data.

Nowhere near Nyquist

This is a guest post by my Undersampled Radio co-host, Graham Ganssle.

You can find Gram on the web, LinkedIn, Twitter, and GitHub.

This post is a follow up to Tuesday's post about the podcast — you might want to read that first.


Undersampled Radio was born out of a dual interest in podcasting. Matt and I both wanted to give it a shot, but we didn’t know what to talk about. We still don’t. My philosophy on UR is that it’s forumesque; we have a channel on the Software Underground where we solicit ideas, draft guests, and brainstorm about what should be on the show. We take semi-formed thoughts and give them a good think with a guest who knows more than us. Live and uncensored.

Since with words I... have not.. a way... the live nature of the show gives it a silly, laid back attitude. We attempt to bring our guests out of interview mode by asking about their intellectual curiosities in addition to their professional interests. Though the podcast releases are lightly edited, the YouTube live-stream recordings are completely raw. For a good laugh at our expense you should certainly watch one or two.

Techie deets

Have a look at the command center. It’s where all the UR magic (okay, digital trickery) happens in pre- and post-production.

It's a mess but it works!


We’ve migrated away from the traditional hardware combination used by most podcasters. Rather than use the optimum mic/mixer/spaghetti-of-cables preferred by podcasting operations which actually generate revenue, we’ve opted to use less hardware and do a bit of digital conditioning on the back end. We conduct our interviews via YouTube live (aka Google Hangouts on Air) then on my Ubuntu machine I record the audio through stereo mix using PulseAudio and do the filtering and editing in Audacity.

Though we usually interview guests via Google Hangouts, we have had one interviewee in my office for an in-person chat. It was an incredible episode that was filled with the type of nonlinear thinking which can only be accomplished face to face. I mention this because I’m currently soliciting another New Orleans recording session (message me if you’re interested). You buy the plane ticket to come record in the studio. I buy the beer we’ll drink while recording.


As Matt guessed, there actually are paddle boats rolling by while I record. Here's the view from my recording studio; note the paddle boat on the left.

Forward projections

We have several ideas about what to do next. One is a live competition of some sort, where Matt and I compete while one or more guests judge our performance. We're also keen to do a group chat session, in which all the members of the Software Underground will be invited to a raucous, unscripted chat about whatever's on their minds. Unfortunately we dropped the ball on a live interview session at the SEG conference this year, but we'd still like to get together in some sciencey venue and grab randos walking by for lightning interviews.

In accord with the remainder of our professional lives, Matt and I both conduct the show in a manner which keeps us off balance. I have more fun, and learn more information more quickly, by operating in a space outside of my realm of knowledge. Ergo, we are open to your suggestions and your participation in Undersampled Radio. Come join us!

 

Tune in to Undersampled Radio

Back in the summer I mentioned Undersampled Radio, the world's newest podcast about geoscience. Well, geoscience and computers. OK, machine learning and geoscience. And conferences.

We're now 25 shows in, having started with Episode 0 on 28 January. The show is hosted by Graham 'Gram' Ganssle, a consulting and research geophysicist based in New Orleans, and me. Appropriately enough, I met Gram at the machine-learning-themed hackathon we did at SEG in 2015. He was also a big help with the local knowledge.

I broadcast from one of the phone rooms at The HUB South Shore. Gram has the luxury of a substantial book-lined office, which I imagine has ample views of paddle-steamers lolling on the Mississippi (but I actually have no idea where it is). 

To get an idea of what we chat about, check out the guests on some recent episodes:

Better than cable

The podcast is really more than just a podcast: it's a live TV show, broadcast on YouTube Live. You can catch the action while it's happening on the Undersampled Radio channel. However, it's not easy to catch live because the episodes are not that predictable — they are announced about 24 hours in advance on the Software Underground Slack group (you are in there, right?). We should try to put them out on the @undrsmpldrdio Twitter feed too...

So, go ahead and watch the very latest episode, recorded last Thursday. We spoke to Tim Hopper, a data scientist in Raleigh, NC, who works at Distil Networks, a cybersecurity firm. It turns out that using machine learning to filter web traffic has some features in common with computational geophysics...

You can subscribe to the show in iTunes or Google Play, or anywhere else good podcasts are served. Grab the RSS Feed from the UndersampledRad.io website.

Of course, we take guest requests. Who would you like to hear us talk to? 

Working without a job

I have drafted variants of this post lots of times. I've never published them because advice always feels... presumptuous. So let me say: I don't have any answers. But I do know that the usual way of 'finding work' doesn't work any more, so maybe the need for ideas, or just hope, has grown. 

Lots of people are out of work right now. I just read that 120,000 jobs have been lost in the oil industry in the UK alone. It's about the same order of magnitude in Canada, maybe as much as 200,000. Indeed, several of my friends — smart, uber-capable professionals — are newly out of jobs. There's no fat left to trim in operator or service companies... but the cuts continue. It's awful.

The good news is that I think we can leave this downturn with a new, and much better, template for employment. The idea is to be more resilient for 'next time' (the coming mergers, the next downturn, the death throes of the industry, that sort of thing).

The tragedy of the corporate professional 

At least 15 years ago, probably during a downturn, our corporate employers started telling us that we are responsible for our own careers. This might sound like a cop-out, maybe it was even meant as one, but really it's not. Taken at face value, it's a clear empowerment.

My perception is that most professionals did not rise to the challenge, however. I still hear, literally all the time, that people can't submit a paper to a conference, or give a talk, or write a blog, or that they can't take a course, or travel to a workshop. Most of the time this comes from people who have not even asked, they just assume the answer will be No. I worry that they have completely given in; their professional growth curtailed by the real or imagined conditions of their employment.

More than just their missed opportunity, I think this is a tragedy for all of us. Their expertise effectively gone from the profession, these lost scientists are unknown outside their organizations.

Many organizations are happy for things to work out that way, but when they make the situation crystal clear by letting people go, the inequity is obvious. The professional realizes, too late, that the career they were supposed to be managing (and perhaps thought they were managing by completing their annual review forms on time) was just that — a career, not a job. A career spanning multiple jobs and, it turns out, multiple organizations.

I recently read on LinkedIn someone wishing newly let-go people good luck, hoping that they could soon 'resume their careers'. I understand the sentiment, but I don't see it the same way. You don't stop being a professional; it's not a job. Your career continues, it's just going in a different direction. It's definitely not 'on hold'. If you treat it that way, you're missing an opportunity, perhaps the best one of your career so far.

What you can do

Oh great, unsolicited advice from someone who has no idea what you're going through. I know. But hey, you're reading a blog, what did you expect? 

  • Do you want out? If you think you might want to leave the industry and change your career in a profound way, do it. Start doing it right now and don't look back. If your heart's not in this work, the next months and maybe years are really not going to be fun. You're never going to have a better run at something completely different.
  • You never stop being a professional, just like a doctor never stops being a doctor. If you're committed to this profession, grasp and believe this idea. Your status as such is unrelated to the job you happen to have or the work you happen to be doing. Regaining ownership of our brains would be the silveriest of linings to this downturn.
  • Your purpose as a professional is to offer help and advice, informed by your experience, in and around your field of expertise. This has not changed. There are many, many channels for this purpose. A job is only one. I firmly believe that if you create value for people, you will be valued and — eventually — rewarded.
  • Establish a professional identity that exists outside and above your work identity. Get your own business cards. Go to meetings and conferences on your own time. Write papers and articles. Get on social media. Participate in the global community of professional geoscientists. 
  • Build self-sufficiency. Invest in a powerful computer and fast Internet. Learn to use QGIS and OpendTect. Embrace open source software and open data. If and when you get some contracting work, use Tick to count hours, Wave for accounting and invoicing, and Todoist to keep track of your tasks. 
  • Find a place to work — I highly recommend coworking spaces. There is one near you, I can practically guarantee it. Trust me, it's a much better place to work than home. I can barely begin to describe the uplift, courage, and inspiration you will get from the other entrepreneurs and freelancers in the space.
  • Find others like you, even if you can't get to a coworking space, your new peers are out there somewhere. Create the conditions for collaboration. Find people on meetup.com, go along to tech and start-up events at your local university, or if you really can't find anything, organize an event yourself! 
  • Note that there are many ways to make a living. Money in exchange for time is one, but it's not a very efficient one. It's just another hokey self-help business book, but reading The 4-Hour Workweek honestly changed the way I look at money, time, and work forever.
  • Remember entrepreneurship. If you have an idea for a new product or service, now's your chance. There's a world of making sh*t happen out there — you genuinely do not need to wait for a job. Seek out your local startup scene and get inspired. If you've only ever worked in a corporation, people's audacity will blow you away.

If you are out of a job right now, I'm sorry for your loss. And I'm excited to see what you do next.