Welcome to our Digital Corner! A lot is happening in the EAGE digital world: on this page you will find highlights from the latest initiatives on machine learning, A.I. and digitalization involving EAGE members worldwide, along with new weekly contributions on Artificial Intelligence from the EAGE A.I. special interest community.

Sign up to receive the EAGE Digital Newsletter


Coming soon

On 8-11 December 2020, the EAGE 2020 Annual Conference & Exhibition Online will present a rich programme including sessions dedicated entirely to Data and Computer Science. You can choose among "High Performance Computing & Neural Networks", "Data Management and Machine Learning Driven Seismic Interpretation", and "Machine Learning Applications in Various Fields". Click here to explore the programme and register!

The near-surface industry makes great use of data processing, which is why the Near Surface Geoscience 2020 Conference Online brings you a number of technical sessions focused on it. Sessions on New Technologies, Developments and Research Trends and on Modelling, Inversion and Data-Processing in Near-Surface Geophysics, among others, will be available for delegates to join and deepen their knowledge of the topic. Learn more here and sign up today!


The EAGE Interactive Online Short Courses (IOSC) are a new format: they bring a carefully selected programme by experienced instructors from industry and academia online, giving participants the opportunity to follow the latest education in geoscience and engineering remotely. Each course is designed to be easily digested over two or three days, and participants can interact live with the instructors and ask questions.

Machine Learning is a recurring theme in EAGE's education offering. Keep an eye on upcoming IOSC courses and other learning opportunities to match your digitalization goals.

Learn more about upcoming courses.


Recent activities

On 2 July 2020 EAGE Local Chapter Netherlands organized an online event on "Artificial Intelligence in Oil & Gas" featuring the following talks:

- Machine learning in seismic data processing by Dr Paul Zwartjes (Senior Research Geophysicist, Aramco)
- From super human performance in games to augmented decision making in the real world by Mr. Norbert Dolle (Managing Partner, WhiteSpaceEnergy)

The recording of this event was shared in the community groups of Local Chapter Netherlands and EAGE A.I.
Join EAGE communities to learn about upcoming initiatives and access more resources!


News from the EAGE A.I. Community

The A.I. Committee aims to share tips, techniques, learning experiences and anything else of interest to stay informed and up to speed with this emerging field. We also want to help geoscientists maintain their employability through the many reorganization cycles, when the world requires different skills than those most of us technical experts and generalists were taught at university.

Learning A.I.

To get started, the focus naturally falls on learning. If you have been looking for the right entry point, here are three (free) courses for you to consider:

Resources:
- Beginner: AI For Everyone
- Intermediate: Practical Deep Learning for Coders, v3
- Advanced: UVA Deep Learning Course

Staying up-to-date with A.I.

This week's focus is on staying up-to-date with the rapidly moving field of AI.
"Two-minute Papers" is a video podcast series that distills the hottest and most fascinating research in computer vision and machine learning into a format accessible to everyone. The Artificial Intelligence Podcast by Lex Fridman is an excellent resource for interviews with researchers from across AI and machine learning. ArXiv Sanity Preserver is your one-stop shop for AI and machine learning preprints, gathering the most recent ArXiv publications in one place.

Resources:
- Beginner: Two-minute Papers
- Intermediate: The Artificial Intelligence Podcast
- Advanced: ArXiv Sanity Preserver

The pandemic and A.I.

The coronavirus outbreak has put us in unprecedented times. This week we take a special look at the role A.I. can play in battling the pandemic and in transforming healthcare practice. Check out the latest issue of Nature Machine Intelligence for a general read on the potential advantages and challenges of deploying A.I. in the pandemic. With no prior medical expertise required, "A.I. for Medicine Specialization" teaches how to apply A.I. tools to medical diagnosis, prognosis and treatment, including working with 2D and 3D medical image data. And on Kaggle, the largest machine learning community, you can access open datasets, share code and models, and enter competitions to join the battle against Covid-19 as an A.I. practitioner.

Resources:
- Beginner: A path for A.I. in the pandemic
- Intermediate: AI for Medicine Specialization
- Advanced: Kaggle ML Community

A.I. - Give it a go, it won't bite

If you haven’t had a go before – try it, get your hands (digitally) dirty. You can play without breaking anything (or in some cases even without installing anything). It can help you understand what’s possible. It can help at work, in your AI studies or across the rest of your life. Thankfully these days you don’t have to be a coding supremo to take those first steps. Much of the AI world is moving towards ‘low-code’ or even ‘no-code’, so you can do some pretty impressive AI stuff without leaving the comfort of a friendly app, whether that be on your phone or laptop. Below are a few cool places where you can start exploring the art-of-the-possible – hopefully it will inspire you and be fun, enjoy!

Resources:
- Beginner: AI Experiments with Google
- Beginner to Intermediate: Machine Learning Experiments with GitHub
- Beginner to Advanced: Anaconda, incl. Orange (no code), Jupyter, Spyder (Python) & RStudio (low to high code)

Hands-on A.I. exercises

A hurdle for many wanting to gain hands-on experience with AI is setting up a development environment - hours of frustration trying to install Python on Windows; we have all been there! Google's CoLab provides an online Jupyter-like environment with FREE GPU resources where you can experiment to your heart's content. While Google already provides a number of data science tutorials, one of the great benefits of CoLab is that it can open any .ipynb file. Whether you are looking at csv files, images, or jumping right into manipulating segy data, there are hundreds of geoscience-specific examples sitting in open GitHub repositories. Here are three notebooks to get you started:

Resources:
- Beginner: Analysing thin section compositions
- Intermediate: An image segmentation example from the TGS salt detection Kaggle competition
- Advanced: Seismic inversion on the Volve dataset

Deep Learning for A.I.

Drastic improvements in hardware performance (GPUs) have enabled the widespread use of Deep Neural Networks (DNNs). Combined with the Convolutional Neural Network (CNN) approach, they complement seismic workflows very well: fault detection, time lapse, inversion, seismic-log integration, etc.
In application, careful consideration is advised: the models are not transparent (black boxes), they depend heavily on training data, and their outcomes are approximations, sometimes with artefacts.
However, because of its multilayered architecture, Deep Learning has proven 'unreasonably effective', and improved understanding through research (MIT) will enable novel breakthroughs.
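The convolution at the heart of a CNN can be written out in a few lines. Below is a minimal numpy sketch; the step-edge "image" and the edge-detecting kernel are illustrative toys, not part of any seismic workflow:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation inside a CNN layer."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A step edge and a vertical-edge kernel: the response is largest where pixel
# values change left to right, loosely like a learned filter lighting up on a fault.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
response = conv2d(image, kernel)       # strongest (negative) response at the edge
```

A CNN stacks many such filters, with the kernel values learned from data rather than hand-designed as here.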

Resources:
- Beginner: TensorFlow Neural Network
- Intermediate: Deep Learning Specialization
- Advanced: Seismic Deep Learning libraries

Understanding the U-net

Understanding what happens in images is crucial in the field of machine vision. This problem is broken up into separate but similar topics, such as classification, localization, object detection, semantic segmentation and image segmentation. Without realizing it, geoscientists face similar challenges: think of first-break picking or salt interpretation. One of the workhorses for image segmentation problems is the U-net, and to get ahead in the field, or simply to grasp what your colleagues have developed for you, one should really have a basic understanding of this algorithm. Here are three useful links:

Resources:
- Beginner: Convolutional Networks for Biomedical Image Segmentation (video)
- Intermediate: Convolutional Networks for Biomedical Image Segmentation (paper)
- Advanced: U-net application for the TGS challenge

Deep Learning for A.I. (2)

There is plenty of online training material on Deep Learning. This week we recommend three sources that are very useful for illustrating the practicalities of Deep Learning. They are really fun to use!

TensorFlow playground (already discussed in a different context) provides simple two-dimensional examples of feed-forward neural networks, mostly for classification, and displays the results in a very useful way for somebody who is new to neural networks.
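For readers who want to peek behind the playground's interface, here is a minimal numpy version of the same idea: a tiny feed-forward network trained by backpropagation on the classic XOR problem. The architecture, learning rate and iteration count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic 2D classification task that no linear model can solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)   # 2 inputs -> 4 hidden units
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)   # 4 hidden -> 1 output

def forward(X):
    H = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))         # sigmoid output
    return H, p

_, p = forward(X)
loss_before = np.mean((p - y) ** 2)

lr = 0.5
for _ in range(3000):                                # backpropagation, MSE loss
    H, p = forward(X)
    dp = 2.0 * (p - y) / len(X) * p * (1 - p)        # gradient at the output
    dH = dp @ W2.T * (1 - H ** 2)                    # gradient at the hidden layer
    W2 -= lr * (H.T @ dp); b2 -= lr * dp.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

_, p = forward(X)
loss_after = np.mean((p - y) ** 2)
```

The playground animates exactly this process: watch the hidden units carve the plane into the regions the output layer combines.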

3D Visualization of a Convolutional Neural Network shows the details of the structure and performance of a simple convolutional neural network applied to the classical MNIST dataset.

GAN Lab explains Generative Adversarial Networks, and it really helps understand the interaction between the Generator and the Discriminator.

Resources:
- Beginner: TensorFlow playground
- Intermediate: 3D Visualization of a Convolutional Neural Network
- Advanced: Generative Adversarial Networks

A.I. Challenges
Historically, the ImageNet Challenge has allowed researchers to develop ground-breaking machine learning methods on open data, enabling reproducible, comparable progress in computer vision.
In geoscience, efforts such as the SEG contest on facies prediction have inspired geoscientists to engage in the field of AI and serve as an excellent entry point for machine learning in geoscience.
Currently ongoing, the FORCE machine learning contest on wells and seismic provides a labeled dataset for facies prediction from wireline logs and a seismic dataset for fault detection.
These and other collaborative challenges will help to inspire future geoscientists and breakthrough technologies in applied machine learning for geoscience.
Explainable A.I.
Explainable Artificial Intelligence (XAI) tries to open the black box of Machine Learning models such that their behavior can be understood by humans. Google Cloud's A.I. Explanations provide a set of tools and frameworks to explain how much each feature in your model contributed to the predicted results for classification and regression tasks. More specifically, SHAP (SHapley Additive exPlanations) is a popular XAI tool based on a solution in cooperative game theory. It can explain the output of any machine learning model with rich visualizations that are friendly for end users.
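To demystify the game-theoretic idea behind SHAP, here is the exact Shapley-value computation for a toy linear model. The weights, input and baseline are invented purely for illustration; real SHAP implementations approximate this sum efficiently for arbitrary models:

```python
from itertools import combinations
from math import factorial

# Toy linear model f(x) = 2*x0 + 1*x1 + 0*x2, explained at x = (1, 1, 1)
# against an all-zeros baseline (all values are illustrative).
weights = [2.0, 1.0, 0.0]
x = [1.0, 1.0, 1.0]
baseline = [0.0, 0.0, 0.0]
n = len(x)

def value(subset):
    """Model output with features in `subset` taken from x, the rest from baseline."""
    return sum(w * (x[i] if i in subset else baseline[i])
               for i, w in enumerate(weights))

def shapley(i):
    """Exact Shapley value: weighted marginal contribution over all coalitions."""
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for size in range(n):
        for S in combinations(others, size):
            coalition = set(S)
            coeff = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += coeff * (value(coalition | {i}) - value(coalition))
    return phi

phis = [shapley(i) for i in range(n)]   # ≈ [2.0, 1.0, 0.0] for this linear model
```

For a linear model each Shapley value reduces to weight times feature deviation from the baseline, which makes the result easy to sanity-check; the attractive property in general is that the values always sum to the difference between the prediction and the baseline prediction.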
Quick and easy A.I.
It's great to try simple examples to see what A.I. can do. There are multiple sites where you can try examples for free and see the results. In the examples below you can upload images and see how ML systems perform classification and extraction, and what data they return.

Google – Image classification

Microsoft – Image classification

When you want to step up and run your own, more domain-specific data (e.g. timeseries or multiple-attribute data), many of the 'AI Platforms', like Dataiku and DataRobot, allow you to register and run free versions. These systems can run 'code free', so if you can use Excel, you should be able to run them.

These are great ways to explore quickly what A.I. can do and see if it might be relevant for you and your data challenges.
The use of Neural Networks
An exciting area in the deep learning space is the use of neural networks for solving PDEs, the equations that govern the majority of geophysical phenomena. Through tailoring of the cost function, physics-informed neural networks (PINNs) have recently been shown to accurately solve a variety of PDEs. Early attempts in geophysics have been published for solving both the Eikonal and the wave equation. Whilst it is still unclear whether PINNs will reach the precision of our waveform modeling procedures, they are likely to be a fierce competitor with respect to compute time.
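To make the "tailored cost function" idea concrete, here is a miniature PINN in plain numpy for the toy ODE u' = -u with u(0) = 1 (whose solution is exp(-x)). The architecture, collocation points and optimizer are our own illustrative choices; real PINNs use automatic differentiation rather than the finite differences used here:

```python
import numpy as np

rng = np.random.default_rng(0)

H = 10                                    # hidden units in a tiny 1-H-1 MLP
def net(p, x):
    """Evaluate the network u(x) with all parameters packed into one vector p."""
    w1, b1, w2, b2 = p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

xs = np.linspace(0.0, 2.0, 20)            # collocation points in the domain

def loss(p, h=1e-4):
    u = net(p, xs)
    du = (net(p, xs + h) - net(p, xs - h)) / (2 * h)    # du/dx, central differences
    physics = np.mean((du + u) ** 2)                     # residual of u' + u = 0
    boundary = (net(p, np.array([0.0]))[0] - 1.0) ** 2   # enforce u(0) = 1
    return physics + boundary

p = 0.5 * rng.standard_normal(3 * H + 1)
loss_before = loss(p)
lr, eps = 0.02, 1e-6
for _ in range(200):                      # plain gradient descent, numerical gradients
    grad = np.array([(loss(p + eps * e) - loss(p - eps * e)) / (2 * eps)
                     for e in np.eye(p.size)])
    p -= lr * grad
loss_after = loss(p)
```

Note that no solution data appears anywhere: the loss is built entirely from the differential equation and its boundary condition, which is exactly what distinguishes PINNs from ordinary supervised regression.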

The underlying principles of PINNs are detailed on this page.

An example of such a network being used to solve the wave equation is illustrated in this paper.

And, for those ready to get their hands dirty, check out the DeepXDE python library.
Interpretable Machine Learning
Essential for business confidence and for critical decisions is the ability to provide not just accuracy with Machine Learning, but also the why and how.

In short, interpretability means determining a representation of the results in human-understandable terms; with few parameters (e.g. linear regression) this is straightforward. At the other end, Deep Neural Networks (DNNs) are effective in finding subtle relationships among many features but are hard to interpret.
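The linear-regression end of that spectrum is easy to demonstrate: with ordinary least squares, each fitted coefficient reads directly as "expected change in the target per unit change in that feature". The data and effect sizes below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with known effects: y = 3*x0 - 1.5*x1 + 0.5 + noise.
n = 200
X = rng.standard_normal((n, 2))
y = X @ np.array([3.0, -1.5]) + 0.5 + 0.1 * rng.standard_normal(n)

# Ordinary least squares with an intercept column.
A = np.column_stack([X, np.ones(n)])
coef0, coef1, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
# coef0 ties directly to "y rises ~3 per unit of x0" - the model explains itself.
```

A DNN fitted to the same data would predict just as well, but no single parameter would carry this kind of human-readable meaning, which is the gap tools like LIME and DeepLIFT try to bridge.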

Recently developed methods to analyze DNNs include LIME (Local Interpretable Model-Agnostic Explanations) and DeepLIFT (Deep Learning Important Features).

When using alternatives to DNNs, the common belief is that interpretability comes at the expense of accuracy, an assertion with which some disagree.

Suggested reading:
A.I. failures

There are many inspiring quotes on failure, like Thomas Edison's "I have not failed. I've just found 10,000 ways that won't work" and Churchill's "Success is not final, failure is not fatal: it is the courage to continue that counts." Most of these quotes encourage one to persist and to take lessons from failed endeavors. Failures in A.I. and machine learning happen all the time; they are just not talked about much, so their lessons are not as easy to come by as the lessons of success. Here are some links about failure, overpromise and underdelivery of A.I. and machine learning technology for you to learn from.

1) Weapons of Math Destruction

2) How IBM Watson Overpromised and Underdelivered on AI Health Care

3) Consumer Reports Unmasks Tesla’s Full Self-Driving Mystique, Here’s The Upshot

Gaussian Processes and Neural Networks

Gaussian Processes (GPs) for Machine Learning are closely related to geostatistical models, with the exception that Geostatistics tends to focus on one, two or three-dimensional models, while GPs typically live in spaces of very large dimension. GPs are often used to generate possible stochastic realizations constrained by data and provide a way to quantify uncertainties. The book “Gaussian Processes for Machine Learning” by Rasmussen and Williams, is a great introduction to GPs.

Neal showed that, before training, feed-forward Neural Networks (NNs) with a single, infinitely wide hidden layer generate a GP, with a covariance derived from the NN's activation function and the initial probability distributions of the NN's weights and biases.

Neal’s results have been generalized to deep and convolutional networks. This means that, by defining a NN’s architecture and its hyperparameters, we are already defining an implicit “prior” on the output of the NN. The concept of “Deep Image Priors” takes advantage of this by proposing not to train the model using a Training Set, but to directly apply the prior NN model to the optimization task. This has close links with Bayesian Deep Learning, that we will discuss in the near future.

Cross-Validation for Subsurface Machine Learning

Many predictive tasks we encounter in the subsurface are of spatial or temporal nature e.g. predicting porosity and permeability away from well-control, or predicting the future flow behavior of a subsurface reservoir given historical data.

In many applications, we evaluate the performance of algorithms using cross-validation.
Sebastian Raschka’s introduction to model validation provides an excellent overview of the definitions, assumptions, and techniques used to choose the best algorithms and their parameters.

Code examples (Part IV) provide a practical starting point for practitioners.

Spatially correlated data used to build predictive models can significantly affect our ability to judge the spatial predictive performance of algorithms and can lead to an optimistic bias in model evaluation. Roberts et al. provide a comparison of various temporal and spatial validation strategies, as well as the significant impact the choice of validation strategy can have on our ability to judge a model's predictive performance.

Choosing the right validation strategy for the task at hand allows practitioners to reduce bias through model selection, and builds trust in a method's ability to make predictions away from data and on the future state of subsurface systems.
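For ordered (temporal or spatial) samples, one simple alternative to random k-fold is blocked cross-validation, where each test fold is a contiguous block. A small numpy sketch; the helper name is our own, not a scikit-learn API:

```python
import numpy as np

def blocked_kfold(n_samples, n_splits):
    """Yield (train, test) index arrays where each test fold is one contiguous block.

    For spatially or temporally ordered samples, contiguous test blocks reduce
    the leakage (and optimistic bias) that random shuffling can introduce.
    """
    indices = np.arange(n_samples)
    for test in np.array_split(indices, n_splits):
        train = np.setdiff1d(indices, test)
        yield train, test

folds = list(blocked_kfold(10, 5))   # 5 folds, each holding out a contiguous block
```

Scoring a model on such blocks asks it to extrapolate beyond its neighbours, which is much closer to the real task of predicting away from well control than a random split is.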

Learning A.I. (2)

Are you ready to learn A.I. but overwhelmed by the growing number of online courses? Here are three good ones, all 100% free to take.

Google gives away their secrets by offering the Google Machine Learning Crash Course. The course develops intuition around fundamental machine learning concepts and is not very technical.

More intensively, the Stanford Machine Learning Course on Coursera teaches the most effective A.I. techniques with hands-on implementation and Silicon Valley's best practices. 

In addition, the Udacity Artificial Intelligence Course covers a broad introduction to machine learning, probabilistic reasoning, robotics, computer vision, and natural language processing.

AI to enable the Energy Transition

Across the world and throughout the energy industry, the direction of travel is clear: the world needs to dramatically cut emissions while ensuring there is enough energy for countries and communities to continue to develop. AI will have a key role to play, whether in energy efficiency and optimisation, reducing emissions, low-carbon energy generation, or energy distribution and storage.

Below are several views from different parts of the energy creation and consumption ecosystem:

AI has a key role to play in enabling a more sustainable future, and people who can critically apply such techniques will be vital in shaping it.

Vulnerability of Neural Networks

This week we will discuss the vulnerability of neural networks to hacking attempts, either by manipulation from a software perspective or by altering input data in the physical world. Towards Data Science provides a nice introduction to the security vulnerabilities of NNs and the different forms attacks can take. The most common attacks strategically adapt the input data to fool the network into a misclassification. To the human eye, the adapted input data is often almost identical to the original; however, these small adaptations have the power to completely deceive the classification procedure.

At a software level, this can be done by adding noise to the input data, as illustrated by Goodfellow et al., 2015. Their experiments showed how computationally generated noise can be used to trick a network into misclassifying images that look visually identical, resulting in incorrect classifications with a very high confidence score.
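The mechanics of such an attack fit in a few lines. Below is the gradient-sign step applied to a fixed, illustrative logistic-regression classifier; the weights, input and epsilon are invented (Goodfellow et al. attack deep image classifiers, but the core step is the same):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, illustrative "trained" logistic-regression classifier.
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([1.5, 0.5])               # correctly classified input, true class y = 1
y = 1.0
p = sigmoid(w @ x + b)                 # confident prediction for class 1

# Fast gradient sign method: nudge the input in the direction that
# increases the loss (sign of the cross-entropy gradient w.r.t. the input).
grad_x = (p - y) * w
epsilon = 1.0
x_adv = x + epsilon * np.sign(grad_x)
p_adv = sigmoid(w @ x_adv + b)         # the prediction flips below 0.5
```

The perturbation is bounded per component by epsilon, which is why, in the image setting, the adversarial example can look essentially unchanged while the classification flips.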

Adversarial attacks can also be performed in the physical world, by adding stickers or patches to objects to confuse a classification network. Brown et al., 2018 illustrate the use of physical adversarial stickers that, when placed within a camera's frame of reference, cause a banana to be misclassified as a toaster.

Graph Neural Networks

Any data-related problem can be represented using a graph network, a mathematical construct defining interactions between data objects, formally expressed as an ordered pair G of two sets V (vertices or nodes; the data objects) and E (edges; their interconnections): G = (V, E).

Graphs can have any structure; Decision Trees are an example of graphs with extra restrictions on direction and connectivity.

Graph Neural Networks (GNNs) are a category of learning methodologies for optimizing graph networks, currently under rapid development and showing high potential in effectiveness and efficiency.
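One round of GNN-style message passing can be sketched on a toy star graph. The features and weights below are random placeholders, and mean aggregation is just one of several schemes used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph G = (V, E): 4 nodes where node 1 is connected to all others,
# encoded as an adjacency matrix (undirected).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = np.eye(4)                            # one-hot node features
W = rng.standard_normal((4, 2))          # shared "learnable" weights (here: random)

# Message passing: each node averages its neighbours' features (self-loops
# included), then applies the shared linear map and a nonlinearity.
A_hat = A + np.eye(4)
A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
H = np.maximum(0.0, A_norm @ X @ W)      # ReLU(mean-aggregate(X) @ W)
```

Stacking several such rounds lets information propagate further across the graph, which is how GNNs learn representations that respect the connectivity rather than treating nodes as independent samples.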

For application of Graph networks to generate fast physics simulators, check this video.

Here is also an easy-to-read (re)introduction to Graph theory, and a fairly readable short tutorial on GNNs applied to imaging, with PyTorch examples.

A.I. back to basics

Artificial Intelligence seems to be all about fancy machine learning, neural network mathematics and algorithms. The reality is that easily 80% of the time will be spent getting your data ready for action. For geoscientists this at least is familiar: before you can run your fancy RTM or seismic inversion, there is quite some pre-processing to be done too. So, this week we go back to basics, and since we like Python, that means Python basics.

- 1-page Pandas cheat sheet
- Tutorials on various pre-processing topics
- Complete course on Python for data science
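As a taste of what that pre-processing looks like in practice, here is a small pandas sketch; the column names and values are invented well-log-style data, purely for illustration:

```python
import io
import pandas as pd

# Invented well-log-style data with gaps (names and values are illustrative).
raw = io.StringIO("""depth_m,GR_api,porosity
1000,85,0.21
1001,,0.19
1002,90,
1003,70,0.24
""")
df = pd.read_csv(raw)

df["GR_api"] = df["GR_api"].fillna(df["GR_api"].mean())  # fill gaps with the column mean
df = df.dropna(subset=["porosity"])                      # drop rows missing the target
```

Decisions as small as "fill with the mean" versus "drop the row" already shape what a downstream model can learn, which is why this unglamorous step deserves most of the time budget.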

GPT-3, the largest neural network in the world

GPT-3 has made the AI headlines since it appeared in May. A product of the company OpenAI, it can write poetry, translate, calculate, write code, hold online conversations and write papers... It is the largest neural network in the world, with a total of 175 billion parameters. GPT-3 was trained by reading 500 billion words, the equivalent of 150 times the size of Wikipedia (across all its languages)!

Wikipedia provides a general presentation of GPT-3.

There are plenty of different things that GPT-3 can do, many are useful and some are potentially harmful.

GPT stands for “Generative Pretrained Transformer“. GPT-3 addresses some of the well-known issues associated with standard Recurrent Neural Networks.
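The building block that lets Transformers like GPT-3 sidestep the sequential bottleneck of RNNs is scaled dot-product attention, which relates all positions to each other in one matrix operation. A minimal numpy sketch with random placeholder matrices standing in for learned projections:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)    # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every position attends to every other at once."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # each row is an attention distribution
    return weights @ V, weights

# Random stand-ins for the learned query/key/value projections of 4 tokens.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out, attn = attention(Q, K, V)
```

Because the whole sequence is processed in parallel rather than step by step, training scales to the enormous corpora and parameter counts that GPT-3 is built on.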


For more inspiration, make sure to check out First Break's Special Topic "Machine Learning" 2020.
First Break is temporarily offered open access, including technical content and industry news.


More opportunities and news are shared within the A.I. community on LinkedIn: join to hear first-hand about upcoming initiatives and get involved!

Questions? Ideas? You can always reach us at communities@eage.org

Join on LinkedIn