# 2021 Virtual Undergraduate Research Symposium

# Lofting: Estimating Mineral Distribution with Convolutional GAN Models

**PROJECT NUMBER: 33** | **AUTHOR:** Kane Bruce, Computer Science

**MENTOR:** Hua Wang, Computer Science

**GRADUATE STUDENT MENTOR:** Hoon Seo, Computer Science

**ABSTRACT**

Estimating geographic mineral distribution is an important task in mine location planning. Planners often do this by drilling long, thin boreholes into the terrain to sample the mineral distribution, so the resulting 3D data is very sparse. Moreover, borehole results can be misleading about the actual mineral content of the ground. To improve the accuracy of such data, we propose a convolutional generative adversarial imputation network, CGAIN, which performs sparse data imputation over a volume of terrain to predict the mineral contents of the ground. We also propose a 3D visualization technique called “alpha-shape coloring” to better observe and visualize the average mineral densities in different regions of the terrain.

**PRESENTATION**

**AUTHOR BIOGRAPHY**

Kane Bruce is a combined BS/MS senior in Computer Science with a focus in Artificial Intelligence at the Colorado School of Mines, performing research for the MInDS@Mines lab. He has applied machine learning in bioinformatics and geology and has worked in the defense industry. He currently works for Seagate as a data science intern, and in the future he hopes to conduct research on AI modularity, security, memory, and cognitive modeling using enterprise-grade high-performance servers.

Hi Kane,

This is super interesting. Do you think this method will work better for certain minerals compared to others? What deposits has this been tested on?

Hi Samantha, great question! I think that minerals that are less sparse and thus provide more initial borehole data for a similar number of boreholes will likely yield better results. More training data for the model means it will perform better. Our datasets have included gold, silver, platinum, copper, and tin; from these, we have obtained better results with the more common and data-rich copper and gold — though, this will also depend on the terrain.

Hi Kane,

Very nice work here in applying methods from machine learning to estimating mineral distributions. As a follow-up to Samantha’s question, how does the measured borehole data coverage affect the sparsity pattern inferred in the mapping step of your algorithm? How well does your imputation step work on different types of sparsity patterns, i.e. different distributions of materials? How does this method compare with existing mineral distribution estimation techniques apart from the work cited in your poster?

Hi Nick, wonderful string of questions! The mapping step is merely the process by which we take our raw data in spherical coordinates and convert it into the Cartesian data tensor format more accessible to our model. We find that this step doesn’t significantly influence the shape of the predicted mineral distribution itself, so much as the actual predicted point quantities of a mineral within the distribution. This means it is the imputation step that learns the different sparsity patterns for each of the minerals provided in the model’s training data, regardless of the chosen mapping, since the physical region a borehole occupies is approximately the same across the different mappings. We plan to test whether different mappings better suit different minerals in terms of accuracy.
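For concreteness, a rough sketch of what such a mapping step might look like: binning sparse spherical-coordinate assay samples into a dense Cartesian voxel grid, with unvisited voxels left missing. The grid size, coordinate conventions, and normalization here are illustrative, not our actual pipeline:

```python
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """Convert spherical (radius, polar angle, azimuth) to Cartesian (x, y, z)."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

def map_to_tensor(samples, shape=(64, 64, 64), bounds=1.0):
    """Bin sparse (r, theta, phi, assay) samples into a Cartesian voxel grid.
    Voxels with no samples stay NaN, i.e. missing, for the imputation step."""
    grid = np.full(shape, np.nan)
    counts = np.zeros(shape)
    for r, theta, phi, assay in samples:
        x, y, z = spherical_to_cartesian(r, theta, phi)
        # Normalize coordinates in [-bounds, bounds] to voxel indices.
        idx = tuple(
            min(int((c + bounds) / (2 * bounds) * n), n - 1)
            for c, n in zip((x, y, z), shape)
        )
        if counts[idx] == 0:
            grid[idx] = 0.0  # initialize before averaging into a NaN voxel
        # Running mean of assay values that land in the same voxel.
        grid[idx] = (grid[idx] * counts[idx] + assay) / (counts[idx] + 1)
        counts[idx] += 1
    return grid
```

Under this kind of mapping, the set of filled voxels (the sparsity pattern) is fixed by where the boreholes physically are, which is why it stays approximately the same across mapping choices.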

Note that existing orebody block estimation techniques such as nearest-neighbor, inverse-distance weighting, and Kriging are each essentially a single fixed function for this task. Our neural-net model should theoretically perform better than any of these individually, because it aims to learn the correct imputation “function”, which may be some combination of statistical or mathematical functions within the neural network, directly from the sparse, *actual* data it is given. However, we have yet to perform a direct quantitative comparison between the methods.
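For reference, inverse-distance weighting, one of those fixed-function baselines, can be sketched in a few lines. This is the textbook formulation, not code from our comparison:

```python
import numpy as np

def idw_estimate(known_xyz, known_vals, query_xyz, power=2.0, eps=1e-12):
    """Inverse-distance weighting: estimate each query point's value as a
    distance-weighted average of the known borehole samples."""
    known_xyz = np.asarray(known_xyz, float)    # (n, 3) sample locations
    known_vals = np.asarray(known_vals, float)  # (n,)   assay values
    query_xyz = np.asarray(query_xyz, float)    # (m, 3) points to estimate
    # Pairwise distances between queries and known samples: shape (m, n).
    d = np.linalg.norm(query_xyz[:, None, :] - known_xyz[None, :, :], axis=-1)
    # Clamp zero distances so a query exactly at a sample returns that sample.
    w = 1.0 / np.maximum(d, eps) ** power
    return (w * known_vals).sum(axis=1) / w.sum(axis=1)
```

The whole method is this one formula; a learned imputation model, by contrast, is free to combine many such relationships if the training data supports them.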

Great presentation, Kane! Do you think you could share more about how the model imputes unknown data? Specifically, are there any heuristics being used that are based on patterns of mineral density that are informed by geologic observations from previous studies, or is it purely done using statistics?

Hi Ryker, interesting thought! As we describe, our imputation step is essentially the traditional train/test/validation cycle of a convolutional neural-net model, i.e., pure deep machine learning. Any geological heuristics about the data it may learn and use become internal to the neural network itself, and thus are not easily extractable aside from interpreting the output (or visualizing the neuron layers themselves). It does raise the question of whether there *are* such heuristics we could build into our loss functions or the network architecture beforehand; that wouldn’t be trivial, but we may consider it moving forward.
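For anyone curious about the statistics side, the standard GAIN-style input construction that this family of imputation models builds on looks roughly like this. It is a sketch of the published GAIN formulation; the noise range and channel layout are illustrative, not our exact CGAIN code:

```python
import numpy as np

def gain_generator_input(x, mask, rng=np.random.default_rng(0)):
    """Build the GAIN generator input from a data volume and its mask.
    Observed entries (mask == 1) are kept; missing entries (mask == 0) are
    replaced with small noise; the mask is appended as an extra channel so
    the network knows which values are real."""
    noise = rng.uniform(0.0, 0.01, size=x.shape)
    x_tilde = mask * x + (1 - mask) * noise
    return np.concatenate([x_tilde, mask], axis=0)  # channels-first layout
```

Everything the model infers about mineral density patterns then comes from training against held-out observed entries, not from any hand-coded geological rule.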

Kane, great presentation. I was wondering whether including types of data other than borehole data, such as two perpendicular horizontal surveys from ground-penetrating radar, would be useful for additional training. I’m also interested in how much data preprocessing was required.

Hi Ana, thank you! I had this thought as well, since I joined the project after its initial conception and model implementation. I think incorporating other geological markers would be a great idea, but we would have to weigh the extra memory and storage requirements. As it stands, the entire model cannot fit onto a single GPU, and we have had to come up with a number of cube-slicing techniques to process the convolutional space in chunks rather than as a whole, without sacrificing model performance. Maybe using a group or series of models in conjunction with one another could be a viable way to go? Thank you for the thoughts and input!
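To give a flavor of the cube-slicing idea, here is a minimal sketch of tiling a volume into overlapping cubes so a convolutional model can run on each cube while still seeing context (a halo) at chunk borders. The cube size and halo width are illustrative, not the values we actually use:

```python
def iter_cubes(shape, size=32, halo=4):
    """Yield slice tuples that tile a 3D volume with overlapping cubes.
    Each cube is `size` voxels wide; adjacent cubes share a `halo`-voxel
    border so convolutions at chunk edges still have surrounding context."""
    step = size - halo
    for z in range(0, shape[0], step):
        for y in range(0, shape[1], step):
            for x in range(0, shape[2], step):
                # Clamp the trailing edge so cubes never run past the volume.
                yield tuple(
                    slice(s, min(s + size, n))
                    for s, n in zip((z, y, x), shape)
                )
```

Each cube is then processed independently (only its GPU-sized chunk is resident at a time), and predictions are stitched back together, typically keeping only the interior of each cube so the halo regions overlap-average cleanly.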