See what a trained segmentation model predicts on real Maya LiDAR data
What you're seeing: A U-Net with a ResNet34 encoder (pre-trained on ImageNet, fine-tuned on 100 Chactun tiles) takes a 3-band LiDAR visualization (Sky-View Factor, Positive Openness, Slope) as input and outputs, for every pixel, the probability that it belongs to an archaeological structure. Brighter = higher confidence. Compare the model's prediction against the expert-annotated ground truth.
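The per-pixel probabilities in the heatmap come from squashing the network's raw logits through a sigmoid. A minimal NumPy sketch of that final step — the logit values below are synthetic stand-ins, not real model output:

```python
import numpy as np

def logits_to_heatmap(logits: np.ndarray) -> np.ndarray:
    """Convert raw per-pixel logits to probabilities in (0, 1) via sigmoid."""
    return 1.0 / (1.0 + np.exp(-logits))

# Synthetic 4x4 logit map standing in for the U-Net's output on one tile.
logits = np.array([[-4.0, -2.0, 0.0, 2.0],
                   [-3.0, -1.0, 1.0, 3.0],
                   [-2.0,  0.0, 2.0, 4.0],
                   [-1.0,  1.0, 3.0, 5.0]])

# Brighter pixels (values closer to 1) mean higher structure confidence.
heatmap = logits_to_heatmap(logits)
```

A logit of 0 maps to a probability of exactly 0.5, which is why 0.5 is the natural default for the binarization threshold.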
Prediction heatmap overlaid on the terrain. Adjust opacity and threshold.
Drag the threshold slider to see how precision, recall, and IoU change — the same tradeoff explored in the Threshold Lab, but now driven by a real model's output.
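What the slider computes can be sketched in a few lines of NumPy: binarize the probability map at the chosen threshold, then count true positives, false positives, and false negatives against the expert mask. The tiny probability map and mask here are made up for illustration:

```python
import numpy as np

def metrics_at_threshold(probs, truth, thresh):
    """Binarize `probs` at `thresh` and score against the ground-truth mask,
    returning (precision, recall, IoU)."""
    pred = probs >= thresh
    tp = np.logical_and(pred, truth).sum()   # structure pixels correctly found
    fp = np.logical_and(pred, ~truth).sum()  # background flagged as structure
    fn = np.logical_and(~pred, truth).sum()  # structure pixels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, iou

# Toy 3x3 probability map and a hand-made ground-truth mask.
probs = np.array([[0.9, 0.8, 0.2],
                  [0.7, 0.4, 0.1],
                  [0.3, 0.2, 0.6]])
truth = np.array([[1, 1, 0],
                  [1, 0, 0],
                  [0, 0, 0]], dtype=bool)

# Lowering the threshold from 0.8 to 0.5 recovers the missed 0.7 pixel
# (recall rises) but also admits the false-positive 0.6 pixel
# (precision falls): the precision/recall tradeoff in miniature.
p_hi, r_hi, iou_hi = metrics_at_threshold(probs, truth, 0.8)
p_lo, r_lo, iou_lo = metrics_at_threshold(probs, truth, 0.5)
```

IoU penalizes both error types at once, which is why it is a common single-number summary for segmentation quality.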