The need for simulated environments to replicate current conditions in real-life locations is becoming increasingly important as simulation moves into the operational domain. Most militaries already have the ability to capture high-resolution imagery of live locations using satellites, manned aircraft, or UAVs. However, turning that raw imagery into a fully interactive training environment has historically required complex, time-consuming, manual, and compute-heavy data processing.
Until now, solving this problem has required weeks of manual labor. That investment is acceptable for non-time-sensitive builds, where the goal is simply a highly accurate model of a small area. But in the operational world, timescales shrink to a matter of hours, not weeks or months.
This blog explains how to transform raw imagery into fully interactive training environments quickly and cost-efficiently, using commercially available software in combination with BISim’s geospatial suite of tools. You will learn how to convert live survey data into simulation-ready terrains by leveraging automation and advanced processing techniques, significantly reducing time-to-training and enhancing mission rehearsal capabilities.
Using Photogrammetry to Replicate Reality
Photogrammetry is the process of converting high-resolution survey photos into accurate 3D models of real-world locations. This technique relies on capturing multiple overlapping images from aerial platforms such as UAVs, satellites, or manned aircraft.
Images are fed into photogrammetry software, which detects common points across multiple images and aligns them to create a 3D point cloud. The cloud is then processed to generate a mesh representation of the terrain, with textures mapped from original photos to enhance realism. By analyzing shared features such as edges, corners, and surface details, the software reconstructs an environment's geometry and visual characteristics. This process, often referred to as "Structure from Motion" (SfM), creates a digital representation of physical terrain, commonly known as a "3D reality mesh."
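To make the SfM step concrete, the sketch below shows a minimal two-view version of the feature matching and triangulation that photogrammetry tools repeat across hundreds of overlapping photos (together with bundle adjustment and dense reconstruction). It uses the open-source OpenCV library; the image file names and the intrinsic matrix K are placeholder assumptions, not values from any particular survey.

```python
# Minimal two-view illustration of the matching/triangulation step inside SfM.
# File names and the intrinsic matrix K are placeholders for illustration only.
import cv2
import numpy as np

K = np.array([[3000.0, 0.0, 2000.0],   # example focal length and principal point
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("survey_0001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("survey_0002.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe local features (edges, corners, surface details).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match descriptors and keep distinctive matches (Lowe's ratio test).
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Recover the relative camera pose from the essential matrix.
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# 4. Triangulate the matches into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T  # N x 3 points
print(f"Reconstructed {len(cloud)} sparse 3D points")
```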
3D Reality Mesh vs a Simulation-Ready Training Environment
A reality mesh is a digital reconstruction of a physical location made up of interconnected polygons forming a continuous surface. While it provides a visually accurate representation, it lacks the structure needed for simulation training. To be useful in a training environment, the mesh must be optimized for real-time rendering, enriched with semantic data, and extended to support AI-driven behaviors and physics interactions. In short, a simulation-ready training environment must include the following (a rough data-model sketch follows the list):
- Efficient terrain rendering through Level of Detail (LOD) optimization
- Semantically classified objects that distinguish roads, buildings, and vegetation
- Collidable physics properties such as destructible structures
- AI-driven navigation that enables vehicles and personnel movement
- Military-standard interoperability for seamless integration (e.g., DIS, HLA)
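As a simplified illustration of what that extra structure can look like, here is a hypothetical data-model sketch in Python; the class and field names are assumptions made purely for illustration and do not represent BISim’s actual terrain schema.

```python
# Hypothetical sketch of the metadata a simulation-ready terrain carries on top
# of a bare reality mesh: LODs, semantic classes, collision, and navigation data.
# Names and fields are illustrative assumptions, not BISim's terrain format.
from dataclasses import dataclass, field
from enum import Enum

class SurfaceClass(Enum):
    GROUND = "ground"
    ROAD = "road"
    BUILDING = "building"
    VEGETATION = "vegetation"
    WATER = "water"

@dataclass
class TerrainObject:
    mesh_lods: list[str]          # mesh files from coarsest to finest LOD
    surface_class: SurfaceClass   # semantic label used by AI and sensors
    collidable: bool = True       # participates in physics queries
    destructible: bool = False    # can be damaged or removed at runtime

@dataclass
class TerrainTile:
    geo_bounds: tuple[float, float, float, float]  # lon/lat bounding box
    dtm_path: str                                   # bare-earth digital terrain model
    objects: list[TerrainObject] = field(default_factory=list)
    navmesh_path: str | None = None                 # AI navigation mesh for the tile
```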
Transforming Reality Mesh into Training Environments
We’re excited to announce that our geospatial engineering teams have created a fully automated process for transforming 3D reality mesh into simulation-ready 3D terrains. Even more impressive, this process takes just hours, not days. The reality mesh processing workflow provides all layers required to automatically generate simulation-ready 3D terrain that can be used immediately in training, mission planning, and mission rehearsal.
The process involves importing and georeferencing the 3D mesh, extracting point clouds, classifying terrain features, and generating a digital terrain model (DTM) with high-resolution textures.
Above-ground objects are segmented, and textures are optimized to reduce rendering overhead while maintaining visual fidelity.
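As a rough sketch of how the ground-classification and DTM steps might look with open tooling, the example below uses the PDAL library’s SMRF ground filter and GDAL raster writer; the file names, resolution, and filter parameters are placeholder assumptions and do not describe BISim’s internal pipeline.

```python
# Sketch of classifying ground points and rasterizing a DTM with PDAL.
# Paths and parameters are placeholders, not BISim's production settings.
import json
import pdal

pipeline_def = [
    "mesh_points.laz",  # point cloud extracted from the reality mesh
    {   # classify bare-earth returns with the Simple Morphological Filter
        "type": "filters.smrf",
        "scalar": 1.2, "slope": 0.2, "threshold": 0.45, "window": 16.0
    },
    {   # keep only ground-classified points (ASPRS class 2)
        "type": "filters.range",
        "limits": "Classification[2:2]"
    },
    {   # rasterize the ground points into a 0.5 m digital terrain model
        "type": "writers.gdal",
        "filename": "dtm.tif",
        "resolution": 0.5,
        "output_type": "idw"
    },
]

pipeline = pdal.Pipeline(json.dumps(pipeline_def))
count = pipeline.execute()
print(f"Processed {count} points into dtm.tif")
```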
The simulation-ready 3D terrain can also be hosted on Mantle, our terrain management platform, which enables terrain creation for systems based on VBS4, Blue IG, Unity, Unreal, Cesium, and many other formats.
Learn More at ITEC
Join us at ITEC 2025 to explore our automated terrain pipeline firsthand and see how it transforms reality mesh into simulation-ready environments.
Get a demo at booth B31
Watch our terrain processing in action at booth B31, where a demonstration will highlight how the Mantle pipeline efficiently converts reality mesh data into fully interactive, relevant training and rehearsal environments for the armed forces. The demonstration data was provided by the French Air Force Helicopter Crew Training Center as part of its digital innovation effort.
Attend our white paper presentation
Join Dr. Nataliya Tyagur, Geospatial Engineer at BISim, on March 27th from 13:00 to 13:30 as she delves into the technology behind our automated data processing and its real-world applications.
We look forward to seeing you at ITEC 2025. Visit our booth and experience the future of simulation terrain generation!