How scalable workflows, automation, and distributed processing make managing large photogrammetry projects possible.

DRONELIFE spoke with Philippe Simard, co-founder and CEO of SimActive, the Canadian company behind the Correlator3D photogrammetry suite, to explore one of the most pressing challenges in the aerial mapping industry: how to efficiently manage and process massive datasets from large-scale projects, sometimes across multiple operations at once.
Defining “Large” in Photogrammetry
When it comes to photogrammetry, size is not just a question of geography. “A large photogrammetry project is primarily defined by its total data volume,” said Simard. “For drone-based initiatives, this could involve tens of thousands of 60-megapixel photos, resulting in terabytes of raw data.” While geographic scope contributes, he noted that it’s the data size that truly dictates the level of complexity and resource demand.
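As a quick sanity check on those figures, a back-of-the-envelope calculation (the per-image file sizes are assumptions, not SimActive numbers) shows how tens of thousands of 60-megapixel frames reach the terabyte range:

```python
# Rough raw-data estimate for a large drone project. Per-image sizes are
# assumptions: a 60 MP frame runs roughly 20-40 MB as JPEG and ~80-120 MB
# in raw or losslessly compressed formats.
num_images = 20_000   # "tens of thousands" of photos
mb_per_jpeg = 30      # assumed average compressed size
mb_per_raw = 120      # assumed average raw size

print(f"JPEG: ~{num_images * mb_per_jpeg / 1e6:.1f} TB")  # ~0.6 TB
print(f"Raw:  ~{num_images * mb_per_raw / 1e6:.1f} TB")   # ~2.4 TB
```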
Where Bottlenecks Begin
According to Simard, the biggest slowdowns typically appear during the processing stage, not in flight operations. “Data acquisition is generally linear, involving multiple missions over days to cover large areas,” he explained. “The real challenge is handling massive datasets—just the transfers alone can become a bottleneck if not managed efficiently.”
A common mistake, he said, is trying to process everything in one go. “Users often attempt to process all data in a single batch using software not optimized for scale, leading to exponentially longer processing times and crashes,” said Simard. He added that many teams also misjudge hardware needs, investing heavily in high-end systems without addressing core software inefficiencies.
Scalable Solutions and Smart Workflows
For teams working on large or concurrent projects, Simard advises starting with software designed to handle massive workloads. “Our Correlator3D suite handles vast datasets on standard hardware,” he said. The key, he explained, is to divide projects into manageable tiles. “Breaking a project into tiled subparts accelerates processing and simplifies quality checks, ensuring faster turnaround while maintaining accuracy.”
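Correlator3D performs this tiling internally, but the underlying idea is simple to sketch: partition the project extent into a grid of tiles that can be processed and quality-checked independently. A minimal illustration (the coordinates and tile size below are arbitrary, not tied to SimActive's implementation):

```python
def tile_extent(min_x, min_y, max_x, max_y, tile_size):
    """Split a project bounding box (in ground units, e.g. meters)
    into square tiles that can be processed independently."""
    tiles = []
    y = min_y
    while y < max_y:
        x = min_x
        while x < max_x:
            tiles.append((x, y, min(x + tile_size, max_x),
                          min(y + tile_size, max_y)))
            x += tile_size
        y += tile_size
    return tiles

# A 10 km x 10 km project cut into 1 km tiles: 100 subparts, each small
# enough to process and QC on its own before merging results.
print(len(tile_extent(0, 0, 10_000, 10_000, 1_000)))  # 100
```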
Hardware remains a key factor in scaling. “Storage speed is often the bottleneck in data-intensive tasks,” said Simard. “We advise using PCI Express NVMe SSDs for source imagery, since each photo may be accessed multiple times.” For larger setups, he recommends pairing SSDs for input imagery with HDDs for bulk storage, or with high-speed networks such as 10-Gigabit Ethernet, to balance performance and cost.
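A rough calculation shows why storage tiers matter at this scale. Streaming a 2 TB image set just once takes very different amounts of time depending on the medium (the throughput figures are typical ballpark values, not benchmarks):

```python
# Approximate time for one full pass over 2 TB of source imagery at
# typical sustained throughputs (assumed ballpark figures).
dataset_gb = 2_000
throughput = {                # GB/s, sustained
    "SATA HDD": 0.2,
    "SATA SSD": 0.5,
    "10 GbE network": 1.0,    # ~10 Gb/s is ~1.25 GB/s theoretical
    "NVMe SSD (PCIe)": 3.0,
}
for medium, gbps in throughput.items():
    print(f"{medium:16s} ~{dataset_gb / gbps / 60:4.0f} min per pass")
# HDD ~167 min vs. NVMe ~11 min -- and since each photo may be read
# several times during processing, the gap compounds quickly.
```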
Scaling with Distributed and Cloud Processing
SimActive’s approach to scaling is built on distributed processing—maximizing the resources teams already have. “Correlator3D automatically detects available PCs and distributes project chunks, achieving near-linear speedups,” Simard said. “For instance, five machines can reduce processing time by about 4.6x.” This approach allows organizations to increase throughput without heavy investment in new hardware.
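That 4.6x figure on five machines is consistent with a workload that is almost entirely parallelizable. Modeled with Amdahl's law (an illustrative model, not SimActive's published analysis), a parallel fraction of roughly 98 percent reproduces the number:

```python
def amdahl_speedup(parallel_fraction, machines):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / machines)

# A ~98% parallel workload yields roughly the 4.6x Simard cites for five PCs.
print(f"{amdahl_speedup(0.978, 5):.1f}x")   # ~4.6x
print(f"{amdahl_speedup(0.978, 10):.1f}x")  # ~8.3x -- gains taper as serial
                                            # steps (I/O, merging) dominate
```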
Cloud processing, he added, is becoming an increasingly viable option. “Platforms like AWS or Azure allow users to scale computing power on demand,” said Simard. Uploading terabytes of imagery can still be time-consuming, but for teams already delivering results via the cloud, “it integrates seamlessly, turning potential drawbacks into workflow advantages.”
Automation and Quality Control Across Multiple Projects
Automation is another major factor in managing multiple large projects simultaneously. “Automation enables 24/7 operations through scripting that chains processes like aerial triangulation and orthomosaic generation,” Simard said. Correlator3D supports email notifications for remote monitoring, allowing teams to reduce manual work, minimize errors, and handle more projects without proportional staff increases.
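The pattern is straightforward to sketch. The commands below are hypothetical placeholders, not Correlator3D's actual scripting interface; what matters is the shape of the workflow: run each stage in sequence and email the team on completion or failure.

```python
# Illustrative automation pattern: chain processing stages, then email a
# notification. Stage commands are hypothetical placeholders, NOT the
# real Correlator3D CLI.
import smtplib
import subprocess
from email.message import EmailMessage

STAGES = [
    ["process_at", "--project", "site_a.prj"],     # aerial triangulation
    ["process_dsm", "--project", "site_a.prj"],    # surface model
    ["process_ortho", "--project", "site_a.prj"],  # orthomosaic generation
]

def notify(subject, body):
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "pipeline@example.com", "team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com") as server:  # assumed SMTP relay
        server.send_message(msg)

for stage in STAGES:
    if subprocess.run(stage).returncode != 0:
        notify("Pipeline FAILED", f"Stage '{stage[0]}' did not complete.")
        raise SystemExit(1)
notify("Pipeline complete", "All stages finished; outputs ready for QC.")
```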
Still, efficiency means little without consistency. “Teams should establish documented protocols with standardized checks, such as verifying accuracy metrics,” said Simard. Comprehensive training, he added, helps ensure that all team members adhere to uniform quality control practices. Tools within Correlator3D, such as editing and QC features, streamline review processes and reduce the risk of oversight.
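One way to make such a protocol enforceable is to encode the documented tolerances and gate every run against them automatically. A minimal sketch, assuming hypothetical metric names and thresholds:

```python
# Standardized QC gate: compare a run's reported accuracy metrics against
# documented tolerances. Metric names and limits here are illustrative.
TOLERANCES = {
    "horizontal_rmse_m": 0.05,  # e.g., 5 cm horizontal RMSE on checkpoints
    "vertical_rmse_m": 0.10,    # e.g., 10 cm vertical RMSE
}

def qc_gate(metrics):
    """Return human-readable failures; an empty list means the run passes."""
    failures = []
    for name, limit in TOLERANCES.items():
        value = metrics.get(name, float("inf"))  # missing metric = failure
        if value > limit:
            failures.append(f"{name}: {value:.3f} m exceeds {limit:.3f} m")
    return failures

failures = qc_gate({"horizontal_rmse_m": 0.041, "vertical_rmse_m": 0.126})
print("PASS" if not failures else "FAIL: " + "; ".join(failures))
# -> FAIL: vertical_rmse_m: 0.126 m exceeds 0.100 m
```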
Lessons from the Field
A notable example of large-scale photogrammetry in action came after a tornado struck Selma, Alabama, in 2023. The Alabama Department of Transportation captured more than 18,000 drone images to assist in recovery efforts. “Using Correlator3D’s distributed processing, they generated maps and began delivery within 24 hours,” said Simard. “It demonstrated how preparation, scalable software, and modular workflows enable rapid, effective responses—even under emergency conditions.”
The Future of Managing Large Datasets
As drone, satellite, and sensor technology advances, the volume of data generated will continue to grow. “Multi-camera systems are generating immense data volumes,” said Simard. “Project management will rely more on automation and distributed or cloud processing to keep pace.”
The evolution, he believes, will enable teams to deliver increasingly complex datasets quickly and accurately—turning what was once a logistical challenge into a strategic advantage.
Read more:
- SimActive Releases Correlator3D Version 10.4 with Enhanced 3D Model Controls
- SimActive Software Supports Highway Ramp Expansion Through Integrated Lidar and Photogrammetry
- SimActive’s Correlator3D Speeds Up Processing of Large Drone Datasets for SurvTech
Miriam McNabb is the Editor-in-Chief of DRONELIFE and CEO of JobForDrones, a professional drone services marketplace, and a fascinated observer of the emerging drone industry and the regulatory environment for drones. Miriam has penned over 3,000 articles focused on the commercial drone space and is an international speaker and recognized figure in the industry. Miriam has a degree from the University of Chicago and over 20 years of experience in high tech sales and marketing for new technologies.