
The History of 3D Reality Mapping with 360 Cameras

4D digital twin showing actual and expected states of work in progress

Read any article about the future of project monitoring and site surveys in construction, and you're sure to see some buzz about emerging technology that generates point clouds and mesh models from 360 videos. The result? Jobsite footage that's transformed into a measurable, 3D model of construction at any stage.

But the truth is, this technology has existed for years. This article will detail how the wants and needs of several early Reconstruct customers led to our 2015 and 2016 development of photogrammetry technology that modeled footage from any device, including smartphones, 360 cameras, drones, and laser scanners. The resultant algorithms have fueled remote quality assurance and quality control, visual progress monitoring, facility condition assessments, digital twins for construction, and more for years.

Key Takeaways

  • Reconstruct developed the reality mapping technology required to 3D model job sites using footage from 360 cameras over six years ago. 
  • This breakthrough empowered construction stakeholders to accurately and remotely monitor and document ongoing indoor and outdoor construction.
  • Unlike other reality mapping engines, Reconstruct allows stakeholders to blend footage from any device (including smartphone, 360 camera, drone, and laser scanner) to create one digital twin for construction that reflects as-built conditions along the entire construction timeline. 

Eight years ago, drones and laser scanners were the dominant ways to capture reality data

When Reconstruct first started, laser scanning was the dominant form of reality mapping for indoor construction documentation. At that time, drones had just made their way onto construction sites across the United States, primarily to improve aerial (exterior) photography, already an established practice for high-profile projects that could afford the cost.

We thought that if you could take plenty of overlapping pictures with a drone over a project’s timeline and feed them to the right engine, you could use the imagery to generate measurable walkthrough experiences and 3D point cloud models without frequently using a laser scanner. 

To test our hypothesis, we tailored our existing reality mapping technology to model buildings and structures from drone pictures. It worked. So long as the pictures had the correct overlap, we could easily generate models. With that, reality mapping the exterior of our customers' assets was solved.

Above are early examples of 3D point clouds and measurable images produced by Reconstruct’s photogrammetry engine on the Sacramento Kings’ Golden 1 Center stadium project under construction by Turner Construction between 2014 and 2017.

 

Above, from the same Turner Construction project, an early example of 4D integrated information models (4D digital twins) showing actual and expected states of work in progress, highlighting at-risk locations, and communicating who does what work in what location for more effective project controls.

 

Above are additional early examples of 3D point clouds and measurable images produced at scale on the cloud by Reconstruct’s photogrammetry engine across many projects in the US and overseas.

But then, Reconstruct tailored its reality mapping engines to read camera images and 360 videos.

Finding a new solution for reality mapping a structure's interior was more complicated. Laser scanning, the dominant 3D reality mapping method at the time, offers high-resolution results but is time-consuming and relies on expert operators.

In search of a solution that met the realities of many construction projects' resource constraints, we figured: Why not just take a ton of overlapping photos using a DSLR camera? So long as the right imagery existed, we could fine-tune our photogrammetry engine to turn the data into a model.

And so we did. Early iterations focused on using point-and-shoot or DSLR cameras to take many overlapping photos, then feeding the results into Reconstruct’s photogrammetry engine to generate 3D point clouds and mesh models.
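The core idea behind this pipeline is multi-view triangulation: the same physical point seen in two overlapping photos, together with the cameras' known poses, pins down that point's 3D position. Below is a minimal, self-contained numpy sketch of linear (DLT) triangulation; the camera intrinsics, poses, and point are illustrative assumptions for the demo, not details of Reconstruct's actual engine:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates
    of the same point as seen by each camera."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two hypothetical overlapping views: identity pose and a 1-unit baseline along x.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])  # intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

X_true = np.array([0.5, 0.2, 4.0])  # a point on the structure being mapped
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # recovers X_true up to floating-point error
```

Production photogrammetry repeats this for thousands of automatically matched feature points across many overlapping images, and jointly refines the camera poses, which is why sufficient overlap between photos is the critical capture requirement.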

By late 2015, a photographer working with one of our customers was concerned about the time spent walking large sites, particularly given the limited field of view of point-and-shoot cameras. We worked with him to place a camera on a mini robot that would cruise around the Sacramento Kings stadium, snapping pictures. 

A year later, we began working with a general contractor in the Midwest that was tasked with building a massive energy-distribution facility on an automaker's campus. We initially suggested using drone technology for reality mapping; after all, the structure was large enough to enable indoor drone capture.

But the contractor and its customer didn’t want to use drones for many reasons, including safety and security. Fortunately, at the same time, 360 cameras were beginning to be utilized for reality capture, albeit in a limited way, such as capturing images to pin manually against floor plans.

Another reason we were interested in exploring 360 video technology for reality mapping at the time? How rapidly and accurately it could capture the reality of an indoor environment. While drones can be programmed to operate autonomously for exterior capture, they may create safety concerns, and they capture only a restricted field of view indoors. But with a 360 camera and just one quick capture walk, an organization can see what's in front of the camera operator, what’s behind them, what’s to their left and right, and what’s above and below them.

This means that virtually anybody on a job site could capture six different viewpoints in a fraction of the time and cost of other reality capture methods. This is because a 360 camera:

  • Usually costs only a few hundred dollars.
  • Can be operated by regular field team members, slashing the need for (and expense of) professional site surveyors.
  • Offers incredibly fast 360 capture—virtually eliminating disruption on the job.

For the third time in a few years, we adapted our photogrammetry engine to generate models from 360 video footage. By developing an updated algorithm that consumed this data, Reconstruct began generating fully immersive street walkthroughs, point clouds, and interior floor plans of many construction sites without drones or laser scanners. 

Above are early examples of 3D point clouds and measurable images produced by Reconstruct’s photogrammetry engine on the Athlete Village Project at the University of Minnesota’s campus. (Project: Mortenson Construction; 2015 to 2017)

Above, a camera-equipped rover solution was used to visually document project interiors to monitor work in progress and generate quality control reports. This camera-equipped rover was owned and operated by Thomas Bartlett, CEO of ImageInFlight, and used during the Sacramento Kings Stadium project from 2014 to 2017. 

To the left, a project manager reviews a 3D point cloud produced by 360 video footage of a site walkthrough. To the right, a measurable 360 image of an energy distribution building site offers a street-view walkthrough for coordination, progress monitoring, and quality control purposes.

To the left, a measurable 360 image of an industrial building site, offering a street-view walkthrough for coordination, progress monitoring, and quality control purposes. To the right is a 3D point cloud model of the same site and a floor plan generated from the same 360 video footage from 2016 to 2018.

Related: The Future of AI Project Monitoring in Construction

At present, only Reconstruct can blend any data from any device into measurable, timestamped street-view walkthroughs and 3D models

Today, Reconstruct remains the only provider of 3D indoor reality mapping with 360 cameras that allows customers to utilize virtually any device or combination of instruments, feed captured 360 or flat imagery or video to the photogrammetry engine, then sit back as our technology transforms the data into measurable 2D floor plans and 3D models in space and over time.

Reconstruct is also the only software in the market that offers an engine and an online viewer to process and visualize all forms of reality capture data, from camera-equipped drones to hardhat-mounted 360 cameras to smartphone videos, as well as Matterport and laser scan data. 

So which tool, exactly, is best for your reality capture needs? That depends. Fortunately, our recent experiment with Oracle has done the hard work of determining which tool is right for which task. Read the results now, or reach out to us to determine which reality capture strategy, set of tools, and level of expertise you need to maximize the ROI of reality mapping on your job site.

Whatever your budget and resources, and whatever reality capture tool you decide to use, we can support you.

From left to right: 3D point clouds produced using Reconstruct’s reality mapping and photogrammetry engine from a manually operated drone with a perspective camera; an autonomous drone with a perspective camera; and a 360-degree video camera mounted on a hardhat.

From left to right: 3D point clouds produced using Reconstruct’s reality mapping and photogrammetry engine from a smartphone video camera; and point clouds from Matterport and a mobile laser scanner. 

From left to right: Point clouds from a ground robot with a mounted laser scanner; a stationary laser scanner.

On the left: a measurable street view walkthrough. Users can click on the donuts or use the navigation map in the lower left corner to teleport themselves into different locations and review the most updated states of their project sites. On the right: the same measurable street-view walkthrough shown on the left, but with BIM overlay so the user can review “what is there” vs. “what should be there” for quality control, progress monitoring, or design-construction interfaces.