
The Future of AI Project Monitoring in Construction

3D point cloud generated from drone photogrammetry software

As strides in artificial intelligence continue to reshape virtually every industry, it’s no surprise that more and more construction, engineering, and inspection stakeholders are curious about what impact AI may have on their industries. In this article, we’ll zero in on three technologies with the most potential to impact construction, then touch on what’s preventing AI from taking over the most tedious tasks on your project monitoring to-do list.

Key Takeaways

  • Three leading AI technologies are likely to impact construction: large language models (LLMs) such as ChatGPT, neural radiance fields (NeRFs) that transform a few images into robust 3D models, and recognition engines that can intelligently identify objects.
  • New developments in recognition are especially promising and bring the construction industry that much closer to precise, machine-powered project monitoring. 
  • One day, when these three technologies are properly integrated, AI may be able to review the digital twin of a building or structure to identify objects, fluidly answer complex questions about progress, write detailed field notes, and so much more. 
  • Because many of these AI technologies are relatively new, further refining is necessary before these tools can work together and officially take the reins on construction project monitoring.

The three AI technologies most likely to reshape the construction industry

There are three main AI technologies experts believe will impact construction-related work most significantly in the coming years. They are large language models, neural radiance fields, and recognition.

1. Large Language Models (LLMs)

LLMs—recently made famous by ChatGPT—are powerful engines that rapidly synthesize data and answer questions. Tools like these are also quite adept at combining structured and unstructured information. For example, in construction, LLM engines can quickly generate summaries of uploaded field reports.
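
As a rough illustration, that kind of field-report summary takes only a few lines of code. The sketch below assumes the OpenAI Python client and a hypothetical plain-text report file; it is a generic example, not Reconstruct’s implementation.

```python
# Minimal sketch: summarizing an uploaded field report with an LLM.
# Assumes the OpenAI Python client; "field_report.txt" is a hypothetical file.
from openai import OpenAI

client = OpenAI()

with open("field_report.txt") as f:
    report = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarize construction field reports for project managers."},
        {"role": "user", "content": report},
    ],
)

print(response.choices[0].message.content)
```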

But LLMs can do much more than automatically summarize manually gathered notes. These engines have real potential to integrate many different systems, platforms, and data sources. Recently, for example, LLMs have been used to generate code that calls APIs or other programming interfaces in order to tap into other models and libraries.
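
One common pattern for this kind of integration is tool (function) calling, where the model returns a structured request that your own software routes to a real API. The sketch below assumes the OpenAI Python client; get_work_in_place is a hypothetical function exposed by a progress-tracking backend, not an existing product API.

```python
# Minimal sketch: letting an LLM decide when to call a project-data API.
# "get_work_in_place" is a hypothetical backend function, shown for illustration only.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_work_in_place",
        "description": "Return quantities of installed work for a given floor and trade.",
        "parameters": {
            "type": "object",
            "properties": {
                "floor": {"type": "integer"},
                "trade": {"type": "string", "description": "e.g. drywall, concrete, MEP"},
            },
            "required": ["floor", "trade"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "How much drywall is installed on floor 3?"}],
    tools=tools,
)

# If the model chose to call the tool, it returns structured arguments instead of prose,
# which our code can forward to the real progress-tracking API.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
```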

In other words, the construction industry could eventually reap benefits from LLMs far beyond text summarization and generation. In time, this type of technology could perform truly complex tasks, such as automatically generating detailed field reports from scratch by reviewing text and visual data, including visual data gathered by NeRF technology.

2. Neural Radiance Fields (NeRF)

NeRF technology generates 3D (and even 4D) models of an object, structure, or scene from a limited set of 2D images. What’s more, NeRF can align captures taken at different times from the exact same perspective, showing users how an aspect or area of a site has changed over time. In some instances, geo-referenced reality capture can be performed using hardware as simple as a smartphone, 360° camera, or drone.
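
At its core, a NeRF is a small neural network that maps a 3D position and viewing direction to a color and density, which are then composited along camera rays to render new views. The sketch below shows that mapping in PyTorch as a simplified illustration of the general NeRF idea; a real pipeline adds positional encoding, ray sampling, and volume rendering, and this is not any particular product’s model.

```python
# Simplified sketch of the core NeRF mapping: (position, view direction) -> (color, density).
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),   # xyz position + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                  # RGB color + volume density
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])          # colors constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])           # non-negative density
        return rgb, sigma

# Query the field at sampled points along a camera ray (random demo inputs).
points = torch.rand(64, 3)
dirs = torch.rand(64, 3)
rgb, sigma = TinyNeRF()(points, dirs)
print(rgb.shape, sigma.shape)  # torch.Size([64, 3]) torch.Size([64, 1])
```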

In construction, this reality-mapping technology allows stakeholders to turn back the clock and see through walls that were sealed six months—or even six years—ago. Thanks to such technology, an engineer can use an as-built digital twin to pinpoint the layout of a series of pipes without the cost, delay, or disruption of demolition. And what’s more, NeRF technology offers measurability, allowing stakeholders to remotely visualize a job site and inspect and measure it with total confidence.

NeRF technology, like LLMs, also has the potential to integrate with other machine learning tools, including recognition.

Related: How to Choose the Right Reality Capture Tool, Process & Team

3. Recognition

Finally, strides in recognition tools are helping machines identify objects, images, and so much more. These technologies can rapidly recognize floors, ceilings, windows, fire extinguishers… virtually anything that appears in an image. And when NeRF and recognition work together, AI can begin recognizing whatever’s been visualized in a 3D model of an airport runway, a record-breaking skyscraper, or even a quick-serve restaurant.

While recognition is powerful in and of itself, it becomes infinitely more helpful when we can use it alongside other machine learning models. 

Related: Remote Construction Monitoring: The Key to Building Sweden's Tallest Office Tower

A new advance in recognition technology could supercharge automated construction project monitoring 

Experts now believe that a new recognition model from Meta called Segment Anything could bring even more change to the construction industry. That’s because this technology makes it easier than ever before to deal with images in terms of objects, not pixels. In the past, recognition tools labeled individual pixels and then clustered similarly labeled pixels together into objects. But now, Segment Anything instantly identifies each discrete object in an image.
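
For readers who want to experiment, Meta has released Segment Anything as an open-source Python package. The sketch below assumes that package (segment-anything), a downloaded model checkpoint, and OpenCV for loading an image; it simply generates masks for every object the model finds in a site photo.

```python
# Minimal sketch: generating object masks for a site photo with Segment Anything.
# Assumes Meta's open-source segment-anything package and a downloaded checkpoint file.
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Load the model (checkpoint path and model type depend on which weights you downloaded).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Read a job-site photo and convert BGR -> RGB as the model expects.
image = cv2.cvtColor(cv2.imread("site_photo.jpg"), cv2.COLOR_BGR2RGB)

# Each mask is a dict with a binary segmentation, bounding box, and area.
masks = mask_generator.generate(image)
print(f"Found {len(masks)} objects; largest covers {max(m['area'] for m in masks)} pixels")
```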

While this advance may seem minor, it has big implications for the construction industry. That’s because if recognition tools can truly segment and identify objects, they become infinitely more powerful when combined with LLMs and NeRF models. Suddenly, these three technologies can work together, offering stakeholders measurability from NeRF, object clarity from recognition, and an LLM that can tap into all that data to count objects and quantify work in progress.

In other words, these technologies could become a key component of AI-powered visual progress monitoring, visual production management, visual quality assurance and quality control, and physical asset inspection. Using this new recognition technology alongside LLMs and NeRF models, we can not only identify fire extinguishers within a “reality mapped” structure but also quantify how many extinguishers there are and where they’ve been laid out. We can also begin to ask questions such as, “What is the total linear feet of drywall installed on the first floor so far?”
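
As a toy illustration of that kind of question-answering, the sketch below assumes we already have a list of recognized objects exported from a reality model, each tagged with a class, floor, and measured length. These fields and values are hypothetical, not a real export format; the point is that once objects are identified, counting extinguishers or totaling drywall becomes a simple aggregation an LLM could be asked to perform or to generate.

```python
# Toy sketch: answering quantity questions from hypothetical recognition output.
# Each record stands in for an object detected in a reality-mapped model;
# the fields (object_class, floor, length_ft) are illustrative only.
from collections import Counter

detected_objects = [
    {"object_class": "fire_extinguisher", "floor": 1, "length_ft": None},
    {"object_class": "fire_extinguisher", "floor": 2, "length_ft": None},
    {"object_class": "drywall_panel", "floor": 1, "length_ft": 12.0},
    {"object_class": "drywall_panel", "floor": 1, "length_ft": 8.5},
]

# "How many fire extinguishers are there, and on which floors?"
extinguisher_floors = Counter(
    o["floor"] for o in detected_objects if o["object_class"] == "fire_extinguisher"
)
print(dict(extinguisher_floors))  # {1: 1, 2: 1}

# "What is the total linear feet of drywall installed on the first floor so far?"
drywall_ft = sum(
    o["length_ft"] for o in detected_objects
    if o["object_class"] == "drywall_panel" and o["floor"] == 1
)
print(f"{drywall_ft} linear feet of drywall on floor 1")  # 20.5 linear feet
```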

As with all technology, innovation and progress will take time

In a perfect world, these three AI tools—when combined with project schedules—could help synthesize data and share findings in a more natural, human way, perhaps answering construction risk management questions such as, “Which team is most behind schedule?” And it’s true that one day, LLMs may also be able to tap into recognition technology to review NeRF visualizations and answer questions such as, “Which of my parking lots most needs repairs?”

Another dream scenario? Retail and multi-site construction stakeholders could ask an LLM, “Show me all my storefront signage,” and then receive a report with pictures of the recognized objects identified from NeRF-generated models.

So why can’t a machine do this quite yet? Why can’t a construction-focused LLM start handling every last detail of your project monitoring today? One of the biggest challenges to integrating AI into everyday construction workflows is that every person wants different information. And beyond that, many of these technologies are only a couple of years old. 

To create a frustration-free tool that monitors construction progress and responds to every last question just the way we want it answered, these models and technologies need to be further developed (both separately and together) to interpret, synthesize, and communicate information about our projects in a truly valuable way.

About Reconstruct

Reconstruct brings the job site directly to stakeholders by generating precise, as-built 2D floor plans and 3D models of a building or structure. Within Reconstruct’s Visual Command Center, even the most remote of stakeholders can monitor progress, perform a facility condition assessment, overlay design and BIM against what’s been built, use 4D BIM to visualize the schedule, or tap into Reconstruct Project Snapshots to rewind construction to an earlier date or time.