Are we tired of talking about AI yet? The marketing hype is undeniably here and seems to have staying power.
The reality is we have been talking about this stuff for years. In 2015 I helped create a computer vision suite for major farm co-ops to automatically identify crop disease in the field; in 2019 I hosted a panel about AI at Shadow Summit; in 2020 I wrote an article for the ConstructionTech Review about AI in AEC; and in 2021 we invested in Aren, which uses machine learning and computer vision to automate and improve bridge inspections. And after all, AI is not a “new” concept. Researchers have been working on it since the 1950s, when Alan Turing published his famous paper asking “can machines think?”
But this latest buzz is palpable, and for good reason. While I stand by my supposition that we are a long way from the true potential of AI, it does feel like we are shifting into a new era where AI will be present and visible in our day-to-day business and personal lives.
Why is that? I liken where we are today to the early days of web development. When I was creating web apps in the early 2000s, choosing between desktop and web was a genuinely difficult decision. The universe of libraries, frameworks, tooling, knowledge, and community for web programming barely existed. Having a fully-fledged “web-based” application (or “ASP,” as we called it then) was rare and offered a brag-worthy competitive advantage over desktop software competitors.
Fast forward to today: frameworks such as Rails, .NET, Angular, and React, a sea of open-source libraries and projects, cheap infrastructure providers such as AWS and Azure, and a vast knowledge base make developing web software significantly easier than it was “back then.” What was proudly touted as a competitive advantage is now table stakes.
The same is becoming true of AI. When I was working on the previously mentioned computer vision suite for agriculture, we only had a few tools like TensorFlow to pick from and had to undertake significant model training (and build the tooling to train the model) on our own. Even relatively simple use cases required non-trivial R&D efforts and pushed commercially available hardware to its limits.
Now, thanks in large part to massive investment in the space from the likes of OpenAI, Meta, Microsoft, and Google, what used to be difficult is becoming easier. Just as incumbent software providers eventually adapted from desktop apps to web apps, they will soon be able to integrate AI features much more easily into their products via APIs, SDKs, and open-source tooling, with implementation time measured in days or weeks rather than years.
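To make the “days, not years” point concrete, here is a minimal sketch of what that kind of integration can look like today: roughly one HTTPS call to a hosted model. It assumes OpenAI’s chat completions endpoint; the model name and the AEC-flavored prompt are illustrative assumptions, and the specifics will vary by vendor, but the level of effort is about this small.

```python
# A minimal sketch of "integrating AI via an API": one HTTPS call to a hosted model.
# The endpoint shape follows OpenAI's chat completions API; the model name and
# prompt below are illustrative placeholders, not recommendations.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumes an API key is already configured

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",  # placeholder model choice
        "messages": [
            {"role": "system", "content": "Summarize construction RFIs in plain language."},
            {"role": "user", "content": "RFI #142: Clarify rebar spacing on grid line B."},
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Swap in a different vendor’s endpoint or an open-source model behind the same kind of interface and the shape of the work barely changes, which is exactly why this capability is commoditizing.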
For startups, the net result is simple. If your entire product is built around a specific AI use case, and that use case can be integrated into an incumbent’s platform in a few sprints thanks to the likes of OpenAI or others, your whole value proposition is going to become a feature faster than you can read the Llama 2 research paper.
My prediction is that we will see a consolidation of many AI point-solution companies (including many that are getting heaps of venture funding) as the ability to create undifferentiated AI features becomes a commodity. This is especially true in AEC, where the incumbents benefit from the slow-moving nature of the industry and from lock-in driven by fragmentation. The long-term winners will be those that can couple their AI features into a broader, more valuable platform, or whose AI is truly differentiated from the commercially available or open-source models of tomorrow.