You Could Make a Killing Solving the A.I. Context Problem
There are a lot of reasons why the current hype around artificial intelligence is overblown.
Don’t get me wrong, the recent advancements in generative A.I. are impressive, game-changing, almost magical. But those advancements are advancements only in a specific niche of A.I., and they’re most definitely not magical.
You can still make a boatload of money with A.I., but it’s not snap-your-fingers money, it’s more like try-and-fail-a-dozen-times money.
As someone who has been diving deep into natural language generation since 2010, I’ve developed a unique insight into how far generative A.I. and NLG have come over the last decade and, more importantly, what the mainstream implications of the current wave of A.I. advancements are. Those advancements are basically just advancements in GPTs, which are basically just automated content producers across text, audio, video, code, and so on.
I was part of the original team at Automated Insights, and I’m the co-inventor of the first NLG engine that went to market — an engine that still produces content like Yahoo Fantasy Football matchup recaps and Associated Press quarterly earnings reports.
This recent push in A.I. technology focuses squarely on the same automated content we started platforming in 2010. In the beginning, I constantly had to fight the misperception that creating content out of data was just a form of Mad-Libs-style word association. NLG is so much more than that.
But something NLG is most definitely not is predictive.
NLG Creates Content by Determining Context
Creating the algorithms to write the articles, while not easy, definitely wasn’t the most difficult part of what I did. Let’s use sports as an example, specifically a football game. All I had to do was put together words, phrases, parts of speech, sentences, paragraphs, and so on, based on what happened during the game.
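To make that concrete, here’s a deliberately simplified, hypothetical sketch of what “putting together words and phrases based on what happened during the game” can look like. The function name, data fields, and phrase choices are all invented for illustration; a real NLG engine handles grammar, variation, and narrative structure far beyond this.

```python
# Hypothetical template-based recap generator for a football game.
# All names and fields are illustrative, not from any real engine.

def recap(game: dict) -> str:
    margin = game["home_score"] - game["away_score"]
    # Ties are ignored here for brevity.
    if margin > 0:
        winner, loser = game["home"], game["away"]
    else:
        winner, loser = game["away"], game["home"]

    # Choose a verb phrase based on what happened in the game.
    if abs(margin) >= 21:
        verb = "blew out"
    elif abs(margin) >= 10:
        verb = "beat"
    else:
        verb = "edged"

    high = max(game["home_score"], game["away_score"])
    low = min(game["home_score"], game["away_score"])
    return f"{winner} {verb} {loser}, {high}-{low}."

print(recap({"home": "Panthers", "away": "Falcons",
             "home_score": 31, "away_score": 10}))
# → Panthers blew out Falcons, 31-10.
```

Selecting the right phrase from the data is the easy half; as the next section argues, deciding which facts matter at all is the harder problem.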
This was the part that seemed the most magical, but the much more difficult undertaking was creating algorithms to actually figure out…