Minutes after the polls closed Saturday in a tight Louisiana gubernatorial race, Lori Hendry, a conservative media personality from Florida, posted a message to her nearly 300,000 Twitter followers.

“I don’t care what anyone says,” Hendry wrote. “I think Democrats have cheated in both of these elections.” She was referring to the re-election of Gov. John Bel Edwards in Louisiana, and the election of Andy Beshear as governor of Kentucky the week before. Both men are Democrats.

The tweet didn’t garner much attention at first, but by Monday morning, it began to gain steam. In a matter of hours, the tweet racked up more than 8,000 retweets and 20,000 likes.

It drew the attention of VineSight, a company that tracks social media misinformation, which found that more than half of the accounts retweeting Hendry’s link showed signs of using automation. In other words, they relied on technology to spread the message.

And it’s not the first time VineSight has seen this kind of activity around recent elections, though the Louisiana effort appeared smaller in scale than some previous attempts.

“What we’ve seen in Louisiana is similar to what we saw in Kentucky and Mississippi — a coordinated campaign by bots to push viral disinformation about supposedly rigged governor elections,” Nir Hauser, VineSight’s chief technology officer, said. “It’s likely a preview for what is to come in 2020.”

The accounts VineSight found aren’t entirely automated, Hauser said, instead using a hybrid approach. A human may operate an account, augmented by software that automatically sends tweets in response to certain triggers.

Accounts that blend human and automated activity fall into the general category of “bots,” but are sometimes referred to more specifically as “cyborgs.”

“Though we might think of bots as mindless drones, there is always a puppet master pulling the strings,” Hauser said. “Occasionally, the humans managing the bots will directly tweet from a bot account which gives the impression that the account is authentic. But these same accounts often retweet hundreds of thousands of tweets and demonstrate other bot-like attributes.”

These types of accounts are used by misinformation pushers to hide from Twitter’s bot-detection tools, researcher Renée DiResta said.

“As people and Twitter became more aware of the impact automated accounts were having on getting things trending, dominating share of voice, etc., Twitter made changes to its definition of quality accounts and how they were weighted in trends,” DiResta said. “Partially automated accounts were an attempt to evade those changes.”

Hendry had no evidence for her claim. She wrote that she felt it “strange” a Democrat was elected governor in the two states while the attorneys general and legislatures went Republican. Despite President Donald Trump’s personal intervention, Democratic Gov. John Bel Edwards beat out Republican challenger Eddie Rispone by 40,000 votes. Trump also campaigned for the incumbent Republican governor of Kentucky, Matt Bevin, who was defeated.

Hendry’s tweet wasn’t the only evidence-free claim of election fraud to receive a boost. VineSight’s analysis found several other accounts making similar claims before and after the polls closed.

While Twitter said it did not suspend any accounts for bot activity, it did permanently suspend three accounts flagged by NBC News for breaking the company’s policies against trying to manipulate online conversation through the use of multiple accounts.

One suspended account that went by the handle “GunLovinTrumpGirl” warned that the South was going “communist” and referenced conspiracy theories around the voting process. “They are gunning for Texas next,” it wrote.

An NBC News search of Twitter using Hoaxy, a tool that identifies the spread of claims with bot-like behavior, also found other accounts with signs of automation making similar claims, as well as accounts with no such indicators.

However, no large-scale networks of bots appeared to be involved, researcher Josh Russell told NBC News. “There may be a few individual ones,” he said.

A Twitter spokesperson said the company did not find evidence of automated accounts or bot activity during the Louisiana election. Instead, it saw some low-level attempts to spread misinformation, propelled largely by organic conversation and authentic accounts.

Social media manipulation around the 2016 election sparked action from tech platforms and congressional hearings, but malicious actors continue to try to find new ways to exploit basic disinformation strategies like amplifying baseless claims from real people.

Some of the claims that were boosted in the Kentucky gubernatorial race earlier this month by potential “cyborgs” may have found a sympathetic audience of pundits willing to promote similar views. Screenshots of a viral tweet claiming to have “shredded Republican mail-in ballots” were boosted by what appeared to be accounts using some automation, according to VineSight.

As the conversation grew on Twitter, Bevin, who initially refused to concede after losing by about 5,000 votes, claimed the race suffered from “irregularities,” though he did not cite any evidence.

Graham Brookie, director of the Atlantic Council's Digital Forensic Research Lab, said in an email that the use of cyborg bots points to the continuous evolution of misinformation strategies.

“In the United States since 2016, we’ve seen less blunt-force tactics of spreading disinformation, like large scale botnets, in part due to heightened scrutiny from the platforms, journalists and policy makers,” Brookie said, adding that the methods are more sophisticated, including targeting specific communities. “The ugly fact remains that our most consistent vulnerability from disinformation is public officials or influencers who willingly peddle it and media coverage that amplifies it as news.”