An Evening of Fun, Testing Peltarion’s “Author Style Predictor” AI

To kick off the New Year I tried a whimsical experiment: “stress testing” a web application that claims to predict which famous historical author you write like. Released for public use by Peltarion, an AI technology company based in Sweden, the tool is built around a custom neural network “trained” on publicly available digital books from Project Gutenberg. Peltarion claims that with the tool anyone can accurately find connections between their own writing style and a famous author’s from as little as eight words of input. (They provide a detailed description of how they put the demo together at https://peltarion.com/knowledge-center/tutorials/author-style-predictor, and I touch on key technical details later in this post.)
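
The linked tutorial walks through their actual setup; purely to give a feel for the shape of the task (a short passage of text in, a single author label out), here is a minimal sketch of a text-to-author classifier. The toy excerpts, the three author labels, and the layer choices below are my own illustration, not Peltarion’s architecture.

```python
# Toy text-to-author classifier: NOT Peltarion's model, just the general shape
# of the task. Real training data would be thousands of passages drawn from
# Project Gutenberg books rather than three famous opening lines.
import tensorflow as tf

texts = tf.constant([
    "It is a truth universally acknowledged that a single man in possession",
    "Call me Ishmael some years ago never mind how long precisely",
    "It was the best of times it was the worst of times it was the age",
])
authors = ["Austen", "Melville", "Dickens"]
labels = tf.constant([0, 1, 2])

# Turn raw strings into fixed-length sequences of word indices.
vectorize = tf.keras.layers.TextVectorization(output_mode="int", output_sequence_length=16)
vectorize.adapt(texts)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(vectorize.get_vocabulary()), output_dim=32),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(len(authors), activation="softmax"),  # one score per author
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(vectorize(texts), labels, epochs=20, verbose=0)

# The demo then reports only the single best-scoring author for whatever you type.
probs = model(vectorize(tf.constant(["eight words is supposedly all it takes"])))[0]
print("You write exactly like", authors[int(tf.argmax(probs))])
```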

This demonstration is one of Peltarion’s public showcases of the company’s AI capabilities, and it is advertised as a final product, not a “beta” or “in-development” tool.

Bold-font claims and garbage sample data.

Like all modern so-called “deep learning” algorithms, the one here is a veritable black box: the tool provides no insight into why it claims you write like a given author. The application doesn’t even attach a confidence score to its result, and instead boldly proclaims at the top of the results page that “You write exactly like [author’s name]”.

"ad jdjkl n w[p;orjklnmf nlk;n m jn;lki nfjkdnfddddd laa PHHHHHHHHHHHHHH OWNDPNSP" -Benjamin Franklin
It says “exactly like” so it must be true.
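
For context, a classifier of this kind typically assigns a probability to every candidate author (via a softmax layer or similar), and the single name on the results page is just the highest-scoring one. A toy illustration with made-up numbers shows how much that hides:

```python
# Illustration only: made-up raw scores over a handful of candidate authors.
import numpy as np

authors = ["Franklin", "Austen", "Twain", "Dickens"]
logits = np.array([1.4, 1.1, 0.9, 0.2])

# A softmax turns raw scores into probabilities that sum to 1.
probs = np.exp(logits) / np.exp(logits).sum()
for name, p in zip(authors, probs):
    print(f"{name:10s} {p:.0%}")   # Franklin ~38%, Austen ~28%, Twain ~23%, Dickens ~11%

# What the demo actually shows: only the top name, with no probability attached.
print("You write exactly like", authors[int(np.argmax(probs))])
```

A 38% “Franklin” and a 99% “Franklin” read exactly the same on that results page.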

Still, we can get a sense of how well (or, maybe, how poorly) an AI performs by feeding the application some original sentences, tweaking them slightly, and seeing whether the result changes. This is a crude way to probe how a neural network is doing its work, but with a black box it’s done out of necessity.

AI researchers have been trying in recent years to get their algorithms to provide some kind of qualitative description of their internal logic, but so far a solution has been elusive. The most creative approach I have read about involved stacking AIs on top of other AIs and training one AI to explain the other’s decision-making (source), but I have heard little serious discussion of this technique in recent years. While an AI may be able to tell you that a shoe is a shoe or that a dog is a dog with greater than 50% accuracy in some cases, it is inherently incapable of telling you on its own *why* it determined one thing was a shoe while another thing was not, a well-known and potentially unsolvable defect in the technology.

So, to test this tool, I invented a completely nonsensical sentence and then, as my experiment, checked whether making subtle modifications to that sentence changed the algorithm’s result. Even though the critical question of “How or why did the tool make its determination?” has no direct answer, we can still tease out something useful this way.
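
In code-sketch form, the procedure looks roughly like this. The `predict_author` function is a hypothetical stand-in for pasting each sentence into the web demo by hand and writing down the author it reports (the demo has no public API that I used), and the base sentence and tweaks below are examples of my own rather than the exact ones from my test.

```python
# Sketch of the perturbation test. `predict_author` is a hypothetical stand-in
# for manually pasting a sentence into the web demo and recording the author
# it reports; it is not a real Peltarion API.
def predict_author(sentence: str) -> str:
    return "<author reported by the demo>"  # fill in by hand from the demo

# One nonsense base sentence plus deliberately tiny tweaks, one change apiece,
# so any flip in the reported author can be pinned on that single change.
base = "The violet teapot argued politely with seven invisible staircases yesterday."
variants = [
    base,
    base.replace("violet", "purple"),   # swap one word for a near-synonym
    base.replace("seven", "eight"),     # change one number
    base.lower(),                       # change capitalization only
    base.rstrip(".") + "!",             # change final punctuation only
]

for sentence in variants:
    print(f"{predict_author(sentence):30s} <- {sentence}")
```

If trivial tweaks like these flip the reported author, that would suggest the model is reacting to surface features of the text rather than anything one could reasonably call “style.”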
