Don’t Count On It!

Enough about Artificial Intelligence already. I promised myself I wouldn’t write about AI again. Then a friend shared a recent experience. She planned to drive her granddaughter from Denver, Colorado to Buffalo, New York and wanted to take in some of the sights along the way. After learning about ChatGPT at the local library, she asked for a route and roadside attractions along I-80. ChatGPT obliged with stops to “provide a mix of natural beauty, historical sites, and cultural experiences, making your journey along I-80 from Denver to Buffalo diverse and enjoyable.” Sounds wonderful, right?

Don’t get me wrong. I think the Bonneville Salt Flats, Salt Lake City’s Temple Square, Wyoming Territorial Prison, and the California Trail Interpretive Center in Elko, Nevada are fascinating and educational. The only problem: all these sights would require a 1,000-mile detour! My favorite attraction was the world’s largest Cheeto in Algona, Iowa, a mere four-hour excursion off I-80. Other attractions didn’t exist at all, such as the world’s largest fork in Iowa. On the bright side, ChatGPT didn’t route my friend through Outer Mongolia.

AI developers acknowledge that the software will hallucinate—produce incorrect or fabricated information. So why do these hallucinations matter to us as writers? Real facts are important when writing our books. While it seems obvious that non-fiction must be as accurate as possible, fiction readers crave more than entertainment alone. They want to come away from our books having learned about an era, a career, a technology, or a culture they weren’t familiar with. Fans of police procedurals or historical novels are quick to point out that a certain type of pistol holds six rounds, not nine, or that buttonholes weren’t widely used prior to the Renaissance.

My friend isn’t alone in getting bad advice from ChatGPT. Ask the lawyers who were sanctioned for submitting AI-generated briefs citing nonexistent cases. Or the scientific journal that issued a retraction for an article filled with nonsense illustrations.

My advice: if you use AI, don’t count on the results without independent verification. That’s the safer route my friend took.

6 replies
  1. Lois Winston says:

    Great advice, Brooke! Many years ago, I railed against Wikipedia when it first became a thing. Anyone could post information, and too many authors were relying solely on Wikipedia for their research, much of which turned out to be wrong. Wikipedia has come a long way, and it’s much more accurate now, but still not perfect. The same is true for Google Maps. It’s rife with inaccuracies, and it’s easy to tell when an author has relied on it as her only research for an area she’s never traveled to. AI is in its infancy, and we’d all do well to stay away from it until the growing pains are worked out.

  2. Debra H. Goldstein says:

    So much for the “direct” route. Perhaps better word parameters were needed to avoid miles and miles of detours. I’m with the camp that says use AI within its limits, but verify!

  3. Gianetta Murray says:

    We were taught in high school to always get a minimum of two sources of information. All the schools in the county held an annual trivia hunt where you had to provide two sources for every answer. This is a basic skill that we seem to have lost in the age of information, when we need it most to deal with all the misinformation swirling around us. It all starts with education. It always has.

  4. Donnell Ann Bell says:

    Two sources is a good rule of thumb for most. That was true in journalism, as far as crediting sources goes. Now, with anonymous sources the norm, I dislike that thinking. Such a wonderful post, Brooke. I want to write my own books and research from experts and trusted sources. Have I gotten things wrong in fiction? I have. All these companies racing to be ahead of the curve be damned. AI should come with a warning: information not fact-checked or certifiable. USE AT YOUR OWN RISK.
