AI usage: Best-practices - How to be smarter than your AI (sometimes!)

By: Tom Cloyd; reviewed: 2025-08-19:0827 Pacific Time (USA)

[under development]

NOTE: Bookmark this location. It cannot be reached from any of the regular pages of this website. If you click through to some other page not listed below, you can return here only by using your browser’s back button or the direct link.

Page contents…

About articles I publish that have been written by AI

I have been using AI (artificial intelligence) for quite some time. I have also been educating myself on the general topic of AI so that I can use it well.

When we ask an AI model to answer a question, it can give an incorrect answer, or even a nonsense answer. (But, then, so can a person!)

I have worked mainly with three of the major models (I discuss this more below), and I’ve generally gotten excellent results. I am particularly happy with the Anthropic “Claude” model. I now have a paid subscription to this model, so that I can use its advanced features.

I often use AI models to research a topic for me, as a supplement to regular Internet search. When I do this I ask for a detailed answer to a question, accompanied by cited sources that can be checked.

If I think the resulting research report is likely to be useful to others, I may publish it on one of my two websites, but always with proper author attribution, and only after I have assured myself that the article can be trusted. That means that I have checked the main assertions of the article against the cited sources and confirmed that the assertions correctly represent the sources.

This is something that any thinking person should do with any article, regardless of how it was written. Something isn’t to be trusted just because we understand what it’s saying! Unfortunately, most written material we encounter does NOT have cited sources. (Consider newspapers, TV news broadcasts, and virtually all magazine articles and website pages.) This means that what is being asserted cannot be checked. Most people don’t concern themselves with this, and that is one reason we have such a problem in our times with misinformation.

At times, I will publish in a private location AI-written articles which I am still checking, usually because my early indications are that they can be trusted. When I do, I will always link to this statement, here on this page.

About this article - IMPORTANT

This is a very early version. Most of the planned content is not yet here (as of Monday 2025-09-01).

How I currently use AI

AI LLMs preferred:

All three of the models listed below produce rich output of great potential value when pushed into their “deep research” modes. For important questions, I use all three, save the output in markdown text files, and study each carefully - both for the ideas and for the sources.

This is significant work, but it is clearly faster than old-style prowling of the library stacks, although in many cases that prowling is not completely avoided.

In summary, AI proves to be a fast and hard-working research assistant whose work must in every case be carefully reviewed and verified. (But that would be true of any research assistant.)

I access these models most often, in this order:

  • Claude - I have consistently found reports from this LLM to be richly valuable.
  • ChatGPT - A nice feature of this LLM is that it responds to an initial prompt with three clarifying questions that increase the specificity of the user’s request. This not only reduces the work it must do but also increases the likelihood that the user will be satisfied with the resulting report.
  • Google Gemini - This LLM’s deep research reports are invariably thorough and absorbing.

☀   ☀   ☀