Things That AI Tools Often Get Wrong

 

Wow, ChatGPT is powerful, but it is not flawless. Imagine if your spreadsheet were right only 98% of the time. You have to use AI tools with that in mind: check their work. Here are some things they often get wrong, in our experience as much as 60 to 90% of the time!


In experimenting with ChatGPT and similar tools, we've discovered a few things that AI tools do well.

Most of them can do a decent job discussing general concepts, procedural steps, and overviews of the law.

But when you ask them to supply citations, more often than not, they simply make things up!

 

The land of make-believe case law and statutes

This technology is not fully baked. While it may be getting better, it should NOT be used to research case law and statutes unless you are ready to treat everything it reports with a healthy dose of skepticism, no matter how confident and detailed its answers sound.

ALL of the AI tools we tested failed to report case law and statutes accurately.

These tools seem so eager to please that they will invent the answer you were hoping to find, complete with a made-up citation to support it.

This makes the tools almost unusable at this stage for any serious deep legal research.  

Statutory limits that are adjusted for inflation

Another thing these tools often get wrong is recognizing when statutory limits have been adjusted for inflation.

In our tests, we had to repeatedly remind the AI bots to go back and check whether a statutory limit was still current or had been adjusted for inflation. Often there was no mention of indexed amounts until we specifically asked them to check.





Jurisdictional relevance: US

The content of this article pertains to all US states and counties.