There’s a lot of controversy over AI/Large Language Models (LLMs). It’s still a very new technology, capable of some truly impressive things while also making mistakes so inhuman as to seem disqualifying. I’ve written a bit about how some of the forecasting around AI’s economic impact is wishful at best and hucksterism at worst, but there really are impressive capabilities on display.
My background is not in programming. I’ve been working with data and analysis for about 14 years now, and my skills are mostly self-taught through platforms like DataCamp and Udemy. LLMs have taken my limited ability and brought it to a whole different level.
In the span of less than two days using Claude AI’s professional model, I was able to take a set of Excel files I built from Congressional records and deploy a comparison dashboard to the internet. There are other subscriptions I’m using to support this, but the code driving it was generated entirely through my back-and-forth with the LLM. It was not always right, and I frequently had to dive into the code to identify errors in approach, but the output is pretty impressive.
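To give a flavor of the kind of code involved, here’s a minimal sketch of the Excel-to-comparison step using pandas. The data, column names, and member labels are all hypothetical stand-ins (the real workflow would use `pd.read_excel` on the actual files); this is an illustration of the pattern, not the dashboard’s actual code.

```python
import pandas as pd

# Hypothetical stand-ins for two Excel files built from Congressional
# records. In the real workflow these would come from pd.read_excel(...).
session_117 = pd.DataFrame(
    {"member": ["Rep. A", "Rep. B", "Rep. C"], "votes_cast": [980, 875, 990]}
)
session_118 = pd.DataFrame(
    {"member": ["Rep. A", "Rep. B", "Rep. C"], "votes_cast": [1010, 900, 940]}
)

# Merge the two sessions on the member column and compute the change --
# the kind of side-by-side comparison table a dashboard would render.
comparison = session_117.merge(
    session_118, on="member", suffixes=("_117", "_118")
)
comparison["change"] = (
    comparison["votes_cast_118"] - comparison["votes_cast_117"]
)
print(comparison)
```

A dashboard layer (Streamlit, Dash, or similar) would then just render `comparison` as an interactive table or chart.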
Previous efforts at similar dashboards have taken me weeks; this one took less than two full days. I don’t think I could have done it with no knowledge of the relevant coding structures, but it’s not far off. Anyway, check it out and let me know what you think!