AI Sucks at Guitar Tones
Mar 10, 2026
Even though I kind of suck at it, guitar is my main hobby. Recently, I bought an audio interface and a pair of studio-quality monitor headphones. If you don’t know anything about guitars: by plugging an electric guitar into an audio interface and connecting the interface to a computer, the direct signal can be manipulated (just like amps and pedals do) and recorded. In the end, you plug your headphones into the output of the audio interface and have a working setup. You can then play and record your guitar using a DAW (Digital Audio Workstation).
My goals when buying the gear were simple:
- A much wider variety of guitar tones using amp simulations.
- Playing in an apartment without annoying my neighbors.
- High-quality recordings.
After fighting with my DAW and audio interface setup for a day, I finally got everything right. Now it was time to discover good tones. It was a rabbit hole, and I got a case of choice paralysis. There were too many options, styles, and settings: dozens of amps to choose from, tons of different pedals, EQ settings, post-FX, and even synthesizers with insane effects that don’t sound like a guitar at all.
Ideally, a craftsman should know his tools. However, I wanted a shortcut; I just wanted those great tones immediately. So I asked LLMs (Gemini Pro and ChatGPT 5.4 Thinking) to give me the “Hotel California” solo tone for my guitar. Confident as always, they started giving me detailed, exact configurations. Then I tried them. They weren’t even close. They almost always hallucinated the settings, telling me to turn knobs that don’t exist. Even when I guided them and provided the correct option layout, they were still completely off the mark. As the conversation went on, they only drifted farther from the sound.
As always, they tacked on “additional tips” I didn’t ask for at the end of each response, and the tips varied from conversation to conversation. For the solo part of one pop song, Gemini first recommended “strict down-pick”, then in another message “strict alternate-pick”.
I know that multi-modal LLMs exist; however, the models I was interacting with weren’t using any modality other than text, so they had no way to hear the resulting sound and compare it against the original. Even if they could, I’m still pessimistic about how they would perform. These models were just using the text data available, and since there are very few guides on the internet covering the specific tones and settings of these songs, they failed. Even for a song as popular as “Hotel California”, AI didn’t do a good job at all.
However, even a remotely skilled musician would get close enough to most tones. Yeah, achieving the exact tone, or a very close one, is usually a near-impossible task and a hill many people die on, but “meh, close enough” is not that hard. The LLM tones were definitely not “close enough” to my ears.
As a software engineer, I see LLMs doing wonders at coding every day. Yes, they can mess up big, complicated project architectures, but hand them the “Hotel California” of coding, Dijkstra’s algorithm or merge sort, and no modern flagship LLM will make a mistake. Since code is universal, deterministic text with tons of material on the internet, this is expected. Like many things in the real world, guitar tone is messy, subjective, and perceptual, and LLMs suck at it for now.
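For the record, this is roughly what I mean by the “Hotel California” of coding: a canonical, exhaustively documented snippet like a textbook merge sort. A minimal sketch in Python (the names and layout here are my own, not any model’s output):

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One of the two lists is exhausted; append the remainder of the other.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


def merge_sort(items):
    """Classic top-down merge sort: split in half, sort each, merge. O(n log n)."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))


print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```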