Keigo Oyamada’s Proactive vs. Manabu Deto’s Cautious Approach to AI in Music Production
Just as differences emerge in your use of synthesizers and stage setups, there seems to be a subtle contrast in how each of you approaches technology.
Deto: That’s true.
Interestingly, the younger Deto leans more analog, while Oyamada is somewhat more forward-looking.
Oyamada: That might be the case.
To be direct, what are your thoughts on introducing AI technology into music production?
Deto: For example, using AI to isolate specific parts from stereo mixes—like what The Beatles’ recent reissues have done with demixing—that seems like a useful tool.
But generating everything from chord progressions to lyrics and arrangements just from a single prompt? Honestly, I feel that’s a step too far for me.
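The demixing Deto mentions, pulling individual stems back out of a finished stereo mix, is something anyone can try with open-source source-separation models. Below is a minimal sketch using Demucs, assuming the demucs package is installed and that its v4 demucs.api module is available; “track.wav” and the output names are placeholders (the Beatles reissues used their own purpose-built demixing system, not this tool):

```python
# pip install demucs   (sketch assumes the v4 demucs.api module)
from demucs.api import Separator, save_audio

# Load a pretrained hybrid-transformer separation model
separator = Separator(model="htdemucs")

# Split a finished stereo mix into drums / bass / other / vocals
origin, stems = separator.separate_audio_file("track.wav")

for name, audio in stems.items():
    # Write each recovered stem to its own file
    save_audio(audio, f"{name}.wav", samplerate=separator.samplerate)
```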
Oyamada: It’s a bit awkward to say this after what I just heard… but I’m actually really hooked on AI right now [laughs]. I’m personally experimenting with it a lot.
Deto: How are you using it?
Oyamada: I generate countless ideas, then use them as raw material: remixing, re-playing parts myself, reprogramming everything… something like that.
I see, so you treat what the AI produces as just one more raw material to work with…
Oyamada: Exactly. Otherwise, it wouldn’t really involve any creativity on my part. The remixing function is also fascinating—you can control how much of the original source to keep by percentage, change genres, and so on. You can endlessly play with your existing stereo mixes, even tweak your past recordings. The technology has advanced a lot in the past few months.
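In the remix tools themselves, the percentage Oyamada mentions is applied as a conditioning strength inside the model, not as a literal audio mix. Still, a plain wet/dry blend makes the idea of a “keep this much of the source” knob concrete. A minimal sketch under that analogy, assuming two time-aligned files with the same sample rate and channel count; blend_takes and the file names are illustrative, not from the interview:

```python
import soundfile as sf  # pip install soundfile


def blend_takes(original_path: str, generated_path: str,
                keep: float = 0.4, out_path: str = "blend.wav") -> None:
    """Keep `keep` of the original signal and fill the rest from the new take.

    A literal crossfade, used here only to illustrate the percentage knob;
    generative remix tools apply the percentage inside the model instead.
    """
    a, sr_a = sf.read(original_path)
    b, sr_b = sf.read(generated_path)
    assert sr_a == sr_b, "resample first if the sample rates differ"
    n = min(len(a), len(b))           # trim both takes to a common length
    mix = keep * a[:n] + (1.0 - keep) * b[:n]
    sf.write(out_path, mix, sr_a)


blend_takes("original_mix.wav", "ai_remix.wav", keep=0.4)
```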
Deto: When used that way, it’s clear that you’re the main creative force, so I can understand why it’s interesting.
Oyamada: But maybe someday AI will be able to pull off even the “creative” moves flawlessly, like deliberately avoiding what an AI would normally do and doing things differently.

Some say that the mechanism behind human “creativity” itself shares a lot with how AI works: large-scale learning, then output based on what was learned.
Oyamada: Yeah, exactly. The process might not be so different after all.
In text generation, for instance, it seems like AI is starting to simulate things like “subverting context” or producing “misfires” that feel intentional.
Oyamada: In music, I haven’t really seen anything truly shocking yet, but who knows what’s coming. With visuals, you sometimes get these weird glitches or errors that are strangely compelling.
But even that kind of noise might eventually be something AI can reproduce with surgical precision from the start. And on the flip side, a few years from now, we might be saying things like, “Man, that glitchy error vibe from around 2025 was perfect,” and there’ll be a whole “vintage AI” aesthetic [laughs].
An evolution of that feeling of nostalgia for glitches.
Oyamada: Exactly. So for now, I feel like we’re still in a transitional phase. For music generation especially, audio quality is still a hurdle. Even remix-capable AIs tend to produce output where the frequency bands pile up on top of one another, so a human still has to clean things up.
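The cleanup Oyamada describes is essentially a masking problem: two stems occupying the same frequency band blur each other. One common manual fix is complementary EQ, filtering each stem so they stop fighting over the same range. A minimal sketch of that idea, assuming scipy and soundfile; the function name, the 150 Hz default, and the paths are illustrative choices, not anything the interview specifies:

```python
import soundfile as sf                   # pip install scipy soundfile
from scipy.signal import butter, sosfiltfilt


def carve_stem(stem_path: str, out_path: str,
               cutoff_hz: float = 150.0, mode: str = "highpass") -> None:
    """Filter one stem so it stops overlapping another stem's band.

    Example: high-pass everything except the bass at ~150 Hz so a generated
    pad no longer piles onto the low end. Cutoffs are taste, not rules.
    """
    audio, sr = sf.read(stem_path)
    sos = butter(4, cutoff_hz, btype=mode, fs=sr, output="sos")
    # Zero-phase filtering keeps the stems time-aligned with each other
    cleaned = sosfiltfilt(sos, audio, axis=0)
    sf.write(out_path, cleaned, sr)


carve_stem("ai_pad.wav", "ai_pad_carved.wav", cutoff_hz=150.0)
```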
Do you think there’s potential to use AI in live performance?
Oyamada: Hmm, maybe, but generation takes time, so you’d have to wait during the set.
Deto: That’s true.
Oyamada: It could be interesting to use it for improvisation—playing off constantly evolving AI-generated sounds.
Deto: I had no idea you were experimenting with AI that actively.

Oyamada: These days, whenever musicians get together, this kind of topic comes up a lot—but most people are actually pretty hesitant about it. Among the people around me, the only one who genuinely seems to be enjoying it is Towa Tei. I think deep down, using AI still comes with a kind of guilt. And honestly, I feel that too, at least to some extent.