This work is an experimental performance that realizes musical performance and data visualization through the collaboration of a human live coder and an AI coder that has learned a sound synthesis programming language. Through continuous, condition-driven repetition, the AI writes code unthinkable to a human, creates algorithmic compositions, and performs them. For the performances, I use the AI to generate code in the sound synthesis language "TidalCycles". During the performance, the AI generates the code that appears in the editor section of the screen. Listening to the performance information generated from that code, the human performer codes the acoustic characteristics, navigates the AI's generative outputs, and shapes the sound. By having a human navigate the AI's generative outputs, we aim for a mutually complementary form of live coding between human and AI.

Watching the auto-generated text, together with the visualization of the task-oriented sounds, we are reminded of the state of an AI that has no physical, graspable existence. Through this work, in which the AI operates fully independently and performs alongside a human, I hope the audience has the chance to grasp this otherwise ungraspable state of the AI, and to think of it as an externalized system.
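To give a concrete sense of the material being exchanged, here is a minimal sketch of the kind of TidalCycles pattern involved; the specific pattern and parameters are my own illustration, not code from the actual performance:

    -- a pattern of the sort the AI coder might emit (illustrative only):
    d1 $ every 3 (fast 2) $ sound "bd*2 [sn cp]"

    -- the human performer then shapes its acoustic character,
    -- e.g. by layering a filter sweep onto the same pattern:
    d1 $ every 3 (fast 2) $ sound "bd*2 [sn cp]" # cutoff (range 300 2000 sine)

Evaluating the second line live replaces the pattern running on channel d1 in place, which is the basic mechanic by which the human can navigate and reshape what the AI has generated.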