2025-01-12

kickratt: krlogo new (Default)
With a little modification of the conditional statements and some performance auditioning of the Pure Data patch presented in part 1 of my posts, the results have produced this song (groove): "Island". There was no preconceived idea for this song; it came about from building the Pure Data patch and listening to the results. All of my algorithm patches are structured around a four-piece-band concept: drums, bass, left-hand chords, and right-hand melodies. I'm structuring my algorithms for improvised pop-genre song types, but Pure Data song-design patches can be structured for any genre. In patch design, I strive to make the generated instrument performance sound like a band playing in key. Achieving results that sound like music from different genres takes some tooling. Creating an algorithm that generates a specific type of song is only limited by your understanding of Max, Pure Data, or another MIDI-generating system, and of music.

The drums are set to a fixed-scale (it can be any key) MIDI percussion zone configuration, which helps out greatly when predicting drum patterns. Structuring the inbound generated MIDI drum notes to a fixed scale/zone makes it easy to set up an outbound predicted fixed scale/zone from the AI model. This way every predicted MIDI note will be assigned to a percussion instrument in your MIDI rig; even if the pattern comes out scrambled, you might hear what you are looking for. Drum pattern prediction can produce some zany results. The bass and right-hand piano parts are broken up into two separate, harmonious instrument sequences. The chords are drawn from the set of triads (three-note combinations, around 20 of them) predetermined from the scale (F#). These chord combinations are selected when triggered from the patch during performance, with a harmonic relation to the notes chosen in the bass and melody sequences. There is always room for design improvement when it comes to harmony. A closer look at the harmony between the left and right hands of a piano is examined in my first music video post, "day".
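The triad selection described above can be sketched in plain Python. This is a minimal illustration, not the actual patch logic: it enumerates every three-note combination of the F# major scale and labels the ones that form standard triads (the patch's ~20 chord combinations presumably also include voicings and inversions beyond these basic sets).

```python
from itertools import combinations

# F# major scale as MIDI pitch classes: F#, G#, A#, B, C#, D#, E#(=F)
FS_MAJOR = [6, 8, 10, 11, 1, 3, 5]

# Close-voicing interval shapes (in semitones) for common triad qualities
TRIAD_SHAPES = {(4, 3): "major", (3, 4): "minor", (3, 3): "diminished"}

def classify(pcs):
    """Return the triad quality of a 3-pitch-class set, or None."""
    for root in pcs:
        ivs = sorted((p - root) % 12 for p in pcs)  # [0, a, b] above the root
        shape = (ivs[1], ivs[2] - ivs[1])
        if shape in TRIAD_SHAPES:
            return TRIAD_SHAPES[shape]
    return None

combos = list(combinations(FS_MAJOR, 3))  # all 35 three-note combinations
triads = {c: classify(c) for c in combos if classify(c)}
# The seven diatonic triads of F# major fall out of the 35 combinations.
```

Selecting from a precomputed table like this is what keeps the patch's chord choices harmonically related to the bass and melody: everything is drawn from the same scale.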

For the purpose of presenting how I use AI to create original (no copyright issues) music in this series of posts, "Island" represents the Pure Data generated song that will seed a synthetic dataset for AI prediction; the creative initiative, if you will, for the dataset's theme. It marks the point in the song-composing process where the artist decides whether to start building a dataset based on the algorithm's performance, or to keep working on the algorithm in Pure Data to achieve something different.

A Pure Data designed algorithm (part 1) generated about 10 hours of MIDI data, using five different configurations (conditional statement alterations in the same key and pitch) to produce contrast across the 10 separate hour-long performances. From these 10 hours of MIDI performances, around 150 MIDI files between 4 and 35 KB in size have been edited out, constructing a synthetic dataset almost 5 MB in size. At this point, all of the original MIDI song files have been broken up into individual instrument files: drums, bass, chords, and melody, with each instrument assigned to its own dataset. In total there are 4 datasets of 150 MIDI files. I convert all the MIDI files to text with Music21, and then back to MIDI after prediction. Breaking the instruments into separate MIDI files allows the tracks to be moved around within datasets for different models (GPT and LSTM) and supports the ongoing arranging of the current score in development.
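The MIDI-to-text round trip above is done with Music21 in my workflow; as a library-free sketch of the same idea, here is a toy tokenization. The `pitch:duration` token format is invented for illustration and is not Music21's actual text encoding.

```python
# Toy MIDI<->text round trip. Each note is (midi_pitch, quarter_length),
# encoded as a "pitch:duration" token so a text model can train on it.
def notes_to_text(notes):
    return " ".join(f"{pitch}:{dur}" for pitch, dur in notes)

def text_to_notes(text):
    notes = []
    for token in text.split():
        pitch, dur = token.split(":")
        notes.append((int(pitch), float(dur)))
    return notes

bass_line = [(42, 1.0), (42, 0.5), (45, 0.5), (49, 2.0)]  # an F#-rooted figure
encoded = notes_to_text(bass_line)
decoded = text_to_notes(encoded)
```

The point of the round trip is that whatever the language model emits as text can always be decoded back into playable MIDI, one instrument file at a time.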

The video in this post is a unique one. It is the first prediction for the original "Island" MIDI score, made by a model trained on the synthetic dataset.

TRAINING STATS

number of training files = 150
batch size = 30 files
number of iterations (number of training files/batch) = 5
one epoch = 5 iterations
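The relationship between these numbers can be checked in a few lines. The batching here is schematic (index lists standing in for files); the real training loop depends on the model.

```python
num_files = 150
batch_size = 30

# 150 files / 30 files per batch = 5 iterations to see every file once
iterations_per_epoch = num_files // batch_size

# One epoch walks the whole dataset, one batch of file indices at a time.
batches = [list(range(i, i + batch_size))
           for i in range(0, num_files, batch_size)]
```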

The bass guitar heard in this video is the first instrument predicted, after a single epoch of training. For this audio, the predicted track has been placed back into the original MIDI score. Comparing the bass guitar in the "Island" video of my previous post (part 3) to the bass guitar in this video (part 4) demonstrates a first step in using the LLM to predict a new MIDI part for a single instrument in the group.

Note the interesting artifacts in the bass guitar performance. While these notes are in key, they are reminiscent of the stray (bad) MIDI notes often produced in live MIDI performances. Velocity and note duration can be prediction issues. The predicted outcome will change even more with increased epochs; AI prediction is a very cyclic procedure. When auditioning and recording these MIDI outcomes, little attention is given to the audio, so please excuse the lower-quality instrument sounds used in this video.
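One way to tame such artifacts, if they ever become a problem, is to clamp predicted values into sensible ranges before playback. This is a hypothetical post-processing step, not part of the workflow described in these posts, and the thresholds are assumptions.

```python
def sanitize_note(pitch, velocity, quarter_length):
    """Clamp one predicted note into playable ranges (thresholds assumed)."""
    pitch = max(0, min(127, pitch))              # valid MIDI pitch range
    velocity = max(1, min(127, velocity))        # velocity 0 acts as note-off
    quarter_length = max(0.25, quarter_length)   # floor durations at a 16th
    return pitch, velocity, quarter_length
```

Whether to sanitize at all is a creative choice: the stray notes are part of what makes the predicted performance interesting.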

Also of note are the different drum performances heard in the two videos. Drum pattern prediction differs from note prediction and will be explained in a future post.