pure data polyrhythmic metronome - part 2
The video demonstrates the heart of the MIDI production. This clock appears in all of my algorithmic performances. It has three branches (pitch / 2, pitch, and pitch x 2) controlled by the central pitch slider. Off each branch hangs an 8-step sequencer, which you can expand to a 16- or 32-step sequencer, and you could add more branches of pitch division. This core clock covers almost all of the tempo possibilities in MIDI song generation. The [/2] expression objects, seen in the video to the right of the [int +1] counter object, can be altered to create any polyrhythm at any speed.
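The clock itself lives in a Pure Data patch, but its logic can be sketched outside Pd. The following Python sketch (function and branch names are mine, not the patch's) models a master pitch period driving three branches at half, base, and double rate, each advancing its own 8-step counter the way an [int]/[+1] pair wrapped at 8 would:

```python
# Sketch of the three-branch polyrhythmic clock (my naming, not the patch's).
# A master "pitch" period drives three branches: pitch x 2 runs at half speed,
# pitch at base speed, pitch / 2 at double speed. Each branch advances its own
# 8-step sequencer counter, wrapping like [int]/[+ 1] with a modulo.

def run_clock(pitch_ms, total_ms, steps=8):
    # Branch periods derived from the central pitch value.
    branches = {"half": pitch_ms * 2, "base": pitch_ms, "double": pitch_ms / 2}
    events = []
    for name, period in branches.items():
        t, step = 0.0, 0
        while t < total_ms:
            events.append((t, name, step))
            step = (step + 1) % steps   # wrap like an 8-step sequencer
            t += period
    events.sort()                       # interleave the branches in time
    return events

for t, branch, step in run_clock(pitch_ms=500, total_ms=2000)[:6]:
    print(f"{t:6.1f} ms  {branch:6s} step {step}")
```

Expanding a sequencer to 16 or 32 steps is just a larger `steps` value; an extra branch of pitch division is one more entry in `branches`.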
When creating synthetic MIDI datasets, controlling the clock to create different MIDI files is the first layer of files for the datasets. If I run the F# algorithm seen in my previous post for an hour, it generates an hour-long MIDI performance file that can be broken up into a number of smaller files. Currently I audition the MIDI files before adding them to a dataset. As the song composition progresses, new MIDI files added to the dataset take the theme in a new direction. To keep predictions unique as the song progresses, I also remove old files from the dataset. I think there is something key here for music dataset development: I can force a unique prediction by removing past data from the dataset in favor of new data, data that was itself created through prediction. It is a forced-learning approach born of the cyclic process, and a start to dataset management for the LLM.
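The audition-add-retire cycle described above can be sketched as a simple rolling dataset. This is a hypothetical sketch of my own (the class, file names, and capacity are illustrative, not an actual tool from the workflow): only auditioned files enter, and the oldest files are retired as the composition moves into a new theme, biasing predictions toward the newest material.

```python
# Hypothetical sketch of the dataset management described above.
# Auditioned files are appended; once the dataset exceeds its capacity,
# the oldest files are retired in favor of the new theme's material.

class MidiDataset:
    def __init__(self, max_files):
        self.max_files = max_files
        self.files = []              # oldest first

    def add(self, path, auditioned=True):
        if not auditioned:           # files are auditioned before entering
            return
        self.files.append(path)
        while len(self.files) > self.max_files:
            self.files.pop(0)        # retire past data in favor of new data

ds = MidiDataset(max_files=3)
for f in ["theme_a_1.mid", "theme_a_2.mid", "theme_b_1.mid", "theme_b_2.mid"]:
    ds.add(f)
print(ds.files)                      # the oldest theme_a file has been retired
```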
Running the algorithm at different tempos creates layers of file types for the dataset. Pitch-altered generations of the F# algorithm, added back into the dataset, will greatly increase the creativity of the LLM's predictions. When generating MIDI for datasets, I often run the algorithms at slower speeds to reduce the number of stray notes they produce.
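As a rough illustration of these layers, one generated take can be turned into tempo- and pitch-altered variants before being added back into the dataset. The note representation, scaling factor, and transposition interval below are my own illustrative choices, not values from the patch:

```python
# Sketch of producing tempo- and pitch-altered dataset layers from one take.
# Notes are (onset_ms, midi_note, velocity) tuples; the factor and interval
# are illustrative, not taken from the Pure Data patch.

def scale_tempo(notes, factor):
    # factor > 1 slows the performance down (fewer stray notes per second)
    return [(onset * factor, note, vel) for onset, note, vel in notes]

def transpose(notes, semitones):
    # clamp to the valid MIDI note range 0..127
    return [(onset, min(127, max(0, note + semitones)), vel)
            for onset, note, vel in notes]

take = [(0, 66, 100), (250, 61, 90), (500, 66, 95)]   # an F#-centered phrase
layers = [take, scale_tempo(take, 2.0), transpose(take, -12)]
```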
Running the algorithm after altering the conditional objects is another layer. Altering the conditional objects will dramatically increase or decrease randomness, which you could equate with creativity, and doing so can add a humanized element to your dataset.
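A rough Python analogue of what adjusting a conditional object does (the threshold values here are my own, not the patch's): a condition decides how often a value passes through, so loosening it raises randomness and tightening it makes the output more predictable.

```python
import random

# Rough analogue of altering a conditional object in the patch: a threshold
# decides how often a value passes the gate. A loose threshold lets more
# (and less predictable) material through; a tight one constrains it.
# The threshold values are illustrative, not taken from the Pure Data patch.

def conditional_gate(values, threshold, rng):
    passed = []
    for v in values:
        if rng.random() < threshold:  # the "conditional object" of the sketch
            passed.append(v)
    return passed

stream = list(range(16))
loose = conditional_gate(stream, threshold=0.9, rng=random.Random(7))
tight = conditional_gate(stream, threshold=0.2, rng=random.Random(7))
```

With the same seed, everything the tight gate passes the loose gate passes too; the difference between the two settings is the layer of variation added to the dataset.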
Final edits of these dataset files range from 2 to 5 minutes in length.
This clock performance is running on a Ryzen 5950X with 64 GB of RAM.
Now this is generating.