2025-01-11

kickratt: KaOzBrD 2
pure data polyrhythmic metronome - part 2

The video demonstrates the heart of the MIDI production. This clock is at the core of all of my algorithmic performances. The clock has three branches: pitch ÷ 2, pitch, and pitch × 2, controlled by the central pitch slider. Off of each branch hangs an 8-step sequencer, which you can expand to 16, 32, or more steps, and you could add further branches of pitch division. This core clock covers almost all of the possibilities for tempo in MIDI song generation. The [/2] division objects, seen in the video to the right of the [int +1] counter object, can be altered to create any polyrhythm at any speed.
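The Pure Data patch itself is visual, but the clock logic above can be sketched in a few lines of Python. This is a minimal model, not the patch: the step count, BPM, and branch names are illustrative assumptions, and the `% steps` wrap stands in for the [int +1] counter feeding each 8-step sequencer.

```python
def clock_branches(bpm, beats, steps=8):
    """Yield sorted (time_sec, branch, step) tick events for three tempo
    branches (bpm/2, bpm, bpm*2), each driving a `steps`-step sequencer."""
    events = []
    for mult, name in [(0.5, "pitch/2"), (1.0, "pitch"), (2.0, "pitch*2")]:
        interval = 60.0 / (bpm * mult)   # seconds between ticks on this branch
        ticks = int(beats * mult)        # ticks this branch fires over `beats`
        for i in range(ticks):
            events.append((i * interval, name, i % steps))  # wrap like [int +1] mod
    return sorted(events)

# Print the first few ticks of a 120 BPM clock over 4 beats.
for t, branch, step in clock_branches(bpm=120, beats=4)[:6]:
    print(f"{t:5.2f}s  {branch:8s} step {step}")
```

Doubling the multiplier list is how you would add more branches of pitch division, exactly as described above.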

When creating synthetic MIDI datasets, controlling the clock to produce different MIDI files is the first layer of files for the dataset. If I run the F# algorithm seen in my previous post for an hour, it generates an hour-long MIDI performance file that can be broken up into a number of smaller files. Currently I audition the MIDI files before I add them to a dataset. As the song composition progresses, new MIDI files added to the dataset follow a new theme direction. To get unique predictions as the song progresses, I even remove old files from the dataset. I think there is something key here for music dataset development: I can force a unique prediction by removing past data from the dataset in favor of the new data, data that was itself created through prediction. It is a forced-learning approach that comes out of the cyclic process, and a start on dataset management for the LLM.
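The remove-old-in-favor-of-new idea above can be sketched as a rolling window over the dataset's file list. This is a hedged illustration, not the author's tooling; the `max_files` capacity is an arbitrary assumption.

```python
from collections import deque

class RollingDataset:
    """Keep only the newest `max_files` MIDI files; older ones retire
    automatically, steering predictions toward the current theme."""

    def __init__(self, max_files=50):
        self.files = deque(maxlen=max_files)  # deque drops the oldest when full

    def add(self, midi_path):
        # Capture the file about to be retired, if the window is full.
        retired = self.files[0] if len(self.files) == self.files.maxlen else None
        self.files.append(midi_path)
        return retired

ds = RollingDataset(max_files=3)
for name in ["a.mid", "b.mid", "c.mid", "d.mid"]:
    ds.add(name)
print(list(ds.files))  # "a.mid" has been retired from the training set
```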

Running the algorithm at different tempos creates layers of file types for the dataset. Pitch-altered generations of the F# algorithm, added back into the dataset, greatly increase the creativity of the LLM's predictions. When generating MIDI for datasets, I often run the algorithms at slower speeds to reduce the number of stray notes they produce.

Running the algorithm after altering the conditional objects is another layer. Altering the conditional objects dramatically increases or decreases randomness, which you could equate with creativity, and can add a humanized element to your dataset.

Final edits of these dataset files range in length from 2 to 5 minutes.

This clock performance is running on a Ryzen 9 5950X with 64 GB of RAM.

Now this is generating.

kickratt: KaOzBrD 2
Ethical AI - a call to create original "synthetic" datasets.

With regard to music prediction, what is ethical AI?

The issue lies in the datasets used. The ethical concerns surrounding the technology underpinning large language models (LLMs) themselves are often overstated. LLMs derive their predictive capabilities from the syntax, semantics, and ontologies embedded within human-generated corpora; in doing so, however, they inherit the inaccuracies and biases present in the training data. The nature of the learning data within a given dataset ultimately determines whether the outputs produced by the LLM are ethical. If an organization develops an AI system and generates modified versions of existing musical works for personal enjoyment, the practice is generally regarded as ethical. Conversely, if the objective is to distribute or profit from these alternative renditions of established music, such actions would be considered unethical.

A studio might develop two ethical AI systems. The first uses a dataset of all the music the artist has produced over the years; this approach could forecast new musical ideas or compositions based on earlier work. The second is an LLM system trained on a dataset built entirely from newly created input data. I am going to demonstrate a system of this second kind that can be applied to the search for new music. As the AI's predictions advance and the original dataset and predictions are re-incorporated into the dataset, the second idea ultimately becomes the first.

There are many ways to generate MIDI input data for AI datasets, including sequencers, drum machines, MAX, Pure Data, Audiomulch, noise, voltages, and scientific data. The potential for discovering new music genres is limitless when exploring the predicted MIDI outputs derived from these original sources. It is clear that large language models (LLMs) can play a significant role in music exploration and can facilitate the creation of new compositions from original datasets. The music industry would benefit from more artists developing their own AI models and utilizing unique datasets to advance this emerging genre. I employ Pure Data to generate my MIDI input, focusing on structuring algorithms that produce MIDI data sculpted to a genre type and in a given musical key (A#/Bb). I stress the importance of artists building their own datasets and moving away from reliance on historical and commercial data to produce music.
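Sculpting generated MIDI to a key can be as simple as snapping each note down to the nearest scale tone. A minimal sketch, assuming a Bb major scale for illustration (the source only names the key, not the mode or method):

```python
# Pitch classes of Bb major: Bb C D Eb F G A
BB_MAJOR = {10, 0, 2, 3, 5, 7, 9}

def quantize_to_key(note, scale=BB_MAJOR):
    """Snap a MIDI note number down to the nearest pitch class in `scale`,
    so stray algorithmic notes land inside the chosen key."""
    while note % 12 not in scale:
        note -= 1
    return note

# C stays, C# snaps to C, E snaps to Eb, F# snaps to F.
print([quantize_to_key(n) for n in [60, 61, 64, 66]])
```

A variant could snap up instead of down, or to whichever scale tone is closer; down-only keeps the sketch short and deterministic.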

Technologies establish cyclical procedures for us to follow. Engaging in the iterative refinement of a process is essential to achieving the desired outcome. Is there work to be done? Indeed: determining the specific type of input data your dataset requires, and devising methods for generating that data, is a time-intensive endeavor. If the objective is to create something original, it becomes, like band practice, a labor of love.

(dataset procedures that are suggested online)