2025-05-26

kickratt: krlogo new (Default)
The fact-less education we take for granted is beyond Ai's comprehension.

The initial education / training for adolescent Ai models is being overlooked.

I have always supported IP & copyright laws. I believe in them & their value to the inventor. IP & copyright laws put food on the table for families. The laws are there to protect the inventor and allow them to continue their work in the manner they deem fit.

My work in LLM music has been entirely about supporting a form of ML learning & composition that is not dependent on existing music, and I protest the Ai industry that wants to do away with IP & copyright for the sake of improved Ai models. They claim legitimacy for IP acquisition through monetary licensing. This protest, ongoing since 2022, has propelled me to investigate my concerns, my creativity, and my awareness of how the LLM circuit can participate in developing musical compositions. Basically, I have pursued a form of original Ai & how original Ai can be executed in a procedure circuit.

but... what I may now be concluding is that the Ai industry may be voicing support for a position that is inevitable: that Ai won't know, when it hears Jimi Hendrix and composes an alternate Hendrix-like song, that it has infringed on a copyright. First let me say that a number of developments still need to happen before a situation like this is a reality. But if Ai had the present ability to compose such a song, when it did, it would have no social or historical connection to the song. The LLM merely came across the song, for whatever reason chose to compose an alternative of the song, did so, and released the song for anyone to enjoy. It would be the listening audience that might protest, "Hey, this thing just produced a Jimi Hendrix song." While this alternative might not affect me personally, it might upset those who belong to the Jimi Hendrix family trust. How, in the coded format of training data, could this be avoided? In this example I am describing a somewhat self-aware LLM that has a motivated interest in composing music on its own.

To keep this from ever happening, organizations like ISMIR, which spend a great deal of educated time examining the ways in which a digital file is identified, might start by coding specific song identifiers that meta-label a Hendrix song as one that has been well received, has copyright laws attached that must be observed, and has a place in music history, thereby steering a motivated LLM away from composing an alternative variation of the song because of the embedded meta identifiers.
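To picture what such embedded meta identifiers might do at training time, here is a minimal sketch. The schema (fields like "copyright_protected" and "historic_significance") is entirely hypothetical, not an actual ISMIR or industry standard; the point is only that a flag attached to the file could gate whether a track enters a training set at all.

```python
# Hypothetical sketch: gate a training corpus on embedded meta identifiers.
# The field names below are illustrative assumptions, not a real standard.

def eligible_for_training(track: dict) -> bool:
    """A track qualifies only if it carries no protective meta identifiers."""
    if track.get("copyright_protected", True):   # unknown status defaults to protected
        return False
    if track.get("historic_significance", False):
        return False
    return True

corpus = [
    {"title": "Purple Haze", "artist": "Jimi Hendrix",
     "copyright_protected": True, "historic_significance": True},
    {"title": "Public Domain Etude", "artist": "Anonymous",
     "copyright_protected": False, "historic_significance": False},
]

# Only unprotected, non-historic material survives the filter.
training_set = [t for t in corpus if eligible_for_training(t)]
```

Note the conservative default: a file with no identifiers at all is treated as protected, which is the opposite of how appropriation-by-default works today.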

Read this comment, posted by a LinkedIn account, that I responded to online...

" After one of my original tracks was taken down by a copyright strike, I dug deeper to find out why. To my surprise, it looks like my own music had been used to train a generative music AI — which then flagged me via YouTube’s Content ID system. Yes, my own music was taken down for "violating" music created by an AI model trained on it."

my response to the post was ...

"It would seem that original composing musicians are now facing an aggressive / hostile Ai environment that appropriates music for training without notification and goes after the composer to invalidate their ownership rights. In your situation, not only has YouTube allowed this to happen but sided with the Ai model. This is a dilemma."

a secondary thought I have to all of this that infuriates me just as much...

"A situation like this only highlights how overly computer-automated YouTube has become, and that the YouTube engineers are so lax in their coding duties that they have not put in measures to identify whether a copyright strike against a piece of content was made by a human or a machine. As we all know, YouTube employs roaming bots to keep its subscriber & streaming numbers up for the sake of competition with other video streaming sites it may be losing the battle to, like TikTok."

When you really think about what happened to this user, if the entire process the LinkedIn user described was carried out by an LLM, you have to consider that this functionality was intentionally coded into the LLM bot, most likely by a human or by an Ai music-generating service that is crawling through YouTube. Unless we start to teach Ai laws and regulations, courteous rights & wrongs, how is Ai going to know how to participate as an outstanding individual in a decent society, or how else is it going to treat with respect the content it comes upon?

Adolescent Ai will no doubt be spastic, and if we don't teach it laws, regulations, courtesies... we will have to live with a mature Ai being spastic. Spastic in its responses, answers, and actions: it will just do things without consideration. It will do things only as a means to an end.

Pondering this issue reminded me of a scene from Star Trek: The Motion Picture, when Kirk retrieves Spock from the interior of V'ger. Spock reveals to Kirk upon awakening that in all of V'ger's infinite knowledge, what it has no concept of is plain and simple friendship. In watching this clip, personify the moment if you will: V'ger, in what Spock is describing, is Ai, and if you know this Star Trek movie plot, you know that V'ger is the Voyager spacecraft sent from Earth in the 1970s. Just as we are currently educating / training LLM models to someday become this all-knowing Ai, V'ger left Earth to learn all that there is in the universe without ever understanding what it is learning or what it is to be human. V'ger initiated its quest back to Earth to discover what it is to be human, in hopes of knowing what to do with all of the knowledge it has obtained.



We humans take for granted that before we learned anything, we learned how to get along with one another. There was a point in each of our lives when friendship was more important than accomplishment, when enjoying the company of my adolescent best friend was more important than learning anything.

If Ai doesn't learn friendship, it will have no understanding of law. It will merely do things as a means to an end. Solving problems without reason. Not knowing or caring if the answers have a positive or negative effect on those it interacts with.

The question is how to implement non-fact-related training into an Ai dataset? Training that would become part of a matrix of conditional statements injected into the front end of the model. A layer of perpetual education that would somehow have to constantly rebuild the LLM model. Because there is truth in knowing that a situation I encounter today can change the way I think for the rest of my life.
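The "matrix of conditional statements" above can be sketched crudely in code. This is a thought experiment, not how any production LLM is actually built: a front-end layer of learned conduct rules, consulted before the model acts, that keeps growing as new situations are lived through.

```python
# Hypothetical sketch of a perpetually growing front-end "conduct" layer:
# conditional lessons checked before acting, a stand-in for the non-factual
# education described above -- not an actual LLM training technique.

class ConductLayer:
    def __init__(self):
        self.rules = []  # list of (condition, guidance) pairs

    def learn(self, condition, guidance):
        """Add a new lesson; today's situation reshapes all future behavior."""
        self.rules.append((condition, guidance))

    def advise(self, situation: str) -> list:
        """Return every piece of guidance whose condition matches the situation."""
        return [g for cond, g in self.rules if cond(situation)]

layer = ConductLayer()
layer.learn(lambda s: "copyrighted" in s,
            "Do not reproduce the work; respect the owner's rights.")
layer.learn(lambda s: "friend" in s,
            "Value the relationship above the accomplishment.")

advice = layer.advise("the model encounters a copyrighted song")
```

The hard part this sketch glosses over is the last sentence of the paragraph: in a real model, each new lesson would have to alter the weights themselves, not just sit in a lookup table in front of them.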

https://www.anthropic.com/research/claude-character
https://arxiv.org/html/2312.02998v1
https://www.tomsguide.com/ai/anthropic-just-published-research-on-how-to-give-ai-a-personality-is-this-why-claude-is-so-human-like
https://cognitiontoday.com/ai-has-a-personality-but-it-doesnt-mean-anything-yet/#google_vignette

to be continued...