I created a small system to dynamically generate the kinds of artifacts you used to hear on bad MP3s or streaming services. I have been writing about this in my dissertation (specifically about the brilliant Lossy plugin from Goodhertz). One of the basic things I am trying to demonstrate in my writing/research is how different renderings (simulations) of the same technological artifact can produce different aesthetic outcomes.

We tend to treat all simulations as effectively the same, but different engineers/coders/artists will have different experiences with technologies, different coding practices, and different goals for technical/aesthetic accuracy. I think the formal aspects of media technologies become emotionally important to us, and that the divergences from the original in a simulation tell us something about the relationship between the person who made it and the technology they are simulating. I think of it as not so different from how different painters will paint the same subject in incredibly different ways.

This works better on pop music, where there is a clear center image (usually the vocalist), but copyright law prevents me from demonstrating that, so I’m substituting a loop I made.
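For a sense of the kind of processing involved, here is a minimal Python sketch of one crude way to approximate low-bitrate codec artifacts: per-frame spectral quantization plus a loudness gate, which together produce the "spectral holes" and swirly musical noise familiar from bad MP3s. This is an illustrative assumption on my part, not the actual system described above and not Goodhertz's algorithm; the function name and parameters are invented for the example.

```python
import numpy as np

def lossy_artifact(signal, frame=1024, keep_bits=3, floor_db=-40.0):
    """Crudely simulate low-bitrate codec artifacts:
    per-frame FFT, a loudness gate that zeroes quiet bins
    (spectral holes), and coarse magnitude quantization
    (the 'swirly' musical-noise character)."""
    out = np.zeros_like(signal, dtype=float)
    # sqrt-Hann analysis/synthesis windows at 50% overlap give
    # roughly unity-gain overlap-add reconstruction
    win = np.sqrt(np.hanning(frame))
    hop = frame // 2
    for start in range(0, len(signal) - frame, hop):
        chunk = signal[start:start + frame] * win
        spec = np.fft.rfft(chunk)
        mag, phase = np.abs(spec), np.angle(spec)
        peak = mag.max() + 1e-12
        # gate: drop bins below floor_db relative to the frame peak
        mag[20 * np.log10(mag / peak + 1e-12) < floor_db] = 0.0
        # coarsely quantize the surviving magnitudes
        step = peak / (2 ** keep_bits)
        mag = np.round(mag / step) * step
        out[start:start + frame] += np.fft.irfft(mag * np.exp(1j * phase)) * win
    # any tail shorter than one frame is left silent
    return out
```

Lowering `keep_bits` or raising `floor_db` makes the degradation more obvious; a real codec would of course shape all of this with a psychoacoustic model rather than a flat gate, which is exactly the kind of divergence-from-the-original that I find interesting.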